CN111383279B - External parameter calibration method and device and electronic equipment

Info

Publication number
CN111383279B
CN111383279B (application CN201811636001.XA)
Authority
CN
China
Prior art keywords
point cloud data
preset
determining
calibration
Prior art date
Legal status
Active
Application number
CN201811636001.XA
Other languages
Chinese (zh)
Other versions
CN111383279A (en)
Inventor
李方震
孙伟健
王兵
王刚
Current Assignee
Wuzhou Online E Commerce Beijing Co ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201811636001.XA
Publication of CN111383279A
Application granted
Publication of CN111383279B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The embodiments of the present application disclose an external parameter calibration method and device and an electronic device. The method comprises the following steps: acquiring image data collected by a camera device on a target calibration reference object and point cloud data collected by a radar device on the same target calibration reference object, wherein the target calibration reference object comprises a plurality of planar bodies that have preset colors and form preset angles; processing the point cloud data to determine a plurality of point cloud data sets with different specific position features; projecting the point cloud data in the sets into the image acquired by the camera device; and determining a calibration result according to the degree of overlap between the point cloud data in the sets and their projection results in the image acquired by the camera device. The embodiments of the present application make it possible to obtain a high-precision external parameter calibration result more simply and conveniently.

Description

External parameter calibration method and device and electronic equipment
Technical Field
The present application relates to the technical field of environment sensing through multi-sensor fusion, and in particular to an external parameter calibration method and device and an electronic device.
Background
Environment sensing is a core technology in industries such as robotics, autonomous driving, intelligent manufacturing, intelligent monitoring and intelligent transportation. The sensors used to realize environment sensing include laser radars (lidars), cameras, millimeter-wave radars and the like, of which lidars and cameras are the most commonly used. Because their operating principles differ, each sensor type has its own strengths and weaknesses, and to sense the environment well the data of several sensors generally need to be used together. For example, in the autonomous driving industry, the same roadside device may need to be equipped with both a lidar and a camera, and the data of the two sensors are fused to perceive information such as the specific road conditions in the environment. To fuse data between different sensors, the problem of unifying the coordinate systems of their data must first be solved; that is, the rotation-translation relationship between the coordinate systems of the different sensors is required. In practice, different sensor devices are installed at some nominal angle or translation distance from each other, but the angle or distance actually realized after installation is rarely exactly equal to the nominal value and carries a certain error. For example, in one scenario the camera and the lidar are meant to be mounted at an angle of 30 degrees and a distance of 50 cm, but after installation the actual values may be 31 degrees and 50.3 cm. The actual rotation-translation relationship therefore needs to be calibrated by some means; otherwise the fused environment-sensing result will also contain errors.
Therefore, how to obtain a calibration result for the rotation-translation relationship between the camera device coordinate system and the radar device coordinate system simply and conveniently has become a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application provides an external parameter calibration method, an external parameter calibration device and electronic equipment, which can obtain an external parameter calibration result with higher precision more simply and conveniently.
The application provides the following scheme:
a method for calibrating external parameters comprises the following steps:
acquiring image data acquired by camera equipment on a target calibration reference object and point cloud data acquired by radar equipment on the target calibration reference object, wherein the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
processing according to the point cloud data, and determining a plurality of point cloud data sets with different specific position characteristics;
projecting the point cloud data in the set into an image acquired by the camera device;
and determining a calibration result according to the overlapping degree between the point cloud data in the set and the projection result in the image acquired by the camera equipment.
A calibration reference object,
wherein the calibration reference object comprises a plurality of planar bodies that have preset colors and form preset angles, and is used for calibrating the rotation-translation relationship between the coordinate systems of a camera device and a radar device installed in the same environment sensing system.
An environmental awareness system, comprising:
the camera equipment is used for collecting image data of the target environment;
the radar equipment is used for acquiring point cloud data of the target environment;
the data processing device is used for pre-storing a conversion relation calibration result between the camera device coordinate system and the radar device coordinate system, and fusing the data acquired by the camera device and the radar device according to the calibration result to obtain a perception result of the target environment; the calibration result is obtained through a target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles.
A method of environmental awareness, comprising:
pre-storing a conversion relation calibration result between a camera equipment coordinate system and a radar equipment coordinate system, wherein the calibration result is obtained through a target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
and fusing the data acquired by the camera equipment and the radar equipment according to the calibration result to obtain a perception result of the target environment.
An external parameter calibration device, comprising:
the data acquisition unit is used for acquiring image data collected by the camera device on a target calibration reference object and point cloud data collected by the radar device on the target calibration reference object, wherein the target calibration reference object comprises a plurality of planar bodies having preset colors and forming preset angles;
the point cloud data set determining unit is used for processing according to the point cloud data to determine a plurality of point cloud data sets with different specific position characteristics;
a projection unit, configured to project the point cloud data in the set into an image acquired by the camera device;
and the calibration result determining unit is used for determining a calibration result according to the overlapping degree between the point cloud data in the set and the projection result in the image acquired by the camera equipment.
An environmental awareness apparatus comprising:
the storage unit is used for pre-storing a conversion relation calibration result between a camera equipment coordinate system and a radar equipment coordinate system, wherein the calibration result is obtained through a target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
and the sensing unit is used for fusing the data acquired by the camera device and the radar device according to the calibration result to obtain a perception result of the target environment.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory being configured to store program instructions that, when read and executed by the one or more processors, perform the following operations:
acquiring image data acquired by camera equipment on a target calibration reference object and point cloud data acquired by radar equipment on the target calibration reference object, wherein the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
processing according to the point cloud data, and determining a plurality of point cloud data sets with different specific position characteristics;
projecting the point cloud data in the set into an image acquired by the camera device;
and determining a calibration result according to the overlapping degree between the point cloud data in the set and the projection result in the image acquired by the camera equipment.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory being configured to store program instructions that, when read and executed by the one or more processors, perform the following operations:
pre-storing a conversion relation calibration result between a camera equipment coordinate system and a radar equipment coordinate system, wherein the calibration result is obtained through a target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
and fusing the data acquired by the camera equipment and the radar equipment according to the calibration result to obtain a perception result of the target environment.
According to a specific embodiment provided by the application, the application discloses the following technical effects:
according to the embodiment of the application, a calibration reference object can be formed by adopting a plane body with preset color and a preset angle, then data acquisition is carried out on the calibration reference object through camera equipment and radar equipment, a plurality of point cloud data sets with different specific position features can be determined from the point cloud data and respectively projected into images acquired by the camera equipment, and then a calibration result can be determined according to the overlapping degree between the point cloud data in the sets and the projection results in the images acquired by the camera equipment. By the method, the calibration reference object is an object in a three-dimensional space, and is not a two-dimensional calibration plate, so that the radar equipment can obtain higher data acquisition precision without a sensor with higher precision. In addition, because the preset angles and the prior information with different colors are arranged among the planes, the position features of the specific point cloud data can be obtained from the point cloud data acquired by the radar equipment, for example, whether the specific point cloud data is positioned on a certain plane, whether the specific point cloud data is positioned on a certain characteristic straight line or not, and the like, and further, according to the position feature information and the projection of the specific point cloud data in the image acquired by the camera equipment, the overlapping degree between the two accords with preset conditions through gradually adjusting the angles, the translation distance and the like, so that the corresponding calibration result can be obtained. In addition, as the calibration reference object is provided with a plurality of planes, the characteristic planes with a plurality of angles can be extracted at one time, so that in the whole calibration process, the camera equipment and the radar equipment can obtain a satisfactory calibration result only by executing data acquisition operation once, and the camera equipment and the radar equipment do not need to acquire and calculate for a plurality of times, thereby improving the efficiency.
In addition, in preferred implementations, the image acquired by the camera device can be matched against the laser point cloud data using features obtained after a gray-gradient inverse depth transform; compared with using image edge features directly, this makes the matching optimization more stable and the global optimal solution easier to reach.
Moreover, with a single set of calibration data, the final calibration result can be solved in two steps, coarse calibration followed by fine calibration; compared with conventional schemes that compute the calibration result in one pass, this yields higher calibration accuracy.
Of course, not all of the above-described advantages need be achieved at the same time in practicing any one of the products of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a calibration system provided by an embodiment of the present application;
FIG. 2 is a flow chart of a first method provided by an embodiment of the present application;
FIG. 3 is a schematic view of a calibration reference provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a first application system provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a second application system provided in an embodiment of the present application;
FIG. 6 is a flow chart of a second method provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a first apparatus provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a second apparatus provided in an embodiment of the present application;
fig. 9 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of the protection of the present application.
The inventors of the present application found that, to obtain the rotation-translation relationship between the coordinate systems of a camera device and a radar device, the lidar coordinate system can be taken as the reference coordinate system and external parameter calibration performed on the camera. Typical calibration schemes are based either on a calibration plate or on a calibration workshop. In one calibration-plate-based implementation, a planar calibration plate serves as the calibration reference, and the camera and the lidar photograph or scan the plate at the same time. The corner-point coordinates (or other feature-point coordinates) of the plate are then extracted from the camera image, the corresponding three-dimensional coordinates of each corner point are extracted from the lidar point cloud, and the external parameters between camera and lidar are computed from the 2D-3D correspondences by solving the PnP (Perspective-n-Point, camera pose estimation) problem. Alternatively, a checkerboard calibration plate can be used: each checkerboard corner is extracted from the camera image, the 2D corner coordinates are back-projected into the camera's 3D space using the camera intrinsic calibration, the 3D corner coordinates of the checkerboard are extracted from the laser point cloud, and the external parameters between lidar and camera are solved with methods such as ICP.
The drawback of the calibration-plate-based scheme is that the plate is a two-dimensional planar object, and when the lidar beam strikes such a single-plane object the measurement precision is low. Calibrating the camera device with data acquired in this way therefore introduces errors, and the calibration precision is not high.
The calibration-workshop-based method is essentially the same, except that multiple calibration plates are hung on the walls of a room; the camera device collects images of the plates distributed over many angles in order to reduce calibration error. The data-collection process is complex and time-consuming, and the calibration result is unstable.
Another scheme for improving calibration precision is to introduce a third, higher-precision sensor: the external parameters of the camera device and of the lidar device are calibrated against this third sensor separately, and the rotation-translation relationship between the camera and lidar coordinate systems is then derived from the two calibration results. However, this scheme is still cumbersome.
To address the problems of the above schemes, the embodiments of the present application provide a scheme that can obtain a higher-precision calibration result more conveniently and quickly. In this scheme, the calibration reference object is first improved, and a specific calibration method is provided on the basis of the improved reference object. The calibration reference object may be composed of a plurality of planar bodies that have preset colors and form preset angles. Because there are several planes with preset colors, the boundary lines between the planes are easy to identify, and the prior information that the planes meet at known angles is available. It is therefore convenient to determine the position features of different points from the point cloud data collected by the radar device, for example on which plane a point lies, or whether it lies on a certain straight line (an intersection line between two planes or a boundary line of a plane). This position feature information can then be used to project the points into the image collected by the camera device, and the final calibration is completed according to the degree of overlap between the point cloud data and the corresponding projections. In this way, high-precision external parameter calibration can be achieved without changing angles to perform multiple data acquisitions and without resorting to a third, higher-precision sensor.
From the system architecture perspective, the object to be calibrated may first be the external parameters of a camera device that must cooperate with a radar device to perform environment sensing in some scene. The camera device and the radar device are therefore usually installed in advance according to certain rules; for example, in a scenario where a roadside device collects road condition information, the camera device and the radar device may be installed on the same roadside device, with some angle and some translation distance between them, and the purpose of calibration is to identify the actual angle and translation distance accurately. During calibration, as shown in fig. 1, the camera device 101 and the radar device 102 may be placed at the angle and distance at which they will actually be installed, and both collect data on the calibration reference object. In addition, a processing device 103 may be provided (for example a computer running an application program dedicated to calibration). After the camera device and the radar device have collected a set of data on the calibration reference object, the data may be transmitted to the processing device, which performs the specific calibration operation and obtains the calibration result. The calibration result may be a set of data describing the angles (in several directions) and distances (in several directions) of the camera device coordinate system relative to the radar device coordinate system. After calibration is completed, the camera device and the radar device can be installed in the actual application scene at the same angle and translation distance used during calibration, and the calibration result can be provided to the data fusion processing system in that scene.
Specific embodiments provided in the embodiments of the present application are described in detail below.
Example 1
This embodiment provides a method for calibrating external parameters, mainly from the point of view of the processing apparatus shown in fig. 1, and the method may be implemented by a computer program running in the processing apparatus, or may be implemented by a processing module solidified in the processing apparatus by means of hardware, or the like. Specifically, referring to fig. 2, the method may specifically include:
s210: acquiring image data acquired by camera equipment on a target calibration reference object and point cloud data acquired by radar equipment on the target calibration reference object, wherein the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
the purpose of the calibration specifically in the embodiments of the present application is to obtain, when the internal parameters of the camera device are known, a conversion relationship between the coordinate system of the radar device (for example, a laser radar) and the coordinate system of the camera device, where the conversion relationship may be represented by a rotation euler angle and a translation distance of the coordinate system of the camera device relative to the coordinate system of the radar device. That is, it is necessary to determine how much the coordinate system of the camera device is rotated in what direction and how much the translation is required in each translation direction, so that the coordinate system of the camera device can be superimposed on the coordinate system of the camera device. Only under the condition that the coordinate systems of the two are coincident, the data actually collected by the two can be fused, so that the perception of a specific environment space is realized.
Specifically, some preparatory work may be performed before calibration: the camera device and the radar device are aimed at the calibration reference object so that the radar device can scan all of the planes and all of the plane surfaces are clearly visible to the camera device. The radar device and the camera device can then be started to collect data on the calibration reference object, and one set of data is obtained from each. The set of data may specifically include: one frame of point cloud data (the radar device usually has a rotating part, and as it rotates, the emitted laser beam collects point cloud data over 360 degrees) and one frame of image data extracted from the footage the camera device shot of the calibration reference object.
In the embodiments of the present application, the calibration reference object consists of a plurality of planar bodies that have preset colors and form preset angles; the specific angles, colors and so on may be determined according to actual needs. In one preferred implementation, the calibration reference object may be a plurality of mutually orthogonal planar bodies, that is, the angle between the planes is 90 degrees. In one mode, the color is uniform within a plane and different planes have different colors. Alternatively, each planar body has a single color and any two intersecting planar bodies have different colors; that is, it suffices that the boundary between different planes can easily be distinguished by color. For example, as shown in fig. 3, the calibration reference object may be three mutually orthogonal planes whose colors are the three primary colors, i.e., painted red, green and blue respectively. Note that the calibration reference object shown in fig. 3 has three planes in total; it is not a cuboid, but rather what remains after a cuboid is cut in half along a diagonal, leaving three faces whose inner surfaces are painted red, green and blue. Of course, in practical applications more planes may be used. In general, each plane may have a single color, and the colors of different planes may differ so that the individual planes can be identified from the collected data. Moreover, since the planes carry the prior information of being mutually orthogonal, it is also convenient to identify the position features of a specific point or pixel, for example whether it lies on a certain plane, on the intersection line between two planes, or on the boundary line of a certain plane.
S220: processing according to the point cloud data, and determining a plurality of point cloud data sets with different specific position characteristics;
after a set of data acquired by the radar device and the camera device, respectively, is acquired, a specific calibration process may be performed using the set of data. When the calibration is specifically performed, the processing can be performed according to the point cloud data, and a plurality of point cloud data sets with different specific position features are determined.
The initial data collected by the lidar is the result of one full rotation, but the calibration reference object occupies only a certain scanning angle; only the region where the reference object sits needs attention, and the rest of the data is invalid. Therefore, a region of interest (ROI) can be roughly delimited, the points falling on the calibration reference object extracted, and the specific position features of the point cloud data then determined.
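As a hedged illustration of this ROI step, the sketch below crops an axis-aligned box around the target; the box bounds and the numpy representation of the point cloud are assumptions, not part of the patent.

```python
import numpy as np

def crop_roi(points: np.ndarray, xlim, ylim, zlim) -> np.ndarray:
    """Keep only points inside an axis-aligned box around the calibration target.

    points: (N, 3) array in the radar frame. The box bounds are chosen by hand
    from a rough look at the scene; this is an illustrative assumption."""
    m = ((points[:, 0] >= xlim[0]) & (points[:, 0] <= xlim[1]) &
         (points[:, 1] >= ylim[0]) & (points[:, 1] <= ylim[1]) &
         (points[:, 2] >= zlim[0]) & (points[:, 2] <= zlim[1]))
    return points[m]
```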
The specific location features may be varied, for example, one of which may be: for a certain point, whether it is located on a specific plane in the calibration reference. Specifically, for example, assuming that the calibration reference includes three planes in total, namely, planes A, B, C, after the radar device collects specific point cloud data, it can be identified through specific processing operations, which points are located on a plane a, which points are located on a plane B, which points are located on a plane C, and so on. That is, the point cloud data may be processed according to the point cloud data, so as to determine point cloud data having points located on different planes, and further determine a plurality of first point cloud data sets corresponding to a plurality of different planes. The characteristic data can be projected into image data acquired by camera equipment, and then a specific calibration result can be obtained according to the coincidence ratio of the specific projection result. The specific position features are features of a plane where the point cloud is located, so that the point cloud data in the plurality of first point cloud data sets can be directly projected to three-channel images of the color original image acquired by the camera device when the projection is performed.
In particular, there may be several ways of determining the point cloud data lying on the different planar bodies. In one way, planes can be fitted to the point cloud data of the calibration reference object collected by the radar device (for example with the RANSAC algorithm), yielding the plane equations of the different planar bodies and segmenting the point cloud data belonging to each. Of course, since the points in the lidar point cloud may be very numerous and dense, matching all of them would be computationally expensive and possibly unnecessary. In an alternative embodiment, the point cloud data within each planar body may therefore be sampled at preset intervals, according to the plane sizes given by the plane equations, to obtain the first point cloud data sets corresponding to the different planar bodies. For example, with three orthogonal planes as the calibration reference, points on each plane can be sampled uniformly at an interval l_a according to the three plane equations and the plane dimensions L, W, H, producing a plane point set PP = {p_n^i}, i ∈ {0, 1, 2}, where i indexes the plane and n the points sampled on it.
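The following sketch shows one plausible RANSAC plane fit of the kind referenced above; the iteration count and inlier threshold are assumed values, and segmenting the three faces would mean running it repeatedly on the points left after removing each plane's inliers.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=500, inlier_thresh=0.02, rng=None):
    """Fit one plane (n, d) with n.p + d = 0 by RANSAC; thresholds are assumptions.

    Returns the plane parameters and the boolean inlier mask."""
    rng = rng or np.random.default_rng(0)
    best_mask, best_plane = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        mask = np.abs(points @ n + d) < inlier_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    return best_plane, best_mask
```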
Note that the fitted plane equations may contain some error; for example, the angles between the fitted planes may not exactly equal the actual angles between the planes of the calibration reference object. To further improve calibration precision, the plane equations can therefore be corrected after fitting, using the angle prior information between the planar bodies. For example, if the planes of the calibration reference object are mutually orthogonal, the plane equations can be corrected using this orthogonality prior: the plane equations are rotated in angle steps θ, and their parameters adjusted, until the normal directions are mutually orthogonal and the sum of the point-to-plane distances of the points on each plane is minimized.
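The text describes an iterative angle-step search that makes the fitted normals mutually orthogonal. As a compact stand-in that enforces the same orthogonality prior, the sketch below snaps the three fitted normals to the nearest orthonormal frame via SVD; this substitution is for illustration only and is not the patent's exact procedure.

```python
import numpy as np

def orthogonalize_normals(n1, n2, n3):
    """Snap three fitted plane normals to the nearest mutually orthogonal frame.

    Uses the SVD-based nearest-rotation projection as a closed-form stand-in
    for the iterative angle-step search described above; both enforce the
    same orthogonality prior on the normals."""
    N = np.stack([n1, n2, n3])   # rows are unit normals
    U, _, Vt = np.linalg.svd(N)
    R = U @ Vt                   # nearest orthogonal matrix to N
    return R[0], R[1], R[2]
```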
Besides the relationship between a point and the plane it lies on, the specific position features may also include the relationship between a point and the characteristic straight line it lies on, where characteristic straight lines include the intersection lines between two planes, the boundary lines of a single plane, and the like. Therefore, when determining the plurality of point cloud data sets with different specific position features, the point cloud data lying on different characteristic straight lines can also be determined, yielding a plurality of second point cloud data sets corresponding to the different characteristic straight lines.
There may likewise be multiple implementations when determining the point cloud data lying on the different characteristic straight lines. In one way, based on the plane equations fitted in the previous step, the intersection-line equation between every two planes and the boundary-line equations of each plane are computed; the point cloud data on each straight line can then be sampled at preset intervals along the line, giving the second point cloud data sets corresponding to the different straight lines.
For example, with three orthogonal planes as the calibration reference, the intersection-line equation between every two planes and the plane boundary-line equations can be obtained from the three plane equations and the plane dimensions, and a point set is generated by sampling uniformly at an interval l_b along each straight line, giving PL = {p_m^j}, j ∈ {0, 1, 2, …}, where j indexes the 9 straight lines (the plane boundary lines plus the intersection lines) and m the points sampled on each line.
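A brief sketch of how such a line equation and its samples could be computed from two fitted planes; pinning the line at its closest point to the origin and the sampling helper are illustrative assumptions.

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Line of intersection of planes n1.p + d1 = 0 and n2.p + d2 = 0.

    Returns (point_on_line, unit_direction). Assumes the planes are not parallel."""
    direction = np.cross(n1, n2)
    direction = direction / np.linalg.norm(direction)
    # Solve for one point: the two plane constraints, plus pinning the point
    # closest to the origin along the line direction.
    A = np.stack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction

def sample_line(point, direction, length, step):
    """Uniformly sample points along the line segment, as in the l_b sampling above."""
    ts = np.arange(0.0, length, step)
    return point + ts[:, None] * direction
```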
S230: projecting the point cloud data in the set into an image acquired by the camera device;
after obtaining the specific location feature of the specific point cloud data and obtaining a plurality of point cloud data sets (where the point cloud data in the same set have the same location feature, for example, are located in the same plane, or are in the same straight line, etc.), the point cloud data in the set may be projected into the image acquired by the camera device.
During projection, the plane features or the straight-line features of the point cloud data can each be used alone for projection and calibration, or the two can be combined to obtain a more accurate calibration result.
Specifically, if the plane features of the point cloud data are used for projection, the point cloud data in the plurality of first point cloud data sets may be projected into the three-channel image of the color original acquired by the camera device. An image in which each pixel carries three values is called a three-channel image; an RGB picture is one example, RGB being the industry color standard in which colors are obtained by varying and superimposing the red (R), green (G) and blue (B) channels. In the embodiments of the present application, since the different planes of the calibration reference object have different colors while pixels on the same plane share the same color, the three channel values are uniform within a plane, and it can be determined which pixels lie on which plane. Therefore, after the plane features of the point cloud data collected by the radar device are determined and the points are projected into the three-channel image, the degree of alignment between the camera and radar coordinate systems can be judged from the degree of overlap between the point cloud data and the corresponding projection result. If the degree of overlap is insufficient, it can be improved by adjusting the angle, the translation distance and so on step by step; when the overlap reaches its maximum, the calibration result is obtained.
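The sketch below shows one plausible implementation of this projection-and-overlap scoring: pinhole projection of the radar points followed by a count of how many land on pixels of the plane's preset color. The color tolerance and the exact scoring rule are assumptions; the text only requires some measure of the degree of overlap.

```python
import numpy as np

def project_points(points_lidar, K, R, t):
    """Project radar points into the image with the pinhole model p = K (R X + t).

    K is the 3x3 camera intrinsic matrix (known, as stated above). Returns
    (M, 2) pixel coordinates of the points in front of the camera."""
    pc = points_lidar @ R.T + t     # radar frame -> camera frame
    pc = pc[pc[:, 2] > 0]           # keep points in front of the camera
    uv = pc @ K.T
    return uv[:, :2] / uv[:, 2:3]

def plane_overlap_score(uv, image, plane_color, tol=40):
    """Count projected points landing on pixels of the plane's preset color.

    plane_color is the (R, G, B) painted on that face; tol is an assumed
    per-channel tolerance. A higher score means better-aligned extrinsics."""
    h, w = image.shape[:2]
    px = np.round(uv).astype(int)
    inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    px = px[inside]
    colors = image[px[:, 1], px[:, 0]].astype(int)
    return int((np.abs(colors - np.array(plane_color)).max(axis=1) < tol).sum())
```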
If the straight-line features of the point cloud data are used for projection, the image data collected by the camera device may first be preprocessed to obtain image data carrying characteristic straight-line information, and the point cloud data in the plurality of second point cloud data sets are then projected into that image.
There are several ways to obtain the image data carrying characteristic straight-line information. In one implementation, the color image collected by the camera device is converted to grayscale; within a first preset neighborhood of each pixel the gradient of the gray value is computed, and the maximum gradient in the neighborhood is assigned to the pixel, producing a gradient image from which the characteristic straight-line information is determined. The gradient image has the same size as the original image acquired by the camera device, but each pixel now carries a gray-gradient value instead of three color channel values. The gray-gradient value expresses the difference in gray level between a pixel and its neighboring pixels. In the embodiments of the present application, since the calibration reference object has a uniform color within each plane and different colors on different planes, a pixel that is not on the boundary between one plane and another has gray gradient 0 with respect to all its surrounding pixels, while a pixel on the boundary between two planes, or on the boundary line of a plane, has a gray gradient of 255 or some other non-zero value with respect to at least one of its 8 surrounding pixels. Computing gray-gradient values in this way therefore reveals which pixels lie on the boundary line between two planes, which lie on the boundary line of a plane, and to which pair of planes or which plane each such boundary belongs.
After the straight-line features in the image data are obtained, the point cloud data with straight-line features from the radar device can be projected into the image data carrying the characteristic straight-line information. When the pixels of the straight-line features are identified through the gray-gradient computation described above, the detected intersection and boundary lines may be very "narrow", for example only one pixel wide, which is disadvantageous for the subsequent projection matching. In a preferred embodiment, therefore, after the gray-gradient image is computed, inverse depth transform information is additionally computed for each pixel over a second preset neighborhood (e.g., the 24-neighborhood), giving an inverse depth transform map from which the characteristic straight-line information is determined. That is, the inverse depth transform spreads the pixels at the boundaries between planes so that the boundaries become "wide" for the comparison in the subsequent step. The inverse depth transform image again has the same size as the original image, and each pixel again carries gray-gradient information, but the straight lines are widened relative to the gray-gradient image. For example, where the gray-gradient values of a row of pixels might read (… 0, 0, 255, 0, 0, 0 …) in the gradient map, after the transform they might read (… 0, 50, 100, 255, 100, 50, 0, 0 …); that is, more pixels register the boundary line, so the calibration result can be obtained more efficiently during the later projection comparison.
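A minimal sketch of the gray-gradient map and the widening step described above. The exact inverse depth transform formula is not given in the text, so the distance-decayed neighborhood maximum used here (with an assumed radius and decay factor) is only one plausible realization.

```python
import numpy as np

def gray_gradient(gray: np.ndarray) -> np.ndarray:
    """Per-pixel gray gradient magnitude via finite differences."""
    g = gray.astype(float)
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))
    return np.maximum(gx, gy)

def inverse_depth_transform(grad: np.ndarray, radius=2, decay=0.5) -> np.ndarray:
    """Spread edge responses so boundary lines become 'wide', as described above.

    Each pixel takes the maximum of its neighbors' gradients, attenuated by
    distance; radius=2 covers the 24-neighborhood mentioned in the text, and
    the decay value is an illustrative assumption."""
    out = grad.copy()
    h, w = grad.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            weight = decay ** max(abs(dx), abs(dy))
            shifted = np.zeros_like(grad)
            ys_dst = slice(max(dy, 0), h + min(dy, 0))
            xs_dst = slice(max(dx, 0), w + min(dx, 0))
            ys_src = slice(max(-dy, 0), h + min(-dy, 0))
            xs_src = slice(max(-dx, 0), w + min(-dx, 0))
            shifted[ys_dst, xs_dst] = grad[ys_src, xs_src]
            out = np.maximum(out, weight * shifted)
    return out
```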
S240: and determining a calibration result according to the overlapping degree between the point cloud data in the set and the projection result in the image acquired by the camera equipment.
After the point cloud data in the specific point cloud data sets have been projected into the image acquired by the camera device, the calibration result can be determined according to the degree of overlap between the point cloud data and the projection result. First, initial values of the rotation Euler angles and the translation vector can be chosen, for example from the approximate angle and distance between the camera device and the radar device at installation. At these initial values, the degree of overlap between the point cloud data in the sets and the projection result is computed. A search is then carried out near the initial values with a preset angle step and translation step, and the degree of overlap is evaluated at each search point. The rotation Euler angles and translation distance at which the overlap meets the preset condition (for example, reaches its maximum) are then taken as the calibration result.
For example, when the point cloud data in the first point cloud data sets, which carry plane features, are used for projection, the first calibration result may be determined from the rotation Euler angles and translation vector at which the degree of overlap between those points and their projections in the three-channel image meets a preset condition.
When the point cloud data in the second point cloud data sets, which carry straight-line features, are used for projection, a calibration result may likewise be determined from the rotation Euler angles and translation vector at which the degree of overlap between those points and their projections in the image carrying the straight-line information meets a preset condition. The image carrying the straight-line information may be the gray-gradient map or the inverse depth transform map described above.
In addition, the first point cloud data sets with plane features and the second point cloud data sets with straight-line features can be combined to obtain a more accurate calibration result. Specifically, the point cloud data in the first point cloud data sets may be projected into the three-channel image; with the rough installation angle and translation distance as initial values, a search is performed with preset angle and translation steps, and when the degree of overlap between the points and their projections in the three-channel image meets the preset condition, the first calibration result is determined from the corresponding rotation Euler angles and translation vector. This first calibration result may be called the "coarse calibration" result. For example, in a specific implementation, the R, G and B channel maps of the color original acquired by the camera device are first extracted; rough initial values of the rotation Euler angles and translation vector are given from the installation prior of the camera and radar devices, and the point cloud data in the first point cloud data sets are projected into the three channel images using the pinhole camera model. A greedy search is then carried out near the initial values with an angle step of 0.5 degrees and a translation step of 10 cm to find the values that maximize the overlap between the first point cloud data sets and the projections of the red, green and blue planes in the image, giving the coarse calibration result Result1 = {roll1, pitch1, yaw1, x1, y1, z1}.
Then the point cloud data in the second point cloud data sets are projected into the inverse depth transform image, with the first calibration result as the initial value; when the degree of overlap between those points and their projections in the image carrying the straight-line information meets the preset condition, a second calibration result is determined from the corresponding rotation Euler angles and translation vector. Since its precision improves on the coarse calibration result, it may be called the "fine calibration" result. For example, starting from the coarse calibration result Result1, the point cloud data in the second point cloud data sets are projected into the inverse depth transform map using the pinhole camera model, and a greedy search is carried out near Result1 with an angle step of 0.05 degrees and a translation step of 1 cm to find the values that maximize the overlap between the second point cloud data sets and the projections of the characteristic straight lines in the inverse depth transform map, giving the final fine calibration result Result = {roll, pitch, yaw, x, y, z}.
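The two-stage greedy search can be sketched as a local grid search reused with different step sizes; the search radius (n_steps) and the interface of the scoring function are assumptions.

```python
import numpy as np
from itertools import product

def greedy_search(score_fn, init, angle_step, trans_step, n_steps=2):
    """Exhaustive local search around init = (roll, pitch, yaw, x, y, z).

    Evaluates every combination within +/- n_steps of each parameter and
    returns the best-scoring candidate; score_fn maps a 6-tuple of extrinsics
    to an overlap score such as the plane or line overlap counts above."""
    offsets = range(-n_steps, n_steps + 1)
    best, best_score = tuple(init), -np.inf
    for d in product(offsets, repeat=6):
        cand = tuple(v + k * (angle_step if i < 3 else trans_step)
                     for i, (v, k) in enumerate(zip(init, d)))
        s = score_fn(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score
```

Calling this once with coarse steps (0.5 degrees, 10 cm) and once more, seeded with the coarse result, with fine steps (0.05 degrees, 1 cm) mirrors the coarse-then-fine scheme described above.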
In summary, according to the embodiments of the present application, a calibration reference object can be formed from a plurality of planar bodies that have preset colors and form preset angles; data are collected on it by a camera device and a radar device, a plurality of point cloud data sets with different specific position features are determined from the point cloud data and projected into the image acquired by the camera device, and the calibration result is determined according to the degree of overlap between the point cloud data in the sets and their projection results in the image. Because the calibration reference object is a three-dimensional object rather than a two-dimensional calibration plate, the radar device can achieve higher data-acquisition precision without resorting to a higher-precision sensor. Because the planes carry the prior information of preset mutual angles and preset colors, the position features of specific point cloud data, such as whether a point lies on a certain plane or on a certain characteristic straight line, can be obtained from the point cloud data collected by the radar device; using this position feature information together with the projection of the points into the image acquired by the camera device, the angle, translation distance and so on are adjusted step by step until the degree of overlap between the two meets the preset condition, giving the corresponding calibration result. Moreover, since the calibration reference object provides several planes, characteristic planes at multiple angles can be extracted at once, so throughout the calibration process the camera device and the radar device only need to perform the data-acquisition operation once to obtain a satisfactory calibration result, without multiple acquisitions and computations, which improves efficiency.
In addition, in preferred implementations, the image acquired by the camera device can be matched against the laser point cloud data using features obtained after a gray-gradient inverse depth transform; compared with using image edge features directly, this makes the matching optimization more stable and the global optimal solution easier to reach.
Moreover, with a single set of calibration data, the final calibration result can be solved in two steps, coarse calibration followed by fine calibration; compared with conventional schemes that compute the calibration result in one pass, this yields higher calibration accuracy.
Example two
The second embodiment provides a calibration reference, where the calibration reference includes a plurality of planar bodies having preset colors and forming preset angles, and the calibration reference is used for calibrating a rotational-translational relationship between a coordinate system of a camera device and a radar device installed in the same environmental perception system.
In particular, the target calibration reference may comprise a plurality of mutually orthogonal planar bodies.
In the target calibration reference object, the same plane body has the same color, and different planes have different colors.
Alternatively, the same planar body has the same color, and two planar bodies having an intersecting relationship have different colors.
Specifically, referring to fig. 3, the number of the planar bodies may be three, and the planar bodies are orthogonal to each other, and the colors are respectively one of three primary colors, that is, red, green and blue. Of course, in practical application, other colors are also possible, as long as they are easily distinguishable.
Example III
The third embodiment provides an environment sensing system from the perspective of a specific application scenario, where the specific environment sensing system may also have a plurality of more specific applications, for example, including road condition information sensing, or sensing of the surrounding environment by the robot device, and so on. Specifically, referring to fig. 4, the system may specifically include:
a camera device 410 for image data acquisition of a target environment;
the radar device 420 is configured to perform point cloud data acquisition on the target environment;
the data processing device 430 is configured to pre-store a calibration result of a conversion relationship between the camera device coordinate system and the radar device coordinate system, and fuse data acquired by the camera device and the radar device according to the calibration result, so as to obtain a perception result of the target environment; the calibration result is obtained through a target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles.
The specific target environment may be various, for example, in a specific application scenario, the target environment may be a target road environment, that is, the camera device and the radar device may be deployed to realize the perception of the road environment, so as to guide the vehicle to automatically drive, or provide real-time road condition information for the driver, and so on.
In specific implementation, as shown in fig. 5, the target road environment further includes a plurality of road side units RSUs deployed according to a preset arrangement manner;
at this time, the camera device, the radar device, and the data processing device may be disposed on the RSU, and in addition, a wireless communication module may be disposed on the RSU;
in this way, the data processing device may also be configured to broadcast the perceived result of the target road environment via the wireless communication module on the RSU.
Accordingly, specific traffic participant objects, such as vehicles, can receive the specific road condition information through associated terminal devices (mobile terminals, vehicle-mounted terminals, etc.), and autonomous vehicles can make specific driving decisions accordingly.
Alternatively, the camera device, radar device and data processing device may be deployed on a traffic participant object in the target road environment. At this time, if the traffic participant object is a traffic participant object of an autopilot class, the data processing apparatus may be further configured to make a driving decision based on a perceived result of the target road environment.
Example IV
The fourth embodiment corresponds to the third embodiment, and from the perspective of a specific data processing device, there is provided an environment sensing method, referring to fig. 6, the method may specifically include:
s610: pre-storing a conversion relation calibration result between a camera equipment coordinate system and a radar equipment coordinate system, wherein the calibration result is obtained through a target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
s620: and fusing the data acquired by the camera equipment and the radar equipment according to the calibration result to obtain a perception result of the target environment.
For the portions of the second to fourth embodiments not described here, reference may be made to the description of the first embodiment; the details are not repeated.
Corresponding to the first embodiment, the embodiment of the present application further provides an external parameter calibration device, referring to fig. 7, the device may specifically include:
a data obtaining unit 710, configured to obtain image data collected by a camera device for a target calibration reference object, and point cloud data collected by a radar device for the target calibration reference object, where the target calibration reference object includes a plurality of planar bodies having preset colors and forming a preset angle;
a point cloud data set determining unit 720, configured to process the point cloud data and determine a plurality of point cloud data sets with different specific position features;
a projection unit 730, configured to project the point cloud data in the set into an image acquired by the camera device;
and a calibration result determining unit 740, configured to determine a calibration result according to the overlapping degree between the point cloud data in the set and the projection result in the image acquired by the camera device.
Wherein the camera device can simultaneously acquire images of the plurality of plane bodies, and the radar device can scan the plurality of plane bodies.
In one implementation manner, the point cloud data set determining unit may specifically be configured to:
processing according to the point cloud data, determining point cloud data respectively positioned on different plane bodies, and determining a plurality of first point cloud data sets corresponding to a plurality of different plane bodies;
the projection unit may be specifically configured to:
and respectively projecting the point cloud data in the plurality of first point cloud data sets into three-channel images of the color original image acquired by the camera equipment.
The point cloud data set determining unit may specifically include:
the fitting subunit, used for fitting, according to the point cloud data acquired by the radar equipment and related to the calibration reference object, plane equations respectively corresponding to the plurality of different plane bodies, and for dividing out the point cloud data belonging to each plane body;
the first sampling unit, used for sampling the point cloud data within each plane body at preset intervals, according to the size information of the planes on which the plane equations lie, to obtain first point cloud data sets respectively corresponding to the different plane bodies (a fitting sketch follows).
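As a concrete picture of what the fitting subunit might do, the sketch below peels the dominant planes off the point cloud with a RANSAC estimator. RANSAC is an assumption here, since the text only speaks of fitting, and the iteration count and distance threshold are illustrative values.

```python
import numpy as np

def ransac_plane(points, n_iters=500, dist_thresh=0.02, rng=None):
    """Fit one plane n·x + d = 0 (|n| = 1) by RANSAC; return the plane
    parameters and the indices of its inlier points."""
    rng = rng or np.random.default_rng(0)
    best_plane, best_inliers = None, np.array([], dtype=int)
    for _ in range(n_iters):
        # Three random points define a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                  # degenerate (collinear) sample
            continue
        n /= norm
        d = -n.dot(p0)
        # Inliers: points within dist_thresh of the candidate plane.
        inliers = np.where(np.abs(points @ n + d) < dist_thresh)[0]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = (n, d), inliers
    return best_plane, best_inliers

def segment_planes(points, n_planes=3):
    """Divide the cloud into per-plane point sets by repeatedly fitting
    the dominant plane and removing its inliers."""
    planes = []
    remaining = points
    for _ in range(n_planes):
        plane, inliers = ransac_plane(remaining)
        planes.append((plane, remaining[inliers]))
        remaining = np.delete(remaining, inliers, axis=0)
    return planes
```

The first sampling unit could then thin each inlier set to the preset interval, for instance with a simple grid laid over the extent of the fitted plane.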
In order to further improve the accuracy, the device may further include:
and the calibration unit, used for, after the plane equations are fitted, calibrating the plane equations according to the angle prior information among the plurality of plane bodies (a small sketch follows).
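When the planar bodies are known to be mutually orthogonal, one way such an angle prior could be applied is to snap the three fitted normals to the nearest exactly orthogonal frame via SVD. This projection is an illustrative choice, not a method stated in the text.

```python
import numpy as np

def orthogonalize_normals(normals):
    """Replace three estimated plane normals (rows of a 3x3 array) with
    the nearest orthonormal frame in the Frobenius sense (orthogonal
    Procrustes projection). Normal signs are arbitrary, so a possible
    reflection in the result is harmless for plane equations."""
    u, _, vt = np.linalg.svd(normals)
    return u @ vt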
At this time, the calibration result determining unit may specifically be configured to:
searching at the preset initial value of the rotary Euler angle and the initial value of the translation vector according to the preset angle step length and the translation step length, and determining a first calibration result according to the corresponding rotary Euler angle and the translation vector when the overlapping degree between the point cloud data in the first point cloud data set and the projection in the three-channel image accords with a preset condition.
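The search can be pictured as an exhaustive sweep over a small grid of rotation Euler angle and translation offsets around the initial values, scoring each candidate pose by how strongly the projected points of every planar body land on the image channel of that body's preset color. The sketch below is one such sweep; the step sizes, search range and scoring function are illustrative assumptions, and scipy is assumed to be available.

```python
import itertools
import numpy as np
from scipy.spatial.transform import Rotation

def overlap_score(points, channel_img, K, R, t):
    """Sum of channel intensities at the pixels the points project to:
    large when a planar body's points fall on its own color channel."""
    pc = points @ R.T + t
    pc = pc[pc[:, 2] > 0]
    uv = pc @ K.T
    uv = np.round(uv[:, :2] / uv[:, 2:3]).astype(int)
    h, w = channel_img.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return channel_img[uv[ok, 1], uv[ok, 0]].sum()

def coarse_search(plane_sets, channel_imgs, K, euler0, t0,
                  angle_step=np.deg2rad(1.0), trans_step=0.05, half_range=3):
    """Brute-force sweep around initial Euler angles euler0 (3,) and
    translation t0 (3,); returns the pose with the best total overlap."""
    steps = range(-half_range, half_range + 1)
    best_score, best_pose = -np.inf, (euler0, t0)
    for de in itertools.product(steps, repeat=3):       # angle offsets
        euler = euler0 + angle_step * np.array(de)
        R = Rotation.from_euler('xyz', euler).as_matrix()
        for dt in itertools.product(steps, repeat=3):   # translation offsets
            t = t0 + trans_step * np.array(dt)
            score = sum(overlap_score(pts, img, K, R, t)
                        for pts, img in zip(plane_sets, channel_imgs))
            if score > best_score:
                best_score, best_pose = score, (euler, t)
    return best_pose, best_score
```

The sweep is brute force, which is acceptable for a one-off calibration; sampling the first point cloud data sets at preset intervals keeps the per-pose cost small.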
In addition, in a preferred implementation, the point cloud data set determination unit may be further configured to:
processing according to the point cloud data, determining point cloud data respectively positioned on different characteristic straight lines, and determining a plurality of second point cloud data sets corresponding to a plurality of different characteristic straight lines, wherein the characteristic straight lines comprise intersecting lines among a plurality of orthogonal planes and/or boundary lines of each plane body;
the apparatus may further include:
the image processing unit is used for processing the image data acquired by the camera equipment to obtain image data with characteristic straight line information;
the projection unit is further configured to: after the first calibration result is determined, respectively projecting the point cloud data in the plurality of second point cloud data sets into the image with the characteristic straight line information;
the calibration result determining unit is further configured to: and searching according to a preset angle step length and a translation step length near the first calibration result, and determining a second calibration result according to a corresponding rotation Euler angle and a translation vector when the overlapping degree between the point cloud data in the second point cloud data set and the projection on the characteristic straight line in the image meets a preset condition.
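The second stage can reuse the same sweep with finer steps, this time scoring the second point cloud data sets against the single-channel image carrying the characteristic straight line information. A usage sketch, assuming the coarse_search routine and intrinsic matrix K from the sketch above, with illustrative names euler1/t1 for the first calibration result, line_sets for the second point cloud data sets, and edge_img for the feature-line image:

```python
# Refinement near the first calibration result: finer angle and
# translation steps, overlap measured against the feature-line image.
(euler2, t2), _ = coarse_search(
    line_sets,                       # second point cloud data sets
    [edge_img] * len(line_sets),     # one shared feature-line image
    K,
    euler0=euler1, t0=t1,            # start from the first calibration result
    angle_step=np.deg2rad(0.2), trans_step=0.01, half_range=2)
```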
In another manner, the point cloud data set determining unit may specifically be configured to:
processing according to the point cloud data, determining point cloud data respectively positioned on different characteristic straight lines, and determining a plurality of second point cloud data sets corresponding to a plurality of different characteristic straight lines, wherein the characteristic straight lines comprise intersecting lines among a plurality of orthogonal planes and/or boundary lines of each plane body;
the apparatus may further include:
the image processing unit is used for processing the image data acquired by the camera equipment to obtain image data with characteristic straight line information;
the projection unit may be specifically configured to:
and respectively projecting the point cloud data in the plurality of second point cloud data sets into the images with the characteristic straight line information.
In this manner, the point cloud data set determining unit may specifically include:
the fitting unit is used for fitting according to the point cloud data which is acquired by the radar equipment and related to the calibration reference object, so as to obtain plane equations which respectively correspond to a plurality of different plane bodies;
the boundary line equation determining unit is used for calculating an intersection line equation between every two planes and a boundary line equation of the plane body according to the plane equations corresponding to the different plane bodies;
and the second sampling unit, used for sampling the point cloud data along each straight line at preset intervals, to obtain second point cloud data sets respectively corresponding to the different straight lines (a line-geometry sketch follows).
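The line geometry falls directly out of the fitted plane equations: the intersection of two planes has direction n1 × n2, and a point on the line can be pinned down by solving a small linear system. A sketch under those definitions, with illustrative names:

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Intersection of planes n1·x + d1 = 0 and n2·x + d2 = 0.
    Returns a point on the line and its unit direction vector."""
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        raise ValueError("planes are (nearly) parallel")
    direction = direction / norm
    # The point on the line closest to the origin satisfies both plane
    # equations and is orthogonal to the line direction.
    A = np.stack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction

def sample_line(point, direction, length, interval=0.02):
    """Second sampling unit: points spaced at the preset interval along
    a segment of the line starting at `point`."""
    s = np.arange(0.0, length, interval)
    return point + s[:, None] * direction
```

Boundary lines of each planar body can be handled the same way, with the segment extent taken from the bounds of that plane's inlier points.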
Wherein, the image processing unit may specifically be used for:
and performing graying processing on the color image data acquired by the camera equipment, calculating gradient values of the gray values within a neighborhood of a first preset number of pixels around each pixel, taking the maximum gradient value in the neighborhood as the gradient value of the corresponding pixel, and obtaining a gradient image, so that the characteristic straight line information can be determined according to the gradient values (a sketch follows).
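Concretely, the neighborhood-maximum step is a max filter, which grayscale dilation implements directly. A sketch with OpenCV, where the neighborhood size stands in for the "first preset number" and is an illustrative value:

```python
import cv2
import numpy as np

def gradient_image(color_img, neighborhood=5):
    """Grayscale the color image, compute the per-pixel gradient
    magnitude, then replace each pixel with the maximum gradient in its
    neighborhood so thin lines become wider basins of attraction."""
    gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    # Grayscale dilation == max filter over the neighborhood.
    kernel = np.ones((neighborhood, neighborhood), np.uint8)
    return cv2.dilate(mag, kernel)
```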
Alternatively, in a preferred implementation, the image processing unit may be further configured to:
and calculating inverse depth transformation information of each pixel in a second preset number of neighborhoods in the gradient image to obtain an inverse depth transformation image so as to determine the characteristic straight line information according to the inverse depth transformation information.
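Read this way, the "inverse depth transformation" resembles the inverse distance transform used elsewhere in line-based lidar-camera calibration, which spreads each edge's influence so that the overlap score decays smoothly with distance from the nearest characteristic line. That interpretation, and every name and threshold below, is an assumption.

```python
import cv2
import numpy as np

def inverse_distance_image(grad_img, edge_thresh=50.0):
    """Turn a gradient image into a smooth line-proximity map: strong
    response on characteristic lines, falling off with distance."""
    # Binary edge mask from the gradient image.
    edges = (grad_img > edge_thresh).astype(np.uint8)
    # Distance of every pixel to the nearest edge pixel (zeros in src).
    dist = cv2.distanceTransform(1 - edges, cv2.DIST_L2, 5)
    # Invert so pixels near a line score high in the overlap search.
    return 1.0 / (1.0 + dist)
```

Such a map gives the rotation-and-translation sweep a smoother objective than raw edges, making the preset step lengths less likely to step over the optimum.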
Corresponding to the fourth embodiment, the embodiment of the present application further provides an environment sensing device, referring to fig. 8, the device may specifically include:
a storage unit 810 for pre-storing a calibration result of a conversion relationship between a camera device coordinate system and a radar device coordinate system, wherein the calibration result is obtained by a target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
And the sensing unit 820 is used for fusing the data acquired by the camera equipment and the radar equipment according to the calibration result to obtain a sensing result of the target environment.
In addition, the embodiment of the application also provides electronic equipment, which comprises:
one or more processors; and
a memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the operations of:
acquiring image data acquired by camera equipment on a target calibration reference object and point cloud data acquired by radar equipment on the target calibration reference object, wherein the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
processing according to the point cloud data, and determining a plurality of point cloud data sets with different specific position characteristics;
projecting the point cloud data in the set into an image acquired by the camera device;
and determining a calibration result according to the overlapping degree between the point cloud data in the set and the projection result in the image acquired by the camera equipment.
The embodiment of the present application further provides another electronic device, comprising:
One or more processors; and
a memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the operations of:
pre-storing a conversion relation calibration result between a camera equipment coordinate system and a radar equipment coordinate system, wherein the calibration result is obtained through a target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
and fusing the data acquired by the camera equipment and the radar equipment according to the calibration result to obtain a perception result of the target environment.
Fig. 9 illustrates an architecture of an electronic device, which may include a processor 910, a video display adapter 911, a disk drive 912, an input/output interface 913, a network interface 914, and a memory 920. The processor 910, the video display adapter 911, the disk drive 912, the input/output interface 913, the network interface 914, and the memory 920 may be communicatively connected by a communication bus 930.
The processor 910 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, and executes relevant programs to implement the technical solutions provided in the present application.
The memory 920 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 920 may store an operating system 921 for controlling the operation of the electronic device 900, and a Basic Input Output System (BIOS) for controlling low-level operation of the electronic device 900. In addition, a web browser 923, a data storage management system 924, a calibration processing system 925, and the like may also be stored. The calibration processing system 925 may be an application program that specifically implements the operations of the foregoing steps in the embodiments of the present application. In general, when the technical solutions provided in the present application are implemented in software or firmware, the relevant program code is stored in the memory 920 and invoked by the processor 910 for execution.
The input/output interface 913 is used to connect with the input/output module to realize information input and output. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
The network interface 914 is used to connect communication modules (not shown) to enable communication interactions of the present device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 930 includes a path for transferring information between components of the device (e.g., processor 910, video display adapter 911, disk drive 912, input/output interface 913, network interface 914, and memory 920).
In addition, the electronic device 900 may also obtain information about specific acquisition conditions from the virtual resource object acquisition condition information database 941 for performing condition judgment, and so on.
It is noted that although the above description mentions only the processor 910, video display adapter 911, disk drive 912, input/output interface 913, network interface 914, memory 920 and bus 930, in a specific implementation the device may include other components necessary for proper operation. Furthermore, those skilled in the art will understand that the above-described apparatus may include only the components necessary to implement the solution of the present application, and not all the components shown in the figures.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general purpose hardware platform. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the embodiments or some parts of the embodiments of the present application.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the system and system embodiments are described relatively simply since they are substantially similar to the method embodiments, and reference may be made to the corresponding parts of the method embodiments. The systems and system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of a given embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The external parameter calibration method, device and electronic equipment provided by the present application have been described in detail above, and specific examples have been used to illustrate the principles and implementations of the present application; the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, the specific implementation and the scope of application may change according to the idea of the present application. In view of the foregoing, the content of this specification should not be construed as limiting the present application.

Claims (20)

1. An external parameter calibration method, characterized by comprising the following steps:
acquiring image data acquired by camera equipment on a target calibration reference object and point cloud data acquired by radar equipment on the target calibration reference object, wherein the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
processing according to the point cloud data, determining point cloud data respectively positioned on different plane bodies, and determining a plurality of first point cloud data sets corresponding to a plurality of different plane bodies; processing according to the point cloud data, determining point cloud data respectively positioned on different characteristic straight lines, and determining a plurality of second point cloud data sets corresponding to a plurality of different characteristic straight lines, wherein the characteristic straight lines comprise intersecting lines among a plurality of orthogonal planes and/or boundary lines of each plane body;
Respectively projecting the point cloud data in the first point cloud data set into three-channel images of the color original image acquired by the camera equipment;
searching at a preset initial value of a rotary Euler angle and a preset initial value of a translation vector according to a preset angle step length and a preset translation step length, and determining a first calibration result according to the corresponding rotary Euler angle and the corresponding translation vector when the overlapping degree between point cloud data in the first point cloud data set and projections in the three-channel image accords with a preset condition;
respectively projecting the point cloud data in the plurality of second point cloud data sets into images with characteristic straight line information;
and searching according to a preset angle step length and a translation step length near the first calibration result, and determining a second calibration result according to a corresponding rotation Euler angle and a translation vector when the overlapping degree between the point cloud data in the second point cloud data set and the projection on the characteristic straight line in the image meets a preset condition.
2. The method of claim 1, wherein:
the target calibration reference object comprises a plurality of mutually orthogonal plane bodies.
3. The method of claim 1, wherein:
In the target calibration reference object, the same plane body has the same color, and different planes have different colors.
4. The method of claim 1, wherein:
in the target calibration reference object, the same plane body has the same color, and two plane bodies with an intersecting relationship have different colors.
5. The method of claim 1, wherein:
the camera device can acquire images of the plurality of plane bodies simultaneously, and the radar device can scan the plurality of plane bodies.
6. The method of claim 1, wherein:
the determining a plurality of first point cloud data sets corresponding to a plurality of different planes includes:
fitting according to the point cloud data which are acquired by the radar equipment and related to the calibration reference object, obtaining plane equations respectively corresponding to a plurality of different planes, and dividing the point cloud data respectively belonging to the different planes;
and according to the size information of the planes of the plane equations, sampling the point cloud data in each plane body according to preset intervals to obtain first point cloud data sets respectively corresponding to different plane bodies.
7. The method as recited in claim 6, further comprising:
and after fitting out the plane equation, calibrating the plane equation according to the angle priori information among the plurality of plane bodies.
8. The method of claim 1, wherein:
the determining a plurality of second point cloud data sets corresponding to a plurality of different characteristic lines includes:
fitting according to the point cloud data which are acquired by the radar equipment and related to the calibration reference object, so as to obtain plane equations respectively corresponding to a plurality of different plane bodies;
calculating an intersection equation between every two planes and a boundary line equation of the plane body according to the plane equations corresponding to the different plane bodies;
and sampling the point cloud data on each straight line according to preset intervals to obtain second point cloud data sets respectively corresponding to different straight lines.
9. The method of claim 1, wherein:
processing the image data acquired by the camera device to obtain image data with characteristic straight line information, wherein the processing comprises the following steps:
and carrying out graying treatment on the color image data acquired by the camera equipment, calculating the gradient value of the gray value in a neighborhood of a first preset number of each pixel, taking the maximum gradient value in one neighborhood as the gradient value of the corresponding pixel, and obtaining a gradient image so as to determine the characteristic straight line information according to the gradient value.
10. The method as recited in claim 9, further comprising:
and calculating inverse depth transformation information of each pixel in a second preset number of neighborhoods in the gradient image to obtain an inverse depth transformation image so as to determine the characteristic straight line information according to the inverse depth transformation information.
11. An environmental awareness system, comprising:
the camera equipment is used for collecting image data of the target environment;
the radar equipment is used for acquiring point cloud data of the target environment;
the data processing device is used for pre-storing a conversion relation calibration result between the camera device coordinate system and the radar device coordinate system, and fusing the data acquired by the camera device and the radar device according to the calibration result to obtain a perception result of the target environment; the calibration result is obtained through a target calibration reference object, wherein the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
wherein, the calibration result is obtained by the following way:
acquiring image data acquired by camera equipment on a target calibration reference object and point cloud data acquired by radar equipment on the target calibration reference object;
Processing according to the point cloud data, determining point cloud data respectively positioned on different plane bodies, and determining a plurality of first point cloud data sets corresponding to a plurality of different plane bodies; processing according to the point cloud data, determining point cloud data respectively positioned on different characteristic straight lines, and determining a plurality of second point cloud data sets corresponding to a plurality of different characteristic straight lines, wherein the characteristic straight lines comprise intersecting lines among a plurality of orthogonal planes and/or boundary lines of each plane body;
respectively projecting the point cloud data in the first point cloud data set into three-channel images of the color original image acquired by the camera equipment;
searching at a preset initial value of a rotary Euler angle and a preset initial value of a translation vector according to a preset angle step length and a preset translation step length, and determining a first calibration result according to the corresponding rotary Euler angle and the corresponding translation vector when the overlapping degree between point cloud data in the first point cloud data set and projections in the three-channel image accords with a preset condition;
respectively projecting the point cloud data in the plurality of second point cloud data sets into images with characteristic straight line information;
and searching according to a preset angle step length and a translation step length near the first calibration result, and determining a second calibration result according to a corresponding rotation Euler angle and a translation vector when the overlapping degree between the point cloud data in the second point cloud data set and the projection on the characteristic straight line in the image meets a preset condition.
12. The system of claim 11, wherein:
the target environment includes a target road environment.
13. The system of claim 12, wherein:
the target road environment further comprises a plurality of road side units RSU deployed according to a preset arrangement mode;
the camera device, the radar device and the data processing device are deployed on the RSU, and a wireless communication module is also deployed on the RSU;
the data processing device is further used for broadcasting the perception result of the target road environment through the wireless communication module on the RSU.
14. The system of claim 12, wherein:
the camera device, radar device, and data processing device are deployed on a traffic participant object in the target road environment.
15. The system of claim 14, wherein:
the traffic participant object comprises a traffic participant object of an autopilot class;
the data processing device is also used for carrying out driving decision according to the perception result of the target road environment.
16. A method of environmental awareness, comprising:
pre-storing a conversion relation calibration result between a camera equipment coordinate system and a radar equipment coordinate system, wherein the calibration result is obtained through a target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
Fusing the data acquired by the camera equipment and the radar equipment according to the calibration result to obtain a perception result of the target environment;
wherein, the calibration result is obtained by the following way:
acquiring image data acquired by camera equipment on a target calibration reference object and point cloud data acquired by radar equipment on the target calibration reference object;
processing according to the point cloud data, determining point cloud data respectively positioned on different plane bodies, and determining a plurality of first point cloud data sets corresponding to a plurality of different plane bodies; processing according to the point cloud data, determining point cloud data respectively positioned on different characteristic straight lines, and determining a plurality of second point cloud data sets corresponding to a plurality of different characteristic straight lines, wherein the characteristic straight lines comprise intersecting lines among a plurality of orthogonal planes and/or boundary lines of each plane body;
respectively projecting the point cloud data in the first point cloud data set into three-channel images of the color original image acquired by the camera equipment;
searching at a preset initial value of a rotary Euler angle and a preset initial value of a translation vector according to a preset angle step length and a preset translation step length, and determining a first calibration result according to the corresponding rotary Euler angle and the corresponding translation vector when the overlapping degree between point cloud data in the first point cloud data set and projections in the three-channel image accords with a preset condition;
Respectively projecting the point cloud data in the plurality of second point cloud data sets into images with characteristic straight line information;
and searching according to a preset angle step length and a translation step length near the first calibration result, and determining a second calibration result according to a corresponding rotation Euler angle and a translation vector when the overlapping degree between the point cloud data in the second point cloud data set and the projection on the characteristic straight line in the image meets a preset condition.
17. An external reference calibration device, comprising:
the system comprises a data acquisition unit, a radar device and a target calibration reference object acquisition unit, wherein the data acquisition unit is used for acquiring image data acquired by the camera device on the target calibration reference object and point cloud data acquired by the radar device on the target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies which have preset colors and form preset angles;
the point cloud data set determining unit, used for processing according to the point cloud data, determining point cloud data respectively positioned on different plane bodies, and determining a plurality of first point cloud data sets corresponding to a plurality of different plane bodies; and for processing according to the point cloud data, determining point cloud data respectively positioned on different characteristic straight lines, and determining a plurality of second point cloud data sets corresponding to a plurality of different characteristic straight lines, wherein the characteristic straight lines comprise intersecting lines among a plurality of orthogonal planes and/or boundary lines of each plane body;
The projection unit is used for respectively projecting the point cloud data in the first point cloud data set into three-channel images of the color original image acquired by the camera equipment;
the calibration result determining unit is used for searching at the preset initial value of the rotary Euler angle and the initial value of the translation vector according to the preset angle step length and the translation step length, and determining a first calibration result according to the corresponding rotary Euler angle and the translation vector when the overlapping degree between the point cloud data in the first point cloud data set and the projection in the three-channel image accords with the preset condition;
the projection unit is further configured to: respectively projecting the point cloud data in the plurality of second point cloud data sets into images with characteristic straight line information;
the calibration result determining unit is further configured to: and searching according to a preset angle step length and a translation step length near the first calibration result, and determining a second calibration result according to a corresponding rotation Euler angle and a translation vector when the overlapping degree between the point cloud data in the second point cloud data set and the projection on the characteristic straight line in the image meets a preset condition.
18. An environmental awareness apparatus, comprising:
The storage unit is used for pre-storing a conversion relation calibration result between a camera equipment coordinate system and a radar equipment coordinate system, wherein the calibration result is obtained through a target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
the sensing unit is used for fusing the data acquired by the camera equipment and the radar equipment according to the calibration result to obtain a sensing result of the target environment;
wherein, the calibration result is obtained by the following way:
acquiring image data acquired by camera equipment on a target calibration reference object and point cloud data acquired by radar equipment on the target calibration reference object;
processing according to the point cloud data, determining point cloud data respectively positioned on different plane bodies, and determining a plurality of first point cloud data sets corresponding to a plurality of different plane bodies; processing according to the point cloud data, determining point cloud data respectively positioned on different characteristic straight lines, and determining a plurality of second point cloud data sets corresponding to a plurality of different characteristic straight lines, wherein the characteristic straight lines comprise intersecting lines among a plurality of orthogonal planes and/or boundary lines of each plane body;
Respectively projecting the point cloud data in the first point cloud data set into three-channel images of the color original image acquired by the camera equipment;
searching at a preset initial value of a rotary Euler angle and a preset initial value of a translation vector according to a preset angle step length and a preset translation step length, and determining a first calibration result according to the corresponding rotary Euler angle and the corresponding translation vector when the overlapping degree between point cloud data in the first point cloud data set and projections in the three-channel image accords with a preset condition;
respectively projecting the point cloud data in the plurality of second point cloud data sets into images with characteristic straight line information;
and searching according to a preset angle step length and a translation step length near the first calibration result, and determining a second calibration result according to a corresponding rotation Euler angle and a translation vector when the overlapping degree between the point cloud data in the second point cloud data set and the projection on the characteristic straight line in the image meets a preset condition.
19. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the operations of:
Acquiring image data acquired by camera equipment on a target calibration reference object and point cloud data acquired by radar equipment on the target calibration reference object, wherein the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
processing according to the point cloud data, determining point cloud data respectively positioned on different plane bodies, and determining a plurality of first point cloud data sets corresponding to a plurality of different plane bodies; processing according to the point cloud data, determining point cloud data respectively positioned on different characteristic straight lines, and determining a plurality of second point cloud data sets corresponding to a plurality of different characteristic straight lines, wherein the characteristic straight lines comprise intersecting lines among a plurality of orthogonal planes and/or boundary lines of each plane body;
respectively projecting the point cloud data in the first point cloud data set into three-channel images of the color original image acquired by the camera equipment;
searching at a preset initial value of a rotary Euler angle and a preset initial value of a translation vector according to a preset angle step length and a preset translation step length, and determining a first calibration result according to the corresponding rotary Euler angle and the corresponding translation vector when the overlapping degree between point cloud data in the first point cloud data set and projections in the three-channel image accords with a preset condition;
Respectively projecting the point cloud data in the plurality of second point cloud data sets into images with characteristic straight line information;
and searching according to a preset angle step length and a translation step length near the first calibration result, and determining a second calibration result according to a corresponding rotation Euler angle and a translation vector when the overlapping degree between the point cloud data in the second point cloud data set and the projection on the characteristic straight line in the image meets a preset condition.
20. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the operations of:
pre-storing a conversion relation calibration result between a camera equipment coordinate system and a radar equipment coordinate system, wherein the calibration result is obtained through a target calibration reference object, and the target calibration reference object comprises a plurality of plane bodies with preset colors and forming preset angles;
fusing the data acquired by the camera equipment and the radar equipment according to the calibration result to obtain a perception result of the target environment;
Wherein, the calibration result is obtained by the following way:
acquiring image data acquired by camera equipment on a target calibration reference object and point cloud data acquired by radar equipment on the target calibration reference object;
processing according to the point cloud data, determining point cloud data respectively positioned on different plane bodies, and determining a plurality of first point cloud data sets corresponding to a plurality of different plane bodies; processing according to the point cloud data, determining point cloud data respectively positioned on different characteristic straight lines, and determining a plurality of second point cloud data sets corresponding to a plurality of different characteristic straight lines, wherein the characteristic straight lines comprise intersecting lines among a plurality of orthogonal planes and/or boundary lines of each plane body;
respectively projecting the point cloud data in the first point cloud data set into three-channel images of the color original image acquired by the camera equipment;
searching at a preset initial value of a rotary Euler angle and a preset initial value of a translation vector according to a preset angle step length and a preset translation step length, and determining a first calibration result according to the corresponding rotary Euler angle and the corresponding translation vector when the overlapping degree between point cloud data in the first point cloud data set and projections in the three-channel image accords with a preset condition;
Respectively projecting the point cloud data in the plurality of second point cloud data sets into images with characteristic straight line information;
and searching according to a preset angle step length and a translation step length near the first calibration result, and determining a second calibration result according to a corresponding rotation Euler angle and a translation vector when the overlapping degree between the point cloud data in the second point cloud data set and the projection on the characteristic straight line in the image meets a preset condition.
CN201811636001.XA 2018-12-29 2018-12-29 External parameter calibration method and device and electronic equipment Active CN111383279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811636001.XA CN111383279B (en) 2018-12-29 2018-12-29 External parameter calibration method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811636001.XA CN111383279B (en) 2018-12-29 2018-12-29 External parameter calibration method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111383279A CN111383279A (en) 2020-07-07
CN111383279B true CN111383279B (en) 2023-06-20

Family

ID=71220951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811636001.XA Active CN111383279B (en) 2018-12-29 2018-12-29 External parameter calibration method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111383279B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815717B (en) * 2020-07-15 2022-05-17 西北工业大学 Multi-sensor fusion external parameter combination semi-autonomous calibration method
CN112184828B (en) * 2020-08-21 2023-12-05 阿波罗智联(北京)科技有限公司 Laser radar and camera external parameter calibration method and device and automatic driving vehicle
CN113763478B (en) * 2020-09-09 2024-04-12 北京京东尚科信息技术有限公司 Unmanned vehicle camera calibration method, device, equipment, storage medium and system
CN112419420B (en) * 2020-09-17 2022-01-28 腾讯科技(深圳)有限公司 Camera calibration method and device, electronic equipment and storage medium
CN112802126A (en) * 2021-02-26 2021-05-14 上海商汤临港智能科技有限公司 Calibration method, calibration device, computer equipment and storage medium
CN113406604A (en) * 2021-06-30 2021-09-17 山东新一代信息产业技术研究院有限公司 Device and method for calibrating positions of laser radar and camera
CN113341401A (en) * 2021-07-12 2021-09-03 广州小鹏自动驾驶科技有限公司 Vehicle-mounted laser radar calibration method and device, vehicle and storage medium
CN116071431A (en) * 2021-11-03 2023-05-05 北京三快在线科技有限公司 Calibration method and device, storage medium and electronic equipment
CN116449347B (en) * 2023-06-14 2023-10-03 蘑菇车联信息科技有限公司 Calibration method and device of roadside laser radar and electronic equipment
CN117784121A (en) * 2024-02-23 2024-03-29 四川天府新区北理工创新装备研究院 Combined calibration method and system for road side sensor and electronic equipment


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106405555B (en) * 2016-09-23 2019-01-01 百度在线网络技术(北京)有限公司 Obstacle detection method and device for Vehicular radar system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049912A (en) * 2012-12-21 2013-04-17 浙江大学 Random trihedron-based radar-camera system external parameter calibration method
CN103983961A (en) * 2014-05-20 2014-08-13 南京理工大学 Three-dimensional calibration target for joint calibration of 3D laser radar and camera
CN104484887A (en) * 2015-01-19 2015-04-01 河北工业大学 External parameter calibration method used when camera and two-dimensional laser range finder are used in combined mode
CN108828606A (en) * 2018-03-22 2018-11-16 中国科学院西安光学精密机械研究所 One kind being based on laser radar and binocular Visible Light Camera union measuring method
CN109100707A (en) * 2018-08-21 2018-12-28 百度在线网络技术(北京)有限公司 Scaling method, device, equipment and the storage medium of radar sensor

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Diana-Margarita Córdova-Esparza et al. A multiple camera calibration and point cloud fusion tool for Kinect V2. Science of Computer Programming. 2017, full text. *
贾子永; 任国全; 李冬伟; 程子阳. Camera and lidar calibration method based on trapezoidal checkerboard. Journal of Computer Applications (计算机应用). 2017, (07), full text. *
赵松 et al. Joint calibration of a scanner and a digital camera based on a stereo calibration target. Journal of Geomatics Science and Technology (测绘科学技术学报). 2012, full text. *
闫利; 曹亮; 陈长军; 黄亮. Research on registration of vehicle-mounted panoramic images and laser point cloud data. Bulletin of Surveying and Mapping (测绘通报). 2015, (03), full text. *

Also Published As

Publication number Publication date
CN111383279A (en) 2020-07-07

Similar Documents

Publication Publication Date Title
CN111383279B (en) External parameter calibration method and device and electronic equipment
CN110264520B (en) Vehicle-mounted sensor and vehicle pose relation calibration method, device, equipment and medium
CN110148185B (en) Method and device for determining coordinate system conversion parameters of imaging equipment and electronic equipment
US9972067B2 (en) System and method for upsampling of sparse point cloud for 3D registration
CN109801333B (en) Volume measurement method, device and system and computing equipment
CN112270713A (en) Calibration method and device, storage medium and electronic device
CN111815716A (en) Parameter calibration method and related device
CN111080662A (en) Lane line extraction method and device and computer equipment
CN111815707A (en) Point cloud determining method, point cloud screening device and computer equipment
CN108362205B (en) Space distance measuring method based on fringe projection
WO2022217988A1 (en) Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program
CN114494466B (en) External parameter calibration method, device and equipment and storage medium
CN113034612A (en) Calibration device and method and depth camera
EP3617937A1 (en) Image processing device, driving assistance system, image processing method, and program
KR20160070874A (en) Location-based Facility Management System Using Mobile Device
EP3321882B1 (en) Matching cost computation method and device
EP3782363B1 (en) Method for dynamic stereoscopic calibration
CN116630444A (en) Optimization method for fusion calibration of camera and laser radar
CN116245937A (en) Method and device for predicting stacking height of goods stack, equipment and storage medium
CN111538008B (en) Transformation matrix determining method, system and device
CN111382591B (en) Binocular camera ranging correction method and vehicle-mounted equipment
WO2019012004A1 (en) Method for determining a spatial uncertainty in images of an environmental area of a motor vehicle, driver assistance system as well as motor vehicle
KR20230003803A (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN111862208B (en) Vehicle positioning method, device and server based on screen optical communication
KR102195040B1 (en) Method for collecting road signs information using MMS and mono camera

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230721

Address after: Room 437, Floor 4, Building 3, No. 969, Wenyi West Road, Wuchang Subdistrict, Yuhang District, Hangzhou City, Zhejiang Province

Patentee after: Wuzhou Online E-Commerce (Beijing) Co.,Ltd.

Address before: P.O. Box 847, Fourth Floor, Capital Building, Grand Cayman, Cayman Islands

Patentee before: ALIBABA GROUP HOLDING Ltd.