WO2020233443A1 - A calibration method and device between a lidar and a camera - Google Patents
A calibration method and device between a lidar and a camera
- Publication number: WO2020233443A1 (PCT application PCT/CN2020/089722)
- Authority: WIPO (PCT)
- Prior art keywords: camera, point cloud, preset, interval, calibration
Classifications
- G01S7/497: Means for monitoring or calibrating
- G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/10028: Range image; Depth image; 3D point clouds
- Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Description
- This application relates to the field of computer technology, in particular to a calibration method between a lidar and a camera, a calibration method, a calibration device between a lidar and a camera, and a calibration device.
- multi-sensor calibration is mainly divided into manual calibration and automatic calibration.
- Manual calibration is performed by professionals with calibration experience, who apply specific calibration methods to sensor data collected offline; it is not suitable for batch calibration.
- Automatic calibration realizes the calibration of multiple sensors automatically through specific algorithms, by selecting specific calibration scenarios and custom tools.
- for mid- and low-end lidars, the environmental point cloud information obtained is not as rich and accurate as that of a high-end radar; if a calibration algorithm designed for high-end radars is used, the calibration accuracy requirements of unmanned vehicles equipped with low-end lidars cannot be met.
- In view of the above problems, the embodiments of the present application are proposed to provide a calibration method between a lidar and a camera, a calibration method, a calibration device between a lidar and a camera, and a calibration device that overcome the above problems or at least partially solve the above problems.
- the embodiment of the present application discloses a calibration method between the lidar and the camera, including:
- the first rotation vector corresponding to the maximum degree of coincidence is determined as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
- the calculating the coincidence degree between the corresponding image and the point cloud respectively according to each first rotation vector includes:
- the first conversion matrix and the internal parameters of the camera are used to calculate the degree of coincidence between the corresponding image and the point cloud.
- the using the first conversion matrix and the internal parameters of the camera to calculate the degree of coincidence between the corresponding image and the point cloud includes:
- the number of first target projection points is used to determine the degree of coincidence between the image and the point cloud.
- the using the number of the first target projection points to determine the degree of coincidence between the image and the point cloud includes:
- the first target projection point ratio is used to determine the degree of coincidence between the image and the point cloud.
- the determining a plurality of first rotation vectors within a preset first rotation vector interval includes:
- a plurality of first rotation vectors are determined according to a preset radian interval.
- the preset first rotation vector interval includes a preset first roll angle interval, a preset first pitch angle interval, and a preset first yaw angle interval; the determining multiple first rotation vectors within the preset first rotation vector interval according to the preset radian interval includes:
- within the preset first roll angle interval, a plurality of roll angles are determined according to a preset radian interval;
- a roll angle is selected from the plurality of roll angles, a pitch angle is selected from the plurality of pitch angles, and a yaw angle is selected from the plurality of yaw angles, and these are combined to obtain a plurality of first rotation vectors.
- it also includes:
- the smaller of the first radian and the second radian is used as the preset radian interval.
- it also includes:
- the reference rotation vector and the preset radian interval are used to determine the preset first rotation vector interval.
- the determining the reference rotation vector includes:
- the preset second rotation vector interval includes a preset second roll angle interval, a preset second pitch angle interval, and a preset second yaw angle interval;
- a reference rotation vector is determined.
- the determining a reference rotation vector from the plurality of second rotation vectors includes:
- the second rotation vector corresponding to the maximum coincidence degree is determined as the reference rotation vector.
- the determining the three-dimensional coordinates of the point cloud of the calibration plate located in the calibration plate in the point cloud includes:
- the embodiment of the application also discloses a calibration method, which is applied to an unmanned vehicle.
- the unmanned vehicle includes at least one camera and at least one lidar.
- the at least one camera and the at least one lidar each have their own coordinate system, and the method includes:
- the first camera corresponding to the first lidar is determined, and the coordinate system of the first camera is calibrated to the coordinate system of the corresponding first lidar.
- the coordinate system of the second camera is calibrated to the coordinate system of the associated first lidar, and the coordinate system of the second lidar is calibrated to the coordinate system of the second camera.
- the at least one camera includes: at least one industrial camera and at least one surround view camera; the selecting a target camera from the at least one camera includes:
- One of the at least one industrial camera is selected as the target camera.
- the determining the first camera corresponding to the first lidar among cameras other than the target camera includes:
- a first surround view camera corresponding to the first lidar is determined.
- the determining the second camera corresponding to the second lidar includes:
- the embodiment of the application also discloses a calibration device between the lidar and the camera, including:
- An image acquisition module configured to acquire the image collected by the camera for the calibration board and the point cloud collected by the lidar for the calibration board;
- the first rotation vector determining module is configured to determine a plurality of first rotation vectors within a preset first rotation vector interval
- the first degree of coincidence calculation module is configured to calculate the degree of coincidence between the corresponding image and the point cloud according to each first rotation vector
- the rotation vector calibration module is used to determine the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
- the first degree of coincidence calculation module includes:
- the parameter acquisition sub-module is used to acquire the translation vector between the coordinate system of the lidar and the coordinate system of the camera, and to acquire the internal parameters of the camera;
- the first conversion matrix determining sub-module is configured to use the multiple first rotation vectors and the translation vectors to determine multiple first conversion matrices;
- the first degree of coincidence calculation submodule is configured to use the first conversion matrix and the internal parameters of the camera to calculate the degree of coincidence between the corresponding image and the point cloud for one first conversion matrix.
- the first degree of coincidence calculation submodule includes:
- a camera coordinate system acquisition unit for acquiring the camera coordinate system of the camera
- An image information determining unit configured to determine the contour of the calibration plate in the image, and determine the three-dimensional coordinates of the calibration plate point cloud located in the calibration plate in the point cloud;
- a projection unit configured to use the first conversion matrix, the internal parameters of the camera, and the three-dimensional coordinates of the calibration plate point cloud to project the calibration plate point cloud onto the image to obtain a first projection point cloud;
- a target projection point determination unit configured to determine the number of first target projection points in the first projection point cloud that fall within the contour of the calibration plate in the image
- the first coincidence degree determining unit is configured to use the number of the first target projection points to determine the degree of coincidence between the image and the point cloud.
- the first coincidence degree determining unit includes:
- the projection ratio calculation subunit is used to calculate, for a calibration board, the first target projection point ratio of the number of first target projection points to the number of points in the calibration board point cloud of that calibration board;
- the first degree of coincidence determining subunit is configured to adopt the first target projection point ratio to determine the degree of coincidence between the image and the point cloud.
- the first rotation vector determining module includes:
- the first rotation vector determining sub-module is configured to determine a plurality of first rotation vectors in a preset first rotation vector interval according to a preset radian interval.
- the preset first rotation vector interval includes a preset first roll angle interval, a preset first pitch angle interval, and a preset first yaw angle interval;
- the first rotation vector determining submodule includes:
- a roll angle determining unit configured to determine a plurality of roll angles according to a preset radian interval within the preset first roll angle interval;
- a pitch angle determining unit configured to determine multiple pitch angles according to the preset radian interval within the preset first pitch angle interval;
- a yaw angle determining unit configured to determine a plurality of yaw angles according to the preset radian interval within the preset first yaw angle interval;
- the first rotation vector determining unit is configured to select a roll angle from the plurality of roll angles, select a pitch angle from the plurality of pitch angles, and select a yaw angle from the plurality of yaw angles, and combine them to obtain multiple first rotation vectors.
- it also includes:
- a camera parameter acquisition module for acquiring the horizontal field of view and vertical field of view of the camera, and the resolution of the image
- the first radian determination module is configured to divide the horizontal field of view by the width of the resolution to obtain the first radian
- the second radian determination module is configured to divide the vertical field of view by the height of the resolution to obtain the second radian
- the radian interval determination module is configured to use the smaller of the first radian and the second radian as the preset radian interval.
- it also includes:
- the reference rotation vector determination module is used to determine the reference rotation vector
- the first rotation vector interval determination module is configured to use the reference rotation vector and the preset radian interval to determine the preset first rotation vector interval.
- the reference rotation vector determining module includes:
- the second rotation vector interval acquisition sub-module is configured to acquire a preset second rotation vector interval, where the preset second rotation vector interval includes a preset second roll angle interval, a preset second pitch angle interval, and a preset second yaw angle interval;
- An angle adjustment sub-module configured to adjust the pitch angle in the preset second pitch angle interval, and adjust the yaw angle in the preset second yaw angle interval;
- the target angle determination sub-module is used to determine the target pitch angle and target yaw angle when the center of the calibration board of the image coincides with the center of the first projection point cloud;
- the second rotation vector determining submodule is configured to adjust the roll angle within the preset second roll angle interval under the target pitch angle and the target yaw angle to obtain a plurality of second rotation vectors;
- the reference rotation vector determining sub-module is used to determine the reference rotation vector from the plurality of second rotation vectors.
- the reference rotation vector determining submodule includes:
- a second conversion matrix determining unit configured to use the multiple second rotation vectors and the translation vectors between the coordinate system of the lidar and the coordinate system of the camera to determine multiple second conversion matrices
- a second degree of coincidence calculation unit configured to calculate the degree of coincidence between the corresponding image and the point cloud by using the second conversion matrix and the internal parameters of the camera for one second conversion matrix
- the reference rotation vector determining unit is used to determine the second rotation vector corresponding to the maximum degree of coincidence as the reference rotation vector.
- the image information determining unit includes:
- the first calibration board point cloud determination subunit is configured to adopt a point cloud clustering algorithm to extract the calibration board point cloud located in the calibration board from the point cloud;
- the first point cloud coordinate determination subunit is used to determine the three-dimensional coordinates of the point cloud of the calibration board.
- the image information determining unit includes:
- the reflectivity acquisition subunit is used to acquire the reflectivity of each point in the point cloud
- the second calibration plate point cloud determining subunit is used to determine the point cloud of the calibration plate located in the calibration plate by using points with reflectance greater than a preset reflectivity threshold;
- the second point cloud coordinate determination subunit is used to determine the three-dimensional coordinates of the point cloud of the calibration plate.
- the image information determining unit includes:
- the size information acquisition subunit is used to acquire the size information of the calibration board
- the third calibration board point cloud determination subunit is configured to use the size information of the calibration board to determine the point cloud of the calibration board located in the calibration board in the point cloud;
- the third point cloud coordinate determination subunit is used to determine the three-dimensional coordinates of the point cloud of the calibration plate.
- the embodiment of the application also discloses a calibration device, which is applied to an unmanned vehicle.
- the unmanned vehicle includes at least one camera and at least one lidar.
- the at least one camera and the at least one lidar each have their own coordinate system, and the device includes:
- a reference coordinate system determining module configured to select a target camera from the at least one camera, and use the coordinate system of the target camera as the reference coordinate system;
- the first calibration module is configured to determine the first laser radar associated with the target camera in the at least one laser radar, and calibrate the coordinate system of the first laser radar to the reference coordinate system;
- the second calibration module is used to determine the first camera corresponding to the first lidar among cameras other than the target camera, and calibrate the coordinate system of the first camera to the coordinate system of the corresponding first lidar.
- a non-association determining module configured to determine a second laser radar that is not associated with the target camera, and determine a second camera corresponding to the second laser radar;
- the third calibration module is used to calibrate the coordinate system of the second camera to the coordinate system of the associated first lidar, and calibrate the coordinate system of the second lidar to the coordinate system of the second camera.
- the at least one camera includes: at least one industrial camera and at least one surround view camera;
- the reference coordinate system determination module includes:
- the target camera selection submodule is used to select one of the at least one industrial camera as the target camera.
- the second calibration module includes:
- the first surround view camera determining sub-module is configured to determine a first surround view camera corresponding to the first lidar among the at least one surround view camera.
- the non-association determining module includes:
- the second surround view camera determining sub-module is used to determine the second surround view camera corresponding to the second lidar.
- the embodiment of the application also discloses a device, including:
- one or more processors; and
- one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the apparatus to perform one or more of the methods described above.
- the embodiment of the present application also discloses one or more machine-readable media, on which instructions are stored, which, when executed by one or more processors, cause the processors to execute one or more of the methods described above.
- in the embodiments of the present application, by traversing the plurality of first rotation vectors, the first rotation vector that makes the image collected by the camera and the point cloud collected by the lidar coincide to the highest degree can be determined, and the first rotation vector corresponding to the maximum coincidence degree is used as the rotation vector for finally calibrating the coordinate system of the lidar to the coordinate system of the camera.
- FIG. 1 is a flowchart of the steps of Embodiment 1 of a calibration method between a lidar and a camera according to the present application;
- FIG. 2 is a flowchart of the steps of Embodiment 2 of a calibration method between a lidar and a camera according to the present application;
- Fig. 3 is a schematic diagram of projecting a calibration plate point cloud onto an image in an embodiment of the present application
- Fig. 4 is another schematic diagram of projecting a calibration plate point cloud onto an image in an embodiment of the present application
- FIG. 5 is a flowchart of steps of an embodiment of a calibration method of the present application.
- Fig. 6 is a schematic diagram of an unmanned vehicle calibration scenario in an embodiment of the present application.
- FIG. 7 is a structural block diagram of an embodiment of a calibration device between a lidar and a camera according to the present application.
- Fig. 8 is a structural block diagram of an embodiment of a calibration device of the present application.
- current unmanned logistics vehicles use mid- and low-end lidars; if a calibration algorithm designed for high-end radars is used, the calibration accuracy requirements of unmanned logistics vehicles cannot be met.
- the calibration from laser to camera is to determine the transformation matrix RT from the laser coordinate system to the camera coordinate system.
- the transformation matrix RT is uniquely determined by the translation vector T(x, y, z) and the rotation vector R(r, p, y). If all 6 variables are optimized and solved simultaneously, the search solution space is huge, and the algorithm very easily converges to a local optimum.
- therefore, in the embodiments of the present application, the translation vector is fixed and the rotation vector solution space is traversed to find the optimal rotation vector, so as to obtain the optimal transformation matrix.
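- As a rough illustration of this search strategy, the following Python sketch traverses candidate rotation vectors with the translation held fixed; the callable `coincidence_degree` is a hypothetical helper standing in for the projection-and-scoring steps described later, not a function defined by the patent:

```python
import numpy as np

def calibrate_rotation(rvec_candidates, coincidence_degree):
    # Traverse the rotation solution space with the translation vector fixed;
    # `coincidence_degree(rvec)` is assumed to project the calibration board
    # point cloud under the candidate rotation and score image/point-cloud overlap.
    best_rvec, best_score = None, -np.inf
    for rvec in rvec_candidates:
        score = coincidence_degree(rvec)
        if score > best_score:
            best_rvec, best_score = rvec, score
    return best_rvec  # rotation vector calibrating lidar -> camera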
- Step 101 Obtain an image collected by the camera with respect to a calibration board and a point cloud collected by the lidar with respect to the calibration board;
- the calibration method in the embodiment of the present application is proposed for mid- and low-end lidars; in addition to being suitable for mid- and low-end lidars, it is also applicable to high-end lidars.
- there may be multiple cameras and multiple lidars, and the method of the embodiment of the application can be used to achieve calibration between each camera and each lidar.
- the cameras may include industrial cameras, surround view cameras, and other cameras used in unmanned vehicles.
- a camera and a lidar are used to collect the calibration board.
- the camera collects an image, and the image contains the image of the calibration board; the lidar collects a point cloud, which contains the laser points directed to the calibration board and reflected by the calibration board.
- the transmitter of the lidar emits a laser beam. After the laser beam encounters an object, it undergoes diffuse reflection and returns to the laser receiver to obtain a laser spot.
- the number and colors of the calibration plates are not limited, and any color and any number of calibration plates can be used.
- three red chevron boards with a size of 80cm*80cm can be used as calibration boards.
- Step 102 Determine a plurality of first rotation vectors within a preset first rotation vector interval
- the translation vector T between the camera and the lidar can be accurately measured; therefore, it is only necessary to find the optimal rotation vector in the preset first rotation vector interval to obtain the optimal transformation matrix.
- Step 103 Calculate the degree of coincidence between the corresponding image and the point cloud according to each first rotation vector
- the image collected by the camera contains an object, and the position of the object is determined in the image; the point cloud is determined by the laser radar based on the laser light reflected by the object, and the coordinate position of the point cloud reflects the position of the object.
- the degree of coincidence is a parameter describing the degree of coincidence between the coordinate position of the point cloud and the position of the object in the image.
- Step 104 Determine the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
- the first rotation vector when the degree of coincidence is maximized can be used as the rotation vector for finally calibrating the coordinate system of the lidar to the coordinate system of the camera.
- in the embodiments of the present application, by traversing the plurality of first rotation vectors, the first rotation vector that makes the image collected by the camera and the point cloud collected by the lidar coincide to the highest degree can be determined, and the first rotation vector corresponding to the maximum coincidence degree is used as the rotation vector for finally calibrating the coordinate system of the lidar to the coordinate system of the camera.
- a reference coordinate system can be determined first, for example, the coordinate system of a camera is selected as the reference coordinate system.
- the calibration method of the embodiment of the present application can realize automatic calibration.
- various sensors will inevitably be replaced when the vehicle is put into actual operation, which means that the replaced sensors need to be re-calibrated, and the vehicle cannot be put back into operation until calibration of the newly replaced sensor is completed. Therefore, the calibration method of this application achieves the goal of instant sensor replacement, instant calibration, and instant operation.
- Referring to FIG. 2, a flowchart of the second embodiment of a calibration method between a lidar and a camera according to the present application is shown, which may specifically include the following steps:
- Step 201 Obtain an image collected by the camera with respect to the calibration board and a point cloud collected by the lidar with respect to the calibration board;
- Step 202 Determine a plurality of first rotation vectors within a preset first rotation vector interval
- the step 202 may include: determining a plurality of first rotation vectors according to a preset radian interval within a preset first rotation vector interval.
- the preset radian interval may be used as the step size to traverse the entire preset first rotation vector interval to determine multiple first rotation vectors.
- specifically, the preset first rotation vector interval includes a preset first roll angle interval, a preset first pitch angle interval, and a preset first yaw angle interval. Within the preset first roll angle interval, a plurality of roll angles may be determined according to a preset radian interval; within the preset first pitch angle interval, a plurality of pitch angles may be determined according to the preset radian interval; and within the preset first yaw angle interval, a plurality of yaw angles may be determined according to the preset radian interval. A roll angle is then selected from the plurality of roll angles, a pitch angle from the plurality of pitch angles, and a yaw angle from the plurality of yaw angles, and these are combined to obtain multiple first rotation vectors.
- For example, the preset first rotation vector interval is [(r1, p1, y1), (r2, p2, y2)]: the preset first roll angle interval is [r1, r2], from which n1 roll angles are determined according to the preset radian interval; the preset first pitch angle interval is [p1, p2], from which n2 pitch angles are determined; and the preset first yaw angle interval is [y1, y2], from which n3 yaw angles are determined.
- a roll angle is selected from the n1 roll angles, a pitch angle from the n2 pitch angles, and a yaw angle from the n3 yaw angles for combination; in this way, n1*n2*n3 first rotation vectors can be obtained.
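- A minimal sketch of this grid construction (the function name and the numpy/itertools usage are illustrative, not from the patent):

```python
import itertools
import numpy as np

def rotation_vector_grid(roll_iv, pitch_iv, yaw_iv, step):
    # Each interval is a (low, high) pair in radians; `step` is the preset
    # radian interval. Yields n1 * n2 * n3 candidate first rotation vectors.
    rolls = np.arange(roll_iv[0], roll_iv[1] + step, step)
    pitches = np.arange(pitch_iv[0], pitch_iv[1] + step, step)
    yaws = np.arange(yaw_iv[0], yaw_iv[1] + step, step)
    return [np.array(rpy) for rpy in itertools.product(rolls, pitches, yaws)]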
- the preset radian interval can be determined through the following steps: obtain the horizontal field of view and the vertical field of view of the camera, and the resolution of the image; divide the horizontal field of view by the width of the resolution to obtain the first radian; divide the vertical field of view by the height of the resolution to obtain the second radian; and use the smaller of the first radian and the second radian as the preset radian interval.
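- In code, this per-pixel angular step could be computed as follows (a sketch; the argument names are assumptions):

```python
def preset_radian_interval(fov_h_rad, fov_v_rad, width_px, height_px):
    # Angular size of one pixel horizontally and vertically; the smaller of
    # the two is used as the search step, so consecutive candidates move the
    # projection by no more than about one pixel.
    first_radian = fov_h_rad / width_px
    second_radian = fov_v_rad / height_px
    return min(first_radian, second_radian)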
- the preset first rotation vector interval may be determined by the following steps: determining a reference rotation vector; and using the reference rotation vector and the preset radian interval to determine the preset first rotation vector interval.
- the reference rotation vector is (r0, p0, y0), where r0 is the reference roll angle, p0 is the reference pitch angle, and y0 is the reference yaw angle.
- the lower limit of the roll angle interval, r0-M*s, can be obtained by subtracting the product of the preset first reference value M and the preset radian interval s from the reference roll angle r0; the upper limit, r0+M*s, can be obtained by adding that product to r0; the lower and upper limits determine the preset first roll angle interval [r0-M*s, r0+M*s].
- the lower limit of the pitch angle interval, p0-M*s, can be obtained by subtracting the product of M and s from the reference pitch angle p0; the upper limit, p0+M*s, by adding that product to p0; these determine the preset first pitch angle interval [p0-M*s, p0+M*s].
- the lower limit of the yaw angle interval, y0-M*s, can be obtained by subtracting the product of M and s from the reference yaw angle y0; the upper limit, y0+M*s, by adding that product to y0; these determine the preset first yaw angle interval [y0-M*s, y0+M*s].
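- A sketch of building these intervals around a reference rotation vector; the default values M = 15 and s = 0.001 rad are assumptions chosen to be consistent with the [-0.015, 0.015] half-width mentioned below, not values fixed by the patent:

```python
def first_rotation_vector_interval(ref_rvec, M=15, s=0.001):
    # Intervals [x0 - M*s, x0 + M*s] around the reference solution (r0, p0, y0).
    r0, p0, y0 = ref_rvec
    return ((r0 - M * s, r0 + M * s),   # preset first roll angle interval
            (p0 - M * s, p0 + M * s),   # preset first pitch angle interval
            (y0 - M * s, y0 + M * s))   # preset first yaw angle interval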
- the preset radian interval is usually set to be very small, such as 0.001 rad, and the reasonable variation interval of (r, p, y) is often very large compared to the preset radian interval.
- therefore, the pitch and yaw are adjusted first, so that the center of the first projection point cloud, obtained by projecting the calibration board point cloud onto the image, coincides with the center of the calibration board in the image.
- this method only needs to iterate 50-100 times before it converges, yielding reference values p0 and y0.
- in this way, a reference solution (r0, p0, y0) can be found, and this reference solution is taken as the center of the preset first rotation vector interval.
- the embodiment of this application can then find the optimal solution in the small interval [-0.015, 0.015] around the reference solution, and experimental tests show that this solution is also the global optimum.
- note that roll can be adjusted only after p0 and y0 are determined; it is not possible to determine r0 and p0 first and then adjust yaw, or to determine r0 and y0 first and then adjust pitch.
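- One possible reading of this two-stage search, written as an exhaustive sweep rather than the 50-100-iteration adjustment the patent describes; `project_center` and `score` are hypothetical callables standing in for the projection and coincidence-degree steps:

```python
import numpy as np

def find_reference_rvec(r_iv, p_iv, y_iv, step, project_center, image_center, score):
    # Stage 1: with roll fixed mid-interval, sweep pitch and yaw until the
    # projected point-cloud center best coincides with the board center.
    r_mid = 0.5 * (r_iv[0] + r_iv[1])
    best, best_dist = None, float("inf")
    for p in np.arange(p_iv[0], p_iv[1] + step, step):
        for y in np.arange(y_iv[0], y_iv[1] + step, step):
            d = np.linalg.norm(project_center(r_mid, p, y) - np.asarray(image_center))
            if d < best_dist:
                best, best_dist = (p, y), d
    p0, y0 = best
    # Stage 2: with the target pitch/yaw fixed, sweep roll and keep the
    # second rotation vector with the maximum coincidence degree.
    rolls = np.arange(r_iv[0], r_iv[1] + step, step)
    r0 = max(rolls, key=lambda r: score(r, p0, y0))
    return (r0, p0, y0)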
- the step of determining the reference rotation vector may include:
- the preset second rotation vector interval includes a preset second roll angle interval, a preset second pitch angle interval, and a preset second yaw angle interval;
- a reference rotation vector is determined.
- the step of determining a reference rotation vector from the plurality of second rotation vectors may include:
- the multiple second rotation vectors and the translation vector between the coordinate system of the lidar and the coordinate system of the camera are used to determine multiple second conversion matrices; for one second conversion matrix, the second conversion matrix and the internal parameters of the camera are used to calculate the degree of coincidence between the corresponding image and the point cloud; the second rotation vector corresponding to the maximum degree of coincidence is determined as the reference rotation vector.
- the step of calculating the coincidence degree between the corresponding image and the point cloud by using the second conversion matrix and the internal parameters of the camera may include:
- the second conversion matrix, the camera's internal parameters, and the three-dimensional coordinates of the calibration board point cloud are used to project the calibration board point cloud to the camera coordinate system to obtain the second projection point cloud; the number of second target projection points in the second projection point cloud that fall within the contour of the calibration board in the image is determined; and the number of second target projection points is used to determine the degree of coincidence between the image and the point cloud.
- the number of second target projection points may be used directly as the degree of coincidence between the image and the point cloud: the greater the number of second target projection points, the higher the degree of coincidence.
- alternatively, the ratio of the number of second target projection points to the number of points in the calibration board point cloud may be used to determine the degree of coincidence.
- specifically, the second target projection point ratio of the number of second target projection points corresponding to a calibration board to the number of points in the calibration board point cloud of that calibration board can be calculated; the second target projection point ratio is then used to determine the degree of coincidence between the image and the point cloud.
- Step 203 Obtain the translation vector between the coordinate system of the lidar and the coordinate system of the camera, and obtain the internal parameters of the camera;
- Internal parameters are parameters that describe the characteristics of the camera. Since the camera coordinate system is measured in millimeters while the image plane is measured in pixels, the function of the internal parameters is to provide the linear transformation between the two coordinate systems. The internal parameters of the camera can be obtained with a camera calibration tool.
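- For illustration, the intrinsic parameters are conventionally arranged as a 3x3 matrix; the numeric values below are placeholders, not values from the patent:

```python
import numpy as np

# fx, fy are focal lengths in pixels and (cx, cy) is the principal point.
fx, fy, cx, cy = 1000.0, 1000.0, 640.0, 360.0  # illustrative only
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])  # maps camera-frame rays to pixel coordinates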
- Step 204 Use the multiple first rotation vectors and the translation vector to determine multiple first conversion matrices;
- each first conversion matrix is composed of a first rotation vector and a fixed translation vector.
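- A sketch of composing one first conversion matrix as a 4x4 homogeneous transform; the Z*Y*X Euler composition order is an assumption, since the patent does not fix a rotation convention:

```python
import numpy as np

def conversion_matrix(rpy, tvec):
    # Rotation built from (roll, pitch, yaw); Z*Y*X order is assumed.
    r, p, y = rpy
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(r), -np.sin(r)],
                   [0.0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0.0, np.sin(p)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(p), 0.0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0.0],
                   [np.sin(y),  np.cos(y), 0.0],
                   [0.0, 0.0, 1.0]])
    RT = np.eye(4)
    RT[:3, :3] = Rz @ Ry @ Rx      # rotation block R
    RT[:3, 3] = np.asarray(tvec)   # fixed, measured translation T(x, y, z)
    return RT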
- Step 205 For one of the first conversion matrices, use the first conversion matrix and the internal parameters of the camera to calculate the degree of coincidence between the corresponding image and the point cloud;
- the step 205 may include the following sub-steps:
- Sub-step S11 acquiring the camera coordinate system of the camera
- Sub-step S12 determining the contour of the calibration plate in the image, and determining the three-dimensional coordinates of the point cloud of the calibration plate in the point cloud;
- the point cloud data collected by lidar is three-dimensional, represented by a Cartesian coordinate system (X, Y, Z).
- a point cloud clustering algorithm may be used to determine the three-dimensional coordinates of the point cloud of the calibration plate.
- a point cloud clustering algorithm may be used to extract a calibration board point cloud located in the calibration board from the point cloud; determine the three-dimensional coordinates of the calibration board point cloud.
- the reflectivity of the calibration board to the laser can be used as prior information to determine the three-dimensional coordinates of the calibration board point cloud. Since objects of different materials reflect laser light to different degrees, a calibration board made of a high-reflectivity material can be selected; in the collected laser point cloud data, by setting an appropriate reflectivity threshold, the laser points with reflectivity greater than the reflectivity threshold can be determined as the points where the laser hits the calibration board.
- specifically, the reflectivity of each point in the point cloud can be obtained; the points whose reflectivity is greater than a preset reflectivity threshold are used to determine the calibration board point cloud located on the calibration board; and the three-dimensional coordinates of the calibration board point cloud are determined.
- the size information of the calibration plate may be used as the prior information to determine the three-dimensional coordinates of the point cloud of the calibration plate.
- specifically, the size information of the calibration board can be obtained; the size information of the calibration board is used to determine the calibration board point cloud located on the calibration board in the point cloud; and the three-dimensional coordinates of the calibration board point cloud are determined.
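- A minimal sketch of the reflectivity-based extraction, assuming the point cloud is available as numpy arrays of coordinates and per-point intensities:

```python
import numpy as np

def board_points_by_reflectivity(points_xyz, intensities, threshold):
    # points_xyz: (N, 3) lidar returns; intensities: (N,) reflectivity values.
    # Points above the preset reflectivity threshold are taken as the
    # calibration board point cloud.
    mask = np.asarray(intensities) > threshold
    return np.asarray(points_xyz)[mask]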
- Sub-step S13 using the first conversion matrix, the internal parameters of the camera and the three-dimensional coordinates of the calibration plate point cloud to project the calibration plate point cloud onto the image to obtain a first projection point cloud;
- a dedicated software interface can be called to realize projection.
- for example, the projectPoints function of the OpenCV library is used to project three-dimensional coordinates into a two-dimensional image.
- FIG. 3 is a schematic diagram of projecting a calibration plate point cloud onto an image in an embodiment of the application. As shown in Figure 3, the point cloud of the calibration plate projected into the image has a low degree of overlap with the calibration plate in the image. Under different conversion matrices, the position of the projected point cloud in the image will change.
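- A sketch of such a projection call using OpenCV's cv2.projectPoints; note that projectPoints expects an axis-angle rotation vector, so an Euler (r, p, y) candidate would first need converting to that form (argument names are assumptions):

```python
import cv2
import numpy as np

def project_board_points(board_xyz, rvec, tvec, K, dist):
    # board_xyz: (N, 3) calibration board point cloud in lidar coordinates;
    # rvec: axis-angle rotation vector; tvec: translation vector;
    # K: intrinsic matrix; dist: distortion coefficients (or None).
    obj = np.asarray(board_xyz, dtype=np.float64).reshape(-1, 1, 3)
    img_pts, _ = cv2.projectPoints(obj,
                                   np.asarray(rvec, np.float64).reshape(3, 1),
                                   np.asarray(tvec, np.float64).reshape(3, 1),
                                   K, dist)
    return img_pts.reshape(-1, 2)  # first projection point cloud in pixels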
- Sub-step S14 determining the number of first target projection points in the first projection point cloud that fall within the contour of the calibration plate in the image;
- Sub-step S15 using the number of the first target projection points to determine the degree of coincidence between the image and the point cloud.
- the number of first target projection points may be used directly as the degree of coincidence between the image and the point cloud; the greater the number of first target projection points, the higher the degree of coincidence.
- For example, the numbers of laser points emitted by the lidar onto two calibration boards are 120 and 100 respectively.
- the numbers of first target projection points of the calibration board point cloud that fall within the two calibration board contours in the image are 90 and 80 respectively; if the total number of first target projection points over all calibration boards is taken as the degree of coincidence, the degree of coincidence is 170.
- alternatively, the ratio of the number of first target projection points to the number of points in the calibration board point cloud may be used to determine the degree of coincidence.
- specifically, the sub-step S15 may include: calculating the first target projection point ratio of the number of first target projection points corresponding to a calibration board to the number of points in the calibration board point cloud of that calibration board; and using the first target projection point ratio to determine the degree of coincidence between the image and the point cloud.
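- A sketch of this counting-and-ratio computation using OpenCV's pointPolygonTest (the function and argument names are illustrative):

```python
import cv2

def first_target_projection_ratio(projected_pts, board_contour, n_board_points):
    # projected_pts: (N, 2) pixel coordinates of the first projection point
    # cloud; board_contour: the calibration board contour found in the image
    # (an OpenCV contour array); pointPolygonTest >= 0 means inside or on edge.
    inside = sum(
        cv2.pointPolygonTest(board_contour, (float(u), float(v)), False) >= 0
        for u, v in projected_pts)
    # Ratio of first target projection points to calibration board points.
    return inside / float(n_board_points)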
- Step 206 Determine the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
- FIG. 4 is another schematic diagram of projecting a calibration board point cloud onto an image in an embodiment of this application.
- here the degree of coincidence is the highest: the projection of the calibration board point cloud corresponds completely to the calibration board in the image, and the entire point cloud likewise corresponds to the image.
- in the embodiments of the present application, by traversing the plurality of first rotation vectors, the first rotation vector that makes the image collected by the camera and the point cloud collected by the lidar coincide to the highest degree can be determined, and the first rotation vector corresponding to the maximum coincidence degree is used as the rotation vector for finally calibrating the coordinate system of the lidar to the coordinate system of the camera.
- the method is applied to an unmanned vehicle; the unmanned vehicle includes at least one industrial camera, at least one surround view camera, and at least one lidar;
- the at least one camera and the at least one lidar each have their own coordinate system, and the method may specifically include the following steps:
- Step 501 Select a target camera from the at least one camera, and use the coordinate system of the target camera as a reference coordinate system;
- the unmanned vehicle may be provided with multiple cameras, and may include at least one industrial camera and at least one surround view camera.
- Industrial cameras have high image stability, high transmission capacity and high anti-interference ability, and are generally set in front of unmanned vehicles to collect images in the space ahead.
- the surround view camera has a relatively large field of view.
- the installation of multiple surround view cameras in the unmanned vehicle can cover the 360-degree area around the unmanned vehicle, and can ensure that the blind area of the unmanned vehicle's vision is as small as possible.
- when different cameras are selected as the target camera, the calibration process and its complexity will differ; in practice, one of the industrial cameras and surround view cameras is chosen as the target camera according to the relative positions of the industrial cameras, surround view cameras, and lidars in the unmanned vehicle.
- Cameras or lidars can be installed in the front, rear, left, and right directions of the unmanned vehicle.
- calibration boards can be placed in the corresponding directions. The camera is used to collect the image of the calibration board, and the laser radar is used to collect the point cloud for the calibration board.
- the industrial camera may include a left industrial camera set on the left front and a right industrial camera set on the right front, and the two industrial cameras form a binocular camera.
- the lidar may include a front lidar arranged in the front, a rear lidar arranged in the rear, a left lidar arranged in the left, and a right lidar arranged in the right.
- the surround view camera may include a front surround view camera set in the front, a rear surround view camera set in the rear, a left surround view camera set in the left, and a right surround view camera set in the right.
- one of the at least one industrial camera may be selected as the target camera.
- the left industrial camera can be selected as the target camera, and the coordinate system of the left industrial camera can be selected as the reference coordinate system.
- the coordinate system of the right industrial camera can be directly calibrated to the reference coordinate system of the left industrial camera.
- Step 502 In the at least one lidar, determine a first lidar associated with the target camera, and calibrate the coordinate system of the first lidar to the reference coordinate system;
- the association between the camera and the lidar refers to the association between the shooting space of the two.
- the two need to photograph the same space to be related, in which case they can be calibrated directly; if the two do not share a common shooting space, they are not related and cannot be calibrated directly.
- for example, the lidar installed at the rear of the unmanned vehicle collects the point cloud behind it, while the industrial camera installed at the front collects images of the space ahead; there is no common shooting space between the two, so they are not related and cannot be calibrated directly.
- the front lidar, the left lidar, and the right lidar and the left industrial camera may have a common shooting space, so they are related.
- the coordinate system of the first lidar associated with the target camera can be directly calibrated to the reference coordinate system.
- Step 503 Among cameras other than the target camera, determine the first camera corresponding to the first lidar, and calibrate the coordinate system of the first camera to the coordinate system of the corresponding first lidar;
- Correspondence mentioned here refers to the correspondence of orientation. Specifically, it may be to determine the first surround view camera corresponding to the first lidar.
- the surround-view camera and lidar are used correspondingly.
- the front lidar corresponds to the front surround-view camera
- the rear lidar corresponds to the rear surround-view camera
- the left lidar corresponds to the left surround-view camera
- the right lidar corresponds to the right surround-view camera.
- the coordinate system of the front surround view camera can be directly calibrated to the coordinate system of the front lidar, and thereby indirectly calibrated to the reference coordinate system;
- the coordinate system of the left surround view camera can be directly calibrated to the coordinate system of the left lidar, and thereby indirectly calibrated to the reference coordinate system;
- the coordinate system of the right surround view camera can be directly calibrated to the coordinate system of the right lidar, and thereby indirectly calibrated to the reference coordinate system.
- Step 504 Determine a second lidar that is not associated with the target camera, and determine a second camera corresponding to the second lidar;
- the second camera corresponding to the rear lidar may specifically be: a corresponding second surround view camera.
- the rear lidar and the left industrial camera do not have a common shooting space, so they are not related. It is possible to determine the rear surround camera corresponding to the rear lidar.
- Step 505 Calibrate the coordinate system of the second camera to the coordinate system of the associated first lidar, and calibrate the coordinate system of the second lidar to the coordinate system of the second camera.
- the coordinate system of the first laser radar that has been calibrated can be used to achieve indirect calibration.
- the coordinate system of the second lidar can be indirectly calibrated to the reference coordinate system.
- the first lidar associated with the rear surround view camera includes the left lidar and the right lidar.
- for example, the coordinate system of the rear surround view camera can be calibrated to the coordinate system of the left lidar, and then the coordinate system of the rear lidar can be calibrated to the coordinate system of the rear surround view camera.
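- A sketch of this chained (indirect) calibration with homogeneous 4x4 transforms; the transform names are hypothetical and the identity placeholders stand in for the pairwise calibration results:

```python
import numpy as np

# Placeholder 4x4 homogeneous transforms from the pairwise calibrations above.
T_leftlidar_to_ref = np.eye(4)      # left lidar -> reference (left industrial camera)
T_rearcam_to_leftlidar = np.eye(4)  # rear surround view camera -> left lidar
T_rearlidar_to_rearcam = np.eye(4)  # rear lidar -> rear surround view camera

# Chain the calibrated transforms to reach the reference coordinate system.
T_rearcam_to_ref = T_leftlidar_to_ref @ T_rearcam_to_leftlidar
T_rearlidar_to_ref = T_rearcam_to_ref @ T_rearlidar_to_rearcam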
- the calibration process between the industrial camera and the lidar, and the calibration process between the surround view camera and the lidar can all be implemented using the aforementioned embodiment of the calibration method between the lidar and the camera.
- the calibration method of the embodiment of this application is suitable for unmanned vehicles with multiple sensors.
- the industrial camera, surround view camera and lidar in the unmanned vehicle can be directly or indirectly calibrated to a reference coordinate system, and the calibration accuracy is high, which can be realized Automatic calibration.
- the reference coordinate system can also be used to calibrate other sensors.
- for example, the reference coordinate system can be calibrated to an inertial measurement unit (IMU).
- FIG. 7 there is shown a structural block diagram of an embodiment of a calibration device between a lidar and a camera according to the present application, which may specifically include the following modules:
- the image acquisition module 701 is configured to acquire the image collected by the camera on the calibration board and the point cloud collected by the lidar on the calibration board;
- the first rotation vector determining module 702 is configured to determine a plurality of first rotation vectors within a preset first rotation vector interval
- the first degree of coincidence calculation module 703 is configured to calculate the degree of coincidence between the corresponding image and the point cloud according to each first rotation vector;
- the rotation vector calibration module 704 is configured to determine the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
- the first coincidence degree calculation module 703 may include:
- the parameter acquisition sub-module is used to acquire the translation vector between the coordinate system of the lidar and the coordinate system of the camera, and to acquire the internal parameters of the camera;
- the first conversion matrix determining sub-module is configured to use the multiple first rotation vectors and the translation vectors to determine multiple first conversion matrices;
- the first degree of coincidence calculation submodule is configured to use the first conversion matrix and the internal parameters of the camera to calculate the degree of coincidence between the corresponding image and the point cloud for one first conversion matrix.
- the first coincidence degree calculation submodule may include:
- a camera coordinate system acquisition unit for acquiring the camera coordinate system of the camera
- An image information determining unit configured to determine the contour of the calibration plate in the image, and determine the three-dimensional coordinates of the calibration plate point cloud located in the calibration plate in the point cloud;
- a projection unit configured to use the first conversion matrix, the internal parameters of the camera, and the three-dimensional coordinates of the calibration plate point cloud to project the calibration plate point cloud onto the image to obtain a first projection point cloud;
- a target projection point determination unit configured to determine the number of first target projection points in the first projection point cloud that fall within the contour of the calibration plate in the image
- the first coincidence degree determining unit is configured to use the number of the first target projection points to determine the degree of coincidence between the image and the point cloud.
- the first coincidence degree determining unit may include:
- the projection ratio calculation subunit is used to calculate, for a calibration board, the first target projection point ratio of the number of first target projection points to the number of points in the calibration board point cloud of that calibration board;
- the first degree of coincidence determining subunit is configured to adopt the first target projection point ratio to determine the degree of coincidence between the image and the point cloud.
- the first rotation vector determining module 702 may include:
- the first rotation vector determining sub-module is configured to determine a plurality of first rotation vectors in a preset first rotation vector interval according to a preset radian interval.
- the preset first rotation vector interval includes a preset first roll angle interval, a preset first pitch angle interval, and a preset first yaw angle interval; the first rotation vector determining submodule may include:
- a roll angle determining unit configured to determine a plurality of roll angles according to a preset radian interval within the preset first roll angle interval;
- a pitch angle determining unit configured to determine multiple pitch angles according to the preset radian interval within the preset first pitch angle interval;
- a yaw angle determining unit configured to determine a plurality of yaw angles according to the preset radian interval within the preset first yaw angle interval;
- the first rotation vector determining unit is configured to select a roll angle from the plurality of roll angles, select a pitch angle from the plurality of pitch angles, and select a yaw angle from the plurality of yaw angles, and combine them to obtain multiple first rotation vectors.
- the device may further include:
- a camera parameter acquisition module for acquiring the horizontal field of view and vertical field of view of the camera, and the resolution of the image
- the first radian determination module is configured to divide the horizontal field of view by the width of the resolution to obtain the first radian
- the second radian determination module is configured to divide the vertical field of view by the height of the resolution to obtain the second radian
- the radian interval determination module is configured to use the smaller of the first radian and the second radian as the preset radian interval.
- the device may further include:
- the reference rotation vector determination module is used to determine the reference rotation vector
- the first rotation vector interval determination module is configured to use the reference rotation vector and the preset radian interval to determine the preset first rotation vector interval.
- the reference rotation vector determination module may include:
- the second rotation vector interval acquisition sub-module is configured to acquire a preset second rotation vector interval, where the preset second rotation vector interval includes a preset second roll angle interval, a preset second pitch angle interval, and a preset second yaw angle interval;
- An angle adjustment sub-module configured to adjust the pitch angle in the preset second pitch angle interval, and adjust the yaw angle in the preset second yaw angle interval;
- the target angle determination sub-module is used to determine the target pitch angle and target yaw angle when the center of the calibration board of the image coincides with the center of the first projection point cloud;
- the second rotation vector determining submodule is configured to adjust the roll angle within the preset second roll angle interval under the target pitch angle and the target yaw angle to obtain a plurality of second rotation vectors;
- the reference rotation vector determining sub-module is used to determine the reference rotation vector from the plurality of second rotation vectors.
- the reference rotation vector determining sub-module may include:
- a second conversion matrix determining unit configured to use the multiple second rotation vectors and the translation vectors between the coordinate system of the lidar and the coordinate system of the camera to determine multiple second conversion matrices
- a second degree of coincidence calculation unit configured to calculate the degree of coincidence between the corresponding image and the point cloud by using the second conversion matrix and the internal parameters of the camera for one second conversion matrix
- the reference rotation vector determining unit is used to determine the second rotation vector corresponding to the maximum degree of coincidence as the reference rotation vector.
- the image information determining unit may include:
- the first calibration board point cloud determination subunit is configured to adopt a point cloud clustering algorithm to extract the calibration board point cloud located in the calibration board from the point cloud;
- the first point cloud coordinate determination subunit is used to determine the three-dimensional coordinates of the point cloud of the calibration board.
- the image information determining unit may include:
- the reflectivity acquisition subunit is used to acquire the reflectivity of each point in the point cloud
- the second calibration plate point cloud determining subunit is used to determine the point cloud of the calibration plate located in the calibration plate by using points with reflectance greater than the preset reflectivity threshold;
- the second point cloud coordinate determination subunit is used to determine the three-dimensional coordinates of the point cloud of the calibration plate.
- the image information determining unit may include:
- the size information acquisition subunit is used to acquire the size information of the calibration board
- the third calibration board point cloud determination subunit is configured to use the size information of the calibration board to determine the point cloud of the calibration board located in the calibration board in the point cloud;
- the third point cloud coordinate determination subunit is used to determine the three-dimensional coordinates of the point cloud of the calibration plate.
- the calibration device is applied to an unmanned vehicle;
- the unmanned vehicle includes at least one camera and at least one lidar, each of which has its own coordinate system, and the device may specifically include the following modules:
- the reference coordinate system determining module 801 is configured to select a target camera from the at least one camera, and use the coordinate system of the target camera as the reference coordinate system;
- the first calibration module 802 is configured to determine the first laser radar associated with the target camera in the at least one laser radar, and calibrate the coordinate system of the first laser radar to the reference coordinate system;
- the second calibration module 803 is configured to determine the first camera corresponding to the first lidar among cameras other than the target camera, and calibrate the coordinate system of the first camera to the coordinate system of the corresponding first lidar;
- the non-association determining module 804 is configured to determine a second lidar that is not associated with the target camera, and determine a second camera corresponding to the second lidar;
- the third calibration module 805 is used to calibrate the coordinate system of the second camera to the coordinate system of the associated first lidar, and to calibrate the coordinate system of the second lidar to the coordinate system of the second camera.
- the target camera selection submodule is used to select one of the at least one industrial camera as the target camera.
- the second calibration module 803 may include:
- the first surround view camera determining sub-module is configured to determine a first surround view camera corresponding to the first lidar among the at least one surround view camera.
- the non-association determining module 804 may include:
- the second surround view camera determining sub-module is used to determine the second surround view camera corresponding to the second lidar.
- as the device embodiments are basically similar to the method embodiments, the description is relatively simple; for related parts, please refer to the description of the method embodiments.
- An embodiment of the present application also provides a device, including:
- one or more processors; and
- one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to execute the methods described in the embodiments of the present application.
- the embodiments of the present application also provide one or more machine-readable media on which instructions are stored which, when executed by one or more processors, cause the processors to execute the methods described in the embodiments of the present application.
- the embodiments of the present application may be provided as methods, devices, or computer program products. Therefore, the embodiments of the present application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
- These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
A calibration method and device between a lidar and a camera. The method includes: acquiring an image captured by the camera of a calibration board and a point cloud collected by the lidar of the calibration board (101); determining a plurality of first rotation vectors within a preset first rotation vector interval (102); calculating, according to each first rotation vector, the degree of coincidence between the corresponding image and point cloud (103); and determining the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera (104). This calibration method can meet the calibration accuracy requirements of unmanned vehicles even when calibrating a low- or medium-accuracy lidar to a camera.
Description
This application claims priority to Chinese Patent Application No. 201910425720.5, filed on May 21, 2019 and entitled "Calibration Method and Device between a Lidar and a Camera", the entire contents of which are incorporated herein by reference.
This application relates to the field of computer technology, and in particular to a calibration method between a lidar and a camera, a calibration method, a calibration device between a lidar and a camera, and a calibration device.
With the development of autonomous driving technology, almost all current unmanned vehicles adopt a multi-sensor fusion scheme and are equipped with multiple sensors such as lidars and industrial cameras. In an autonomous driving scheme, the coordinate systems of the multiple sensors need to be transformed into one unified coordinate system to achieve spatial fusion of the multi-sensor data.
At present, multi-sensor calibration is mainly divided into manual calibration and automatic calibration. Manual calibration is performed by professionals with calibration experience on offline-collected sensor data using specific calibration methods, and is not suitable for batch calibration;
automatic calibration selects specific calibration scenarios and calibration fixtures and realizes automated multi-sensor calibration through specific algorithms.
Most of the automated calibration schemes currently on the market are suitable for unmanned vehicles equipped with high-end lidars, and these automated calibration schemes are not suitable for unmanned vehicles equipped with low- and mid-range lidars.
Since the ranging accuracy and number of laser lines of low- and mid-range lidars are far lower than those of high-end lidars, the environmental point cloud information obtained is not as rich and accurate as that of high-end lidars; using a calibration algorithm similar to that for high-end lidars cannot meet the calibration accuracy requirements of unmanned vehicles equipped with low- and mid-range lidars.
Summary of the Invention
In view of the above problems, the embodiments of the present application are proposed to provide a calibration method between a lidar and a camera, a calibration method, a calibration device between a lidar and a camera, and a calibration device that overcome the above problems or at least partially solve them.
To solve the above problems, an embodiment of the present application discloses a calibration method between a lidar and a camera, including:
acquiring an image captured by the camera of a calibration board and a point cloud collected by the lidar of the calibration board;
determining a plurality of first rotation vectors within a preset first rotation vector interval;
calculating, according to each first rotation vector, the degree of coincidence between the corresponding image and point cloud;
determining the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
Optionally, calculating, according to each first rotation vector, the degree of coincidence between the corresponding image and point cloud includes:
acquiring the translation vector between the coordinate system of the lidar and the coordinate system of the camera, and acquiring the intrinsic parameters of the camera;
determining a plurality of first conversion matrices using the plurality of first rotation vectors and the translation vector respectively;
for one first conversion matrix, calculating the degree of coincidence between the corresponding image and point cloud using the first conversion matrix and the intrinsic parameters of the camera.
Optionally, calculating the degree of coincidence between the corresponding image and point cloud using the first conversion matrix and the intrinsic parameters of the camera includes:
acquiring the camera coordinate system of the camera;
determining the contour of the calibration board in the image, and determining the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud;
projecting the calibration board point cloud onto the image using the first conversion matrix, the intrinsic parameters of the camera and the three-dimensional coordinates of the calibration board point cloud, to obtain a first projection point cloud;
determining the number of first target projection points in the first projection point cloud that fall within the contour of the calibration board in the image;
determining the degree of coincidence between the image and the point cloud using the number of first target projection points.
Optionally, determining the degree of coincidence between the image and the point cloud using the number of first target projection points includes:
calculating a first target projection point ratio of the number of first target projection points corresponding to one calibration board to the number of calibration board point cloud points of that calibration board;
determining the degree of coincidence between the image and the point cloud using the first target projection point ratio.
Optionally, determining a plurality of first rotation vectors within a preset first rotation vector interval includes:
determining a plurality of first rotation vectors within the preset first rotation vector interval at a preset radian interval.
Optionally, the preset first rotation vector interval includes a preset first roll angle interval, a preset first pitch angle interval and a preset first yaw angle interval; determining a plurality of first rotation vectors within the preset first rotation vector interval at the preset radian interval includes:
determining a plurality of roll angles within the preset first roll angle interval at the preset radian interval;
determining a plurality of pitch angles within the preset first pitch angle interval at the preset radian interval;
determining a plurality of yaw angles within the preset first yaw angle interval at the preset radian interval;
combining one roll angle selected from the plurality of roll angles, one pitch angle selected from the plurality of pitch angles and one yaw angle selected from the plurality of yaw angles, to obtain a plurality of first rotation vectors.
Optionally, the method further includes:
acquiring the horizontal field of view and the vertical field of view of the camera, and the resolution of the image;
dividing the horizontal field of view by the width of the resolution to obtain a first radian;
dividing the vertical field of view by the height of the resolution to obtain a second radian;
taking the smaller of the first radian and the second radian as the preset radian interval.
Optionally, the method further includes:
determining a reference rotation vector;
determining the preset first rotation vector interval using the reference rotation vector and the preset radian interval.
Optionally, determining a reference rotation vector includes:
acquiring a preset second rotation vector interval, which includes a preset second roll angle interval, a preset second pitch angle interval and a preset second yaw angle interval;
adjusting the pitch angle within the preset second pitch angle interval, and adjusting the yaw angle within the preset second yaw angle interval;
determining the target pitch angle and target yaw angle at which the center of the calibration board in the image coincides with the center of the first projection point cloud;
adjusting the roll angle within the preset second roll angle interval under the target pitch angle and target yaw angle, to obtain a plurality of second rotation vectors;
determining a reference rotation vector from the plurality of second rotation vectors.
Optionally, determining a reference rotation vector from the plurality of second rotation vectors includes:
determining a plurality of second conversion matrices using the plurality of second rotation vectors and the translation vector between the coordinate system of the lidar and the coordinate system of the camera respectively;
for one second conversion matrix, calculating the degree of coincidence between the corresponding image and point cloud using the second conversion matrix and the intrinsic parameters of the camera;
determining the second rotation vector corresponding to the maximum degree of coincidence as the reference rotation vector.
Optionally, determining the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud includes:
extracting the calibration board point cloud located within the calibration board from the point cloud using a point cloud clustering algorithm;
determining the three-dimensional coordinates of the calibration board point cloud.
Optionally, determining the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud includes:
acquiring the reflectivity of each point in the point cloud;
determining the calibration board point cloud located within the calibration board using points whose reflectivity is greater than a preset reflectivity threshold;
determining the three-dimensional coordinates of the calibration board point cloud.
Optionally, determining the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud includes:
acquiring the size information of the calibration board;
determining the calibration board point cloud located within the calibration board in the point cloud using the size information of the calibration board;
determining the three-dimensional coordinates of the calibration board point cloud.
An embodiment of the present application also discloses a calibration method applied to an unmanned vehicle, where the unmanned vehicle includes at least one camera and at least one lidar, each of which has its own coordinate system, and the method includes:
selecting a target camera from the at least one camera, and taking the coordinate system of the target camera as the reference coordinate system;
determining, among the at least one lidar, a first lidar associated with the target camera, and calibrating the coordinate system of the first lidar to the reference coordinate system;
determining, among the cameras other than the target camera, a first camera corresponding to the first lidar, and calibrating the coordinate system of the first camera to the coordinate system of the corresponding first lidar;
determining a second lidar not associated with the target camera, and determining a second camera corresponding to the second lidar;
calibrating the coordinate system of the second camera to the coordinate system of the associated first lidar, and calibrating the coordinate system of the second lidar to the coordinate system of the second camera.
Optionally, the at least one camera includes: at least one industrial camera and at least one surround-view camera; selecting a target camera from the at least one camera includes:
selecting one of the at least one industrial camera as the target camera.
Optionally, determining, among the cameras other than the target camera, the first camera corresponding to the first lidar includes:
determining, among the at least one surround-view camera, a first surround-view camera corresponding to the first lidar.
Optionally, determining the second camera corresponding to the second lidar includes:
determining a second surround-view camera corresponding to the second lidar.
An embodiment of the present application also discloses a calibration device between a lidar and a camera, including:
an image acquisition module, configured to acquire an image captured by the camera of a calibration board and a point cloud collected by the lidar of the calibration board;
a first rotation vector determination module, configured to determine a plurality of first rotation vectors within a preset first rotation vector interval;
a first coincidence degree calculation module, configured to calculate, according to each first rotation vector, the degree of coincidence between the corresponding image and point cloud;
a rotation vector calibration module, configured to determine the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
Optionally, the first coincidence degree calculation module includes:
a parameter acquisition sub-module, configured to acquire the translation vector between the coordinate system of the lidar and the coordinate system of the camera, and to acquire the intrinsic parameters of the camera;
a first conversion matrix determination sub-module, configured to determine a plurality of first conversion matrices using the plurality of first rotation vectors and the translation vector respectively;
a first coincidence degree calculation sub-module, configured to calculate, for one first conversion matrix, the degree of coincidence between the corresponding image and point cloud using the first conversion matrix and the intrinsic parameters of the camera.
Optionally, the first coincidence degree calculation sub-module includes:
a camera coordinate system acquisition unit, configured to acquire the camera coordinate system of the camera;
an image information determination unit, configured to determine the contour of the calibration board in the image, and to determine the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud;
a projection unit, configured to project the calibration board point cloud onto the image using the first conversion matrix, the intrinsic parameters of the camera and the three-dimensional coordinates of the calibration board point cloud, to obtain a first projection point cloud;
a target projection point determination unit, configured to determine the number of first target projection points in the first projection point cloud that fall within the contour of the calibration board in the image;
a first coincidence degree determination unit, configured to determine the degree of coincidence between the image and the point cloud using the number of first target projection points.
Optionally, the first coincidence degree determination unit includes:
a projection ratio calculation subunit, configured to calculate a first target projection point ratio of the number of first target projection points corresponding to one calibration board to the number of calibration board point cloud points of that calibration board;
a first coincidence degree determination subunit, configured to determine the degree of coincidence between the image and the point cloud using the first target projection point ratio.
Optionally, the first rotation vector determination module includes:
a first rotation vector determination sub-module, configured to determine a plurality of first rotation vectors within the preset first rotation vector interval at a preset radian interval.
Optionally, the preset first rotation vector interval includes a preset first roll angle interval, a preset first pitch angle interval and a preset first yaw angle interval; the first rotation vector determination sub-module includes:
a roll angle determination unit, configured to determine a plurality of roll angles within the preset first roll angle interval at the preset radian interval;
a pitch angle determination unit, configured to determine a plurality of pitch angles within the preset first pitch angle interval at the preset radian interval;
a yaw angle determination unit, configured to determine a plurality of yaw angles within the preset first yaw angle interval at the preset radian interval;
a first rotation vector determination unit, configured to combine one roll angle selected from the plurality of roll angles, one pitch angle selected from the plurality of pitch angles and one yaw angle selected from the plurality of yaw angles, to obtain a plurality of first rotation vectors.
Optionally, the device further includes:
a camera parameter acquisition module, configured to acquire the horizontal field of view and the vertical field of view of the camera, and the resolution of the image;
a first radian determination module, configured to divide the horizontal field of view by the width of the resolution to obtain a first radian;
a second radian determination module, configured to divide the vertical field of view by the height of the resolution to obtain a second radian;
a radian interval determination module, configured to take the smaller of the first radian and the second radian as the preset radian interval.
Optionally, the device further includes:
a reference rotation vector determination module, configured to determine a reference rotation vector;
a first rotation vector interval determination module, configured to determine the preset first rotation vector interval using the reference rotation vector and the preset radian interval.
Optionally, the reference rotation vector determination module includes:
a second rotation vector interval acquisition sub-module, configured to acquire a preset second rotation vector interval, which includes a preset second roll angle interval, a preset second pitch angle interval and a preset second yaw angle interval;
an angle adjustment sub-module, configured to adjust the pitch angle within the preset second pitch angle interval, and to adjust the yaw angle within the preset second yaw angle interval;
a target angle determination sub-module, configured to determine the target pitch angle and target yaw angle at which the center of the calibration board in the image coincides with the center of the first projection point cloud;
a second rotation vector determination sub-module, configured to adjust the roll angle within the preset second roll angle interval under the target pitch angle and target yaw angle, to obtain a plurality of second rotation vectors;
a reference rotation vector determination sub-module, configured to determine a reference rotation vector from the plurality of second rotation vectors.
Optionally, the reference rotation vector determination sub-module includes:
a second conversion matrix determination unit, configured to determine a plurality of second conversion matrices using the plurality of second rotation vectors and the translation vector between the coordinate system of the lidar and the coordinate system of the camera respectively;
a second coincidence degree calculation unit, configured to calculate, for one second conversion matrix, the degree of coincidence between the corresponding image and point cloud using the second conversion matrix and the intrinsic parameters of the camera;
a reference rotation vector determination unit, configured to determine the second rotation vector corresponding to the maximum degree of coincidence as the reference rotation vector.
Optionally, the image information determination unit includes:
a first calibration board point cloud determination subunit, configured to extract the calibration board point cloud located within the calibration board from the point cloud using a point cloud clustering algorithm;
a first point cloud coordinate determination subunit, configured to determine the three-dimensional coordinates of the calibration board point cloud.
Optionally, the image information determination unit includes:
a reflectivity acquisition subunit, configured to acquire the reflectivity of each point in the point cloud;
a second calibration board point cloud determination subunit, configured to determine the calibration board point cloud located within the calibration board using points whose reflectivity is greater than a preset reflectivity threshold;
a second point cloud coordinate determination subunit, configured to determine the three-dimensional coordinates of the calibration board point cloud.
Optionally, the image information determination unit includes:
a size information acquisition subunit, configured to acquire the size information of the calibration board;
a third calibration board point cloud determination subunit, configured to determine the calibration board point cloud located within the calibration board in the point cloud using the size information of the calibration board;
a third point cloud coordinate determination subunit, configured to determine the three-dimensional coordinates of the calibration board point cloud.
An embodiment of the present application also discloses a calibration device applied to an unmanned vehicle, where the unmanned vehicle includes at least one camera and at least one lidar, each of which has its own coordinate system, and the device includes:
a reference coordinate system determination module, configured to select a target camera from the at least one camera, and to take the coordinate system of the target camera as the reference coordinate system;
a first calibration module, configured to determine, among the at least one lidar, a first lidar associated with the target camera, and to calibrate the coordinate system of the first lidar to the reference coordinate system;
a second calibration module, configured to determine, among the cameras other than the target camera, a first camera corresponding to the first lidar, and to calibrate the coordinate system of the first camera to the coordinate system of the corresponding first lidar;
a non-association determination module, configured to determine a second lidar not associated with the target camera, and to determine a second camera corresponding to the second lidar;
a third calibration module, configured to calibrate the coordinate system of the second camera to the coordinate system of the associated first lidar, and to calibrate the coordinate system of the second lidar to the coordinate system of the second camera.
Optionally, the at least one camera includes: at least one industrial camera and at least one surround-view camera; the reference coordinate system determination module includes:
a target camera selection sub-module, configured to select one of the at least one industrial camera as the target camera.
Optionally, the second calibration module includes:
a first surround-view camera determination sub-module, configured to determine, among the at least one surround-view camera, a first surround-view camera corresponding to the first lidar.
Optionally, the non-association determination module includes:
a second surround-view camera determination sub-module, configured to determine a second surround-view camera corresponding to the second lidar.
An embodiment of the present application also discloses a device, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to execute one or more of the methods described above.
An embodiment of the present application also discloses one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause the processors to execute one or more of the methods described above.
The embodiments of the present application include the following advantages:
In the embodiments of the present application, with the translation vector between the camera and the lidar fixed, the first rotation vector that maximizes the degree of coincidence between the image captured by the camera and the point cloud collected by the lidar can be determined within the preset first rotation vector interval, and the first rotation vector corresponding to the maximum degree of coincidence is taken as the final rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera. With the calibration method of the embodiments of the present application, the calibration accuracy requirements of unmanned vehicles can be met even when calibrating a low- or medium-accuracy lidar to a camera.
FIG. 1 is a flowchart of the steps of Embodiment 1 of a calibration method between a lidar and a camera of the present application;
FIG. 2 is a flowchart of the steps of Embodiment 2 of a calibration method between a lidar and a camera of the present application;
FIG. 3 is a schematic diagram of projecting the calibration board point cloud onto the image in an embodiment of the present application;
FIG. 4 is another schematic diagram of projecting the calibration board point cloud onto the image in an embodiment of the present application;
FIG. 5 is a flowchart of the steps of an embodiment of a calibration method of the present application;
FIG. 6 is a schematic diagram of an unmanned vehicle calibration scenario in an embodiment of the present application;
FIG. 7 is a structural block diagram of an embodiment of a calibration device between a lidar and a camera of the present application;
FIG. 8 is a structural block diagram of an embodiment of a calibration device of the present application.
To make the above objects, features and advantages of the present application more apparent and understandable, the present application is described in further detail below with reference to the drawings and specific embodiments.
Current logistics unmanned vehicles use low- and mid-range lidars; using a calibration algorithm similar to that for high-end lidars cannot meet the calibration accuracy requirements of logistics unmanned vehicles.
Calibrating the laser to a camera (industrial camera or surround-view camera) means determining the transformation matrix RT from the laser coordinate system to the camera coordinate system. The transformation matrix RT is uniquely determined by the translation vector T(x, y, z) and the rotation vector R(r, p, y). If all six variables are optimized simultaneously, the search solution space is huge and the algorithm very easily converges to a local optimum.
Considering that once the camera and the lidar are mounted their relative position is fixed, and that a very accurate value of the translation vector T can be obtained by measurement, the embodiments of the present application fix the translation vector and traverse the rotation vector solution space to find the optimal rotation vector and thus the optimal transformation matrix. The specific implementation is described in detail below.
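As an illustration of this parameterization, the following Python sketch assembles a 4x4 transform RT from R = (r, p, y) and T = (x, y, z); the Euler axis order and all names are illustrative assumptions, not prescribed by this application.

```python
import numpy as np

def euler_to_matrix(roll, pitch, yaw):
    """Compose a rotation matrix from roll/pitch/yaw in radians.
    The Z-Y-X axis order here is an assumption; match your sensor convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def build_rt(rotation, translation):
    """4x4 homogeneous lidar-to-camera transform from R=(r,p,y) and T=(x,y,z)."""
    rt = np.eye(4)
    rt[:3, :3] = euler_to_matrix(*rotation)
    rt[:3, 3] = translation
    return rt
```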
Referring to FIG. 1, a flowchart of the steps of Embodiment 1 of a calibration method between a lidar and a camera of the present application is shown, which may specifically include the following steps:
Step 101: acquiring an image captured by the camera of a calibration board and a point cloud collected by the lidar of the calibration board;
The calibration method of the embodiments of the present application is proposed for low- and mid-range lidars; besides being applicable to low- and mid-range lidars, it is also applicable to high-end lidars.
In an unmanned vehicle, there may be multiple cameras and multiple lidars, and each camera and each lidar can be calibrated using the method of the embodiments of the present application. The cameras may include industrial cameras, surround-view cameras and other cameras used in unmanned vehicles.
The camera and the lidar both capture the calibration board: the camera captures an image containing the calibration board, while the lidar collects a point cloud containing the laser points emitted toward and reflected by the calibration board. The transmitter of the lidar emits a laser beam; after the beam hits an object, it is diffusely reflected and returns to the laser receiver, yielding a laser point.
In the embodiments of the present application, the number and color of calibration boards are not limited; calibration boards of any color and any number can be used. For example, three red PVC foam boards of size 80 cm × 80 cm may be used as calibration boards.
Step 102: determining a plurality of first rotation vectors within a preset first rotation vector interval;
A rotation vector is (r, p, y), where r is the roll angle, p is the pitch angle, and y is the yaw angle.
Once the relative position of the camera and the lidar is determined, the translation vector T between them can be accurately measured, so the optimal transformation matrix can be obtained simply by searching for the optimal rotation vector within the preset first rotation vector interval.
Step 103: calculating, according to each first rotation vector, the degree of coincidence between the corresponding image and point cloud;
The image captured by the camera contains objects whose positions in the image are determined; the point cloud is determined by the lidar from the laser reflected by the objects, and the coordinate positions of the point cloud reflect the positions of the objects. The degree of coincidence is a parameter describing how well the coordinate positions of the point cloud coincide with the object positions in the image.
Under different rotation vectors, the relative position of the image and the point cloud changes, and so does the degree of coincidence.
Step 104: determining the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
The greater the degree of coincidence, the more accurate the calibration result. Therefore, the first rotation vector that maximizes the degree of coincidence can be taken as the final rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
In the embodiments of the present application, with the translation vector between the camera and the lidar fixed, the first rotation vector that maximizes the degree of coincidence between the image captured by the camera and the point cloud collected by the lidar can be determined within the preset first rotation vector interval, and the first rotation vector corresponding to the maximum degree of coincidence is taken as the final rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera. With the calibration method of the embodiments of the present application, the calibration accuracy requirements of unmanned vehicles can be met even when calibrating a low- or medium-accuracy lidar to a camera.
When calibrating the cameras and lidars of an unmanned vehicle, a reference coordinate system can first be determined, for example by selecting the coordinate system of one camera as the reference coordinate system. With the method of the embodiments of the present application, the coordinate systems of all lidars and cameras other than the reference coordinate system can be calibrated to the reference coordinate system, realizing calibration of the unmanned vehicle.
Moreover, the calibration method of the embodiments of the present application enables automated calibration. In actual operation scenarios of unmanned vehicles, after factory-level whole-vehicle calibration is completed, sensors inevitably have to be replaced once the vehicle is in operation, which means the vehicle must be recalibrated for the replaced sensors; until calibration of the newly replaced sensors is completed, the vehicle cannot be put into operation. The calibration method of the present application therefore achieves the goal of immediate sensor replacement, immediate calibration and immediate operation.
Referring to FIG. 2, a flowchart of the steps of Embodiment 2 of a calibration method between a lidar and a camera of the present application is shown, which may specifically include the following steps:
Step 201: acquiring an image captured by the camera of a calibration board and a point cloud collected by the lidar of the calibration board;
Step 202: determining a plurality of first rotation vectors within a preset first rotation vector interval;
In the embodiments of the present application, step 202 may include: determining a plurality of first rotation vectors within the preset first rotation vector interval at a preset radian interval.
In implementation, the entire preset first rotation vector interval can be traversed with the preset radian interval as the step size to determine the plurality of first rotation vectors.
Specifically, the preset first rotation vector interval includes a preset first roll angle interval, a preset first pitch angle interval and a preset first yaw angle interval. A plurality of roll angles can be determined within the preset first roll angle interval at the preset radian interval; a plurality of pitch angles within the preset first pitch angle interval at the preset radian interval; and a plurality of yaw angles within the preset first yaw angle interval at the preset radian interval. One roll angle, one pitch angle and one yaw angle are then selected from the respective sets and combined to obtain a plurality of first rotation vectors.
For example, let the preset first rotation vector interval be [(r1, p1, y1), (r2, p2, y2)], where the preset first roll angle interval is [r1, r2], from which n1 roll angles are determined at the preset radian interval; the preset first pitch angle interval is [p1, p2], from which n2 pitch angles are determined at the preset radian interval; and the preset first yaw angle interval is [y1, y2], from which n3 yaw angles are determined at the preset radian interval. Selecting one roll angle from the n1 roll angles, one pitch angle from the n2 pitch angles and one yaw angle from the n3 yaw angles and combining them yields a total of n1*n2*n3 first rotation vectors.
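A minimal sketch of this enumeration (the NumPy/itertools choice and all names are illustrative assumptions):

```python
import itertools
import numpy as np

def candidate_rotations(r_interval, p_interval, y_interval, step):
    """Enumerate the n1*n2*n3 first rotation vectors by sampling each
    angle interval at the preset radian interval `step`."""
    rolls = np.arange(r_interval[0], r_interval[1] + step, step)
    pitches = np.arange(p_interval[0], p_interval[1] + step, step)
    yaws = np.arange(y_interval[0], y_interval[1] + step, step)
    return list(itertools.product(rolls, pitches, yaws))

# e.g. [-0.1, 0.1] rad at 0.001 rad steps in each angle gives about
# 201**3 candidate first rotation vectors.
```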
In the embodiments of the present application, the preset radian interval can be determined by the following steps:
acquiring the horizontal field of view α and the vertical field of view β of the camera, and the resolution w*h of the image; dividing the horizontal field of view α by the width w of the resolution to obtain a first radian; dividing the vertical field of view β by the height h of the resolution to obtain a second radian; and taking the smaller of the first radian and the second radian as the preset radian interval.
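A minimal sketch of this computation, with example values that are illustrative and not taken from this application:

```python
import math

def preset_radian_interval(h_fov, v_fov, width, height):
    """Angular resolution of one pixel: FOV divided by resolution,
    taking the smaller of the horizontal and vertical results."""
    first = h_fov / width    # radians per pixel horizontally
    second = v_fov / height  # radians per pixel vertically
    return min(first, second)

# e.g. a 90-degree horizontal and 60-degree vertical FOV over 1920x1080:
# preset_radian_interval(math.radians(90), math.radians(60), 1920, 1080)
# is roughly 0.0008 rad
```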
In the embodiments of the present application, the preset first rotation vector interval can be determined by the following steps: determining a reference rotation vector; and determining the preset first rotation vector interval using the reference rotation vector and the preset radian interval.
Specifically, suppose the reference rotation vector is (r0, p0, y0), where r0 is the reference roll angle, p0 the reference pitch angle and y0 the reference yaw angle.
The lower bound of the roll angle interval, r0-M*s, can be obtained by subtracting the product of a preset first reference value M and the preset radian interval s from the reference roll angle r0; the upper bound, r0+M*s, by adding that product to r0; the preset first roll angle interval [r0-M*s, r0+M*s] is then determined from the lower and upper bounds.
Similarly, the preset first pitch angle interval [p0-M*s, p0+M*s] can be determined from the lower bound p0-M*s and the upper bound p0+M*s obtained by subtracting the product of M and s from, and adding it to, the reference pitch angle p0.
Likewise, the preset first yaw angle interval [y0-M*s, y0+M*s] can be determined from the lower bound y0-M*s and the upper bound y0+M*s obtained by subtracting the product of M and s from, and adding it to, the reference yaw angle y0.
The first reference value M is a positive integer; to guarantee a globally optimal solution, M usually needs to be set fairly large, for example M = 200.
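As a small illustration of this interval construction (the helper name is an assumption):

```python
def search_interval(base, m=200, step=0.001):
    """[base - M*s, base + M*s]; M = 200 and s = 0.001 rad give +/-0.2 rad."""
    return (base - m * step, base + m * step)

# e.g. search_interval(0.02) returns (-0.18, 0.22)
```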
In fact, considering the camera resolution, the field of view and the angular resolution of the lidar, the preset radian interval usually has to be set very small to achieve high calibration accuracy, for example 0.001 rad, while the reasonable variation interval of (r, p, y) is often very large relative to the preset radian interval; [-0.1, 0.1] rad, for example, is a fairly normal variation interval. Traversing the whole solution space therefore takes (0.2/0.001)*(0.2/0.001)*(0.2/0.001) = 8,000,000 iterations. Assuming each iteration takes only 1 ms (the actual value is far larger, about 3-4 ms), calibrating one set of parameters takes 8,000,000/1000/3600 ≈ 2.2 hours, and that is only the time for a single set of parameters; in real scenarios multiple sets may need to be calibrated. Such a long running time is clearly unacceptable, so shrinking the preset first rotation vector interval and reducing the running time of the program is particularly critical.
Therefore, in the embodiments of the present application, pitch and yaw are first adjusted in a directed manner so that the center of the first projection point cloud, i.e. the calibration board point cloud projected onto the image, coincides with the center of the calibration board in the image. This usually converges after only 50-100 iterations, yielding reference values p0 and y0.
Then, with p0 and y0 fixed, roll is adjusted within its original interval, and the roll value in that interval that makes the most first projection points fall within the calibration board image region is recorded as r0; this step takes 200 iterations.
By this method, the present scheme can find a reference solution (r0, p0, y0). Centered on this reference solution, the embodiments of the present application can find the optimal solution within a very small interval [-0.015, 0.015], and experimental tests show that this solution is also the globally optimal solution.
In practice, roll can only be adjusted after p0 and y0 have been determined; it is not possible to first determine r0 and p0 and then adjust yaw, or to first determine r0 and y0 and then adjust pitch.
The optimized scheme requires 100 + 200 + (0.03/0.001)*(0.03/0.001)*(0.03/0.001) = 27,300 iterations, i.e. 27,300/1000 ≈ 27 s. Acceleration with multi-core multi-threading via OpenMP then cuts the time to a quarter of that, so calibrating one set of parameters takes only about 6-8 seconds; the method of the embodiments of the present application can therefore accomplish immediate calibration.
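The arithmetic can be checked in a few lines, where the 1 ms per-evaluation cost is the assumption stated above:

```python
# Cost check for brute-force vs. staged search, assuming 1 ms per evaluation:
brute_force = (0.2 / 0.001) ** 3          # 8,000,000 evaluations
staged = 100 + 200 + (0.03 / 0.001) ** 3  # 27,300 evaluations
print(f"brute force: {brute_force * 1e-3 / 3600:.1f} h")  # about 2.2 h
print(f"staged:      {staged * 1e-3:.1f} s")              # about 27 s
```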
In the embodiments of the present application, the step of determining the reference rotation vector may include:
acquiring a preset second rotation vector interval, which includes a preset second roll angle interval, a preset second pitch angle interval and a preset second yaw angle interval;
adjusting the pitch angle within the preset second pitch angle interval, and adjusting the yaw angle within the preset second yaw angle interval;
determining the target pitch angle and target yaw angle at which the center of the calibration board in the image coincides with the center of the first projection point cloud;
adjusting the roll angle within the preset second roll angle interval under the target pitch angle and target yaw angle, to obtain a plurality of second rotation vectors;
determining a reference rotation vector from the plurality of second rotation vectors.
The step of determining a reference rotation vector from the plurality of second rotation vectors may include:
determining a plurality of second conversion matrices using the plurality of second rotation vectors and the translation vector between the coordinate system of the lidar and the coordinate system of the camera respectively; for one second conversion matrix, calculating the degree of coincidence between the corresponding image and point cloud using the second conversion matrix and the intrinsic parameters of the camera; and determining the second rotation vector corresponding to the maximum degree of coincidence as the reference rotation vector.
The step of calculating the degree of coincidence between the corresponding image and point cloud using the second conversion matrix and the intrinsic parameters of the camera may include:
projecting the calibration board point cloud into the camera coordinate system using the second conversion matrix, the intrinsic parameters of the camera and the three-dimensional coordinates of the calibration board point cloud, to obtain a second projection point cloud; determining the number of second target projection points in the second projection point cloud that fall within the contour of the calibration board in the image; and determining the degree of coincidence between the image and the point cloud using the number of second target projection points.
In one example, the number of second target projection points can be taken as the degree of coincidence between the image and the point cloud; the more second target projection points, the higher the degree of coincidence.
In another example, the degree of coincidence can be determined using the ratio of second target projection points to calibration board point cloud points. Specifically, a second target projection point ratio of the number of second target projection points corresponding to one calibration board to the number of calibration board point cloud points of that calibration board can be calculated, and the degree of coincidence between the image and the point cloud determined using that ratio.
Step 203: acquiring the translation vector between the coordinate system of the lidar and the coordinate system of the camera, and acquiring the intrinsic parameters of the camera;
The intrinsic parameters describe the characteristics of the camera. Since the camera coordinate system uses millimeter units while the image plane uses pixels as the unit, the role of the intrinsic parameters is to perform the linear transformation between these two coordinate systems. The intrinsic parameters of the camera can be obtained with a camera calibration tool.
Step 204: determining a plurality of first conversion matrices using the plurality of first rotation vectors and the translation vector respectively;
In the embodiments of the present application, the translation vector between the camera and the lidar is fixed; each first conversion matrix consists of one first rotation vector and the fixed translation vector.
Step 205: for one first conversion matrix, calculating the degree of coincidence between the corresponding image and point cloud using the first conversion matrix and the intrinsic parameters of the camera;
Under different conversion matrices, the relative position of the image and the point cloud changes, and so does the degree of coincidence.
In the embodiments of the present application, step 205 may include the following sub-steps:
Sub-step S11: acquiring the camera coordinate system of the camera;
Sub-step S12: determining the contour of the calibration board in the image, and determining the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud;
The point cloud data collected by the lidar is three-dimensional, expressed in a Cartesian coordinate system (X, Y, Z).
In one example, a point cloud clustering algorithm can be used to determine the three-dimensional coordinates of the calibration board point cloud. Specifically, the calibration board point cloud located within the calibration board can be extracted from the point cloud using a point cloud clustering algorithm, and its three-dimensional coordinates determined, as in the sketch below.
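The application does not fix a particular clustering algorithm; the following sketch uses DBSCAN and a board-size heuristic purely as illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN  # one possible clustering choice (assumption)

def extract_board_cluster(points_xyz, board_side=0.8):
    """Cluster the raw scan and return the cluster assumed to be the board.
    DBSCAN parameters and the selection heuristic are illustrative only."""
    labels = DBSCAN(eps=0.1, min_samples=10).fit_predict(points_xyz)
    clusters = [points_xyz[labels == k] for k in set(labels) if k != -1]

    def extent_error(cluster):
        # compare the two largest extents of the cluster to the board side
        span = np.sort(cluster.max(axis=0) - cluster.min(axis=0))
        return abs(span[-1] - board_side) + abs(span[-2] - board_side)

    return min(clusters, key=extent_error)
```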
In another example, the reflectivity of the calibration board to the laser can be used as prior information to determine the three-dimensional coordinates of the calibration board point cloud. Since objects of different materials reflect the laser to different degrees, a calibration board of highly reflective material can be chosen. In the collected laser point cloud data, by setting a suitable reflectivity threshold, laser points whose reflectivity is greater than the threshold can be determined as points where the laser hit the calibration board.
Specifically, the reflectivity of each point in the point cloud can be acquired; the calibration board point cloud located within the calibration board can be determined using points whose reflectivity is greater than the preset reflectivity threshold; and the three-dimensional coordinates of the calibration board point cloud can be determined.
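A minimal sketch of this filter (the threshold value is an illustrative assumption):

```python
import numpy as np

def board_points_by_reflectivity(points_xyz, reflectivity, threshold=0.8):
    """Keep only returns whose reflectivity exceeds the preset threshold;
    the 0.8 default is an illustrative value, not taken from this application."""
    mask = reflectivity > threshold
    return points_xyz[mask]
```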
In yet another example, the size information of the calibration board can be used as prior information to determine the three-dimensional coordinates of the calibration board point cloud. Specifically, the size information of the calibration board can be acquired; the calibration board point cloud located within the calibration board in the point cloud can be determined using the size information of the calibration board; and the three-dimensional coordinates of the calibration board point cloud can be determined.
Sub-step S13: projecting the calibration board point cloud onto the image using the first conversion matrix, the intrinsic parameters of the camera and the three-dimensional coordinates of the calibration board point cloud, to obtain a first projection point cloud;
In practice, with the conversion matrix and the camera intrinsics known, a dedicated software interface can be called to perform the projection; for example, the projection function projectPoints of the OpenCV software can be used to project the three-dimensional coordinates into the two-dimensional image.
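A minimal sketch of such a call; parameter names are illustrative, and note that OpenCV's rvec is an axis-angle (Rodrigues) vector, so Euler angles would first be composed into a rotation matrix and converted with cv2.Rodrigues:

```python
import cv2
import numpy as np

def project_board_points(board_xyz, rvec, tvec, camera_matrix, dist_coeffs):
    """Project Nx3 board points into the image plane with cv2.projectPoints."""
    pts, _ = cv2.projectPoints(
        board_xyz.astype(np.float64), rvec, tvec, camera_matrix, dist_coeffs)
    return pts.reshape(-1, 2)  # Nx2 pixel coordinates
```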
Referring to FIG. 3, a schematic diagram of projecting the calibration board point cloud onto the image in an embodiment of the present application is shown. As shown in FIG. 3, the degree of coincidence between the projection point cloud of the calibration board point cloud in the image and the calibration board in the image is low. Under different conversion matrices, the position of the projection point cloud in the image changes.
Sub-step S14: determining the number of first target projection points in the first projection point cloud that fall within the contour of the calibration board in the image;
Sub-step S15: determining the degree of coincidence between the image and the point cloud using the number of first target projection points.
In one example, the number of first target projection points can be taken as the degree of coincidence between the image and the point cloud; the more first target projection points, the higher the degree of coincidence.
For example, suppose two calibration boards are used, and the numbers of laser points hitting the two boards are 120 and 100 respectively. Under a certain first conversion matrix, the numbers of first target projection points falling within the contours of the two boards in the image are 90 and 80 respectively; if the total number of first target projection points across the boards is taken as the degree of coincidence, the degree of coincidence is 170.
In another example, the degree of coincidence can be determined using the ratio of first target projection points to calibration board point cloud points. Specifically, sub-step S15 may include: calculating a first target projection point ratio of the number of first target projection points corresponding to one calibration board to the number of calibration board point cloud points of that board; and determining the degree of coincidence between the image and the point cloud using the first target projection point ratio.
For example, in the above example the first target projection point ratios of the two boards are 90/120 = 0.75 and 80/100 = 0.8; if the sum of the ratios across the boards is taken as the degree of coincidence, the degree of coincidence is 1.55.
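A minimal sketch of this scoring, using cv2.pointPolygonTest as one possible inside-contour test (an illustrative choice, not prescribed by this application):

```python
import cv2
import numpy as np

def coincidence(projected_pts, board_contour, as_ratio=True):
    """Count projected points inside the board contour; optionally divide by
    the number of board point cloud points (all of them were projected)."""
    inside = sum(
        cv2.pointPolygonTest(board_contour, (float(x), float(y)), False) >= 0
        for x, y in projected_pts)
    return inside / len(projected_pts) if as_ratio else inside

# With the numbers from the text: 90/120 + 80/100 = 0.75 + 0.8 = 1.55
# summed over the two boards.
```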
Step 206: determining the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
Referring to FIG. 4, another schematic diagram of projecting the calibration board point cloud onto the image in an embodiment of the present application is shown. In FIG. 4, at the highest degree of coincidence, the projection point cloud of the calibration board point cloud corresponds exactly to the calibration board in the image, and the whole image also corresponds exactly to the point cloud.
In the embodiments of the present application, with the translation vector between the camera and the lidar fixed, the first rotation vector that maximizes the degree of coincidence between the image captured by the camera and the point cloud collected by the lidar can be determined within the preset first rotation vector interval, and the first rotation vector corresponding to the maximum degree of coincidence is taken as the final rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera. With the calibration method of the embodiments of the present application, the calibration accuracy requirements of unmanned vehicles can be met, and automated calibration can be realized, even when calibrating a low- or medium-accuracy lidar to a camera.
Referring to FIG. 5, a flowchart of the steps of an embodiment of a calibration method of the present application is shown. The method is applied to an unmanned vehicle including at least one industrial camera, at least one surround-view camera and at least one lidar, where the at least one camera and the at least one lidar each have their own coordinate system. The method may specifically include the following steps:
Step 501: selecting a target camera from the at least one camera, and taking the coordinate system of the target camera as the reference coordinate system;
The unmanned vehicle may be provided with multiple cameras, including at least one industrial camera and at least one surround-view camera.
Industrial cameras have high image stability, high transmission capability and high anti-interference capability, and are generally mounted at the front of the unmanned vehicle to capture images of the space ahead.
Surround-view cameras have a large field of view; mounting multiple surround-view cameras on the unmanned vehicle can cover the 360-degree area around it, keeping the blind spots during travel as small as possible.
The calibration process, and its complexity, differ depending on which camera's coordinate system is chosen as the reference coordinate system. In practice, one of the industrial and surround-view cameras can be selected as the target camera according to the relative positions of the industrial cameras, surround-view cameras and lidars in the unmanned vehicle.
Referring to FIG. 6, a schematic diagram of an unmanned vehicle calibration scenario in an embodiment of the present application is shown. Cameras or lidars can be mounted in the four directions (front, rear, left and right) of the unmanned vehicle; for each camera and lidar to be calibrated, a calibration board can be placed in the corresponding direction. The camera captures an image of the calibration board, and the lidar collects a point cloud of the calibration board.
In one example of the embodiments of the present application, the industrial cameras may include a left industrial camera mounted at the front left and a right industrial camera mounted at the front right, the two industrial cameras forming a binocular camera.
The lidars may include a front lidar mounted at the front, a rear lidar at the rear, a left lidar on the left and a right lidar on the right.
The surround-view cameras may include a front surround-view camera mounted at the front, a rear surround-view camera at the rear, a left surround-view camera on the left and a right surround-view camera on the right.
For simplicity, when selecting the target camera, one of the at least one industrial camera can be selected as the target camera.
In the above example, the left industrial camera can be selected as the target camera, and the coordinate system of the left industrial camera chosen as the reference coordinate system. The coordinate system of the right industrial camera can be calibrated directly to the reference coordinate system of the left industrial camera.
Step 502: determining, among the at least one lidar, a first lidar associated with the target camera, and calibrating the coordinate system of the first lidar to the reference coordinate system;
The association between a camera and a lidar refers to an association between their capture spaces. Only if the two capture a common space are they associated and can be calibrated to each other directly; if they have no common capture space, they are not associated and cannot be calibrated to each other directly. For example, a lidar mounted at the rear of the unmanned vehicle collects the point cloud behind, while an industrial camera mounted at the front captures images ahead; they have no common capture space and therefore cannot be calibrated to each other directly.
In the above example, the front lidar, the left lidar and the right lidar can have a common capture space with the left industrial camera and are therefore associated with it. The coordinate system of a first lidar associated with the target camera can be calibrated directly to the reference coordinate system.
Step 503: determining, among the cameras other than the target camera, a first camera corresponding to the first lidar, and calibrating the coordinate system of the first camera to the coordinate system of the corresponding first lidar;
Correspondence here refers to correspondence in orientation. Specifically, a first surround-view camera corresponding to the first lidar can be determined.
In the above example, the surround-view cameras and the lidars are used in corresponding pairs: the front lidar corresponds to the front surround-view camera, the rear lidar to the rear surround-view camera, the left lidar to the left surround-view camera, and the right lidar to the right surround-view camera.
The coordinate system of the front surround-view camera can be calibrated directly to the coordinate system of the front lidar, and thus indirectly to the reference coordinate system; the coordinate system of the left surround-view camera can be calibrated directly to that of the left lidar, and thus indirectly to the reference coordinate system; and the coordinate system of the right surround-view camera can be calibrated directly to that of the right lidar, and thus indirectly to the reference coordinate system.
Step 504: determining a second lidar not associated with the target camera, and determining a second camera corresponding to the second lidar;
The coordinate system of a second lidar not associated with the target camera cannot be calibrated directly to the reference coordinate system; it can be calibrated indirectly to the reference coordinate system through the second camera corresponding to the second lidar. The second camera corresponding to the rear lidar may specifically be the corresponding second surround-view camera.
For example, the rear lidar and the left industrial camera have no common capture space and are therefore not associated; the rear surround-view camera corresponding to the rear lidar can be determined.
Step 505: calibrating the coordinate system of the second camera to the coordinate system of the associated first lidar, and calibrating the coordinate system of the second lidar to the coordinate system of the second camera.
In the embodiments of the present application, the already-calibrated coordinate system of a first lidar can be used to realize indirect calibration.
A first lidar associated with the second camera is determined, the coordinate system of the second camera is calibrated to the coordinate system of that associated first lidar, and the coordinate system of the second lidar is then calibrated to the coordinate system of that second camera, thereby indirectly calibrating the coordinate system of the second lidar to the reference coordinate system.
For example, the first lidars associated with the rear surround-view camera are the left lidar and the right lidar; the coordinate system of the rear surround-view camera can be calibrated to the coordinate system of the left lidar, and the coordinate system of the rear lidar then calibrated to the coordinate system of the rear surround-view camera.
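Since each pairwise calibration yields a homogeneous transform, the indirect calibration amounts to composing matrices; the following sketch uses illustrative variable names for the example above:

```python
import numpy as np

def chain(*transforms):
    """Compose 4x4 homogeneous transforms; chain(A, B) applies B first."""
    out = np.eye(4)
    for t in transforms:
        out = out @ t
    return out

# Indirect calibration of the rear lidar in the example above:
# rear lidar -> rear surround-view camera -> left lidar -> reference frame.
# Each rt_* below would be a 4x4 transform from a pairwise calibration:
# rt_rear_lidar_to_base = chain(rt_left_lidar_to_base,
#                               rt_rear_cam_to_left_lidar,
#                               rt_rear_lidar_to_rear_cam)
```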
In the embodiments of the present application, both the calibration process between an industrial camera and a lidar and the calibration process between a surround-view camera and a lidar can be realized using the foregoing embodiments of the calibration method between a lidar and a camera.
The calibration method of the embodiments of the present application is suitable for unmanned vehicles with multiple sensors; the industrial cameras, surround-view cameras and lidars of the unmanned vehicle can be calibrated directly or indirectly to one reference coordinate system with high calibration accuracy, and automated calibration can be realized. Calibration of other sensors can also be realized through the reference coordinate system; for example, the reference coordinate system can be calibrated to an inertial measurement unit (IMU).
It should be noted that, for simplicity of description, the method embodiments are all expressed as combinations of a series of actions, but those skilled in the art should know that the embodiments of the present application are not limited by the described order of actions, because according to the embodiments of the present application certain steps can be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present application.
Referring to FIG. 7, a structural block diagram of an embodiment of a calibration device between a lidar and a camera of the present application is shown, which may specifically include the following modules:
an image acquisition module 701, configured to acquire an image captured by the camera of a calibration board and a point cloud collected by the lidar of the calibration board;
a first rotation vector determination module 702, configured to determine a plurality of first rotation vectors within a preset first rotation vector interval;
a first coincidence degree calculation module 703, configured to calculate, according to each first rotation vector, the degree of coincidence between the corresponding image and point cloud;
a rotation vector calibration module 704, configured to determine the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
In the embodiments of the present application, the first coincidence degree calculation module 703 may include:
a parameter acquisition sub-module, configured to acquire the translation vector between the coordinate system of the lidar and the coordinate system of the camera, and to acquire the intrinsic parameters of the camera;
a first conversion matrix determination sub-module, configured to determine a plurality of first conversion matrices using the plurality of first rotation vectors and the translation vector respectively;
a first coincidence degree calculation sub-module, configured to calculate, for one first conversion matrix, the degree of coincidence between the corresponding image and point cloud using the first conversion matrix and the intrinsic parameters of the camera.
In the embodiments of the present application, the first coincidence degree calculation sub-module may include:
a camera coordinate system acquisition unit, configured to acquire the camera coordinate system of the camera;
an image information determination unit, configured to determine the contour of the calibration board in the image, and to determine the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud;
a projection unit, configured to project the calibration board point cloud onto the image using the first conversion matrix, the intrinsic parameters of the camera and the three-dimensional coordinates of the calibration board point cloud, to obtain a first projection point cloud;
a target projection point determination unit, configured to determine the number of first target projection points in the first projection point cloud that fall within the contour of the calibration board in the image;
a first coincidence degree determination unit, configured to determine the degree of coincidence between the image and the point cloud using the number of first target projection points.
In the embodiments of the present application, the first coincidence degree determination unit may include:
a projection ratio calculation subunit, configured to calculate a first target projection point ratio of the number of first target projection points corresponding to one calibration board to the number of calibration board point cloud points of that calibration board;
a first coincidence degree determination subunit, configured to determine the degree of coincidence between the image and the point cloud using the first target projection point ratio.
In the embodiments of the present application, the first rotation vector determination module 702 may include:
a first rotation vector determination sub-module, configured to determine a plurality of first rotation vectors within the preset first rotation vector interval at a preset radian interval.
In the embodiments of the present application, the preset first rotation vector interval includes a preset first roll angle interval, a preset first pitch angle interval and a preset first yaw angle interval; the first rotation vector determination sub-module may include:
a roll angle determination unit, configured to determine a plurality of roll angles within the preset first roll angle interval at the preset radian interval;
a pitch angle determination unit, configured to determine a plurality of pitch angles within the preset first pitch angle interval at the preset radian interval;
a yaw angle determination unit, configured to determine a plurality of yaw angles within the preset first yaw angle interval at the preset radian interval;
a first rotation vector determination unit, configured to combine one roll angle selected from the plurality of roll angles, one pitch angle selected from the plurality of pitch angles and one yaw angle selected from the plurality of yaw angles, to obtain a plurality of first rotation vectors.
In the embodiments of the present application, the device may further include:
a camera parameter acquisition module, configured to acquire the horizontal field of view and the vertical field of view of the camera, and the resolution of the image;
a first radian determination module, configured to divide the horizontal field of view by the width of the resolution to obtain a first radian;
a second radian determination module, configured to divide the vertical field of view by the height of the resolution to obtain a second radian;
a radian interval determination module, configured to take the smaller of the first radian and the second radian as the preset radian interval.
In the embodiments of the present application, the device may further include:
a reference rotation vector determination module, configured to determine a reference rotation vector;
a first rotation vector interval determination module, configured to determine the preset first rotation vector interval using the reference rotation vector and the preset radian interval.
In the embodiments of the present application, the reference rotation vector determination module may include:
a second rotation vector interval acquisition sub-module, configured to acquire a preset second rotation vector interval, which includes a preset second roll angle interval, a preset second pitch angle interval and a preset second yaw angle interval;
an angle adjustment sub-module, configured to adjust the pitch angle within the preset second pitch angle interval, and to adjust the yaw angle within the preset second yaw angle interval;
a target angle determination sub-module, configured to determine the target pitch angle and target yaw angle at which the center of the calibration board in the image coincides with the center of the first projection point cloud;
a second rotation vector determination sub-module, configured to adjust the roll angle within the preset second roll angle interval under the target pitch angle and target yaw angle, to obtain a plurality of second rotation vectors;
a reference rotation vector determination sub-module, configured to determine a reference rotation vector from the plurality of second rotation vectors.
In the embodiments of the present application, the reference rotation vector determination sub-module may include:
a second conversion matrix determination unit, configured to determine a plurality of second conversion matrices using the plurality of second rotation vectors and the translation vector between the coordinate system of the lidar and the coordinate system of the camera respectively;
a second coincidence degree calculation unit, configured to calculate, for one second conversion matrix, the degree of coincidence between the corresponding image and point cloud using the second conversion matrix and the intrinsic parameters of the camera;
a reference rotation vector determination unit, configured to determine the second rotation vector corresponding to the maximum degree of coincidence as the reference rotation vector.
In the embodiments of the present application, the image information determination unit may include:
a first calibration board point cloud determination subunit, configured to extract the calibration board point cloud located within the calibration board from the point cloud using a point cloud clustering algorithm;
a first point cloud coordinate determination subunit, configured to determine the three-dimensional coordinates of the calibration board point cloud.
In the embodiments of the present application, the image information determination unit may include:
a reflectivity acquisition subunit, configured to acquire the reflectivity of each point in the point cloud;
a second calibration board point cloud determination subunit, configured to determine the calibration board point cloud located within the calibration board using points whose reflectivity is greater than a preset reflectivity threshold;
a second point cloud coordinate determination subunit, configured to determine the three-dimensional coordinates of the calibration board point cloud.
In the embodiments of the present application, the image information determination unit may include:
a size information acquisition subunit, configured to acquire the size information of the calibration board;
a third calibration board point cloud determination subunit, configured to determine the calibration board point cloud located within the calibration board in the point cloud using the size information of the calibration board;
a third point cloud coordinate determination subunit, configured to determine the three-dimensional coordinates of the calibration board point cloud.
Referring to FIG. 8, a structural block diagram of an embodiment of a calibration device of the present application is shown. The calibration device is applied to an unmanned vehicle including at least one camera and at least one lidar, each of which has its own coordinate system. The device may specifically include the following modules:
a reference coordinate system determination module 801, configured to select a target camera from the at least one camera, and to take the coordinate system of the target camera as the reference coordinate system;
a first calibration module 802, configured to determine, among the at least one lidar, a first lidar associated with the target camera, and to calibrate the coordinate system of the first lidar to the reference coordinate system;
a second calibration module 803, configured to determine, among the cameras other than the target camera, a first camera corresponding to the first lidar, and to calibrate the coordinate system of the first camera to the coordinate system of the corresponding first lidar;
a non-association determination module 804, configured to determine a second lidar not associated with the target camera, and to determine a second camera corresponding to the second lidar;
a third calibration module 805, configured to calibrate the coordinate system of the second camera to the coordinate system of the associated first lidar, and to calibrate the coordinate system of the second lidar to the coordinate system of the second camera.
In the embodiments of the present application, the at least one camera may include: at least one industrial camera and at least one surround-view camera; the reference coordinate system determination module 801 may include:
a target camera selection sub-module, configured to select one of the at least one industrial camera as the target camera.
In the embodiments of the present application, the second calibration module 803 may include:
a first surround-view camera determination sub-module, configured to determine, among the at least one surround-view camera, a first surround-view camera corresponding to the first lidar.
In the embodiments of the present application, the non-association determination module 804 may include:
a second surround-view camera determination sub-module, configured to determine a second surround-view camera corresponding to the second lidar.
As for the device embodiments, since they are basically similar to the method embodiments, the description is relatively simple; for related parts, refer to the description of the method embodiments.
An embodiment of the present application also provides a device, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to execute the methods described in the embodiments of the present application.
An embodiment of the present application also provides one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause the processors to execute the methods described in the embodiments of the present application.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments can refer to each other.
Those skilled in the art should understand that the embodiments of the present application may be provided as methods, devices or computer program products. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The embodiments of the present application are described with reference to flowcharts and/or block diagrams of the methods, terminal devices (systems) and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing terminal equipment to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal equipment produce a device for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal equipment to work in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing terminal equipment, causing a series of operation steps to be executed on the computer or other programmable terminal equipment to produce computer-implemented processing, such that the instructions executed on the computer or other programmable terminal equipment provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the embodiments of the present application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present application.
Finally, it should also be noted that in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, such that a process, method, article or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or terminal device that includes the element.
The calibration method between a lidar and a camera, the calibration method, the calibration device between a lidar and a camera and the calibration device provided by the present application have been introduced in detail above. Specific examples have been used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.
Claims (36)
- A calibration method between a lidar and a camera, characterized by comprising: acquiring an image captured by the camera of a calibration board and a point cloud collected by the lidar of the calibration board; determining a plurality of first rotation vectors within a preset first rotation vector interval; calculating, according to each first rotation vector, the degree of coincidence between the corresponding image and point cloud; and determining the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
- The method according to claim 1, characterized in that calculating, according to each first rotation vector, the degree of coincidence between the corresponding image and point cloud comprises: acquiring the translation vector between the coordinate system of the lidar and the coordinate system of the camera, and acquiring the intrinsic parameters of the camera; determining a plurality of first conversion matrices using the plurality of first rotation vectors and the translation vector respectively; and, for one first conversion matrix, calculating the degree of coincidence between the corresponding image and point cloud using the first conversion matrix and the intrinsic parameters of the camera.
- The method according to claim 2, characterized in that calculating the degree of coincidence between the corresponding image and point cloud using the first conversion matrix and the intrinsic parameters of the camera comprises: acquiring the camera coordinate system of the camera; determining the contour of the calibration board in the image, and determining the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud; projecting the calibration board point cloud onto the image using the first conversion matrix, the intrinsic parameters of the camera and the three-dimensional coordinates of the calibration board point cloud, to obtain a first projection point cloud; determining the number of first target projection points in the first projection point cloud that fall within the contour of the calibration board in the image; and determining the degree of coincidence between the image and the point cloud using the number of first target projection points.
- The method according to claim 3, characterized in that determining the degree of coincidence between the image and the point cloud using the number of first target projection points comprises: calculating a first target projection point ratio of the number of first target projection points corresponding to one calibration board to the number of calibration board point cloud points of that calibration board; and determining the degree of coincidence between the image and the point cloud using the first target projection point ratio.
- The method according to claim 1, characterized in that determining a plurality of first rotation vectors within a preset first rotation vector interval comprises: determining a plurality of first rotation vectors within the preset first rotation vector interval at a preset radian interval.
- The method according to claim 5, characterized in that the preset first rotation vector interval comprises a preset first roll angle interval, a preset first pitch angle interval and a preset first yaw angle interval; and determining a plurality of first rotation vectors within the preset first rotation vector interval at the preset radian interval comprises: determining a plurality of roll angles within the preset first roll angle interval at the preset radian interval; determining a plurality of pitch angles within the preset first pitch angle interval at the preset radian interval; determining a plurality of yaw angles within the preset first yaw angle interval at the preset radian interval; and combining one roll angle selected from the plurality of roll angles, one pitch angle selected from the plurality of pitch angles and one yaw angle selected from the plurality of yaw angles, to obtain a plurality of first rotation vectors.
- The method according to claim 5, characterized by further comprising: acquiring the horizontal field of view and the vertical field of view of the camera, and the resolution of the image; dividing the horizontal field of view by the width of the resolution to obtain a first radian; dividing the vertical field of view by the height of the resolution to obtain a second radian; and taking the smaller of the first radian and the second radian as the preset radian interval.
- The method according to claim 5, characterized by further comprising: determining a reference rotation vector; and determining the preset first rotation vector interval using the reference rotation vector and the preset radian interval.
- The method according to claim 8, characterized in that determining a reference rotation vector comprises: acquiring a preset second rotation vector interval, which comprises a preset second roll angle interval, a preset second pitch angle interval and a preset second yaw angle interval; adjusting the pitch angle within the preset second pitch angle interval, and adjusting the yaw angle within the preset second yaw angle interval; determining the target pitch angle and target yaw angle at which the center of the calibration board in the image coincides with the center of the first projection point cloud; adjusting the roll angle within the preset second roll angle interval under the target pitch angle and target yaw angle, to obtain a plurality of second rotation vectors; and determining a reference rotation vector from the plurality of second rotation vectors.
- The method according to claim 9, characterized in that determining a reference rotation vector from the plurality of second rotation vectors comprises: determining a plurality of second conversion matrices using the plurality of second rotation vectors and the translation vector between the coordinate system of the lidar and the coordinate system of the camera respectively; for one second conversion matrix, calculating the degree of coincidence between the corresponding image and point cloud using the second conversion matrix and the intrinsic parameters of the camera; and determining the second rotation vector corresponding to the maximum degree of coincidence as the reference rotation vector.
- The method according to claim 3, characterized in that determining the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud comprises: extracting the calibration board point cloud located within the calibration board from the point cloud using a point cloud clustering algorithm; and determining the three-dimensional coordinates of the calibration board point cloud.
- The method according to claim 3, characterized in that determining the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud comprises: acquiring the reflectivity of each point in the point cloud; determining the calibration board point cloud located within the calibration board using points whose reflectivity is greater than a preset reflectivity threshold; and determining the three-dimensional coordinates of the calibration board point cloud.
- The method according to claim 3, characterized in that determining the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud comprises: acquiring the size information of the calibration board; determining the calibration board point cloud located within the calibration board in the point cloud using the size information of the calibration board; and determining the three-dimensional coordinates of the calibration board point cloud.
- A calibration method, characterized by being applied to an unmanned vehicle, the unmanned vehicle comprising at least one camera and at least one lidar, each of which has its own coordinate system, the method comprising: selecting a target camera from the at least one camera, and taking the coordinate system of the target camera as the reference coordinate system; determining, among the at least one lidar, a first lidar associated with the target camera, and calibrating the coordinate system of the first lidar to the reference coordinate system; determining, among the cameras other than the target camera, a first camera corresponding to the first lidar, and calibrating the coordinate system of the first camera to the coordinate system of the corresponding first lidar; determining a second lidar not associated with the target camera, and determining a second camera corresponding to the second lidar; and calibrating the coordinate system of the second camera to the coordinate system of the associated first lidar, and calibrating the coordinate system of the second lidar to the coordinate system of the second camera.
- The method according to claim 14, characterized in that the at least one camera comprises: at least one industrial camera and at least one surround-view camera; and selecting a target camera from the at least one camera comprises: selecting one of the at least one industrial camera as the target camera.
- The method according to claim 15, characterized in that determining, among the cameras other than the target camera, the first camera corresponding to the first lidar comprises: determining, among the at least one surround-view camera, a first surround-view camera corresponding to the first lidar.
- The method according to claim 15, characterized in that determining the second camera corresponding to the second lidar comprises: determining a second surround-view camera corresponding to the second lidar.
- A calibration device between a lidar and a camera, characterized by comprising: an image acquisition module, configured to acquire an image captured by the camera of a calibration board and a point cloud collected by the lidar of the calibration board; a first rotation vector determination module, configured to determine a plurality of first rotation vectors within a preset first rotation vector interval; a first coincidence degree calculation module, configured to calculate, according to each first rotation vector, the degree of coincidence between the corresponding image and point cloud; and a rotation vector calibration module, configured to determine the first rotation vector corresponding to the maximum degree of coincidence as the rotation vector for calibrating the coordinate system of the lidar to the coordinate system of the camera.
- The device according to claim 18, characterized in that the first coincidence degree calculation module comprises: a parameter acquisition sub-module, configured to acquire the translation vector between the coordinate system of the lidar and the coordinate system of the camera, and to acquire the intrinsic parameters of the camera; a first conversion matrix determination sub-module, configured to determine a plurality of first conversion matrices using the plurality of first rotation vectors and the translation vector respectively; and a first coincidence degree calculation sub-module, configured to calculate, for one first conversion matrix, the degree of coincidence between the corresponding image and point cloud using the first conversion matrix and the intrinsic parameters of the camera.
- The device according to claim 19, characterized in that the first coincidence degree calculation sub-module comprises: a camera coordinate system acquisition unit, configured to acquire the camera coordinate system of the camera; an image information determination unit, configured to determine the contour of the calibration board in the image, and to determine the three-dimensional coordinates of the calibration board point cloud located within the calibration board in the point cloud; a projection unit, configured to project the calibration board point cloud onto the image using the first conversion matrix, the intrinsic parameters of the camera and the three-dimensional coordinates of the calibration board point cloud, to obtain a first projection point cloud; a target projection point determination unit, configured to determine the number of first target projection points in the first projection point cloud that fall within the contour of the calibration board in the image; and a first coincidence degree determination unit, configured to determine the degree of coincidence between the image and the point cloud using the number of first target projection points.
- The device according to claim 20, characterized in that the first coincidence degree determination unit comprises: a projection ratio calculation subunit, configured to calculate a first target projection point ratio of the number of first target projection points corresponding to one calibration board to the number of calibration board point cloud points of that calibration board; and a first coincidence degree determination subunit, configured to determine the degree of coincidence between the image and the point cloud using the first target projection point ratio.
- The device according to claim 18, characterized in that the first rotation vector determination module comprises: a first rotation vector determination sub-module, configured to determine a plurality of first rotation vectors within the preset first rotation vector interval at a preset radian interval.
- The device according to claim 22, characterized in that the preset first rotation vector interval comprises a preset first roll angle interval, a preset first pitch angle interval and a preset first yaw angle interval; and the first rotation vector determination sub-module comprises: a roll angle determination unit, configured to determine a plurality of roll angles within the preset first roll angle interval at the preset radian interval; a pitch angle determination unit, configured to determine a plurality of pitch angles within the preset first pitch angle interval at the preset radian interval; a yaw angle determination unit, configured to determine a plurality of yaw angles within the preset first yaw angle interval at the preset radian interval; and a first rotation vector determination unit, configured to combine one roll angle selected from the plurality of roll angles, one pitch angle selected from the plurality of pitch angles and one yaw angle selected from the plurality of yaw angles, to obtain a plurality of first rotation vectors.
- The device according to claim 22, characterized by further comprising: a camera parameter acquisition module, configured to acquire the horizontal field of view and the vertical field of view of the camera, and the resolution of the image; a first radian determination module, configured to divide the horizontal field of view by the width of the resolution to obtain a first radian; a second radian determination module, configured to divide the vertical field of view by the height of the resolution to obtain a second radian; and a radian interval determination module, configured to take the smaller of the first radian and the second radian as the preset radian interval.
- The device according to claim 22, characterized by further comprising: a reference rotation vector determination module, configured to determine a reference rotation vector; and a first rotation vector interval determination module, configured to determine the preset first rotation vector interval using the reference rotation vector and the preset radian interval.
- The device according to claim 25, characterized in that the reference rotation vector determination module comprises: a second rotation vector interval acquisition sub-module, configured to acquire a preset second rotation vector interval, which comprises a preset second roll angle interval, a preset second pitch angle interval and a preset second yaw angle interval; an angle adjustment sub-module, configured to adjust the pitch angle within the preset second pitch angle interval, and to adjust the yaw angle within the preset second yaw angle interval; a target angle determination sub-module, configured to determine the target pitch angle and target yaw angle at which the center of the calibration board in the image coincides with the center of the first projection point cloud; a second rotation vector determination sub-module, configured to adjust the roll angle within the preset second roll angle interval under the target pitch angle and target yaw angle, to obtain a plurality of second rotation vectors; and a reference rotation vector determination sub-module, configured to determine a reference rotation vector from the plurality of second rotation vectors.
- The device according to claim 26, characterized in that the reference rotation vector determination sub-module comprises: a second conversion matrix determination unit, configured to determine a plurality of second conversion matrices using the plurality of second rotation vectors and the translation vector between the coordinate system of the lidar and the coordinate system of the camera respectively; a second coincidence degree calculation unit, configured to calculate, for one second conversion matrix, the degree of coincidence between the corresponding image and point cloud using the second conversion matrix and the intrinsic parameters of the camera; and a reference rotation vector determination unit, configured to determine the second rotation vector corresponding to the maximum degree of coincidence as the reference rotation vector.
- The device according to claim 20, characterized in that the image information determination unit comprises: a first calibration board point cloud determination subunit, configured to extract the calibration board point cloud located within the calibration board from the point cloud using a point cloud clustering algorithm; and a first point cloud coordinate determination subunit, configured to determine the three-dimensional coordinates of the calibration board point cloud.
- The device according to claim 20, characterized in that the image information determination unit comprises: a reflectivity acquisition subunit, configured to acquire the reflectivity of each point in the point cloud; a second calibration board point cloud determination subunit, configured to determine the calibration board point cloud located within the calibration board using points whose reflectivity is greater than a preset reflectivity threshold; and a second point cloud coordinate determination subunit, configured to determine the three-dimensional coordinates of the calibration board point cloud.
- The device according to claim 20, characterized in that the image information determination unit comprises: a size information acquisition subunit, configured to acquire the size information of the calibration board; a third calibration board point cloud determination subunit, configured to determine the calibration board point cloud located within the calibration board in the point cloud using the size information of the calibration board; and a third point cloud coordinate determination subunit, configured to determine the three-dimensional coordinates of the calibration board point cloud.
- A calibration device, characterized by being applied to an unmanned vehicle, the unmanned vehicle comprising at least one camera and at least one lidar, each of which has its own coordinate system, the device comprising: a reference coordinate system determination module, configured to select a target camera from the at least one camera, and to take the coordinate system of the target camera as the reference coordinate system; a first calibration module, configured to determine, among the at least one lidar, a first lidar associated with the target camera, and to calibrate the coordinate system of the first lidar to the reference coordinate system; a second calibration module, configured to determine, among the cameras other than the target camera, a first camera corresponding to the first lidar, and to calibrate the coordinate system of the first camera to the coordinate system of the corresponding first lidar; a non-association determination module, configured to determine a second lidar not associated with the target camera, and to determine a second camera corresponding to the second lidar; and a third calibration module, configured to calibrate the coordinate system of the second camera to the coordinate system of the associated first lidar, and to calibrate the coordinate system of the second lidar to the coordinate system of the second camera.
- The device according to claim 31, characterized in that the at least one camera comprises: at least one industrial camera and at least one surround-view camera; and the reference coordinate system determination module comprises: a target camera selection sub-module, configured to select one of the at least one industrial camera as the target camera.
- The device according to claim 32, characterized in that the second calibration module comprises: a first surround-view camera determination sub-module, configured to determine, among the at least one surround-view camera, a first surround-view camera corresponding to the first lidar.
- The device according to claim 32, characterized in that the non-association determination module comprises: a second surround-view camera determination sub-module, configured to determine a second surround-view camera corresponding to the second lidar.
- A calibration device, characterized by comprising: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to execute one or more of the methods according to claims 1-13 or 14-17.
- One or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause the processors to execute one or more of the methods according to claims 1-13 or 14-17.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910425720.5 | 2019-05-21 | ||
CN201910425720.5A CN110221275B (zh) | 2019-05-21 | 2019-05-21 | Calibration method and device between a lidar and a camera
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020233443A1 true WO2020233443A1 (zh) | 2020-11-26 |
Family
ID=67821629
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/089722 WO2020233443A1 (zh) | 2019-05-21 | 2020-05-12 | Calibration method and device between a lidar and a camera
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110221275B (zh) |
WO (1) | WO2020233443A1 (zh) |
- 2019-05-21: CN application CN201910425720.5A filed; granted as CN110221275B (Active)
- 2020-05-12: PCT application PCT/CN2020/089722 filed; published as WO2020233443A1 (Application Filing)
Also Published As
Publication number | Publication date |
---|---|
CN110221275A (zh) | 2019-09-10 |
CN110221275B (zh) | 2023-06-23 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20808782; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 20808782; Country of ref document: EP; Kind code of ref document: A1