CN116524014A - Method and device for calibrating external parameters on line - Google Patents

Method and device for calibrating external parameters on line

Info

Publication number
CN116524014A
Authority
CN
China
Prior art keywords
data
laser radar
camera
type
radar data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310581790.6A
Other languages
Chinese (zh)
Inventor
焦江磊
郭林栋
刘羿
何贝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siqian Shanghai Technology Co ltd
Original Assignee
Siqian Shanghai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siqian Shanghai Technology Co ltd filed Critical Siqian Shanghai Technology Co ltd
Priority to CN202310581790.6A priority Critical patent/CN116524014A/en
Publication of CN116524014A publication Critical patent/CN116524014A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The application provides a method, an apparatus, an electronic device and a machine-readable storage medium for calibrating external parameters online, wherein the method comprises the following steps: acquiring camera data and first-class laser radar data; for each frame of the camera data, searching for the first-class laser radar data whose timestamp is closest to that of the camera frame, and synchronously merging it with the historical N frames of first-class laser radar data to obtain one frame of second-class laser radar data, the synchronous merging being realized by a point cloud registration algorithm and a point cloud fusion algorithm; extracting features of static objects in the second-class laser radar data and the camera data; and performing superposition matching on the outline of the static object in the second-class laser radar data and the outline of the static object in the camera data, and calculating the external parameters of the laser radar and the camera.

Description

Method and device for calibrating external parameters on line
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to a method and apparatus for calibrating external parameters online, an electronic device, and a machine-readable storage medium.
Background
A variety of sensors are commonly mounted on autonomous driving or robotic devices, the most common of which are lidar and cameras. When a lidar and a camera are used in combination, the spatio-temporal external parameters between them need to be calibrated. Popular calibration schemes are offline: a calibration plate is co-observed by both sensors in a specific scene, and the edge features of the calibration plate are extracted for PnP solving (Perspective-n-Point solving); or a hand-eye calibration method is used, in which the external parameters are solved from the poses of the respective motion trajectories of the lidar and the camera using the classical AX = XB hand-eye calibration equation. These calibration schemes are all performed offline and require operation in a specific deployment scenario. As the autonomous driving or robotic device moves over a long period of time, relative movement between the lidar and the camera may occur, so the external parameters change, the previously offline-calibrated external parameters can no longer be applied accurately, and calibration may need to be performed again. In addition, existing popular methods such as hand-eye calibration cannot handle unsynchronized timestamps between the lidar and the camera. Due to the instability of the timestamps, an online calibration method must be used to handle this situation.
Therefore, how to effectively solve the problem that offline calibration cannot remain valid over long-term use, and the problem that timestamps are not synchronized, is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application provides a method for calibrating external parameters on line, which comprises the following steps:
acquiring camera data and first-class laser radar data;
for each frame of the camera data, searching for the first-class laser radar data whose timestamp is closest to that of the camera frame, and synchronously merging it with the historical N frames of first-class laser radar data to obtain one frame of second-class laser radar data; the synchronous merging is realized by a point cloud registration algorithm and a point cloud fusion algorithm;
Extracting features of static objects in the second-class laser radar data and the camera data;
and carrying out superposition matching on the outline of the static object in the second-class laser radar data and the outline of the static object in the camera data, and calculating external parameters of the laser radar and the camera.
Optionally, the acquiring camera data and first-class laser radar data includes:
and acquiring camera data and first-class laser radar data, performing SLAM operation on the first-class laser radar data, and calculating the pose of the first-class laser radar data of each frame, wherein the pose comprises rotation information and displacement information.
Optionally, the performing coincidence matching on the outline of the static object in the second type of laser radar data and the outline of the static object in the camera data includes:
matching the outline of the static object in the second-class laser radar data with the outline of the static object in the camera data to find out the feature points corresponding to each other;
calculating external parameter information between the laser radar and the camera based on the corresponding characteristic points, wherein the external parameter information comprises rotation and displacement;
and transforming the outline of the static object in the second type of laser radar data according to the calculated external parameter information, and aligning the outline of the static object in the camera data.
Optionally, the feature extraction of the static object in the second class of laser radar data and the camera data includes:
and extracting the characteristics of the lane lines in the second-class laser radar data and the camera data.
The application provides a device of online demarcating external parameter, the device includes:
the data acquisition module is used for acquiring camera data and first-class laser radar data;
the data fusion module is used for searching the first type of laser radar data closest to the time stamp of each frame of the camera data, and synchronously combining the first type of laser radar data with the first type of laser radar data of the historical N frames to obtain one frame of second type of laser radar data;
the feature extraction module is used for extracting features of the second-class laser radar data and static objects in the camera data;
and the external parameter calculation module is used for carrying out superposition matching on the outline of the static object in the second-class laser radar data and the outline of the static object in the camera data, and calculating external parameters of the laser radar and the camera.
Optionally, the acquiring camera data and first-class laser radar data includes:
and acquiring camera data and first-class laser radar data, performing SLAM operation on the first-class laser radar data, and calculating the pose of the first-class laser radar data of each frame, wherein the pose comprises rotation information and displacement information.
Optionally, the performing coincidence matching on the outline of the static object in the second type of laser radar data and the outline of the static object in the camera data includes:
matching the outline of the static object in the second-class laser radar data with the outline of the static object in the camera data to find out the feature points corresponding to each other;
calculating external parameter information between the laser radar and the camera based on the corresponding characteristic points, wherein the external parameter information comprises rotation and displacement;
and transforming the outline of the static object in the second type of laser radar data according to the calculated external parameter information, and aligning the outline of the static object in the camera data.
Optionally, the feature extraction of the static object in the second class of laser radar data and the camera data includes:
and extracting the characteristics of the lane lines in the second-class laser radar data and the camera data.
The application also provides an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the steps of the above method by executing the executable instructions.
The present application also provides a machine-readable storage medium having stored thereon computer instructions which when executed by a processor perform the steps of the above-described method.
By finding the first-class laser radar data with the closest timestamp, time synchronization between the camera and the laser radar can be realized, ensuring that data acquired at the same moment correspond to each other; by extracting the features of the static objects in the second-class laser radar data and the camera data and performing superposition matching on the outlines of the static objects, the correspondence between the laser radar and the camera can be obtained, so that the external parameter relationship between the laser radar and the camera is calculated, and the influence of dynamic objects on the alignment of the feature outlines can be reduced.
The embodiment can bring more accurate time synchronization, more complete laser radar data and more accurate external parameter calculation, thereby improving the sensing and positioning performance of equipment such as automatic driving, robots and the like in complex environments.
Drawings
FIG. 1 is a flowchart of a method for calibrating external parameters online, according to an exemplary embodiment;
FIG. 2 is a block diagram of an apparatus for calibrating external parameters online, according to an exemplary embodiment;
FIG. 3 is a hardware structure diagram of an electronic device in which an apparatus for calibrating external parameters online is located, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
It should be noted that: in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, individual steps described in this specification, in other embodiments, may be described as being split into multiple steps; while various steps described in this specification may be combined into a single step in other embodiments.
In order to make the technical solution in the embodiments of the present specification better understood by those skilled in the art, the related art related to the embodiments of the present specification will be briefly described below.
Point cloud data: the laser radar emits laser pulses outward; the pulses are reflected from the ground or object surfaces to form multiple echoes that return to the laser radar sensor, and the processed reflection data are called point cloud data.
Pose: a transformation matrix corresponding to the relative relationship between the pose of the robot, the vehicle, etc. and the global coordinate system is used to describe the position and orientation of the robot, the vehicle, etc.
Iterative Closest Point algorithm (ICP): a registration method based on iteratively matching closest points, widely applied in fields such as laser radar point cloud matching and 3D model matching.
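As an illustrative sketch that is not part of the original disclosure, point-to-point ICP between two lidar scans can be run with the Open3D library; the voxel size and correspondence distance below are assumed example values.

```python
import numpy as np
import open3d as o3d

def icp_align(source_pcd, target_pcd, voxel=0.2, max_dist=1.0):
    # Downsample both clouds to speed up nearest-neighbour search.
    src = source_pcd.voxel_down_sample(voxel)
    tgt = target_pcd.voxel_down_sample(voxel)
    # Point-to-point ICP starting from the identity transform.
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=max_dist,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 rigid transform mapping source into target
```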
Perspective-n-Point (PnP): an algorithm in computer vision and image processing for estimating the position and pose of a camera. Based on feature points, it calculates the position and pose of a camera from the correspondences between a set of known 3D points and their corresponding 2D image points. The basic idea of PnP solving is to compute the external parameters of the camera, i.e. its rotation and translation matrices, from the correspondences between feature points detected in the image and their corresponding 3D points in the real world. This can be used for applications such as camera localization, object tracking and three-dimensional reconstruction. Various PnP solving methods exist, including the Efficient Perspective-n-Point algorithm (EPnP), Direct Linear Transformation (DLT), the Uncalibrated Perspective-n-Point algorithm (UPnP) and the like, among which EPnP is an efficient and widely applied PnP solving method. PnP solving has wide applications in computer vision and image processing, including augmented reality (AR), autonomous navigation, robotic vision, medical image processing and the like.
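For illustration only, the following hedged snippet solves a PnP problem with OpenCV's EPnP solver; the 3D-2D correspondences, intrinsic matrix and distortion coefficients are placeholder values, not data from this application.

```python
import cv2
import numpy as np

# Hypothetical 3D points (e.g., calibration-board corners, in metres) and
# their detected 2D projections in the image (pixels).
object_points = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]], dtype=np.float64)
image_points = np.array([[320, 240], [420, 242], [418, 340], [318, 338]], dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)  # assumed intrinsics
dist = np.zeros(5)  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist, flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the estimated camera pose
```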
External parameters (Extrinsic Parameters): refers to the position and pose information of a camera or laser sensor in the world coordinate system (or global coordinate system). It describes the position, rotation and scale relationship of a camera or laser sensor with respect to the world coordinate system. The external parameters typically include a translation vector (or displacement vector) and a rotation matrix (or quaternion) for aligning the coordinate system of the camera or laser sensor with the world coordinate system. In joint processing of laser and camera data, the calculation of the external parameters is very important to achieve accurate alignment and registration between the laser point cloud and the camera image. The external parameters may be obtained in various ways, for example, calibration of the camera and the laser sensor using a calibration board, position and attitude information provided by external sensors such as a global positioning system (Global Positioning System, abbreviated as GPS), an inertial measurement unit (Inertial Measurement Unit, abbreviated as IMU), etc., or estimated from the data using a specific algorithm. The accuracy and stability of the external parameters are critical to the accuracy and robustness of fusion and registration.
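As a small hedged illustration (values and helper names assumed, not taken from the disclosure), a rotation matrix R and translation vector t can be assembled into a 4x4 homogeneous transform and used to map lidar points into the camera coordinate system:

```python
import numpy as np

def make_extrinsic(R, t):
    # 4x4 homogeneous transform from the lidar frame to the camera frame.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def lidar_to_camera(points_lidar, T):
    # points_lidar: (N, 3) array of lidar points; returns them in the camera frame.
    homo = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    return (T @ homo.T).T[:, :3]
```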
Simultaneous localization and mapping (Simultaneous Localization and Mapping, SLAM for short): techniques for use in mobile robots and autonomous driving vehicles and the like applications. SLAM realizes autonomous positioning and mapping of a robot in an unknown environment by simultaneously estimating the pose (position and direction) of the robot in the unknown environment and constructing a map of the environment. SLAM technology typically uses sensor data, such as lidar, cameras, IMUs, etc., to sense the environment surrounding the robot and through processing and fusion of these sensor data, to achieve autonomous localization and mapping of the robot in the environment. The basic idea of SLAM is to infer the position of a robot in an unknown environment by using sensor data of the robot and to use these position estimates for the construction of a map. The map construction can be further used for improving the positioning estimation of the robot, and a closed-loop positioning and map updating cycle is formed. SLAM has wide application in the fields of autopilot, unmanned aerial vehicle, robotic navigation, augmented reality, and the like. It provides the ability for mobile robots and autonomously driven vehicles to autonomously locate and map in unknown environments, thereby enabling autonomous navigation and location in complex and unknown environments.
Calibration plate: the calibration plate is a special plate or object for camera and lidar calibration, typically with known geometry and feature points, for measuring and calculating the external parameters between the camera and the lidar. Calibration plates typically comprise some known geometry, such as a two-or three-dimensional checkerboard, a dot grid, a sphere or cube, etc. These geometries can be used to extract feature points and used for calculation of calibration algorithms. The working principle of the calibration plate is that the calibration plate is placed between the camera and the laser radar, and the characteristic points of the calibration plate are observed through the camera and the laser radar at the same time under a specific scene, so that the external reference relationship between the camera and the laser radar is obtained. For example, assuming that a camera and a lidar are mounted on an autopilot, external parameters between them need to be calibrated. In practice, a calibration plate, such as a checkerboard plate having a known lattice structure, may be selected. The checkerboard is then placed in a particular scene, such as the ground in front of a vehicle. Next, feature points of the camera and the lidar, such as checkerboard corner points and lattice points in the lidar point cloud, are extracted by simultaneously observing the checkerboard with the camera and the lidar. Through the feature points, a calibration algorithm can be used for calculating the external parameter relation between the camera and the laser radar, including information such as position, rotation, scale and the like. Thus, the external parameter calibration process of the camera and the laser radar is completed, and the external parameter relation can be used for data fusion and perception processing in subsequent application.
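The camera-side part of the checkerboard example above could look like the following hedged OpenCV sketch; the board size and image file name are hypothetical, and the lidar-side corner extraction is not shown.

```python
import cv2

img = cv2.imread("board.png")                # hypothetical captured image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (9, 6))  # assumed 9x6 inner-corner board
if found:
    # Refine corner locations to sub-pixel accuracy before using them as 2D feature points.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
```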
Application scenario overview
In autopilot or robotic devices, a variety of sensors, such as lidar and cameras, are often used for sensing and positioning. The spatiotemporal external parameters between these sensors need to be calibrated to ensure that the relative position and attitude information between them is accurate.
The currently popular calibration schemes are usually performed offline: for example, a calibration plate is co-observed in a specific scene and the edge features of the calibration plate are extracted for Perspective-n-Point solving (PnP solving), or a hand-eye calibration method is used to solve the external parameters from the poses of the respective motion trajectories of the lidar and the camera. These methods require operation in specific scenarios with specific arrangements of the devices.
However, with long-term movement of the autonomous driving or robotic device, relative movement between the lidar and the camera may occur, causing the external parameters to change and thereby affecting the accuracy of positioning and mapping. Furthermore, the timestamps between the sensors may not be synchronized, leading to inconsistent timestamps in data processing. Conventional offline calibration methods may not be able to handle these dynamic changes and timestamp desynchronization, so online calibration methods are required to address these issues.
The online calibration method can update and calibrate the external parameters between the sensors in real time so as to adapt to the change of the equipment in the motion process. Meanwhile, the online calibration method can synchronize the time stamps in real time, so that the problem of inconsistent time stamps in data processing is solved, and accurate alignment and fusion of sensor data are ensured.
In summary, the method of on-line calibration can effectively solve the problems of time-space external parameter change and time stamp asynchronism between the laser radar and the camera in the automatic driving equipment or the robot equipment, thereby improving the positioning and mapping accuracy and reliability.
Inventive concept
As described above, the off-line calibration method has the problems that the external parameters cannot be used for a long time and the time stamps are not synchronized.
In view of this, the present specification aims to propose a solution for performing outlier calculation by synchronous merging of camera and lidar data and static object feature extraction.
The core concept of the specification is as follows:
First, data may be acquired from the camera and the first type of lidar, and through timestamp matching, the matched lidar frame together with the historical N frames of lidar data may be synchronously merged into one frame of second type lidar data. Next, contour information of static objects may be obtained by performing feature extraction on the second type of lidar data and the camera data. Then, the static object contour in the second type of laser radar data and the static object contour in the camera data can be subjected to superposition matching, so that the external parameter relationship between the laser radar and the camera is calculated.
The following describes the present application through specific embodiments and in connection with specific application scenarios.
Referring to fig. 1, fig. 1 is a flowchart of a method for calibrating external parameters online according to an exemplary embodiment, wherein the method performs the following steps:
step 102: camera data and first type lidar data are acquired.
Step 104: for each frame of the camera data, searching for the first type of laser radar data whose timestamp is closest to that of the camera frame, and synchronously combining it with the historical N frames of first type of laser radar data to obtain one frame of second type of laser radar data; the synchronous merging is realized by a point cloud registration algorithm and a point cloud fusion algorithm.
Step 106: and extracting the characteristics of the static objects in the second-class laser radar data and the camera data.
Step 108: and carrying out superposition matching on the outline of the static object in the second-class laser radar data and the outline of the static object in the camera data, and calculating external parameters of the laser radar and the camera.
To meet positioning requirements, autonomous vehicles, robots, etc. may typically carry a lidar. Lidar is a sensor commonly used in automatic driving vehicles, robots, and the like, and acquires information such as distance, position, and shape of the surrounding environment by transmitting laser pulses outward and receiving echoes. Lidar typically scans the current scene through multiple scanning beams, and laser pulses reflected from the surface of the ground or object are returned to the sensor and processed to form laser data. The laser data can be used for applications such as environment sensing, obstacle detection, map construction, positioning and the like, so that the requirements of tasks such as automatic driving, robot navigation and the like are met. The characteristics of high precision, high speed, large-scale scanning and the like of the laser radar make the laser radar an indispensable sensor in the automatic driving and robot technology. By combining the camera and the laser radar data, more accurate and robust environment sensing and positioning can be realized, so that the safety and reliability of the system are improved.
Data are acquired from the camera and the first type of laser radar sensor, including image data and point cloud data. After the image data and the first type of laser radar data are acquired, for each frame of camera data, the frame of first type of laser radar data with the closest timestamp is found by comparing the timestamp of the camera frame with the timestamps of the first type of laser radar data, and this frame is registered and fused with the historical N frames of first type of laser radar data to realize synchronous merging. Next, feature information of static objects, such as the outline, shape and color of the objects, can be extracted from the second type of laser radar data and the camera data. Then, the outline of the static object in the second type of laser radar data and the outline of the static object in the camera data can be matched by superposition, and the external parameters, i.e. the relative position and attitude relationship between the laser radar and the camera, are calculated.
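A hedged sketch of the timestamp matching and N-frame merging step described above is given below; it assumes each first-type lidar frame is available as a dict with a 'stamp' in seconds, an (M, 3) point array 'points' and a 4x4 world pose 'pose' from the SLAM step, which is an assumed data layout rather than one prescribed by this application.

```python
import numpy as np

def nearest_lidar_frame(cam_stamp, lidar_frames):
    # Pick the first-type lidar frame whose timestamp is closest to the camera frame.
    return min(lidar_frames, key=lambda f: abs(f["stamp"] - cam_stamp))

def merge_with_history(anchor, history, n=5):
    # Transform the previous N frames into the anchor frame using the SLAM poses
    # and concatenate them into one denser "second-type" cloud.
    T_world_anchor = anchor["pose"]
    merged = [anchor["points"]]
    for f in history[-n:]:
        T = np.linalg.inv(T_world_anchor) @ f["pose"]      # anchor <- frame
        pts = np.hstack([f["points"], np.ones((len(f["points"]), 1))])
        merged.append((T @ pts.T).T[:, :3])
    return np.vstack(merged)
```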
In one embodiment shown, camera data and first type lidar data are acquired, SLAM operations may be performed on the first type lidar data, and the pose of each frame of the first type lidar data, including rotation information and displacement information, is calculated.
For example, each frame of first-class lidar data may be first preprocessed, including outlier removal, filtering of the data, duplicate laser spot removal, and so on. These preprocessing operations help to improve the accuracy and efficiency of SLAM operations; then, feature extraction can be performed through the first type of laser radar data, such as information of corner points, face points and the like of laser point clouds, so that the matching search range is reduced; then, estimating the initial pose of the first type of laser radar data of the current frame through the pose of the laser radar data of the previous frame, IMU (inertial measurement unit) data, odometer data and other information; the characteristics of the laser radar data of the current frame can be matched with the existing characteristics in the map, the best matching characteristics are found, and the pose of the current frame is corrected according to the matching result; finally, pose is optimized by methods such as minimizing reprojection errors or maximizing posterior probability, and information of the first type of laser radar data of the current frame and a map are fused and updated to obtain new map information.
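The preprocessing mentioned at the start of this SLAM pipeline (outlier removal and filtering/downsampling) could be sketched with Open3D as follows; the neighbour count, standard-deviation ratio and voxel size are assumed example values.

```python
import open3d as o3d

def preprocess_scan(pcd):
    # Remove sparse outliers, then voxel-downsample to keep registration fast.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd.voxel_down_sample(voxel_size=0.1)
```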
In the illustrated embodiment, the profile of the static object in the second type of laser radar data and the profile of the static object in the camera data may be matched to find feature points corresponding to each other; then, external parameter information between the laser radar and the camera, including rotation and displacement, can be calculated based on the corresponding feature points; finally, the outline of the static object in the second type of laser radar data can be transformed according to the calculated external parameter information, and the outline of the static object in the camera data can be aligned.
The corresponding feature points can be obtained by extracting the outline of the static object from the second type of laser radar data and the camera data. The method can be realized through some feature extraction algorithms, such as edge detection, angular point detection and the like, and the function of feature point matching is to find the corresponding relation between laser radar data and static objects in camera data, so as to provide a basis for subsequent pose estimation and data alignment; then, external parameter information between the laser radar and the camera can be calculated based on the corresponding feature points, wherein the external parameter information comprises rotation and displacement, and the external parameter estimation is used for determining the relative position relationship between the laser radar and the camera so as to align the data of the laser radar and the camera; finally, the static object outline in the second type of laser radar data can be transformed according to the calculated external parameter information, and aligned to the static object outline in the camera data, and the function of data alignment is to compare and fuse the two types of sensor data under the same coordinate system, so that the static object information in the environment can be understood more accurately.
For example, each point P in the contour point cloud of a static object may be projected onto the image plane of the camera:

P_c = (1/Z) · K · (R·P + t)

wherein R and t are the initial values of the lidar-camera external parameters, K is the camera intrinsic matrix, and Z is the depth of the point P after it is transformed into the camera coordinate system by R·P + t.

In the contour extracted from the image, the three points p0, p1, p2 closest to P_c are found, and a straight line is fitted through these three points:

l: a·x + b·y + c = 0

The distance from P_c = (u, v) to the straight line l is then calculated:

d = |a·u + b·v + c| / sqrt(a² + b²)

A least-squares equation is constructed:

X* = argmin over X of Σ_{i=1..k} d_i²

where k is the number of points in the contour point cloud of the static object, and X = [R t], i.e. the external parameters between the lidar and the camera.

The least-squares problem constructed above is solved by optimization to obtain the optimal external parameters.
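A hedged sketch of the least-squares formulation above, using SciPy, is shown below; the rotation is parameterized as a rotation vector, and find_line(u, v) stands for a hypothetical helper that returns the line coefficients (a, b, c) fitted to the three image-contour points nearest to the projected point.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, lidar_pts, K, find_line):
    # x = [rotation vector (3), translation (3)]; lidar_pts: (k, 3) contour points of the static object.
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    cam = (R @ lidar_pts.T).T + t            # points expressed in the camera frame
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]              # perspective division by the depth Z
    res = []
    for u, v in uv:
        a, b, c = find_line(u, v)            # hypothetical lookup of the fitted image line
        res.append((a * u + b * v + c) / np.hypot(a, b))
    return np.asarray(res)

# Usage sketch (x0 = initial extrinsic guess as [rotvec, t]):
# sol = least_squares(residuals, x0, args=(lidar_pts, K, find_line))
# R_opt, t_opt = Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```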
In one embodiment shown, feature extraction may be performed on the lane lines in the second type of lidar data and the camera data.
For example, the lane line contour may first be extracted from the camera image data using grayscale processing and Canny edge detection. Grayscale processing and Canny edge detection are common image processing techniques, often used to extract edge information from images. Grayscale processing converts the color camera image data into a grayscale image. The grayscale image contains only brightness information and no color information, which reduces processing complexity and is usually sufficient to provide enough edge information for lane line detection. The grayscale value can be obtained by weighted averaging of the pixel values of the red, green and blue channels of the color image with certain weights. Canny edge detection is a classical edge detection algorithm that extracts edge information by detecting areas of the image with large brightness changes. The Canny algorithm has a certain resistance to noise and can extract edges with clear details.
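A minimal sketch of the grayscale conversion and Canny edge detection step described above (the hysteresis thresholds and file name are assumed example values):

```python
import cv2

img = cv2.imread("frame.png")                  # hypothetical camera frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # drop colour, keep brightness only
edges = cv2.Canny(gray, 50, 150)               # low/high hysteresis thresholds
# 'edges' is a binary image whose non-zero pixels approximate the lane-line contours.
```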
Then, ground fitting can first be performed on the second type of laser radar data to find the parameters of the ground plane, such as its normal vector, and the contour features of the lane lines can then be extracted from the difference between the reflection of the lane-line paint and the reflection of the surrounding road surface. Since lane lines are usually coated with reflective paint, they have different reflective properties in the laser point cloud than other areas of the road surface; by setting a reflection intensity threshold, points whose reflection intensity is higher than the threshold can be extracted as lane line candidate points. On the other hand, the lane line is usually located on the road surface, and the reflection characteristics of the road surface may differ according to material, humidity and other factors, so the reflection characteristics of the lane line and the surrounding road surface are inconsistent; by setting a reflection consistency threshold, points inconsistent with the reflection characteristics of the surrounding road surface can also be extracted as lane line candidate points. Through these steps, the lane line candidate point cloud is obtained. The candidate point cloud may be further processed with point cloud processing techniques such as clustering and filtering to extract the contour feature point cloud of the lane lines.
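A hedged sketch of the lidar-side lane-line extraction described above, assuming the merged second-type cloud is available as an (N, 4) array of x, y, z and reflection intensity; the intensity threshold, plane-fit tolerance and clustering parameters are assumed example values.

```python
import numpy as np
import open3d as o3d

def lane_line_candidates(cloud_xyzi, intensity_thresh=0.6, eps=0.5, min_points=10):
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cloud_xyzi[:, :3]))
    # RANSAC ground-plane fit: plane = [a, b, c, d] with ax + by + cz + d = 0.
    plane, ground_idx = pcd.segment_plane(distance_threshold=0.05, ransac_n=3, num_iterations=1000)
    ground = cloud_xyzi[np.asarray(ground_idx)]
    # Painted lane lines reflect more strongly than the surrounding road surface.
    candidates = ground[ground[:, 3] > intensity_thresh]
    # Cluster the candidates so isolated high-intensity returns can be discarded.
    cand_pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(candidates[:, :3]))
    labels = np.asarray(cand_pcd.cluster_dbscan(eps=eps, min_points=min_points))
    return candidates[labels >= 0]
```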
Referring to fig. 2, fig. 2 is a block diagram of an apparatus for calibrating external parameters online according to an exemplary embodiment, where the apparatus includes:
a data acquisition module 210, configured to acquire camera data and first-class lidar data;
the data fusion module 220 is configured to find, for each frame of the camera data, the first type of lidar data that has the closest time distance to the timestamp of the camera data, and combine the first type of lidar data with the first type of lidar data of the historical N frames in a synchronous manner to obtain one frame of second type of lidar data;
a feature extraction module 230, configured to perform feature extraction on the second type of laser radar data and the static object in the camera data;
and the external parameter calculating module 240 is configured to perform coincidence matching on the outline of the static object in the second type of laser radar data and the outline of the static object in the camera data, and calculate external parameters of the laser radar and the camera.
Optionally, the acquiring camera data and first-class laser radar data includes:
and acquiring camera data and first-class laser radar data, performing SLAM operation on the first-class laser radar data, and calculating the pose of the first-class laser radar data of each frame, wherein the pose comprises rotation information and displacement information.
Optionally, the performing coincidence matching on the outline of the static object in the second type of laser radar data and the outline of the static object in the camera data includes:
matching the outline of the static object in the second-class laser radar data with the outline of the static object in the camera data to find out the feature points corresponding to each other;
calculating external parameter information between the laser radar and the camera based on the corresponding characteristic points, wherein the external parameter information comprises rotation and displacement;
and transforming the outline of the static object in the second type of laser radar data according to the calculated external parameter information, and aligning the outline of the static object in the camera data.
Optionally, the feature extraction of the static object in the second class of laser radar data and the camera data includes:
and extracting the characteristics of the lane lines in the second-class laser radar data and the camera data.
Referring to fig. 3, fig. 3 is a hardware configuration diagram of an electronic device where an online calibration external parameter device is located in an exemplary embodiment. At the hardware level, the device includes a processor 302, an internal bus 304, a network interface 306, memory 308, and non-volatile storage 310, although other hardware required for the service is possible. One or more embodiments of the present description may be implemented in a software-based manner, such as by the processor 302 reading a corresponding computer program from the non-volatile storage 310 into the memory 308 and then running. Of course, in addition to software implementation, one or more embodiments of the present disclosure do not exclude other implementation manners, such as a logic device or a combination of software and hardware, etc., that is, the execution subject of the following processing flow is not limited to each logic unit, but may also be hardware or a logic device.
For the device embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The apparatus embodiments described above are illustrative only, in that the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present description. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both non-transitory and non-transitory, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, read only compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by the computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present description to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments of the present description. The word "if" as used herein may be interpreted as "at … …" or "at … …" or "responsive to a determination", depending on the context.
The foregoing description of the preferred embodiment(s) is (are) merely intended to illustrate the embodiment(s) of the present invention, and it is not intended to limit the embodiment(s) of the present invention to the particular embodiment(s) described.

Claims (10)

1. A method for calibrating an external parameter on line, the method comprising:
acquiring camera data and first-class laser radar data;
for each frame of the camera data, searching for the first type of laser radar data whose time stamp is closest to that of the camera data frame, and synchronously combining it with the historical N frames of first type of laser radar data to obtain one frame of second type of laser radar data; the synchronous merging is realized by a point cloud registration algorithm and a point cloud fusion algorithm;
extracting features of static objects in the second-class laser radar data and the camera data;
and carrying out superposition matching on the outline of the static object in the second-class laser radar data and the outline of the static object in the camera data, and calculating external parameters of the laser radar and the camera.
2. The method of claim 1, wherein the acquiring camera data and first type lidar data comprises:
and acquiring camera data and first-class laser radar data, performing SLAM operation on the first-class laser radar data, and calculating the pose of the first-class laser radar data of each frame, wherein the pose comprises rotation information and displacement information.
3. The method according to claim 1, wherein said coincidence matching of the profile of the static object in the second type of lidar data and the profile of the static object in the camera data comprises:
matching the outline of the static object in the second-class laser radar data with the outline of the static object in the camera data to find out the feature points corresponding to each other;
calculating external parameter information between the laser radar and the camera based on the corresponding characteristic points, wherein the external parameter information comprises rotation parameters and displacement parameters;
and transforming the outline of the static object in the second type of laser radar data according to the calculated external parameter information, and aligning the outline of the static object in the camera data.
4. The method of claim 1, wherein the feature extraction of the static object in the second type of lidar data and the camera data comprises:
and extracting the characteristics of the lane lines in the second-class laser radar data and the camera data.
5. An apparatus for calibrating an external parameter online, the apparatus comprising:
the data acquisition module is used for acquiring camera data and first-class laser radar data;
the data fusion module is used for searching the first type of laser radar data closest to the time stamp of each frame of the camera data, and synchronously combining the first type of laser radar data with the first type of laser radar data of the historical N frames to obtain one frame of second type of laser radar data;
the feature extraction module is used for extracting features of the second-class laser radar data and static objects in the camera data;
and the external parameter calculation module is used for carrying out superposition matching on the outline of the static object in the second-class laser radar data and the outline of the static object in the camera data, and calculating external parameters of the laser radar and the camera.
6. The apparatus of claim 5, wherein the acquiring camera data and first type lidar data comprises:
and acquiring camera data and first-class laser radar data, performing SLAM operation on the first-class laser radar data, and calculating the pose of the first-class laser radar data of each frame, wherein the pose comprises rotation information and displacement information.
7. The apparatus of claim 5, wherein said matching the outline of the static object in the second type of lidar data with the outline of the static object in the camera data comprises:
matching the outline of the static object in the second-class laser radar data with the outline of the static object in the camera data to find out the feature points corresponding to each other;
calculating external parameter information between the laser radar and the camera based on the corresponding characteristic points, wherein the external parameter information comprises rotation parameters and displacement parameters;
and transforming the outline of the static object in the second type of laser radar data according to the calculated external parameter information, and aligning the outline of the static object in the camera data.
8. The apparatus of claim 5, wherein the feature extraction of static objects in the second type of lidar data and the camera data comprises:
and extracting the characteristics of the lane lines in the second-class laser radar data and the camera data.
9. A machine readable storage medium having stored thereon computer instructions which when executed by a processor implement the steps of the method of any of claims 1-4.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the steps of the method of any of claims 1-4 by executing the executable instructions.
CN202310581790.6A 2023-05-23 2023-05-23 Method and device for calibrating external parameters on line Pending CN116524014A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310581790.6A CN116524014A (en) 2023-05-23 2023-05-23 Method and device for calibrating external parameters on line

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310581790.6A CN116524014A (en) 2023-05-23 2023-05-23 Method and device for calibrating external parameters on line

Publications (1)

Publication Number Publication Date
CN116524014A true CN116524014A (en) 2023-08-01

Family

ID=87406368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310581790.6A Pending CN116524014A (en) 2023-05-23 2023-05-23 Method and device for calibrating external parameters on line

Country Status (1)

Country Link
CN (1) CN116524014A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination