WO2021180149A1 - A method and device for calibrating lidar parameters (一种激光雷达参数标定方法及装置) - Google Patents

A method and device for calibrating lidar parameters (一种激光雷达参数标定方法及装置)

Info

Publication number
WO2021180149A1
Authority
WO
WIPO (PCT)
Prior art keywords
parameter
calibration
sampling points
plane
function
Prior art date
Application number
PCT/CN2021/080111
Other languages
English (en)
French (fr)
Inventor
胡烜
石现领
黄志臻
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to CA3171089A (CA3171089A1)
Priority to EP21767321.9A (EP4109131A4)
Publication of WO2021180149A1
Priority to US17/942,380 (US20230003855A1)

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00: Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/48: Details of systems according to group G01S 17/00
    • G01S 7/4802: Using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S 7/481: Constructional features, e.g. arrangements of optical elements
    • G01S 7/4814: Constructional features of transmitters alone
    • G01S 7/4815: Constructional features of transmitters alone using multiple transmitters
    • G01S 7/4817: Constructional features relating to scanning
    • G01S 7/497: Means for monitoring or calibrating
    • G01S 7/4972: Alignment of sensor
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/006: Theoretical aspects
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06: Systems determining position data of a target
    • G01S 17/42: Simultaneous measurement of distance and other co-ordinates
    • G01S 17/88: Lidar systems specially adapted for specific applications
    • G01S 17/89: Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • This application relates to the field of communication technology, and in particular to a method and device for calibration of lidar parameters.
  • Laser detection and ranging is generally represented by the abbreviation LiDAR or lidar.
  • A lidar can emit laser light into the detection environment, detect the echo signal reflected by each sampling point in the detection environment, measure the target distance and target angle of each sampling point according to each echo signal, and then input each target distance and target angle into the pre-stored point cloud calculation algorithm of the lidar to obtain the measured values of the three-dimensional coordinates of each sampling point in the same coordinate system, which represent the measured value of the point cloud of the detection environment.
  • One or more variable parameters are set in the point cloud computing algorithm.
  • By changing the values of these parameters, the error between the measured value of the point cloud obtained by the point cloud computing algorithm and its true value (referred to as the measurement error) can be changed.
  • Therefore, the lidar system can calibrate the values of the variable parameters in the point cloud calculation algorithm before the measurement operation, and then perform the measurement operation according to the calibrated point cloud calculation algorithm.
  • In the existing method for calibrating the error parameters of a lidar system, M plane calibration surfaces are first set in the calibration field; the lidar system scans each plane calibration surface, the error parameters in the point cloud calculation algorithm are assigned values to obtain the detected first point cloud, and the plane equations of the M calibration surfaces are then fitted according to the first point cloud.
  • the error parameter in the point cloud computing algorithm is used as a variable to obtain the corresponding detected second point cloud.
  • the position of any sampling point in the second point cloud is a function with the error parameter as the independent variable.
  • the second point cloud meets the plane equation used to describe the corresponding calibration surface.
  • a cost function is constructed according to the distance from the second point cloud to the plane represented by the fitted plane equation, and the variable of the cost function is an error parameter. Then, the error parameters of the lidar system are calibrated by calculating the optimal solution of the cost function.
  • the embodiments of the present application provide a method and device for calibrating lidar parameters, which are used to calibrate the parameters in the point cloud computing algorithm of the lidar system, so as to improve the point cloud measurement accuracy of the lidar system.
  • an embodiment of the present application provides a method for calibrating lidar parameters, including: acquiring the three-dimensional coordinates of multiple sampling points in the same coordinate system detected by multiple laser beams emitted by the lidar system on a calibration surface.
  • The three-dimensional coordinates of the multiple sampling points are obtained by inputting the measurement information of the multiple sampling points into a point cloud computing algorithm that uses the first parameter as a variable; the three-dimensional coordinates of any one of the multiple sampling points are a function with the first parameter as the independent variable, and the measurement information of the multiple sampling points is used to determine the target angle and the target distance of the multiple sampling points relative to the lidar system.
  • Determine the predicted value of the first parameter that makes the cost function with the first parameter as the independent variable reach an optimal solution; the cost function is determined based on the three-dimensional coordinates of the multiple sampling points and the fitting function of the multiple sampling points, and the predicted value of the first parameter is used to make the three-dimensional coordinates of the multiple sampling points satisfy the fitting function.
  • The fitting function is obtained by approximating or fitting the three-dimensional coordinates of the multiple sampling points to an equation representing the calibration surface, and the equation of the calibration surface is determined based on the shape of the calibration surface. A value is then assigned to the first parameter in the point cloud computing algorithm according to the predicted value of the first parameter.
  • In this method, the cost function used to determine the predicted value of the first parameter is determined according to the three-dimensional coordinates of the sampling points and the fitting function of the sampling points, and the fitting function of the multiple sampling points takes the first parameter as the independent variable, so the accuracy of the fitting function is determined by the accuracy of the predicted value of the first parameter.
  • As a result, the fitting surface corresponding to the fitting function is closer to the calibration surface, and the predicted value of the first parameter is closer to the true value of the first parameter than the preset initial value of the first parameter in the prior art. Therefore, the method of the present application helps improve the accuracy of the predicted value of the first parameter.
  • When the predicted value of the first parameter equals the true value of the first parameter, the fitting surface corresponding to the fitting function is the calibration surface.
  • the calibration surface is a plane
  • the fitting function is a plane equation
  • The cost function is positively correlated with the first cost function; the first cost function is determined according to the first distance between the multiple sampling points and the plane represented by the fitting function, and the first distance is a function with the first parameter as the independent variable.
  • the calibration surface includes a first calibration plane and a second calibration plane
  • The multiple sampling points include first sampling points detected by the lidar system on the first calibration plane and second sampling points detected on the second calibration plane.
  • The fitting function includes a first fitting function for the first sampling points and a second fitting function for the second sampling points.
  • The cost function is positively correlated with a second cost function, and the second cost function is determined based on the first fitting function, the second fitting function, and the relative positional relationship between the first calibration plane and the second calibration plane; the second cost function uses the first parameter as the independent variable.
  • The relative positional relationship is used to indicate that: the first calibration plane and the second calibration plane are perpendicular to each other; or, the first calibration plane and the second calibration plane are parallel to each other; or, the distance between the first calibration plane and the second calibration plane that are parallel to each other.
  • the first parameter is used to eliminate calculation errors of the point cloud calculation algorithm.
  • the first parameter includes at least one of a measurement error parameter and a coordinate transformation error parameter
  • the measurement error parameter is used to eliminate errors in measurement information of the multiple sampling points
  • the coordinate transformation error parameter is used to eliminate errors introduced by the coordinate transformation process
  • the coordinate transformation process is used to transform the three-dimensional coordinates of the sampling points detected by different laser modules in the lidar system into the same coordinate system .
  • An embodiment of the present application provides a lidar parameter calibration device, which includes: an acquisition module, configured to acquire the three-dimensional coordinates, in the same coordinate system, of multiple sampling points detected on a calibration surface by multiple laser beams emitted by the lidar system, where the three-dimensional coordinates of the multiple sampling points are obtained by inputting the measurement information of the multiple sampling points into a point cloud computing algorithm that uses a first parameter as a variable, the three-dimensional coordinates of any one of the multiple sampling points are a function with the first parameter as the independent variable, and the measurement information of the multiple sampling points is used to determine the target angle and target distance of the multiple sampling points relative to the lidar system; a parameter prediction module, configured to determine the predicted value of the first parameter that makes the cost function with the first parameter as the independent variable reach an optimal solution, where the cost function is determined based on the three-dimensional coordinates of the multiple sampling points and the fitting function of the multiple sampling points, and the predicted value of the first parameter is used to make the three-dimensional coordinates of the multiple sampling points satisfy the fitting function; and a calibration module, configured to assign a value to the first parameter in the point cloud computing algorithm according to the predicted value of the first parameter.
  • the calibration surface is a plane
  • the fitting function is a plane equation
  • The cost function is positively correlated with the first cost function; the first cost function is determined according to the first distance between the multiple sampling points and the plane represented by the fitting function, and the first distance is a function with the first parameter as the independent variable.
  • the calibration surface includes a first calibration plane and a second calibration plane
  • The multiple sampling points include first sampling points detected by the lidar system on the first calibration plane and second sampling points detected on the second calibration plane; the fitting function includes a first fitting function for the first sampling points and a second fitting function for the second sampling points; the cost function is positively correlated with a second cost function, the second cost function is determined based on the first fitting function, the second fitting function, and the relative positional relationship between the first calibration plane and the second calibration plane, and the second cost function uses the first parameter as the independent variable.
  • The relative positional relationship is used to indicate that: the first calibration plane and the second calibration plane are perpendicular to each other; or, the first calibration plane and the second calibration plane are parallel to each other; or, the distance between the first calibration plane and the second calibration plane that are parallel to each other.
  • the first parameter is used to eliminate calculation errors of the point cloud calculation algorithm.
  • the first parameter includes at least one of a measurement error parameter and a coordinate transformation error parameter
  • the measurement error parameter is used to eliminate errors in measurement information of the multiple sampling points
  • the coordinate transformation error parameter is used to eliminate errors introduced by the coordinate transformation process
  • the coordinate transformation process is used to transform the three-dimensional coordinates of the sampling points detected by different laser modules in the lidar system into the same coordinate system .
  • An embodiment of the present application provides a computer device, including a processor and a memory; when the processor runs the computer instructions stored in the memory, it executes the method described in the first aspect or any one of the implementation manners of the first aspect.
  • an embodiment of the present application provides a laser radar system, including a laser source, a photodetector, a processor, and a memory; the laser source is used to generate and emit multiple laser beams to a calibration surface, and the photodetector is used for The echo signals of the multiple laser beams are detected; the processor executes the method according to the first aspect or any one of the implementation manners of the first aspect when running the computer instructions stored in the memory.
  • an embodiment of the present application provides a computer-readable storage medium, including instructions, which when run on a computer, cause the computer to execute the method as described in the first aspect or any one of the implementation manners of the first aspect .
  • embodiments of the present application provide a computer program product, including instructions, which when run on a computer, cause the computer to execute the method described in the first aspect or any one of the implementation manners of the first aspect.
  • an embodiment of the present application provides a chip.
  • the chip includes a processor and a memory.
  • When the processor runs a computer program or instructions in the memory, it can implement the method described in the first aspect or any one of the implementation manners of the first aspect.
  • Figure 1a is a schematic diagram of an embodiment of the lidar system of the present invention.
  • FIG. 1b is a schematic diagram of another embodiment of the lidar system of the present invention.
  • Figure 2 is a schematic diagram of the principle on which the point cloud computing algorithm is based
  • Figure 3a is a laser radar system coordinate system established with a reference laser beam starting point as the origin;
  • Figure 3b is a side view of Figure 3a
  • Figure 3c is a top view of Figure 3a
  • Figure 4 is a schematic diagram of an existing calibration field and the calibration surfaces in it;
  • Fig. 5 is a schematic diagram of an embodiment of a method for calibrating lidar parameters according to the present application
  • Figure 6a is a schematic diagram of the calibration field and the calibration surfaces in it according to this application.
  • Fig. 6b is a front view of the sampling points drawn according to the true value of the point cloud of a calibration surface in Fig. 6a;
  • Fig. 6c is a side view of the sampling points drawn according to the true value of the point cloud of a calibration surface in Fig. 6a;
  • Figures 7a to 7e are schematic diagrams of the calibration results of the error parameters D_i, δ_i, ε_i, V_i, and H_i obtained according to the existing lidar parameter calibration method;
  • Figures 8a to 8e are schematic diagrams of the calibration results of the error parameters D_i, δ_i, ε_i, V_i, and H_i obtained according to the lidar parameter calibration method provided by the embodiments of the present application;
  • Figure 9 is a schematic diagram of the point cloud of each plane reflector obtained by assigning the error parameters in the lidar system a value of 0;
  • FIG. 10 shows a schematic diagram of the point cloud of the plane reflector before and after calibration according to the method of the present application
  • FIG. 11 is a schematic structural diagram of a laser radar parameter calibration device provided by an embodiment of the present application.
  • Fig. 12 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • Laser detection and ranging is generally also represented by the abbreviation "LiDAR" or called laser radar.
  • Hereinafter, laser detection and ranging is referred to as lidar.
  • Three-dimensional object scanners, automatic or semi-autonomous vehicles, security cameras and other devices can use lidar systems to scan objects.
  • the laser radar system provided by this application is introduced below.
  • the lidar system may refer to a single lidar.
  • FIG. 1a shows a schematic structural diagram of a lidar system provided by an embodiment of the present application. Referring to FIG. 1a, the lidar system 100 includes a processor 111, a laser source 112, a photodetector 113, a memory 114, and the like.
  • the laser source 112 includes one or more lasers (not specifically shown in FIG. 1a), and the laser source 112 can generate laser light and emit laser light to the detection environment.
  • the photodetector 113 is used to detect the echo signal of the laser light (or called the reflected laser light), and based on the received echo signal, generate and output a data signal.
  • The processor 111 is used to receive the data signal output by the photodetector 113 and determine, according to the received data signal, the measurement information of the sampling point detected by the laser; the measurement information of the sampling point is used to determine the target angle, target distance, and the like of the sampling point relative to the lidar system.
  • the processor 111 may also determine the point cloud of the detection environment according to the measurement information of the sampling points, including the three-dimensional coordinates of the sampling points in the coordinate system of the lidar system.
  • The processor 111 may specifically be a digital signal processor (DSP), a field programmable gate array (FPGA), a central processing unit (CPU), or another processor.
  • the lidar system 100 may further include a memory 114 for storing executable programs, and the processor 111 may execute the executable programs in the memory 114 to obtain a point cloud of the detection environment.
  • The lidar system 100 may also include a mechanical device (not shown in FIG. 1a), which is used to change the angle at which the laser source 112 emits laser light and to change the angle at which the photodetector 113 detects the echo signal.
  • the lidar system may include a computer device and one or more lidars. All or part of the functions of the processor 111 in FIG. 1a may be implemented by computer equipment (for example, a server, a desktop computer, a notebook computer, a mobile terminal, etc.).
  • FIG. 1b shows another schematic structural diagram of the lidar system provided by an embodiment of the present application, and FIG. 1b takes the lidar system including a lidar as an example.
  • the lidar system 100 includes a computer device 120 and a lidar 110.
  • the lidar 110 includes at least the laser source 112 and the light detector 113 in the embodiment corresponding to FIG. 1a
  • the computer device 120 includes at least the processor 111 in the embodiment corresponding to FIG. 1a.
  • the lidar 110 is connected to the computer device 120 and can transmit data signals to each other.
  • The photodetector 113 can send a data signal to the computer device 120 based on the received echo signal, and the computer device 120 can obtain the point cloud detected by the lidar 110 based on the received data signal.
  • The lidar system can emit laser beams into the detection environment and detect the echo signals reflected by the sampling points of the detection environment. When the echo signal of a certain laser beam is detected, the measurement information of that laser, in other words the measurement information of the sampling point detected by that laser, can be calculated.
  • The lidar system can calculate the distance between the sampling point and the exit point of the laser based on the flight time (called the detection distance of the laser); the lidar system can also calculate the emission angle of the laser, such as the horizontal exit angle and the vertical exit angle.
  • the lidar system can input laser measurement information (ie, detection distance and emission angle) into the point cloud calculation algorithm to obtain the three-dimensional coordinates of each sampling point.
  • the point cloud calculation algorithm can be a program pre-stored in the lidar system, and the program is used to output the three-dimensional coordinates of each sampling point detected by the lidar system.
  • Three-dimensional coordinates are an ordered set of three independent values that represents a point in space; they have different expression forms in different three-dimensional coordinate systems.
  • In the following, the three-dimensional coordinate system is a three-dimensional Cartesian coordinate system and the three-dimensional coordinates are three-dimensional Cartesian coordinates, as an example.
  • Figure 2 is a schematic diagram of the principle of the point cloud computing algorithm
  • the three-dimensional coordinate system in Figure 2 is the coordinate system of the lidar system (Referred to as the radar coordinate system)
  • the origin o of the coordinate system represents the exit position of the laser in the lidar system
  • the dotted line with an arrow represents the laser emitted from the exit position o.
  • The three-dimensional coordinates of the sampling point T are calculated according to formula (1), where α is the horizontal emission angle of the laser, ω is the vertical emission angle of the laser, and R is the detection distance of the laser.
  • In Figure 2, the length of the dashed line with an arrow represents the detection distance R of the laser.
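  • For illustration, the following sketch assumes the conventional spherical-to-Cartesian relation x = R·cos(ω)·cos(α), y = R·cos(ω)·sin(α), z = R·sin(ω) suggested by the geometry of Figure 2; it is not a verbatim reproduction of formula (1), and the function and variable names are placeholders.

    import math

    def sampling_point_xyz(r, alpha, omega):
        """Convert a detection distance r and the horizontal/vertical emission
        angles alpha/omega (in radians) into Cartesian coordinates in the
        radar coordinate system, under the assumed conversion above."""
        x = r * math.cos(omega) * math.cos(alpha)
        y = r * math.cos(omega) * math.sin(alpha)
        z = r * math.sin(omega)
        return x, y, z

    # Example: a sampling point 20 m away with a 30 deg horizontal and
    # 5 deg vertical emission angle.
    print(sampling_point_xyz(20.0, math.radians(30.0), math.radians(5.0)))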
  • the lidar system generally has the following measurement errors and corresponding error parameters:
  • the laser radar system generally has measurement errors (specifically called angle errors) in the process of calculating the laser emission angle.
  • the angle error parameter can be introduced into formula (1) to eliminate the angle error and obtain an accurate emission angle.
  • the process of calculating the detection distance of the laser by the lidar system generally has a measurement error (called distance error).
  • the distance error parameter can be introduced into the formula (1) to eliminate the distance error and obtain an accurate detection distance.
  • Lidar systems can be divided into single-beam lidar systems and multi-beam lidar systems.
  • the single-beam lidar system scans only one laser scan line at a time, and the multi-beam lidar system scans one time to generate multiple laser scan lines.
  • Each module or each lidar in a multi-beam lidar system includes one or more lasers and photodetectors and can generally generate multiple laser beams.
  • the emission angles of lasers in different modules or different lidars are generally different. For example, the vertical emission angles of lasers emitted by different modules or different lidars are different.
  • Each module or each lidar can respectively emit laser light to the detection environment, and according to formula (1), the measurement position under the coordinate system of the corresponding module or lidar (referred to as the module coordinate system) will be obtained.
  • the origin of the module coordinate system is generally determined by the emission position of the laser. Because the laser emission positions of different modules or lidars are different, the coordinate systems of different modules generally do not overlap.
  • In order to calculate the position of each sampling point in the detection environment in the same coordinate system according to the measurement information of each module or each lidar, one of the module coordinate systems is selected as the unified coordinate system (called the radar coordinate system), and module error parameters are introduced into the point cloud calculation algorithm to eliminate the differences between the coordinate systems of the other modules and the radar coordinate system (called module errors).
  • the laser radar system coordinate system xyz is established with the starting point of the reference laser beam as the origin o, the x-axis and z-axis directions are respectively defined by the horizontal and vertical exit angles of the laser beam, and the y-axis direction is determined by the right-hand rule.
  • In Figure 3a, the projection onto the xy plane of the straight line (dotted line with arrow) along which the i-th laser line (or the i-th laser beam) lies is the straight line where BC is located.
  • A perpendicular is drawn from point B to the line along which the i-th laser line lies, and the foot of the perpendicular is point A.
  • Fig. 3b is a side view of Fig. 3a
  • the line of sight direction is oB direction.
  • Fig. 3c is a top view of Fig. 3a, and the direction of the line of sight is the opposite direction of the z-axis.
  • The point cloud calculation algorithm of the i-th laser line is given by formula (2), which is used to calculate the three-dimensional coordinates of the sampling point T_i of the i-th laser line in the radar coordinate system; the parameters in formula (2) are defined as follows:
  • R_i represents the true value (or corrected value) of the distance of the sampling point T_i;
  • D_i is a distance error parameter, called the distance compensation factor; the absolute value of D_i represents the length of the line segment DA;
  • V_i is a module error parameter, which represents the projection on the side-view plane of the perpendicular distance from the origin o of the radar coordinate system to the i-th laser line; its absolute value is the length of the line segment AB; when the z-axis coordinate of point A is positive, V_i takes a positive value, and otherwise V_i takes a negative value;
  • ω_i = ω'_i + δ_i denotes the true value of the vertical exit angle of the i-th laser line, ω'_i denotes the measured value of the vertical exit angle of the i-th laser line, and δ_i represents the angle error parameter of the vertical exit angle of the i-th laser line; the angle measured from the line where segment BC lies toward the positive half-axis of the z-axis is positive, and otherwise negative;
  • H_i is a module error parameter, which represents the perpendicular distance from the origin o to the projection of the i-th laser line on the xy plane; the absolute value of H_i is the length of oB;
  • α_i = α'_i + ε_i denotes the true value of the horizontal exit angle of the i-th laser line, α'_i denotes the measured value of the horizontal exit angle of the i-th laser line, and ε_i represents the angle error parameter of the horizontal exit angle of the i-th laser line; the angle measured from the positive half-axis of the x-axis toward the positive half-axis of the y-axis is positive, and otherwise negative.
  • the lidar system can first calibrate the error parameters in the point cloud calculation algorithm of the lidar system (or called calibration).
  • Among the above parameters, R'_i, ω'_i and α'_i are known, and the unknown error parameters to be calibrated are k, ΔR', D_i, δ_i, ε_i, V_i and H_i. In the point cloud computing algorithms corresponding to the laser beams of different modules, k and ΔR' are the same, while D_i, δ_i, ε_i, V_i and H_i may be different. Therefore, if the number of laser lines is N, the number of error parameters to be calibrated in the point cloud calculation algorithm of each module of the lidar system is 5N+2.
  • If the error parameters D_i, δ_i, ε_i, V_i and H_i corresponding to the reference laser beam are all set to 0, the number of error parameters to be calibrated in the point cloud calculation algorithm of each module of the lidar system is 5N-3.
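  • To see the counts: with N laser lines there are 2 parameters shared by all lines (k and ΔR') plus 5 per-line parameters (D_i, δ_i, ε_i, V_i, H_i) for each of the N lines, giving 5N + 2 parameters in total; fixing the five per-line parameters of the reference beam to 0 removes 5 of them, leaving 5(N - 1) + 2 = 5N - 3.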
  • the prior art provides a method for calibration of lidar parameters.
  • M plane calibration surfaces are set in the calibration field (in Figure 4, calibration surface 1, calibration surface 2 and calibration surface 3 are shown as examples). It is assumed that the lidar system includes I modules and that each module scans each plane calibration surface. Assuming that each module emits J beams of laser light at different positions or angles while scanning a single calibration surface, the lidar system emits a total of M × I × J laser beams toward the M calibration surfaces and detects M × I × J sampling points. Here, M and I are positive integers, and J is a positive integer greater than 2.
  • the existing laser radar parameter calibration method includes the following steps:
  • First, plane fitting is performed on the initial value of the point cloud of each calibration surface, and the fitted plane parameters A_m, B_m and C_m in each plane equation are obtained.
  • Then, a point cloud function is constructed; the point cloud function includes the position functions of the M × I × J sampling points, that is, the position of each sampling point is a function with k, ΔR', D_i, δ_i, ε_i, V_i and H_i as independent variables.
  • Finally, the error parameters k, ΔR', D_i, δ_i, ε_i, V_i and H_i are estimated through numerical optimization.
  • the theoretical constraint condition corresponding to formula (3) is: in the point cloud function of the m-th calibration surface, the sampling points corresponding to each position function are located in the plane of the m-th calibration surface.
  • The actual constraint condition corresponding to formula (3) is: in the point cloud function of the m-th calibration surface, the sampling points corresponding to each position function are located in the fitting plane of the m-th calibration surface, where the point cloud on which the fitting plane of the m-th calibration surface is based is obtained by setting each error parameter in formula (2) to its initial value. Since the initial value of each error parameter usually differs considerably from its true value, there is a large difference between the position of the point cloud obtained by setting the error parameters to their initial values and the m-th calibration surface, and there is likewise a large difference between the plane described by the plane equation fitted to that point cloud and the m-th calibration surface.
  • this application provides another embodiment of the method for calibration of the lidar parameters, which will be introduced below.
  • Fig. 5 is a schematic diagram of an embodiment of the laser radar parameter calibration method of this application. Referring to Fig. 5, taking the laser radar system shown in Fig. 1a as an example, the laser radar parameter calibration method of the embodiment of the present application may include the following steps:
  • The laser source 112 of the lidar system 100 can emit multiple laser beams toward one or more calibration surfaces set in the calibration field; based on the echo signals of the multiple laser beams received by the photodetector 113 of the lidar system 100, the processor 111 can obtain the measurement information of the multiple sampling points detected by the multiple laser beams on the calibration surface.
  • The measurement information of any one of the multiple sampling points is used to determine the target angle of the sampling point relative to the lidar system 100 (for example, ω'_i and α'_i in the error parameter model) and the target distance (for example, R'_i in the error parameter model).
  • The lidar system stores a point cloud computing algorithm program.
  • the input of the program includes the measurement information of the sampling point, and the output includes the three-dimensional coordinates of the sampling point.
  • After the processor 111 obtains the measurement information of the multiple sampling points, it can treat the first parameter in the point cloud calculation algorithm as a variable and execute the point cloud calculation algorithm according to the measurement information of the multiple sampling points, so as to obtain the three-dimensional coordinates of the multiple sampling points in the same radar coordinate system; the three-dimensional coordinates of any one of the multiple sampling points are a function with the first parameter as the independent variable.
  • the laser radar parameter calibration device can obtain the three-dimensional coordinates of multiple sampling points.
  • Then, the lidar parameter calibration device can determine the value of the first parameter for which the cost function with the first parameter as the independent variable reaches the optimal solution.
  • The value of the first parameter at which the cost function reaches the optimal solution is called the predicted value of the first parameter.
  • The cost function is determined based on the three-dimensional coordinates of the multiple sampling points and a fitting function for the multiple sampling points, and the cost function reaches the optimal solution when the three-dimensional coordinates of the multiple sampling points satisfy the fitting function; in other words, the predicted value of the first parameter is used to make the three-dimensional coordinates of the multiple sampling points satisfy the fitting function.
  • the fitting function for the multiple sampling points is obtained by approximating or fitting the three-dimensional coordinates of the multiple sampling points to an equation representing the calibration surface.
  • the equation used to represent the calibration surface is determined according to the surface shape of the calibration surface. For calibration surfaces with the same surface shape, the equations may be the same.
  • the equation used to express the calibration surface is a plane equation.
  • the equation used to express the calibration surface includes undetermined parameters. The value of the undetermined parameter is determined by the position and angle of the calibration surface relative to the lidar system, or it can be obtained by approximating or fitting the three-dimensional coordinate points of multiple sampling points on the calibration surface .
  • the lidar parameter calibration device may assign a value to the first parameter in the point cloud computing algorithm according to the predicted value of the first parameter to complete the calibration of the first parameter in the point cloud computing algorithm.
  • After calibration, the lidar system 100 can emit laser light into the detection environment and, based on the echo signals of the detected laser beams, input the measurement information of the detected sampling points into the point cloud computing algorithm to obtain the point cloud of the detection environment.
  • the first parameter is used to eliminate calculation errors of the point cloud calculation algorithm, which is beneficial for the processor to obtain the true three-dimensional coordinates of the sampling point according to the measurement information of the sampling point and the point cloud calculation algorithm.
  • the first parameter includes at least one of a measurement error parameter and a coordinate transformation error parameter.
  • The measurement error parameter is used to eliminate the error of the measurement information of the multiple sampling points, such as the aforementioned angle error parameters and distance error parameters. More specifically, the angle error parameters can be δ_i and ε_i in the aforementioned error parameter model, and the distance error parameters can be k, ΔR', D_i and the like in that model.
  • the coordinate transformation error parameters are used to eliminate the errors introduced by the coordinate transformation process.
  • the coordinate transformation process is used to convert the three-dimensional coordinates of the sampling points detected by different laser modules in the lidar system to the same radar coordinate system.
  • The coordinate transformation error parameter can be the aforementioned module error parameter; more specifically, it can be V_i, H_i, etc. in the aforementioned error parameter model.
  • The true value of the first parameter generally changes as the lidar system is used, and this change is usually random and difficult to predict. Therefore, in order to ensure the accuracy of the point cloud detected by the lidar system, the first parameter in the point cloud calculation algorithm needs to be calibrated frequently.
  • The cost function used to determine the predicted value of the first parameter is determined according to the three-dimensional coordinates of the sampling points and the equation of the fitting surface for the sampling points.
  • the prior art determines the fitting function according to the preset initial value of the first parameter.
  • the initial value of the first parameter is a fixedly set value.
  • However, the true value of the first parameter generally changes as the lidar system is used. Therefore, the difference between the initial value of the first parameter and its true value is usually large, which leads to a large difference between the fitted surface and the calibration surface, and in turn to lower accuracy of the predicted value of the first parameter.
  • In contrast, in the method of the present application, the cost function used to determine the predicted value of the first parameter is determined according to the three-dimensional coordinates of the sampling points and the fitting function of the sampling points, and the fitting function of the multiple sampling points takes the first parameter as the independent variable, so the accuracy of the fitting function is determined by the accuracy of the predicted value of the first parameter. Because the predicted value of the first parameter is obtained as the optimal solution based on the measurement information of the sampling points and the cost function, the fitting surface corresponding to the fitting function is closer to the calibration surface, and the predicted value of the first parameter is closer to the true value of the first parameter than the preset initial value of the first parameter in the prior art. Therefore, the method of the present application is beneficial for improving the accuracy of the predicted value of the first parameter. When the predicted value of the first parameter equals the true value of the first parameter, the fitting surface corresponding to the fitting function is the calibration surface.
  • the set calibration surface may be a plane
  • the fitting function may be a plane equation.
  • the cost function is positively correlated with the first cost function, and the first cost function is determined according to the first distance between the multiple sampling points and the plane represented by the fitting function.
  • Since the three-dimensional coordinates of the sampling points and the fitting function both use the first parameter as an independent variable, the first distance is a function with the first parameter as the independent variable.
  • the calibration surface set in the calibration field can include N calibration surfaces, and N is a positive integer, then the multiple sampling points detected by the multiple laser beams emitted by the lidar system are distributed on the N calibration surfaces.
  • Correspondingly, the fitting function for the sampling points includes N fitting functions corresponding to the N calibration surfaces, and each fitting function is determined according to the three-dimensional coordinates of the sampling points distributed on the corresponding calibration surface.
  • the multiple sampling points detected by multiple laser beams include multiple sampling points detected on the first calibration surface (referred to as the first sampling point) and multiple sampling points detected on the second calibration surface (referred to as the As the second sampling point), the fitting function includes a first fitting function for the first sampling point and a second fitting function for the second sampling point.
  • The cost function is positively correlated with the second cost function, and the second cost function is determined based on the first fitting function, the second fitting function, and the relative positional relationship between the first calibration surface and the second calibration surface. Since the first fitting function and the second fitting function use the first parameter as the independent variable, the second cost function uses the first parameter as the independent variable.
  • the first calibration surface and the second calibration surface are both planes. Therefore, the first calibration surface and the second calibration surface may be referred to as the first calibration plane and the second calibration plane, respectively.
  • The relative positional relationship between the first calibration plane and the second calibration plane is used to indicate that the first calibration plane and the second calibration plane are perpendicular to each other, or to indicate that the first calibration plane and the second calibration plane are parallel to each other, or to indicate the distance between the first calibration plane and the second calibration plane that are parallel to each other.
  • the calibration field is open, which is conducive to the extraction of the plane point cloud of the calibration surface, for example, its size is not less than 10m*10m;
  • the size of the calibration surface is not less than 1m*1m, and its height is adjustable, and the adjustment range is not less than 1m, so that all laser beams can receive echo;
  • the pitch angle of the calibration surface is adjustable, and the adjustment range is, for example, between -60° and 60°;
  • each calibration surface facing the lidar system (referred to as the surface of the calibration surface) is flat, and its reflectivity is uniform;
  • Fig. 6a shows only 3 calibration surfaces (calibration surface 1, calibration surface 2 and calibration surface 3), and does not show other M-3 calibration surfaces.
  • the embodiment of this application does not limit other M-3 calibration surfaces to be calibrated.
  • the positions or angles between the M calibration surfaces are different, or the positions and angles are all different.
  • The calibration surfaces in the calibration field need not all be set up at the same time; they can be set up in sequence. For example, first set up one or more calibration surfaces in the calibration field and use the lidar system to scan them; then retract the calibration surfaces in the calibration field and set up other calibration surfaces, or change the calibration surfaces in the calibration field, and use the lidar system to scan the newly set calibration surfaces, until the scanning of M calibration surfaces with different positions and/or angles is completed.
  • The lidar system can be operated to emit laser beams, scan the surface of each calibration surface in the calibration field, and detect the echo signal reflected by the surface of each calibration surface, so as to obtain the measurement information of each laser, which includes the measured value R'_i of the detection distance, the measured value ω'_i of the vertical angle, and the measured value α'_i of the horizontal angle.
  • The lidar system includes I modules, and each module scans each plane calibration surface; assuming that each module emits J beams of laser light at different positions or angles while scanning a single calibration surface, the lidar system emits a total of M × I × J laser beams toward the M calibration surfaces and detects M × I × J sampling points.
  • the lidar system can be set to the calibration mode.
  • In the calibration mode, the first parameter in the point cloud calculation algorithm in the processor is set as a variable, and the three-dimensional coordinates of each sampling point are then obtained according to the measurement information of each laser and the point cloud calculation algorithm.
  • In this embodiment, the first parameter in the point cloud calculation algorithm includes k, ΔR', D_i, δ_i, ε_i, V_i, and H_i.
  • The lidar parameter calibration device estimates the error parameters k, ΔR', D_i, δ_i, ε_i, V_i, H_i by solving for the optimal solution of the cost function shown in the following formula (4):
  • The cost function corresponding to formula (4) is determined according to the three-dimensional coordinates of the sampling points obtained by the point cloud calculation algorithm and the fitting functions of the sampling points on the M calibration surfaces. Specifically, the cost function corresponding to formula (4) is positively correlated with the functions F_M, F_P, F_R, and F_V. These four functions are introduced below.
  • F_M is the cost function constructed based on the plane constraint:
  • (x_{m,i,j}, y_{m,i,j}, z_{m,i,j}) represents the position of the sampling point detected by laser (m, i, j), which can be obtained from formula (2), with the parameters ΔR', k, D_i, V_i, δ_i, H_i, ε_i in formula (2) used as variables; therefore, the position of the sampling point is a function (or position function) with ΔR', k, D_i, V_i, δ_i, H_i, ε_i as independent variables.
  • The parameters A_m, B_m, and C_m are calculated from the three-dimensional coordinates of the sampling points on the m-th calibration surface under the least-squares criterion.
  • The expressions of A_m, B_m, and C_m are as follows:
  • Here, P is used to represent the following matrix:
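  • The explicit expressions for A_m, B_m, C_m and the matrix P are given in the drawings; the sketch below assumes the common three-parameter plane form A_m·x + B_m·y + C_m·z = 1 and fits it to the sampling points of one calibration surface by least squares, together with the point-to-plane distance that a plane-constraint term such as F_M relies on. The names are illustrative.

    import numpy as np

    def fit_plane_abc(points):
        """Least-squares fit of A*x + B*y + C*z = 1 to an (n, 3) array of
        sampling-point coordinates; returns the vector [A, B, C].  Assumes the
        plane does not pass through the origin of the radar coordinate system."""
        P = np.asarray(points, dtype=float)              # n x 3 coordinate matrix
        abc, *_ = np.linalg.lstsq(P, np.ones(len(P)), rcond=None)
        return abc

    def point_plane_distances(points, abc):
        """Unsigned distances from the points to the plane A*x + B*y + C*z = 1."""
        abc = np.asarray(abc, dtype=float)
        return np.abs(np.asarray(points, dtype=float) @ abc - 1.0) / np.linalg.norm(abc)

    # A plane-constraint cost in the spirit of F_M could then be, for example,
    # the sum of squared distances of the m-th surface's points to its fitted plane:
    #   F_M_term = float(np.sum(point_plane_distances(points_m, fit_plane_abc(points_m)) ** 2))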
  • F_P in formula (4) is the cost function constructed based on the parallelism between the surface of calibration surface 1 and the surface of calibration surface 2:
  • F_R in formula (4) is the cost function constructed based on the distance between the surface of calibration surface 1 and the surface of calibration surface 2:
  • F_V in formula (4) is the cost function constructed based on the perpendicularity between the surface of calibration surface 1 and the surface of calibration surface 3:
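  • The exact forms of F_P, F_R and F_V are likewise given in the drawings; the following sketch only illustrates, under assumed definitions, how parallelism, distance and perpendicularity constraints between fitted planes of the form A·x + B·y + C·z = 1 can be turned into cost terms. In a full implementation, the candidate error parameters would be used to recompute the point cloud, the planes would be refitted, and the sum of such terms (together with the plane term F_M) would be handed to a numerical optimizer.

    import numpy as np

    def unit_normal(abc):
        """Unit normal of the plane A*x + B*y + C*z = 1."""
        abc = np.asarray(abc, dtype=float)
        return abc / np.linalg.norm(abc)

    def origin_distance(abc):
        """Distance from the origin to the plane A*x + B*y + C*z = 1."""
        return 1.0 / np.linalg.norm(np.asarray(abc, dtype=float))

    def constraint_terms(abc1, abc2, abc3, known_gap):
        """Assumed constraint costs:
        parallel term  (F_P-like): penalize deviation of |n1 . n2| from 1;
        distance term  (F_R-like): penalize deviation of the gap between the
                                   parallel surfaces 1 and 2 from the known value;
        vertical term  (F_V-like): penalize any component of n1 along n3."""
        n1, n2, n3 = unit_normal(abc1), unit_normal(abc2), unit_normal(abc3)
        f_parallel = (1.0 - abs(float(np.dot(n1, n2)))) ** 2
        f_distance = (abs(origin_distance(abc1) - origin_distance(abc2)) - known_gap) ** 2
        f_vertical = float(np.dot(n1, n3)) ** 2
        return f_parallel, f_distance, f_vertical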
  • Constructing the cost function based on the relative positional relationships between the surfaces of the calibration planes improves the calibration accuracy of the error parameters of the lidar system, thereby improving the accuracy of the point cloud detected by the lidar system.
  • The surface of each calibration surface is flat. Number of laser lines: 32; vertical field of view range: 30° to 61°; vertical exit angle interval: 1°; vertical exit angle measurement error distribution: Gaussian, mean 0, standard deviation 0.5°; horizontal field of view range: 10° to 61°; horizontal exit angle interval: 0.2°; horizontal exit angle measurement error distribution: Gaussian, mean 0, standard deviation 0.125°; distance offset factor ΔR': 0.2 m; distance correction factor k: 0.001.
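  • For reference, these settings can be collected into a small configuration structure (the key names are illustrative; the values are the ones listed above):

    SIMULATION_SETTINGS = {
        "num_laser_lines": 32,
        "vertical_fov_deg": (30.0, 61.0),         # vertical field of view range
        "vertical_angle_step_deg": 1.0,           # vertical exit angle interval
        "vertical_angle_noise_std_deg": 0.5,      # Gaussian, mean 0
        "horizontal_fov_deg": (10.0, 61.0),       # horizontal field of view range
        "horizontal_angle_step_deg": 0.2,         # horizontal exit angle interval
        "horizontal_angle_noise_std_deg": 0.125,  # Gaussian, mean 0
        "distance_offset_m": 0.2,                 # distance offset factor (ΔR')
        "distance_correction_factor": 0.001,      # distance correction factor (k)
    }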
  • Figure 6b is a front view of the sampling points of a calibration surface drawn according to the true value of the point cloud of that calibration surface in Figure 6a, and Figure 6c is a side view of the sampling points of the calibration surface drawn according to the true value of the point cloud of that calibration surface in Figure 6a.
  • The side view of the sampling points drawn according to the true value of the point cloud corresponding to the calibration surface is shown as the straight line 6-1 in Figure 6c, and the front view is shown as the line segment set 6-2 in Figure 6b.
  • It can be seen from Figure 6c that the unknown error parameters cause the point cloud used to describe the sampling points on the calibration plane to be misaligned: the thickness of the point cloud used to describe the calibration plane increases significantly, reducing the accuracy of the point cloud obtained by the lidar system.
  • Figures 7a to 7e are, in order, schematic diagrams of the calibration results of the error parameters D_i, δ_i, ε_i, V_i, and H_i obtained according to the existing lidar parameter calibration method.
  • In Figure 7a, the ordinate of the points on polyline 7a-1 represents the predicted value of D_i, the ordinate of the points on polyline 7a-2 represents the true value of D_i, and the ordinate of the points on polyline 7a-3 represents the calibration error of D_i, i.e. the difference between the predicted value and the true value.
  • In Figure 7e, the ordinate of the points on polyline 7e-1 represents the calibration error of H_i (i.e. the difference between the true value and the predicted value), the ordinate of the points on polyline 7e-2 represents the true value of H_i, and the ordinate of the points on polyline 7e-3 represents the predicted value of H_i.
  • Figures 8a to 8e are, in order, schematic diagrams of the calibration results of the error parameters D_i, δ_i, ε_i, V_i, and H_i obtained according to the lidar parameter calibration method provided by the embodiments of the present application.
  • In Figure 8a, the ordinate of the points on polyline 8a-1 represents the predicted value of D_i, the ordinate of the points on polyline 8a-2 represents the true value of D_i, the points on polyline 8a-1 coincide with the points on polyline 8a-2, and the ordinate of the points on polyline 8a-3 represents the calibration error of D_i (i.e. the difference between the true value and the predicted value).
  • In Figure 8b, the ordinate of the points on polyline 8b-1 represents the true value of δ_i, the ordinate of the points on polyline 8b-2 represents the predicted value of δ_i, the points on polyline 8b-1 coincide with the points on polyline 8b-2, and the ordinate of the points on polyline 8b-3 represents the calibration error of δ_i.
  • In Figure 8e, the ordinate of the points on polyline 8e-1 represents the true value of H_i, the ordinate of the points on polyline 8e-2 represents the predicted value of H_i, the points on polyline 8e-1 coincide with the points on polyline 8e-2, and the ordinate of the points on polyline 8e-3 represents the calibration error of H_i (i.e. the difference between the true value and the predicted value).
  • the test plane is used to quantitatively evaluate the calibration effect.
  • the normal vector of the test plane is (2, 1, 4), and the test plane passes the point (10m, 10m, 10m).
  • The mean square distance of the point cloud relative to the test plane is shown in Table 1. Clearly, the point cloud position is more accurate after calibration with this scheme.
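  • The evaluation metric used here, the mean square distance of the point cloud to the test plane with normal vector (2, 1, 4) passing through the point (10 m, 10 m, 10 m), can be computed as in the following sketch (the values in Table 1 come from the original document and are not reproduced):

    import numpy as np

    def mean_square_distance_to_plane(points, normal, point_on_plane):
        """Mean of the squared Euclidean distances from each point of the cloud
        to the plane defined by a normal vector and one point on the plane."""
        pts = np.asarray(points, dtype=float)
        n = np.asarray(normal, dtype=float)
        n = n / np.linalg.norm(n)
        signed = (pts - np.asarray(point_on_plane, dtype=float)) @ n  # signed distances
        return float(np.mean(signed ** 2))

    # Test plane used in the evaluation above:
    # mse = mean_square_distance_to_plane(point_cloud, (2.0, 1.0, 4.0), (10.0, 10.0, 10.0))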
  • the scheme of the present invention is verified based on actual lidar system data, and 14 1m*1m plane reflectors and a wall are used as the plane calibration surface to calibrate the lidar system.
  • the error parameter in the lidar system is assigned a value of 0, and the point cloud of each plane calibration surface is obtained as shown in Figure 9.
  • Figure 10 shows a side view of the point cloud of a flat reflector before and after calibration.
  • In Figure 10, the short horizontal lines that cross the vertical line represent the point cloud of the plane reflector after calibration, and the short horizontal lines that do not cross the vertical line represent the point cloud of the plane reflector before calibration.
  • Table 2 shows the mean square distance of the point cloud relative to the fitting plane. It can be seen that after calibration, the mean square distance of the point cloud relative to the fitting plane is reduced by more than 50%, indicating that the calibration scheme is effective.
  • The solution of the present invention can be extended to the calibration of external parameters between multiple lidar systems, because some lidar system error parameters characterize the relative position and angle relationships between the multiple modules inside a lidar system, while the external parameters between lidar systems characterize the relative position and angle relationships between the lidar systems themselves; the two problems are similar. If this scheme is used for external parameter calibration of a system composed of multiple lidars, the parameters to be calibrated change accordingly, and the multiple lidars need to scan the same calibration planes, as the sketch below illustrates.
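As a hedged illustration of that analogy, the sketch below treats the extrinsic parameters of a second lidar (a yaw angle and an x/y offset, a parameterization chosen purely for illustration) as the unknowns of a plane-consistency cost over points that both lidars collect on the same calibration planes. The use of scipy's least_squares and the synthetic data are illustrative assumptions, not part of the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def yaw_transform(points, yaw, tx, ty):
    """Rigid transform with a rotation about z and a translation in x/y (3 DoF)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ R.T + np.array([tx, ty, 0.0])

def plane_residuals(points):
    """Residuals of the points to their own least-squares plane z = a*x + b*y + c."""
    A = np.c_[points[:, :2], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return A @ coef - points[:, 2]

def extrinsic_cost(params, planes_ref, planes_other):
    """Merge each plane's points from both lidars and penalize non-flatness."""
    yaw, tx, ty = params
    res = [plane_residuals(np.vstack([p_ref, yaw_transform(p_oth, yaw, tx, ty)]))
           for p_ref, p_oth in zip(planes_ref, planes_other)]
    return np.concatenate(res)

# Hypothetical data: both lidars observe the same three calibration planes.
rng = np.random.default_rng(1)
plane_params = [(0.1, 0.2, 3.0), (-0.3, 0.05, 5.0), (0.4, -0.2, 2.0)]   # z = a*x + b*y + c
true = np.array([0.05, 0.3, -0.2])                                      # yaw, tx, ty

planes_ref, planes_other = [], []
for a, b, c in plane_params:
    xy = rng.uniform(-5.0, 5.0, size=(200, 2))
    pts = np.c_[xy, a * xy[:, 0] + b * xy[:, 1] + c]
    planes_ref.append(pts)
    # The same surface as seen from the second lidar's frame (inverse of the true transform).
    planes_other.append(yaw_transform(pts - [true[1], true[2], 0.0], -true[0], 0.0, 0.0))

fit = least_squares(extrinsic_cost, x0=np.zeros(3), args=(planes_ref, planes_other))
print(fit.x)   # expected to be close to [0.05, 0.3, -0.2]
```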
  • The lidar parameter calibration device in the foregoing embodiments may be the processor 111 in FIG. 1a or the computer device in FIG. 1b; in practical applications, other devices with corresponding functions may also execute the method embodiments of this application.
  • The lidar parameter calibration device may be implemented as a hardware structure and/or software modules. Whether a given function of the device is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of this application.
  • FIG. 11 shows a schematic structural diagram of a laser radar parameter calibration device.
  • the laser radar parameter calibration device 1100 includes an acquisition module 1101, a parameter prediction module 1102, and a calibration module 1103.
  • The acquisition module 1101 is configured to acquire the three-dimensional coordinates, in the same radar coordinate system, of multiple sampling points detected on the calibration surface by the multiple laser beams emitted by the lidar system. The three-dimensional coordinates of the multiple sampling points are obtained by inputting the measurement information of the multiple sampling points into a point cloud computing algorithm that takes the first parameter as a variable, and the three-dimensional coordinates of any one of the multiple sampling points are a function with the first parameter as the independent variable. The measurement information of the multiple sampling points is used to determine the target angles and target distances of the multiple sampling points relative to the lidar system.
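For intuition, here is a minimal sketch of what such a point cloud computing step can look like; it assumes the standard spherical-to-Cartesian relation between a beam's range, vertical angle and horizontal angle, with an additive range offset and angle offsets standing in for the first parameter. The patent defines its own algorithm by its formulas, so this is only an illustration.

```python
import numpy as np

def point_cloud(measurements, params):
    """Map per-beam measurements (range R, vertical angle theta, horizontal angle beta)
    to 3-D points in the radar frame. `params` plays the role of the first parameter:
    here an additive range offset and angle offsets (an illustrative error model only)."""
    dR, dtheta, dbeta = params
    R = measurements[:, 0] + dR
    theta = measurements[:, 1] + dtheta
    beta = measurements[:, 2] + dbeta
    x = R * np.cos(theta) * np.cos(beta)
    y = R * np.cos(theta) * np.sin(beta)
    z = R * np.sin(theta)
    return np.stack([x, y, z], axis=1)

# Example: three hypothetical measurements evaluated with trial parameter values.
meas = np.array([[12.0, 0.10, 0.50],
                 [12.5, 0.12, 0.52],
                 [13.0, 0.14, 0.54]])
print(point_cloud(meas, params=(0.2, 0.0, 0.0)))
```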
  • The parameter prediction module 1102 is configured to determine the predicted value of the first parameter that allows a cost function taking the first parameter as the independent variable to reach its optimal solution. The cost function is determined from the three-dimensional coordinates of the multiple sampling points and the fitting function for the multiple sampling points, and the predicted value of the first parameter is used to make the three-dimensional coordinates of the multiple sampling points satisfy the fitting function.
  • the calibration module 1103 is configured to assign a value to the first parameter in the point cloud computing algorithm according to the predicted value of the first parameter.
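Taken together, the three modules amount to the small pipeline sketched below: build the point cloud as a function of the first parameter, minimize a cost that measures how far the cloud is from satisfying its fitting function, and then assign the minimizer to the parameter. The flatness cost and the use of scipy's least_squares are illustrative choices, not the patent's specific formulas, and point_cloud() refers to the hypothetical function in the earlier sketch.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_plane_residuals(points):
    """Fit the least-squares plane z = a*x + b*y + c to the points themselves and
    return the per-point residuals (the cloud's deviation from its own fit)."""
    A = np.c_[points[:, :2], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return A @ coef - points[:, 2]

def calibrate(measurements, point_cloud, x0):
    """measurements: (N, 3) array of (R, theta, beta) for many sampling points on a
    planar calibration surface; point_cloud: the algorithm from the earlier sketch."""
    cost = lambda p: fit_plane_residuals(point_cloud(measurements, p))
    return least_squares(cost, x0=x0).x       # predicted value of the first parameter

# Usage on a real calibration data set (many sampling points per plane):
# predicted = calibrate(measurements, point_cloud, x0=np.zeros(3))
# calibrated_cloud = point_cloud(measurements, predicted)   # assign the predicted value
```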
  • The calibration surface is a plane and the fitting function is a plane equation.
  • The cost function is positively correlated with a first cost function; the first cost function is determined from the first distances between the multiple sampling points and the plane represented by the fitting function, and the first distance is a function with the first parameter as the independent variable. A sketch of such a term is given below.
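A hedged sketch of such a first cost term: for any trial value of the first parameter, the plane is re-fitted to the (parameter-dependent) sampling points and the sum of squared point-to-plane distances is returned. The SVD-based plane fit is an illustrative assumption.

```python
import numpy as np

def first_cost(points):
    """Sum of squared distances from an (N, 3) cloud to its own best-fit plane,
    with the plane's unit normal taken as the smallest singular direction."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                               # unit normal of the fitted plane
    distances = (points - centroid) @ normal      # signed point-to-plane distances
    return np.sum(distances ** 2)

# The cloud itself depends on the first parameter (e.g. point_cloud(meas, params)),
# so F1(params) = first_cost(point_cloud(meas, params)) is a function of that parameter.
example = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.02]])
print(first_cost(example))   # small but nonzero: the four points are nearly coplanar
```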
  • The calibration surface includes a first calibration plane and a second calibration plane, and the multiple sampling points include first sampling points detected by the lidar system on the first calibration plane and second sampling points detected on the second calibration plane. The fitting function includes a first fitting function for the first sampling points and a second fitting function for the second sampling points. The cost function is positively correlated with a second cost function, which is determined from the first fitting function, the second fitting function and the relative positional relationship between the first calibration plane and the second calibration plane; the second cost function takes the first parameter as the independent variable.
  • The relative positional relationship is used to indicate that the first calibration plane and the second calibration plane are perpendicular to each other; or that the first calibration plane and the second calibration plane are parallel to each other; or the distance between the mutually parallel first calibration plane and second calibration plane. A sketch of how such relationships can enter the cost is given below.
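As an illustration of how such relative-position information can be turned into cost terms, the sketch below builds penalties from the unit normals and offsets of two fitted planes: a parallelism term, a perpendicularity term, and a spacing term for parallel planes. The algebraic forms are assumptions made for illustration; the patent defines its own terms.

```python
import numpy as np

def fit_plane(points):
    """Return (unit normal n, offset d) of the least-squares plane n . x = d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, n @ centroid

def parallel_penalty(n1, n2):
    """Zero when the two normals are parallel or anti-parallel."""
    return 1.0 - (n1 @ n2) ** 2

def perpendicular_penalty(n1, n2):
    """Zero when the two planes are perpendicular."""
    return (n1 @ n2) ** 2

def spacing_penalty(d1, d2, measured_gap):
    """For parallel planes expressed with (approximately) the same unit normal:
    squared error between the fitted separation and the measured separation."""
    return (abs(d1 - d2) - measured_gap) ** 2

# Usage with two hypothetical clouds from parallel planes measured to be 0.5 m apart:
# n1, d1 = fit_plane(cloud_1); n2, d2 = fit_plane(cloud_2)
# penalty = parallel_penalty(n1, n2) + spacing_penalty(d1, d2, measured_gap=0.5)
```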
  • the first parameter is used to eliminate calculation errors of the point cloud calculation algorithm.
  • The first parameter includes at least one of a measurement error parameter and a coordinate transformation error parameter. The measurement error parameter is used to eliminate errors in the measurement information of the multiple sampling points, and the coordinate transformation error parameter is used to eliminate errors introduced by the coordinate transformation process, which converts the three-dimensional coordinates of the sampling points detected by different laser modules in the lidar system into the same radar coordinate system. A sketch of such a transformation follows.
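For readers unfamiliar with this step, below is a minimal sketch of a module-to-radar coordinate transformation: each module's points are rotated and translated into the common radar frame, and the rotation and translation entries are exactly the kind of quantities a coordinate transformation error parameter would correct. The generic 6-DoF parameterization is an assumption for illustration, not the patent's specific V_i/H_i model.

```python
import numpy as np

def module_to_radar(points, rotation_zyx, translation):
    """Transform an (N, 3) cloud from a module frame into the radar frame.
    rotation_zyx: (yaw, pitch, roll) in radians; translation: (tx, ty, tz) in metres."""
    yaw, pitch, roll = rotation_zyx
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return points @ (Rz @ Ry @ Rx).T + np.asarray(translation)

# Hypothetical: a module mounted 5 cm above and 2 cm forward of the radar origin,
# with a 0.3 degree yaw misalignment that the error parameters should absorb.
cloud_module = np.array([[10.0, 0.0, 0.0], [10.0, 1.0, 0.0]])
print(module_to_radar(cloud_module, rotation_zyx=(np.deg2rad(0.3), 0.0, 0.0),
                      translation=(0.02, 0.0, 0.05)))
```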
  • the laser radar parameter calibration device may be implemented in the form of a chip, and the chip may include a processor and an interface circuit.
  • the interface circuit (or communication interface) may be, for example, an input/output interface, pin or circuit on the chip.
  • the processor can execute computer instructions stored in the memory, so that the chip executes any of the foregoing method embodiments.
  • The memory may be a storage unit within the chip, such as a register or a cache, or it may be a memory located outside the chip in a computer device, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM), and so on.
  • Optionally, the processor may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to control execution of the program of any of the foregoing method embodiments.
  • the laser radar parameter calibration device can be implemented in the form of a computer device.
  • FIG. 12 is a schematic diagram of the computer device 1200 provided in this application.
  • the parameter calibration device may be the computer device 1200 shown in FIG. 12.
  • the computer device 1200 may include components such as a processor 1201 and a memory 1202.
  • The structure shown in FIG. 12 does not constitute a limitation on the computer device, which may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
  • the memory 1202 may be used to store software programs and modules.
  • the processor 1201 executes various functional applications and data processing of the computer device by running the software programs and modules stored in the memory 1202.
  • the memory 1202 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of a computer device.
  • the memory 1202 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • The processor 1201 is the control center of the computer device. It uses various interfaces and lines to connect the parts of the entire computer device, and by running or executing the software programs and/or modules stored in the memory 1202 and invoking the data stored in the memory 1202, it performs the various functions of the computer device and processes data, thereby monitoring the computer device as a whole.
  • The processor 1201 may be a central processing unit (CPU), a network processor (NP) or a combination of a CPU and an NP, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • It can implement or execute the methods, steps and logical block diagrams disclosed in this application.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps in the method disclosed in this application can be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the apparatus may include multiple processors or the processors may include multiple processing units.
  • the processor 1201 may be a single-core processor, or a multi-core or many-core processor.
  • the processor 1201 may be an ARM architecture processor.
  • the processor 1201 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, and an application program, and the modem processor mainly processes wireless communication. It can be understood that the above-mentioned modem processor may not be integrated into the processor 1201.
  • the computing device 1200 may further include a communication interface 1203 and a bus 1204.
  • the memory 1202 and the communication interface 1203 may be connected to the processor 1201 through the bus 1204.
  • the bus 1204 may be a peripheral component interconnect standard (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • The bus 1204 can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one line is used in FIG. 12, but this does not mean that there is only one bus or one type of bus.
  • the computer device 1200 can be connected to the optical detector of the lidar through the communication interface 1203, and receive the data signal sent by the optical detector.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • The computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired manner (for example, via coaxial cable, optical fiber or digital subscriber line (DSL)) or a wireless manner (for example, via infrared, radio or microwave).
  • The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
  • An embodiment of the application also provides a laser radar system, including a laser source, a photodetector, a processor, and a memory.
  • The laser source is configured to generate multiple laser beams and emit them toward a calibration surface, and the photodetector is configured to detect the echo signals of the multiple laser beams. When running the computer instructions stored in the memory, the processor executes the method described in any of the foregoing method embodiments provided in this application.
  • For the structure of the lidar system, refer to the corresponding embodiments in FIG. 1a and FIG. 1b; details are not repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A lidar parameter calibration method and apparatus, used to improve the accuracy with which the parameters of the point cloud computing algorithm in a lidar system are calibrated, and thereby improve the detection precision of the lidar system. The method includes: acquiring the three-dimensional coordinates, in the same radar coordinate system, of multiple sampling points detected on a calibration surface by multiple laser beams emitted by the lidar system, where the three-dimensional coordinates of the multiple sampling points are obtained by inputting the measurement information of the multiple sampling points into a point cloud computing algorithm that takes a first parameter as a variable, and the three-dimensional coordinates of any one of the multiple sampling points are a function with the first parameter as the independent variable; determining the predicted value of the first parameter that allows a cost function taking the first parameter as the independent variable to reach its optimal solution, where the cost function is determined from the three-dimensional coordinates of the multiple sampling points and a fitting function for the multiple sampling points; and assigning a value to the first parameter in the point cloud computing algorithm according to the predicted value of the first parameter. An apparatus corresponding to the calibration method is also provided.

Description

一种激光雷达参数标定方法及装置
本申请要求于2020年03月12日提交中国专利局、申请号为202010170340.4、发明名称为“一种激光雷达参数标定方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信技术领域,尤其涉及一种激光雷达参数标定方法及装置。
背景技术
激光探测及测距(light detection and ranging)一般用缩写LiDAR或激光雷达来表示,激光雷达可以向探测环境发射激光,检测探测环境内各采样点反射的回波信号,根据各回波信号测量各采样点的目标距离和目标角度,之后,将各目标距离和目标角度输入激光雷达预存的点云计算算法中,得到各采样点在同一坐标系中的三维坐标的测量值,即用于表示探测环境的点云的测量值。
一般在点云计算算法中设置一个或多个变量参数,通过调整对变量参数的赋值,可以改变点云计算算法得到的点云的测量值与其真实值之间的误差(简称测量误差)。为了减少测量误差,提高激光雷达系统的准确性,在进行测量作业之前,激光雷达系统可以先行对点云计算算法中的变量参数的值进行标定,之后,根据标定后的点云计算算法进行测量作业。
在现有激光雷达系统误差参数的标定方法中,首先在标定场设置M个平面标定面,激光雷达系统对各平面标定面进行扫描,将点云计算算法中的误差参数赋予测量值,得到探测到的第一点云,根据第一点云拟合M个标定面的平面方程。将点云计算算法中的误差参数作为变量,得到相应探测到的第二点云,第二点云中任一采样点的位置为以误差参数为自变量的函数。对于准确的误差参数,需要满足如下限制条件:使第二点云满足用于描述相应标定面的平面方程。因此,现有技术中,根据第二点云到拟合的平面方程所表示的平面的距离构造代价函数,该代价函数的变量为误差参数。之后通过计算代价函数的最优解,对激光雷达系统的误差参数的标定。
由于各误差参数的初值通常与各误差参数的真实值有较大差异,因此,将误差参数设置为初值得到的点云与第m个标定面的位置存在较大差异,根据该点云拟合的平面方程所描述的平面与第m个标定面平面之间存在较大差异,不利于提高激光雷达系统的点云测量精度。
发明内容
本申请实施例提供了一种激光雷达参数标定方法及装置,用于对激光雷达系统中点云计算算法中的参数进行标定,以提高激光雷达系统的点云测量精度。
第一方面,本申请实施例提供一种激光雷达参数标定方法,包括:获取激光雷达系统发射的多束激光在标定面上探测到的多个采样点在同一坐标系中的三维坐标,所述多个采 样点的三维坐标为将所述多个采样点的测量信息输入以第一参数为变量的点云计算算法中得到的,所述多个采样点中任意一个采样点的三维坐标为以所述第一参数为自变量的函数,所述多个采样点的测量信息用于确定所述多个采样点相对于所述激光雷达系统的目标角度和目标距离。确定使以所述第一参数为自变量的代价函数取到最优解的所述第一参数的预测值,所述代价函数为根据所述多个采样点的三维坐标和对所述多个采样点的拟合函数确定的,所述第一参数的预测值用于使所述多个采样点的三维坐标满足所述拟合函数。在一种可能的实现方式中,所述拟合函数为通过将用于表示所述标定面的方程逼近或拟合所述多个采样点的三维坐标得到的,所述标定面的方程为根据所述标定面的形状确定的。根据所述第一参数的预测值对所述点云计算算法中的所述第一参数进行赋值。
本申请实施例提供的激光雷达参数标定方法中,用于确定第一参数的预测值的代价函数为根据采样点的三维坐标和对采样点的拟合函数确定的,对多个采样点的拟合函数以第一参数为自变量,拟合函数的准确性由第一参数的预测值的准确性决定,由于第一参数的预测值为根据采样点的测量信息和代价函数的最优解确定的,因此,拟合函数对应的拟合面更接近标定面,第一参数的预测值比现有技术中第一参数的预设初值更加接近第一参数的真实值,因此,本申请方法有利于提高第一参数的预测值的准确性。当第一参数的预测值为第一参数的真实值时,拟合函数对应的拟合面即为标定面。
在一种可能的实现方式中,所述标定面为平面,所述拟合函数为平面方程。
在一种可能的实现方式中,所述代价函数与第一代价函数正相关;所述第一代价函数为根据所述多个采样点与所述拟合函数所表示的平面之间的第一距离确定的,所述第一距离为以第一参数为自变量的函数。
在一种可能的实现方式中,所述标定面包括第一标定平面和第二标定平面,所述多个采样点包括所述激光雷达系统在所述第一标定平面上探测到的第一采样点和在所述第二标定平面上探测到的第二采样点,所述拟合函数包括对所述第一采样点的第一拟合函数和对所述第二采样点的第二拟合函数。所述代价函数与第二代价函数正相关,所述第二代价函数为根据第一拟合函数、第二拟合函数和所述第一标定平面与所述第二标定平面之间的相对位置关系确定的,所述第二代价函数以所述第一参数为自变量。
在一种可能的实现方式中,所述相对位置关系用于指示:所述第一标定平面与所述第二标定平面相互垂直;或者,所述第一标定平面与所述第二标定平面相互平行;或者,相互平行的所述第一标定平面与所述第二标定平面之间的距离。
在一种可能的实现方式中,所述第一参数用于消除所述点云计算算法的计算误差。
在一种可能的实现方式中,所述第一参数包括测量误差参数和坐标变换误差参数中的至少一种,所述测量误差参数用于消除所述多个采样点的测量信息的误差,所述坐标变换误差参数用于消除坐标变换过程所引入的误差,所述坐标变换过程用于将所述激光雷达系统中不同激光模组探测到的采样点的三维坐标转换到所述同一坐标系中。
第二方面,本申请实施例提供一种激光雷达参数标定装置,包括:获取模块,用于获取激光雷达系统发射的多束激光在标定面上探测到的多个采样点在同一坐标系中的三维坐标,所述多个采样点的三维坐标为将所述多个采样点的测量信息输入以第一参数为变量的 点云计算算法中得到的,所述多个采样点中任意一个采样点的三维坐标为以所述第一参数为自变量的函数,所述多个采样点的测量信息用于确定所述多个采样点相对于所述激光雷达系统的目标角度和目标距离;参数预测模块,用于确定使以所述第一参数为自变量的代价函数取到最优解的所述第一参数的预测值,所述代价函数为根据所述多个采样点的三维坐标和对所述多个采样点的拟合函数确定的,所述第一参数的预测值用于使所述多个采样点的三维坐标满足所述拟合函数;标定模块,用于根据所述第一参数的预测值对所述点云计算算法中的所述第一参数进行赋值。
在一种可能的实现方式中,所述标定面为平面,所述拟合函数为平面方程。
在一种可能的实现方式中,所述代价函数与第一代价函数正相关;所述第一代价函数为根据所述多个采样点与所述拟合函数所表示的平面之间的第一距离确定的,所述第一距离为以第一参数为自变量的函数。
在一种可能的实现方式中,所述标定面包括第一标定平面和第二标定平面,所述多个采样点包括所述激光雷达系统在所述第一标定平面上探测到的第一采样点和在所述第二标定平面上探测到的第二采样点,所述拟合函数包括对所述第一采样点的第一拟合函数和对所述第二采样点的第二拟合函数;所述代价函数与第二代价函数正相关,所述第二代价函数为根据第一拟合函数、第二拟合函数和所述第一标定平面与所述第二标定平面之间的相对位置关系确定的,所述第二代价函数以所述第一参数为自变量。
在一种可能的实现方式中,所述相对位置关系用于指示:所述第一标定平面与所述第二标定平面相互垂直;或者,所述第一标定平面与所述第二标定平面相互平行;或者,相互平行的所述第一标定平面与所述第二标定平面之间的距离。
在一种可能的实现方式中,所述第一参数用于消除所述点云计算算法的计算误差。
在一种可能的实现方式中,所述第一参数包括测量误差参数和坐标变换误差参数中的至少一种,所述测量误差参数用于消除所述多个采样点的测量信息的误差,所述坐标变换误差参数用于消除坐标变换过程所引入的误差,所述坐标变换过程用于将所述激光雷达系统中不同激光模组探测到的采样点的三维坐标转换到所述同一坐标系中。
第三方面,本申请实施例提供一种计算机设备,包括处理器和存储器,所述处理器在运行所述存储器存储的计算机指令时,执行如第一方面或第一方面任一种实现方式所述的方法。
第四方面,本申请实施例提供一种激光雷达系统,包括激光源、光探测器、处理器和存储器;所述激光源用于产生并向标定面发射多束激光,所述光探测器用于检测所述多束激光的回波信号;所述处理器在运行所述存储器存储的计算机指令时,执行如第一方面或第一方面任一种实现方式所述的方法。
第五方面,本申请实施例提供一种计算机可读存储介质,包括指令,当所述指令在计算机上运行时,使得计算机执行如第一方面或第一方面任一种实现方式所述的方法。
第六方面,本申请实施例提供一种计算机程序产品,包括指令,当所述指令在计算机上运行时,使得计算机执行如第一方面或第一方面任一种实现方式所述的方法。
第七方面,本申请实施例提供一种芯片,所述芯片包括:处理器和存储器,所述处理 器在运行存储器中的计算机程序或指令时,实现如第一方面或第一方面任一种实现方式所述的方法。
附图说明
图1a是本发明激光雷达系统的一个实施例示意图;
图1b是本发明激光雷达系统的另一个实施例示意图;
图2是点云计算算法所基于的一个原理示意图;
图3a是以参考激光束起点为原点建立的激光雷达系统坐标系;
图3b是图3a的侧视图;
图3c是图3a的俯视图;
图4是现有标定场和内部标定面的一个示意图;
图5是本申请激光雷达参数标定方法一个实施例示意图;
图6a是本申请标定场和内部标定面的一个示意图;
图6b是根据图6a中一个标定面的点云真值绘制的采样点的正视图;
图6c是根据图6a中一个标定面的点云真值绘制的采样点的侧视图;
图7a至图7e依次是按照现有激光雷达参数标定方法得到的误差参数D i、Δθ i、Δβ i、V i、H i的标定结果示意图;
图8a~图8e依次是按照本申请实施例提供的激光雷达参数标定方法得到的误差参数D i、Δθ i、Δβ i、V i、H i的标定结果示意图;
图9是将激光雷达系统中的误差参数赋值0得到的各平面反射板的点云示意图;
图10示出了按照本申请方法进行标定前后的平面反射板的点云示意图;
图11是本申请实施例提供的激光雷达参数标定装置的一个结构示意图;
图12是本申请实施例提供的计算机设备的一个结构示意图。
具体实施方式
下面结合附图,对本申请的实施例进行描述。
激光探测及测距(light detection and ranging)一般也用缩写“LiDAR”或激光雷达(laser radar)来表示,下面将激光探测及测距称作激光雷达。三维物体扫描仪、自动或半自动驾驶车辆、安全摄像机等装置可以利用激光雷达系统来实现物体扫描。
下面对本申请提供的激光雷达系统进行介绍。
在一种可能的实现方式方式中,激光雷达系统可以指单个激光雷达。相应的,图1a示出了本申请实施例提供的激光雷达系统的一个结构示意图。参考图1a,激光雷达系统100包括处理器111、激光源112、光探测器113和存储器114等。
其中,激光源112包括一个或多个激光器(图1a中未具体示出),激光源112可以生成激光,并向探测环境发射激光。光探测器113用于检测激光的回波信号(或称反射的激光),基于接收到回波信号,生成并输出数据信号。
处理器111用于接收光探测器113输出的数据信号,根据接收到的数据信号确定激光 探测到的采样点的测量信息,采样点的测量信息用于确定采样点相对于激光雷达系统的目标角度和目标距离等。处理器111还可以根据采样点的测量信息确定探测环境的点云,包括采样点在激光雷达系统坐标系中的三维坐标等。在一种可能的实现方式中,处理器111中的处理器可以具体为数字信号处理器(digital signal processor,DSP)、现场可编程门阵列(field programmable gate array,FPGA)、中央处理器(central processing unit,CPU)或其它处理器。
激光雷达系统100还可以包括存储器114,用于存储可执行程序,处理器111可以执行存储器114中的可执行程序,得到探测环境的点云。
在一种可能的实现方式中,激光雷达系统100还可以包括机械装置(图1a中未示出),该机械装置用于改变激光源112发射激光的角度,改变光探测器113检测回波信号的角度。
在一种可能的实现方式方式中,激光雷达系统可以包括计算机设备和一个或多个激光雷达。图1a中处理器111的全部或部分功能可以由计算机设备(例如服务器、台式电脑、笔记本电脑、移动终端等)实现。相应的,图1b示出了本申请实施例提供的激光雷达系统的另一个结构示意图,图1b以激光雷达系统包括一个激光雷达为例。激光雷达系统100包括计算机设备120和激光雷达110,激光雷达110至少包括图1a对应的实施例中的激光源112和光探测器113,计算机设备120至少包括图1a对应的实施例中的处理器111和存储器114,相关描述可以参考前述实施例,此处不再赘述。激光雷达110与计算机设备120相连,可以相互传输数据信号,例如,光探测器113基于接收到回波信号,可以向计算机设备120发送数据信号,计算机设备120可以根据接收到的数据信号得到激光雷达110探测到的点云。
以图1a所示的激光雷达系统为例,激光雷达系统可以向探测环境发射激光,检测探测环境的采样点反射的回波信号,当检测到某束激光的回波信号时,可以计算该激光的测量信息,或者说,激光探测到的采样点的测量信息。示例性的,激光雷达系统可以根据飞行时长计算采样点与激光的出射点之间的距离(称作激光的探测距离);激光雷达系统还可以计算激光的发射角度,例如水平出射角和垂直出射角。之后,激光雷达系统可以将激光的测量信息(即探测距离和发射角度)输入点云计算算法中,得到各采样点的三维坐标。该点云计算算法可以为激光雷达系统中预存的一段程序,该程序用于输出激光雷达系统探测到的各采样点的三维坐标。
三维坐标,是指通过相互独立的三个变量构成的具有一定意义的点。它表示空间的点,在不同的三维坐标系下,具有不同的表达形式。本申请实施例以三维坐标系为三维笛卡尔坐标系、三维坐标为三维笛卡尔坐标为例。以点云计算算法的输入包括激光的水平出射角、垂直出射角和探测距离为例,图2为点云计算算法所基于的原理示意图,图2中的三维坐标系为激光雷达系统的坐标系(简称雷达坐标系),坐标系原点o代表激光雷达系统中激光的出射位置,带有箭头的虚线代表由出射位置o发射的激光,示例性的,参考图2,采样点T的三维坐标的计算公式如下:
Figure PCTCN2021080111-appb-000001
其中,β为激光的水平出射角,θ为激光的垂直出射角,R为激光的探测距离,在图2中用带有箭头的虚线的长度代表激光的探测距离R。
但是,点云计算算法得到的采样点的三维坐标与采样点的真实位置之间一般存在误差(称作测量误差),公式(1)并未考虑相应的测量误差,会导致激光雷达系统计算得到的采样点的测量位置与采样点的真实位置之间的差异较大,降低激光雷达系统探测结果的准确性。为了提高计算结果的准确性,需要向公式(1)中引入用于消除相应测量误差的参数(称作误差参数)。
示例性的,激光雷达系统一般存在以下几种测量误差和相应的误差参数:
1)激光雷达系统计算激光的发射角度的过程一般存在测量误差(具体称作角度误差),可以向公式(1)中引入角度误差参数,以消除角度误差,得到准确的发射角度。
2)激光雷达系统计算激光的探测距离的过程一般存在测量误差(称作距离误差),可以向公式(1)中引入距离误差参数,以消除距离误差,得到准确的探测距离。
3)激光雷达系统可分为单线束激光雷达系统与多线束激光雷达系统。单线束激光雷达系统扫描一次只产生一束激光扫描线,多线束激光雷达系统扫描一次可产生多束激光扫描线。对于多模组激光雷达对应的激光雷达系统或多个激光雷达组成的激光雷达系统,激光雷达系统中的每个模组或每个激光雷达包括一个或多个激光器和光探测器,一般一次可产生多束激光。不同模组或不同激光雷达中的激光器的发射角度一般不同,例如,不同模组或不同激光雷达发射的激光的垂直出射角不同。各模组或各激光雷达可以分别向探测环境发射激光,按照公式(1)将得到在相应模组或激光雷达的坐标系(简称模组坐标系)下的测量位置。模组坐标系的原点一般由激光的出射位置确定,由于不同模组或激光雷达的激光的发射位置不同,不同模组坐标系间一般不重合,一般选择其中的一个模组坐标系作为统一的坐标系(称作雷达坐标系),为了根据各模组或各激光雷达对激光的测量信息计算探测环境中各采样点在同一坐标系(称作雷达坐标系)中的位置,可以向点云计算算法中引入模组误差参数,以消除其他模组坐标系与雷达坐标系之间的差异(称作模组误差)。
下面对激光雷达系统的可能的误差参数模型进行介绍。
如图3a所示,以参考激光束起点为原点o建立激光雷达系统坐标系xyz,x轴和z轴指向分别由激光束水平出射角和垂直出射角定义,y轴指向由右手定则确定。第i线激光(或称第i束激光)所在直线(带有箭头的虚线)在xy平面的投影为BC所在直线,过o点向BC所在直线作垂线,垂足点为B点,过B点向第i线激光所在直线作垂线,垂足点为A点。图3b为图3a的侧视图,视线方向为oB方向。图3c为图3a的俯视图,视线方向为z轴反方向。
如图3a至图3c所示,第i线激光的点云计算算法如下,用于计算第i线激光的采样 点T i在雷达坐标系中的三维坐标:
Figure PCTCN2021080111-appb-000002
其中,各参数定义如下:
1)
Figure PCTCN2021080111-appb-000003
表示A点到采样点T i的距离的真实值(或称校正值);
2)在1)中的
Figure PCTCN2021080111-appb-000004
表示第i线激光(束)的起点(D点)到采样点T i的距离的真实值,R' i表示第i线激光的探测距离的测量值,k和ΔR'均表示距离误差参数,将k称作距离校正因子,将ΔR'称作距离偏置因子;
3)D i也表示距离误差参数,将D i称作距离补偿因子,D i的绝对值表示线段DA的长度,A点位于第i线激光束上时,D i为负值,A点位于第i线激光束的反向延长线上时,D i为正值;
4)V i为模组误差参数,表示雷达坐标系的原点o到第i线激光束的垂直距离在侧视平面的投影,其绝对值为线段AB的长度,A点在z轴的坐标为正时,V i取正值,反之,V i取负值;
5)θ i=θ' i+Δθ i表示第i线激光垂直出射角的真实值,θ' i表示第i线激光垂直出射角的测量值,Δθ i表示第i线激光垂直出射角的角度误差参数,角度从线段BC所在直线向z轴正半轴为正,反之为负;
6)H i为模组误差参数,表示原点o到第i线激光在x-y平面投影的垂直距离,H i的绝对值为oB的长度,B点位于雷达坐标系x-y平面的一二象限时,H i取正值,反之H i取负值;
7)β i=β' i+Δβ i表示第i线激光水平出射角的真实值,β' i表示第i线激光水平出射角的测量值,Δβ i表示第i线激光水平出射角的角度误差参数,角度从x轴正半轴向y轴正半轴为正,反之为负。
为了提高激光雷达系统测量的点云的准确性,激光雷达系统在进行测量作业之前,可以先行对激光雷达系统点云计算算法中的误差参数进行校准(或称作标定)。
在公式(2)的各参数中,已知参数为R' i、θ' i和β' i,未知待标定的误差参数为:k、ΔR'、D i、Δθ i、Δβ i、V i和H i,其中,不同模组的激光束对应的点云计算算法中,k和ΔR'相同,而D i、Δθ i、Δβ i、V i和H i可能不同,因此,若的激光线数为N,那么激光雷达系统各模组的点云计算算法中,待标定的误差参数个数为5N+2个。若以激光雷达系统中一个模组的坐标系作为雷达坐标系,将该模组发射的激光束称作参考激光束,那么参考激光束对应的误差参数D i、Δθ i、Δβ i、V i和H i均为0,激光雷达系统各模组的点云计算算法中,待标定的误差参数个数为5N-3个。
为了对激光雷达系统中的上述误差参数进行标定,现有技术提供一种激光雷达参数标定方法。如图4所示,在标定场设置M个平面标定面(图4中以设置标定面1、标定面2和标定面3为例),假设激光雷达系统包括I个模组,各模组对各平面标定面进行扫描,假设,每个模组对单个标定面进行扫描的过程中,在不同的位置或角度发射J束激光,那么 激光雷达系统对M个标定面共发射M·I·J束激光,检测到M·I·J个采样点。其中,M和I为正整数,J为大于2的正整数。
现有激光雷达参数标定方法包括如下步骤:
1)将待标定的误差参数k、ΔR'、D i、Δθ i、Δβ i、V i和H i置0,基于公式(2)反演平面标定面的点云初值,每个平面标定面的点云包括I·J个采样点的位置,激光雷达系统得到的点云一共包括M·I·J个采样点的位置。
2)假定m为小于M的任意一个正整数,用于描述第m个标定面平面的平面方程为:A m·x+B m·y+C m·z+D=0,可以对各平面标定面的点云初值分别进行平面拟合,获得各平面方程中的拟合平面参数A m、B m和C m
3)以待标定的误差参数k、ΔR′、D i、Δθ i、Δβ i、V i、H i为自变量,根据公式(2)反演点云函数,点云函数包括M·I·J个采样点的位置函数,即每个采样点的位置为以k、ΔR′、D i、Δθ i、Δβ i、V i、H i为自变量的函数。
4)根据点云函数中各位置函数对应的采样点到各平面方程对应的平面的均方距离构造代价函数:
Figure PCTCN2021080111-appb-000005
其中,
Figure PCTCN2021080111-appb-000006
表示点云中的采样点(m,i,j)到第m个标定面的平面方程对应的平面的距离。
以公式(3)所示代价函数最小为准则,通过数值优化估计误差参数k、ΔR′、D i、Δθ i、Δβ i、V i、H i
公式(3)对应的理论约束条件为:第m个标定面的点云函数中,各位置函数对应的采样点位于第m个标定面平面中。
但是,公式(3)对应的实际约束条件为:第m个标定面的点云函数中,各位置函数对应的采样点位于第m个标定面的拟合平面中,其中,确定第m个标定面的拟合平面所依据的点云为将公式(2)中各误差参数设置为初值得到的,由于各误差参数的初值通常与各误差参数的真实值有较大差异,因此,将误差参数设置为初值得到的点云与第m个标定面的位置存在较大差异,根据该点云拟合的平面方程所描述的平面与第m个标定面平面之间存在较大差异。可见,现有激光雷达系统通过计算公式(3)的代价函数的最优解来对各误差参数进行标定后,得到的点云与M个标定面的位置存在较大差异,对误差参数的标定精度较差,进而降低了激光雷达系统探测的点云的精度。
为了提高对激光雷达系统的误差参数的标定精度,本申请提供激光雷达参数标定方法另一个实施例,下面对该实施例进行介绍。
图5为本申请激光雷达参数标定方法一个实施例示意图,参考图5,以图1a所示的激光雷达系统为例,本申请实施例激光雷达参数标定方法可以包括如下步骤:
501、获取激光雷达系统发射的多束激光在标定面上探测到的多个采样点在同一雷达坐标系中的三维坐标;
在对点云计算算法中的参数(称作第一参数)进行标定的过程中,激光雷达系统100的激光源112可以向标定场中设置的一个或多个标定面发射多束激光,并且基于激光雷达系统100的光探测器113接收到多束激光的回波信号,处理器111可以获取多束激光在标定面上探测到的多个采样点的测量信息,多个采样点中任意一个采样点的测量信息用于确定该采样点相对于激光雷达系统100的目标角度(例如前述误差参数模型中的θ' i和β' i)和目标距离(例如前述误差参数模型中的R' i)。
激光雷达系统中存储有点云计算算法的程序,该程序的输入包括采样点的测量信息,输出包括采样点的三维坐标。处理器111获取多个采样点的测量信息之后,可以以点云计算算法中的第一参数为变量,根据多个采样点的测量信息执行该点云计算算法,得到多个采样点在同一雷达坐标系中的三维坐标,多个采样点中任意一个采样点的三维坐标为以第一参数为自变量的函数。之后,激光雷达参数标定装置可以获取多个采样点的三维坐标。
502、确定使以第一参数为自变量的代价函数取到最优解的第一参数的预测值;
得到多个采样点的三维坐标之后,激光雷达参数标定装置可以确定使以第一参数为自变量的代价函数取到最优解时第一参数所对应的值,为了便于描述,将代价函数取到最优解时第一参数对应的的值称作第一参数的预测值。
其中,该代价函数为根据多个采样点的三维坐标和对多个采样点的拟合函数确定的,并且,多个采样点的三维坐标满足该拟合函数时,代价函数取到最优解,或者说,第一参数的预测值用于使多个采样点的三维坐标满足拟合函数。
在一种可能的实现方式中,对多个采样点的拟合函数为通过将用于表示标定面的方程逼近或拟合多个采样点的三维坐标得到的。在一种可能的实现方式中,用于表示标定面的方程为根据标定面的表面形状确定的,对于表面形状相同的标定面,其方程可以相同。例如,对于平面标定面,用于表示该标定面的方程为平面方程。用于表示标定面的方程包括待定参数,待定参数的值由标定面相对于激光雷达系统的位置和角度等决定,或者说,通过逼近或拟合标定面上的多个采样点的三维坐标点得到。
503、根据第一参数的预测值对点云计算算法中的第一参数进行赋值。
确定第一参数的预测值后,激光雷达参数标定装置可以根据第一参数的预测值对点云计算算法中的第一参数进行赋值,以完成对点云计算算法中第一参数的标定。
完成点云计算算法中第一参数的标定之后,激光雷达系统100可以向探测环境发射激光,基于检测到激光的回波信号,将探测到的采样点的测量信息输入该点云计算算法,得到探测环境的点云。
在一种可能的实现方式中,第一参数用于消除点云计算算法的计算误差,有利于处理器根据采样点的测量信息和点云计算算法得到采样点的真实三维坐标。
示例性的,第一参数包括测量误差参数和坐标变换误差参数中的至少一种。
测量误差参数用于消除多个采样点的测量信息的误差,例如前述角度误差参数和距离误差参数,更为具体的,角度误差参数可以为前述误差参数模型中的Δθ i和Δβ i等,距离误差参数可以为前述误差参数模型中的k、ΔR'、D i等。
坐标变换误差参数用于消除坐标变换过程所引入的误差,坐标变换过程用于将激光雷 达系统中不同激光模组探测到的采样点的三维坐标转换到同一雷达坐标系中,坐标变换误差参数可以为前述模组误差参数,更为具体的,可以为前述误差参数模型中的V i和H i等。
第一参数的真实值一般随着激光雷达系统的使用而发生变化,这种变化通常是随机的,难以预测,因此,为了保证激光雷达系统探测到点云的准确性,需要经常对点云计算算法中的第一参数进行标定。
用于确定第一参数的预测值的代价函数为根据采样点的三维坐标和采样点的拟合面的方程确定的。现有技术根据第一参数的预设初值来确定拟合函数,第一参数的初值为固定设置的值,但是,由于第一参数的真实值一般随着激光雷达系统的使用而发生变化,因此第一参数的初值与第一参数的真实值之间的差异通常较大,导致拟合面与标定面之间差异较大,进而导致第一参数的预测值准确性较低。而本申请实施例提供的激光雷达参数标定方法中,用于确定第一参数的预测值的代价函数为根据采样点的三维坐标和对采样点的拟合函数确定的,对多个采样点的拟合函数以第一参数为自变量,拟合函数的准确性由第一参数的预测值的准确性决定,由于第一参数的预测值为根据采样点的测量信息和代价函数的最优解确定的,因此,拟合函数对应的拟合面更接近标定面,第一参数的预测值比现有技术中第一参数的预设初值更加接近第一参数的真实值,因此,本申请方法有利于提高第一参数的预测值的准确性。当第一参数的预测值为第一参数的真实值时,拟合函数对应的拟合面即为标定面。
在一种可能的实现方式中,设置的标定面可以为平面,拟合函数可以为平面方程。示例性的,平面方程的形式可以如A m·x+B m·y+C m·z+D=0,其中A m、B m、C m、D中至少一个参数以第一参数为自变量。
在一种可能的实现方式中,代价函数与第一代价函数正相关,第一代价函数为根据多个采样点与拟合函数所表示的平面之间的第一距离确定的,由于多个采样点的三维坐标和拟合函数均以第一参数为自变量,因此,第一距离为以第一参数为自变量的函数。
假设在标定场中设置的标定面可以包括N个标定面,N为正整数,那么,激光雷达系统发射的多束激光探测到的多个采样点分布在N个标定面上,对多个采样点的拟合函数包括对应于N个标定面的N个拟合函数,每个拟合函数为根据分布在相应标定面上的采样点的三维坐标确定的。
以在标定场中设置两个标定面为例,为了便于描述,将这两个标定面称作第一标定面和第二标定面。那么,多束激光探测到的多个采样点包括在第一标定面上探测到的多个采样点(称作第一采样点)和在第二标定面上探测到的多个采样点(称作第二采样点),拟合函数包括对第一采样点的第一拟合函数和对第二采样点的第二拟合函数。
在一种可能的实现方式中,代价函数与第二代价函数正相关,第二代价函数为根据第一拟合函数、第二拟合函数和第一标定面与第二标定面之间的相对位置关系确定的。由于第一拟合函数和第二拟合函数以第一参数为自变量,因此,第二代价函数以第一参数为自变量。
在一种可能的实现方式中,第一标定面和第二标定面均为平面,因此,可以将第一标定面和第二标定面分别称作第一标定平面和第二标定平面。
在一种可能的实现方式中,第一标定面与第二标定面之间的相对位置关系用于指示第一标定平面与第二标定平面相互垂直,或者用于指示第一标定平面与第二标定平面相互平行,或者用于指示相互平行的第一标定平面与第二标定平面之间的距离。
下面对本申请激光雷达参数标定方法的一种可能的具体实施例进行介绍。
参考图6a,假设对标定场和内部的标定面按如下方式进行设置:
1、标定场开阔,有利于标定面平面点云的提取,例如,其尺寸不小于10m*10m;
2、标定场内设置M个标定面,例如,M不少于10个。
3、标定面的尺寸不小于1m*1m,并且其高度可调,调节范围不小于1m,以使各束激光均能收到回波;
4、标定面的俯仰角可调,调节范围例如在-60°至60°之间;
5、各标定面面向激光雷达系统的表面(简称标定面的表面)为平面,且其反射率均匀;
6、将标定面1的表面与标定面2的表面设置为相互平行,并且测量二者之间的间距,假设间距为D,间距测量精度1mm;
7、将标定面3的表面设置为垂直于平面标定面1的表面;
8、对其他M-3个标定面的位置进行设置,使得不同标定面的位置不同,且不同标定面的表面的法向量不同。
图6a中仅示出3个标定面(标定面1、标定面2和标定面3),未示出其他M-3个标定面,本申请实施例不限定其他M-3个标定面在标定场中的位置和角度。M个标定面之间的位置不同,或角度不同,或位置和角度均不同。标定场中的标定面可以不同时设置,而是依次设置。例如,首先在标定场中设置一个或多个标定面,利用激光雷达系统对设置的标定面进行扫描,之后收回标定场中的标定面,设置其他标定面,或者,改变标定场中标定面的位置和/或角度,利用激光雷达系统对新设置的标定面进行扫描,直至完成对M个不同位置和/或角度的标定面的扫描。
按照上述方式对标定场中的标定面进行设置后,可以操作激光雷达系统发射激光,对标定场中各标定面的表面进行扫描,并对各标定面的表面反射的回波信号进行检测,得到各激光的测量信息,包括探测距离的测量值R′ i、垂直角度的测量值θ′ i和水平角度的测量值β′ i。假设激光雷达系统包括I个模组,各模组对各平面标定面进行扫描,假设,每个模组对单个标定面进行扫描的过程中,在不同的位置或角度发射J束激光,那么激光雷达系统对M个标定面共发射M·I·J束激光,检测到M·I·J个采样点。
可以将激光雷达系统设置为标定模式,具体的,可以将处理器中的点云计算算法中的第一参数设置为变量,之后根据各激光的测量信息和点云计算算法得到各激光探测到的采样点的三维坐标。
继续以前述误差参数模型为例,假设点云计算算法中的第一参数包括k、ΔR′、D i、Δθ i、Δβ i、V i、H i
激光雷达参数标定装置通过求解下式所示的代价函数的最优解,对误差参数k、ΔR′、D i、Δθ i、Δβ i、V i、H i进行估计:
Figure PCTCN2021080111-appb-000007
Figure PCTCN2021080111-appb-000008
公式(4)对应的代价函数为根据点云计算算法得到的采样点的三维坐标和对M个标定面上的采样点的拟合函数确定的。具体的,公式(4)对应的代价函数与函数F M、F P、F R、F V正相关,下面分别对这四个函数进行介绍。
其中,F M为基于平面限定构造的代价函数:
Figure PCTCN2021080111-appb-000009
公式(5)中,(x m,i,j、y m,i,j、z m,i,j)代表激光(m,i,j)检测到的采样点的位置,该位置可以通过公式(2)得到,并且以公式(2)中的参数ΔR′、k、D i、V i、Δθ i、H i、Δβ i作为变量,因此该采样点的位置为以ΔR′、k、D i、V i、Δθ i、H i、Δβ i为自变量的函数(或称位置函数)。
A m·x+B m·y-z+C m=0为对第m个标定面上的采样点的拟合函数,或者说,参数为以第一参数为变量的平面方程,该平面方程中的参数A m、B m、C m是在最小二乘准则下,根据第m个标定面上的采样点的三维坐标计算得到的。A m、B m、C m的表达式如下所示:
Figure PCTCN2021080111-appb-000010
其中,P用于表示如下矩阵:
Figure PCTCN2021080111-appb-000011
Q用于表示矩阵:
Figure PCTCN2021080111-appb-000012
公式(4)中的F P为基于标定面1的表面和标定面2的表面平行构造的代价函数:
F_P = (A_1 - A_2)^2 + (B_1 - B_2)^2;  (7)
公式(4)中的F R为基于标定面1的表面和标定面2的表面的间距构造的代价函数:
Figure PCTCN2021080111-appb-000013
公式(4)中的F V为基于标定面1的表面和标定面3的表面垂直构造的代价函数:
F_V = (A_1·A_3 + B_1·B_3 + 1)^2;  (9)
下面对本申请具体实施例的有益效果进行分析:
1)平面方程A m·x+B m·y-z+C m=0对应的平面与第m个标定面的表面之间的差异由A m、B m、C m的准确性决定,A m、B m、C m的准确性由ΔR′、k、D i、V i、Δθ i、H i、Δβ i的估计值的准确性决定。和现有技术中,平面方程中的参数由ΔR′、k、ΔD i、V i、Δθ i、H i、Δβ i的初值决定相比,由于本申请实施例得到的ΔR′、k、D i、V i、Δθ i、H i、Δβ i的估计值为求解最优化问题得到的,比ΔR′、k、D i、V i、Δθ i、H i、Δβ i的初值更接近其真值,因此,本申请实施例有利于提高A m、B m、C m的准确性,减少平面方程A m·x+B m·y-z+C m=0对应的平面与第m个标定面的表面之间的差异,提高对激光雷达系统误差参数的标定精度,进而提高 激光雷达系统探测的点云的精度。
2)代价函数中待标定误差参数较多,而现有技术仅仅采用平面限定条件构造代价函数,以对激光雷达系统的误差参数进行寻优,使得最优化过程容易陷入局部最优解。本申请实施例将最优化过程所使用的单一平面限定条件,扩展为组合限定条件,包括点云处于平面内的限定条件、平面平行的限定条件、平面垂直的限定条件和平面距离的限定条件,放宽了对误差参数初值精度的要求,有利于使得最优化过程得到全局最优解。
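To make the combined-constraint objective concrete, here is a hedged sketch, in Python with scipy, of an objective in the spirit of formula (4): a flatness term for every calibration plane plus parallelism, spacing and perpendicularity terms linking planes 1, 2 and 3. The residual forms, their implicit equal weighting and the optimizer are illustrative assumptions; the patent's own terms are defined by formulas (5) to (9).

```python
import numpy as np
from scipy.optimize import least_squares

def fit_plane(points):
    """Unit normal n and offset d of the least-squares plane n . x = d."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    return n, n @ c

def combined_residuals(params, measurements_per_plane, point_cloud, gap_12):
    """measurements_per_plane[m] holds the raw measurements of calibration plane m
    (at least three planes, ordered as in Fig. 6a); point_cloud(meas, params) maps
    them to 3-D points, as in the earlier hypothetical sketch."""
    clouds = [point_cloud(meas, params) for meas in measurements_per_plane]
    fits = [fit_plane(cloud) for cloud in clouds]

    res = []
    for cloud, (n, d) in zip(clouds, fits):
        res.append(cloud @ n - d)                      # F_M: each cloud lies on its plane
    (n1, d1), (n2, d2), (n3, d3) = fits[0], fits[1], fits[2]
    res.append([1.0 - (n1 @ n2) ** 2])                 # F_P: planes 1 and 2 parallel
    res.append([abs(d1 - d2) - gap_12])                # F_R: measured spacing D
    res.append([n1 @ n3])                              # F_V: planes 1 and 3 perpendicular
    return np.concatenate([np.atleast_1d(r) for r in res])

# params_hat = least_squares(combined_residuals, x0,
#                            args=(measurements_per_plane, point_cloud, gap_12)).x
```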
下面分别通过仿真实验和激光雷达系统的实际探测结果来验证本申请实施例的效果。
一、首先通过仿真结果介绍本申请实施例相比于现有技术的优势。仿真参数如下所示:
标定面的表面为平面,激光线数:32;垂直视场范围:30°-61°;垂直出射角间隔:1°;垂直出射角测量误差分布:高斯分布、均值为0、标准差为0.5°;水平视场范围:10°至61°之间;水平出射角间隔:0.2°;水平出射角测量误差分布:高斯分布、均值为0、标准差0.125°;距离偏置因子ΔR′:0.2m;距离校正因子k:0.001。
图6b为根据图6a中一个标定面的点云真值绘制的标定面的采样点的正视图,图6c为根据图6a中一个标定面的点云真值绘制的标定面的采样点的正视图。其中,根据标定面对应的点云真值绘制的采样点的侧视图如图6c中的直线6-1所示,根据误差参数未知情况下反演出的点云预测值绘制的采样点的侧视图如图6b中的线段集6-2所示。通过图6c可以看出,误差参数未知会导致用于描述标定平面上的采样点的点云错位,用于描述标定平面的点云厚度显著增加,降低激光雷达系统得到的点云的准确性。
图7a~图7e依次为按照现有激光雷达参数标定方法得到的误差参数D i、Δθ i、Δβ i、V i、H i的标定结果。其中,图7a中,折线7a-1上的点的纵坐标代表D i的预测值,折线7a-2上的点的纵坐标代表D i的真值,折线7a-3上的点的纵坐标代表D i的标定误差(即真值与预测值之间的差值);图7b中,折线7b-1上的点的纵坐标代表Δθ i的真值,折线7b-2上的点的纵坐标代表Δθ i的预测值,折线7b-3上的点的纵坐标代表Δθ i的标定误差(即真值与预测值之间的差值);图7c中,折线7c-1上的点的纵坐标代表Δβ i的标定误差(即真值与预测值之间的差值),折线7c-2上的点的纵坐标代表Δβ i的真值,折线7c-3上的点的纵坐标代表Δβ i的预测值;图7d中,折线7d-1上的点的纵坐标代表V i的标定误差(即真值与预测值之间的差值),折线7d-2上的点的纵坐标代表V i的真值,折线7d-3上的点的纵坐标代表V i的预测值;图7e中,折线7e-1上的点的纵坐标代表H i的标定误差(即真值与预测值之间的差值),折线7e-2上的点的纵坐标代表H i的真值,折线7e-3上的点的纵坐标代表H i的预测值。
图8a~图8e依次为按照本申请实施例提供的激光雷达参数标定方法得到的误差参数D i、Δθ i、Δβ i、V i、H i的标定结果。其中,图8a中,折线8a-1上的点的纵坐标代表D i的预测值,折线8a-2上的点的纵坐标代表D i的真值,折线8a-1上的点与折线8a-2上的点重合,折线8a-3上的点的纵坐标代表D i的标定误差(即真值与预测值之间的差值);图8b中,折线8b-1上的点的纵坐标代表Δθ i的真值,折线8b-2上的点的纵坐标代表Δθ i的预测值,折线8b-1上的点与折线8b-2上的点重合,折线8b-3上的点的纵坐标代表Δθ i的标定误差(即真值与预测值之间的差值);图8c中,折线8c-1上的点的纵坐标代表Δβ i的真值,折线8c-2上的 点的纵坐标代表Δβ i的预测值,折线8c-1上的点与折线8c-2上的点重合,折线8c-3上的点的纵坐标代表Δβ i的标定误差(即真值与预测值之间的差值);图8d中,折线8d-1上的点的纵坐标代表V i的真值,折线8d-2上的点的纵坐标代表V i的预测值,折线8d-1上的点与折线8d-2上的点重合,折线8d-3上的点的纵坐标代表V i的标定误差(即真值与预测值之间的差值);图8e中,折线8e-1上的点的纵坐标代表H i的真值,折线8e-2上的点的纵坐标代表H i的预测值,折线8e-1上的点与折线8e-2上的点重合,折线8e-3上的点的纵坐标代表H i的标定误差(即真值与预测值之间的差值)。
通过比较图7a~7e与图8a~8e,可见,本发明方案的标定精度远优于现有技术方案。
采用测试平面对标定效果进行定量评估,测试平面的法向量为(2,1,4),测试平面过点(10m,10m,10m),点云相对测试平面的均方距离如表1所示,显然本方案标定后点云位置更加精确。
表1
  标定前 现有技术标定后 本方案标定后
相对真实平面 23.2cm 3.98cm 0.07cm
相对拟合平面 3cm 0.43cm 0.01cm
二、实际数据处理结果
基于实际激光雷达系统数据对本发明方案进行验证,采用14个1m*1m的平面反射板和一堵墙面作为平面标定面,对激光雷达系统进行标定。将激光雷达系统中的误差参数赋值0,得到各平面标定面点云如图9所示。
由于所使用的激光雷达系统样机误差参数初值未知,根据实验结果,现有技术已无法对该激光雷达系统样机进行标定,所以下面仅给出本发明方案的标定结果。
图10给出了标定前后的一个平面反射板点云的侧视图,图10中,以与竖线交叉的短横线代表标定后的平面反射板的点云,以未与竖线交叉的短横线代表标定前的平面反射板的点云。表2给出了点云相对拟合平面的均方距离,可以看出,标定后,点云相对拟合平面的均方距离下降50%以上,说明标定方案有效。
表2
  原数据 标定后
图10左侧反射板 3.2cm 0.5cm
图10右侧反射板 4cm 0.7cm
本发明方案可能被延伸用于多个激光雷达系统之间的外参标定,因为部分激光雷达系统误差参数表征了激光雷达系统内部多个模组之间的相对位置、角度关系,而多个激光雷达系统之间的外参表征的是多个激光雷达系统之间的相对位置、角度关系,这二者具有相似性。
若将本方案用于多激光雷达组成的激光雷达系统的外参标定,需要改变的是待标定参数,且需要多个激光雷达扫描相同的标定平面。
应理解,本申请实施例中的具体的例子只是为了帮助本领域技术人员更好地理解本申请实施例,而非限制本申请实施例的范围。
前述实施例中的激光雷达参数标定装置,可以指图1a中的处理器111,或图1b中的计算机设备,在实际应用中,可以由其他具有相应功能的装置来执行本申请方法实施例。激光雷达参数标定装置可以以硬件结构和/或软件模块,激光雷达参数标定装置的某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
以采用集成的方式划分各个功能单元的情况下,图11示出了一种激光雷达参数标定装置的结构示意图。如图11所示,激光雷达参数标定装置1100包括获取模块1101、参数预测模块1102和标定模块1103。
其中,获取模块1101,用于获取激光雷达系统发射的多束激光在标定面上探测到的多个采样点在同一雷达坐标系中的三维坐标,多个采样点的三维坐标为将多个采样点的测量信息输入以第一参数为变量的点云计算算法中得到的,多个采样点中任意一个采样点的三维坐标为以第一参数为自变量的函数,多个采样点的测量信息用于确定多个采样点相对于激光雷达系统的目标角度和目标距离。参数预测模块1102,用于确定使以第一参数为自变量的代价函数取到最优解的第一参数的预测值,代价函数为根据多个采样点的三维坐标和对多个采样点的拟合函数确定的,第一参数的预测值用于使多个采样点的三维坐标满足拟合函数。标定模块1103,用于根据第一参数的预测值对点云计算算法中的第一参数进行赋值。
在一种可能的实现方式中,标定面为平面,拟合函数为平面方程。
在一种可能的实现方式中,代价函数与第一代价函数正相关;第一代价函数为根据多个采样点与拟合函数所表示的平面之间的第一距离确定的,第一距离为以第一参数为自变量的函数。
在一种可能的实现方式中,标定面包括第一标定平面和第二标定平面,多个采样点包括激光雷达系统在第一标定平面上探测到的第一采样点和在第二标定平面上探测到的第二采样点,拟合函数包括对第一采样点的第一拟合函数和对第二采样点的第二拟合函数;代价函数与第二代价函数正相关,第二代价函数为根据第一拟合函数、第二拟合函数和第一标定平面与第二标定平面之间的相对位置关系确定的,第二代价函数以第一参数为自变量。
在一种可能的实现方式中,相对位置关系用于指示:第一标定平面与第二标定平面相互垂直;或者,第一标定平面与第二标定平面相互平行;或者,相互平行的第一标定平面与第二标定平面之间的距离。
在一种可能的实现方式中,第一参数用于消除点云计算算法的计算误差。
在一种可能的实现方式中,第一参数包括测量误差参数和坐标变换误差参数中的至少一种,测量误差参数用于消除多个采样点的测量信息的误差,坐标变换误差参数用于消除坐标变换过程所引入的误差,坐标变换过程用于将激光雷达系统中不同激光模组探测到的 采样点的三维坐标转换到同一雷达坐标系中。
一个示例中,激光雷达参数标定装置可以采用芯片的形式实现,该芯片可以包括:处理器和接口电路。该接口电路(或称通信接口)例如可以是该芯片上的输入/输出接口、管脚或电路等。该处理器可执行存储器存储的计算机指令,以使该芯片执行上述任一方法实施例。可选地,该存储器可以为该芯片内的存储单元,如寄存器、缓存等,或者,该存储器可以是计算机设备内的位于芯片外部的存储器,如只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)等。可选的,该处理器,可以是一个通用中央处理器(CPU),微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制上述任一方法实施例的程序执行的集成电路。
一个示例中,激光雷达参数参数标定装置可以采用计算机设备的形式实现,参考图12,为本申请提供的计算机设备1200的一个示意图,参数标定装置可以为图12所示的计算机设备1200。该计算机设备1200可以包括:处理器1201和存储器1202等部件。
本领域技术人员可以理解,图12中示出的计算机设备结构并不构成对计算机设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图12对计算机设备1200的各个构成部件进行具体的介绍:
存储器1202可用于存储软件程序以及模块,处理器1201通过运行存储在存储器1202的软件程序以及模块,从而执行计算机设备的各种功能应用以及数据处理。
存储器1202可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据计算机设备的使用所创建的数据等。此外,存储器1202可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
处理器1201是计算机设备的控制中心,利用各种接口和线路连接整个计算机设备的各个部分,通过运行或执行存储在存储器1202内的软件程序和/或模块,以及调用存储在存储器1202内的数据,执行计算机设备的各种功能和处理数据,从而对计算机设备进行整体监控。处理器1201可以是中央处理器(central processing unit,CPU),网络处理器(network processor,NP)或者CPU和NP的组合、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。虽然图中仅仅示出了一个处理器,该装置可以包括多个处理器或者处理器包括多个处理单元。具体的,处理器1201可以是一个单核处理器,也可以是一个多核或众核处理器。该处 理器1201可以是ARM架构处理器。可选的,处理器1201可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器1201中。
在一种可能的实现方式中,计算设备1200还可以包括通信接口1203和总线1204。其中,存储器1202、通信接口1203可以通过总线1204与处理器1201连接。总线1204可以是外设部件互连标准(peripheralcomponent interconnect,PCI)总线或扩展工业标准结构(extended industry standardarchitecture,EISA)总线等。总线1204可以分为地址总线、数据总线、控制总线等。为便于表示,图12中仅用一条线表示,但并不表示仅有一根总线或一种类型的总线。
计算机设备1200可以通过通信接口1203与激光雷达的光探测器相连,接收光探测器发送的数据信号。
上述实施例,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现,当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机执行指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
本申请实施例还提供一种激光雷达系统,包括激光源、光探测器、处理器和存储器,所述激光源用于产生并向标定面发射多束激光,所述光探测器用于检测所述多束激光的回波信号,所述处理器在运行所述存储器存储的计算机指令时,执行本申请提供的前述任一方法实施例所述的方法。激光雷达系统的结构可以参考图1a和图1b对应的实施例,此处不再赘述。
以上对本申请所提供的技术方案进行了详细介绍,本申请中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (17)

  1. 一种激光雷达参数标定方法,其特征在于,包括:
    获取激光雷达系统发射的多束激光在标定面上探测到的多个采样点在同一坐标系中的三维坐标,所述多个采样点的三维坐标为将所述多个采样点的测量信息输入以第一参数为变量的点云计算算法中得到的,所述多个采样点中任意一个采样点的三维坐标为以所述第一参数为自变量的函数,所述多个采样点的测量信息用于确定所述多个采样点相对于所述激光雷达系统的目标角度和目标距离;
    确定使以所述第一参数为自变量的代价函数取到最优解的所述第一参数的预测值,所述代价函数为根据所述多个采样点的三维坐标和对所述多个采样点的拟合函数确定的,所述第一参数的预测值用于使所述多个采样点的三维坐标满足所述拟合函数;
    根据所述第一参数的预测值对所述点云计算算法中的所述第一参数进行赋值。
  2. 根据权利要求1所述的方法,其特征在于,所述标定面为平面,所述拟合函数为平面方程。
  3. 根据权利要求2所述的方法,其特征在于,所述代价函数与第一代价函数正相关;
    所述第一代价函数为根据所述多个采样点与所述拟合函数所表示的平面之间的第一距离确定的,所述第一距离为以第一参数为自变量的函数。
  4. 根据权利要求2或3所述的方法,其特征在于,所述标定面包括第一标定平面和第二标定平面,所述多个采样点包括所述激光雷达系统在所述第一标定平面上探测到的第一采样点和在所述第二标定平面上探测到的第二采样点,所述拟合函数包括对所述第一采样点的第一拟合函数和对所述第二采样点的第二拟合函数;
    所述代价函数与第二代价函数正相关,所述第二代价函数为根据第一拟合函数、第二拟合函数和所述第一标定平面与所述第二标定平面之间的相对位置关系确定的,所述第二代价函数以所述第一参数为自变量。
  5. 根据权利要求4所述的方法,其特征在于,所述相对位置关系用于指示:
    所述第一标定平面与所述第二标定平面相互垂直;
    或者,所述第一标定平面与所述第二标定平面相互平行;
    或者,相互平行的所述第一标定平面与所述第二标定平面之间的距离。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述第一参数用于消除所述点云计算算法的计算误差。
  7. 根据权利要求6所述的方法,其特征在于,所述第一参数包括测量误差参数和坐标变换误差参数中的至少一种,所述测量误差参数用于消除所述多个采样点的测量信息的误差,所述坐标变换误差参数用于消除坐标变换过程所引入的误差,所述坐标变换过程用于将所述激光雷达系统中不同激光模组探测到的采样点的三维坐标转换到所述同一坐标系中。
  8. 一种激光雷达参数标定装置,其特征在于,包括:
    获取模块,用于获取激光雷达系统发射的多束激光在标定面上探测到的多个采样点在同一坐标系中的三维坐标,所述多个采样点的三维坐标为将所述多个采样点的测量信息输入以第一参数为变量的点云计算算法中得到的,所述多个采样点中任意一个采样点的三维 坐标为以所述第一参数为自变量的函数,所述多个采样点的测量信息用于确定所述多个采样点相对于所述激光雷达系统的目标角度和目标距离;
    参数预测模块,用于确定使以所述第一参数为自变量的代价函数取到最优解的所述第一参数的预测值,所述代价函数为根据所述多个采样点的三维坐标和对所述多个采样点的拟合函数确定的,所述第一参数的预测值用于使所述多个采样点的三维坐标满足所述拟合函数;
    标定模块,用于根据所述第一参数的预测值对所述点云计算算法中的所述第一参数进行赋值。
  9. 根据权利要求8所述的装置,其特征在于,所述标定面为平面,所述拟合函数为平面方程。
  10. 根据权利要求9所述的装置,其特征在于,所述代价函数与第一代价函数正相关;
    所述第一代价函数为根据所述多个采样点与所述拟合函数所表示的平面之间的第一距离确定的,所述第一距离为以第一参数为自变量的函数。
  11. 根据权利要求9或10所述的装置,其特征在于,所述标定面包括第一标定平面和第二标定平面,所述多个采样点包括所述激光雷达系统在所述第一标定平面上探测到的第一采样点和在所述第二标定平面上探测到的第二采样点,所述拟合函数包括对所述第一采样点的第一拟合函数和对所述第二采样点的第二拟合函数;
    所述代价函数与第二代价函数正相关,所述第二代价函数为根据第一拟合函数、第二拟合函数和所述第一标定平面与所述第二标定平面之间的相对位置关系确定的,所述第二代价函数以所述第一参数为自变量。
  12. 根据权利要求11所述的装置,其特征在于,所述相对位置关系用于指示:
    所述第一标定平面与所述第二标定平面相互垂直;
    或者,所述第一标定平面与所述第二标定平面相互平行;
    或者,相互平行的所述第一标定平面与所述第二标定平面之间的距离。
  13. 根据权利要求8至12中任一项所述的装置,其特征在于,所述第一参数用于消除所述点云计算算法的计算误差。
  14. 根据权利要求13所述的装置,其特征在于,所述第一参数包括测量误差参数和坐标变换误差参数中的至少一种,所述测量误差参数用于消除所述多个采样点的测量信息的误差,所述坐标变换误差参数用于消除坐标变换过程所引入的误差,所述坐标变换过程用于将所述激光雷达系统中不同激光模组探测到的采样点的三维坐标转换到所述同一坐标系中。
  15. 一种芯片,其特征在于,包括处理器和存储器,所述处理器在运行所述存储器存储的计算机指令时,执行如权利要求1至7中任一项所述的方法。
  16. 一种计算机可读存储介质,其特征在于,包括指令,当所述指令在计算机上运行时,使得计算机执行如权利要求1至7中任一项所述的方法。
  17. 一种激光雷达系统,其特征在于,包括激光源、光探测器、处理器和存储器;
    所述激光源用于产生并向标定面发射多束激光,所述光探测器用于检测所述多束激光 的回波信号;
    所述处理器在运行所述存储器存储的计算机指令时,执行如权利要求1至7中任一项所述的方法。
PCT/CN2021/080111 2020-03-12 2021-03-11 一种激光雷达参数标定方法及装置 WO2021180149A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA3171089A CA3171089A1 (en) 2020-03-12 2021-03-11 Method and apparatus for calibrating parameter of laser radar
EP21767321.9A EP4109131A4 (en) 2020-03-12 2021-03-11 METHOD AND DEVICE FOR CALIBRATION OF LASER RADAR PARAMETERS
US17/942,380 US20230003855A1 (en) 2020-03-12 2022-09-12 Method and apparatus for calibrating parameter of laser radar

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010170340.4 2020-03-12
CN202010170340.4A CN113466834A (zh) 2020-03-12 2020-03-12 一种激光雷达参数标定方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/942,380 Continuation US20230003855A1 (en) 2020-03-12 2022-09-12 Method and apparatus for calibrating parameter of laser radar

Publications (1)

Publication Number Publication Date
WO2021180149A1 true WO2021180149A1 (zh) 2021-09-16

Family

ID=77671191

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/080111 WO2021180149A1 (zh) 2020-03-12 2021-03-11 一种激光雷达参数标定方法及装置

Country Status (5)

Country Link
US (1) US20230003855A1 (zh)
EP (1) EP4109131A4 (zh)
CN (1) CN113466834A (zh)
CA (1) CA3171089A1 (zh)
WO (1) WO2021180149A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114966626A (zh) * 2022-04-26 2022-08-30 珠海视熙科技有限公司 激光雷达误差修正的方法、装置及电子设备和存储介质

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114355321B (zh) * 2022-03-18 2022-07-05 深圳市欢创科技有限公司 激光雷达的标定方法、装置、系统、激光雷达及机器人
CN115291198B (zh) * 2022-10-10 2023-01-24 西安晟昕科技发展有限公司 一种雷达信号发射及信号处理方法
CN115661269B (zh) * 2022-11-18 2023-03-10 深圳市智绘科技有限公司 相机与激光雷达的外参标定方法、装置及存储介质
CN115965925B (zh) * 2023-03-03 2023-06-23 安徽蔚来智驾科技有限公司 点云目标检测方法、计算机设备、存储介质及车辆

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106597417A (zh) * 2017-01-10 2017-04-26 北京航天计量测试技术研究所 一种远距离扫描激光雷达测量误差的修正方法
CN107192350A (zh) * 2017-05-19 2017-09-22 中国人民解放军信息工程大学 一种三维激光扫描仪内参数标定方法及装置
US20190056484A1 (en) * 2017-08-17 2019-02-21 Uber Technologies, Inc. Calibration for an autonomous vehicle lidar module

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100437144C (zh) * 2007-08-13 2008-11-26 北京航空航天大学 一种卫星导航增强系统的定位方法
CN105182384A (zh) * 2015-08-24 2015-12-23 桂林电子科技大学 一种双模实时伪距差分定位系统和伪距改正数据生成方法
CN106405555B (zh) * 2016-09-23 2019-01-01 百度在线网络技术(北京)有限公司 用于车载雷达系统的障碍物检测方法和装置
WO2019032588A1 (en) * 2017-08-11 2019-02-14 Zoox, Inc. CALIBRATION AND LOCATION OF VEHICLE SENSOR
CN109521403B (zh) * 2017-09-19 2020-11-20 百度在线网络技术(北京)有限公司 多线激光雷达的参数标定方法及装置、设备及可读介质
JP7007167B2 (ja) * 2017-12-05 2022-01-24 株式会社トプコン 測量装置、測量装置の校正方法および測量装置の校正用プログラム
CN109946680B (zh) * 2019-02-28 2021-07-09 北京旷视科技有限公司 探测系统的外参数标定方法、装置、存储介质及标定系统
CN110031824B (zh) * 2019-04-12 2020-10-30 杭州飞步科技有限公司 激光雷达联合标定方法及装置
CN110333503B (zh) * 2019-05-29 2023-06-09 菜鸟智能物流控股有限公司 激光雷达的标定方法、装置及电子设备
CN110349221A (zh) * 2019-07-16 2019-10-18 北京航空航天大学 一种三维激光雷达与双目可见光传感器的融合标定方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106597417A (zh) * 2017-01-10 2017-04-26 北京航天计量测试技术研究所 一种远距离扫描激光雷达测量误差的修正方法
CN107192350A (zh) * 2017-05-19 2017-09-22 中国人民解放军信息工程大学 一种三维激光扫描仪内参数标定方法及装置
US20190056484A1 (en) * 2017-08-17 2019-02-21 Uber Technologies, Inc. Calibration for an autonomous vehicle lidar module

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
N MUHAMMAD ; S LACROIX: "Calibration of a Rotating Multi-Beam Lidar", INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2010 IEEE/RSJ INTERNATIONAL CONFERENCE ON, 18 October 2010 (2010-10-18), pages 5648 - 5653, XP031920639, ISBN: 978-1-4244-6674-0, DOI: 10.1109/IROS.2010.5651382 *
See also references of EP4109131A4
YAN, LI: "Error Analysis and Self-Calibration of the Multi-Line Terrestrial Laser Scanner", JOURNAL OF GEOMATICS, vol. 44, no. 05, 31 October 2019 (2019-10-31), pages 1 - 7, XP009530303, ISSN: 1007-3817, DOI: 10.14188/j.2095- 6045.2018486 *
YU, DEQI: "Integration and Quality Control of Mobile Laser Scanning Mapping System", BASIC SCIENCES, CHINA MASTER’S THESES FULL-TEXT DATABASE, 20 October 2018 (2018-10-20), CN, pages 1 - 92, XP009530315, ISSN: 1674-0246 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114966626A (zh) * 2022-04-26 2022-08-30 珠海视熙科技有限公司 激光雷达误差修正的方法、装置及电子设备和存储介质
CN114966626B (zh) * 2022-04-26 2023-03-10 珠海视熙科技有限公司 激光雷达误差修正的方法、装置及电子设备和存储介质

Also Published As

Publication number Publication date
US20230003855A1 (en) 2023-01-05
CA3171089A1 (en) 2021-09-16
EP4109131A1 (en) 2022-12-28
EP4109131A4 (en) 2023-07-26
CN113466834A (zh) 2021-10-01

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 21767321; Country of ref document: EP; Kind code of ref document: A1.

ENP  Entry into the national phase. Ref document number: 3171089; Country of ref document: CA.

ENP  Entry into the national phase. Ref document number: 2021767321; Country of ref document: EP; Effective date: 20220921.

NENP Non-entry into the national phase. Ref country code: DE.