CN115546315A - Sensor on-line calibration method and device for automatic driving vehicle and storage medium

Sensor on-line calibration method and device for automatic driving vehicle and storage medium

Info

Publication number
CN115546315A
CN115546315A
Authority
CN
China
Prior art keywords
detection target
point cloud
target
data
automatic driving
Prior art date
Legal status
Pending
Application number
CN202211248093.0A
Other languages
Chinese (zh)
Inventor
胡铮铭
Current Assignee
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 - Lane; Road marking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 - Target detection

Abstract

The application discloses a method and a device for online calibration of the sensors of an autonomous vehicle. The method comprises the following steps: determining whether the driving lane of the autonomous vehicle meets an online calibration condition; when the online calibration condition is met, acquiring road image data collected by a camera of the autonomous vehicle and laser point cloud data collected by a laser radar, and performing target detection on the road image data and the laser point cloud data to obtain an image detection target and a point cloud detection target; determining whether the image detection target and the point cloud detection target are the same target; and when they are the same target, acquiring first feature point data of the image detection target and second feature point data of the point cloud detection target, and performing matching calculation on the first feature point data and the second feature point data to obtain an online calibration result between the camera and the laser radar. With this technical scheme, online calibration of the sensors of an autonomous vehicle can be completed quickly and accurately.

Description

Sensor on-line calibration method and device for automatic driving vehicle and storage medium
Technical Field
The application relates to the technical field of sensor calibration, in particular to an online calibration method and device for a sensor of an automatic driving vehicle.
Background
Multi-sensor fusion is a key algorithm module in the field of automatic driving, but it depends heavily on temporal and spatial synchronization. Time synchronization is easily achieved by hardware triggering, whereas spatial synchronization still lacks a good solution.
Spatial synchronization takes the form of calibrating the extrinsic parameters between the sensors. Most existing calibration methods are offline: for example, a calibration workshop or manual calibration is used to obtain the initial extrinsic parameters. However, while an autonomous vehicle is driving, or after a period of time, the relative pose between the sensors changes due to various factors such as vibration.
To address this, some online calibration methods exist in the prior art, but most of them are based on matching line features between image data and point cloud data, and it is difficult for existing matching algorithms to obtain an online calibration result efficiently and accurately.
Disclosure of Invention
Based on the above problems in the prior art, embodiments of the present application provide an online calibration method and device for sensors of an autonomous vehicle, so as to quickly and accurately calculate relative external parameters between the sensors.
The embodiment of the application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides an online calibration method for a sensor of an autonomous vehicle, where the method includes:
determining whether a driving lane of the automatic driving vehicle meets an online calibration condition;
when the online calibration condition is met, acquiring road image data acquired by a camera of an automatic driving vehicle and laser point cloud data acquired by a laser radar, and performing target detection on the road image data and the laser point cloud data to obtain an image detection target and a point cloud detection target;
determining whether the image detection target and the point cloud detection target are the same target;
and when they are the same target, acquiring first feature point data of the image detection target and second feature point data of the point cloud detection target, and performing matching calculation on the first feature point data and the second feature point data to obtain an online calibration result between the camera and the laser radar.
Optionally, the determining whether the driving lane of the autonomous vehicle meets the online calibration condition includes:
acquiring the lane type of a current driving lane of the automatic driving vehicle;
and if the automatic driving vehicle runs on the straight lane currently, determining that the online calibration condition is met, otherwise, determining that the online calibration condition is not met.
Optionally, the determining whether the image detection target and the point cloud detection target are the same target includes:
determining whether the image detection target and the point cloud detection target are located in a lane in which the autonomous vehicle is currently driving;
if both are located in the lane in which the autonomous vehicle is currently driving, determining whether the image detection target and the point cloud detection target are both vehicle-front targets or both vehicle-rear targets;
and if they are both vehicle-front targets or both vehicle-rear targets, determining that the image detection target and the point cloud detection target are the same target.
Optionally, the determining whether the image detection target and the point cloud detection target are within a lane in which the autonomous vehicle is currently traveling comprises:
detecting lane lines of the road image data to obtain a lane boundary line of the current running of the automatic driving vehicle;
acquiring the position relation between the image detection target and the lane boundary line, and if the image detection target is positioned in the lane boundary line, determining that the image detection target is positioned in a lane where the automatic driving vehicle runs currently;
virtualizing a lane boundary line of a current driving lane of the automatic driving vehicle according to the vehicle width information of the automatic driving vehicle in the laser point cloud data;
and acquiring the position relation between the point cloud detection target and the virtual lane boundary line, and if the point cloud detection target is positioned in the virtual lane boundary line, determining that the point cloud detection target is positioned in the current driving lane of the automatic driving vehicle.
Optionally, the determining whether the image detection target and the point cloud detection target are both vehicle-front targets or both vehicle-rear targets includes:
acquiring the position relation between the image detection target and the automatic driving vehicle and the position relation between the point cloud detection target and the automatic driving vehicle;
if the image detection target and the point cloud detection target are located in front of the automatic driving vehicle, determining that the image detection target and the point cloud detection target are both front targets;
and if the image detection target and the point cloud detection target are located behind the autonomous vehicle, determining that they are both vehicle-rear targets.
Optionally, the method further comprises:
if there are a plurality of image detection targets and point cloud detection targets that are located in the lane in which the autonomous vehicle is currently driving and are also vehicle-front targets, determining the image detection target and the point cloud detection target closest to the autonomous vehicle as the optimal targets;
the acquiring of the first feature point data of the image detection target and the second feature point data of the point cloud detection target includes:
and acquiring the optimal first characteristic point data of the image detection target and the optimal second characteristic point data of the point cloud detection target.
Optionally, the image detection target includes an image target detection frame, the point cloud detection target includes a point cloud target detection frame, the obtaining of the first feature point data of the image detection target and the second feature point data of the point cloud detection target, the performing of matching calculation on the first feature point data and the second feature point data to obtain the online calibration result between the camera and the laser radar includes:
acquiring corner data of the image target detection frame as the first feature point data and acquiring corner data of the point cloud target detection frame as the second feature point data;
acquiring the relative extrinsic parameters between the camera and the laser radar by applying a perspective-n-point (PNP) algorithm to the first feature point data and the second feature point data;
and obtaining the online calibration result according to the relative external reference between the camera and the laser radar.
In a second aspect, an embodiment of the present application further provides an online sensor calibration device for an autonomous vehicle, where the device includes:
a first judgment unit, configured to determine whether the driving lane of the autonomous vehicle meets an online calibration condition;
a target detection unit, configured to, when the online calibration condition is met, acquire road image data collected by a camera of the autonomous vehicle and laser point cloud data collected by a laser radar, and perform target detection on the road image data and the laser point cloud data to obtain an image detection target and a point cloud detection target;
a second judgment unit, configured to determine whether the image detection target and the point cloud detection target are the same target;
and a calibration calculation unit, configured to, when they are the same target, acquire first feature point data of the image detection target and second feature point data of the point cloud detection target, and perform matching calculation on the first feature point data and the second feature point data to obtain an online calibration result between the camera and the laser radar.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform any of the aforementioned methods of sensor online calibration for an autonomous vehicle.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing one or more programs, which when executed by an electronic device including a plurality of application programs, cause the electronic device to perform any one of the above-mentioned methods for online calibration of sensors of an autonomous vehicle.
The embodiments of the application adopt at least one technical scheme that can achieve the following beneficial effects. When the sensors of an autonomous vehicle need to be calibrated online, it is first judged whether the driving lane of the autonomous vehicle meets the online calibration condition. When the condition is met, road image data and the corresponding laser point cloud data are acquired and target detection is performed on each; a target consistency judgment is then made on the detected image detection target and point cloud detection target, and when the two detection targets correspond to the same target, the relative extrinsic parameters between the camera and the laser radar are calculated from the feature points of the two detection targets to obtain the online calibration result.
The application takes the driving lane of the autonomous vehicle as a constraint condition. When the constraint is met, it can be accurately judged whether the image detection target and the point cloud detection target correspond to the same target. When they do, the feature points can be matched accurately and quickly during the relative extrinsic calculation without a complex feature point matching algorithm, and the existing target detection algorithms of the autonomous vehicle can be reused during target detection, which further improves the computational efficiency of online calibration and saves computing resources.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic flow chart of a method for calibrating a sensor of an autonomous vehicle on line according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a lane boundary line detection result in the embodiment of the present application;
FIG. 3 is a schematic diagram of road image data and its image detection target shown in the embodiment of the present application;
fig. 4 is a schematic diagram of laser point cloud data and a point cloud detection target thereof shown in the embodiment of the present application;
FIG. 5 is a schematic structural diagram of an online sensor calibration device for an autonomous vehicle according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The embodiment of the present application provides an online sensor calibration method for an autonomous vehicle, and as shown in fig. 1, provides a schematic flow chart of the online sensor calibration method for the autonomous vehicle in the embodiment of the present application, where the method at least includes the following steps S110 to S140:
step S110, determining whether the driving lane of the automatic driving vehicle meets an online calibration condition.
And step S120, when the online calibration condition is met, acquiring road image data acquired by a camera of the automatic driving vehicle and laser point cloud data acquired by a laser radar, and performing target detection on the road image data and the laser point cloud data to obtain an image detection target and a point cloud detection target.
The sensor online calibration method is executed by the autonomous vehicle; the camera and the laser radar are installed on the vehicle in advance and, once the initial calibration is completed, can collect data normally in real time. When the initial extrinsic parameters of sensors such as the camera and the laser radar have changed and online calibration is needed to update or correct the relative extrinsic parameters between the sensors, it is first judged whether the current driving lane meets the online calibration condition. When the condition is met, the road image data currently collected by the camera and the laser point cloud data collected by the laser radar are acquired, which guarantees the normal execution of the subsequent steps.
In addition, because the data acquisition frequencies of the camera and the laser radar are different, in order to ensure the accuracy of subsequent data processing, time synchronization processing needs to be performed on road image data and laser point cloud data, for example, data acquisition is performed in a hardware triggering manner.
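Where hardware triggering is unavailable, a common soft-synchronization fallback is to pair each camera frame with the lidar sweep nearest in time. The following is a minimal sketch of such pairing; the function name and the tolerance value are illustrative assumptions, not part of the patent:

```python
def pair_nearest(image_stamps, cloud_stamps, max_dt=0.05):
    """Pair each camera frame with the lidar sweep closest in time.

    Returns (image_index, cloud_index) pairs whose timestamp gap is at
    most max_dt seconds; frames without a close enough sweep are dropped.
    """
    pairs = []
    for i, ts in enumerate(image_stamps):
        j = min(range(len(cloud_stamps)), key=lambda k: abs(cloud_stamps[k] - ts))
        if abs(cloud_stamps[j] - ts) <= max_dt:
            pairs.append((i, j))
    return pairs
```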
After the road image data and laser point cloud data are collected, this embodiment can perform target detection on them based on the existing target detection algorithms of the autonomous vehicle, with no need to design an additional target detection algorithm, thereby saving computing resources.
Step S130, determining whether the image detection target and the point cloud detection target are the same target.
Online calibration needs to solve two key problems: matching between targets, and calculating the relative extrinsic parameters. Once the first problem is solved, the second is easily solved.
Therefore, after the detection targets of the two kinds of data are obtained, it is judged whether the two detection targets correspond to the same target. When they do, a relative extrinsic calculation is performed on the feature points of the two detection targets based on a feature point matching algorithm to obtain the online calibration result.
Step S140, when they are the same target, acquiring first feature point data of the image detection target and second feature point data of the point cloud detection target, and performing matching calculation on the first feature point data and the second feature point data to obtain an online calibration result between the camera and the laser radar.
As can be seen from the online calibration method shown in fig. 1, in this embodiment, when the sensors of an autonomous vehicle need to be calibrated online, it is first determined whether the driving lane of the autonomous vehicle meets the online calibration condition. When it does, road image data and the corresponding laser point cloud data are acquired and target detection is performed on each; a target consistency judgment is made on the detected image detection target and point cloud detection target, and when the two detection targets are the same target, the relative extrinsic parameters between the camera and the laser radar are calculated from the feature points of the two detection targets to obtain the online calibration result.
This embodiment takes the driving lane of the autonomous vehicle as a constraint condition. When the constraint is met, whether the image detection target and the point cloud detection target correspond to the same target can be judged accurately. When they do, feature point matching can be carried out accurately and quickly in the relative extrinsic calculation without a complex feature point matching algorithm, and the existing target detection algorithms of the autonomous vehicle can be reused for target detection, which further improves the computational efficiency of online calibration and saves computing resources.
In some embodiments of the present application, the determining whether the driving lane of the autonomous vehicle satisfies the online calibration condition includes:
acquiring the lane type of a current driving lane of the automatic driving vehicle;
and if the automatic driving vehicle runs on the straight lane currently, determining that the online calibration condition is met, otherwise, determining that the online calibration condition is not met.
This design has the following advantages. When the autonomous vehicle drives on a straight lane, its camera and laser radar can collect complete data of a target directly ahead of or directly behind the vehicle without occlusion, which guarantees the accuracy of target detection and provides a data basis for accurate feature point matching. Moreover, on a straight lane, with the autonomous vehicle as the reference, it can be accurately judged whether the image detection target and the point cloud detection target correspond to the target directly in front of or directly behind the vehicle, making target matching between different sensors simple and practicable.
Therefore, the application specifically judges whether the autonomous vehicle is driving on a straight lane. As shown in fig. 3, when the autonomous vehicle drives on a straight road and keeps to the lane center line, it can be determined that the front target or rear target collected by its camera and laser radar is the same target. Here a front or rear target mainly refers to a target driving directly ahead of or directly behind the autonomous vehicle in its lane, and such detection targets include traffic participants such as vehicles. Taking the driving lane of the autonomous vehicle as the constraint condition of online calibration, it can be determined that the two subsequently detected targets correspond to the same target once the constraint is met, which solves the problem of matching between the targets.
In some embodiments of the present application, the determining whether the image detection target and the point cloud detection target are the same target includes:
determining whether the image detection target and the point cloud detection target are within a lane in which the autonomous vehicle is currently traveling;
if both are located in the lane in which the autonomous vehicle is currently driving, determining whether the image detection target and the point cloud detection target are both vehicle-front targets or both vehicle-rear targets;
and if they are both vehicle-front targets or both vehicle-rear targets, determining that the image detection target and the point cloud detection target are the same target.
Wherein it is determined whether the image detection target is within a lane in which the autonomous vehicle is currently traveling by:
performing lane line detection on the road image data to obtain a lane boundary line of the current running of the automatic driving vehicle, wherein the lane line detection algorithm can adopt an existing detection algorithm of the automatic driving vehicle so as to save computing resources;
and acquiring the position relation between the image detection target and the lane boundary line, and if the image detection target is positioned in the lane boundary line, determining that the image detection target is positioned in the current driving lane of the automatic driving vehicle.
As shown in fig. 3, when the autonomous vehicle drives on a straight road, target detection is performed on the road image data using the vehicle's existing image target detection algorithm to obtain the 3D image target detection result shown in fig. 3, and lane line detection is performed on the road image data using the vehicle's existing lane line detection algorithm to obtain the boundary lines on both sides shown in fig. 2; it can then be determined from the position information whether the 3D image detection target is in the ego lane of the autonomous vehicle.
In this embodiment, when the positional relationship between the image detection target and the lane boundary line is obtained, the image detection target and the lane boundary line should first be converted into the same coordinate system, for example by converting the pixel points of the road image data into the vehicle body coordinate system. Assume that $(u, v)$ is the coordinate of a point projected on the image plane, $(c_x, c_y)$ is the reference point (which may be taken as the center point of the image), $(f_x, f_y)$ is the focal length in pixels, $(X_c, Y_c, Z_c)$ is a coordinate point in the camera coordinate system, and $(X_w, Y_w, Z_w)$ is a coordinate point in the vehicle body coordinate system.

Then, according to the pinhole imaging formula:

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \tag{1}
\]

formula (2) is derived:

\[
u = f_x \frac{X_c}{Z_c} + c_x, \qquad v = f_y \frac{Y_c}{Z_c} + c_y \tag{2}
\]

Then, assuming that the height of the object above the ground is zero, i.e. $Z_w = 0$, and using the initial extrinsic calibration matrix of the camera and the laser radar

\[
[R \mid t] = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix},
\]

formula (3) is obtained, where $K$ is the intrinsic matrix of formula (1):

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K [R \mid t] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{3}
\]

With $Z_w = 0$ the third column of $[R \mid t]$ drops out, leaving the invertible $3 \times 3$ homography $H = K\,[\,r_1 \;\; r_2 \;\; t\,]$, where $r_1$ and $r_2$ are the first two columns of $R$, so formula (4) can be calculated:

\[
\begin{bmatrix} X_w \\ Y_w \\ 1 \end{bmatrix} \sim H^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \tag{4}
\]

The lane line detection result in the image can be converted into the current ego-vehicle coordinate system using formula (4); meanwhile, the center pixel point of the 3D image detection target is projected into the same ego-vehicle coordinate system, and whether the 3D image detection target is on the lane in which the autonomous vehicle is currently driving is judged from the positional relationship.
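As a concrete illustration of formula (4), the sketch below projects an image pixel onto the $Z_w = 0$ ground plane, assuming the intrinsic matrix K and the initial extrinsics R, t are available as NumPy arrays; the names are illustrative:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Project pixel (u, v) onto the Z_w = 0 ground plane (formula (4))."""
    # With Z_w = 0 the third column of [R | t] drops out, leaving the
    # invertible homography H = K [r1 r2 t].
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    p = np.linalg.solve(H, np.array([u, v, 1.0]))
    return p[:2] / p[2]  # (X_w, Y_w) in the vehicle body coordinate system
```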
Wherein some embodiments of the application may determine whether the point cloud detection target is within a lane in which the autonomous vehicle is currently traveling by:
virtualizing a lane boundary line of a current driving lane of the automatic driving vehicle according to the vehicle width information of the automatic driving vehicle in the laser point cloud data; alternatively, it is also possible to acquire high-precision map data, and obtain the lane boundary line of the lane in which the autonomous vehicle is traveling, based on the position information of the autonomous vehicle.
And acquiring the position relation between the point cloud detection target and the virtual lane boundary line, and if the point cloud detection target is positioned in the virtual lane boundary line, determining that the point cloud detection target is positioned in the current driving lane of the automatic driving vehicle.
After determining that the image detection target and the point cloud detection target are both in the current driving lane of the automatic driving vehicle, determining whether the image detection target and the point cloud detection target are both front-vehicle targets or both rear-vehicle targets through the following steps:
acquiring the position relation between the image detection target and the automatic driving vehicle and the position relation between the point cloud detection target and the automatic driving vehicle;
if the image detection target and the point cloud detection target are located in front of the autonomous vehicle, determining that they are both vehicle-front targets; and if they are located behind the autonomous vehicle, determining that they are both vehicle-rear targets.
In the embodiment of formulas (1) to (4) above, once the image detection target has been converted into the body coordinate system, its positional relationship with the autonomous vehicle is available, so whether it is a vehicle-front or vehicle-rear target can be determined. Since the point cloud detection targets produced by the vehicle's existing point cloud detection algorithm are generally given positions in the coordinate system of the autonomous vehicle itself, as shown in fig. 4, it is easy to determine which point cloud detection targets are vehicle-front targets and which are vehicle-rear targets.
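In the ego frame, the two judgments above reduce to simple coordinate checks. A minimal sketch follows, assuming the lidar/ego coordinate system has x pointing forward and y pointing left, with virtual boundary lines half a lane width to each side of the ego centerline; the names and the lane-width value are illustrative assumptions:

```python
ASSUMED_LANE_WIDTH = 3.5  # meters; illustrative value

def in_ego_lane(center_y, lane_width=ASSUMED_LANE_WIDTH):
    """A target is inside the virtual lane boundary lines if its lateral
    offset from the ego centerline is within half a lane width."""
    return abs(center_y) <= lane_width / 2.0

def is_front_target(center_x):
    """Targets with positive longitudinal offset are vehicle-front targets;
    negative ones are vehicle-rear targets."""
    return center_x > 0.0
```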
In some embodiments of the present application, the method further comprises:
if there are a plurality of image detection targets and point cloud detection targets that are located in the lane in which the autonomous vehicle is currently driving and are also vehicle-front targets, determining the image detection target and the point cloud detection target closest to the autonomous vehicle as the optimal targets;
the acquiring of the first feature point data of the image detection target and the second feature point data of the point cloud detection target includes:
and acquiring the optimal first characteristic point data of the image detection target and the optimal second characteristic point data of the point cloud detection target.
That is, when there are multiple image detection targets and point cloud detection targets in front of the vehicle, this embodiment preferably performs the relative extrinsic calculation with the vehicle-front target closest to the autonomous vehicle as the optimal target, because the features captured by the sensors are most complete for the nearest vehicle-front target, and performing the relative extrinsic calculation on the feature points of this optimal target improves the accuracy of the calculation result.
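Continuing that logic, the optimal-target selection can be sketched as below, assuming each detection exposes its center position in the ego frame (x forward); the names are illustrative:

```python
def pick_optimal_target(front_targets):
    """Among the in-lane vehicle-front targets, the one nearest the ego
    vehicle is optimal: its features are observed most completely."""
    return min(front_targets, key=lambda t: t.center_x, default=None)
```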
Based on the embodiment, the image detection target and the point cloud detection target can be associated, the detection target matching between the camera and the laser radar is realized, and then the relative external parameters between the camera and the laser radar can be calculated by utilizing a feature point matching algorithm in the prior art.
As can be seen from fig. 3 and fig. 4, the image detection target includes an image target detection frame and the point cloud detection target includes a point cloud target detection frame. Acquiring the first feature point data of the image detection target and the second feature point data of the point cloud detection target then includes:
acquiring the corner data of the image target detection frame as the first feature point data and the corner data of the point cloud target detection frame as the second feature point data, i.e., taking the 8 corners of the image target detection frame as the first feature point data and the 8 corners of the point cloud target detection frame as the second feature point data; acquiring the relative extrinsic parameters between the camera and the laser radar by applying a perspective-n-point (PNP) algorithm to the first feature point data and the second feature point data; and obtaining the online calibration result from the relative extrinsic parameters between the camera and the laser radar.
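As a concrete illustration, the sketch below recovers the relative extrinsics from the 8 matched box corners with OpenCV's solvePnP; it assumes the image corners and the point cloud corners are supplied in the same matched order, and the function and variable names are illustrative:

```python
import cv2
import numpy as np

def solve_camera_lidar_extrinsics(corners_2d, corners_3d, K, dist=None):
    """Estimate lidar-to-camera rotation R and translation t from the
    8 matched box corners via PNP."""
    obj = np.asarray(corners_3d, dtype=np.float64).reshape(-1, 1, 3)
    img = np.asarray(corners_2d, dtype=np.float64).reshape(-1, 1, 2)
    dist = np.zeros(5) if dist is None else dist
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec
```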
Let the coordinates of a 3D point in space be $[X_w\ Y_w\ Z_w]^T$, with homogeneous coordinates $[X_w\ Y_w\ Z_w\ 1]^T$; let the coordinates of its projection point be $[u\ v]^T$, with homogeneous coordinates $[u\ v\ 1]^T$; and let the intrinsic matrix of the camera be $K$. Solving for the relative extrinsic parameters between the camera and the laser radar based on the PNP algorithm, i.e. the rotation matrix $R$ and the displacement vector $t$, proceeds as follows.

The perspective projection model is:

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K [R \mid t] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = F \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{5}
\]

Expanding formula (5) gives formula (6):

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_{11} & f_{12} & f_{13} & f_{14} \\ f_{21} & f_{22} & f_{23} & f_{24} \\ f_{31} & f_{32} & f_{33} & f_{34} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{6}
\]

In the form of a system of equations, formula (6) reads:

\[
\begin{aligned}
Z_c u &= f_{11} X_w + f_{12} Y_w + f_{13} Z_w + f_{14} \\
Z_c v &= f_{21} X_w + f_{22} Y_w + f_{23} Z_w + f_{24} \\
Z_c   &= f_{31} X_w + f_{32} Y_w + f_{33} Z_w + f_{34}
\end{aligned} \tag{7}
\]

Eliminating $Z_c$ from formula (7) yields formula (8):

\[
\begin{aligned}
f_{11} X_w + f_{12} Y_w + f_{13} Z_w + f_{14} - (f_{31} X_w + f_{32} Y_w + f_{33} Z_w + f_{34})\,u &= 0 \\
f_{21} X_w + f_{22} Y_w + f_{23} Z_w + f_{24} - (f_{31} X_w + f_{32} Y_w + f_{33} Z_w + f_{34})\,v &= 0
\end{aligned} \tag{8}
\]

Each pair of matched 3D-2D points contributes two equations, and there are 12 unknowns in total, so at least 6 pairs of matched points are needed. Given $N$ pairs of matched points, stacking the equations of formula (8) gives formula (9):

\[
A \mathbf{f} = 0, \qquad A \in \mathbb{R}^{2N \times 12} \tag{9}
\]

where $\mathbf{f}$ collects the 12 entries of $F$. When $N = 6$, the linear system can be solved directly; when $N > 6$, a least-squares solution is obtained by singular value decomposition (SVD), $A = U D V^T$, taking $\mathbf{f}$ as the right singular vector associated with the smallest singular value.

Since $F = [KR \mid Kt]$, the estimated values of the rotation matrix $R$ and the displacement vector $t$ are obtained as follows: performing SVD on $K^{-1} F_{1:3,1:3} = U' D' V'^T$,

\[
\hat{R} = U' V'^T, \qquad \hat{t} = \frac{1}{\bar{d}}\, K^{-1} \begin{bmatrix} f_{14} \\ f_{24} \\ f_{34} \end{bmatrix} \tag{10}
\]

where $\bar{d}$ is the mean of the singular values in $D'$, which removes the arbitrary scale of the homogeneous solution and projects $K^{-1} F_{1:3,1:3}$ onto the nearest rotation matrix.
based on the PNP algorithm shown in the above embodiments, it can be seen that after the matching between the image detection target and the point cloud detection target is completed, the relative external reference calculation can be completed by using a simpler PNP algorithm, and in the relative external reference calculation process, the rotation matrix and the displacement vector can be quickly calculated without setting an initial value, so as to implement online calibration.
The embodiment of the present application further provides an online sensor calibration device 500 for an autonomous vehicle, as shown in fig. 5, which provides a schematic structural diagram of the online sensor calibration device for the autonomous vehicle in the embodiment of the present application, where the device 500 includes: a first judgment unit 510, an object detection unit 520, a second judgment unit 530, and a calibration calculation unit 540, wherein:
a first judgment unit 510, configured to determine whether a driving lane of the autonomous vehicle meets an online calibration condition;
the target detection unit 520 is used for acquiring road image data acquired by a camera of the automatic driving vehicle and laser point cloud data acquired by a laser radar when an online calibration condition is met, and performing target detection on the road image data and the laser point cloud data to obtain an image detection target and a point cloud detection target;
a second determining unit 530, configured to determine whether the image detection target and the point cloud detection target are the same target;
and the calibration calculation unit 540 is configured to, when the target is the same target, obtain first feature point data of the image detection target and second feature point data of the point cloud detection target, perform matching calculation on the first feature point data and the second feature point data, and obtain an online calibration result between the camera and the laser radar.
In an embodiment of the present application, the first determining unit 510 is specifically configured to obtain a lane type of a current driving lane of the autonomous vehicle; and if the automatic driving vehicle runs on the straight lane currently, determining that the online calibration condition is met, otherwise, determining that the online calibration condition is not met.
In an embodiment of the present application, the second judgment unit 530 is specifically configured to determine whether the image detection target and the point cloud detection target are located in the lane in which the autonomous vehicle is currently driving; if both are located in the lane, to determine whether the image detection target and the point cloud detection target are both vehicle-front targets or both vehicle-rear targets; and if they are both vehicle-front targets or both vehicle-rear targets, to determine that the image detection target and the point cloud detection target are the same target.
In one embodiment of the present application, the second judgment unit 530 includes a lane judgment module and a front/rear judgment module;
the lane judgment module is used for detecting lane lines of the road image data to obtain a lane boundary line of the current running of the automatic driving vehicle; acquiring the position relation between the image detection target and the lane boundary line, and if the image detection target is positioned in the lane boundary line, determining that the image detection target is positioned in a lane where the automatic driving vehicle runs currently; virtualizing a lane boundary line of a current driving lane of the automatic driving vehicle according to the vehicle width information of the automatic driving vehicle in the laser point cloud data; and acquiring the position relation between the point cloud detection target and the virtual lane boundary line, and if the point cloud detection target is positioned in the virtual lane boundary line, determining that the point cloud detection target is positioned in the current driving lane of the automatic driving vehicle.
the front/rear judgment module is configured to acquire the positional relationship between the image detection target and the autonomous vehicle and the positional relationship between the point cloud detection target and the autonomous vehicle; if the image detection target and the point cloud detection target are located in front of the autonomous vehicle, to determine that they are both vehicle-front targets; and if they are located behind the autonomous vehicle, to determine that they are both vehicle-rear targets.
In an embodiment of the application, the second judgment unit 530 is further configured to determine, when there are a plurality of image detection targets and point cloud detection targets that are located in the lane in which the autonomous vehicle is currently driving and are also vehicle-front targets, the image detection target and the point cloud detection target closest to the autonomous vehicle as the optimal targets;
the calibration calculation unit 540 is further configured to acquire the first feature point data of the optimal image detection target and the second feature point data of the optimal point cloud detection target.
In an embodiment of the present application, the image detection target includes an image target detection frame and the point cloud detection target includes a point cloud target detection frame. The calibration calculation unit 540 is specifically configured to acquire the corner data of the image target detection frame as the first feature point data and the corner data of the point cloud target detection frame as the second feature point data, to acquire the relative extrinsic parameters between the camera and the laser radar by applying a perspective-n-point (PNP) algorithm to the first feature point data and the second feature point data, and to obtain the online calibration result from the relative extrinsic parameters between the camera and the laser radar.
It can be understood that the above sensor online calibration device for an autonomous vehicle can implement each step of the sensor online calibration method for an autonomous vehicle provided in the foregoing embodiments, and the relevant explanations about the sensor online calibration method for an autonomous vehicle are applicable to the sensor online calibration device for an autonomous vehicle, and are not described herein again.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 6, at the hardware level the electronic device includes a processor and optionally an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other by an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and runs it, forming the sensor online calibration device for the autonomous vehicle at the logic level. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
determining whether a driving lane of the autonomous vehicle meets an online calibration condition;
when the online calibration condition is met, acquiring road image data acquired by a camera of an automatic driving vehicle and laser point cloud data acquired by a laser radar, and performing target detection on the road image data and the laser point cloud data to obtain an image detection target and a point cloud detection target;
determining whether the image detection target and the point cloud detection target are the same target;
and when they are the same target, acquiring first feature point data of the image detection target and second feature point data of the point cloud detection target, and performing matching calculation on the first feature point data and the second feature point data to obtain an online calibration result between the camera and the laser radar.
The method executed by the sensor online calibration device of the autonomous vehicle disclosed in the embodiment of fig. 1 of the application can be applied to, or realized by, a processor. The processor may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed accordingly. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware.
The electronic device may further execute the method executed by the sensor online calibration device of the autonomous vehicle in fig. 1, and implement the functions of the sensor online calibration device of the autonomous vehicle in the embodiment shown in fig. 1, which are not described herein again.
An embodiment of the present application further provides a computer-readable storage medium storing one or more programs, where the one or more programs include instructions, which when executed by an electronic device including a plurality of application programs, enable the electronic device to perform the method performed by the sensor online calibration apparatus of the autonomous vehicle in the embodiment shown in fig. 1, and are specifically configured to perform:
determining whether a driving lane of the autonomous vehicle meets an online calibration condition;
when the online calibration condition is met, acquiring road image data acquired by a camera of an automatic driving vehicle and laser point cloud data acquired by a laser radar, and performing target detection on the road image data and the laser point cloud data to obtain an image detection target and a point cloud detection target;
determining whether the image detection target and the point cloud detection target are the same target;
and when they are the same target, acquiring first feature point data of the image detection target and second feature point data of the point cloud detection target, and performing matching calculation on the first feature point data and the second feature point data to obtain an online calibration result between the camera and the laser radar.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory on a computer-readable medium, such as random-access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (10)

1. An online calibration method for a sensor of an autonomous vehicle, the method comprising:
determining whether a driving lane of the automatic driving vehicle meets an online calibration condition;
when an online calibration condition is met, acquiring road image data acquired by a camera of an automatic driving vehicle and laser point cloud data acquired by a laser radar, and performing target detection on the road image data and the laser point cloud data to obtain an image detection target and a point cloud detection target;
determining whether the image detection target and the point cloud detection target are the same target;
and when they are the same target, acquiring first feature point data of the image detection target and second feature point data of the point cloud detection target, and performing matching calculation on the first feature point data and the second feature point data to obtain an online calibration result between the camera and the laser radar.
2. The method of claim 1, wherein determining whether the driving lane of the autonomous vehicle satisfies an online calibration condition comprises:
acquiring the lane type of a current driving lane of the automatic driving vehicle;
and if the automatic driving vehicle runs on the straight lane currently, determining that the online calibration condition is met, otherwise, determining that the online calibration condition is not met.
3. The method of claim 1, wherein determining whether the image detection target and the point cloud detection target are the same target comprises:
determining whether the image detection target and the point cloud detection target are within a lane in which the autonomous vehicle is currently traveling;
if both are located in the lane in which the autonomous vehicle is currently driving, determining whether the image detection target and the point cloud detection target are both vehicle-front targets or both vehicle-rear targets;
and if they are both vehicle-front targets or both vehicle-rear targets, determining that the image detection target and the point cloud detection target are the same target.
4. The method of claim 3, wherein the determining whether the image detection target and the point cloud detection target are within a lane currently being traveled by the autonomous vehicle comprises:
performing lane line detection on the road image data to obtain the boundary lines of the lane in which the autonomous vehicle is currently driving;
acquiring the positional relationship between the image detection target and the lane boundary lines, and if the image detection target lies within the lane boundary lines, determining that the image detection target is within the lane in which the autonomous vehicle is currently driving;
constructing virtual lane boundary lines for the current driving lane in the laser point cloud data according to the vehicle width information of the autonomous vehicle;
and acquiring the positional relationship between the point cloud detection target and the virtual lane boundary lines, and if the point cloud detection target lies within the virtual lane boundary lines, determining that the point cloud detection target is within the lane in which the autonomous vehicle is currently driving.
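A sketch of the two in-lane tests, one per sensor. The coordinate conventions (pixel x for the image test, signed lateral metres for the point cloud test) and the clearance margin are assumptions, not values from the patent:

```python
def image_target_in_lane(target_x_px: float, left_line_x_px: float,
                         right_line_x_px: float) -> bool:
    # Claim 4, image side: the target is in the ego lane when it lies
    # between the detected left and right lane boundary lines.
    return left_line_x_px <= target_x_px <= right_line_x_px

def cloud_target_in_lane(lateral_offset_m: float, vehicle_width_m: float,
                         margin_m: float = 0.5) -> bool:
    # Claim 4, point cloud side: the lidar frame contains no painted lines,
    # so boundary lines are virtualized around the ego centerline from the
    # vehicle width; margin_m is an assumed clearance.
    half_lane_m = vehicle_width_m / 2.0 + margin_m
    return abs(lateral_offset_m) <= half_lane_m
```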
5. The method of claim 3, wherein determining whether the image detection target and the point cloud detection target are both in front of the vehicle or both behind the vehicle comprises:
acquiring the positional relationship between the image detection target and the autonomous vehicle and the positional relationship between the point cloud detection target and the autonomous vehicle;
if the image detection target and the point cloud detection target are both located in front of the autonomous vehicle, determining that both are front targets;
and if the image detection target and the point cloud detection target are both located behind the autonomous vehicle, determining that both are rear targets.
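A sketch of the side test, assuming a vehicle-centered frame in which positive longitudinal coordinates mean "ahead of the vehicle":

```python
def common_side(img_longitudinal_m: float, pc_longitudinal_m: float):
    # Claim 5: both detections must be ahead of, or both behind, the ego
    # vehicle; a mixed result means they cannot be the same target.
    if img_longitudinal_m > 0 and pc_longitudinal_m > 0:
        return "front"
    if img_longitudinal_m < 0 and pc_longitudinal_m < 0:
        return "rear"
    return None
```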
6. The method of claim 3, wherein the method further comprises:
when there are a plurality of image detection targets and point cloud detection targets that are within the lane in which the autonomous vehicle is currently driving and are also front targets, determining the image detection target and the point cloud detection target closest to the autonomous vehicle as the optimal targets;
the acquiring of the first feature point data of the image detection target and the second feature point data of the point cloud detection target includes:
and acquiring first feature point data of the optimal image detection target and second feature point data of the optimal point cloud detection target.
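A sketch of the optimal-target selection; the center_xy attribute is an invented placeholder for a detection's ground-plane position relative to the ego vehicle:

```python
import math

def pick_optimal(detections, ego_xy=(0.0, 0.0)):
    # Claim 6: among several in-lane front targets, keep the one closest
    # to the ego vehicle (closer targets typically return denser point
    # clouds, which makes their feature points easier to extract).
    return min(detections, key=lambda d: math.dist(d.center_xy, ego_xy))
```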
7. The method of claim 1, wherein the image detection target comprises an image target detection frame and the point cloud detection target comprises a point cloud target detection frame, and wherein the acquiring of the first feature point data of the image detection target and the second feature point data of the point cloud detection target and the performing of the matching calculation on the first feature point data and the second feature point data to obtain the online calibration result between the camera and the laser radar comprise:
acquiring corner data of the image target detection frame as the first feature point data and acquiring corner data of the point cloud target detection frame as the second feature point data;
acquiring relative extrinsic parameters between the camera and the laser radar by applying a Perspective-n-Point (PnP) algorithm to the first feature point data and the second feature point data;
and obtaining the online calibration result according to the relative extrinsic parameters between the camera and the laser radar.
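A sketch of the extrinsic solve using OpenCV's standard Perspective-n-Point routine. Here corners_3d holds the point cloud detection frame corners in the lidar frame, corners_2d the matching image detection frame corners in pixels, and K the camera intrinsic matrix; establishing the corner-to-corner correspondence order is assumed to happen beforehand:

```python
import numpy as np
import cv2

def solve_extrinsics(corners_3d, corners_2d, K, dist=None):
    # Claim 7: matched 3D/2D detection-frame corners go into a PnP solver,
    # which returns the relative pose (extrinsics) of camera and lidar.
    obj = np.asarray(corners_3d, dtype=np.float64)  # Nx3, lidar frame
    img = np.asarray(corners_2d, dtype=np.float64)  # Nx2, pixel coords
    dist = np.zeros(5) if dist is None else dist    # assume no distortion
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec              # lidar-to-camera rotation and translation
```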
8. An on-line sensor calibration device for an autonomous vehicle, the device comprising:
a first judgment unit, configured to determine whether a driving lane of the autonomous vehicle meets an online calibration condition;
a target detection unit, configured to, when the online calibration condition is met, acquire road image data collected by a camera of the autonomous vehicle and laser point cloud data collected by a laser radar, and perform target detection on the road image data and the laser point cloud data to obtain an image detection target and a point cloud detection target;
a second judgment unit, configured to determine whether the image detection target and the point cloud detection target are the same target;
and a calibration calculation unit, configured to, when the image detection target and the point cloud detection target are the same target, acquire first feature point data of the image detection target and second feature point data of the point cloud detection target, and perform matching calculation on the first feature point data and the second feature point data to obtain an online calibration result between the camera and the laser radar.
9. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any one of claims 1 to 7.
10. A computer readable storage medium storing one or more programs which, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the method of any of claims 1-7.
CN202211248093.0A 2022-10-12 2022-10-12 Sensor on-line calibration method and device for automatic driving vehicle and storage medium Pending CN115546315A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211248093.0A CN115546315A (en) 2022-10-12 2022-10-12 Sensor on-line calibration method and device for automatic driving vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211248093.0A CN115546315A (en) 2022-10-12 2022-10-12 Sensor on-line calibration method and device for automatic driving vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN115546315A true CN115546315A (en) 2022-12-30

Family

ID=84732797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211248093.0A Pending CN115546315A (en) 2022-10-12 2022-10-12 Sensor on-line calibration method and device for automatic driving vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN115546315A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116434563A (en) * 2023-03-13 2023-07-14 山东华夏高科信息股份有限公司 Method, system, equipment and storage medium for detecting vehicle overguard
CN116572995A (en) * 2023-07-11 2023-08-11 小米汽车科技有限公司 Automatic driving method and device of vehicle and vehicle
CN116572995B (en) * 2023-07-11 2023-12-22 小米汽车科技有限公司 Automatic driving method and device of vehicle and vehicle

Similar Documents

Publication Publication Date Title
CN115546315A (en) Sensor on-line calibration method and device for automatic driving vehicle and storage medium
CN114279453B (en) Automatic driving vehicle positioning method and device based on vehicle-road cooperation and electronic equipment
CN114088114B (en) Vehicle pose calibration method and device and electronic equipment
CN114705121A (en) Vehicle pose measuring method and device, electronic equipment and storage medium
CN115376090A (en) High-precision map construction method and device, electronic equipment and storage medium
CN114966632A (en) Laser radar calibration method and device, electronic equipment and storage medium
CN114973198A (en) Course angle prediction method and device of target vehicle, electronic equipment and storage medium
CN114754761A (en) Optimization method and device for lane line of high-precision map, electronic equipment and storage medium
CN115950441B (en) Fusion positioning method and device for automatic driving vehicle and electronic equipment
CN116164763A (en) Target course angle determining method and device, electronic equipment and storage medium
CN116148821A (en) Laser radar external parameter correction method and device, electronic equipment and storage medium
CN116152347A (en) Vehicle-mounted camera mounting attitude angle calibration method and system
CN114755663A (en) External reference calibration method and device for vehicle sensor and computer readable storage medium
CN116051812A (en) Target detection method and device, electronic equipment and storage medium
CN115112125A (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN115031755A (en) Automatic driving vehicle positioning method and device, electronic equipment and storage medium
CN114677448A (en) External reference correction method and device for vehicle-mounted camera, electronic equipment and storage medium
CN115014395A (en) Real-time calibration method and device for vehicle course angle for automatic driving
CN114494200A (en) Method and device for measuring trailer rotation angle
CN113890668A (en) Multi-sensor time synchronization method and device, electronic equipment and storage medium
CN112017241A (en) Data processing method and device
CN116610766A (en) Map updating method and device, electronic equipment and computer readable storage medium
CN114814911A (en) Calibration method and device for automatic driving vehicle, electronic equipment and storage medium
CN114648576B (en) Target vehicle positioning method, device and system
CN114723819A (en) Calibration method and device for automatic driving vehicle, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination