CN117818463A - Vehicle-mounted ADB system architecture based on radar fusion and application method - Google Patents

Vehicle-mounted ADB system architecture based on radar fusion and application method

Info

Publication number
CN117818463A
CN117818463A (application CN202311860941.8A)
Authority
CN
China
Prior art keywords
radar
data
coordinate system
camera
adb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311860941.8A
Other languages
Chinese (zh)
Inventor
郭得岁
李辉
秦赛锋
梁梦颢
王朝阳
李靖
肖潇
黄松涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Henan THB Electric Co Ltd
Original Assignee
Xidian University
Henan THB Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University and Henan THB Electric Co Ltd
Priority to CN202311860941.8A
Publication of CN117818463A
Legal status: Pending

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 — Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 — Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention provides a vehicle-mounted ADB system architecture based on radar-camera fusion, which comprises a data collection module, an ADB intelligent decision module and an ADB lamp control module connected in sequence. The ADB intelligent decision module comprises a CAN communication module and a processor, and the CAN communication module is connected with the ADB lamp control module. The data collection module comprises a millimeter wave radar and a camera, both arranged on the front side of the automobile; the millimeter wave radar is connected with the processor, and the left and right lamps of the automobile are connected with the ADB lamp control module. By interactively fusing radar and camera data, the invention ensures that the data source is accurate and improves the robustness of the system. The method can use millimeter wave radar data to assist the video stream data acquired by the camera in computing accurate physical position information, is suitable for severe weather scenes such as rain, snow and fog, and thereby extends the application scenarios of the ADB system.

Description

Vehicle-mounted ADB system architecture based on radar fusion and application method
Technical Field
The invention relates to the technical field of vehicle lighting systems, in particular to a vehicle-mounted ADB system architecture based on radar fusion and an application method thereof.
Background
An adaptive driving beam (ADB) system is an intelligent adaptive high beam control system that can reshape the high beam projection according to the driving scene: while guaranteeing good vision for the driver of the own vehicle, it avoids glare to other road users. The ADB system introduces the concept of the matrix beam and relies mainly on switching the high-resolution LED matrix beads in the headlamp module on or off to form the illumination beam. The ADB system can intelligently turn the high beam on and off according to the running state of the own vehicle, the traffic environment and the state of other road vehicles; it identifies targets such as vehicles and pedestrians from camera data, controls each light-emitting unit independently through the control system, switches off part of the illumination area, and adaptively changes the high-beam light distribution so as to avoid glare to other road users, as shown in fig. 1. The scene modes that the ADB can handle include the meeting mode, the following mode, the single-target mode and the multi-target mode, all adjusted dynamically in real time.
The traditional vehicle-mounted ADB system architecture is shown in fig. 2. The system uses the road-vehicle situation collected by the camera as raw data, calibrates the intrinsic and extrinsic parameters of the camera, and establishes the conversion between real-world coordinates and camera pixel coordinates. The camera data are transmitted to the processor through the CAN bus; the processor calculates the real distance and the corresponding angle between the detected vehicle and the own vehicle, and from the position and angle data obtains the rectangular coordinates of the two side boundaries of the target relative to the LED matrix lamp group, from which the spherical coordinates of the two side boundaries with the LED matrix lamp group as origin and the minimum shielding included angle of the LED matrix lamp group are derived (if the image data show no target vehicle in front of the own vehicle, the LED matrix lamp group maintains the high beam state; if the image data show a target vehicle in front of the own vehicle, the LED beads corresponding to the minimum shielding included angle in the LED matrix lamp group are switched off). The processor issues the lamp control information to the LED matrix bead control switch through the CAN bus, and the matrix beads execute the corresponding operation to realize adaptive switching of the headlamp beam.
As can be seen from the above system architecture, because only the camera provides the information stream for the ADB system, bead control may become inaccurate. For example: 1. when the target vehicles and pedestrians in the image are far from the own vehicle and the detection is inaccurate, the switched-off regions of the matrix beads do not match the targets; 2. when a target is very close to the vehicle body, its area in the image grows, the calculated shielding included angle becomes too large, and the number of beads that are turned off far exceeds the number that actually needs to be turned off. Furthermore, in severe weather the image data of the camera may become unreliable: rain, snow or fog may blur or occlude the camera, so that the collected data are inaccurate, the radial positions of vehicles and pedestrians become fuzzy and cannot be located accurately, which seriously affects the accuracy of the switched-off region of the matrix beads.
Disclosure of Invention
Aiming at the technical problem of inaccurate bead control in the traditional system, the invention provides a vehicle-mounted ADB system architecture based on radar-camera fusion and an implementation method thereof. The intelligent scene-recognition ADB system architecture uses both video data and millimeter wave radar data; the millimeter wave radar data compensate for the shortcomings of the video image data, which effectively solves the problem that the radial positions of vehicles and pedestrians detected from video data are fuzzy, realizes accurate positioning of vehicles and pedestrians, and on that basis realizes accurate control of the lamp beads.
In order to achieve the above purpose, the technical scheme of the invention is realized as follows:
the beneficial effects of the invention are as follows:
(1) A millimeter wave radar is added to the traditional architecture for data acquisition, so that the specific physical positions of other vehicles and pedestrians on the road can be detected more accurately. At the same time, the high-resolution image data collected by the camera and the data detected by the radar are interactively fused, and the radar and the camera assist each other to jointly determine the accurate position information of the object. When data are acquired by a single camera, the traditional ADB system architecture can produce misjudgments, such as false recognition under certain specific conditions; interactively fusing radar and camera data ensures the accuracy of the data source and improves the robustness of the system.
(2) The architecture is suitable for various kinds of severe weather, such as rain, snow and fog. Because the traditional architecture uses only the camera to collect data, blurred images and unclear imaging can occur in rain, snow and fog, which limits its application scenarios. The vehicle-mounted ADB system architecture based on radar fusion adds millimeter wave radar data; the radar data can assist the video stream data acquired by the camera in computing accurate physical position information, which extends the application scenarios of the ADB system.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a comparison of headlamp beam patterns in scenes without and with ADB.
Fig. 2 is a conventional onboard ADB system architecture.
Fig. 3 is a vehicle-mounted ADB system architecture based on radar fusion.
Fig. 4 is a diagram of the conversion relations among the five coordinate systems.
Fig. 5 is a coordinate system relationship diagram.
Fig. 6 is a graph of radar coordinates versus world coordinate plane.
Fig. 7 is a schematic diagram of camera pinhole imaging.
Fig. 8 is a diagram of the relationship between the pixel coordinate system and the picture coordinate system.
Fig. 9 is a schematic diagram of radial distortion.
Fig. 10 is a schematic view of tangential distortion.
Fig. 11 is a YOLOv3 network structure.
Fig. 12 is a schematic diagram of a nearest neighbor matching flow.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without any inventive effort, are intended to be within the scope of the invention.
Example 1
As shown in fig. 3, a vehicle-mounted ADB system architecture based on radar fusion comprises a data collection module, an ADB intelligent decision module and an ADB lamp control module connected in sequence. The ADB intelligent decision module comprises a CAN communication module and a processor, and the CAN communication module is connected with the ADB lamp control module. The data collection module comprises a millimeter wave radar and a camera, both arranged on the front side of the automobile; the millimeter wave radar is connected with the camera, the camera is connected with the processor through a coaxial line, and the left and right lamps of the automobile are connected with the ADB lamp control module. The millimeter wave radar and the camera are mainly used for collecting radar data and image data of the environment, so as to obtain the world coordinates of the target pedestrians and vehicles in front of the own vehicle and their positions in the image. The ADB decision module mainly decides how to control the ADB matrix beads, and the ADB lamp control module controls the ADB matrix beads in the left and right lamps according to the control scheme selected by the ADB decision module.
Specifically, the CAN communication module comprises a channel I and a channel II, both connected with the processor; the millimeter wave radar is connected with channel I through a CAN bus, and channel II is connected with the ADB lamp control module through the CAN bus. The processor is an Nvidia Xavier processor, and the millimeter wave radar is an ARS408 millimeter wave radar.
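For illustration only, the following sketch shows how such a data collection and lamp control loop might be set up in software. The CAN channel names, the arbitration identifier, the use of python-can and OpenCV, and the camera device index are assumptions introduced for the example, not values taken from the disclosed embodiment.

```python
# Illustrative sketch of the Example 1 wiring in software; channel names, library
# choices (python-can, OpenCV) and the camera index are assumptions.
import can
import cv2

radar_bus = can.interface.Bus(channel="can0", interface="socketcan")  # channel I: ARS408 radar
lamp_bus = can.interface.Bus(channel="can1", interface="socketcan")   # channel II: ADB lamp control module
camera = cv2.VideoCapture(0)                                          # camera on the coaxial link

def read_one_cycle():
    """Collect one radar CAN message and one image frame for the ADB decision module."""
    msg = radar_bus.recv(timeout=0.05)   # raw object message from the millimeter wave radar, or None
    ok, frame = camera.read()            # BGR image frame from the camera
    return msg, (frame if ok else None)

def send_lamp_command(payload: bytes):
    """Issue bead control information to the ADB lamp control module over channel II."""
    lamp_bus.send(can.Message(arbitration_id=0x300, data=payload, is_extended_id=False))
```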
Example 2
The application method of the vehicle-mounted ADB system architecture based on the radar fusion comprises the following specific steps:
s1: and acquiring data by utilizing the millimeter wave radar and the camera, and transmitting the acquired data to the ADB decision module.
Specifically, the millimeter wave radar collects the radial and lateral distance data of vehicles and pedestrians in the road scene; the radar data obtained by the millimeter wave radar are taken as world coordinates and transmitted to the ADB decision module through the CAN bus. The camera collects high-definition image data of the road scene; the image data obtained by the camera are taken as image coordinates and transmitted to the ADB decision module through the coaxial line.
The camera transmits the collected image information to the processor in real time through the coaxial line; the processor detects pedestrians and vehicles in the video using the YOLOv3 algorithm and marks them with bounding boxes on every video frame.
The millimeter wave radar is an ARS408 millimeter wave radar; it acquires radar data of the environment in front of the automobile and returns the data in the form of CAN messages. A CAN message returned by the millimeter wave radar contains the target serial number, the relative radial distance (Range) between the target and the radar, the relative speed (Vrel), the azimuth angle (Azimuth), the radar cross section (RCS), the signal-to-noise ratio (SNR) and other information, but the fields in the CAN message do not directly represent the true values of this information; the true values are obtained by parsing each field.
The parsing formulas are shown in Table 1.
Table 1. CAN message parsing formulas
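Table 1 is not reproduced above; as a hedged illustration of the kind of parsing it describes, the following sketch applies the generic raw value × resolution + offset decode to one 8-byte CAN payload. The field layout and the resolution/offset values are assumptions chosen for the example, not the actual ARS408 message definition.

```python
# Sketch of the generic "raw * resolution + offset" decode implied by Table 1.
# Field layout and resolution/offset values below are illustrative assumptions.
FIELDS = {
    # name: (start_bit, bit_length, resolution, offset)
    "range_m":     (0, 13, 0.2,  -500.0),   # relative radial distance
    "azimuth_deg": (13, 10, 0.1,  -51.2),   # azimuth angle
    "vrel_mps":    (23, 10, 0.25, -128.0),  # relative radial speed
    "rcs_dbsm":    (33, 8,  0.5,  -64.0),   # radar cross section
}

def decode_object_frame(payload: bytes) -> dict:
    """Turn one 8-byte CAN payload into physical values via scale/offset parsing."""
    raw = int.from_bytes(payload, byteorder="big")
    total_bits = len(payload) * 8
    values = {}
    for name, (start, length, res, off) in FIELDS.items():
        shift = total_bits - start - length          # big-endian bit extraction
        field = (raw >> shift) & ((1 << length) - 1)
        values[name] = field * res + off             # true value = raw * resolution + offset
    return values

print(decode_object_frame(bytes.fromhex("A3C1540000000000")))
```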
S2: and (3) calibrating the camera so that the world coordinates and the image coordinates can be in one-to-one correspondence, and converting the world coordinates obtained by analyzing the radar data into the image coordinates. The specific method comprises the following steps:
s21: and (3) calibrating the camera by establishing a conversion relation between an image coordinate system and a world coordinate system:
image coordinate system (O) c -X c Y c Z c ) And world coordinate system (O) w -X w Y w Z w ) Also referred to as camera extrinsic calibration, as shown in FIG. 5, there is an offset Y in the Y-direction of the image coordinate system and the world coordinate system cw (i.e. the position of the camera is different from the bottom of the headstock) and the offset Z in the Z direction cw (i.e. the camera is positioned at a distance from the front side of the bottom of the vehicle head).
Meanwhile, since the camera cannot be completely horizontal and vertical during the actual installation, there may also exist pitch, yaw and roll relations around the X, Y, Z axis between the image coordinate system and the world coordinate system.
Defining a three-dimensional translation vector T and a 3X 3 rotation matrix R, the conversion relation between the image coordinate system and the world coordinate system is as follows:
wherein X is c Is the x-axis coordinate of an image coordinate system, Y c Z is the y-axis coordinate of the image coordinate system c Is the z-axis coordinate of an image coordinate system, X w Is the x-axis coordinate of the world coordinate system, Y w Is the y-axis coordinate of the world coordinate system,Z w is the z-axis coordinate of the world coordinate system.
For convenience of subsequent computation, it is converted here into a fourth order matrix form:
m in the formula 1 Is a 4 x 4 camera extrinsic matrix.
S22: transforming the world coordinate system into the pixel coordinate system by matrix:
the essence of the image is a two-dimensional matrix with the size of M multiplied by N, one element in the two-dimensional matrix is one pixel point, and a picture coordinate system (O t -X t Y t ) And a pixel coordinate system (O p -U p V p ) As shown in FIG. 8, (u, v) is the pixel point in the v-th row of the u-th column, the origin of the picture coordinate system (x t0 ,y t0 ) The corresponding coordinates in the pixel coordinate system are (u) c ,v c ) And the point is at the center of the pixel coordinate system.
Let dx be t And dy t X in the picture coordinate system is respectively for each pixel point t Axial direction and Y t The unit length in the axial direction is (u, v) and (x) t ,y t ) The relation of (2) is:
the conversion relationship between the pixel coordinate system and the picture coordinate system is:
the conversion relationship between the pixel coordinate system and the image coordinate system is as follows:
wherein M is 2 Is a 3 x 4 camera reference matrix.
S23: the conversion relation between the pixel coordinate system and the radar coordinate system is obtained according to the conversion relation between the image coordinate system and the world coordinate system and the conversion relation between the pixel coordinate system and the picture coordinate system, and is as follows:
and converting world coordinates collected by the millimeter wave radar into image coordinates by using the conversion relation between the pixel coordinate system and the radar coordinate system.
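As a sketch of steps S21-S23, the following example builds the 4×4 extrinsic matrix M_1 and a 3×4 intrinsic matrix M_2 and projects a radar (world) point into the pixel coordinate system. The rotation, translation, focal length and principal point values are placeholder calibration results assumed for illustration, not values from the patent.

```python
# Minimal sketch of S21-S23: world (radar) coordinates -> camera coordinates -> pixels.
import numpy as np

R = np.eye(3)                                # rotation from world to camera (assumed identity)
T = np.array([0.0, -1.2, -0.5])              # translation: camera offset from the world origin (assumed)
M1 = np.eye(4)
M1[:3, :3], M1[:3, 3] = R, T                 # 4x4 extrinsic matrix

f, dx, dy = 0.006, 5e-6, 5e-6                # focal length and pixel pitch (assumed)
u_c, v_c = 640.0, 360.0                      # principal point in pixels (assumed)
M2 = np.array([[f / dx, 0.0,    u_c, 0.0],   # 3x4 intrinsic matrix
               [0.0,    f / dy, v_c, 0.0],
               [0.0,    0.0,    1.0, 0.0]])

def radar_to_pixel(xw, yw, zw):
    """Project a radar/world point: Z_c * [u, v, 1]^T = M2 @ M1 @ [Xw, Yw, Zw, 1]^T."""
    p_cam = M1 @ np.array([xw, yw, zw, 1.0])
    u, v, s = M2 @ p_cam
    return u / s, v / s

print(radar_to_pixel(2.0, 0.0, 30.0))
```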
S24: and carrying out distortion correction on the information fused image by using a distortion correction formula.
The actual imaging of the camera is not a simple ideal aperture imaging and some distortion may occur, including radial distortion caused by the lens shape and tangential distortion generated during assembly of the camera, the schematic diagrams of which are shown in fig. 9 and 10, respectively.
The distortion correction formula of the camera is:
wherein (x ', y') is the corrected pixel point coordinate, and (x, y) is the ideal imaging pixel point coordinate, k 1 ,k 2 For radial distortion correction coefficient, p 1 ,p 2 For tangential distortion correction coefficient, r 2 =x 2 +y 2 . And inputting the coordinate information of the image pixel points with the information fusion into a distortion correction formula to correct, so that the subsequent failure of identifying pedestrians and vehicles in the image due to image deformation is avoided.
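The distortion model of S24 can be sketched as follows; the coefficient values k_1, k_2, p_1 and p_2 are placeholders and would in practice come from the camera calibration.

```python
# Sketch of the S24 distortion model: radial terms k1, k2 and tangential terms p1, p2.
k1, k2 = -0.12, 0.03      # radial distortion coefficients (placeholder values)
p1, p2 = 1e-3, -5e-4      # tangential distortion coefficients (placeholder values)

def distort(x, y):
    """Map ideal (undistorted) normalised image coordinates to distorted ones."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

print(distort(0.1, -0.05))
```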
S3: and comparing world coordinates converted into image coordinates with positions of pedestrians and vehicles in image data acquired by previous shooting by using a YOLOv3 algorithm, so as to realize fusion matching of two paths of data information.
YOLOv3 is a regression-based network improved from YOLOv1 and YOLOv2. It runs very fast while maintaining accuracy, meets real-time detection requirements, and has enhanced capability for recognizing and detecting small objects. Earlier detection systems repurpose a classifier or localizer to perform the detection task, and the network can be applied to images of different sizes. The network structure of YOLOv3 is shown in fig. 11.
The specific method comprises the following steps:
s31: and adopting a nearest neighbor frame matching algorithm to synchronize the millimeter wave radar data with the visual data in time.
After the millimeter wave radar and camera data have been spatially calibrated, the two sensors still work at different frequencies, and delays exist in the data communication between sensors. The sampling frame rate of the millimeter wave radar is 20 frames/second and that of the camera is 30 frames/second, so the data acquired by the sensors cannot be matched frame by frame because of the inconsistent sampling frequencies. Time synchronization of the radar coordinates and the video detection data of the same target is therefore key to improving the accuracy of fused target tracking.
A common method of temporal registration is interpolation-extrapolation, which maps data from a high-accuracy measurement timeline onto a low-accuracy measurement timeline. First, a time slice is defined; the division criterion is based on the motion state of the target: the faster the motion, the shorter the corresponding fusion time slice. The sensor measurement data are then sorted by measurement time accuracy. Finally, each piece of high-accuracy time data is interpolated and extrapolated to the lowest-accuracy time point to complete the time registration.
First, timestamped millimeter wave radar data are obtained from the radar measurements and timestamped image frame data are obtained from the camera images. The two sets of timestamps are extracted, the corresponding average delays are subtracted, and nearest neighbor matching of the radar data and the image frame data is then carried out, as shown in fig. 12.
Let the image data frame set be defined as:
C = {c_1, c_2, c_3, …, c_m}
The radar data frame set is defined as:
R = {r_1, r_2, r_3, …, r_n}
The timestamps of the image data frames and the radar point cloud frames are, respectively:
T_c = {t_c1, t_c2, t_c3, …, t_cm}
T_r = {t_r1, t_r2, t_r3, …, t_rn}
Since the camera clock and the millimeter wave radar clock are not perfectly synchronized with the host clock, there is a delay between each of them and the host. Let the average delay between the camera clock and the host clock be δ_C and the average delay between the millimeter wave radar clock and the host clock be δ_R. Because the image data frame rate f_c and the millimeter wave radar frame rate f_r differ, the sensor with the lower frame rate is selected and its frame rate is denoted f_min. A frame time difference threshold T_th, representing the frame synchronization time accuracy, is set, satisfying the formula:
and fusing the image frame data and the millimeter wave radar frame data by using a sliding window method to generate a result set.
I.e. for
For the result set O (C, R), wherein each set of data (C i ,R j ) Are data sets with synchronization time errors less than the frame time difference threshold.
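A minimal sketch of the S31 nearest-neighbour time synchronization follows, assuming a 30 fps camera, a 20 fps radar and illustrative average delays δ_C, δ_R and threshold T_th; these numeric values are assumptions, not the patent's values.

```python
# Sketch of S31: subtract the average clock delays, then pair each radar frame
# (the lower-rate sensor here) with the closest camera frame within T_th.
def synchronize(cam_stamps, radar_stamps, delta_c=0.015, delta_r=0.005, t_th=0.025):
    """Return (camera index, radar index) pairs whose corrected timestamps differ by less than t_th."""
    cam = [t - delta_c for t in cam_stamps]      # camera timestamps minus average camera delay
    radar = [t - delta_r for t in radar_stamps]  # radar timestamps minus average radar delay
    result, j = [], 0
    for i, tr in enumerate(radar):
        # slide the window over the camera frames until the nearest one is reached
        while j + 1 < len(cam) and abs(cam[j + 1] - tr) <= abs(cam[j] - tr):
            j += 1
        if cam and abs(cam[j] - tr) < t_th:
            result.append((j, i))                # (camera frame index, radar frame index)
    return result

cams = [k / 30.0 for k in range(10)]             # 30 fps camera timestamps
radars = [k / 20.0 for k in range(7)]            # 20 fps radar timestamps
print(synchronize(cams, radars))
```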
S32: and the millimeter wave radar is fused with visual information.
The invention needs to fuse the target category information obtained by the camera with the target distance and speed information obtained by the millimeter wave radar. The two data types are different, and considering the real-time detection requirement, the computation load of the data fusion must not be too high, so decision-level fusion is adopted.
After the association of the five coordinate systems through camera calibration and the time synchronization of the data are completed, the millimeter wave radar scanning points can be projected into the pixel coordinate system, and the camera detection result outputs the pixel coordinates of the four corner points of each bounding box. The millimeter wave radar detection result and the camera detection result can therefore be matched according to whether the projection of a radar scanning point in the pixel coordinate system falls inside a target bounding box.
Let the pixel coordinates of the four corner points of a target bounding box be (u_1, v_1), (u_1, v_2), (u_2, v_1) and (u_2, v_2), and let the projection of the radar scanning point onto the pixel coordinate system be (u_r, v_r). Since the radar mounting position is lower than the camera, the projected point of the scanning point in the pixel coordinate system is generally not located at the centre of the target bounding box but still lies inside it, so the determination rule can be set as: u_1 < u_r < u_2 and v_1 < v_r < v_2. Targets satisfying the determination rule undergo information fusion, and their category, speed and distance information is output; targets that do not satisfy the determination rule are not fused. In the pixel coordinate system, if the interior of a target bounding box matches no radar projection point, the radar has missed the target and only its category information is output; if a radar projection point matches no surrounding target bounding box, the camera has missed the target and only its speed and distance information is output.
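The S32 decision rule can be sketched as follows; the detection and radar records are made-up example data, and the dictionary layout is an assumption introduced for illustration.

```python
# Sketch of the S32 rule: a radar projection (u_r, v_r) is fused with a camera box
# (u1, v1)-(u2, v2) when u1 < u_r < u2 and v1 < v_r < v2.
def fuse(detections, radar_points):
    """Fuse camera detections with radar points projected into the pixel frame."""
    fused, matched_radar = [], set()
    for det in detections:
        u1, v1, u2, v2 = det["box"]
        hit = None
        for k, rp in enumerate(radar_points):
            u_r, v_r = rp["uv"]
            if u1 < u_r < u2 and v1 < v_r < v2:
                hit = rp
                matched_radar.add(k)
                break
        if hit:                                   # camera and radar agree: category, speed, range
            fused.append({"cls": det["cls"], "speed": hit["speed"], "range": hit["range"]})
        else:                                     # radar missed this target: category only
            fused.append({"cls": det["cls"]})
    for k, rp in enumerate(radar_points):         # camera missed this target: speed and range only
        if k not in matched_radar:
            fused.append({"speed": rp["speed"], "range": rp["range"]})
    return fused

dets = [{"box": (300, 200, 420, 320), "cls": "car"}]
points = [{"uv": (350, 280), "speed": -3.2, "range": 42.0},
          {"uv": (900, 400), "speed": 1.1, "range": 15.5}]
print(fuse(dets, points))
```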
In fig. 11, a 416×416×3 RGB image is subjected to convolution, pooling, upsampling, tensor concatenation and other operations to generate feature maps at 3 scales: y1 of 13×13×255, y2 of 26×26×255 and y3 of 52×52×255. In the figure, DBL denotes a convolutional layer followed by a batch normalization (BN) layer and a Leaky ReLU activation layer; resn denotes zero padding, a DBL and n res_units, where a res_unit is a residual structure in which the original tensor x and the tensor x passed through two DBL layers are combined by a tensor add operation; resx denotes the combination of res1, res2 and res8; USP denotes the upsampling operation; conv denotes a convolution operation; concat denotes a tensor concatenation operation. The YOLOv3 model is used to detect pedestrians and vehicles in the video, and the detection boxes are output on each frame image.
S4: and issuing lamp control information by using the fused result.
S41: through step S3, the interaction fusion of the millimeter wave radar and the camera information stream has been realized, the original radar data and the video data of the target meeting the determination rule are acquired for calculating the included angle with the own vehicle, and the coordinates of the camera device on the image pixels are set as (x o ,y o ) We use radar data as a standard, and the coordinates of the radar data on the image pixels are (x r ,y r ) Included angle ofSequentially calculating the included angle theta corresponding to the objects matched with each frame of the video stream of the millimeter wave radar and the camera, and finally, merging the included angles theta corresponding to all the objects to obtain the included angle theta corresponding to the frame data final =U(θ 12 ,…,θ n ) Where U () represents the union, e.g., θ 1 Is between 10 DEG and 20 DEG, theta 2 Between 15 deg. and 30 deg., then U (theta 12 ) From 10 ° to 30 °.
S42: the detected included angle between the vehicle and the own vehicle is obtained through 1), corresponding ADB matrix lamp bead control information is selected according to the one-to-one correspondence between the preset included angle and the ABD matrix lamp bead control information, the ADB matrix lamp bead control information is sent to an ADB matrix lamp bead control module through a CAN bus, and the module executes corresponding operation to realize the far and near light switching control of the ADB matrix lamp bead.
The foregoing description covers only preferred embodiments of the invention and is not intended to limit the invention; any modifications, equivalent substitutions and improvements made within the spirit and principles of the invention are intended to be included within the scope of protection of the invention.

Claims (10)

1. The vehicle-mounted ADB system architecture based on the radar fusion is characterized by comprising a data collection module, an ADB intelligent decision module and an ADB lamp control module which are sequentially connected, wherein the ADB intelligent decision module comprises a CAN communication module and a processor, the CAN communication module is connected with the processor, the CAN communication module and the processor are both connected with the data collection module, the data collection module comprises a millimeter wave radar and a camera, the millimeter wave radar and the camera are both arranged on the front side of an automobile, the millimeter wave radar is connected with the processor, and a left lamp and a right lamp of the automobile are both connected with the ADB lamp control module.
2. The vehicle-mounted ADB system architecture based on the radar fusion according to claim 1, wherein the CAN communication module comprises a channel i and a channel ii, the channel i and the channel ii are both connected with the processor, the millimeter wave radar is connected with the channel i through a CAN bus, and the channel ii is connected with the ADB lamp control module through the CAN bus.
3. The vehicle-mounted ADB system architecture based on the radar fusion of claim 2, wherein the processor is an Nvidia Xavier processor.
4. A vehicle-mounted ADB system architecture based on radar fusion according to claim 3, wherein the millimeter wave radar is ARS408 millimeter wave radar.
5. The method for using the vehicle-mounted ADB system frame based on the radar fusion according to any one of claims 1 to 4, characterized by comprising the following specific steps:
s1: acquiring data by utilizing a millimeter wave radar and a camera, and transmitting the acquired data to an ADB decision module;
s2: performing camera calibration on the camera, and converting world coordinates obtained by radar data analysis into image coordinates;
s3: comparing the world coordinates converted to the image coordinates with positions of pedestrians and vehicles in the image data acquired by previous shooting to realize fusion matching of radar data and camera data information;
s4: and issuing lamp control information by using the fused result.
6. The method for using a vehicle-mounted ADB system frame based on radar fusion according to claim 5, wherein the method for converting world coordinates obtained by analyzing radar data into image coordinates in step S2 is as follows:
s21: and (3) calibrating the camera by establishing a conversion relation between an image coordinate system and a world coordinate system:
image coordinate system (O) c -X c Y c Z c ) And world coordinate system (O) w -X w Y w Z w ) Is also called camera extrinsic calibration, the image coordinate system and the world coordinate system have an offset Y in the Y direction cw And an offset Z in the Z direction cw The method comprises the steps of carrying out a first treatment on the surface of the Since the camera cannot be completely horizontal and vertical in the actual installation process, a pitching, yawing and rolling relation around a X, Y, Z axis exists between the image coordinate system and the world coordinate system;
defining a three-dimensional translation vector T and a 3X 3 rotation matrix R, the conversion relation between the image coordinate system and the world coordinate system is as follows:
wherein X is c Is the x-axis coordinate of an image coordinate system, Y c Z is the y-axis coordinate of the image coordinate system c Is the z-axis coordinate of an image coordinate system, X w Is the x-axis coordinate of the world coordinate system, Y w Z is the y-axis coordinate of the world coordinate system w Z-axis coordinates of a world coordinate system;
here it is converted into a fourth order matrix form:
m in the formula 1 A camera extrinsic matrix of 4 x 4;
s22: transforming the world coordinate system into the pixel coordinate system by matrix:
let the image be a two-dimensional matrix with the size of M x N, one element in the two-dimensional matrix is a pixel point, (u, v) be the pixel point in the ith column and the ith row, and the origin of the picture coordinate system (x t0 ,y t0 ) The corresponding coordinates in the pixel coordinate system are (u) c ,v c ) And the point is at the center of the pixel coordinate system;
let dx be t And dy t Respectively sitting on the picture for each pixel pointX in standard system t Axial direction and Y t The unit length in the axial direction is (u, v) and (x) t ,y t ) The relation of (2) is:
the conversion relationship between the pixel coordinate system and the picture coordinate system is:
the conversion relationship between the pixel coordinate system and the image coordinate system is as follows:
wherein M is 2 A 3 x 4 camera reference matrix;
s23: the conversion relation between the pixel coordinate system and the radar coordinate system is obtained according to the conversion relation between the image coordinate system and the world coordinate system and the conversion relation between the pixel coordinate system and the picture coordinate system, and is as follows:
converting world coordinates collected by the millimeter wave radar into image coordinates by utilizing the conversion relation between the pixel coordinate system and the radar coordinate system;
s24: and carrying out distortion correction on the information fused image by using a distortion correction formula:
the actual imaging of the camera is not simple and ideal aperture imaging, and distortion to a certain extent can occur, including radial distortion caused by the shape of the lens and tangential distortion generated in the assembly process of the camera, and the distortion correction formula of the camera is as follows:
x 'is the corrected pixel abscissa, y' is the corrected pixel ordinate, x is the ideal imaged pixel abscissa, y is the ideal imaged pixel ordinate, k 1 ,k 2 For radial distortion correction coefficient, p 1 ,p 2 For tangential distortion correction coefficient, r 2 =x 2 +y 2 And inputting the coordinate information of the image pixel points with the information fusion into a distortion correction formula for correction.
7. The method for using a vehicle-mounted ADB system frame based on radar fusion according to claim 6, wherein the method for fusion matching of radar data and camera data information in step S3 is as follows:
S31: adopting a nearest neighbor frame matching algorithm to synchronize the millimeter wave radar data with the visual data in time;
S32: fusing the millimeter wave radar data with the visual information.
8. The method for using a vehicle-mounted ADB system frame based on radar fusion according to claim 7, wherein in step S31, the method for time synchronizing millimeter wave radar data with visual data by using nearest neighbor frame matching algorithm is as follows:
first, timestamped millimeter wave radar data are obtained from the radar measurements and timestamped image frame data are obtained from the camera images; the two sets of timestamps are extracted, the corresponding average delays are subtracted, and nearest neighbor matching of the radar data and the image frame data is then carried out;
let the image data frame set be defined as:
C = {c_1, c_2, c_3, …, c_m};
the radar data frame set is defined as:
R = {r_1, r_2, r_3, …, r_n};
the timestamps of the image data frames and the radar point cloud frames are, respectively:
T_c = {t_c1, t_c2, t_c3, …, t_cm}
T_r = {t_r1, t_r2, t_r3, …, t_rn};
since the camera clock and the millimeter wave radar clock are not perfectly synchronized with the host clock, there is a delay between each of them and the host; the average delay between the camera clock and the host clock is δ_C, and the average delay between the millimeter wave radar clock and the host clock is δ_R;
because the image data frame rate f_c and the millimeter wave radar frame rate f_r differ, the sensor with the lower frame rate is selected, its frame rate is denoted f_min, and a frame time difference threshold T_th representing the frame synchronization time accuracy is set, satisfying the formula:
the image frame data and the millimeter wave radar frame data are fused using a sliding-window method to generate a result set, i.e. for every pair whose synchronization time error is less than the frame time difference threshold,
O(C, R) = O(C, R) + (c_i, r_j)
and each group of data (c_i, r_j) in the result set O(C, R) is a data pair whose synchronization time error is less than the frame time difference threshold.
9. The method for using a vehicle-mounted ADB system frame based on radar fusion according to claim 8, wherein the specific method for fusing millimeter wave radar with visual information in step S32 is as follows:
let the pixel coordinates of the four corner points of a target bounding box be (u_1, v_1), (u_1, v_2), (u_2, v_1) and (u_2, v_2), and the projection of the radar scanning point onto the pixel coordinate system be (u_r, v_r); the determination rule is set as: u_1 < u_r < u_2 and v_1 < v_r < v_2; targets satisfying the determination rule undergo information fusion processing, and their category, speed and distance information is output; targets that do not satisfy the determination rule are not fused; in the pixel coordinate system, if the interior of a target bounding box matches no radar projection point, the radar has missed the target and only its category information is output; if a radar projection point matches no surrounding target bounding box, the camera has missed the target and only its speed and distance information is output.
10. The method for using a vehicle-mounted ADB system frame based on the radar fusion according to any one of claims 7 to 9, wherein the method for issuing the lighting control information by using the fused result in step S4 is as follows:
s41: let the coordinates of the camera device on the image pixels be (x o ,y o ) We use radar data as a standard, and the coordinates of the radar data on the image pixels are (x r ,y r ) Included angle ofSequentially calculating the included angle theta corresponding to the objects matched with each frame of the video stream of the millimeter wave radar and the camera, and finally, merging the included angles theta corresponding to all the objects to obtain the included angle theta corresponding to the frame data final =U(θ 1 ,θ 2 ,…,θ n ) Wherein U () represents the union;
s42: the detected included angle between the vehicle and the own vehicle is obtained through S41, corresponding ADB matrix lamp bead control information is selected according to the one-to-one correspondence between the preset included angle and the ABD matrix lamp bead control information, the corresponding ADB matrix lamp bead control information is sent to an ADB matrix lamp bead control module through a CAN bus, and the module executes corresponding operation to realize the far and near light switching control of the ADB matrix lamp bead.
CN202311860941.8A 2023-12-31 2023-12-31 Vehicle-mounted ADB system architecture based on radar fusion and application method Pending CN117818463A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311860941.8A CN117818463A (en) 2023-12-31 2023-12-31 Vehicle-mounted ADB system architecture based on radar fusion and application method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311860941.8A CN117818463A (en) 2023-12-31 2023-12-31 Vehicle-mounted ADB system architecture based on radar fusion and application method

Publications (1)

Publication Number Publication Date
CN117818463A true CN117818463A (en) 2024-04-05

Family

ID=90514848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311860941.8A Pending CN117818463A (en) 2023-12-31 2023-12-31 Vehicle-mounted ADB system architecture based on radar fusion and application method

Country Status (1)

Country Link
CN (1) CN117818463A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination