CN111898582B - Obstacle information fusion method and system for binocular camera and millimeter wave radar

Info

Publication number
CN111898582B
Authority
CN
China
Prior art keywords
information
millimeter wave
lane
list
camera
Prior art date
Legal status
Active
Application number
CN202010809921.8A
Other languages
Chinese (zh)
Other versions
CN111898582A (en)
Inventor
周坤
曾小韬
孙辉
Current Assignee
Tsinghua University
Suzhou Automotive Research Institute of Tsinghua University
Original Assignee
Tsinghua University
Suzhou Automotive Research Institute of Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University, Suzhou Automotive Research Institute of Tsinghua University
Priority to CN202010809921.8A
Publication of CN111898582A
Application granted
Publication of CN111898582B
Legal status: Active

Classifications

    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06F 18/25: Fusion techniques
    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road

Abstract

The invention discloses an obstacle information fusion method for a binocular camera and a millimeter wave radar, which comprises the following steps: obtain a camera object list and a millimeter wave object list, each comprising object IDs and object information; according to a lane line equation, determine the objects in the own lane to obtain a camera main object list and a millimeter wave main object list, and determine the objects in the lanes on both sides to obtain a camera secondary object list and a millimeter wave secondary object list; fuse the main and secondary object lists of the camera object list and the millimeter wave object list respectively: search the previous frame's fused obstacle list for the object ID and, if found, update the object information; otherwise, compute the distance from the position information in the object information and, if the distance is below a threshold, judge the objects to be the same and update the object information; otherwise, judge the object to be suspicious. This solves the missed-detection problem and improves the reliability of sensor information.

Description

Obstacle information fusion method and system for binocular camera and millimeter wave radar
Technical Field
The invention relates to the technical field of sensor information fusion processing, in particular to an obstacle information fusion method and system for a binocular camera and a millimeter wave radar.
Background
Cameras, millimeter wave radar and lidar are the sensors commonly used for object recognition at the driver-assistance stage. For a warning-only function such as Mobileye's, a purely visual approach is basically sufficient, but the accuracy becomes insufficient when moving from warning to the execution layer. The fault tolerance for actually controlling the vehicle is very low, so at least two kinds of sensor information must be redundantly cross-checked; this improves precision and also provides a fallback when one sensor is degraded in extreme weather.
The current mainstream "camera + millimeter wave radar" scheme is the best trade-off among cost, accuracy and speed. Among cameras, a monocular camera must build and continuously maintain a huge sample feature database; if feature data for an object to be recognized is missing, the system cannot recognize or range it, missed detections occur, and accidents easily follow. A binocular camera can range objects ahead directly through the depth map, avoiding this shortcoming of the monocular camera, at a cost roughly 20% to 30% higher. It remains far cheaper than solutions that add a lidar, while delivering very good performance, so in future sensor fusion the binocular camera plus radar is the most promising combination. The present invention has been made in view of the above.
Disclosure of Invention
To solve the above technical problems, the invention provides an obstacle information fusion method and system for a binocular camera and a millimeter wave radar, which match and fuse the obstacle information detected by the binocular camera with that detected by the millimeter wave radar and track it, improving the reliability of obstacle detection, reducing the missed-detection and false-detection rates, improving the accuracy of the detected distance, and providing information for subsequent longitudinal vehicle control and path planning.
The technical scheme of the invention is as follows:
a barrier information fusion method of a binocular camera and a millimeter wave radar comprises the following steps:
s01: processing the binocular camera data to obtain a lane line equation;
s02: obtaining a camera object list according to binocular camera data, and obtaining a millimeter wave object list according to millimeter wave radar data, wherein the object list comprises object IDs and object information;
s03: according to a lane line equation, calculating an object of a lane to obtain a camera main object list and a millimeter wave main object list, and calculating objects of lanes at two sides of the lane to obtain a camera secondary object list and a millimeter wave secondary object list;
s04: respectively fusing a main object list and a secondary object list in the camera object list and the millimeter wave object list, searching an object ID of a previous frame of fused obstacle list, and updating object information if the object ID is found; otherwise, calculating the distance according to the position information in the object information, if the distance is smaller than the threshold value, judging the object as the same object, and updating the object information; otherwise, the object is judged to be in doubt.
In a preferred embodiment, calculating the distance from the position information in step S04 further includes: calculating the distance between the position information of objects in the current frame's camera main/secondary object list and that of objects in the current frame's millimeter wave main/secondary object list, and judging them to be the same object if the distance is below a threshold, thereby obtaining new fusion data.
In a preferred technical scheme, in step S01 a two-degree-of-freedom bicycle kinematic model is used; with the vehicle center as the coordinate origin, a vehicle coordinate system OXY is established and the lane line equation is obtained:
Y = C₀ + C₁X + C₂X² + C₃X³
where C₀ is the lateral offset of the vehicle from the lane line (negative to the right, positive to the left); C₁ is the angle between the vehicle's longitudinal axis and the tangent of the lane center line, i.e. the angle between that tangent and the OX direction; C₂ carries the curvature information of the whole lane line; and C₃ carries the curvature change rate of the whole lane line;
and Kalman tracking is performed on the lane line: the estimate tracked in the previous frame generates a prediction for the next frame, and that prediction is weighted with the next frame's measurement to obtain the next frame's estimate.
In a preferred technical scheme, the data in step S04 is updated at a fixed period, which is longer than the lane line update period and shorter than the camera update period.
In a preferred embodiment, step S02 further includes searching by object ID: if an object ID from the previous frame does not exist in the new frame's object list, the previous frame's object information is propagated by Kalman filtering and added to the new frame's camera object list or millimeter wave object list; if the object ID exists in the new frame's object list, the object information is computed from the previous frame's data and the current frame's data.
In a preferred technical scheme, in step S03 the area of the own lane is computed from the lane center line equation and the lane width, and whether an object is in the own lane area is judged from its lateral position; if so, it is considered an own-lane object. Likewise, the areas of the left and right lanes are computed from the lane center line equation and the lane width, and whether an object is in the left or right lane area is judged from its lateral position; if so, it is a left-lane or right-lane object.
In a preferred technical scheme, if the number of objects in a lane exceeds a threshold, the distance between each object and the host vehicle is computed and the set number of nearest objects is selected.
The invention also discloses an obstacle information fusion system of the binocular camera and the millimeter wave radar, which comprises:
the lane line tracking module is used for processing the binocular camera data to obtain a lane line equation, carrying out Kalman tracking on the lane line, generating a predicted value of a next frame through an estimated value tracked by a previous frame, and weighting the predicted value of the next frame and a measured value of the next frame to obtain an estimated value of the next frame;
the obstacle information fusion module is used for obtaining a camera object list from the binocular camera data and a millimeter wave object list from the millimeter wave radar data, each object list comprising object IDs and object information; according to the lane line equation, determining the objects in the own lane to obtain a camera main object list and a millimeter wave main object list, and determining the objects in the lanes on both sides to obtain a camera secondary object list and a millimeter wave secondary object list; fusing the main and secondary object lists of the camera object list and the millimeter wave object list respectively: searching the previous frame's fused obstacle list for the object ID and, if found, updating the object information; otherwise, computing the distance from the position information in the object information and, if the distance is below a threshold, judging the objects to be the same and updating the object information; otherwise, judging the object to be suspicious.
In a preferred embodiment, calculating the distance from the position information in the object information further includes: calculating the distance between the position information of objects in the current frame's camera main/secondary object list and that of objects in the current frame's millimeter wave main/secondary object list, and judging them to be the same object if the distance is below a threshold, thereby obtaining new fusion data.
In a preferred embodiment, the obstacle information fusion module further queries by object ID: if an object ID from the previous frame does not exist in the new frame's object list, the previous frame's object information is propagated by Kalman filtering and added to the new frame's camera object list or millimeter wave object list; if the object ID exists in the new frame's object list, the object information is computed from the previous frame's data and the current frame's data.
Compared with the prior art, the invention has the advantages that:
1. and the obstacle information detected by the binocular camera and the obstacle information detected by the millimeter wave radar are fused and matched, and are tracked, so that the reliability of obstacle detection is improved, the omission rate and the false detection rate are reduced, the detection distance precision is improved, and information is provided for longitudinal control of subsequent vehicles and path planning of the vehicles. According to the invention, the obstacle tracking in the 2-2-2 mode can ensure the tracking stability, and the tracking mutation caused by the obstacle change can not occur.
2. By setting the suspicious object in the tracking algorithm, missed detection and false detection can be eliminated to the greatest extent.
3. The binocular camera data and millimeter wave radar data can be tracked more accurately in position and speed through a Kalman tracking algorithm.
4. Distance matching between the camera and the millimeter wave radar, distance-based replacement against the previous frame's fused target list, and updating the fused obstacles from near to far improve the reliability of obstacle detection.
Drawings
The invention is further described below with reference to the accompanying drawings and examples:
FIG. 1 is a diagram of the two-degree-of-freedom bicycle kinematic model used for binocular camera lane tracking in the present invention;
FIG. 2 is a schematic diagram of Kalman tracking of lane lines according to the present invention;
FIG. 3 is a schematic view of obstacle tracking according to the present invention;
fig. 4 is a flow chart of the camera and millimeter wave data fusion of the present invention.
Detailed Description
The objects, technical solutions and advantages of the present invention will become more apparent by the following detailed description of the present invention with reference to the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the invention. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present invention.
Examples:
preferred embodiments of the present invention will be further described with reference to the accompanying drawings.
The invention provides an obstacle information fusion scheme for a binocular camera and a millimeter wave radar that achieves more reliable environment perception. It assumes that the binocular camera and the millimeter wave radar have been brought into the same coordinate system through a pose transformation; in practice this can be the vehicle body coordinate system or a global coordinate system. The invention mainly provides a scheme for processing and fusing the camera and millimeter wave data; by fusing the information of the two sensors, more reliable and robust environment perception is achieved.
The obstacle information fusion system for a binocular camera and a millimeter wave radar comprises two modules: a lane line tracking module and an obstacle information fusion module.
The lane line tracking module is used for processing information of the binocular camera, providing more accurate information input for subsequent vehicle transverse control and providing lane information for the obstacle information fusion module.
The obstacle information fusion module matches the object (obstacle) information detected by the binocular camera with that detected by the millimeter wave radar and tracks it, improving the reliability of obstacle detection, reducing the missed-detection and false-detection rates, improving the accuracy of the detected distance, and providing information for subsequent longitudinal vehicle control and path planning.
The lane line tracking module:
the lane line identification mainly uses a binocular camera, and the millimeter wave radar cannot identify the lane line. But binocular camera/sensor identification suffers from certain noise, such as occasional jumps. And therefore a kalman filter is required for the lane lines. The equation of the lane line adopts polynomial fitting, and for convenience of explanation, we use polynomial fitting of three times, but the fitting order can adopt other orders and the processing mode is the same. With the two-degree-of-freedom bicycle kinematic model and the center of the bicycle as the origin of coordinates, a vehicle coordinate system OXY is established as shown in FIG. 1, and the output equation of the lane line is as follows:
Y = C₀ + C₁X + C₂X² + C₃X³
where C₀ is the lateral offset of the vehicle from the lane line center (negative to the right, positive to the left), C₁ is the angle between the vehicle's direction of travel and the lane line (i.e. the angle between the lane line tangent and the OX direction), C₂ carries the curvature information of the lane line, and C₃ carries its curvature change rate.
Kalman tracking of the lane line means filtering and correcting these 4 parameters. In theory all 4 parameters should be tracked and corrected jointly, but that introduces 4x4 covariance matrix operations and greatly increases the computational load, so in practice each parameter is tracked separately.
For the two-degree-of-freedom bicycle model shown in FIG. 1, l_f and l_r denote the front and rear wheelbase respectively. The x direction is the forward direction along the vehicle's main axis, and the y direction is perpendicular to it, pointing left. v_x and v_y denote the velocities in the x and y directions; the velocity in the y direction is usually small and can typically be neglected. ψ̇ denotes the yaw rate of the vehicle. The vehicle lateral dynamics relate the rate of change of the vehicle's lateral deviation from the lane center and the rate of change of its heading error to the yaw rate dictated by the road direction, i.e. the desired yaw rate ψ̇_des = v_x / R, where v_x is the vehicle speed and R is the lane line radius.
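As a quick illustrative check of this relation (the numbers here are invented for illustration, not taken from the disclosure): at a vehicle speed of v_x = 20 m/s on a curve of radius R = 500 m, the desired yaw rate is ψ̇_des = v_x / R = 20 / 500 = 0.04 rad/s, roughly 2.3 degrees per second.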
The processing logic for lane line tracking is shown in FIG. 2. The lane line parameters are tracked separately; the prediction step advances each parameter over one update period T.
The estimates tracked at frame T-1, (C₀, C₁, C₂, C₃), generate the predicted values at frame T. The prediction is then weighted with the measurement at frame T to obtain the estimate at frame T, and repeating this process yields continuous tracking. Note that the prediction is based on the assumption that the curvature change rate of the lane line is constant, which is generally satisfied in real scenes, especially within a short time window.
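To make the per-parameter tracking concrete, the following Python sketch runs one scalar Kalman filter for each of C₀..C₃. The constant-coefficient prediction model and the noise values q and r are illustrative assumptions rather than values taken from this disclosure.

class ScalarKalman:
    """One-dimensional Kalman filter for a single lane-line coefficient."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.1):
        self.x = x0  # current estimate of the coefficient
        self.p = p0  # estimate variance
        self.q = q   # process noise variance (assumed value)
        self.r = r   # measurement noise variance (assumed value)

    def predict(self):
        # Constant-coefficient model: the prediction for the next frame
        # is the previous estimate; only the uncertainty grows.
        self.p += self.q
        return self.x

    def update(self, z):
        # Weight the prediction and the new measurement z by the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# One independent tracker per coefficient avoids the 4x4 covariance matrix.
trackers = [ScalarKalman() for _ in range(4)]

def track_lane_line(measured_coeffs):
    """measured_coeffs: [C0, C1, C2, C3] measured by the binocular camera."""
    estimates = []
    for tracker, z in zip(trackers, measured_coeffs):
        tracker.predict()                    # predicted value for this frame
        estimates.append(tracker.update(z))  # weighted with the measurement
    return estimates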
Obstacle information fusion module:
the detection of obstacles is mainly directed to the longitudinal control of the vehicle. And carrying out Kalman tracking processing according to the new frame of data of the camera and the millimeter wave radar data and combining the previous frame of data, and then carrying out matching fusion on the millimeter wave data and the camera data to output reliable obstacle information.
Obstacles are tracked in a 2-2-2 mode: 2 obstacles in the own lane and 2 in each of the two adjacent lanes. Other counts could of course be used; this embodiment is described only in the 2-2-2 mode. The obstacle information of the own lane is mainly used for ranging, guaranteeing safety and function; the obstacle information of the left and right lanes is mainly used to detect cut-ins in time, guaranteeing the comfort and stability of vehicle control.
The core logic of the obstacle data fusion scheme is shown in FIG. 3. In actual use the frame rates of the binocular camera and the millimeter wave radar differ, so the fusion scheme updates on a uniform fixed period, chosen as 50 ms in this embodiment. If a new measurement has not arrived by the time of an update, the latest old measurement is used directly. As shown in FIG. 3, the camera data period is 60-100 ms, the millimeter wave radar period is 80 ms, and lane line tracking updates every 20 ms. The overall processing logic is as follows:
(1) Parse the raw binocular camera messages to obtain the lane line equation and a camera object list, and parse the millimeter wave radar messages to obtain a millimeter wave object list. The object (obstacle) information in these lists comprises the obstacle's ID (a positive-integer identification number), its position relative to the host vehicle, and its velocity relative to the host vehicle.
(2) A reliable lane line equation is obtained by Kalman tracking of the lane line (the method shown in FIG. 2).
(3) Query the camera object list and the millimeter wave object list by object ID for the main objects and the secondary objects on both sides that were successfully tracked in the previous frame (6 main targets in the previous frame). If a successfully tracked object ID is lost, the binocular camera failed to detect it; the new position of that previous-frame main/secondary object is computed by Kalman filtering and added to the new frame's camera object list. If the object ID still exists in the new frame's object list, the object's position is computed from the previous frame's data and the current frame's data.
(4) Match the binocular camera data / millimeter wave data against the lane line data to form 6 new main and secondary objects per sensor: 2 in the own lane and 2 in each adjacent lane. Specifically, the area of the own lane is computed from the lane center line equation and the lane width; if an obstacle's lateral position falls within that area it is considered an own-lane obstacle, and the two nearest such obstacles are selected by distance. Likewise, the left lane area is computed from the lane center line and lane width, obstacles are assigned to it by lateral position, the two nearest are kept, and the right lane is handled the same way (see the sketch after this list).
(5) Step (4) yields a new camera-based object list (6 obstacles) and, likewise, a millimeter wave object list. If there are fewer obstacles, the corresponding slots may simply be left empty; for example, the left lane may contain only one obstacle. The millimeter wave radar is processed in the same way as the camera.
(6) The main and secondary objects of the camera list and the millimeter wave list are matched; the matching logic is shown in FIG. 4. Matching is organized by lane: the own-lane object is a main object, and the left- and right-lane objects are secondary objects. The camera and millimeter wave data are matched per lane: own-lane camera obstacles against own-lane millimeter wave obstacles, left-lane against left-lane, and likewise for the right lane.
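As a sketch of steps (4)-(5) under stated assumptions (the 3.5 m lane width, the left-positive sign convention from the lane line equation, and the dict-based object records are all illustrative, not specified by the disclosure), lane assignment and nearest-two selection could look like:

import math

def lane_center_y(x, coeffs):
    """Lateral position of the lane center line at longitudinal distance x,
    from the cubic equation Y = C0 + C1*X + C2*X^2 + C3*X^3."""
    c0, c1, c2, c3 = coeffs
    return c0 + c1 * x + c2 * x ** 2 + c3 * x ** 3

def assign_lanes(objects, coeffs, lane_width=3.5, per_lane=2):
    """Split objects into own/left/right lane lists by lateral position and
    keep the `per_lane` nearest per lane. Objects are dicts with 'x'
    (longitudinal) and 'y' (lateral) coordinates in the vehicle frame."""
    half = lane_width / 2.0
    lanes = {"left": [], "own": [], "right": []}
    for obj in objects:
        offset = obj["y"] - lane_center_y(obj["x"], coeffs)
        if -half <= offset <= half:
            lanes["own"].append(obj)
        elif half < offset <= 3.0 * half:      # left is positive y
            lanes["left"].append(obj)
        elif -3.0 * half <= offset < -half:    # right is negative y
            lanes["right"].append(obj)
    for key in lanes:
        lanes[key].sort(key=lambda o: math.hypot(o["x"], o["y"]))
        lanes[key] = lanes[key][:per_lane]     # keep the two nearest
    return lanes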
The data fusion logic for the binocular camera and millimeter wave radar is shown in FIG. 4. The previous frame already stores the 2-2-2 fusion result, i.e. up to two objects for each of the left, own and right lanes (slots may be empty). In addition, one suspicious object per lane is recorded (1-1-1). A suspicious object is one that may be an obstacle but has not been fully confirmed; if the current frame detects it again, its presence is confirmed.
The detection logic for suspicious objects is: if an object appearing in the current frame is not present in the main object list (6 obstacles) tracked in the previous frame, it is set as a suspicious object. Since there may be several such objects, the obstacle nearest to the host vehicle in each lane (left, middle, right) is chosen as that lane's suspicious object, as in the sketch below.
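A minimal sketch of this selection, reusing the dict-based object records assumed above (names hypothetical):

import math

def pick_suspicious(candidates_by_lane, tracked_ids):
    """For each lane, the nearest object whose ID is absent from the
    previous frame's tracked main/secondary lists becomes that lane's
    suspicious object (the 1-1-1 record)."""
    suspicious = {}
    for lane, objs in candidates_by_lane.items():
        new_objs = [o for o in objs if o["id"] not in tracked_ids]
        suspicious[lane] = min(
            new_objs, key=lambda o: math.hypot(o["x"], o["y"]), default=None)
    return suspicious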
As FIG. 4 shows, the processing of the binocular camera and the millimeter wave radar is symmetrical, so the fusion logic is illustrated for the binocular camera side.
1) For each obstacle newly added to the binocular camera target list, first check whether the same ID can be found in the previous frame's successfully tracked fused obstacle list. If the ID is found, the obstacle has been tracked successfully: the camera data directly replaces the corresponding entry in the fused obstacle list (the previous frame's 6 main targets in FIG. 3), and processing continues with the next obstacle. Otherwise go to 2).
2) Check whether the binocular camera data is close in distance to the previous frame's fused data. If so, it is the same object: the binocular camera data directly replaces the previous frame's fused data, and processing continues with the next obstacle. Otherwise go to 3).
3) Check whether the camera data can be matched to a millimeter wave obstacle by distance. If so, the two new frames of data are fused to form new fused data, and processing continues with the next object. Otherwise go to 4).
4) If the camera obstacle's ID equals the ID of the previous frame's suspicious object, a new reliable detection object is generated; if not, the obstacle is set as the current frame's camera suspicious object. Processing continues with the next cycle.
The criterion for distance proximity is: the sum of the lateral position difference and the longitudinal position difference of the two obstacles is below a set threshold. A sketch of steps 1)-4) using this criterion follows below.
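The following sketch walks one camera obstacle through the four steps above; the 2.0 m threshold, the choice to take the radar longitudinal range in step 3), and the data layout are illustrative assumptions:

def close_enough(a, b, threshold=2.0):
    """Distance proximity as defined above: the sum of the lateral and
    longitudinal position differences is below a set threshold."""
    return abs(a["x"] - b["x"]) + abs(a["y"] - b["y"]) < threshold

def fuse_camera_obstacle(cam_obj, fused_prev, radar_objs, suspicious_prev):
    """Run steps 1)-4) for a single camera obstacle and report the outcome."""
    # 1) Same ID already in the previous frame's fused list: tracking succeeded.
    for f in fused_prev:
        if f["id"] == cam_obj["id"]:
            return "tracked", cam_obj
    # 2) Close in distance to previously fused data: same object.
    for f in fused_prev:
        if close_enough(cam_obj, f):
            return "tracked", cam_obj
    # 3) Distance match against a millimeter wave obstacle: fuse the two.
    for r in radar_objs:
        if close_enough(cam_obj, r):
            fused = dict(cam_obj)
            fused["x"] = r["x"]  # e.g. prefer the radar longitudinal range
            return "fused", fused
    # 4) Matches the previous frame's suspicious object: now reliable;
    #    otherwise it becomes this frame's camera suspicious object.
    if suspicious_prev is not None and cam_obj["id"] == suspicious_prev["id"]:
        return "reliable", cam_obj
    return "suspicious", cam_obj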
After all camera and millimeter wave obstacles have been processed, the fusion data newly generated in step 3) and the reliable obstacle data from step 4) are merged with the previous frame's fused target list by distance replacement, updating the fused obstacles from near to far.
Concretely: the previous frame's fused target list holds 2 objects for the left lane, 2 for the own lane and 2 for the right lane. Taking the left lane as an example, steps 3) and 4) may generate new fused objects for it; if, say, 2 new objects are generated, the left lane then has 4 candidate fused objects, and the two nearest must be selected by the nearest-distance principle. Other lanes are handled the same way.
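A sketch of this near-to-far update for one lane, under the same illustrative data layout as above:

import math

def update_lane(prev_fused, new_fused, keep=2):
    """Merge the previous frame's fused objects for one lane with the newly
    generated fusion/reliable objects and keep the `keep` nearest."""
    candidates = prev_fused + new_fused
    candidates.sort(key=lambda o: math.hypot(o["x"], o["y"]))
    return candidates[:keep]

Applied independently to the left, own and right lanes, this realizes the distance replacement described above.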
It is to be understood that the above-described embodiments merely illustrate the principles of the present invention and in no way limit it. Any modification, equivalent replacement or improvement made without departing from the spirit and scope of the present invention shall fall within its scope, and the appended claims are intended to cover all such changes and modifications falling within their scope and boundary, or equivalents thereof.

Claims (9)

1. An obstacle information fusion method for a binocular camera and a millimeter wave radar, characterized by comprising the following steps:
S01: processing the binocular camera data to obtain a lane line equation;
S02: obtaining a camera object list from the binocular camera data and a millimeter wave object list from the millimeter wave radar data, each object list comprising object IDs and object information;
S03: according to the lane line equation, determining the objects in the own lane to obtain a camera main object list and a millimeter wave main object list, and determining the objects in the lanes on both sides to obtain a camera secondary object list and a millimeter wave secondary object list;
S04: fusing the main and secondary object lists of the camera object list and the millimeter wave object list respectively: searching the previous frame's fused obstacle list for the object ID and, if found, updating the object information; otherwise, computing the distance from the position information in the object information and, if the distance is below a threshold, judging the objects to be the same and updating the object information; otherwise, judging the object to be suspicious;
wherein in step S01 a two-degree-of-freedom bicycle kinematic model is used; with the vehicle center as the coordinate origin, a vehicle coordinate system OXY is established and the lane line equation is obtained:
Y = C₀ + C₁X + C₂X² + C₃X³
where C₀ is the lateral offset of the vehicle from the lane line (negative to the right, positive to the left); C₁ is the angle between the vehicle's longitudinal axis and the tangent of the lane center line, i.e. the angle between that tangent and the OX direction; C₂ carries the curvature information of the whole lane line; and C₃ carries the curvature change rate of the whole lane line;
and Kalman tracking is performed on the lane line: the estimate tracked in the previous frame generates a prediction for the next frame, and that prediction is weighted with the next frame's measurement to obtain the next frame's estimate.
2. The obstacle information fusion method for a binocular camera and a millimeter wave radar according to claim 1, wherein calculating the distance from the position information in the object information in step S04 further comprises: calculating the distance between the position information of objects in the current frame's camera main/secondary object list and that of objects in the current frame's millimeter wave main/secondary object list, and judging them to be the same object if the distance is below a threshold, thereby obtaining new fusion data.
3. The obstacle information fusion method for a binocular camera and a millimeter wave radar according to claim 1, wherein the data in step S04 is updated at a fixed period, which is longer than the lane line update period and shorter than the camera update period.
4. The obstacle information fusion method for a binocular camera and a millimeter wave radar according to claim 1, wherein step S02 further comprises searching by object ID: if an object ID from the previous frame does not exist in the new frame's object list, the previous frame's object information is propagated by Kalman filtering and added to the new frame's camera object list or millimeter wave object list; if the object ID exists in the new frame's object list, the object information is computed from the previous frame's data and the current frame's data.
5. The obstacle information fusion method for a binocular camera and a millimeter wave radar according to claim 1, wherein in step S03 the area of the own lane is computed from the lane center line equation and the lane width, and whether an object is in the own lane area is judged from its lateral position; if so, it is considered an own-lane object; the areas of the left and right lanes are likewise computed from the lane center line equation and the lane width, and whether an object is in the left or right lane area is judged from its lateral position; if so, it is a left-lane or right-lane object.
6. The obstacle information fusion method for a binocular camera and a millimeter wave radar according to claim 5, wherein, if the number of objects in a lane exceeds a threshold, the distance between each object and the host vehicle is computed and the set number of nearest objects is selected.
7. An obstacle information fusion system for a binocular camera and a millimeter wave radar, characterized by comprising:
the lane line tracking module is used for processing the binocular camera data to obtain a lane line equation, carrying out Kalman tracking on the lane line, generating a predicted value of a next frame through an estimated value tracked by a previous frame, and weighting the predicted value of the next frame and a measured value of the next frame to obtain an estimated value of the next frame;
the obstacle information fusion module is used for obtaining a camera object list from the binocular camera data and a millimeter wave object list from the millimeter wave radar data, each object list comprising object IDs and object information; according to the lane line equation, determining the objects in the own lane to obtain a camera main object list and a millimeter wave main object list, and determining the objects in the lanes on both sides to obtain a camera secondary object list and a millimeter wave secondary object list; fusing the main and secondary object lists of the camera object list and the millimeter wave object list respectively: searching the previous frame's fused obstacle list for the object ID and, if found, updating the object information; otherwise, computing the distance from the position information in the object information and, if the distance is below a threshold, judging the objects to be the same and updating the object information; otherwise, judging the object to be suspicious;
wherein in the lane line tracking module a two-degree-of-freedom bicycle kinematic model is used; with the vehicle center as the coordinate origin, a vehicle coordinate system OXY is established and the lane line equation is obtained:
Y = C₀ + C₁X + C₂X² + C₃X³
where C₀ is the lateral offset of the vehicle from the lane line (negative to the right, positive to the left); C₁ is the angle between the vehicle's longitudinal axis and the tangent of the lane center line, i.e. the angle between that tangent and the OX direction; C₂ carries the curvature information of the whole lane line; and C₃ carries the curvature change rate of the whole lane line;
and Kalman tracking is performed on the lane line: the estimate tracked in the previous frame generates a prediction for the next frame, and that prediction is weighted with the next frame's measurement to obtain the next frame's estimate.
8. The obstacle information fusion system for a binocular camera and a millimeter wave radar according to claim 7, wherein calculating the distance from the position information in the object information further comprises: calculating the distance between the position information of objects in the current frame's camera main/secondary object list and that of objects in the current frame's millimeter wave main/secondary object list, and judging them to be the same object if the distance is below a threshold, thereby obtaining new fusion data.
9. The obstacle information fusion system for a binocular camera and a millimeter wave radar according to claim 7, wherein the obstacle information fusion module further queries by object ID: if an object ID from the previous frame does not exist in the new frame's object list, the previous frame's object information is propagated by Kalman filtering and added to the new frame's camera object list or millimeter wave object list; if the object ID exists in the new frame's object list, the object information is computed from the previous frame's data and the current frame's data.
CN202010809921.8A 2020-08-13 2020-08-13 Obstacle information fusion method and system for binocular camera and millimeter wave radar Active CN111898582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010809921.8A CN111898582B (en) 2020-08-13 2020-08-13 Obstacle information fusion method and system for binocular camera and millimeter wave radar

Publications (2)

Publication Number Publication Date
CN111898582A (en) 2020-11-06
CN111898582B (en) 2023-09-12

Family

ID=73230182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010809921.8A Active CN111898582B (en) 2020-08-13 2020-08-13 Obstacle information fusion method and system for binocular camera and millimeter wave radar

Country Status (1)

Country Link
CN (1) CN111898582B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010249613A (en) * 2009-04-14 2010-11-04 Toyota Motor Corp Obstacle recognition device and vehicle control unit
CN109360228A (en) * 2018-10-18 2019-02-19 清华大学苏州汽车研究院(吴江) Pose method for registering between monocular cam and millimetre-wave radar
CN109829386A (en) * 2019-01-04 2019-05-31 清华大学 Intelligent vehicle based on Multi-source Information Fusion can traffic areas detection method
CN110126824A (en) * 2019-05-22 2019-08-16 河南工业大学 A kind of commercial vehicle AEBS system of integrated binocular camera and millimetre-wave radar
CN110517303A (en) * 2019-08-30 2019-11-29 的卢技术有限公司 A kind of fusion SLAM method and system based on binocular camera and millimetre-wave radar

Also Published As

Publication number Publication date
CN111898582A (en) 2020-11-06

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant