CN113432615B - Detection method and system based on multi-sensor fusion drivable area and vehicle - Google Patents

Detection method and system based on multi-sensor fusion drivable area and vehicle

Info

Publication number
CN113432615B
CN113432615B (application number CN202110876811.8A)
Authority
CN
China
Prior art keywords
lane
data point
ass
target data
dataset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110876811.8A
Other languages
Chinese (zh)
Other versions
CN113432615A (en)
Inventor
王皓
谭余
刘金彦
张帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202110876811.8A priority Critical patent/CN113432615B/en
Publication of CN113432615A publication Critical patent/CN113432615A/en
Application granted granted Critical
Publication of CN113432615B publication Critical patent/CN113432615B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3446Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a detection method and system based on a multi-sensor fusion drivable area, and a vehicle. The method can detect the drivable area in real time, continuously output the maximum passable range of each lane, effectively remedy the defects caused by missed detections of a single target detection module, improve the safety of the whole vehicle, and provide an effective basis for the planning and decision-making of an unmanned driving system.

Description

Detection method and system based on multi-sensor fusion drivable area and vehicle
Technical Field
The invention relates to the technical field of automatic driving of vehicles, and in particular to a detection method and system based on a multi-sensor fusion drivable area, and a vehicle.
Background
The automatic driving system is an active safety system that can automatically control the running of the vehicle, including driving, lane changing and parking, improving driving experience and comfort while ensuring driving safety. The perception system is a key component of the automatic driving system. It perceives the environment with sensors mounted on the vehicle, such as cameras, millimeter-wave radars, laser radars and ultrasonic sensors, and enables efficient and safe automatic driving in compliance with traffic rules by recognizing lane lines, vehicle and pedestrian targets and traffic signs, and by computing the drivable area. In an unmanned driving system, the detection of surrounding vehicles, pedestrians and obstacles is an important subsystem of the perception system and directly influences planning, decision-making and control. In recent years, the use of drivable areas has become a trend, forming an architecture with a main perception system and a safety perception system. Specifically, the main perception system performs the usual target-level scene perception: the target state is described by length, width, speed, category and the like, the intention or trajectory is predicted, and obstacles in the scene are represented by explicit results. The safety perception system (equivalent to the drivable area) no longer emphasizes target-level perception, especially for dynamic targets, and uses an implicit result representation. The two perception systems can cross-check each other in the fusion module.
At present, the drivable area is mainly detected with a camera, a laser radar and the like, and there are two main representation modes. One is the vector envelope representation, in which a limited number of points are arranged in a polar or rectangular coordinate system; its data size is small, but it has difficulty expressing the passability behind an obstacle, and this representation is used more with vision. The other is the grid representation, which may use a fixed grid or a variable grid; it can express the traffic situation behind an obstacle, but its data size is large, and this representation is used more with laser radar. In an unmanned driving system, vision-based target-level detection suffers from defects such as missed detection of crossing vehicles or large size errors, and missed detection of static short objects; such target detection defects have a strongly negative influence on the planning and decision-making system and thus on the safety of the whole vehicle.
Disclosure of Invention
The invention aims to provide a detection method and system based on a multi-sensor fusion drivable area, and a vehicle, which can detect the drivable area in real time, continuously output the maximum passable range of each lane, effectively remedy the defects caused by missed detections of a single target detection module, improve the safety of the whole vehicle, and provide an effective basis for the planning and decision-making of an unmanned driving system.
In order to achieve the above object, the present invention provides a method for detecting a drivable region based on multi-sensor fusion, comprising the steps of:
acquiring visual data points acquired by a vehicle-mounted visual sensor and target data points acquired by a vehicle-mounted radar in real time; wherein the visual data points comprise types of obstacles including vehicles, pedestrians, and curbs, and position coordinates, and the target data points comprise types, positions, and speeds of targets including vehicles and pedestrians;
screening out the visual data points in each lane and generating a visual data point set Camera_Lane_i_dataset for the corresponding lane; screening out the target data points in each lane and generating a target data point set Mmw_Lane_i_dataset for the corresponding lane; where i = current lane, left lane and right lane; or i = current lane and left lane; or i = current lane and right lane;
calculating the minimum visual data point P_C_i_min in the visual data point set of each lane and the minimum target data point P_M_i_min in the target data point set of each lane, wherein the minimum visual data point represents the coordinate point closest to the host vehicle in the visual data point set, and the minimum target data point represents the coordinate point closest to the host vehicle in the target data point set;
associating the minimum visual data point P_C_i_min of each lane with the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset of the corresponding lane, and judging whether the visual data point set Camera_Lane_i_dataset of each lane contains at least one visual data point, other than the minimum visual data point P_C_i_min itself, whose distance to the minimum visual data point P_C_i_min of the corresponding lane is smaller than a first preset threshold P_i; if yes, the association succeeds, and Ass_C_i_DC = C; otherwise, the association fails, and Ass_C_i_DC = D; wherein Ass_C_i_DC represents the association result between the minimum visual data point P_C_i_min of each lane and the visual data point set Camera_Lane_i_dataset;
judging whether the target data point set Mmw_Lane_i_dataset of each lane contains at least one target data point whose distance to the minimum visual data point P_C_i_min of the corresponding lane is smaller than a second preset threshold Q_i; if yes, the association succeeds, and Ass_C_i_DM = C; otherwise, the association fails, and Ass_C_i_DM = D; wherein Ass_C_i_DM represents the association result between the minimum visual data point P_C_i_min of each lane and the target data point set Mmw_Lane_i_dataset;
confirming the final association result Ass_C_i of P_C_i_min from the values of Ass_C_i_DC and Ass_C_i_DM: judging whether at least one of Ass_C_i_DC and Ass_C_i_DM is equal to C; if so, the association succeeds, and Ass_C_i = C; otherwise, the association fails, and Ass_C_i = D;
associating the minimum target data point P_M_i_min of each lane with the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset of the corresponding lane, and judging whether the visual data point set Camera_Lane_i_dataset of each lane contains at least one visual data point whose distance to the minimum target data point P_M_i_min of the corresponding lane is smaller than a third preset threshold F_i; if yes, the association succeeds, and Ass_M_i_DC = C; otherwise, the association fails, and Ass_M_i_DC = D; wherein Ass_M_i_DC represents the association result between the minimum target data point P_M_i_min of each lane and the visual data point set Camera_Lane_i_dataset;
judging whether the target data point set Mmw_Lane_i_dataset of each lane contains at least one target data point, other than the minimum target data point P_M_i_min itself, whose distance to the minimum target data point P_M_i_min of the corresponding lane is smaller than a fourth preset threshold G_i; if yes, the association succeeds, and Ass_M_i_DM = C; otherwise, the association fails, and Ass_M_i_DM = D; wherein Ass_M_i_DM represents the association result between the minimum target data point P_M_i_min of each lane and the target data point set Mmw_Lane_i_dataset;
confirming the final association result Ass_M_i of P_M_i_min from the values of Ass_M_i_DC and Ass_M_i_DM: judging whether at least one of Ass_M_i_DC and Ass_M_i_DM is equal to C; if so, the association succeeds, and Ass_M_i = C; otherwise, the association fails, and Ass_M_i = D;
comparing Ass_C_i and Ass_M_i:
if Ass_C_i = C and Ass_M_i = D, outputting P_C_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = D and Ass_M_i = C, outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = C, comparing the values of P_C_i_min and P_M_i_min; if P_C_i_min is less than P_M_i_min, outputting P_C_i_min as the cut-off point of the drivable area of the lane, otherwise outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = D, outputting nothing.
Further, the vehicle-mounted vision sensor is a camera, and the vehicle-mounted radar is a millimeter wave radar.
Further, the minimum visual data point P_C_i_min is calculated as:
P_C_i_min = argmin over j of sqrt(x_C_i_j^2 + y_C_i_j^2), j = 1, …, M_i,
where (x_C_i_j, y_C_i_j) denotes the coordinates of the j-th visual data point in lane i, and M_i is the number of visual data points in lane i.
The minimum target data point P_M_i_min is calculated as:
P_M_i_min = argmin over k of sqrt(x_M_i_k^2 + y_M_i_k^2), k = 1, …, N_i,
where (x_M_i_k, y_M_i_k) denotes the coordinates of the k-th target data point in lane i, and N_i is the number of target data points in lane i.
The coordinate system takes the center of the front bumper of the host vehicle as its origin, with the X coordinate axis pointing in the advancing direction of the vehicle and the Y coordinate axis pointing to the right side of the vehicle.
Further, the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset are screened as follows:
calculating the two adjacent lane lines corresponding to each lane from the lane line parameters, judging whether each visual data point and each target data point lie between the two adjacent lane lines of each lane, and, after confirming the lane in which each visual data point and each target data point is located, storing the visual data points and the target data points of each lane separately to generate the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset of the corresponding lane.
The invention also provides a detection system based on the multi-sensor fusion drivable area, which comprises a vehicle-mounted vision sensor for acquiring vision data points, a vehicle-mounted radar for acquiring target data points and a data processing module, wherein the data processing module is configured to execute the steps of the detection method based on the multi-sensor fusion drivable area.
The invention also provides a vehicle, which comprises the detection system based on the multi-sensor fusion drivable area.
Compared with the prior art, the invention has the following advantages:
according to the detection method, the system and the vehicle based on the multi-sensor fusion drivable area, the drivable area is detected in real time by using the vehicle-mounted visual sensor and the vehicle-mounted radar which are currently mainstream, the maximum passable range of each lane is continuously output, the defect caused by missed detection of a single target detection module is effectively overcome, the safety of the whole vehicle is improved, and an effective judgment basis is provided for planning and decision making of an unmanned driving system.
Drawings
FIG. 1 is a flow chart of a method for detecting a drivable region based on multi-sensor fusion in accordance with the present invention;
fig. 2 is a schematic structural diagram of a detection system based on a multi-sensor fusion drivable region according to the present invention.
In the figure:
1-vehicle vision sensor, 2-vehicle radar, 3-data processing module.
Detailed Description
The following describes the embodiments of the present invention further with reference to the drawings.
Referring to fig. 1, the embodiment discloses a method for detecting a drivable region based on multi-sensor fusion, which comprises the following steps:
acquiring visual data points acquired by a vehicle-mounted visual sensor and target data points acquired by a vehicle-mounted radar in real time; wherein the visual data points comprise types of obstacles including vehicles, pedestrians, and curbs, and position coordinates, and the target data points comprise types, positions, and speeds of targets including vehicles and pedestrians;
screening out the visual data points in each lane and generating a visual data point set Camera_Lane_i_dataset for the corresponding lane; screening out the target data points in each lane and generating a target data point set Mmw_Lane_i_dataset for the corresponding lane; where i = current lane, left lane and right lane; or i = current lane and left lane; or i = current lane and right lane; when there are a left lane and a right lane on the two sides of the current lane of the host vehicle, the vehicle-mounted vision sensor and the vehicle-mounted radar can collect data points for the three lanes; when the current lane of the host vehicle has a lane (left lane or right lane) on only one side, the vehicle-mounted vision sensor and the vehicle-mounted radar can collect data points for the two lanes.
Calculating the minimum visual data point P_C_i_min in the visual data point set of each lane and the minimum target data point P_M_i_min in the target data point set of each lane, wherein the minimum visual data point represents the coordinate point closest to the host vehicle in the visual data point set, and the minimum target data point represents the coordinate point closest to the host vehicle in the target data point set;
associating the minimum visual data point P_C_i_min of each lane with the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset of the corresponding lane, and judging whether the visual data point set Camera_Lane_i_dataset of each lane contains at least one visual data point, other than the minimum visual data point P_C_i_min itself, whose distance to the minimum visual data point P_C_i_min of the corresponding lane is smaller than a first preset threshold P_i; if yes, the association succeeds, and Ass_C_i_DC = C; otherwise, the association fails, and Ass_C_i_DC = D; wherein Ass_C_i_DC represents the association result between the minimum visual data point P_C_i_min of each lane and the visual data point set Camera_Lane_i_dataset;
judging whether the target data point set Mmw_Lane_i_dataset of each lane contains at least one target data point whose distance to the minimum visual data point P_C_i_min of the corresponding lane is smaller than a second preset threshold Q_i; if yes, the association succeeds, and Ass_C_i_DM = C; otherwise, the association fails, and Ass_C_i_DM = D; wherein Ass_C_i_DM represents the association result between the minimum visual data point P_C_i_min of each lane and the target data point set Mmw_Lane_i_dataset;
confirming the final association result Ass_C_i of P_C_i_min from the values of Ass_C_i_DC and Ass_C_i_DM: judging whether at least one of Ass_C_i_DC and Ass_C_i_DM is equal to C; if so, the association succeeds, and Ass_C_i = C; otherwise, the association fails, and Ass_C_i = D;
associating the minimum target data point P_M_i_min of each lane with the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset of the corresponding lane, and judging whether the visual data point set Camera_Lane_i_dataset of each lane contains at least one visual data point whose distance to the minimum target data point P_M_i_min of the corresponding lane is smaller than a third preset threshold F_i; if yes, the association succeeds, and Ass_M_i_DC = C; otherwise, the association fails, and Ass_M_i_DC = D; wherein Ass_M_i_DC represents the association result between the minimum target data point P_M_i_min of each lane and the visual data point set Camera_Lane_i_dataset;
judging whether the target data point set Mmw_Lane_i_dataset of each lane contains at least one target data point, other than the minimum target data point P_M_i_min itself, whose distance to the minimum target data point P_M_i_min of the corresponding lane is smaller than a fourth preset threshold G_i; if yes, the association succeeds, and Ass_M_i_DM = C; otherwise, the association fails, and Ass_M_i_DM = D; wherein Ass_M_i_DM represents the association result between the minimum target data point P_M_i_min of each lane and the target data point set Mmw_Lane_i_dataset;
confirming the final association result Ass_M_i of P_M_i_min from the values of Ass_M_i_DC and Ass_M_i_DM: judging whether at least one of Ass_M_i_DC and Ass_M_i_DM is equal to C; if so, the association succeeds, and Ass_M_i = C; otherwise, the association fails, and Ass_M_i = D;
comparing Ass_C_i and Ass_M_i:
if Ass_C_i = C and Ass_M_i = D, outputting P_C_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = D and Ass_M_i = C, outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = C, comparing the values of P_C_i_min and P_M_i_min; if P_C_i_min is less than P_M_i_min, outputting P_C_i_min as the cut-off point of the drivable area of the lane, otherwise outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = D, outputting nothing.
In this embodiment, C = 1 and D = 0; in other embodiments, C = 0 and D = 1. The values of C and D are set according to the practical situation and are not limited thereto.
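For illustration only, the cut-off point selection described above can be sketched in Python as follows. The function names, the representation of each candidate point as an (x, y) tuple, and the comparison of the two candidate points by their distance to the host vehicle are assumptions of this sketch, not the patented implementation; C = 1 and D = 0 as in this embodiment.

```python
def dist(point):
    """Distance from the host vehicle (origin at the center of the front bumper)."""
    x, y = point
    return (x ** 2 + y ** 2) ** 0.5

def drivable_area_cutoff(ass_c_i, ass_m_i, p_c_i_min, p_m_i_min, C=1, D=0):
    """Illustrative sketch: choose the cut-off point of the drivable area of one lane
    from the final association results of the visual and radar minimum points."""
    if ass_c_i == C and ass_m_i == D:
        return p_c_i_min                      # only the visual point is confirmed
    if ass_c_i == D and ass_m_i == C:
        return p_m_i_min                      # only the radar point is confirmed
    if ass_c_i == C and ass_m_i == C:
        # both confirmed: output the point nearer to the host vehicle
        return p_c_i_min if dist(p_c_i_min) < dist(p_m_i_min) else p_m_i_min
    return None                               # neither confirmed: no output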
The drivable area is an area through which the host vehicle can pass; there is no object within the drivable area.
Because the minimum visual data point P_C_i_min itself belongs to the visual data point set Camera_Lane_i_dataset with which it is associated, and the minimum target data point P_M_i_min itself belongs to the target data point set Mmw_Lane_i_dataset with which it is associated, self-association is excluded: P_C_i_min is not associated with itself within Camera_Lane_i_dataset, and P_M_i_min is not associated with itself within Mmw_Lane_i_dataset.
In this embodiment, the first preset threshold P_i, the second preset threshold Q_i, the third preset threshold F_i and the fourth preset threshold G_i are association parameters determined according to the accuracy of the sensors.
The minimum visual data point P_C_i_min of each lane is associated with the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset of the corresponding lane, i.e.:
Ass_C_i_DC = Asso_lane_i(P_C_i_min, Camera_Lane_i_dataset);
Ass_C_i_DM = Asso_lane_i(P_C_i_min, Mmw_Lane_i_dataset).
The minimum target data point P_M_i_min of each lane is associated with the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset of the corresponding lane, i.e.:
Ass_M_i_DC = Asso_lane_i(P_M_i_min, Camera_Lane_i_dataset);
Ass_M_i_DM = Asso_lane_i(P_M_i_min, Mmw_Lane_i_dataset).
Asso_lane_i() is the association function; it sequentially calculates the distance dis_r between the coordinates of P_C_i_min or P_M_i_min and the data points in the visual data point set Camera_Lane_i_dataset or the target data point set Mmw_Lane_i_dataset.
Taking the calculation of the distance dis_r between P_C_i_min and a data point in the visual data point set Camera_Lane_i_dataset as an example: assuming that the coordinates of P_C_i_min are (x_min, y_min) and the coordinates of any data point in the visual data point set Camera_Lane_i_dataset are (x_r, y_r), the distance dis_r is calculated as:
dis_r = sqrt((x_r - x_min)^2 + (y_r - y_min)^2), where sqrt denotes the square root.
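As an illustration, the association function might be sketched in Python as below; the function name asso_lane, the exclude_self flag and the use of 1/0 for C/D are assumptions made for this sketch, not notation from the patent.

```python
import math

def asso_lane(query_point, dataset, threshold, exclude_self=False):
    """Illustrative association check: return 1 (C) if the dataset contains at least one
    point, optionally other than the query point itself, whose Euclidean distance to the
    query point is below the threshold; otherwise return 0 (D)."""
    x_min, y_min = query_point
    for (x_r, y_r) in dataset:
        if exclude_self and (x_r, y_r) == (x_min, y_min):
            continue  # skip self-association within the point's own data set
        dis_r = math.sqrt((x_r - x_min) ** 2 + (y_r - y_min) ** 2)
        if dis_r < threshold:
            return 1  # association succeeds (C)
    return 0          # association fails (D)

# e.g. Ass_C_i_DC = asso_lane(P_C_i_min, Camera_Lane_i_dataset, P_i, exclude_self=True)
#      Ass_C_i_DM = asso_lane(P_C_i_min, Mmw_Lane_i_dataset, Q_i)
```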
In this embodiment, the vehicle-mounted vision sensor is a camera, and the vehicle-mounted radar is a millimeter wave radar.
In this embodiment, the minimum visual data point P_C_i_min is calculated as:
P_C_i_min = argmin over j of sqrt(x_C_i_j^2 + y_C_i_j^2), j = 1, …, M_i,
where (x_C_i_j, y_C_i_j) denotes the coordinates of the j-th visual data point in lane i, and M_i is the number of visual data points collected by the vehicle-mounted vision sensor in lane i; the values of M_i may be the same or different for different lanes.
The minimum target data point P_M_i_min is calculated as:
P_M_i_min = argmin over k of sqrt(x_M_i_k^2 + y_M_i_k^2), k = 1, …, N_i,
where (x_M_i_k, y_M_i_k) denotes the coordinates of the k-th target data point in lane i, and N_i is the number of target data points collected by the vehicle-mounted radar in lane i; the values of N_i may be the same or different for different lanes.
The coordinate system takes the center of the front bumper of the host vehicle as its origin, with the X coordinate axis pointing in the advancing direction of the vehicle and the Y coordinate axis pointing to the right side of the vehicle.
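A minimal Python sketch of this minimum-point selection, assuming each data point is an (x, y) tuple in the vehicle coordinate system just described (the helper name closest_point is illustrative):

```python
import math

def closest_point(dataset):
    """Illustrative sketch: return the data point closest to the host vehicle, i.e. the
    point minimising sqrt(x^2 + y^2) in a coordinate system whose origin is the center
    of the front bumper; return None if the lane holds no data points."""
    if not dataset:
        return None
    return min(dataset, key=lambda p: math.hypot(p[0], p[1]))

# e.g. P_C_i_min = closest_point(Camera_Lane_i_dataset)
#      P_M_i_min = closest_point(Mmw_Lane_i_dataset)
```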
In this embodiment, the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset are screened as follows:
calculating the two adjacent lane lines corresponding to each lane from the lane line parameters, judging whether each visual data point and each target data point lie between the two adjacent lane lines of each lane, and, after confirming the lane in which each visual data point and each target data point is located, storing the visual data points and the target data points of each lane separately to generate the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset of the corresponding lane.
The coordinates of a data point are expressed as (x, y), and the lane line equation is expressed as function_lane(x) = A0 + A1*x + A2*x^2 + A3*x^3, where A0, A1, A2 and A3 are the lane line parameters;
illustrating: if the left lane line function is function_left (x), the right lane line function is function_right (x), for any data point, e.g., (x 0, y 0), the point must satisfy the condition between lane lines:
y0-function_leftlane(x0)>=0;
and y0-function_right (x 0) <=0;
The data point sets are represented as follows:
Camera_Lane_i_dataset = {p1, p2, p3, …}, representing the visual data point set of the vehicle-mounted vision sensor in lane i;
Mmw_Lane_i_dataset = {p1, p2, p3, …}, representing the target data point set of the vehicle-mounted radar in lane i.
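For illustration only, the per-lane screening step can be sketched in Python as follows; the helper names lane_line_y and screen_points_into_lane are assumptions of this sketch rather than terms of the patent.

```python
def lane_line_y(params, x):
    """Evaluate the cubic lane line polynomial A0 + A1*x + A2*x^2 + A3*x^3."""
    a0, a1, a2, a3 = params
    return a0 + a1 * x + a2 * x ** 2 + a3 * x ** 3

def screen_points_into_lane(points, left_params, right_params):
    """Illustrative screening: keep only the (x, y) points lying between the left and
    right lane lines of one lane. With the Y axis pointing to the right of the vehicle,
    a point inside the lane satisfies y >= left lane line and y <= right lane line."""
    lane_dataset = []
    for (x0, y0) in points:
        if (y0 - lane_line_y(left_params, x0) >= 0
                and y0 - lane_line_y(right_params, x0) <= 0):
            lane_dataset.append((x0, y0))
    return lane_dataset

# e.g. Camera_Lane_i_dataset = screen_points_into_lane(visual_points,
#                                                      left_lane_params, right_lane_params)
```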
The embodiment discloses a detection system based on a multi-sensor fusion drivable area, which comprises an on-vehicle vision sensor 1 for acquiring vision data points, an on-vehicle radar 2 for acquiring target data points and a data processing module 3, wherein the data processing module 3 is configured to execute the steps of the detection method based on the multi-sensor fusion drivable area.
The embodiment discloses a vehicle, which comprises the detection system based on the multi-sensor fusion drivable area.
The foregoing describes in detail preferred embodiments of the present invention. It should be understood that numerous modifications and variations can be made in accordance with the concepts of the invention by one of ordinary skill in the art without undue burden. Therefore, all technical solutions which can be obtained by logic analysis, reasoning or limited experiments based on the prior art by the person skilled in the art according to the inventive concept shall be within the scope of protection defined by the claims.

Claims (4)

1. The detection method based on the multi-sensor fusion drivable area is characterized by comprising the following steps:
acquiring visual data points acquired by a vehicle-mounted visual sensor and target data points acquired by a vehicle-mounted radar in real time; wherein the visual data points comprise types of obstacles including vehicles, pedestrians, and curbs, and position coordinates, and the target data points comprise types, positions, and speeds of targets including vehicles and pedestrians;
screening out the visual data points in each lane and generating a visual data point set Camera_Lane_i_dataset for the corresponding lane; screening out the target data points in each lane and generating a target data point set Mmw_Lane_i_dataset for the corresponding lane; where i = current lane, left lane and right lane; or i = current lane and left lane; or i = current lane and right lane;
calculating the minimum visual data point P_C_i_min in the visual data point set of each lane and the minimum target data point P_M_i_min in the target data point set of each lane, wherein the minimum visual data point represents the coordinate point closest to the host vehicle in the visual data point set, and the minimum target data point represents the coordinate point closest to the host vehicle in the target data point set;
associating the minimum visual data point P_C_i_min of each lane with the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset of the corresponding lane, and judging whether the visual data point set Camera_Lane_i_dataset of each lane contains at least one visual data point, other than the minimum visual data point P_C_i_min itself, whose distance to the minimum visual data point P_C_i_min of the corresponding lane is smaller than a first preset threshold P_i; if yes, the association succeeds, and Ass_C_i_DC = C; otherwise, the association fails, and Ass_C_i_DC = D; wherein Ass_C_i_DC represents the association result between the minimum visual data point P_C_i_min of each lane and the visual data point set Camera_Lane_i_dataset;
judging whether the target data point set Mmw_Lane_i_dataset of each lane contains at least one target data point whose distance to the minimum visual data point P_C_i_min of the corresponding lane is smaller than a second preset threshold Q_i; if yes, the association succeeds, and Ass_C_i_DM = C; otherwise, the association fails, and Ass_C_i_DM = D; wherein Ass_C_i_DM represents the association result between the minimum visual data point P_C_i_min of each lane and the target data point set Mmw_Lane_i_dataset;
confirming the final association result Ass_C_i of P_C_i_min from the values of Ass_C_i_DC and Ass_C_i_DM: judging whether at least one of Ass_C_i_DC and Ass_C_i_DM is equal to C; if so, the association succeeds, and Ass_C_i = C; otherwise, the association fails, and Ass_C_i = D;
associating the minimum target data point P_M_i_min of each lane with the visual data point set Camera_Lane_i_dataset and the target data point set Mmw_Lane_i_dataset of the corresponding lane, and judging whether the visual data point set Camera_Lane_i_dataset of each lane contains at least one visual data point whose distance to the minimum target data point P_M_i_min of the corresponding lane is smaller than a third preset threshold F_i; if yes, the association succeeds, and Ass_M_i_DC = C; otherwise, the association fails, and Ass_M_i_DC = D; wherein Ass_M_i_DC represents the association result between the minimum target data point P_M_i_min of each lane and the visual data point set Camera_Lane_i_dataset;
judging whether the target data point set Mmw_Lane_i_dataset of each lane contains at least one target data point, other than the minimum target data point P_M_i_min itself, whose distance to the minimum target data point P_M_i_min of the corresponding lane is smaller than a fourth preset threshold G_i; if yes, the association succeeds, and Ass_M_i_DM = C; otherwise, the association fails, and Ass_M_i_DM = D; wherein Ass_M_i_DM represents the association result between the minimum target data point P_M_i_min of each lane and the target data point set Mmw_Lane_i_dataset;
confirming the final association result Ass_M_i of P_M_i_min from the values of Ass_M_i_DC and Ass_M_i_DM: judging whether at least one of Ass_M_i_DC and Ass_M_i_DM is equal to C; if so, the association succeeds, and Ass_M_i = C; otherwise, the association fails, and Ass_M_i = D;
comparing Ass_C_i and Ass_M_i:
if Ass_C_i = C and Ass_M_i = D, outputting P_C_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = D and Ass_M_i = C, outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = C, comparing the values of P_C_i_min and P_M_i_min; if P_C_i_min is less than P_M_i_min, outputting P_C_i_min as the cut-off point of the drivable area of the lane, otherwise outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = D, outputting nothing;
wherein: minimum visual data point P_C i The calculation formula of _min is:
indicating lane i->Coordinates of the individual visual data points;
minimum target data point P_M i The calculation formula of _min is:
indicating lane i->Coordinates of the individual target data points;
the method comprises the steps of establishing an X_Y coordinate system by taking the center of a front bumper of the vehicle as an origin, wherein the X coordinate axis points to the advancing direction of the vehicle, and the Y coordinate system points to the right side of the vehicle;
the vision is thatData point set camera_Lane i Dataset and target data point set Mmw _lane i The screening steps of_dataset are as follows:
calculating two adjacent lane lines corresponding to each lane according to lane line parameters, judging whether each visual data point and each target data point are between the two adjacent lane lines corresponding to each lane, after confirming the lanes where each visual data point and each target data point are located, respectively storing the visual data point and the target data point on each lane to generate a visual data point set camera_lane corresponding to each lane i Dataset and target data point set Mmw _lane i _dataset。
2. The method for detecting the drivable region based on the multi-sensor fusion according to claim 1, wherein the vehicle-mounted vision sensor is a camera and the vehicle-mounted radar is a millimeter wave radar.
3. A multi-sensor fusion drivable zone based detection system, characterized in that it comprises an in-vehicle vision sensor (1) for acquiring vision data points, an in-vehicle radar (2) for acquiring target data points and a data processing module (3), said data processing module (3) being configured to perform the steps of the multi-sensor fusion drivable zone based detection method as claimed in claim 1 or 2.
4. A vehicle comprising a multi-sensor fusion drivable zone-based detection system as claimed in claim 3.
CN202110876811.8A 2021-07-31 2021-07-31 Detection method and system based on multi-sensor fusion drivable area and vehicle Active CN113432615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110876811.8A CN113432615B (en) 2021-07-31 2021-07-31 Detection method and system based on multi-sensor fusion drivable area and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110876811.8A CN113432615B (en) 2021-07-31 2021-07-31 Detection method and system based on multi-sensor fusion drivable area and vehicle

Publications (2)

Publication Number Publication Date
CN113432615A CN113432615A (en) 2021-09-24
CN113432615B true CN113432615B (en) 2024-02-13

Family

ID=77762749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110876811.8A Active CN113432615B (en) 2021-07-31 2021-07-31 Detection method and system based on multi-sensor fusion drivable area and vehicle

Country Status (1)

Country Link
CN (1) CN113432615B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113886634B (en) * 2021-09-30 2024-04-12 重庆长安汽车股份有限公司 Lane line offline data visualization method and device
CN114354209A (en) * 2021-12-07 2022-04-15 重庆长安汽车股份有限公司 Automatic driving lane line and target combined simulation method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109443369A (en) * 2018-08-20 2019-03-08 北京主线科技有限公司 The method for constructing sound state grating map using laser radar and visual sensor
KR101998298B1 (en) * 2018-12-14 2019-07-09 위고코리아 주식회사 Vehicle Autonomous Driving Method Using Camera and LiDAR Sensor
CN110532896A (en) * 2019-08-06 2019-12-03 北京航空航天大学 A kind of road vehicle detection method merged based on trackside millimetre-wave radar and machine vision
CN110949395A (en) * 2019-11-15 2020-04-03 江苏大学 Curve ACC target vehicle identification method based on multi-sensor fusion
WO2020135772A1 (en) * 2018-12-29 2020-07-02 长城汽车股份有限公司 Generation method and generation system for dynamic target line during automatic driving of vehicle, and vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI674984B (en) * 2018-11-15 2019-10-21 財團法人車輛研究測試中心 Driving track planning system and method for self-driving vehicles

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109443369A (en) * 2018-08-20 2019-03-08 北京主线科技有限公司 The method for constructing sound state grating map using laser radar and visual sensor
KR101998298B1 (en) * 2018-12-14 2019-07-09 위고코리아 주식회사 Vehicle Autonomous Driving Method Using Camera and LiDAR Sensor
WO2020135772A1 (en) * 2018-12-29 2020-07-02 长城汽车股份有限公司 Generation method and generation system for dynamic target line during automatic driving of vehicle, and vehicle
CN110532896A (en) * 2019-08-06 2019-12-03 北京航空航天大学 A kind of road vehicle detection method merged based on trackside millimetre-wave radar and machine vision
CN110949395A (en) * 2019-11-15 2020-04-03 江苏大学 Curve ACC target vehicle identification method based on multi-sensor fusion

Also Published As

Publication number Publication date
CN113432615A (en) 2021-09-24

Similar Documents

Publication Publication Date Title
CN109556615B (en) Driving map generation method based on multi-sensor fusion cognition of automatic driving
CN109634282B (en) Autonomous vehicle, method and apparatus
US10528055B2 (en) Road sign recognition
EP3879455A1 (en) Multi-sensor data fusion method and device
CN106335509B (en) The drive assistance device of vehicle
WO2018076855A1 (en) Assisting system for vehicle driving on narrow road
CN110065494B (en) Vehicle anti-collision method based on wheel detection
EP3715204A1 (en) Vehicle control device
CN105620489A (en) Driving assistance system and real-time warning and prompting method for vehicle
CN110632617B (en) Laser radar point cloud data processing method and device
CN110816540B (en) Traffic jam determining method, device and system and vehicle
DE102016109592A1 (en) Collision mitigation and avoidance
CN103455144A (en) Vehicle-mounted man-machine interaction system and method
CN104573646A (en) Detection method and system, based on laser radar and binocular camera, for pedestrian in front of vehicle
CN109871787B (en) Obstacle detection method and device
CN110214106B (en) Apparatus operable to determine a position of a portion of a lane
CN113432615B (en) Detection method and system based on multi-sensor fusion drivable area and vehicle
CN112512887B (en) Driving decision selection method and device
Kim et al. Probabilistic threat assessment with environment description and rule-based multi-traffic prediction for integrated risk management system
CN114442101A (en) Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar
CN111222441A (en) Point cloud target detection and blind area target detection method and system based on vehicle-road cooperation
CN115223131A (en) Adaptive cruise following target vehicle detection method and device and automobile
CN113870246A (en) Obstacle detection and identification method based on deep learning
US11087147B2 (en) Vehicle lane mapping
CN111959515A (en) Forward target selection method, device and system based on visual detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant