CN111208839B - Fusion method and system of real-time perception information and automatic driving map - Google Patents

Fusion method and system of real-time perception information and automatic driving map

Info

Publication number
CN111208839B
Authority
CN
China
Prior art keywords
map
obstacle
road
boundary
lane
Prior art date
Legal status
Active
Application number
CN202010329502.4A
Other languages
Chinese (zh)
Other versions
CN111208839A (en)
Inventor
杨殿阁
江昆
焦新宇
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202010329502.4A priority Critical patent/CN111208839B/en
Publication of CN111208839A publication Critical patent/CN111208839A/en
Application granted granted Critical
Publication of CN111208839B publication Critical patent/CN111208839B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0257: Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276: Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Abstract

The invention relates to a method and a system for fusing real-time perception information with an automatic driving map. The method comprises the following steps: determining the relationship between obstacles and roads in the map; fusing the drivable area and analyzing its boundary state; and combining the obstacle-road relationship, the drivable-area fusion and the boundary-state analysis results with the static environment information of the map to realize integrated output of the perception result. The invention filters out obstacles outside the road and determines, for each obstacle in the road, the lane it occupies and its position and angle relative to that lane, thereby providing a basis for motion prediction; unreasonable parts of the perceived drivable area are removed according to the map information, and the semantic type and speed of each boundary segment are obtained. In this way, the various environmental elements are linked and integrated on the map platform.

Description

Fusion method and system of real-time perception information and automatic driving map
Technical Field
The invention relates to the technical field of automatic driving, in particular to a method and a system for fusing real-time perception information and an automatic driving map.
Background
Autonomous driving is a highly complex integrated system; high-level autonomous driving requires complete and robust environment perception, decision planning and vehicle control. Environment perception is the basis of the whole system, yet the detection rate and accuracy of existing technology cannot meet safety requirements, so using the prior information provided by a map to assist real-time perception has gradually become recognized in the industry. The automatic driving map provides a large amount of high-precision static prior information, such as road boundaries, road surface shapes, lane lines, traffic signs and fixed obstacles, while real-time perception mainly provides dynamic information.
In fact, map information and real-time perception information are not simply a superposition of static and dynamic layers: much static information can also be obtained through real-time perception, the motion rules of much dynamic information are implicit in the automatic driving map, and the relationship between static and dynamic information is itself important content for environment understanding. Existing research either simply combines map information with real-time perception information, or fuses them from a single aspect such as localization, the drivable area or a reference path, but the relationships among the various elements have not been fully revealed and utilized.
Disclosure of Invention
In view of the insufficient consideration in existing research of the connection between the automatic driving map and real-time perception information, the object of the invention is to provide a method and a system for fusing real-time perception information with the automatic driving map, which can link and integrate the various environmental elements on a map platform.
In order to achieve the above object, the invention adopts the following technical scheme. A method for fusing real-time perception information and an automatic driving map comprises the following steps:
1) carrying out map matching and positioning, giving the position of the self-vehicle in the map coordinate system, and determining the relationship between the self-vehicle coordinate system and the map coordinate system;
2) determining the relationship between obstacles and roads in the map; the real-time perceived obstacle information is divided into three categories according to its relationship with the road: in-road targets, off-road targets and road-boundary targets, and for an in-road target the lane in which it is located should also be confirmed; according to the relationship between the self-vehicle coordinate system and the map coordinate system given by map matching and positioning, the obstacle information is compared with the map information to obtain the relationship between the obstacle and the road; this is handled in two cases:
2.1) obstacles detected by a three-dimensional sensor: such sensors directly measure the motion state of the obstacle in the self-vehicle coordinate system, and the position of the obstacle is transferred to the map using the relationship between the self-vehicle coordinate system and the map coordinate system so that it can be compared with the map;
2.2) obstacles detected by a monocular vision sensor in the image plane: such a sensor only obtains a planar projection of the map coordinate system onto the image coordinate system, so the road boundaries and lane lines in the map are projected into the image through the calibration matrix and then compared with the obstacles in the image, whereby the relationship between each obstacle and the road is judged and the lane in which it is located is obtained;
3) drivable-area fusion and boundary-state analysis;
4) combining the results of step 2) and step 3) with the static environment information of the map to realize integrated output of the perception result.
Further, in step 2), the real-time perceived obstacles are transferred to the map coordinate system using the map matching and positioning result; each obstacle is treated as a target, the targets are screened against the road boundary, and for targets within the road boundary the lane in which each target is located is judged and the relationship between the target and the lane is determined, yielding dynamic and static targets fused with the map information.
Further, the relationship of the obstacle to the road is calculated as follows:

$$\operatorname{dist}(o, b_l)\cdot \operatorname{dist}(o, b_r)\;\begin{cases}<0, & o \text{ is inside the road}\\ =0, & o \text{ is on the road boundary}\\ >0, & o \text{ is outside the road}\end{cases}\qquad(1)$$

where $b_l$ is the left boundary of the road, $b_r$ is the right boundary of the road, $o$ is the obstacle, and $\operatorname{dist}$ is the distance function.
Further, if the shape of the obstacle is not observed, the distance to the lane line is calculated from the position of the obstacle; if the shape of the obstacle is observed, the distance to the lane line is calculated for each vertex of the bounding box: if the signs are all the same, the value with the smallest absolute value is taken, and if the signs are not all the same, 0 is taken, as shown in the following formula:

$$\operatorname{dist}(o, l)=\begin{cases} d(p, l), & \text{shape not observed}\\ \operatorname{sgn}\bigl(d(v_1, l)\bigr)\,\min\limits_{1\le i\le n}\bigl|d(v_i, l)\bigr|, & d(v_1, l),\dots,d(v_n, l)\text{ all of the same sign}\\ 0, & \text{otherwise}\end{cases}\qquad(2)$$

where $d$ is the point-to-line distance function, $l$ is the lane line, $p$ is the position of the obstacle, $v_i$ is a vertex of the obstacle, $i$ is an integer ranging from 1 to $n$, and $n$ is the number of vertices of the obstacle.
Further, when judging the lane in which an in-road target is located, the lanes are checked one by one: the left and right boundaries of a given lane are substituted for the left and right road boundaries in formula (1) to judge whether the target is within that lane, until a lane is found such that the target lies within or on its boundaries; for an in-road target, the relationship between the target and the lane is further calculated: the yaw angle of the obstacle direction relative to the lane center-line direction is judged from the obstacle direction and the lane-line direction, and the distances from the obstacle to the left and right lane lines are calculated with formula (2) from the position and shape of the obstacle and the lane lines.
Further, the road boundaries and lane lines in the map are projected into the image according to the following formula:

$$Z\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix} m_{01} & m_{02} & m_{03} & m_{04}\\ m_{11} & m_{12} & m_{13} & m_{14}\\ m_{21} & m_{22} & m_{23} & m_{24}\end{bmatrix}\begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix}$$

where $(x, y, z)$ are the coordinates of a point in the self-vehicle coordinate system, $m_{qj}$ ($q$ = 0, 1, 2; $j$ = 1, 2, 3, 4) are the elements of the camera parameter matrix, $u$ and $v$ are respectively the horizontal and vertical coordinates of the image pixel, and $Z$ is the optical-center height of the camera.
Further, in step 3), the drivable area obtained through real-time perception is fused with the drivable area provided by the map, and the state of the drivable-area boundary is judged segment by segment according to the real-time perceived obstacle information, to obtain boundary semantics and speed information.
Furthermore, real-time perception of the drivable area is realized by segmenting the connected ground region from lidar or vision-sensor data; the range of the drivable area is narrowed using the road boundary provided by the map and boundaries that cannot be crossed under traffic rules, such as stop lines and solid lines; meanwhile, the semantics and speed of the drivable-area boundary are determined by combining the obstacle information.
A system for fusing real-time perception information and an automatic driving map comprises a map matching and positioning module, a relationship determining module, an analysis module and an output module. The map matching and positioning module is used for performing map matching and positioning, giving the position of the self-vehicle in the map coordinate system and determining the relationship between the self-vehicle coordinate system and the map coordinate system. The relationship determining module is used for determining the relationship between obstacles and roads in the map: the real-time perceived obstacle information is divided into three categories according to its relationship with the road, namely in-road targets, off-road targets and road-boundary targets, and for an in-road target the lane in which it is located should also be confirmed; according to the relationship between the self-vehicle coordinate system and the map coordinate system given by map matching and positioning, the obstacle information is compared with the map information to obtain the relationship between the obstacle and the road, which is handled in two cases: (1) obstacles detected by a three-dimensional sensor, where the sensor directly measures the motion state of the obstacle in the self-vehicle coordinate system and the position of the obstacle is transferred to the map using the relationship between the self-vehicle coordinate system and the map coordinate system so that it can be compared with the map; (2) obstacles detected by a monocular vision sensor in the image plane, where the sensor only obtains a planar projection of the map coordinate system onto the image coordinate system, so the road boundaries and lane lines in the map are projected into the image through the calibration matrix and then compared with the obstacles in the image, whereby the relationship between each obstacle and the road is judged and the lane in which it is located is obtained. The analysis module performs the drivable-area fusion and the boundary-state analysis. The output module combines the results of the relationship determining module and the analysis module with the static environment information of the map, realizing integrated output of the perception result.
Further, in the analysis module, the drivable area obtained through real-time perception is fused with the drivable area provided by the map, and the state of the drivable-area boundary is judged segment by segment according to the real-time perceived obstacle information to obtain boundary semantics and speed information.
Due to the adoption of the above technical scheme, the invention has the following advantages: the invention filters out obstacles outside the road and determines, for each obstacle in the road, the lane it occupies and its position and angle relative to that lane, thereby providing a basis for motion prediction; unreasonable parts of the perceived drivable area are removed according to the map information, and the semantic type and speed of each boundary segment are obtained. In this way, the various environmental elements are linked and integrated on the map platform.
Drawings
FIG. 1 is a schematic flow diagram of the overall process of the present invention.
Fig. 2 is a schematic diagram illustrating determination of a relationship between an obstacle and a road in map coordinates.
Fig. 3 is a schematic diagram illustrating the relationship between an obstacle and a lane under map coordinates.
Fig. 4 is a schematic diagram illustrating the relationship between obstacles and roads and lanes in image coordinates.
Fig. 5 is a schematic diagram of travelable region fusion and boundary state analysis.
Detailed Description
The method comprises two parts: determining the relationship between obstacles and roads in the map, and drivable-area fusion with boundary-state analysis. Map matching and positioning technology is the basis of the invention: by comparing real-time perceived static information with the map, a more accurate self-vehicle state can be obtained, the relationship between the self-vehicle and the road can be determined, and the coordinate systems can be unified. The invention is described in detail below with reference to the figures and embodiments.
As shown in fig. 1, the present invention provides a method for fusing real-time perception information and an automatic driving map, which comprises the following steps:
1) carrying out map matching positioning, giving the position of the self-vehicle in a map coordinate system, and determining the relation between the self-vehicle coordinate system and the map coordinate system;
Mature map matching and positioning techniques already exist; an existing method is adopted here and is not described further.
2) Determining the relationship between obstacles and roads in the map: the real-time perceived obstacles are transferred to the map coordinate system using the map matching and positioning result; each obstacle is treated as a target, the targets are screened against the road boundary, for targets within the road boundary the lane in which each target is located is judged, and the relationship between the target and the lane is determined, yielding dynamic and static targets fused with the map information.
The real-time perceived obstacle information can be classified into three categories according to the relationship with the road: in-road, out-of-road, and road boundary targets:
the off-road targets are irrelevant to driving and can be deleted, and the off-road targets are filtered and do not appear in the final integrated perception result any more.
For an in-road target, the lane in which it is located should further be confirmed. Because the relationship between the self-vehicle coordinate system and the map coordinate system (i.e., the three-dimensional rotation-translation matrix between the two coordinate systems) is given by map matching and positioning, the relationship between an obstacle and the road can be obtained by comparing the obstacle information with the map information.
The specific determination of the relationship between obstacles and roads in a map can be divided into two cases:
2.1) obstacles detected by three-dimensional sensors such as millimeter wave radar, laser radar, multi-view stereo vision and the like.
Such sensors directly measure the motion state of an obstacle, such as its position, orientation and speed, in the self-vehicle coordinate system; the position of the obstacle can then be transferred to the map using the relationship between the self-vehicle coordinate system and the map coordinate system so that it can be compared with the map.
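As an illustration only, and not part of the patent, the following minimal Python sketch shows how an obstacle position measured in the self-vehicle frame might be transferred to the map frame using the rotation-translation given by map matching and positioning; all names and example values are assumptions.

```python
import numpy as np

def vehicle_to_map(T_map_vehicle: np.ndarray, p_vehicle: np.ndarray) -> np.ndarray:
    """Transform a 3D point from the self-vehicle frame to the map frame.

    T_map_vehicle is the 4x4 homogeneous rotation-translation matrix
    given by map matching and positioning (step 1).
    """
    p_h = np.append(p_vehicle, 1.0)          # homogeneous coordinates
    return (T_map_vehicle @ p_h)[:3]

# Example: self-vehicle at (100, 50) in the map, heading 30 degrees.
yaw = np.deg2rad(30.0)
T = np.eye(4)
T[:2, :2] = [[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]]
T[:3, 3] = [100.0, 50.0, 0.0]

obstacle_in_vehicle = np.array([12.0, -1.5, 0.0])   # ahead and slightly to the right
print(vehicle_to_map(T, obstacle_in_vehicle))       # obstacle position in map coordinates
```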
For example, as shown in fig. 2, if the left road boundary is a guardrail, an obstacle in the opposite lane to the left of the guardrail does not affect the driving of the self-vehicle; it belongs to the off-road targets and is filtered out. Millimeter-wave radar echoes generated at the right road boundary and at the sidewalk curb are output as targets; combined with the map, these can be confirmed as road-boundary targets. The remaining targets lie on the roadway or the sidewalk and are in-road targets, and the lane in which each is located can be judged from its position.
The relationship of the obstacle to the road is calculated as follows:

$$\operatorname{dist}(o, b_l)\cdot \operatorname{dist}(o, b_r)\;\begin{cases}<0, & o \text{ is inside the road}\\ =0, & o \text{ is on the road boundary}\\ >0, & o \text{ is outside the road}\end{cases}\qquad(1)$$

where $b_l$ is the left boundary of the road, $b_r$ is the right boundary of the road, $o$ is the obstacle, and $\operatorname{dist}$ is the distance function, with distances to the left defined as positive and distances to the right as negative. If the product of the distances to the left and right boundaries is less than zero, the obstacle is inside the road; if the product is equal to zero, the obstacle is on the boundary; if the product is greater than zero, the obstacle is outside the road.
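For illustration only, the following Python sketch, which is not taken from the patent and whose function and variable names are assumptions, shows one way the signed-distance product test of formula (1) could be implemented for a point-like obstacle and straight boundary segments.

```python
import numpy as np

def signed_distance(point, line_p0, line_p1):
    """Signed distance from `point` to the line through p0 -> p1.

    Positive when the point lies to the left of the directed line,
    negative to the right (matching the sign convention above).
    """
    d = np.asarray(line_p1, float) - np.asarray(line_p0, float)
    r = np.asarray(point, float) - np.asarray(line_p0, float)
    return (d[0] * r[1] - d[1] * r[0]) / np.hypot(d[0], d[1])

def road_relation(obstacle, left_boundary, right_boundary):
    """Classify an obstacle as inside / on / outside the road per formula (1)."""
    product = (signed_distance(obstacle, *left_boundary)
               * signed_distance(obstacle, *right_boundary))
    if product < 0:
        return "in-road target"
    if product == 0:
        return "road-boundary target"
    return "off-road target"

# Straight road along the x axis: left boundary at y = +3.5, right at y = -3.5.
left  = ((0.0,  3.5), (100.0,  3.5))
right = ((0.0, -3.5), (100.0, -3.5))
print(road_relation((20.0, 1.0), left, right))   # in-road target
print(road_relation((20.0, 5.0), left, right))   # off-road target
```

For curved boundaries stored as polylines, the same test would be applied against the nearest boundary segment.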
If the shape of the obstacle is not observed, the distance to the lane line is calculated from the position of the obstacle; if the shape of the obstacle is observed, the distance to the lane line is calculated for each vertex of the bounding box: if the signs are all the same, the value with the smallest absolute value is taken, and if the signs are not all the same, 0 is taken, as shown in the following formula:

$$\operatorname{dist}(o, l)=\begin{cases} d(p, l), & \text{shape not observed}\\ \operatorname{sgn}\bigl(d(v_1, l)\bigr)\,\min\limits_{1\le i\le n}\bigl|d(v_i, l)\bigr|, & d(v_1, l),\dots,d(v_n, l)\text{ all of the same sign}\\ 0, & \text{otherwise}\end{cases}\qquad(2)$$

where $d$ is the point-to-line distance function, $l$ is the lane line, $p$ is the position of the obstacle, $v_i$ is a vertex of the obstacle, $i$ is an integer ranging from 1 to $n$, and $n$ is the number of vertices of the obstacle.
When judging the lane in which an in-road target is located, the lanes are checked one by one: the left and right boundaries of a given lane are substituted for the left and right road boundaries in formula (1) to judge whether the target is within that lane, until a lane is found such that the target lies within or on its boundaries.
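A minimal sketch, not the patent's implementation and with all names assumed, of the vertex-aware distance of formula (2) and the lane-by-lane check described above:

```python
import numpy as np

def signed_point_line_distance(point, line_p0, line_p1):
    """Signed distance d(point, line): positive to the left, negative to the right."""
    d = np.asarray(line_p1, float) - np.asarray(line_p0, float)
    r = np.asarray(point, float) - np.asarray(line_p0, float)
    return (d[0] * r[1] - d[1] * r[0]) / np.hypot(d[0], d[1])

def obstacle_line_distance(line, position, vertices=None):
    """dist(o, l) per formula (2): use the position alone if no shape is observed,
    otherwise use the bounding-box vertices."""
    if not vertices:
        return signed_point_line_distance(position, *line)
    ds = [signed_point_line_distance(v, *line) for v in vertices]
    if all(x > 0 for x in ds) or all(x < 0 for x in ds):
        return min(ds, key=abs)        # same sign: smallest magnitude, sign kept
    return 0.0                          # vertices straddle the line

def find_lane(lanes, position, vertices=None):
    """Check lanes one by one with the formula (1) test, using lane boundaries."""
    for name, (left_line, right_line) in lanes.items():
        dl = obstacle_line_distance(left_line, position, vertices)
        dr = obstacle_line_distance(right_line, position, vertices)
        if dl * dr <= 0:               # inside the lane or on its boundary
            return name
    return None                         # not inside any lane of this road

lanes = {
    "lane_1": (((0, 3.5), (100, 3.5)), ((0, 0.0), (100, 0.0))),
    "lane_2": (((0, 0.0), (100, 0.0)), ((0, -3.5), (100, -3.5))),
}
box = [(19, 0.8), (21, 0.8), (21, 2.2), (19, 2.2)]          # observed bounding box
print(find_lane(lanes, position=(20, 1.5), vertices=box))   # lane_1
```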
For an in-road target, the relationship between the target and the lane is further calculated. As shown in fig. 3, the yaw angle of the obstacle direction relative to the lane center-line direction is judged from the obstacle direction and the lane-line direction, and the distance $d_l$ from the obstacle to the left lane line and the distance $d_r$ to the right lane line are calculated with formula (2) from the position and shape of the obstacle and the lane lines. This information provides a basis for predicting the future motion state of the target.
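As a small illustration, not taken from the patent and with assumed names, the relative yaw angle could be computed as follows:

```python
import numpy as np

def relative_yaw(obstacle_heading_rad, lane_dir_vector):
    """Yaw angle of the obstacle direction relative to the lane center-line
    direction, wrapped to the interval (-pi, pi]."""
    lane_heading = np.arctan2(lane_dir_vector[1], lane_dir_vector[0])
    diff = obstacle_heading_rad - lane_heading
    return (diff + np.pi) % (2 * np.pi) - np.pi

# Obstacle heading 95 degrees, lane center line pointing along +y (90 degrees):
print(np.rad2deg(relative_yaw(np.deg2rad(95.0), (0.0, 1.0))))   # about 5 degrees
```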
2.2) obstacles detected by the monocular vision sensor in the image plane.
Such a sensor only obtains a planar projection of the map coordinate system onto the image coordinate system; it therefore lacks three-dimensional depth information, and the position and shape of an obstacle in the map coordinate system cannot be recovered from the obstacle in the image plane alone. However, through the calibration matrix, the road boundaries and lane lines in the map can be projected into the image and compared with the obstacles in the image, so that the relationship between each obstacle and the road can be judged and the lane in which it is located obtained.
The road boundaries and lane lines in the map are projected into the image according to the following formula:

$$Z\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix} m_{01} & m_{02} & m_{03} & m_{04}\\ m_{11} & m_{12} & m_{13} & m_{14}\\ m_{21} & m_{22} & m_{23} & m_{24}\end{bmatrix}\begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix}\qquad(3)$$

where $(x, y, z)$ are the coordinates of a point in the self-vehicle coordinate system, $m_{qj}$ ($q$ = 0, 1, 2; $j$ = 1, 2, 3, 4) are the elements of the camera parameter matrix, $u$ and $v$ are respectively the horizontal and vertical coordinates of the image pixel, and $Z$ is a scale parameter (the camera optical-center height), which can be obtained from the third row of (3). The sampling points of the road boundaries and lane lines are converted to image coordinates through formula (3), giving the projection of the road boundaries and lane lines in image coordinates.
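For illustration only, and under the assumption of a standard 3x4 pinhole projection matrix (the camera parameters and point values below are not from the patent), projecting map sampling points into the image could look like this:

```python
import numpy as np

def project_to_image(M: np.ndarray, points_vehicle: np.ndarray) -> np.ndarray:
    """Project Nx3 points in the self-vehicle frame to pixel coordinates (u, v)
    with a 3x4 camera parameter matrix M, as in formula (3)."""
    pts_h = np.hstack([points_vehicle, np.ones((len(points_vehicle), 1))])  # Nx4
    uvw = (M @ pts_h.T).T                     # Nx3, rows are (Z*u, Z*v, Z)
    return uvw[:, :2] / uvw[:, 2:3]           # divide by the third-row scale Z

# Hypothetical camera matrix: intrinsics K times extrinsics [R | t].
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
# Camera 1.5 m above the ground, looking along the vehicle x axis.
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([[0.0], [1.5], [0.0]])
M = K @ np.hstack([R, t])

# Sampling points of a lane line 20 to 40 m ahead, 1.75 m to the left.
lane_pts = np.array([[x, 1.75, 0.0] for x in range(20, 41, 5)], dtype=float)
print(project_to_image(M, lane_pts))          # projected pixel coordinates
```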
As shown in fig. 4, an embodiment of determining the relationship between an obstacle and the road and lanes from the map projection in the image plane and the obstacle information in the image plane is given. If the image-based obstacle detection algorithm can only provide a 2D bounding box (obstacles 1, 2 and 3), whether the obstacle is within the road boundary can be judged from the bottom edge of the 2D bounding box (i.e., its ground line); for targets within the road boundary, the lane can also be judged by taking the two vertices of the bottom edge as the vertices in formula (2) and applying formulas (1) and (2) in pixel coordinates, so that the lane in which the target is located and the relative position of the target to the lane can be determined.
However, when both the side and rear faces of an obstacle are visible, the 2D bounding box cannot distinguish them, so using the bottom edge of the 2D bounding box introduces errors. If the detection algorithm can provide the projected 3D bounding box of the obstacle (e.g., obstacle 4), formulas (1) and (2) can be applied to the three bottom vertices of its side and rear faces (A, B and C in the figure) to determine the lane in which the target is located and its relative position to the lane.
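A minimal sketch of the bottom-edge check, under the assumptions that the lane lines have already been projected into pixel coordinates with formula (3) and that bounding boxes are axis-aligned (u, v) rectangles; none of the names below come from the patent:

```python
def signed_dist_px(pt, p0, p1):
    """Signed point-to-line distance in pixel coordinates."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    rx, ry = pt[0] - p0[0], pt[1] - p0[1]
    return (dx * ry - dy * rx) / (dx * dx + dy * dy) ** 0.5

def bottom_edge(bbox):
    """Bottom edge (ground line) of an axis-aligned box (u_min, v_min, u_max, v_max);
    image v grows downwards, so the bottom edge lies at v_max."""
    u_min, _, u_max, v_max = bbox
    return [(u_min, v_max), (u_max, v_max)]

def lane_in_image(projected_lanes, bbox):
    """Formulas (1) and (2) applied in pixel coordinates to the bottom-edge vertices.

    `projected_lanes`: lane name -> (left_line, right_line), each line given by two
    pixel points obtained by projecting the map lane lines with formula (3).
    """
    verts = bottom_edge(bbox)
    for name, (left, right) in projected_lanes.items():
        dl = [signed_dist_px(v, *left) for v in verts]
        dr = [signed_dist_px(v, *right) for v in verts]
        # same-sign rule of formula (2), then the product test of formula (1)
        dist_l = min(dl, key=abs) if dl[0] * dl[1] > 0 else 0.0
        dist_r = min(dr, key=abs) if dr[0] * dr[1] > 0 else 0.0
        if dist_l * dist_r <= 0:
            return name
    return None   # bottom edge lies outside the projected road boundary
```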
3) Drivable-area fusion and boundary-state analysis: the drivable area obtained by real-time perception is fused with the drivable area provided by the map, and the state of the drivable-area boundary is judged segment by segment according to the real-time perceived obstacle information, to obtain boundary semantics and speed information.
The specific fusion method is as follows. Real-time perception of the drivable area can be realized by segmenting the connected ground region from lidar or vision-sensor data. The range of the drivable area is then narrowed using the road boundary provided by the map and the virtual boundaries formed by traffic-rule constraints that cannot be crossed, such as stop lines and solid lane lines; meanwhile, the semantics (i.e., the type of each boundary segment) and the speed of the drivable-area boundary are determined by combining the obstacle information.
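For illustration, a toy sketch of extracting the connected ground region on an occupancy grid; the grid layout and the use of SciPy are assumptions, not the patent's method:

```python
import numpy as np
from scipy import ndimage

def drivable_region_from_grid(ground_mask: np.ndarray, ego_cell: tuple) -> np.ndarray:
    """Keep only the ground cells connected to the self-vehicle cell.

    ground_mask: boolean occupancy grid, True where the sensor sees free ground.
    ego_cell: (row, col) of the self-vehicle in the grid.
    """
    labels, _ = ndimage.label(ground_mask)      # connected-component labelling
    ego_label = labels[ego_cell]
    return labels == ego_label                  # the connected drivable region

grid = np.ones((100, 100), dtype=bool)
grid[:, 60:] = False        # e.g. cells beyond the right road boundary are not ground
print(drivable_region_from_grid(grid, ego_cell=(50, 30)).sum())
```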
As shown in fig. 5, the perceived drivable area is narrowed by taking the virtual boundaries of the traffic-rule constraints into account. Semantic information is then assigned to the boundary segments: segments formed by physical road boundaries such as curbs and guardrails, and segments formed by traffic-rule boundaries, are given a boundary speed of 0. The result is then fused with the real-time perceived obstacles to determine which boundary segments are formed by dynamic obstacles; for those segments, the speed of the dynamic obstacle is taken as the boundary speed of the drivable area. The remaining boundary segments are formed by the detection limits of the sensors and are given a boundary speed of 0, completing the semantic and speed division of all boundary segments.
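An illustrative sketch, not the patent's implementation, of labelling drivable-area boundary segments with semantics and speed; the data layout and the matching threshold are assumptions:

```python
from dataclasses import dataclass

@dataclass
class BoundarySegment:
    points: list          # polyline of the segment, in map coordinates
    semantics: str = "sensor detection limit"
    speed: float = 0.0    # boundary speed in m/s

def label_boundary(segments, map_boundaries, rule_boundaries, dynamic_obstacles,
                   match_dist=0.5):
    """Assign semantics and speed to each drivable-area boundary segment.

    map_boundaries / rule_boundaries: map points belonging to physical road
    boundaries (curbs, guardrails) and to traffic-rule boundaries (stop lines,
    solid lane lines). dynamic_obstacles: list of (polygon_points, speed).
    """
    def near(pt, pts):
        return any((pt[0] - q[0])**2 + (pt[1] - q[1])**2 <= match_dist**2 for q in pts)

    for seg in segments:
        if all(near(p, map_boundaries) for p in seg.points):
            seg.semantics, seg.speed = "physical road boundary", 0.0
        elif all(near(p, rule_boundaries) for p in seg.points):
            seg.semantics, seg.speed = "traffic-rule boundary", 0.0
        else:
            for poly, speed in dynamic_obstacles:
                if any(near(p, poly) for p in seg.points):
                    seg.semantics, seg.speed = "dynamic obstacle", speed
                    break
            # otherwise the defaults remain: sensor detection limit, speed 0
    return segments
```

The point-matching threshold is only one possible association rule; any nearest-neighbour test between boundary segments and map or obstacle geometry would serve the same purpose in this sketch.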
4) The results of step 2) and step 3) are combined with the static environment information of the map to realize integrated output of the perception result.
By exploiting the associations among the various elements of the environment, real-time perception and map information are fused through the obstacle-road relationship, the drivable-area fusion and the boundary-state analysis. The method filters out obstacles outside the road and determines, for each obstacle in the road, the lane it occupies and its position and angle relative to that lane, thereby providing a basis for motion prediction; unreasonable parts of the perceived drivable area are removed according to the map information, and the semantic type and speed of each boundary segment are obtained. In this way, the various environmental elements are linked and integrated on the map platform.
The invention also provides a system for fusing the real-time perception information and the automatic driving map, which comprises a map matching and positioning module, a relation determining module, an analyzing module and an output module;
the map matching and positioning module is used for performing map matching and positioning, giving the position of the vehicle in a map coordinate system and determining the relationship between the vehicle coordinate system and the map coordinate system;
the relationship determining module is used for determining the relationship between obstacles and roads in the map; the real-time perceived obstacle information is divided into three categories according to its relationship with the road: in-road targets, off-road targets and road-boundary targets, and for an in-road target the lane in which it is located should also be confirmed; according to the relationship between the self-vehicle coordinate system and the map coordinate system given by map matching and positioning, the obstacle information is compared with the map information to obtain the relationship between the obstacle and the road;
the relationship of obstacles to roads in a map is divided into two cases:
(1) obstacles detected by a three-dimensional sensor: such sensors directly measure the motion state of the obstacle in the self-vehicle coordinate system, and the position of the obstacle is transferred to the map using the relationship between the self-vehicle coordinate system and the map coordinate system so that it can be compared with the map;
(2) obstacles detected by a monocular vision sensor in the image plane: such a sensor only obtains a planar projection of the map coordinate system onto the image coordinate system, so the road boundaries and lane lines in the map are projected into the image through the calibration matrix and then compared with the obstacles in the image, whereby the relationship between each obstacle and the road is judged and the lane in which it is located is obtained;
the analysis module performs the drivable-area fusion and the boundary-state analysis;
and the output module combines the results of the relation determination module and the analysis module with the static environment information of the map to realize the integrated output of the perception result.
In the above embodiment, in the analysis module, the drivable area obtained through real-time perception is fused with the drivable area provided by the map, and the state of the drivable-area boundary is judged segment by segment according to the real-time perceived obstacle information to obtain boundary semantics and speed information.
The above embodiments are only for illustrating the present invention, and the steps may be changed, and on the basis of the technical solution of the present invention, the modification and equivalent changes of the individual steps according to the principle of the present invention should not be excluded from the protection scope of the present invention.

Claims (8)

1. A method for fusing real-time perception information and an automatic driving map is characterized by comprising the following steps:
1) carrying out map matching positioning, giving the position of the self-vehicle in a map coordinate system, and determining the relation between the self-vehicle coordinate system and the map coordinate system;
2) determining the relationship between obstacles and roads in the map;
the real-time perceived obstacle information is divided into three categories according to its relationship with the road: an in-road target, an out-of-road target, and a road-boundary target; wherein, for the in-road target, the lane in which it is located should be confirmed; according to the relationship between the self-vehicle coordinate system and the map coordinate system given during map matching and positioning, the obstacle information is compared with the map information to obtain the relationship between the obstacle and the road;
the relationship of obstacles to roads in a map is divided into two cases:
2.1) obstacles detected by a three-dimensional sensor: the sensor directly measures the motion state of the obstacle in the self-vehicle coordinate system, and the position of the obstacle is transferred to the map by utilizing the relationship between the self-vehicle coordinate system and the map coordinate system so as to be compared with the map;
2.2) obstacles detected by a monocular vision sensor in the image plane: the sensor obtains a planar projection of the map coordinate system on the image coordinate system, so the road boundaries and lane lines in the map are projected into the image through the calibration matrix and then compared with the obstacles in the image, whereby the relationship between the obstacle and the road is judged and the lane in which the obstacle is located is obtained;
3) fusion of a drivable area and boundary state analysis;
4) combining the step 2) and the step 3) with the static environment information of the map to realize the integrated output of the perception result;
the relationship of the obstacle to the road is calculated as follows:

$$\operatorname{dist}(o, b_l)\cdot \operatorname{dist}(o, b_r)\;\begin{cases}<0, & o \text{ is inside the road}\\ =0, & o \text{ is on the road boundary}\\ >0, & o \text{ is outside the road}\end{cases}\qquad(1)$$

where $b_l$ is the left boundary of the road, $b_r$ is the right boundary of the road, $o$ is an obstacle, and $\operatorname{dist}$ is a distance function;
if the shape of the obstacle is not observed, the distance to the lane line is calculated from the position of the obstacle; if the shape of the obstacle is observed, the distance to the lane line is calculated for each vertex of the bounding box: if the signs are all the same, the value with the smallest absolute value is taken, and if the signs are not all the same, 0 is taken, as shown in the following formula:

$$\operatorname{dist}(o, l)=\begin{cases} d(p, l), & \text{shape not observed}\\ \operatorname{sgn}\bigl(d(v_1, l)\bigr)\,\min\limits_{1\le i\le n}\bigl|d(v_i, l)\bigr|, & d(v_1, l),\dots,d(v_n, l)\text{ all of the same sign}\\ 0, & \text{otherwise}\end{cases}\qquad(2)$$

where $d$ is the point-to-line distance function, $l$ is the lane line, $p$ is the position of the obstacle, $v_i$ is a vertex of the obstacle, $i$ is an integer ranging from 1 to $n$, and $n$ is the number of vertices of the obstacle.
2. The fusion method of claim 1, wherein: in step 2), the real-time perceived obstacles are transferred to the map coordinate system by using the map matching and positioning result; each obstacle is treated as a target, the targets are screened according to the road boundary, for targets within the road boundary the lane in which each target is located is judged, and the relationship between the target and the lane is determined, obtaining dynamic and static targets fused with the map information.
3. The fusion method of claim 1, wherein: when judging the lane in which an in-road target is located, the lanes are checked one by one, and the left and right boundaries of a given lane are substituted for the left and right road boundaries in formula (1) to judge whether the target is within that lane, until a lane is found such that the target lies within or on its boundaries;
for an in-road target, the relationship between the target and the lane is further calculated: the yaw angle of the obstacle direction relative to the lane center-line direction is judged from the obstacle direction and the lane-line direction; and the distances from the obstacle to the left and right lane lines are calculated with formula (2) from the position and shape of the obstacle and the lane lines.
4. The fusion method of claim 1, wherein: projecting the road boundary and the lane line in the map into the image, wherein the formula is as follows:
$$Z\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix} m_{01} & m_{02} & m_{03} & m_{04}\\ m_{11} & m_{12} & m_{13} & m_{14}\\ m_{21} & m_{22} & m_{23} & m_{24}\end{bmatrix}\begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix}$$

where $(x, y, z)$ are the coordinates of a point in the self-vehicle coordinate system, $m_{qj}$ ($q$ = 0, 1, 2; $j$ = 1, 2, 3, 4) are the elements of the camera parameter matrix, $u$ and $v$ are respectively the horizontal and vertical coordinates of the image pixel, and $Z$ is the optical-center height of the camera.
5. The fusion method according to any one of claims 1 to 4, wherein: in step 3), the drivable area obtained through real-time perception is fused with the drivable area provided by the map, and the state of the drivable-area boundary is judged segment by segment according to the real-time perceived obstacle information, to obtain boundary semantics and speed information.
6. The fusion method of claim 5, wherein: real-time perception of the drivable area is realized by segmenting the connected ground region from lidar or vision-sensor data; the range of the drivable area is narrowed by utilizing the road boundary provided by the map and boundaries that cannot be crossed under traffic rules, such as stop lines and solid lines; meanwhile, the semantics and speed of the drivable-area boundary are determined by combining the obstacle information.
7. A fusion system of real-time perception information and an automatic driving map is characterized by comprising a map matching positioning module, a relation determining module, an analyzing module and an output module;
the map matching and positioning module is used for performing map matching and positioning, giving the position of the self-vehicle in a map coordinate system and determining the relation between the self-vehicle coordinate system and the map coordinate system;
the relationship determining module is used for determining the relationship between obstacles and roads in the map; the real-time perceived obstacle information is divided into three categories according to its relationship with the road: an in-road target, an out-of-road target, and a road-boundary target; wherein, for the in-road target, the lane in which it is located should be confirmed; according to the relationship between the self-vehicle coordinate system and the map coordinate system given during map matching and positioning, the obstacle information is compared with the map information to obtain the relationship between the obstacle and the road;
the relationship of obstacles to roads in a map is divided into two cases:
(1) obstacles detected by a three-dimensional sensor: the sensor directly measures the motion state of the obstacle in the self-vehicle coordinate system, and the position of the obstacle is transferred to the map by utilizing the relationship between the self-vehicle coordinate system and the map coordinate system so as to be compared with the map;
(2) obstacles detected by a monocular vision sensor in the image plane: the sensor obtains a planar projection of the map coordinate system on the image coordinate system, so the road boundaries and lane lines in the map are projected into the image through the calibration matrix and then compared with the obstacles in the image, whereby the relationship between the obstacle and the road is judged and the lane in which the obstacle is located is obtained;
the analysis module performs the drivable-area fusion and the boundary-state analysis;
the output module combines the results of the relation determining module and the analysis module with the static environment information of the map to realize the integrated output of the perception result;
the relationship of the obstacle to the road is calculated as follows:

$$\operatorname{dist}(o, b_l)\cdot \operatorname{dist}(o, b_r)\;\begin{cases}<0, & o \text{ is inside the road}\\ =0, & o \text{ is on the road boundary}\\ >0, & o \text{ is outside the road}\end{cases}\qquad(1)$$

where $b_l$ is the left boundary of the road, $b_r$ is the right boundary of the road, $o$ is an obstacle, and $\operatorname{dist}$ is a distance function;
if the shape of the obstacle is not observed, the distance to the lane line is calculated from the position of the obstacle; if the shape of the obstacle is observed, the distance to the lane line is calculated for each vertex of the bounding box: if the signs are all the same, the value with the smallest absolute value is taken, and if the signs are not all the same, 0 is taken, as shown in the following formula:

$$\operatorname{dist}(o, l)=\begin{cases} d(p, l), & \text{shape not observed}\\ \operatorname{sgn}\bigl(d(v_1, l)\bigr)\,\min\limits_{1\le i\le n}\bigl|d(v_i, l)\bigr|, & d(v_1, l),\dots,d(v_n, l)\text{ all of the same sign}\\ 0, & \text{otherwise}\end{cases}\qquad(2)$$

where $d$ is the point-to-line distance function, $l$ is the lane line, $p$ is the position of the obstacle, $v_i$ is a vertex of the obstacle, $i$ is an integer ranging from 1 to $n$, and $n$ is the number of vertices of the obstacle.
8. The fusion system of claim 7, wherein: in the analysis module, the drivable area obtained by real-time perception is fused with the drivable area provided by the map, and the state of the drivable-area boundary is judged segment by segment according to the real-time perceived obstacle information to obtain boundary semantics and speed information.
CN202010329502.4A 2020-04-24 2020-04-24 Fusion method and system of real-time perception information and automatic driving map Active CN111208839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010329502.4A CN111208839B (en) 2020-04-24 2020-04-24 Fusion method and system of real-time perception information and automatic driving map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010329502.4A CN111208839B (en) 2020-04-24 2020-04-24 Fusion method and system of real-time perception information and automatic driving map

Publications (2)

Publication Number Publication Date
CN111208839A CN111208839A (en) 2020-05-29
CN111208839B (en) 2020-08-04

Family

ID=70788965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010329502.4A Active CN111208839B (en) 2020-04-24 2020-04-24 Fusion method and system of real-time perception information and automatic driving map

Country Status (1)

Country Link
CN (1) CN111208839B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022860A (en) * 2020-07-16 2022-02-08 长沙智能驾驶研究院有限公司 Target detection method and device and electronic equipment
CN111912418A (en) * 2020-07-16 2020-11-10 知行汽车科技(苏州)有限公司 Method, device and medium for deleting obstacles in non-driving area of mobile carrier
CN112382085A (en) * 2020-10-20 2021-02-19 华南理工大学 System and method suitable for intelligent vehicle traffic scene understanding and beyond visual range perception
CN113189610A (en) * 2021-04-28 2021-07-30 中国科学技术大学 Map-enhanced autonomous driving multi-target tracking method and related equipment
CN113688880A (en) * 2021-08-02 2021-11-23 南京理工大学 Obstacle map creating method based on cloud computing
CN113418522B (en) * 2021-08-25 2021-12-14 季华实验室 AGV path planning method, following method, device, equipment and storage medium
CN113682300B (en) * 2021-08-25 2023-09-15 驭势科技(北京)有限公司 Decision method, device, equipment and medium for avoiding obstacle
CN115797900B (en) * 2021-09-09 2023-06-27 廊坊和易生活网络科技股份有限公司 Vehicle-road gesture sensing method based on monocular vision
CN115774444B (en) * 2021-09-09 2023-07-25 廊坊和易生活网络科技股份有限公司 Path planning optimization method based on sparse navigation map
CN114332818B (en) * 2021-12-28 2024-04-09 阿波罗智联(北京)科技有限公司 Obstacle detection method and device and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6559535B2 (en) * 2015-10-22 2019-08-14 株式会社東芝 Obstacle map generation device, method thereof, and program thereof
CN105929823A (en) * 2016-04-29 2016-09-07 大连楼兰科技股份有限公司 Automatic driving system and driving method based on existing map
KR102395283B1 (en) * 2016-12-14 2022-05-09 현대자동차주식회사 Apparatus for controlling automatic driving, system having the same and method thereof
KR20180106417A (en) * 2017-03-20 2018-10-01 현대자동차주식회사 System and Method for recognizing location of vehicle
CN109829386B (en) * 2019-01-04 2020-12-11 清华大学 Intelligent vehicle passable area detection method based on multi-source information fusion
CN110032181B (en) * 2019-02-26 2022-05-17 文远知行有限公司 Method and device for positioning barrier in semantic map, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111208839A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
CN111208839B (en) Fusion method and system of real-time perception information and automatic driving map
CN108960183B (en) Curve target identification system and method based on multi-sensor fusion
CN107646114B (en) Method for estimating lane
US11790668B2 (en) Automated road edge boundary detection
CN107161141B (en) Unmanned automobile system and automobile
Huang et al. Finding multiple lanes in urban road networks with vision and lidar
Broggi et al. The ARGO autonomous vehicle’s vision and control systems
WO2022141910A1 (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
Smadja et al. Road extraction and environment interpretation from LiDAR sensors
Aycard et al. Intersection safety using lidar and stereo vision sensors
WO2020185489A1 (en) Sensor validation using semantic segmentation information
CN110379168B (en) Traffic vehicle information acquisition method based on Mask R-CNN
CN110307791B (en) Vehicle length and speed calculation method based on three-dimensional vehicle boundary frame
CN110197173B (en) Road edge detection method based on binocular vision
US20230266473A1 (en) Method and system for object detection for a mobile robot with time-of-flight camera
CN109895697B (en) Driving auxiliary prompting system and method
Hara et al. Vehicle localization based on the detection of line segments from multi-camera images
CN113988197A (en) Multi-camera and multi-laser radar based combined calibration and target fusion detection method
Huang et al. Probabilistic lane estimation for autonomous driving using basis curves
CN110415299B (en) Vehicle position estimation method based on set guideboard under motion constraint
Gehrig et al. 6D vision goes fisheye for intersection assistance
CN111353481A (en) Road obstacle identification method based on laser point cloud and video image
CN116385997A (en) Vehicle-mounted obstacle accurate sensing method, system and storage medium
CN114119896B (en) Driving path planning method
KR102368262B1 (en) Method for estimating traffic light arrangement information using multiple observation information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant