CN114136328B - Sensor information fusion method and device

Info

Publication number
CN114136328B
Authority
CN
China
Prior art keywords
target
vehicle
boundary
sensor
area
Prior art date
Legal status: Active
Application number
CN202111416302.3A
Other languages
Chinese (zh)
Other versions
CN114136328A
Inventor
万国强
战阳
张斯怡
朱明
Current Assignee
Beijing Jingwei Hirain Tech Co Ltd
Original Assignee
Beijing Jingwei Hirain Tech Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingwei Hirain Tech Co Ltd
Priority to CN202111416302.3A
Publication of CN114136328A
Application granted
Publication of CN114136328B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention provides a sensor information fusion method and device, applied to the technical field of automobiles. The method can effectively reduce the amount of sensor information to be fused, improve fusion efficiency, and meet the requirement of autonomous driving for rapid driving decisions.

Description

Sensor information fusion method and device
Technical Field
The invention belongs to the technical field of automobiles, and particularly relates to a sensor information fusion method and device.
Background
In practical applications, an autonomous vehicle is often provided with a plurality of positioning devices and sensors to determine the position of the vehicle and environmental information around the vehicle, providing reference information for the autonomous system to determine driving decisions.
Because the vehicle carries a plurality of sensors, each sensor feeds back a corresponding perception target for the same real object around the vehicle driving route, and each perception target carries its own sensor information. In order to describe a given real object fully and accurately, the sensor information of the perception targets corresponding to the same real object needs to be combined together; this process is sensor information fusion.
In the existing sensor information fusion process, a large amount of computation has to be carried out over a large number of perception targets. The information fusion process is therefore time-consuming and inefficient, occupies a large amount of hardware resources, and can hardly meet the requirement of autonomous driving for rapid driving decisions.
Disclosure of Invention
In view of the above, the present invention aims to provide a sensor information fusion method and device that determine, for each sensor, an effective sensing area adapted to that sensor by taking its detection deviation into account, and fuse sensor information only for the reference perception targets inside the effective sensing area. This effectively reduces the amount of sensor information to be fused, shortens the time consumed by the fusion process, improves fusion efficiency, reduces the occupation of hardware resources, and meets the requirement of autonomous driving for rapid driving decisions. The specific scheme is as follows:
In a first aspect, the present invention provides a method for fusing sensor information, including:
acquiring vehicle positioning information, a high-precision map and reference information of each vehicle-mounted sensor, wherein the reference information comprises preset information related to detection deviation and perception targets corresponding to different objects;
determining the position coordinates of the vehicle in the high-precision map according to the vehicle positioning information;
based on the position coordinates, preset information of each vehicle-mounted sensor and lane boundaries in the high-precision map, respectively determining effective sensing areas corresponding to each vehicle-mounted sensor;
for each vehicle-mounted sensor, determining, among the sensing targets of the vehicle-mounted sensor, the reference sensing targets located in the corresponding effective sensing area;
and fusing the sensor information of the reference sensing target corresponding to the same real object into the sensor information of the corresponding real object.
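Purely as orientation, the following is a skeleton of the five steps above; every callable and field name is a hypothetical placeholder rather than part of the claimed method:

```python
from typing import Callable, Dict, Iterable, List, Tuple

def fuse_sensor_information(
    vehicle_xy: Tuple[float, float],
    sensors: Iterable[dict],
    effective_area: Callable[[Tuple[float, float], dict], list],
    in_area: Callable[[dict, list], bool],
    object_id: Callable[[dict], int],
    merge: Callable[[List[dict]], dict],
) -> Dict[int, dict]:
    """Skeleton of the claimed flow; all callables stand in for the later steps
    (effective-area construction, in-area test, clustering, information fusion)."""
    clusters: Dict[int, List[dict]] = {}
    for sensor in sensors:
        area = effective_area(vehicle_xy, sensor)             # per-sensor effective sensing area
        for target in sensor["perceived_targets"]:
            if in_area(target, area):                         # keep only reference sensing targets
                clusters.setdefault(object_id(target), []).append(target)
    # fuse the sensor information of reference targets belonging to the same real object
    return {obj: merge(targets) for obj, targets in clusters.items()}
```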
Optionally, the determining, based on the position coordinates, preset information of each vehicle-mounted sensor, and a lane boundary in the high-precision map, an effective sensing area corresponding to each vehicle-mounted sensor includes:
taking each vehicle-mounted sensor as a target sensor;
Determining a target lane boundary and boundary parameters of the target lane boundary in the high-precision map according to the position coordinates and preset information of the target sensor;
determining a plurality of regional vertexes according to preset information of the target sensor and the boundary parameters;
and taking the polygonal area corresponding to each area vertex as an effective sensing area of the target sensor.
Optionally, the preset information related to the detection deviation includes: boundary length, preset boundary width and transverse deviation function;
the boundary parameters include: the start point coordinates and end point coordinates of the target lane boundary, and the boundary angle between the target lane boundary and the horizontal direction;
the determining a plurality of area vertices according to the preset information of the target sensor and the boundary parameter includes:
inputting preset information of the target sensor and boundary parameters of the target lane boundary into the following formula to obtain corresponding regional vertexes:
where L_BS-E represents the boundary length;
x_V represents the abscissa of the vehicle in the high-precision map;
x_BS represents the abscissa and y_BS the ordinate of the start point coordinates of the target lane boundary;
x_BE represents the abscissa and y_BE the ordinate of the end point coordinates of the target lane boundary;
E_RAD represents the lateral deviation function corresponding to the target sensor;
θ represents the boundary angle between the target lane boundary and the horizontal direction;
W_1 represents the lateral detection deviation of the target sensor at the boundary end point of the target lane boundary;
W_3 represents the lateral detection deviation of the target sensor at the boundary start point of the target lane boundary;
W_2 represents the preset boundary width;
x_A represents the abscissa and y_A the ordinate of the first region vertex;
x_B represents the abscissa and y_B the ordinate of the second region vertex;
x_C represents the abscissa and y_C the ordinate of the third region vertex;
x_D represents the abscissa and y_D the ordinate of the fourth region vertex.
Optionally, the sensing target of the vehicle-mounted sensor is represented by a coordinate point;
the determining the reference sensing target in the corresponding effective sensing area in the sensing target of the vehicle-mounted sensor comprises the following steps:
taking coordinate points of all perception targets of the vehicle-mounted sensor as target coordinate points respectively;
Judging whether the target coordinate point is in an effective sensing area corresponding to the vehicle-mounted sensor;
and if the target coordinate point is in the effective sensing area, judging that the sensing target corresponding to the target coordinate point is a reference sensing target in the effective sensing area of the vehicle-mounted sensor.
Optionally, the sensing target of the vehicle-mounted sensor is represented by a directed bounding box;
the determining the reference sensing target in the corresponding effective sensing area in the sensing target of the vehicle-mounted sensor comprises the following steps:
taking the directed bounding boxes corresponding to the sensing targets of the vehicle-mounted sensor as target directed bounding boxes respectively;
determining an overlapping area of the target directional bounding box and the effective perception area;
calculating the overlapping rate of the overlapping area and the effective sensing area;
and if the overlapping rate is greater than or equal to a preset overlapping rate threshold value, judging that the sensing target corresponding to the target directional bounding box is a reference sensing target in an effective sensing area of the vehicle-mounted sensor.
Optionally, the determining the overlapping area of the target directional bounding box and the effective sensing area includes:
determining the boundary intersection points of the target directional bounding box and the effective sensing area, the target directional bounding box vertices and the target area vertices, wherein the target directional bounding box vertices are the bounding box vertices of the target directional bounding box that are located in the effective sensing area, and the target area vertices are the area vertices of the effective sensing area that are located in the target directional bounding box;
taking the polygonal area whose vertices are the target directional bounding box vertices, the target area vertices and the boundary intersection points as the overlapping area of the target directional bounding box and the effective sensing area;
and taking the area corresponding to the target directional bounding box as the overlapping area when all bounding box vertices of the target directional bounding box are in the effective sensing area.
Optionally, taking the polygonal area whose vertices are the target directional bounding box vertices, the target area vertices and the boundary intersection points as the overlapping area of the target directional bounding box and the effective sensing area includes:
when one target directional bounding box vertex and two boundary intersection points are determined and no target area vertex is present, taking the triangular area enclosed by the boundary intersection points and the target directional bounding box vertex as the overlapping area;
when one target directional bounding box vertex and two boundary intersection points are determined and one target area vertex is present, taking the quadrilateral area formed by the boundary intersection points, the target directional bounding box vertex and the target area vertex as the overlapping area;
when two target directional bounding box vertices and two boundary intersection points are determined and no target area vertex is present, taking the quadrilateral area corresponding to the two target directional bounding box vertices and the two boundary intersection points as the overlapping area;
when two target directional bounding box vertices and two boundary intersection points are determined and one target area vertex is present, taking the pentagonal area formed by the two target directional bounding box vertices, the two boundary intersection points and the target area vertex as the overlapping area;
when three target directional bounding box vertices and two boundary intersection points are determined, taking the part of the directional bounding box outside the triangular area formed by the remaining bounding box vertex (the one not located in the effective sensing area) and the two boundary intersection points as the overlapping area.
Optionally, the fusing the sensor information of the reference sensing target corresponding to the same physical object into the sensor information of the corresponding physical object includes:
determining effective perception targets influencing driving decisions in the reference perception targets of the vehicle-mounted sensors respectively;
respectively adding sensor information influencing driving decision into the sensor information of the effective perception target of each vehicle-mounted sensor;
taking the sensor information with the highest confidence coefficient corresponding to each effective perception target as target sensor information;
and fusing the target sensor information of the effective perceived target corresponding to the same real object into the sensor information of the corresponding real object.
Optionally, the determining the effective sensing target affecting the driving decision in the reference sensing targets of the vehicle-mounted sensors includes:
taking each vehicle-mounted sensor as a target vehicle-mounted sensor;
determining the position relation between each reference sensing target of the target vehicle-mounted sensor and a lane boundary in the high-precision map, and a preset reference object corresponding to the target vehicle-mounted sensor;
and taking the reference sensing targets in the lane boundary, the reference sensing targets which are outside the lane boundary and belong to moving targets and the reference sensing targets corresponding to the preset reference objects as effective sensing targets for influencing driving decisions.
In a second aspect, the present invention provides a sensor information fusion apparatus, comprising:
the acquisition unit is used for acquiring vehicle positioning information, a high-precision map and reference information of each vehicle-mounted sensor;
the reference information comprises preset information related to detection deviation and perception targets corresponding to different objects;
a first determining unit configured to determine a position coordinate of a vehicle in the high-precision map according to the vehicle positioning information;
the second determining unit is used for determining effective sensing areas corresponding to the vehicle-mounted sensors respectively based on the position coordinates, preset information of the vehicle-mounted sensors and lane boundaries in the high-precision map;
a third determining unit, configured to determine, for each vehicle-mounted sensor, the reference sensing targets that are located in the effective sensing area corresponding to that vehicle-mounted sensor, from among the sensing targets of the vehicle-mounted sensor;
and the fusion unit is used for fusing the sensor information of the reference sensing target corresponding to the same physical object into the sensor information of the corresponding physical object.
According to the sensor information fusion method provided by the invention, after the vehicle positioning information, the high-precision map, the preset information related to the detection deviation of each vehicle-mounted sensor and the sensing targets corresponding to different objects are obtained, the position coordinates of the vehicle in the high-precision map are firstly determined according to the vehicle positioning information, the effective sensing areas corresponding to each vehicle-mounted sensor are further determined based on the position coordinates, the preset information of each vehicle-mounted sensor and the lane boundaries in the high-precision map, then for each vehicle-mounted sensor, the reference sensing target in the corresponding effective sensing area is determined in the sensing targets of the vehicle-mounted sensor, and finally the sensor information of the reference sensing target corresponding to the same object is fused into the sensor information of the corresponding object.
Compared with the prior art, in which different sensors use a unified sensing area, the sensor information fusion method provided by the invention divides the sensing areas in a more targeted way, which achieves a first reduction of the computed data. Further, sensing targets outside the effective sensing area are eliminated and only the reference sensing targets inside the effective sensing area are processed, which effectively reduces the number of objects to be computed and further reduces the data volume. The method can therefore effectively reduce the amount of sensor information to be fused, shorten the time consumed by the fusion process, improve fusion efficiency, reduce the occupation of hardware resources, and meet the requirement of autonomous driving for rapid driving decisions.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a sensor information fusion method provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of an effective sensing area of an in-vehicle sensor according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a method for determining a reference perceived target according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another method for determining a reference perceived target provided by an embodiment of the present invention;
fig. 5 is a block diagram of a sensor information fusion device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The sensor information fusion method provided by the invention is applied to an electronic device. The electronic device may be an on-board controller, such as a controller of an autonomous driving system or another vehicle controller that needs multi-sensor information, or a personal computer with data processing capability, a data server, and the like; in some cases it may also be a server on the network side. Referring to fig. 1, the flow of the sensor information fusion method provided by the embodiment of the invention includes:
S100, acquiring vehicle positioning information, a high-precision map and reference information of each vehicle-mounted sensor.
The vehicle positioning information includes coordinate information fed back by a global satellite positioning system and positioning information fed back by an on-board inertial measurement unit; other prior-art information capable of representing the position of the vehicle may of course also be used and is not listed here.
It should be specifically noted that, because the method for fusing sensor information provided by the embodiment of the present invention needs to determine the position coordinates of the vehicle in the high-precision map in a subsequent step, positioning can be completed by further integrating information fed back by the vehicle-mounted sensor on the basis of coordinate information fed back by the global satellite positioning system and the like, and therefore, the vehicle positioning information described in the embodiment of the present invention further includes sensor information fed back by the vehicle-mounted sensor, for example, images around the vehicle fed back by the vehicle-mounted camera, point cloud information fed back by the vehicle-mounted laser radar and the like.
Compared with a common map in the prior art, the high-precision map records map information with much higher precision and can provide centimeter-level map data.
In practical applications, in order to sense the surrounding environment of the vehicle as comprehensively as possible and perform more accurate driving control, the vehicle is often provided with various types of vehicle-mounted sensors, such as the vehicle-mounted cameras, the vehicle-mounted laser radar, the vehicle-mounted ultrasonic radar, the vehicle-mounted millimeter wave radar, and the like, which are described in the foregoing, and the different types of vehicle-mounted sensors not only have different feedback sensor information, but also have different performances, such as detection ranges, detection deviations, optimal working scenes, and the like.
Based on this, the reference information of a vehicle-mounted sensor obtained in this step includes not only the sensing targets fed back by the vehicle-mounted sensor for different real objects (i.e., at least one sensing target for any vehicle-mounted sensor), but also preset information related to the detection deviation of the sensor, such as the boundary length, the preset boundary width and the lateral deviation function, and, where the sensing targets fed back by an individual vehicle-mounted sensor for a certain type of real object need special treatment, the corresponding preset reference object as well. The specific content and application of the reference information will be developed below and is not detailed here.
S110, determining the position coordinates of the vehicle in the high-precision map according to the vehicle positioning information.
Optionally, the vehicle positioning information may include the vehicle coordinates fed back by the global satellite positioning system, the positioning information fed back by the vehicle-mounted inertial measurement unit, and related information fed back by the vehicle-mounted sensors. Based on this, the approximate position of the vehicle in the high-precision map may first be determined from the vehicle coordinates fed back by the global satellite positioning system and the positioning information fed back by the vehicle-mounted inertial measurement unit. Sensor information fed back by the vehicle-mounted sensors, such as point cloud information from the vehicle-mounted laser radar and images of the vehicle surroundings from the vehicle-mounted camera, is then compared and matched with the map information around this approximate position recorded in the high-precision map, and the position coordinates of the vehicle in the high-precision map are determined according to the matching result.
The specific implementation method for determining the position coordinates of the vehicle in the high-precision map, which is not illustrated in the present step, may be implemented based on the related art, and is not developed here.
S120, based on the position coordinates, preset information of each vehicle-mounted sensor and lane boundaries in the high-precision map, effective sensing areas corresponding to the vehicle-mounted sensors are respectively determined.
Based on the operating principle of vehicle-mounted sensors, the sensing range of any type of vehicle-mounted sensor is limited. It is therefore neither feasible nor necessary to consider all map information in the high-precision map when fusing sensor information. In practice, an effective sensing area referenced to the vehicle is first determined in the high-precision map, and only the sensing targets and map information inside this effective sensing area are processed.
The related art takes an area within a preset range around the vehicle in the high-precision map as the effective sensing area, i.e., an effective sensing area of fixed extent. It can be understood that such a fixed effective sensing area, set only from a preset range, does not match the characteristics of the various sensors and cannot effectively accommodate their different detection deviations.
To solve this problem, this step treats each vehicle-mounted sensor individually according to its performance: on the basis of the position coordinates of the vehicle and the lane boundaries of the high-precision map, an effective sensing area matched to the performance of each vehicle-mounted sensor is determined.
Based on the detection principle of existing vehicle-mounted sensors, the lateral detection deviation of a vehicle-mounted sensor grows with the longitudinal detection distance, i.e., the lateral detection deviation gradually increases as the longitudinal detection distance increases. For a given type of vehicle-mounted sensor, the relation between longitudinal detection distance and lateral detection deviation can be expressed by a fitted function; this fitted function is the lateral deviation function described in the embodiment of the invention. The specific expression of the lateral deviation function differs for different types of vehicle-mounted sensors and can be obtained by fitting the performance parameters of the sensor and a large amount of detection test data; the invention does not limit the specific form of the lateral deviation function.
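Purely for illustration, such a lateral deviation function might be obtained by fitting calibration data; the numbers and polynomial degree below are placeholders, not values from the patent.

```python
import numpy as np

# Hypothetical calibration data: longitudinal detection distance (m) vs. measured
# lateral detection deviation (m) for one sensor; real values come from testing.
longitudinal_m = np.array([10.0, 30.0, 60.0, 100.0, 150.0])
lateral_dev_m = np.array([0.10, 0.35, 0.80, 1.40, 2.20])

# Fit a low-order polynomial as the lateral deviation function E_RAD.
coeffs = np.polyfit(longitudinal_m, lateral_dev_m, deg=2)
E_RAD = np.poly1d(coeffs)

w_at_80m = E_RAD(80.0)  # lateral detection deviation expected at an 80 m longitudinal distance
```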
In addition to the lateral deviation function, the preset information related to the detection deviation of the vehicle-mounted sensor also includes a boundary length and a preset boundary width, which will be developed in the following, for the specific application of the boundary length and the preset boundary width, which will not be described in detail here.
Based on the above, each vehicle-mounted sensor is used as a target sensor, and first, a target lane boundary and boundary parameters of the target lane boundary are determined in the high-precision map according to the position coordinates of the vehicle in the high-precision map and the boundary length in preset information, wherein the boundary parameters comprise: the start point coordinates and the end point coordinates of the boundary of the target lane and the boundary included angle between the boundary of the target lane and the horizontal direction.
Specifically, on a road on which a vehicle is traveling, including a left road boundary and a right road boundary, the target lane boundary described in the present invention may be a left road boundary or a right road boundary, the position coordinates of the vehicle in the high-precision map have been determined through the foregoing steps, and the lane boundaries in the high-precision map are also known, based on which, by calculating the vertical distance between the position coordinates of the vehicle and the lane boundaries, the lane boundary that is closer to the vehicle, that is, the lane boundary with a shorter vertical distance, of the two lane boundaries is taken as the target lane boundary.
Then, a point of the target lane boundary having the shortest distance from the vehicle position coordinates is set as a boundary start point, and coordinates of the boundary start point are set as start point coordinates of the target lane boundary. Then, a point on the boundary of the target lane, which is distant from the boundary start point by the boundary length, is taken as a boundary end point along the vehicle traveling direction, and the coordinates of the boundary end point are correspondingly taken as the end point coordinates of the target road boundary.
It should be noted that, in practical application, the boundary length should be selected based on the effective longitudinal detection distance of the target sensor, and in order to ensure the detection accuracy, the boundary length should not be greater than the effective longitudinal detection distance of the target sensor. For lane boundaries such as a fence, which have a limited length, the length of the fence may be used as the boundary length.
Furthermore, the boundary included angle between the boundary of the target lane and the horizontal direction needs to be determined, and the specific calculation process of the boundary included angle can be realized by combining the prior art, so that the invention is not particularly limited.
After the boundary parameters are determined, a plurality of area vertexes can be determined according to the preset boundary width, the transverse deviation function and the boundary parameters obtained in the previous steps, and then the polygonal area corresponding to each area vertex is used as an effective sensing area of the target sensor.
Specifically, taking the vehicle-mounted millimeter wave radar as an example, the process of determining the effective sensing area corresponding to the vehicle-mounted millimeter wave radar in this step is described with reference to fig. 2.
The area vertices of the effective perceived area are calculated according to the following formula:
What needs to be explained about the above formula is the following:
the position coordinates of the vehicle in the high-precision map are expressed as (x_V, y_V), where x_V represents the abscissa of the vehicle in the high-precision map; correspondingly, N(x_BS, y_BS) is the boundary start point of the target lane boundary and M(x_BE, y_BE) is the boundary end point of the target lane boundary;
L_BS-E represents the boundary length, i.e., the distance between point N and point M;
θ represents the boundary angle between the target lane boundary and the horizontal direction;
E_RAD is the lateral deviation function of the vehicle-mounted millimeter wave radar;
based on the above, the longitudinal detection distance of the vehicle position coordinates relative to point M is obtained, and W_1 represents the lateral detection deviation of the vehicle-mounted millimeter wave radar at point M, i.e., the distance between point A and point M;
correspondingly, the longitudinal detection distance of the vehicle position coordinates relative to point N is obtained, and W_3 represents the lateral detection deviation of the vehicle-mounted millimeter wave radar at point N, i.e., the distance between point D and point N;
W_2 denotes the preset boundary width outside the target lane boundary (relative to the lane on which the vehicle is travelling), i.e., the distance between point M and point B, and likewise the distance between point N and point C. It can be appreciated that the choice of the preset boundary width determines the size of the effective sensing area; in practice, its specific value must be chosen in view of the available computing power and the required detection accuracy.
According to the above formula, the coordinates of the four region vertices, i.e., points A, B, C and D, can be calculated and expressed as A(x_A, y_A), B(x_B, y_B), C(x_C, y_C) and D(x_D, y_D). The polygonal area formed by the four points A, B, C and D is the effective sensing area of the vehicle-mounted millimeter wave radar.
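A minimal geometric sketch of this construction follows, under the assumption that A and D lie on the lane side of the boundary (offset by the lateral detection deviations W_1 and W_3), that B and C lie outside it (offset by the preset boundary width W_2), and that the longitudinal detection distance is taken along the abscissa; the lateral deviation function and the sign conventions are placeholders, not the patent's exact formula.

```python
import math

def lateral_deviation(longitudinal_distance):
    # Hypothetical lateral deviation function E_RAD: a simple linear model whose
    # slope is chosen for illustration only; a real sensor would use its own fit.
    return 0.02 * abs(longitudinal_distance)

def effective_sensing_area(vehicle_xy, start_N, end_M, theta, preset_width_w2):
    """Region vertices A, B, C, D of the effective sensing area (assumed geometry):
    A and D on the lane side of the target lane boundary, B and C outside it."""
    x_v, _ = vehicle_xy
    x_bs, y_bs = start_N
    x_be, y_be = end_M

    # Unit vector perpendicular to the target lane boundary (boundary angle theta).
    nx, ny = -math.sin(theta), math.cos(theta)

    w1 = lateral_deviation(x_be - x_v)   # lateral deviation at boundary end point M
    w3 = lateral_deviation(x_bs - x_v)   # lateral deviation at boundary start point N

    a = (x_be + w1 * nx, y_be + w1 * ny)                            # lane-side vertex at M
    b = (x_be - preset_width_w2 * nx, y_be - preset_width_w2 * ny)  # outside vertex at M
    c = (x_bs - preset_width_w2 * nx, y_bs - preset_width_w2 * ny)  # outside vertex at N
    d = (x_bs + w3 * nx, y_bs + w3 * ny)                            # lane-side vertex at N
    return [a, b, c, d]
```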
Based on the above, it can be seen that: the determination of the effective sensing area is directly related to the performance of the vehicle-mounted sensor, and is obtained by combining the characteristics of the vehicle-mounted sensor and the detection deviation of the vehicle-mounted sensor, and different vehicle-mounted sensors correspond to the effective sensing areas with matched characteristics, so that detection errors and invalid detection caused by the fact that different types of vehicle-mounted sensors adopt the same sensing area are avoided.
S130, determining a reference perception target in an effective perception area corresponding to each vehicle-mounted sensor from the perception targets of the vehicle-mounted sensors.
In practical applications, different vehicle-mounted sensors represent their sensing targets differently. A millimeter wave radar, for example, represents a sensing target by the coordinate point of the corresponding real object, whereas a laser radar represents it by a directed bounding box (the process of deriving the directed bounding box from the laser point cloud can be carried out by the laser radar processor based on the prior art and is not limited by the invention). Different processing is therefore applied to the different forms of sensing target in order to determine the positional relation between the sensing targets of each vehicle-mounted sensor and the effective sensing area corresponding to that sensor.
Optionally, if the sensing target is represented by a coordinate point, the coordinate point of each sensing target of the vehicle-mounted sensor is taken as a target coordinate point in turn, and it is judged whether the target coordinate point lies inside the effective sensing area corresponding to the vehicle-mounted sensor. If it does, the sensing target corresponding to the target coordinate point is judged to be a reference sensing target within the effective sensing area of the vehicle-mounted sensor; otherwise, the sensing target corresponding to the target coordinate point is judged not to be within the effective sensing area.
As for the method of judging whether a given target coordinate point is within the effective sensing area, reference may be made to fig. 3. In the example shown in fig. 3, A, B, C and D are the four area vertices of the effective sensing area and point P is the target sensing target. Vectors are constructed between adjacent area vertices in clockwise order, and, in addition, vectors are constructed taking each of the four area vertices as the start point and the target sensing target P as the end point. The judgment is then performed according to the corresponding formula; if the judgment conditions are met, point P is judged to be inside the effective sensing area, and the target sensing target corresponding to P can be used as a reference sensing target.
Of course, other manners of judging whether the sensing target represented by the coordinate point is in the effective sensing area may be adopted, which is not listed here, and the sensing target is also within the scope of the present invention without exceeding the scope of the core idea of the present invention.
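One standard realization of such a check is the cross-product test below; it is a sketch for a convex quadrilateral given in clockwise order and does not reproduce the exact inequality used in the patent.

```python
def point_in_effective_area(p, area_vertices):
    """Cross-product test for a point inside a convex polygon A-B-C-D given in
    clockwise order; returns True when the point lies inside or on the boundary."""
    px, py = p
    n = len(area_vertices)
    for i in range(n):
        x1, y1 = area_vertices[i]
        x2, y2 = area_vertices[(i + 1) % n]
        # z-component of (V_i -> V_{i+1}) x (V_i -> P); for clockwise vertices the
        # point is inside when every cross product is <= 0.
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross > 0:
            return False
    return True
```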
Alternatively, if the perceived target is represented by a directed bounding box, this step is implemented primarily based on the overlap ratio of the directed bounding box with the effective perceived area.
Referring to fig. 4, fig. 4 is a schematic diagram of another method for determining a reference sensing target according to an embodiment of the present invention. The four area vertices A, B, C and D form the effective sensing area, and the remaining rectangular areas are sensing targets represented by directed bounding boxes.
As can be seen from fig. 4, the positional relation between the effective sensing area and the directed bounding boxes of the individual real objects differs according to the actual positions of the real objects and the vehicle. When the reference sensing targets are determined, the directed bounding box corresponding to each sensing target of the vehicle-mounted sensor is taken as a target directed bounding box in turn, the overlapping area of the target directed bounding box and the effective sensing area is determined, and the overlap rate of the overlapping area with respect to the effective sensing area is calculated. If the overlap rate corresponding to a target directed bounding box is greater than or equal to the preset overlap rate threshold, the sensing target corresponding to that bounding box is judged to be a reference sensing target within the effective sensing area of the vehicle-mounted sensor; otherwise, if the overlap rate is smaller than the preset threshold, the sensing target is judged not to be a reference sensing target.
It should be noted that the specific value of the preset overlap rate threshold is determined mainly by the accuracy requirement of the sensor information fusion result, the performance of the corresponding vehicle-mounted sensor, and the hardware performance of the electronic device executing the sensor information fusion method provided by the embodiment of the invention; the invention does not limit it.
Further, for a specific process of determining the overlapping area of the target directional bounding box and the effective sensing area, the following method may be adopted:
First, the boundary intersection points of the target directed bounding box and the effective sensing area, the target directed bounding box vertices and the target area vertices are determined, where the target directed bounding box vertices are the bounding box vertices of the target directed bounding box located inside the effective sensing area, and the target area vertices are the area vertices of the effective sensing area located inside the directed bounding box. As shown in fig. 4, P_in denotes a target directed bounding box vertex, P_int denotes a boundary intersection point, and P_out denotes a bounding box vertex lying outside the effective sensing area.
It should be noted that, the determination of the target directional bounding box vertex and the target area vertex may be implemented based on the embodiment shown in fig. 3, which is not described herein. For the determination of the intersection points of the boundaries, this may be achieved by calculating the intersection points of the corresponding boundaries, and the specific calculation process may refer to the prior art, which is not described in detail here.
Further, a polygonal region formed by the target directional bounding box vertices, the target region vertices, and the boundary intersections as vertices is used as an overlapping region of the target directional bounding box and the effective sensing region.
In a specific implementation process, the following different situations can be classified according to specific numbers of boundary intersection points, target directional bounding box vertices and target area vertices.
1. One target directional bounding box vertex and two boundary intersection points are determined; it is then judged whether a target area vertex exists. If no target area vertex exists, the triangular area enclosed by the boundary intersection points and the target directional bounding box vertex is taken as the overlapping area; correspondingly, if one target area vertex exists, the quadrilateral area formed by the boundary intersection points, the target directional bounding box vertex and the target area vertex is taken as the overlapping area;
2. two target directional bounding box vertices and two boundary intersection points are determined. If no target area vertex exists, the quadrilateral area corresponding to the two target directional bounding box vertices and the two boundary intersection points is taken as the overlapping area; correspondingly, if one target area vertex exists, the pentagonal area formed by the two target directional bounding box vertices, the two boundary intersection points and the target area vertex is taken as the overlapping area;
3. three target directional bounding box vertices and two boundary intersection points are determined; the part of the directional bounding box outside the triangular area formed by the remaining bounding box vertex (the one not inside the effective sensing area) and the two boundary intersection points is taken as the overlapping area;
4. four target directional bounding box vertices and no boundary intersection points are determined, i.e., all bounding box vertices of the target directional bounding box are inside the effective sensing area; the area corresponding to the target directional bounding box is taken as the overlapping area.
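The enumerated cases above can also be covered uniformly by a general convex-polygon clipping routine. The following is a minimal sketch (Sutherland-Hodgman clipping plus the shoelace formula), not the patent's own enumeration, assuming both polygons are convex and given in clockwise order.

```python
def clip_polygon(subject, clip):
    """Clip the convex 'subject' polygon (directed bounding box) against the convex
    'clip' polygon (effective sensing area); both are lists of (x, y) in clockwise
    order. The result is the vertex list of the overlapping region (may be empty)."""
    def inside(p, a, b):
        # Right side of edge a -> b for clockwise polygons.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) <= 0

    def intersection(p1, p2, a, b):
        # Intersection of line p1-p2 with the line through edge a-b (edges assumed non-parallel here).
        x1, y1, x2, y2 = *p1, *p2
        x3, y3, x4, y4 = *a, *b
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        input_list, output = output, []
        for j in range(len(input_list)):
            cur, prev = input_list[j], input_list[j - 1]
            if inside(cur, a, b):
                if not inside(prev, a, b):
                    output.append(intersection(prev, cur, a, b))
                output.append(cur)
            elif inside(prev, a, b):
                output.append(intersection(prev, cur, a, b))
    return output

def polygon_area(poly):
    """Shoelace formula, used to turn the overlap polygon into an area value."""
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                         - poly[(i + 1) % len(poly)][0] * poly[i][1]
                         for i in range(len(poly))))
```

With these helpers, the overlap rate described above could be computed as polygon_area(clip_polygon(bounding_box, effective_area)) / polygon_area(effective_area) and compared with the preset overlap rate threshold.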
S140, fusing the sensor information of the reference sensing target corresponding to the same physical object into the sensor information of the corresponding physical object.
It can be understood that, through the foregoing steps, the number of reference sensing targets corresponding to each vehicle-mounted sensor is different, and accordingly, for real objects on the vehicle driving path, the number of reference sensing targets corresponding to each vehicle-mounted sensor is also different, some real objects correspond to a plurality of reference sensing targets, and some real objects may correspond to only one reference sensing target.
The foregoing steps were carried out with each vehicle-mounted sensor as the object. From this step on, the real objects on the vehicle driving path are taken as the objects, and it is determined which reference sensing targets correspond to the same real object; in other words, the reference sensing targets of all vehicle-mounted sensors need to be clustered, and the reference sensing targets corresponding to the same real object determined. Clustering the reference sensing targets, i.e., determining which of them belong to the same real object, can be implemented based on the prior art and is not developed here.
Further, the sensor information of the reference sensing targets corresponding to the same real object is fused to obtain the sensor information of that real object. Optionally, in practical applications, the fused sensor information of a real object may be represented as a matrix, with each element of the matrix carrying one item of sensor information.
Optionally, considering that in practical application, part of the reference perception targets will not affect the driving decision or have an erroneous effect on the driving decision, the embodiment of the invention provides a method for further screening the reference perception targets.
First, the effective sensing targets that influence driving decisions are determined among the reference sensing targets of each vehicle-mounted sensor. Each vehicle-mounted sensor is taken as a target vehicle-mounted sensor in turn; the positional relation between each reference sensing target of the target vehicle-mounted sensor and the lane boundaries in the high-precision map, as well as the preset reference object corresponding to the target vehicle-mounted sensor, is determined. Among the reference sensing targets of the target vehicle-mounted sensor, the reference sensing targets located inside the lane boundary, the reference sensing targets located outside the lane boundary that are moving targets, and the reference sensing targets corresponding to the preset reference object are then taken as the effective sensing targets that influence driving decisions.
For example, among the reference sensing targets outside the lane boundaries, stationary objects outside the lane boundaries have no effect on the travel of the vehicle, so such reference sensing targets may be deleted directly. A moving object outside the lane boundary normally does not affect the vehicle either, but it is retained for the safety of autonomous driving.
The reference perception target in the lane boundary can be used as an effective perception target.
A preset reference object, such as a sewer manhole cover, has to be handled in combination with the specific characteristics of the vehicle-mounted sensor. A millimeter wave radar, being sensitive to metal, will stably output a sensing target that would influence the driving decision for a manhole cover on the lane of the vehicle; for the reference sensing target fed back by the millimeter wave radar for this preset reference object, a corresponding identification bit is therefore set in its sensor information to mark the distinction. Signals of other types of vehicle-mounted sensors are processed in a similar manner and are not listed here.
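A sketch of this screening step follows; the target fields and the externally supplied in-lane test are assumptions (the in-lane test could, for instance, reuse the point-in-area check sketched earlier applied to the lane-boundary polygon).

```python
def select_effective_targets(reference_targets, inside_lane, preset_reference_ids):
    """Screening rules sketched above. Each target is a dict with assumed keys:
    'position', 'is_moving' and 'reference_id' (the preset reference object it
    matches, or None); inside_lane is a callable position -> bool."""
    effective = []
    for target in reference_targets:
        if inside_lane(target["position"]):
            effective.append(target)                              # targets inside the lane boundary
        elif target["is_moving"]:
            effective.append(target)                              # moving targets outside the boundary
        elif target.get("reference_id") in preset_reference_ids:
            target["flagged_reference"] = target["reference_id"]  # e.g. manhole-cover identification bit
            effective.append(target)
    return effective
```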
Secondly, information that influences driving decisions is added to the sensor information of the effective sensing targets of each vehicle-mounted sensor. If an effective sensing target of a vehicle-mounted sensor is a reference sensing target inside the lane boundary, the lane information of that effective sensing target is added to its sensor information; if an effective sensing target of a vehicle-mounted sensor is a reference sensing target corresponding to a preset reference object, the marker information of the preset reference object is added to its sensor information, e.g., as described above, the manhole-cover marker information is added to the sensor information of the corresponding effective sensing target of the millimeter wave radar.
Finally, the sensor information with the highest confidence for each effective sensing target is taken as the target sensor information, and the target sensor information of the effective sensing targets corresponding to the same real object is fused into the sensor information of that real object. Different effective sensing targets inside the lane boundary are matched according to attributes such as lane line, position and speed, and for each attribute the sensor information of the vehicle-mounted sensor that performs best is selected as the final fusion result.
For example, if the accuracy of the recognition of the object position by the vehicle-mounted sensor 1 is high, the position information of the object uses the result of the vehicle-mounted sensor 1; and the speed measurement accuracy of the vehicle-mounted sensor 2 is high, and then the speed information of the real object uses the result of the vehicle-mounted sensor 2.
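A per-attribute selection sketch consistent with this description follows; the reading layout and field names are assumptions, not the patent's data format.

```python
def fuse_object_info(matched_readings):
    """For each attribute of one physical object, keep the value reported with the
    highest confidence among the matched effective targets. 'matched_readings' is a
    list of dicts {attribute: (value, confidence)}, one dict per matched target."""
    best = {}
    for readings in matched_readings:
        for attribute, (value, confidence) in readings.items():
            if attribute not in best or confidence > best[attribute][1]:
                best[attribute] = (value, confidence)
    return {attribute: value for attribute, (value, _) in best.items()}
```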
In particular, for a preset reference object, the complementary characteristics of the various vehicle-mounted sensors are taken into account. At a sewer manhole cover, for example, the vehicle-mounted millimeter wave radar outputs an effective sensing target carrying the manhole-cover marker bit. If this effective sensing target does not match any other effective sensing target, it is eliminated; if other sensing targets match it, i.e., other vehicle-mounted sensors also detect an effective sensing target at the manhole-cover position of the high-precision map, the sensor information of those other sensors' effective sensing targets at the manhole cover is taken as the fusion result. This processing avoids discarding useful sensor information while reducing interference signals.
In summary, according to the sensor information fusion method provided by the invention, the effective sensing area matched with the sensor is determined based on the performance parameters of the sensor, compared with the method that different sensors adopt uniform sensing areas in the prior art, the first reduction of calculated data can be realized, further, the sensing targets outside the effective sensing area are eliminated, only the reference sensing targets in the effective sensing area are calculated, the number of calculated objects is effectively reduced, and the data volume is further reduced.
Furthermore, different sensor information is selected as sensor information of a real object based on the performance of the sensor, and the sensor information of different types of finally used vehicle-mounted sensors is the information with the highest confidence, so that the accuracy of the sensor information of the fused real object can be further improved.
The sensor information fusion device provided by the embodiment of the present invention is described below; it may be regarded as the functional module architecture that needs to be provided in the electronic device in order to implement the sensor information fusion method provided by the embodiment of the present invention. The description below and the description above may be cross-referenced.
Referring to fig. 5, fig. 5 is a block diagram of a sensor information fusion device according to an embodiment of the present invention, where the device includes:
an acquisition unit 10 for acquiring vehicle positioning information, a high-precision map, and reference information of each vehicle-mounted sensor;
the reference information comprises preset information related to detection deviation and perception targets corresponding to different objects;
a first determining unit 20 for determining a position coordinate of the vehicle in the high-precision map based on the vehicle positioning information;
a second determining unit 30, configured to determine effective sensing areas corresponding to the vehicle-mounted sensors respectively based on the position coordinates, preset information of the vehicle-mounted sensors, and lane boundaries in the high-precision map;
a third determining unit 40, configured to determine, for each in-vehicle sensor, a reference sensing target that is within the effective sensing area corresponding to the in-vehicle sensor among sensing targets of the in-vehicle sensor;
and the fusion unit 50 is used for fusing the sensor information of the reference sensing target corresponding to the same physical object into the sensor information of the corresponding physical object.
Optionally, the second determining unit 30 is configured to determine, based on the position coordinates, preset information of each vehicle-mounted sensor, and a lane boundary in the high-precision map, an effective sensing area corresponding to each vehicle-mounted sensor, where the second determining unit includes:
Taking each vehicle-mounted sensor as a target sensor;
determining a target lane boundary and boundary parameters of the target lane boundary in a high-precision map according to the position coordinates and preset information of the target sensor;
determining a plurality of regional vertexes according to preset information and boundary parameters of the target sensor;
and taking the polygonal area corresponding to each area vertex as an effective sensing area of the target sensor.
Optionally, the preset information related to the detection deviation includes: boundary length, preset boundary width and transverse deviation function;
the boundary parameters include: the start point coordinates, the end point coordinates and the boundary included angles between the boundary of the target lane and the horizontal direction;
the second determining unit 30, configured to determine a plurality of area vertices according to preset information of the target sensor and boundary parameters, includes:
inputting preset information of the target sensor and boundary parameters of the target lane boundary into the following formula to obtain corresponding regional vertexes:
where L_BS-E represents the boundary length;
x_V represents the abscissa of the vehicle in the high-precision map;
x_BS represents the abscissa and y_BS the ordinate of the start point coordinates of the target lane boundary;
x_BE represents the abscissa and y_BE the ordinate of the end point coordinates of the target lane boundary;
E_RAD represents the lateral deviation function corresponding to the target sensor;
θ represents the boundary angle between the target lane boundary and the horizontal direction;
W_1 represents the lateral detection deviation of the target sensor at the boundary end point of the target lane boundary;
W_3 represents the lateral detection deviation of the target sensor at the boundary start point of the target lane boundary;
W_2 represents the preset boundary width;
x_A represents the abscissa and y_A the ordinate of the first region vertex;
x_B represents the abscissa and y_B the ordinate of the second region vertex;
x_C represents the abscissa and y_C the ordinate of the third region vertex;
x_D represents the abscissa and y_D the ordinate of the fourth region vertex.
Optionally, the sensing target of the vehicle-mounted sensor is represented by a coordinate point;
the third determining unit 40, configured to determine, among the sensing targets of the vehicle-mounted sensor, the reference sensing targets located in the effective sensing area corresponding to that vehicle-mounted sensor, includes:
taking coordinate points of all sensing targets of the vehicle-mounted sensor as target coordinate points respectively;
judging whether the target coordinate point is in an effective sensing area corresponding to the vehicle-mounted sensor;
And if the target coordinate point is in the effective sensing area, judging that the sensing target corresponding to the target coordinate point is a reference sensing target in the effective sensing area of the vehicle-mounted sensor.
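For the point-represented case, a standard ray-casting point-in-polygon test is sufficient; the sketch below is illustrative, with hypothetical function names, and assumes the effective sensing area is given as a list of (x, y) vertices such as the A-B-C-D polygon above.

def point_in_polygon(point, polygon):
    # Ray-casting test: True if point lies inside the polygon given as a
    # list of (x, y) vertices (e.g. the effective sensing area A-B-C-D).
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from the point cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def reference_targets_from_points(target_points, sensing_area):
    # Keep only the perceived targets whose coordinate point falls inside
    # the effective sensing area of the vehicle-mounted sensor.
    return [p for p in target_points if point_in_polygon(p, sensing_area)]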
Optionally, the sensing target of the vehicle-mounted sensor is represented by a directed bounding box;
the third determining unit 40, configured to determine, among the sensing targets of the vehicle-mounted sensor, a reference sensing target within the effective sensing area corresponding to the vehicle-mounted sensor, includes:
taking the directed bounding boxes corresponding to the sensing targets of the vehicle-mounted sensor as target directed bounding boxes respectively;
determining an overlapping area of the target directional bounding box and the effective sensing area;
calculating the overlapping rate of the overlapping area and the effective sensing area;
and if the overlapping rate is greater than or equal to a preset overlapping rate threshold value, judging that the sensing target corresponding to the target directional bounding box is a reference sensing target in an effective sensing area of the vehicle-mounted sensor.
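A compact way to realize this filter is to intersect the directed bounding box polygon with the effective sensing area polygon and compare the resulting area against a threshold. The sketch below uses the shapely library for the polygon intersection; the threshold value, the data layout and, in particular, the choice of taking the overlap rate relative to the bounding-box area are assumptions made for illustration.

from shapely.geometry import Polygon

def obb_reference_targets(obb_targets, sensing_area, overlap_threshold=0.5):
    # obb_targets: iterable of (target_id, [(x, y), ...]) corner lists of the
    # directed bounding boxes; sensing_area: [(x, y), ...] of the effective
    # sensing area. Keeps targets whose overlap rate reaches the threshold.
    area_poly = Polygon(sensing_area)
    kept = []
    for target_id, obb_corners in obb_targets:
        obb_poly = Polygon(obb_corners)
        overlap_area = obb_poly.intersection(area_poly).area
        # Assumed definition: overlap rate = overlap area / bounding-box area.
        if obb_poly.area > 0 and overlap_area / obb_poly.area >= overlap_threshold:
            kept.append(target_id)
    return kept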
Optionally, the third determining unit 40 is configured to determine an overlapping area of the target directional bounding box and the effective sensing area, and includes:
determining the boundary intersection points between the target directional bounding box and the effective sensing area, the target directional bounding box vertices and the target area vertices, wherein the target directional bounding box vertices are the vertices, among all the bounding box vertices of the target directional bounding box, that are located within the effective sensing area;
when the target area vertices are the area vertices of the effective sensing area that lie within the directional bounding box, taking the polygonal area whose vertices are the target directional bounding box vertices, the target area vertices and the boundary intersection points as the overlapping area of the target directional bounding box and the effective sensing area;
when all bounding box vertices of the target directed bounding box are within the effective sensing region, the region corresponding to the target directed bounding box is taken as an overlapping region.
Optionally, the third determining unit 40 is configured to take, as an overlapping area of the target directional bounding box and the effective sensing area, a polygon area with a target directional bounding box vertex, a target area vertex, and a boundary intersection point as vertices, and includes:
under the condition that a target directional bounding box vertex and two boundary intersection points are determined and no target area vertex exists, taking a triangular area surrounded by the boundary intersection points and the target directional bounding box vertex as an overlapping area;
when determining that one target directional bounding box vertex and two boundary intersection points exist and one target area vertex exists, taking a quadrilateral area formed by the boundary intersection points, the target directional bounding box vertex and the target area vertex as an overlapping area;
Under the condition that the vertexes of the two target directional bounding boxes and the intersection points of the two boundaries are determined and no vertexes of the target area exist, taking quadrilateral areas corresponding to the vertexes of the two target directional bounding boxes and the intersection points of the two boundaries as overlapping areas;
when determining that two target directional bounding box vertexes and two boundary intersection points exist and one target area vertex exists, taking a pentagon area formed by the two target directional bounding box vertexes, the two boundary intersection points and the target area vertex as an overlapping area;
and when three target directional bounding box vertices and two boundary intersection points are determined, taking the region of the directional bounding box outside the triangular region formed by the remaining bounding box vertex, namely the bounding box vertex that is not within the effective sensing area, and the two boundary intersection points as the overlapping region.
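The case analysis above enumerates exactly the polygons produced when a convex quadrilateral (the directed bounding box) is clipped against another convex polygon (the effective sensing area): the result's vertices are the bounding box vertices inside the area, the area vertices inside the box, and the boundary intersection points. The sketch below shows a generic Sutherland-Hodgman clipping routine as one way to obtain these overlap polygons; it assumes both polygons are convex and the clip polygon is given counter-clockwise, and the function names are illustrative.

def clip_polygon(subject, clip):
    # Clip convex polygon `subject` (directed bounding box) against convex
    # polygon `clip` (effective sensing area), both as lists of (x, y) vertices.
    def inside(p, a, b):
        # p lies on the left of directed edge a->b (clip polygon is CCW).
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def cross_point(p1, p2, a, b):
        # Intersection of segment p1-p2 with the infinite line through a-b.
        dx1, dy1 = p2[0] - p1[0], p2[1] - p1[1]
        dx2, dy2 = b[0] - a[0], b[1] - a[1]
        denom = dx1 * dy2 - dy1 * dx2
        t = ((a[0] - p1[0]) * dy2 - (a[1] - p1[1]) * dx2) / denom
        return (p1[0] + t * dx1, p1[1] + t * dy1)

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        if not output:
            break
        input_list, output = output, []
        for j in range(len(input_list)):
            cur, prev = input_list[j], input_list[j - 1]
            if inside(cur, a, b):
                if not inside(prev, a, b):
                    output.append(cross_point(prev, cur, a, b))
                output.append(cur)
            elif inside(prev, a, b):
                output.append(cross_point(prev, cur, a, b))
    return output  # overlap polygon: triangle, quadrilateral, pentagon, ...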
Optionally, the fusing unit 50 is configured to fuse sensor information of a reference sensing target corresponding to the same physical object into sensor information of a corresponding physical object, and includes:
determining effective perception targets influencing driving decisions in the reference perception targets of the vehicle-mounted sensors respectively;
sensor information influencing driving decisions is added to the sensor information of the effective sensing targets of each vehicle-mounted sensor respectively;
Taking the sensor information with the highest confidence corresponding to each effective perception target as target sensor information;
and fusing the target sensor information of the effective perceived target corresponding to the same real object into the sensor information of the corresponding real object.
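A minimal sketch of this selection-by-confidence step follows; the report layout (object identifier, sensor name, confidence, attribute dictionary) is an assumption made for illustration, and the association of reports to the same physical object is taken as already done.

from collections import defaultdict

def fuse_by_confidence(valid_target_reports):
    # Each report: (object_id, sensor_name, confidence, info_dict), where
    # object_id identifies the physical object the report was associated with.
    per_object = defaultdict(list)
    for object_id, sensor_name, confidence, info in valid_target_reports:
        per_object[object_id].append((confidence, sensor_name, info))

    fused = {}
    for object_id, reports in per_object.items():
        # Keep the report with the highest confidence as the fused information.
        confidence, sensor_name, info = max(reports, key=lambda r: r[0])
        fused[object_id] = {"source": sensor_name, "confidence": confidence, **info}
    return fused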
Optionally, the fusion unit 50 is configured to determine, among the reference perceived targets of the respective vehicle sensors, an effective perceived target affecting the driving decision, and includes:
taking each vehicle-mounted sensor as a target vehicle-mounted sensor;
determining the position relation between each reference sensing target of the target vehicle-mounted sensor and the lane boundary in the high-precision map, and a preset reference object corresponding to the target vehicle-mounted sensor;
and taking the reference sensing targets in the lane boundary, the reference sensing targets which are outside the lane boundary and belong to the moving targets and the reference sensing targets corresponding to the preset reference objects in the reference sensing targets of the target vehicle-mounted sensor as effective sensing targets for influencing driving decisions.
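The three selection rules can be expressed as a simple filter; the sketch below assumes three predicates (lane membership from map matching, motion state from the track, and a match against the sensor's preset reference objects) are available, and their names are hypothetical.

def select_effective_targets(reference_targets, inside_lane, is_moving,
                             matches_reference):
    # Keep a reference perceived target if it is inside the lane boundary,
    # or it is outside the lane boundary but is a moving target,
    # or it corresponds to a preset reference object of this sensor.
    effective = []
    for target in reference_targets:
        if inside_lane(target) or is_moving(target) or matches_reference(target):
            effective.append(target)
    return effective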
The embodiments in this specification are described in a progressive manner, each embodiment focusing on its differences from the other embodiments; for parts that are identical or similar between embodiments, reference may be made to one another. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and for relevant details reference may be made to the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A method for fusing sensor information, comprising:
acquiring vehicle positioning information, a high-precision map and reference information of each vehicle-mounted sensor, wherein the reference information comprises preset information related to detection deviation and perception targets corresponding to different objects;
determining the position coordinates of the vehicle in the high-precision map according to the vehicle positioning information;
based on the position coordinates, preset information of each vehicle-mounted sensor and lane boundaries in the high-precision map, respectively determining effective sensing areas adaptive to the performance of each vehicle-mounted sensor;
for each vehicle-mounted sensor, determining, among the sensing targets of the vehicle-mounted sensor, a reference sensing target within the effective sensing area corresponding to the vehicle-mounted sensor;
fusing sensor information of reference sensing targets corresponding to the same real object into sensor information of the corresponding real object;
the determining, based on the position coordinates, preset information of each vehicle-mounted sensor, and lane boundaries in the high-precision map, an effective sensing area adapted to performance of each vehicle-mounted sensor includes:
taking each vehicle-mounted sensor as a target sensor;
Determining a target lane boundary and boundary parameters of the target lane boundary in the high-precision map according to the position coordinates and preset information of the target sensor;
determining a plurality of regional vertexes according to preset information of the target sensor and the boundary parameters;
taking a polygonal area corresponding to each area vertex as an effective sensing area matched with the performance of the target sensor;
the preset information related to the detection deviation includes: boundary length, preset boundary width and transverse deviation function;
the boundary parameters include: and the start point coordinates, the end point coordinates and the boundary included angles between the boundary of the target lane and the horizontal direction.
2. The method of claim 1, wherein determining a plurality of area vertices according to the preset information of the target sensor and the boundary parameter comprises:
inputting preset information of the target sensor and boundary parameters of the target lane boundary into the following formula to obtain corresponding regional vertexes:
wherein L_BS-E represents the boundary length;
x_V represents the abscissa of the vehicle in the high-precision map;
x_BS and y_BS represent the abscissa and the ordinate of the start point coordinates of the target lane boundary;
x_BE and y_BE represent the abscissa and the ordinate of the end point coordinates of the target lane boundary;
E_RAD represents the lateral deviation function corresponding to the target sensor;
θ represents the boundary included angle between the target lane boundary and the horizontal direction;
W_1 represents the lateral detection deviation of the target sensor at the boundary end point of the target lane boundary;
W_3 represents the lateral detection deviation of the target sensor at the boundary start point of the target lane boundary;
W_2 represents the preset boundary width;
x_A and y_A represent the abscissa and the ordinate of the first area vertex;
x_B and y_B represent the abscissa and the ordinate of the second area vertex;
x_C and y_C represent the abscissa and the ordinate of the third area vertex;
x_D and y_D represent the abscissa and the ordinate of the fourth area vertex.
3. The sensor information fusion method according to claim 1, wherein the sensing target of the in-vehicle sensor is represented by a coordinate point;
the determining the reference sensing target in the corresponding effective sensing area in the sensing target of the vehicle-mounted sensor comprises the following steps:
Taking coordinate points of all perception targets of the vehicle-mounted sensor as target coordinate points respectively;
judging whether the target coordinate point is in an effective sensing area corresponding to the vehicle-mounted sensor;
and if the target coordinate point is in the effective sensing area, judging that the sensing target corresponding to the target coordinate point is a reference sensing target in the effective sensing area of the vehicle-mounted sensor.
4. The sensor information fusion method according to claim 1, wherein the sensing target of the in-vehicle sensor is represented by a directed bounding box;
the determining the reference sensing target in the corresponding effective sensing area in the sensing target of the vehicle-mounted sensor comprises the following steps:
taking the directed bounding boxes corresponding to the sensing targets of the vehicle-mounted sensor as target directed bounding boxes respectively;
determining an overlapping area of the target directional bounding box and the effective perception area;
calculating the overlapping rate of the overlapping area and the effective sensing area;
and if the overlapping rate is greater than or equal to a preset overlapping rate threshold value, judging that the sensing target corresponding to the target directional bounding box is a reference sensing target in an effective sensing area of the vehicle-mounted sensor.
5. The method of claim 4, wherein determining the overlapping region of the target directional bounding box and the active sensing region comprises:
determining boundary intersection points of the target directional bounding box and the effective perception area, target directional bounding box vertexes and target area vertexes, wherein the target directional bounding box vertexes are vertexes which are located in the effective perception area in all bounding box vertexes of the target directional bounding box;
when the target region vertex is a vertex within the directional bounding box among the region vertices of the effective sensing region, a polygon region having the target directional bounding box vertex, the target region vertex, and the boundary intersection point as vertices is taken as an overlapping region of the target directional bounding box and the effective sensing region;
and taking the area corresponding to the target directional bounding box as an overlapping area when all bounding box vertexes of the target directional bounding box are in the effective perception area.
6. The method according to claim 5, wherein the step of using a polygonal area having the target directional bounding box vertices, the target area vertices, and the boundary intersections as vertices as an overlapping area of the target directional bounding box and the effective sensing area comprises:
when one target directional bounding box vertex and two boundary intersection points are determined and no target area vertex exists, taking the triangular area enclosed by the boundary intersection points and the target directional bounding box vertex as the overlapping area;
when determining that one target directional bounding box vertex and two boundary intersection points exist and one target area vertex exists, taking a quadrilateral area formed by the boundary intersection points, the target directional bounding box vertex and the target area vertex as an overlapping area;
under the condition that the two target directional bounding box vertexes and the two boundary intersection points are determined and the target area vertexes are not present, taking quadrilateral areas corresponding to the two target directional bounding box vertexes and the two boundary intersection points as overlapping areas;
when determining that two target directional bounding box vertexes and two boundary intersection points exist and one target area vertex exists, taking a pentagon area formed by the two target directional bounding box vertexes, the two boundary intersection points and the target area vertex as an overlapping area;
and when three target directional bounding box vertices and two boundary intersection points are determined, taking the region of the directional bounding box outside the triangular region formed by the remaining bounding box vertex, namely the bounding box vertex that is not within the effective sensing region, and the two boundary intersection points as the overlapping region.
7. The method for fusing sensor information according to any one of claims 1 to 6, wherein fusing sensor information of reference perceived objects corresponding to the same physical object into sensor information of corresponding physical objects comprises:
determining effective perception targets influencing driving decisions in the reference perception targets of the vehicle-mounted sensors respectively;
respectively adding sensor information influencing driving decision into the sensor information of the effective perception target of each vehicle-mounted sensor;
taking the sensor information with the highest confidence coefficient corresponding to each effective perception target as target sensor information;
and fusing the target sensor information of the effective perceived target corresponding to the same real object into the sensor information of the corresponding real object.
8. The sensor information fusion method according to claim 7, wherein the determining the effective perceived target that affects the driving decision among the reference perceived targets of the respective vehicle-mounted sensors, respectively, includes:
taking each vehicle-mounted sensor as a target vehicle-mounted sensor;
determining the position relation between each reference sensing target of the target vehicle-mounted sensor and a lane boundary in the high-precision map, and a preset reference object corresponding to the target vehicle-mounted sensor;
And taking the reference sensing targets in the lane boundary, the reference sensing targets which are outside the lane boundary and belong to moving targets and the reference sensing targets corresponding to the preset reference objects as effective sensing targets for influencing driving decisions.
9. A sensor information fusion device, comprising:
the acquisition unit is used for acquiring vehicle positioning information, a high-precision map and reference information of each vehicle-mounted sensor;
the reference information comprises preset information related to detection deviation and perception targets corresponding to different objects;
a first determining unit configured to determine a position coordinate of a vehicle in the high-precision map according to the vehicle positioning information;
a second determining unit, configured to determine an effective sensing area adapted to performance of each vehicle-mounted sensor based on the position coordinates, preset information of each vehicle-mounted sensor, and a lane boundary in the high-precision map, respectively;
a third determining unit, configured to determine, for each vehicle-mounted sensor, among the sensing targets of the vehicle-mounted sensor, a reference sensing target located within the effective sensing area corresponding to the vehicle-mounted sensor;
The fusion unit is used for fusing the sensor information of the reference sensing target corresponding to the same real object into the sensor information of the corresponding real object;
the second determining unit is specifically configured to:
taking each vehicle-mounted sensor as a target sensor;
determining a target lane boundary and boundary parameters of the target lane boundary in the high-precision map according to the position coordinates and preset information of the target sensor;
determining a plurality of regional vertexes according to preset information of the target sensor and the boundary parameters;
taking a polygonal area corresponding to each area vertex as an effective sensing area matched with the performance of the target sensor;
the preset information related to the detection deviation includes: boundary length, preset boundary width and transverse deviation function;
the boundary parameters include: and the start point coordinates, the end point coordinates and the boundary included angles between the boundary of the target lane and the horizontal direction.
CN202111416302.3A 2021-11-25 2021-11-25 Sensor information fusion method and device Active CN114136328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111416302.3A CN114136328B (en) 2021-11-25 2021-11-25 Sensor information fusion method and device

Publications (2)

Publication Number Publication Date
CN114136328A CN114136328A (en) 2022-03-04
CN114136328B true CN114136328B (en) 2024-03-12

Family

ID=80387728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111416302.3A Active CN114136328B (en) 2021-11-25 2021-11-25 Sensor information fusion method and device

Country Status (1)

Country Link
CN (1) CN114136328B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013226887A1 (en) * 2013-12-20 2015-06-25 Gerd Reime Inductive sensor arrangement and method for detecting the position of at least one target
CN206734295U (en) * 2016-12-21 2017-12-12 驭势科技(北京)有限公司 A kind of detection system for being used to detect Vehicle target and its application
DE102017101530A1 (en) * 2017-01-26 2018-07-26 Valeo Schalter Und Sensoren Gmbh A lane-boundary detecting apparatus for a motor vehicle having a plurality of scanning areas arranged in a row
CN111177869A (en) * 2020-01-02 2020-05-19 北京百度网讯科技有限公司 Method, device and equipment for determining sensor layout scheme
GB202016383D0 (en) * 2020-10-15 2020-12-02 Continental Automotive Romania Srl Method of updating the existance probability of a track in fusion based on sensor perceived areas
CN112208529A (en) * 2019-07-09 2021-01-12 长城汽车股份有限公司 Perception system for object detection, driving assistance method, and unmanned device
CN113494927A (en) * 2020-03-20 2021-10-12 郑州宇通客车股份有限公司 Vehicle multi-sensor calibration method and device and vehicle
CN113625234A (en) * 2020-05-06 2021-11-09 上海海拉电子有限公司 Installation angle correction method of vehicle radar and vehicle radar
CN113671973A (en) * 2021-07-09 2021-11-19 东北大学 Target area searching method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016220075A1 (en) * 2016-10-14 2018-04-19 Audi Ag Motor vehicle and method for 360 ° field detection

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Continual Monitoring of Precision of Aerial Transport Objects; P. Kurdel et al.; 2018 XIII International Scientific Conference - New Trends in Aviation Development (NTAD); full text *
Probabilistic Smoothing Based Generation of a Reliable Lane Geometry Map With Uncertainty of a Lane Detector; Seokwon Kim et al.; IEEE Access; 2020-09-10; full text *
An improved analytical target tracking algorithm for binary detection sensor networks; Zhou Weiping, Wang Zhongyuan, Luo Hao; Journal of Naval University of Engineering; 2011-10-15 (No. 5); full text *
A vision-based lateral deviation measurement method for intelligent vehicles; Li Xu, Zhang Weigong; Journal of Southeast University (Natural Science Edition) (No. 1); full text *
Research on the sensor configuration scheme of autonomous-rail rapid trams; Liu Fei, Chen Baifan, Hu Yunqing, Pan Wenbo, Long Teng, Yuan Dian; Control and Information Technology (No. 4); full text *


Similar Documents

Publication Publication Date Title
RU2638879C2 (en) Automatic control system for vehicle
CN109870680B (en) Target classification method and device
US20200049511A1 (en) Sensor fusion
CN111653113B (en) Method, device, terminal and storage medium for determining local path of vehicle
CN111797734A (en) Vehicle point cloud data processing method, device, equipment and storage medium
US11042759B2 (en) Roadside object recognition apparatus
CN110040135A (en) Controller of vehicle and control method for vehicle
GB2558752A (en) Vehicle vision
US11042160B2 (en) Autonomous driving trajectory determination device
EP3859273A1 (en) Method for constructing driving coordinate system, and application thereof
JPWO2018066133A1 (en) Vehicle determination method, travel route correction method, vehicle determination device, and travel route correction device
US20220035036A1 (en) Method and apparatus for positioning movable device, and movable device
CN112781599A (en) Method for determining the position of a vehicle
CN112036274A (en) Driving region detection method and device, electronic equipment and storage medium
WO2022078342A1 (en) Dynamic occupancy grid estimation method and apparatus
US20200062252A1 (en) Method and apparatus for diagonal lane detection
US11087147B2 (en) Vehicle lane mapping
CN110427034B (en) Target tracking system and method based on vehicle-road cooperation
CN114136328B (en) Sensor information fusion method and device
CN111781606A (en) Novel miniaturization implementation method for fusion of laser radar and ultrasonic radar
Eraqi et al. Static free space detection with laser scanner using occupancy grid maps
CN113155143A (en) Method, device and vehicle for evaluating a map for automatic driving
JP2018185156A (en) Target position estimation method and target position estimation device
JP2022007526A (en) Surrounding vehicle discrimination system, and surrounding vehicle discrimination program
WO2020021596A1 (en) Vehicle position estimation device and vehicle position estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant