CN116935640A - Road side sensing method, device, equipment and medium based on multiple sensors - Google Patents

Road side sensing method, device, equipment and medium based on multiple sensors

Info

Publication number
CN116935640A
Authority
CN
China
Prior art keywords
data
target
fusion
association
unassociated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310899881.4A
Other languages
Chinese (zh)
Inventor
窦殿松
王孝润
岳莹莹
郑一辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Nebula Internet Technology Co ltd
Original Assignee
Beijing Nebula Internet Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Nebula Internet Technology Co ltd filed Critical Beijing Nebula Internet Technology Co ltd
Priority to CN202310899881.4A priority Critical patent/CN116935640A/en
Publication of CN116935640A publication Critical patent/CN116935640A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108 - Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0116 - Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/042 - Detecting movement of traffic to be counted or controlled using inductive or magnetic detectors

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The embodiment of the invention provides a road side sensing method, device, equipment and medium based on multiple sensors. The method comprises the following steps: obtaining to-be-fused sensing data of a sensing area; determining a data association result of the to-be-fused sensing data and a pre-created fusion target list according to the to-be-fused sensing data and the fusion target list; updating the fusion target list according to the data association result; and determining fusion target information of the sensing area according to the associated sensing data corresponding to each fusion target in the fusion target list. With this method, only visual sensing data are captured in the target creation area, both visual sensing data and radar sensing data are captured in the data fusion area, and only radar sensing data are captured in the other areas. Data fusion is therefore performed only in the data fusion area, where both camera and radar detection are highly accurate, which improves the target fusion effect. Moreover, the furthest detection distance is no longer limited by the detection distance of the camera, so the range of target fusion is enlarged.

Description

Road side sensing method, device, equipment and medium based on multiple sensors
Technical Field
The present invention relates to the field of road side sensing technologies, and in particular, to a road side sensing method, device, equipment, and medium based on multiple sensors.
Background
Road side perception uses sensors such as cameras, millimeter wave radars and laser radars, combined with road side edge computing, with the ultimate goal of real-time intelligent perception of traffic participants, road conditions and other information on a road section. Detection, positioning and tracking of targets are realized through the fusion of the camera and the millimeter wave radar. Common millimeter wave radar and camera fusion is mainly camera-based: the targets detected by the radar and their attributes are projected onto the whole image for association and fusion.
However, in the prior art, the sensing data captured by the camera is associated with the radar sensing data over the whole detection range. Because the radar easily generates multiple clustering points for a large target, for example detecting a large truck as several targets, false detections and missed detections occur, the association between the radar and the camera becomes abnormal, and the fusion effect is poor (for example, the position, speed and heading angle of the fused target are updated incorrectly). In addition, since the detection distance of the camera is limited, the furthest detection distance of the entire detection range is limited by the camera.
Disclosure of Invention
The embodiment of the invention provides a road side sensing method, device, equipment and medium based on multiple sensors, which improve the target fusion effect and enlarge the target fusion range.
In a first aspect, an embodiment of the present invention provides a multi-sensor-based road side sensing method, including:
obtaining to-be-fused sensing data of a sensing area;
determining a data association result of the to-be-fused perceived data and the fusion target list according to the to-be-fused perceived data and a pre-created fusion target list;
updating the fusion target list according to the data association result;
determining fusion target information of the sensing area according to associated sensing data corresponding to each fusion target in the fusion target list;
the sensing area is sequentially divided into a target creating area, a data fusion area and other areas according to the target moving direction, the target creating area corresponds to visual sensing data, the data fusion area corresponds to visual sensing data and radar sensing data, and the other areas correspond to radar sensing data.
In a second aspect, an embodiment of the present invention provides a multi-sensor-based roadside sensing device, including:
The data acquisition module is used for acquiring the to-be-fused sensing data of the sensing area;
the target association module is used for determining a data association result of the to-be-fused perceived data and the fusion target list according to the to-be-fused perceived data and the pre-created fusion target list;
the data fusion module is used for updating the fusion target list according to the data association result;
the information determining module is used for determining fusion target information of the sensing area according to the associated sensing data corresponding to each fusion target in the fusion target list;
the sensing area is sequentially divided into a target creating area, a data fusion area and other areas according to the target moving direction, the target creating area corresponds to visual sensing data, the data fusion area corresponds to visual sensing data and radar sensing data, and the other areas correspond to radar sensing data.
In a third aspect, an embodiment of the present invention further provides an edge computing device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the multi-sensor based road side perception method as described in the first aspect embodiment.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer executable instructions, which when executed by a computer processor, are for performing the multi-sensor based road side perception method as described in the embodiments of the first aspect.
The embodiment of the invention provides a road side sensing method, a device, equipment and a medium based on multiple sensors, wherein the method comprises the following steps: firstly, obtaining to-be-fused sensing data of a sensing area; secondly, determining a data association result of the to-be-fused sensing data and the fusion target list according to the to-be-fused sensing data and a pre-created fusion target list; then updating the fusion target list according to the data association result; finally, according to the associated perception data corresponding to each fusion target in the fusion target list, determining fusion target information of the perception region; the sensing area is sequentially divided into a target creating area, a data fusion area and other areas according to the target moving direction, the target creating area corresponds to visual sensing data, the data fusion area corresponds to visual sensing data and radar sensing data, and the other areas correspond to radar sensing data. According to the technical scheme, the sensing area is divided into three areas according to the detection precision conditions of different sensors, wherein only visual sensing data are captured in a target creation area, the visual sensing data and radar sensing data are captured in a data fusion area, and only radar sensing data are captured in other areas; the method is equivalent to the situation that the fusion of the visual perception data and the radar perception data is started only in a data fusion area with higher detection precision of the camera and the radar, so that the condition that the association of the radar and the camera is abnormal is avoided, the accuracy of target fusion information is ensured, and the target fusion effect is improved. And the radar sensing areas are set in other areas, so that the furthest detection distance of the sensing areas is not limited by the detection distance of the camera any more due to the fact that the radar detection distance is far, the range of target fusion is enlarged, and the target fusion effect is further improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a multi-sensor-based road side sensing method according to a first embodiment of the present invention;
Fig. 2 is an exemplary diagram of a sensing area in a multi-sensor-based road side sensing method according to an embodiment of the present invention;
fig. 3 is a flow chart of another road side sensing method based on multiple sensors according to the second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a road side sensing device based on multiple sensors according to a third embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an edge computing device according to a fourth embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "original," "target," and the like in the description and claims of the present invention and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a schematic flow chart of a multi-sensor-based road side sensing method, which is applicable to a situation of road side sensing fusion based on a camera and a millimeter wave radar, and the method can be executed by a multi-sensor-based road side sensing device, which can be implemented in a hardware and/or software form and can be configured in an edge computing device.
In the prior art, the visual perception data captured by the camera and the radar perception data captured by the radar are associated over the whole sensing area. Because the radar easily generates multiple clustering points for a large target, for example generating three clustering points for one large truck and thus detecting three targets, two of which are false targets, false detections and missed detections occur. When these false targets are projected onto the image captured by the camera, the association between the radar and the camera becomes abnormal and the fusion effect is poor; for example, the position, speed and heading angle of the fused target are updated incorrectly.
In order to avoid the above-mentioned problems, in this embodiment, the sensing area is divided into three areas, which are sequentially divided into a target creation area, a data fusion area, and other areas according to the moving direction of the target. The target creation area only obtains visual perception data by a camera, radar perception data are obtained by a radar in the data fusion area, and radar perception data are obtained by the radar in other areas. The arrangement fully considers the detection precision characteristics of the camera and the radar, wherein the camera can well identify the target and the static attribute information of the target in a relatively close area, such as the information of the vehicle type, the color, the license plate and the like. The radar has a blind area in a nearer area, and the radar can have higher precision in a farther range than the camera, and the radar can well identify the target and the dynamic attribute information of the target, such as the speed, the position, the course angle and the like of the vehicle in the area outside the blind area. It can be understood that for a closer target creation area, the camera has higher detection accuracy while the radar is in a blind area, so that only visual perception data captured by the camera is acquired; for a data fusion area with higher detection precision of the camera and the radar, acquiring visual perception data captured by the camera and radar perception data captured by the radar; and acquiring radar sensing data captured by the radar for other far areas with higher radar detection precision and limited camera detection precision.
In this embodiment, the hardware required to implement the multi-sensor-based road side sensing method comprises three devices: a millimeter wave radar, a camera and an edge computing device. The millimeter wave radar and the camera are installed facing the tails of vehicles driving away from them, so the fusion target track has a unidirectional characteristic, namely the target drives from the target creation area into the data fusion area and finally into the other areas. In this embodiment, the data captured by the radar is referred to as radar sensing data, and the data captured by the camera is referred to as visual sensing data. It should be noted that this embodiment is not limited to cameras, which may be replaced by other vision acquisition sensors, nor to millimeter wave radar, which may be replaced by laser radar.
It will also be appreciated that some static attributes of the target itself, such as vehicle type, vehicle color and license plate, are identified by the camera while the target is in the target creation area. These attributes do not change afterwards, and the target carries them all the way through the data fusion area and the other areas. The radar provides relatively accurate position and speed measurements, and these attributes are subsequently maintained by the radar.
The sensing area is divided into three areas, namely a target creation area, a data fusion area and other areas. Target creation area: the camera detects targets in this area to create fusion targets; this gives a reliable type for targets that the radar detects poorly, such as pedestrians and non-motor vehicles, and large vehicles are not split into false targets. The radar generally cannot detect targets in this area and does not create targets. Data fusion area: the calibration points in this area are dense and the range is small, which increases calibration precision and simplicity and greatly improves association reliability, so targets in the data fusion area can be jointly maintained by the radar and the camera. Other areas: targets that drive out of the data fusion area are continuously maintained by the radar, carrying the attributes given by the camera such as type, license plate and vehicle color; at this point the fusion targets have reliable speed, position and heading angle attributes, and the furthest detection distance is determined by the radar, so the perception range is greatly increased.
The division sizes of the three areas in the sensing area can be determined according to the actual situation. For example, on an urban expressway the target speed is relatively low and targets can be detected within 0-30 m in the target creation area; the data fusion area can then be made smaller with denser calibration points, because at low speed the precision loss caused by the transmission delay of the radar data is small. In a highway scene the target speed is higher and the error caused by the delay is larger; if the data fusion area were too small, a target might leave the data fusion area before its data has been processed, so the data fusion area needs to be made larger to guarantee data fusion.
Fig. 2 is an exemplary diagram of a sensing area in a multi-sensor-based road side sensing method according to an embodiment of the present invention. As shown in fig. 2, in order to recognize targets in the sensing area 1, the radar and the camera are mounted on a pole, and the sensing area is divided into a target creation area 11, a data fusion area 12 and other areas 13. The target creation area 11 captures sensing data only by the camera, the data fusion area 12 captures sensing data by the camera and the radar, and the other areas 13 capture sensing data by the radar.
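As an illustration only, the following sketch classifies a detected point into one of the three areas by its longitudinal distance from the mounting pole. The boundary distances (30 m and 120 m) and all names are assumptions for the example, not values defined by this application.

```python
# Minimal sketch of the region division described above.
# The boundary distances are illustrative assumptions, not values from this application.
from enum import Enum

class Region(Enum):
    TARGET_CREATION = "target creation area"   # camera only
    DATA_FUSION = "data fusion area"           # camera + radar
    OTHER = "other areas"                      # radar only

CREATION_END_M = 30.0    # assumed end of the target creation area
FUSION_END_M = 120.0     # assumed end of the data fusion area

def classify_region(longitudinal_distance_m: float) -> Region:
    """Map a target's distance along the road (from the sensor pole) to a region."""
    if longitudinal_distance_m < CREATION_END_M:
        return Region.TARGET_CREATION
    if longitudinal_distance_m < FUSION_END_M:
        return Region.DATA_FUSION
    return Region.OTHER

print(classify_region(15.0))   # Region.TARGET_CREATION
print(classify_region(80.0))   # Region.DATA_FUSION
print(classify_region(200.0))  # Region.OTHER
```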
As shown in fig. 1, the multi-sensor-based road side sensing method provided in the first embodiment may specifically include the following steps:
s110, obtaining the to-be-fused sensing data of the sensing area.
In this embodiment, the sensing area is sequentially divided into a target creating area, a data fusion area and other areas according to the target moving direction, the target creating area corresponds to visual sensing data, the data fusion area corresponds to visual sensing data and radar sensing data, and the other areas correspond to radar sensing data. The detected target in the present embodiment is not particularly limited, and may be, for example, a vehicle, a pedestrian, or the like.
The principle of capturing radar sensing data by the radar is as follows: the radar first performs electromagnetic wave signal processing and gives original point cloud information such as the speed and position of the detected target based on the Doppler shift; radar detection target structured data is then output through clustering and tracking algorithms and is recorded as radar sensing data. This computation may be performed inside the radar itself.
The principle of capturing visual sensing data by the camera is as follows: the camera process pulls the camera stream based on GStreamer (an open source multimedia framework for building streaming media applications that handles multimedia data in multiple formats), passes the image data to a target detection algorithm, for example the single-stage target detection algorithm YOLOv5, outputs the detection result, and uses a target tracking algorithm, for example the ByteTrack algorithm, to realize real-time tracking; camera detection target structured data is then output and recorded as visual sensing data. This computation may be performed in the edge computing device.
In this embodiment, the to-be-fused sensing data output by the radar and the camera are different. The radar sensing data captured by the radar contains an identification number (ID), a position and a speed. The visual sensing data determined from the image captured by the camera contains information such as the target type, vehicle color and license plate; the position of the target on the image can be detected, but neither the distance of the target from the camera nor its speed is output. The data captured by the radar and the camera subsequently need to be combined to jointly maintain the attributes of the detected targets.
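For concreteness, the sketch below shows one possible structure for the two kinds of to-be-fused sensing data. The field names are assumptions chosen to mirror the attributes listed above; they are not a format defined by this application.

```python
# Hypothetical structured data for the two sensor sources described above.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class RadarDetection:
    track_id: int                      # identification number from the radar's own tracker
    x_m: float                         # position in radar coordinates (metres)
    y_m: float
    speed_mps: float                   # measured speed
    heading_deg: Optional[float] = None

@dataclass
class CameraDetection:
    track_id: int                      # identification number from the camera tracker (e.g. ByteTrack)
    bbox_uv: Tuple[float, float, float, float]  # pixel-coordinate bounding box (u1, v1, u2, v2)
    target_type: str                   # e.g. "car", "truck", "pedestrian"
    color: Optional[str] = None
    license_plate: Optional[str] = None
    # Note: no reliable distance or speed is output by the camera alone.
```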
The radar and the camera capture the to-be-fused sensing data and transmit it to the edge computing device for data fusion, so as to maintain the attributes of the detected targets. It should be noted that, before data fusion, the edge computing device may also determine whether the received to-be-fused sensing data is null data; null data may be discarded, and newly captured to-be-fused sensing data continues to be received. In addition, the to-be-fused data may be stored so that offline debugging can be performed when a problem is found later.
S120, determining a data association result of the to-be-fused sensing data and the fusion target list according to the to-be-fused sensing data and the pre-created fusion target list.
In this embodiment, after the to-be-fused sensing data is obtained, the attribute of the target is commonly maintained based on the to-be-fused sensing data. In this embodiment, the attribute maintenance of the target is characterized in the form of a tracking list, and is recorded as a fusion target list. The fusion target list contains the fusion targets and the associated attributes of the fusion targets. It will be appreciated that the fusion target list is empty when it is initially created. Along with the continuous transmission of the perception data to be fused, the fusion targets and the associated attributes in the fusion target list are continuously updated.
The to-be-fused sensing data comprises visual sensing data and radar sensing data, and the visual sensing data or the radar sensing data needs to be maintained in the fusion target list. In this embodiment, different data association strategies are adopted in different areas to perform data fusion. Fusion is mainly divided into association and updating, that is, different association strategies and update strategies are implemented in different areas; this step mainly concerns how the association is carried out.
In this embodiment, it needs to be determined whether the fusion target list is empty. If the fusion target list is empty, the to-be-fused sensing data is unassociated sensing data, which is taken as the data association result between the to-be-fused sensing data and the fusion target list. If the fusion target list is non-empty, the data association result of the to-be-fused sensing data and the fusion target list is determined according to a preset data association strategy. The preset data association strategy comprises a first data association strategy, a second data association strategy and a third data association strategy. The first data association strategy refers to association by identification number. An identification number (ID) association is performed first; this step depends on the tracking of the original sensor. If a fusion target was already associated with the to-be-fused sensing data (a radar or camera target) before fusion, and the heading angle difference, position difference and speed difference between the fusion target and the measurement are within certain threshold ranges, they are associated again during fusion.
After the first association step, three association results are produced: successfully associated target association pairs, unassociated to-be-fused sensing data, and unassociated fusion targets; these are recorded as the first association result. If a newly arrived sensing target was not successfully associated in the previous step, that is, the ID association did not take effect, the second association step is performed. The second association step refers to Intersection over Union (IOU) association. For example, if radar sensing data was transmitted earlier, its radar coordinates xy are converted into pixel coordinates UV; the maintained fusion target list also contains UV information, so the coordinate information of the newly transmitted radar sensing data is associated with the pixel coordinate information of the maintained fusion targets, namely the IOU association of the fusion area. Each target has a rectangular box on the image, and projecting the pixel coordinates of the newly arrived radar sensing data also produces a rectangle. The ratio of the overlapping area of the two rectangular boxes to the area of their union is taken as the intersection over union, and data fusion in the data fusion area is carried out according to this ratio, as sketched below.
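A minimal sketch of the intersection-over-union computation used in the second association step is given below; the (u1, v1, u2, v2) pixel-box format is an assumption.

```python
# Hypothetical IOU between two axis-aligned pixel-coordinate boxes (u1, v1, u2, v2).
def iou(box_a, box_b) -> float:
    u1 = max(box_a[0], box_b[0])
    v1 = max(box_a[1], box_b[1])
    u2 = min(box_a[2], box_b[2])
    v2 = min(box_a[3], box_b[3])
    inter = max(0.0, u2 - u1) * max(0.0, v2 - v1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((100, 100, 200, 200), (150, 120, 260, 220)))  # overlap ratio in [0, 1]
```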
As described above, in the other areas the camera cannot detect targets or there is no visual sensing data, so the third data association strategy of the third step is needed, namely distance association. Distance association means that when the sensing target and the fusion target are located in the other areas there is a straight-line distance between them; simply put, the sensing target closest to a fusion target is considered to be the same target, as sketched below.
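The distance association of the third strategy can be sketched as a gated nearest-neighbour search; the 5-metre gate and the data layout are illustrative assumptions.

```python
# Hypothetical distance association for the "other areas": nearest neighbour with a gate.
import math

def distance_associate(radar_targets, fused_targets, gate_m: float = 5.0):
    """radar_targets/fused_targets: dicts id -> (x, y). Returns list of (radar_id, fused_id)."""
    pairs = []
    used = set()
    for rid, (rx, ry) in radar_targets.items():
        best, best_d = None, gate_m
        for fid, (fx, fy) in fused_targets.items():
            if fid in used:
                continue
            d = math.hypot(rx - fx, ry - fy)
            if d < best_d:
                best, best_d = fid, d
        if best is not None:
            used.add(best)
            pairs.append((rid, best))
    return pairs

print(distance_associate({7: (100.0, 3.5)}, {1: (101.2, 3.4), 2: (140.0, 0.0)}))  # [(7, 1)]
```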
Three association results are produced after data association is carried out: successfully associated target association pairs, unassociated to-be-fused sensing data, and remaining unassociated fusion targets; these results are recorded as the data association result. For example, assuming 10 items of radar sensing data arrive and there are 10 fusion targets in the fusion target list, of which 4 fusion targets can be associated with 4 items of radar sensing data, then the data association result consists of 6 items of unassociated sensing data, 4 target association pairs and 6 unassociated fusion targets.
S130, updating the fusion target list according to the data association result.
In this embodiment, the association pairs are updated after association. For example, if fusion target 1 is associated with sensing target B of the radar sensing data, the information of sensing target B, such as its position, speed and heading angle, is copied to fusion target 1 after the association.
The data association result may include a target association pair, unassociated perception data and unassociated fusion targets. The target association pair can be understood as that the fusion data to be perceived and the fusion target are successfully associated. Unassociated sensory data may be understood as some sensory data outside of the fusion. Unassociated fusion targets refer to fusion targets in the fusion target list other than successfully associated. And updating the target attribute of the fusion target association in the fusion target list according to the target association pair in the data association result. The method can be regarded as updating the association pair, updating the target attribute corresponding to the associated perception data to be fused to the fusion target, and maintaining the fusion target to exist continuously. In this embodiment, after determining the target association pair, the attribute of the target association pair needs to be updated, that is, the attribute of the fusion target corresponding to the to-be-fused perceived data is determined according to the to-be-fused perceived data corresponding to the target association pair.
For unassociated to-be-fused sensing data, a fusion target is created when certain conditions are met, and the newly created fusion target is placed in the maintained fusion target list. For an unassociated fusion target, the operation to be done is to determine whether it has expired, that is, whether this fusion target has truly disappeared from the monitored sensing area; if it has disappeared, it needs to be removed. If the disappearance condition is not satisfied, the fusion target remains in the fusion target list and continues to be maintained, and its information is used again at the next association.
During fusion, it is first identified whether the data is visual sensing data or radar sensing data. If it is visual sensing data, the visual sensing data is subjected to coordinate system conversion, and the image UV coordinates are converted into radar coordinates XY; this is possible because the pixel coordinates have been calibrated against the radar xy coordinates. After calibration, the visual sensing data and the radar sensing data have a mapping relationship; after conversion, the fusion list is maintained and the visual sensing data is fused with the fusion target list. Similarly, if the data is radar sensing data, the radar sensing data is subjected to coordinate system conversion, which is not described in detail here.
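One common way to realize the UV-to-XY conversion described above is a ground-plane homography fitted from calibrated point pairs. This is an assumed implementation choice for illustration; the application only states that the pixel coordinates are calibrated against the radar coordinates.

```python
# Hypothetical UV -> XY conversion using a ground-plane homography fitted from
# calibrated point pairs (pixel coordinates vs. radar-plane coordinates).
import numpy as np
import cv2

# Assumed calibration pairs: pixel (u, v) and radar-plane (x, y) for the same road points.
uv_points = np.array([[320, 700], [960, 690], [400, 420], [900, 415]], dtype=np.float32)
xy_points = np.array([[-3.5, 10], [3.5, 10], [-3.5, 60], [3.5, 60]], dtype=np.float32)

H, _ = cv2.findHomography(uv_points, xy_points)

def uv_to_xy(u: float, v: float):
    """Project an image point onto the radar ground plane using the homography."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

print(uv_to_xy(640, 550))  # approximate (x, y) on the road plane
```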
And S140, determining fusion target information of the sensing area according to the associated sensing data corresponding to each fusion target in the fusion target list.
Because the road side perception result serves road side V2X devices and V2X uses longitude and latitude information, in this embodiment, after the fusion target list is updated, calculation is performed according to the required coordinate system: the associated sensing data corresponding to each fusion target in the fusion target list is converted into the set coordinate system, and information such as the longitude and latitude, ID, heading angle, speed, target type, license plate and vehicle color corresponding to each fusion target is output as the fusion target information of the sensing area.
It should be noted that, before the fusion target information is determined from the fusion target list, the associated sensing data corresponding to each fusion target may also be subjected to Kalman filtering. The sensor itself may jitter at the position of a detected target, and when mapped to a debugging interface or other display interface the position may appear to sway back and forth. For example, a real vehicle driving in a straight line may be output as a jagged track on the image or interface, so track smoothing by Kalman filtering is required.
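A minimal constant-velocity Kalman filter for the track smoothing described above is sketched below; the state layout and noise values are illustrative assumptions.

```python
# Minimal constant-velocity Kalman filter sketch for smoothing fused track positions.
# State: [x, y, vx, vy]; measurement: [x, y]. Noise values are illustrative assumptions.
import numpy as np

class TrackSmoother:
    def __init__(self, x0, y0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.R = np.eye(2) * 0.5      # measurement noise (position jitter)
        self.Q = np.eye(4) * 0.01     # process noise

    def step(self, zx, zy, dt):
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        # Predict
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        # Update with the new (jittery) position measurement
        z = np.array([zx, zy])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[0], self.x[1]   # smoothed position

smoother = TrackSmoother(0.0, 0.0)
print(smoother.step(1.2, 0.1, dt=0.1))
```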
The embodiment of the invention provides a road side sensing method based on multiple sensors, which comprises the following steps: firstly, obtaining to-be-fused sensing data of a sensing area; secondly, determining a data association result of the to-be-fused perceived data and the fusion target list according to the to-be-fused perceived data and the pre-created fusion target list; then updating the fusion target list according to the data association result; finally, according to the associated perception data corresponding to each fusion target in the fusion target list, determining fusion target information of the perception area; the sensing area is sequentially divided into a target creating area, a data fusion area and other areas according to the target moving direction, the target creating area corresponds to visual sensing data, the data fusion area corresponds to visual sensing data and radar sensing data, and the other areas correspond to radar sensing data. According to the technical scheme, the sensing area is divided into three areas according to the detection precision conditions of different sensors, wherein only visual sensing data are captured in a target creation area, the visual sensing data and radar sensing data are captured in a data fusion area, and only radar sensing data are captured in other areas; the method is equivalent to the situation that the fusion of the visual perception data and the radar perception data is started only in a data fusion area with higher detection precision of the camera and the radar, so that the condition that the association of the radar and the camera is abnormal is avoided, the accuracy of target fusion information is ensured, and the target fusion effect is improved. And the radar sensing areas are set in other areas, so that the furthest detection distance of the sensing areas is not limited by the detection distance of the camera any more due to the fact that the radar detection distance is far, the range of target fusion is enlarged, and the target fusion effect is further improved.
Example two
Fig. 3 is a schematic flow chart of another multi-sensor-based road side sensing method according to the second embodiment of the present invention. This embodiment is a further optimization of the foregoing embodiment, which further defines "determining, according to the to-be-fused sensing data and the pre-created fusion target list, a data association result of the to-be-fused sensing data and the fusion target list", "updating the fusion target list according to the data association result", and "determining fusion target information of the sensing area according to the associated sensing data corresponding to each fusion target in the fusion target list".
As shown in fig. 3, the second embodiment provides a road side sensing method based on multiple sensors, which specifically includes the following steps:
s210, obtaining the to-be-fused sensing data of the sensing area.
And S220, if the fusion target list is empty, the to-be-fused sensing data is unassociated sensing data, and the unassociated sensing data is used as a data association result of the to-be-fused sensing data and the fusion target list.
Specifically, if the fusion target list is empty, data fusion cannot be performed, so that the to-be-fused perceived data is unassociated perceived data as a data association result.
And S230, if the fusion target list is not empty, determining a data association result of the to-be-fused perceived data and the fusion target list according to a preset data association strategy.
The preset data association strategy comprises a first data association strategy, a second data association strategy and a third data association strategy. The first data association strategy refers to association by identification number. An ID association is performed first; this step depends on the tracking of the original sensor. If a fusion target was already associated with the to-be-fused sensing data (a radar or camera target) before fusion, and the heading angle difference, position difference and speed difference between the fusion target and the measurement are within certain threshold ranges, they are associated again during fusion.
After the first association step, three association results are produced: successfully associated target association pairs, unassociated to-be-fused sensing data, and unassociated fusion targets; these are recorded as the first association result. If a newly arrived sensing target was not successfully associated in the previous step, that is, the ID association did not take effect, the second association step is performed. The second association step refers to Intersection over Union (IOU) association. For example, if radar sensing data was transmitted earlier, its radar coordinates xy are converted into pixel coordinates UV; the maintained fusion target list also contains UV information, so the coordinate information of the newly transmitted radar sensing data is associated with the pixel coordinate information of the maintained fusion targets, namely the IOU association of the fusion area. Each target has a rectangular box on the image, and projecting the pixel coordinates of the newly arrived radar sensing data also produces a rectangle. The ratio of the overlapping area of the two rectangular boxes to the area of their union is taken as the intersection over union, and data fusion in the data fusion area is carried out according to this ratio.
As described above, in the other areas the camera cannot detect targets or there is no visual sensing data, so the third data association strategy of the third step is needed, namely distance association. Distance association means that when the sensing target and the fusion target are located in the other areas there is a straight-line distance between them; simply put, the sensing target closest to a fusion target is considered to be the same target.
S240, updating the target attribute of the fusion target association in the fusion target list according to the target association pair in the data association result.
The step can be considered as updating the association pair, updating the target attribute corresponding to the associated sensing data to be fused to the fusion target, and maintaining the fusion target to exist continuously. In this embodiment, after determining the target association pair, the attribute of the target association pair needs to be updated, that is, the attribute of the fusion target corresponding to the to-be-fused perceived data is determined according to the to-be-fused perceived data corresponding to the target association pair.
It can be understood that the to-be-fused sensing data corresponding to a target association pair may be visual sensing data or radar sensing data. If the to-be-fused sensing data corresponding to the target association pair is visual sensing data, the static attributes in the visual sensing data are taken as the target attributes to be added; some dynamic attributes, such as the position, speed, heading angle and pixel coordinates of the target vehicle, can also be determined from the data captured by the camera, but these dynamic attributes have larger errors and need to be fused with the radar sensing data.
If the to-be-fused sensing data corresponding to the target association pair is radar sensing data, the dynamic attributes in the radar sensing data are taken as the target attributes to be added. Meanwhile, some dynamic attributes can be determined from the data captured by the camera, and at this point the dynamic attributes corresponding to the radar sensing data can be fused with the dynamic attributes corresponding to the associated visual sensing data.
While determining the target attributes to be added, the fusion target corresponding to the target association pair is determined from the fusion target list. The target attributes can then be associated with the fusion target, thereby realizing the adding operation of the target attributes.
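The attribute-adding operation for a target association pair can be sketched as follows: static attributes come from the camera, dynamic attributes from the radar. The key names are hypothetical and only mirror the attributes discussed above.

```python
# Hypothetical attribute update for an associated pair: the camera contributes
# static attributes, the radar contributes dynamic attributes.
STATIC_KEYS = ("target_type", "color", "license_plate")
DYNAMIC_KEYS = ("x_m", "y_m", "speed_mps", "heading_deg")

def update_fused_target(fused: dict, measurement: dict, source: str) -> dict:
    keys = STATIC_KEYS if source == "camera" else DYNAMIC_KEYS
    for k in keys:
        if k in measurement and measurement[k] is not None:
            fused[k] = measurement[k]
    return fused

fused = {"id": 1, "target_type": "truck", "x_m": 52.0, "y_m": 3.4, "speed_mps": 14.0}
print(update_fused_target(fused, {"x_m": 53.4, "y_m": 3.3, "speed_mps": 14.2, "heading_deg": 1.0}, "radar"))
```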
S250, creating a new fusion target according to the unassociated perception data in the data association result and storing the new fusion target in a fusion target list.
In the prior art, the visual perception data captured by the camera and the radar perception data captured by the radar are associated over the whole sensing area. Because the radar easily generates multiple clustering points for a large target, for example generating three clustering points for one large truck and thus detecting three targets, two of which are false targets, false detections and missed detections occur. When these false targets are projected onto the image captured by the camera, the association between the radar and the camera becomes abnormal and the fusion effect is poor; for example, the position, speed and heading angle of the fused target are updated incorrectly.
In order to avoid the above-mentioned problems, in this embodiment, the sensing area is divided into three areas, which are sequentially divided into a target creation area, a data fusion area, and other areas according to the moving direction of the target. The target creation area only obtains visual perception data by a camera, radar perception data are obtained by a radar in the data fusion area, and radar perception data are obtained by the radar in other areas.
In this embodiment, the type of unassociated sensing data in the data association result may be determined, for example, the unassociated sensing data is visual sensing data or radar sensing data. And if the unassociated perception data is the visual perception data of the target creation area or the data fusion area, creating a new fusion target and adding the new fusion target into the fusion target list. The process can be understood as a new fusion target is created if certain conditions for the creation target are met.
S260, deleting the expired fusion targets in the fusion target list according to the unassociated fusion targets in the data association result.
In this embodiment, in addition to updating the fusion target list according to the target association pairs and the unassociated sensing data in the data association result, the fusion target list may also be updated based on the unassociated fusion targets in the data association result.
This step is used for updating the unassociated fusion targets, namely deleting the expired fusion targets. In this embodiment, after the to-be-fused sensing data is obtained, the category of the to-be-fused sensing data may be determined. If the to-be-fused sensing data is visual sensing data, the duration for which the radar has not participated in updating the target attributes of the fusion target list is updated, which can be understood as increasing the duration for which the radar has not participated in updating. If the to-be-fused sensing data is radar sensing data, the duration for which the camera has not participated in updating the target attributes of the fusion target list is updated, which can be understood as increasing the duration for which the camera has not participated in updating.
If neither category of sensing data has updated an unassociated fusion target and the non-updated duration exceeds the set time threshold, the fusion target can be regarded as an expired fusion target and deleted from the fusion target list. For example, if a fusion target has driven out of the other areas and is far away from the radar, neither the radar nor the camera can detect sensing data for it, so the fusion target has no new sensing data to fuse; when the set time threshold is exceeded, the fusion target can be determined to be an expired fusion target and deleted from the fusion target list.
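The expiry bookkeeping described above can be sketched as below; the per-sensor "time since last update" fields and the 2-second threshold are illustrative assumptions.

```python
# Hypothetical expiry check: a fused target is dropped once neither sensor has
# updated it for longer than a set threshold.
EXPIRY_THRESHOLD_S = 2.0   # assumed value

def prune_expired(fused_targets: dict, now_s: float) -> dict:
    """fused_targets: id -> {'last_radar_s': t, 'last_camera_s': t, ...}."""
    alive = {}
    for tid, t in fused_targets.items():
        radar_gap = now_s - t.get("last_radar_s", -float("inf"))
        camera_gap = now_s - t.get("last_camera_s", -float("inf"))
        if min(radar_gap, camera_gap) <= EXPIRY_THRESHOLD_S:
            alive[tid] = t   # at least one sensor updated it recently
    return alive

targets = {1: {"last_radar_s": 9.9, "last_camera_s": 7.0},
           2: {"last_radar_s": 6.0, "last_camera_s": 6.5}}
print(prune_expired(targets, now_s=10.0))   # target 2 is expired and removed
```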
S270, acquiring associated perception data of each fusion target in the fusion target list.
Specifically, the fusion target list may be traversed to obtain associated perception data associated with each fusion target in the fusion target list.
S280, converting each associated sensing data according to a set coordinate system to obtain fusion target information in the converted sensing region.
The set coordinate system may be set according to actual requirements, for example, a longitude and latitude coordinate system. In this embodiment, since each associated sensing data is based on a camera coordinate system or a radar coordinate system, after the association update is completed, calculation is performed according to a required longitude and latitude coordinate system, and corresponding longitude and latitude, ID, heading angle, speed, target type, license plate, vehicle color and other information are output as fusion target information in the converted sensing area.
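One way to realize the conversion to the longitude and latitude coordinate system is a flat-earth approximation around the installation point, as sketched below; the pole's geodetic position and the axis convention are assumptions for illustration.

```python
# Hypothetical local-XY -> longitude/latitude conversion using a flat-earth
# approximation around the sensor pole; the pole's geodetic position and the
# assumption that radar x points east and y points north are illustrative.
import math

POLE_LAT_DEG = 39.9900     # assumed latitude of the installation point
POLE_LON_DEG = 116.3000    # assumed longitude of the installation point
EARTH_RADIUS_M = 6378137.0

def xy_to_latlon(x_east_m: float, y_north_m: float):
    dlat = math.degrees(y_north_m / EARTH_RADIUS_M)
    dlon = math.degrees(x_east_m / (EARTH_RADIUS_M * math.cos(math.radians(POLE_LAT_DEG))))
    return POLE_LAT_DEG + dlat, POLE_LON_DEG + dlon

print(xy_to_latlon(3.5, 80.0))  # latitude/longitude of a target 80 m down the road
```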
The second embodiment of the present invention provides a technical solution that embodies how to determine a data association result, how to update a fusion target list according to the data association result, and how to determine fusion target information of a sensing area according to associated sensing data corresponding to each fusion target in the fusion target list. According to the data association strategy, determining the data association result of the to-be-fused perceived data and the fusion target list, updating the fusion target list based on different association results in the data association result, updating the association attribute in the fusion target list, creating a new fusion target and deleting an overdue fusion target, so as to ensure that accurate data is stored in the target list, and further output fusion target information. The accuracy of the target fusion information is guaranteed, and the target fusion effect is improved. And the radar sensing areas are set in other areas, so that the furthest detection distance of the sensing areas is not limited by the detection distance of the camera any more due to the fact that the radar detection distance is far, the range of target fusion is enlarged, and the target fusion effect is further improved.
As a first optional embodiment of the second embodiment of the present invention, based on the foregoing embodiment, the implementation of determining a data association result of to-be-fused perceived data and a fusion target list according to a preset data association policy may be optimized, and includes the following steps:
a1 And determining a first association result of the to-be-fused perceived data and the fusion target list according to the first data association strategy.
The first data association strategy refers to association by identification number. An ID association is performed first; this step depends on the tracking of the original sensor. If a fusion target was already associated with the to-be-fused sensing data (a radar or camera target) before fusion, and the heading angle difference, position difference and speed difference between the fusion target and the measurement are within certain threshold ranges, they are associated again during fusion.
After the first step of association, three association results are generated, namely, a target association pair with successful association, unassociated perception data to be fused or unassociated fusion targets are generated, and the results are recorded as the first association result.
As a specific implementation, the implementation of determining the first association result of the to-be-fused perceptual data and the fusion target list according to the first data association policy may be optimized, including the following steps:
a11 Traversing the to-be-associated identification numbers of the to-be-associated sensing targets in the to-be-fused sensing data and the associated identification numbers of the associated sensing targets of the fused targets in the fused target list.
The target related to the sensing data to be fused is marked as a sensing target to be associated, and the identification number of the sensing target to be associated is marked as an identification number to be associated. The targets associated with the fusion targets contained in the fusion target list are marked as associated perception targets, and the identification numbers of the associated perception targets are marked as associated identification numbers. Specifically, traversing the to-be-associated identification numbers of all to-be-associated perception targets in the to-be-fused perception data and the associated identification numbers of the to-be-associated perception targets of all fused targets in the fused target list.
a12 If at least one identification number to be associated is the same as the associated identification number, judging whether the sensing data to be fused meets the preset check condition.
For example, assume that the fusion targets in the fusion target list are denoted by identification numbers 1, 2 and 3, that each fusion target has an associated sensing target, and that the associated sensing targets are denoted by identification numbers A, B and C, with 1 associated with A, 2 associated with B and 3 associated with C. If the to-be-associated identification number in the newly transmitted to-be-fused sensing data is B, it can be determined after traversal that B is associated with 2, and the to-be-fused sensing data is preliminarily considered to be associated with 2.
However, before this association is made, it is further necessary to determine whether the to-be-fused sensing data satisfies the preset check condition. The purpose of the check is as follows: there is a time difference between the data of the previous frame and the data of the new current frame, so the association between them involves a displacement caused by this time difference, and if the two frames are far apart in time the position difference may be large. Therefore, a track prediction is made from the previous frame according to the time difference and the current speed, and the difference between the predicted position and the position of the new current frame is judged. For example, if the position difference between the predicted track position and the new current frame exceeds 10 meters, then even though the previous frame of sensing data was associated with the fusion target and the identification numbers still match, the earlier association is considered problematic: the identification number association is forcibly broken and treated as an incorrect association.
It can be understood that, with three association strategies, if any one association step is wrong, the next association by identification number would in theory still be made even though the previous association was wrong. Therefore a verification step is added when the association is made again: the errors between the predicted position, heading angle and speed and those of the current frame are checked against the set thresholds. This also defines what conditions must be met for the previously associated identification number to be considered still associated. When checking against the distance threshold, the position of the previous frame is predicted: since some time elapses between the previous frame and the newly transmitted current frame, a track prediction over this time difference is made according to uniform linear motion or uniformly accelerated linear motion, and the position difference is calculated between the predicted position and the new position.
For example, assume that a fusion target and a sensing target were associated before: the fusion target goes straight, while a sensing target running parallel to it turns right. Because the two are close to each other, the sensing data of that sensing target gets associated with the straight-going fusion target. At the next moment the fusion target keeps moving straight ahead while the sensing target turns right, yet they would still be considered associated because they were associated last time, which is an incorrect association. Since the position difference between the two grows larger and larger over time, this mis-association problem can be solved based on the preset check condition. In other words, a fusion target is only associated with to-be-fused sensing data that satisfies the condition; to-be-fused sensing data that does not satisfy the check condition is not associated, which avoids false associations and reduces the amount of computation.
If an identification number match is found, it is still necessary to determine whether the to-be-fused sensing data meets the preset check condition. First, the time difference between the current-frame sensing data and the previous-frame associated data of the fusion target is calculated. According to this time difference and the position, speed and heading angle of the previous-frame associated data, the position, speed and heading angle of the fusion target in the current frame can be predicted. The position difference, speed difference and heading angle difference between the current frame and the prediction are then calculated. It should be noted that if the speed in the sensing data is below the speed threshold, the heading angle is considered unreliable, and only the position difference and the speed difference between the current frame and the prediction need to be calculated. The preset check condition means that the position difference, the speed difference and the heading angle difference are within the set threshold ranges. For example, assuming the fusion target is in uniform linear motion, the position of the target in the current frame can be predicted from the time difference and the speed; a sketch of this check is given after the first association result below.
a13 If yes, determining that the fusion target associated with the associated identification number with the same identification number and the perception target to be associated are a first target association pair.
Specifically, if an identification number to be associated is the same as an associated identification number and the corresponding frame of to-be-fused perception data meets the preset check condition, the fusion target linked to that associated identification number and the perception target to be associated are taken as a target association pair, recorded as the first target association pair.
a14 And taking the perception data except for the perception target to be associated in the first target association pair in the perception data to be fused as first unassociated perception data.
Specifically, the perception data in the to-be-fused perception data that does not belong to any first target association pair is taken as unassociated perception data and recorded as the first unassociated perception data.
a15 Taking the fusion targets except the fusion targets in the first target association pair in the fusion target list as first unassociated fusion targets.
Specifically, the fusion targets in the fusion target list that do not belong to any first target association pair are taken as unassociated fusion targets and recorded as the first unassociated fusion targets.
a16 The first target association pair, the first unassociated awareness data and the first unassociated fusion target are used as a first association result.
The technical scheme embodies the step of associating the to-be-fused perception data with the first fusion target list in an identification number association mode to obtain a first association result.
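As a rough illustration of this first strategy only, the following Python sketch matches perception targets to fusion targets by identification number and applies the consistency check sketched above; the dictionary keys and the dt_of helper are assumptions, not part of the patent.

```python
def id_associate(perception_data, fusion_targets, check_association, dt_of):
    """First data association strategy: match by identification number.

    perception_data: list of dicts, each carrying an 'id' and a kinematic state
    fusion_targets:  list of dicts carrying the 'id' of their associated
                     perception target and the last associated 'state'
    check_association / dt_of: the consistency check and a helper returning the
                     time difference between the stored frame and the new frame
    """
    pairs, unmatched_data, matched_ids = [], [], set()
    targets_by_id = {t['id']: t for t in fusion_targets}
    for det in perception_data:
        target = targets_by_id.get(det['id'])
        if target is not None and check_association(target['state'], det, dt_of(target, det)):
            pairs.append((det, target))    # first target association pair
            matched_ids.add(target['id'])
        else:
            unmatched_data.append(det)     # first unassociated perception data
    unmatched_targets = [t for t in fusion_targets if t['id'] not in matched_ids]
    return pairs, unmatched_data, unmatched_targets
```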
b1 If the first association result contains the first unassociated sensing data and the first unassociated fusion target, determining a second association result of the first unassociated sensing data and the first unassociated fusion target according to the second data association policy.
Specifically, if unassociated to-be-fused perception data and unassociated fusion targets still remain after the previous data association strategy, a second round of association is performed. Illustratively, a target moves from the target creation area into the data fusion area; when it first enters the data fusion area there may be no radar perception data carrying an identification number to associate with, and at that point only IOU association can be performed. Before IOU association, it is necessary to determine whether at least one of the two pieces of data belongs to the data fusion area; IOU association is performed only in that case. When the fusion target travels in the data fusion area, if the incoming to-be-fused perception data is radar perception data, the visual perception data already associated with the fusion target is extracted and the frame intersection-over-union is calculated against the new radar perception data. Similarly, if the to-be-fused data is visual perception data, the radar perception data associated with the fusion target is extracted and the frame intersection-over-union is calculated against the new visual perception data. A matching cost matrix is then generated and passed to Hungarian matching to obtain the data association result.
In this embodiment, IOU association is performed between the first unassociated perception data and the first unassociated fusion targets, and the result is recorded as the second association result. The second association result may include target association pairs, recorded as second target association pairs; unassociated to-be-fused perception data, recorded as second unassociated perception data; and unassociated fusion targets, recorded as second unassociated fusion targets.
As a specific implementation, the implementation of determining the second association result of the first unassociated awareness data and the first unassociated fusion target according to the second data association policy may be optimized, including the following steps:
b11 Traversing the first unassociated awareness data and the first unassociated fusion target.
b12 Judging whether at least one frame of first unassociated perception data exists or whether data in a first unassociated fusion target belongs to a data fusion area.
Specifically, it is judged whether the frame of first unassociated perception data, or the associated data in the first unassociated fusion target, belongs to the data fusion area; such data can be understood as data captured while the target is inside the data fusion area.
If this condition is not met, it can be determined that the first unassociated perception data and that fusion target among the first unassociated fusion targets are not associated. In this case the intersection-over-union can be set to its minimum value, i.e. the cost element to its maximum, so that the subsequent Hungarian algorithm outputs no association for the pair.
b13 If yes, judging whether the first unassociated sensing data meets the preset check condition.
If the above condition is met, it is still necessary to determine whether the first unassociated perception data meets the preset check condition. First, the time difference between the current frame of first unassociated perception data and the previous frame of data associated with the fusion target is calculated. From this time difference and the position, speed and heading angle of the previous associated frame, the position, speed and heading angle of the fusion target at the current frame can be predicted, and the position difference, speed difference and heading-angle difference between the current frame and the prediction calculated. If the speed in the first unassociated perception data is below the low-speed threshold, the heading angle is considered unreliable and only the position and speed differences need to be calculated. The preset check condition is that these differences lie within the set threshold ranges. The purpose of the check is described above and is not repeated here.
b14 If yes, carrying out cross ratio calculation on the first uncorrelated perception data and heterogeneous perception data in the first uncorrelated fusion target, and determining a first association matrix.
If the preset check condition is met, the IOU is calculated between the first unassociated perception data and the heterogeneous perception data in the first unassociated fusion target, and the matrix built from the values (1 - IOU) is recorded as the first association matrix.
Here, if the first unassociated perception data is visual perception data, the heterogeneous perception data in the first unassociated fusion target refers to its associated radar perception data; if the first unassociated perception data is radar perception data, the heterogeneous perception data refers to its associated visual perception data.
b15 Inputting the first association matrix into a Hungary algorithm to obtain a second target association pair, second unassociated sensing data and a second unassociated fusion target, and taking the second target association pair, the second unassociated sensing data and the second unassociated fusion target as a second association result.
Specifically, the first association matrix is input into the Hungarian algorithm, which performs association matching and determines whether the first unassociated perception data can be associated with the first unassociated fusion targets. In this embodiment, the target association pairs obtained in this step are recorded as second target association pairs, the remaining unassociated perception data as second unassociated perception data, and the remaining unassociated fusion targets as second unassociated fusion targets. The second target association pair, the second unassociated perception data and the second unassociated fusion target are taken as the second association result.
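A compact sketch of this IOU step is given below, assuming axis-aligned boxes in a common coordinate frame and using scipy's linear_sum_assignment as a stand-in for the Hungarian matching; the box format, gating value and function names are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_associate(data_boxes, target_boxes, max_cost=1.0):
    """Build a (1 - IOU) cost matrix and solve the assignment.

    data_boxes:   boxes of the unassociated perception data
    target_boxes: boxes of the heterogeneous data already held by the fusion targets
    Pairs whose cost reaches max_cost are treated as unassociated.
    """
    cost = np.ones((len(data_boxes), len(target_boxes)))
    for i, d in enumerate(data_boxes):
        for j, t in enumerate(target_boxes):
            cost[i, j] = 1.0 - iou(d, t)
    rows, cols = linear_sum_assignment(cost)
    pairs = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]
    matched_i = {i for i, _ in pairs}
    matched_j = {j for _, j in pairs}
    unmatched_data = [i for i in range(len(data_boxes)) if i not in matched_i]
    unmatched_targets = [j for j in range(len(target_boxes)) if j not in matched_j]
    return pairs, unmatched_data, unmatched_targets
```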
The technical scheme embodies the step of associating the first unassociated perception data with the first unassociated fusion target in an IOU association mode to obtain a second association result.
c1 If the second association result contains the second unassociated sensing data and the second unassociated fusion target, determining a third association result of the second unassociated sensing data and the second unassociated fusion target according to a third data association policy.
Specifically, if unassociated to-be-fused perception data and unassociated fusion targets still remain after the first two data association strategies, in this embodiment the remaining unassociated to-be-fused perception data is recorded as the second unassociated perception data and the remaining unassociated fusion targets as the second unassociated fusion targets. In this step, distance association is carried out between them, and this is recorded as the third data association strategy. It can be understood that when a fusion target moves from the data fusion area into the other areas, it already carries the attributes best provided by the camera (such as the vehicle's license plate, type and color) and the attributes best provided by the radar (such as position, speed and heading angle); adding distance association in those areas therefore allows the radar to continue maintaining the target's position, speed, heading angle and other information.
In this embodiment, distance association is performed between the second unassociated perception data and the second unassociated fusion targets, and the result is recorded as the third association result. The third association result may include target association pairs, recorded as third target association pairs; unassociated to-be-fused perception data, recorded as third unassociated perception data; and unassociated fusion targets, recorded as third unassociated fusion targets.
As a specific implementation, the implementation of determining the third association result of the second unassociated awareness data and the second unassociated fusion target according to the third data association policy may be optimized, including the following steps:
c11 Traversing the second unassociated awareness data and the second unassociated fusion target.
c12 Judging whether the second unassociated sensing data and the data in the second unassociated fusion target belong to other areas or not, and judging that the second unassociated sensing data is radar sensing data.
Specifically, whether the second unassociated perceived data and the related data in the second unassociated fusion target are both in other areas is judged, and the corresponding data can be understood to be the data captured when the target is in the other areas. And it is necessary to determine whether the second unassociated awareness data is radar awareness data.
If the above condition is not met, it can be determined that the second unassociated perception data and the second unassociated fusion target are not associated. In this case the cost element can be set to its maximum value, so that the subsequent Hungarian algorithm outputs no association for the pair.
c13 If yes, judging whether the second unassociated sensing data meets the preset check condition.
If the above condition is met, it is still necessary to determine whether the second unassociated perception data meets the preset check condition. First, the time difference between the current frame of second unassociated perception data and the previous frame of data associated with the fusion target is calculated. From this time difference and the position, speed and heading angle of the previous associated frame, the position, speed and heading angle of the fusion target at the current frame can be predicted, and the position difference, speed difference and heading-angle difference between the current frame and the prediction calculated. If the speed in the second unassociated perception data is below the low-speed threshold, the heading angle is considered unreliable and only the position and speed differences need to be calculated. The preset check condition is that these differences lie within the set threshold ranges. The purpose of the check is described above and is not repeated here.
c14 If so, performing distance calculation on the second uncorrelated perception data and the similar perception data in the second uncorrelated fusion target, and determining a second correlation matrix.
The second association matrix is a matrix indicating the association relation between the second unassociated perception data and the fusion targets among the second unassociated fusion targets. Specifically, if the preset check condition is met, the distance can be calculated between the second unassociated perception data and the same-kind perception data associated with each second unassociated fusion target, yielding the second association matrix. The cost element is calculated as: straight-line distance / (threshold distance × 2).
Here, if the second unassociated perception data is visual perception data, the same-kind perception data in the second unassociated fusion target refers to its associated visual perception data; if the second unassociated perception data is radar perception data, the same-kind perception data refers to its associated radar perception data.
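A short sketch of this cost element follows, reading the expression above as the straight-line distance divided by twice the threshold distance; this reading, the threshold value and the names are assumptions.

```python
import math

DIST_THRESH_M = 10.0  # assumed gating distance for the distance association

def distance_cost(p_data, p_target, thresh=DIST_THRESH_M):
    """Cost element of the second association matrix for one pair of positions.

    Interpreted here as straight-line distance divided by twice the threshold
    distance, so a pair exactly at the threshold costs 0.5 and pairs far beyond
    it cost well above 1 and are rejected by the matching step.
    """
    d = math.hypot(p_data[0] - p_target[0], p_data[1] - p_target[1])
    return d / (2.0 * thresh)
```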
c15 Inputting the second association matrix into a Hungary algorithm to obtain a third target association pair, third position association sensing data and a third unassociated fusion target, and taking the third target association pair, the third unassociated sensing data and the third unassociated fusion target as a third association result.
Specifically, the second association matrix is input into the Hungarian algorithm, which performs association matching and determines whether the second unassociated perception data can be associated with the second unassociated fusion targets. In this embodiment, the target association pairs obtained in this step are recorded as third target association pairs, the remaining unassociated perception data as third unassociated perception data, and the remaining unassociated fusion targets as third unassociated fusion targets. The third target association pair, the third unassociated perception data and the third unassociated fusion target are taken as the third association result.
The technical scheme embodies the step of associating the second unassociated perception data with the second unassociated fusion target in a distance association manner to obtain a third association result.
d1 Determining a data association result according to the first association result, the second association result and the third association result.
The first association result comprises the first target association pair, the first unassociated perception data and the first unassociated fusion target; acquiring the first target association pair from it is equivalent to acquiring the target association pairs formed under the first data association strategy. The second association result comprises the second target association pair, the second unassociated perception data and the second unassociated fusion target; acquiring the second target association pair from it is equivalent to acquiring the target association pairs formed under the second data association strategy. The third association result comprises the third target association pair, the third unassociated perception data and the third unassociated fusion target, all three of which are acquired from it.
Once the first association result, the second association result and the third association result have been obtained, the final data association result after the three rounds of association is determined from the contents of these three results.
According to this technical scheme, the step of determining the data association result of the to-be-fused perception data and the fusion target list according to the preset data association strategy is embodied: the data association result is determined through three successive data association strategies, which provides a basis for the subsequent attribute update of the fusion targets.
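Schematically, the three strategies form a cascade in which each later strategy only sees what the earlier one left unassociated; the sketch below illustrates that control flow only, and all names are placeholders.

```python
def associate(perception_data, fusion_targets,
              id_associate, iou_associate, distance_associate):
    """Run the three data association strategies in sequence.

    Each *_associate callable returns (pairs, unmatched_data, unmatched_targets);
    later strategies operate only on what earlier strategies left unassociated.
    """
    pairs_1, data_1, targets_1 = id_associate(perception_data, fusion_targets)
    pairs_2, data_2, targets_2 = iou_associate(data_1, targets_1)
    pairs_3, data_3, targets_3 = distance_associate(data_2, targets_2)
    return {
        'pairs': pairs_1 + pairs_2 + pairs_3,   # target association pairs
        'unassociated_data': data_3,            # still-unassociated perception data
        'unassociated_targets': targets_3,      # still-unassociated fusion targets
    }
```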
As a specific implementation, the implementation of determining the data association result according to the first association result, the second association result and the third association result may be optimized, and the implementation includes the following steps:
d11 A first target association pair in a first association result, a second target association pair in a second association result, a third target association pair in a third association result, third unassociated perception data and a third unassociated fusion target are obtained.
The first association result comprises the first target association pair, the first unassociated perception data and the first unassociated fusion target; acquiring the first target association pair from it is equivalent to acquiring the target association pairs formed under the first data association strategy. The second association result comprises the second target association pair, the second unassociated perception data and the second unassociated fusion target; acquiring the second target association pair from it is equivalent to acquiring the target association pairs formed under the second data association strategy. The third association result comprises the third target association pair, the third unassociated perception data and the third unassociated fusion target, all three of which are acquired from it.
d12 A first target association pair, a second target association pair, and a third target association pair as target association pairs in the data association result.
It will be appreciated that the results produced after data association may include three types: associated target pairs, unassociated perception data, and unassociated fusion targets. This step is used to determine the target association pairs that were successfully associated.
In this embodiment, the first target association pair corresponds to the target association pairs formed under the first data association strategy, the second target association pair to those formed under the second data association strategy, and the third target association pair to those formed under the third data association strategy. The target association pairs obtained in the three rounds of association together form all the associated pairs of this round; that is, the first target association pair, the second target association pair and the third target association pair can all be used as target association pairs in the data association result.
d13 The third unassociated awareness data is taken as unassociated awareness data in the data association result.
The third unassociated perception data can be regarded as the to-be-fused perception data that remains unassociated after the three data association strategies; in this embodiment it is used as the unassociated perception data in the data association result.
d14 The third unassociated fusion target is used as the unassociated fusion target in the data association result.
The third unassociated fusion target can be regarded as the fusion target that remains unassociated after the three data association strategies; in this embodiment it is used as the unassociated fusion target in the data association result.
According to this technical scheme, the step of determining the data association result from the first association result, the second association result and the third association result is embodied: the association pairs of the three results are extracted as the target association pairs, while the third unassociated perception data and the third unassociated fusion target, which are still unassociated after the three data association strategies, are used as the unassociated perception data and the unassociated fusion target in the data association result. This provides the basic data for the subsequent update of the fusion target list.
As a second optional embodiment of the second embodiment of the present invention, based on the foregoing embodiment, the present optional embodiment may optimize updating, according to a target association pair in a data association result, a target attribute of a fusion target association in a fusion target list, including:
a2 And determining the attribute of the target to be added according to the to-be-fused perception data corresponding to the target association pair.
In this embodiment, after determining the target association pair, the attribute of the target association pair needs to be updated, that is, the attribute of the fusion target corresponding to the to-be-fused perceived data is determined according to the to-be-fused perceived data corresponding to the target association pair and is recorded as the to-be-added target attribute.
It can be understood that the to-be-fused perception data corresponding to the target association pair may be either visual perception data or radar perception data. If it is visual perception data, the static attributes in the visual perception data are taken as the target attributes to be added; at the same time some dynamic attributes, such as the position, speed, heading angle and pixel coordinates of the target vehicle, can also be determined from the data captured by the camera, but these dynamic attributes carry a larger error and need to be fused with the radar perception data.
If the to-be-fused perception data corresponding to the target association pair is radar perception data, the dynamic attributes in the radar perception data are taken as the target attributes to be added. Since some dynamic attributes can also be determined from the data captured by the camera, the dynamic attributes from the radar perception data and those from the associated visual perception data can be fused at this point.
As a specific implementation, the implementation of determining the attribute of the target to be added according to the to-be-fused perceived data corresponding to the target association pair in the data association result may be specifically optimized as the following steps:
a21 If the target association pair corresponds to the to-be-fused sensing data, determining a first occupation ratio of first sensing data in the to-be-fused sensing data according to the association duration of the to-be-fused sensing data.
Consider a target that moves from the target creation area into the data fusion area. While it is in the target creation area its attributes are provided by the camera; once it enters the data fusion area it is also associated with the radar, so its position information now includes positions captured by the radar, and the positions captured by the camera and the radar may differ. A position jump could therefore occur when the target crosses from the target creation area into the data fusion area. Based on this, in this embodiment, when the to-be-fused perception data is visual perception data, the association duration of the camera with the fusion target is updated, and from this association duration the weight (duty ratio) given to the position, speed, heading angle and error of the visual perception data can be determined. It can be appreciated that the first occupation ratio is calculated from the first perception data and the radar perception data associated with the fusion target.
It should be clear that, in this embodiment, the purpose of determining the camera's duty ratio by updating the camera's association duration with the fusion target is to preserve part of the dynamic attributes determined by the camera, so that their values are not switched abruptly to the values determined by the radar. Instead, these dynamic attributes are maintained jointly by the camera position and the radar position and shifted over gradually, until only the radar is still providing data, at which point they come entirely from the radar. This process can be understood as a trajectory smoothing operation.
a22 According to the first duty ratio and the first perception data, determining a first target attribute, and taking the first target attribute and the second perception data in the perception data to be fused as target attributes to be added.
In this embodiment, given the first perception data, the radar perception data associated with the fusion target of the target association pair, and the first occupation ratio of the first perception data, the dynamic attributes such as position, speed and heading angle can be updated, and the updated attributes are recorded as the first target attribute. Meanwhile, the static attributes of the fusion target, such as the vehicle's type, color and license plate, are determined by the camera alone and are accurate, so the static attributes in the to-be-fused perception data can be used as the static attributes of the fusion target. In this embodiment these static attributes are taken as the second perception data, and the first target attribute together with the second perception data in the to-be-fused perception data are taken as the target attributes to be added.
a23 If the target association pair corresponds to the to-be-fused sensing data, determining a second occupation ratio of third sensing data in the to-be-fused sensing data according to the association duration of the to-be-fused sensing data.
In this embodiment, if the to-be-fused perception data corresponding to the target association pair is radar perception data, the dynamic attributes of the fusion target are involved; in order to keep the track of the fusion target smooth, the target attributes to be added are updated using the duty ratio of the to-be-fused perception data.
Specifically, the purpose of determining the radar's duty ratio by updating the radar's association duration with the fusion target is to preserve the dynamic attributes maintained so far, so that the values previously determined with the camera are not switched abruptly to the values determined by the radar. Instead, these dynamic attributes are maintained jointly by the camera position and the radar position and shifted over gradually, until only the radar is still providing data, at which point they come entirely from the radar. This process can be understood as a trajectory smoothing operation.
Based on this, in this embodiment, when the to-be-fused perception data is radar perception data, the association duration of the radar with the fusion target is updated, and from this association duration the weight (duty ratio) given to the position, speed, heading angle and error of the radar perception data can be determined. It can be appreciated that the second occupation ratio is calculated from the third perception data and the visual perception data associated with the fusion target.
a24 And determining a third target attribute according to the second duty ratio and the third perception data, and taking the third target attribute as the target attribute to be added.
In this embodiment, given the third perception data, the visual perception data associated with the fusion target of the target association pair, and the second occupation ratio of the third perception data, the dynamic attributes such as position, speed and heading angle can be updated; the updated attributes are recorded as the third target attribute, which is taken as the target attribute to be added.
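A minimal sketch of this trajectory-smoothing idea follows, assuming the weight of the newly associated sensor grows linearly with its association duration up to a cap; the ramp time, keys and names are assumptions rather than values from the patent.

```python
RAMP_TIME_S = 2.0  # assumed time over which the newly associated sensor takes over

def sensor_weight(assoc_duration_s, ramp=RAMP_TIME_S):
    """Duty ratio of the newly associated sensor, growing from 0 to 1 over time."""
    return min(assoc_duration_s / ramp, 1.0)

def blend_dynamic_state(new_state, old_state, assoc_duration_s):
    """Blend position, speed and heading from the new sensor with the state kept so far.

    new_state / old_state are dicts with keys 'x', 'y', 'speed', 'heading'.
    The longer the new sensor has been associated, the more its values dominate,
    which avoids a jump when the target enters or leaves the data fusion area.
    Heading wrap-around is ignored here for brevity.
    """
    w = sensor_weight(assoc_duration_s)
    return {k: w * new_state[k] + (1.0 - w) * old_state[k]
            for k in ('x', 'y', 'speed', 'heading')}
```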
According to the technical scheme, the determination process of the target attribute to be added is embodied, the target attribute to be added is determined in different modes according to different categories of the sensing data to be fused, and the smooth processing of the movement track of the fusion target is realized through the occupation ratio of the visual sensing data and the radar sensing data, so that the determined target attribute to be added is more accurate and real.
b2 Determining a fusion target corresponding to the target association pair from the fusion target list as a target to be added.
Specifically, a fusion target corresponding to the target association pair is determined from the fusion target list, and the fusion target is used as a target to be added.
c2 Adding the object attribute to be added to the object to be added.
Specifically, the attribute of the target to be added and the target to be added can be associated, so that the attribute of the target to be added is added to the target to be added.
According to the technical scheme, the step of updating the target attribute of the fusion target association in the fusion target list according to the target association pair in the data association result is embodied, the instant update of the fusion target attribute is realized, and the accuracy of the target attribute is ensured.
As a third optional embodiment of the second embodiment of the present invention, based on the foregoing embodiment, the present optional embodiment may optimize creating a new fusion target according to unassociated perceived data in the data association result and storing the new fusion target in a fusion target list, where the creating includes:
a3 A category of unassociated sensory data is determined.
In this embodiment, the type of unassociated sensing data in the data association result may be determined, for example, the unassociated sensing data is visual sensing data or radar sensing data.
b3 If the unassociated perception data is the visual perception data of the target creation area or the data fusion area, creating a new fusion target and adding the new fusion target into the fusion target list.
Consider that in the prior art the visual perception data captured by the camera and the radar perception data captured by the radar are associated over the entire perception area. The radar easily produces several cluster points on a large target: a large truck, for example, may yield three cluster points and therefore be detected as three targets, two of which are false, leading to radar false detections and missed detections. When the two false targets are projected onto the image captured by the camera, the radar and the camera become wrongly associated, so the fusion effect is poor, for example the position, speed, heading angle and other information of the fused target is updated incorrectly.
In order to avoid the radar's false-detection problem, in this embodiment only the visual perception data captured by the camera is used when creating targets; that is, the corresponding fusion target is created from the targets identified in the visual perception data. Fusion targets are created from the visual perception data of the target creation area or the data fusion area, where the camera's detection precision is higher, and the newly created fusion target is added to the fusion target list.
c3 Adding unassociated awareness data as target attributes to the new fusion target.
After the new fusion target has been created in the fusion target list, the attributes contained in the unassociated perception data can be added to it as its target attributes. It will be appreciated that the added target attributes come from visual perception data; for example, assuming the fusion target is a vehicle, the target attributes include the target type, vehicle type, license plate, vehicle color and so on.
It is of course also possible to include attributes of vehicle position, vehicle speed, vehicle pixel coordinates, heading angle, etc., but these attributes are determined based on images captured by the camera, which are more error-prone than those determined by radar.
d3 If the unassociated sensing data is radar sensing data, not updating the fusion target list.
Specifically, since there may be a possibility of missed detection and false detection of the target by the radar, the target creation is performed based on only the visual perception data captured by the camera, and is not performed based on the radar perception data. Thus, if the unassociated awareness data is radar awareness data, no new fusion targets are created, i.e. no updating of the fusion target list is performed.
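The creation rule can be sketched as follows; the dictionary layout, area labels and field names are illustrative assumptions.

```python
def create_targets(unassociated_data, fusion_targets, next_id):
    """Create new fusion targets only from visual detections captured in the
    target creation area or the data fusion area; radar-only detections are
    ignored to avoid the false targets produced by radar clustering."""
    for det in unassociated_data:
        if det['sensor'] == 'camera' and det['area'] in ('creation', 'fusion'):
            fusion_targets.append({
                'id': next_id,
                'static': {'type': det.get('type'), 'plate': det.get('plate'),
                           'color': det.get('color')},
                'dynamic': {'x': det['x'], 'y': det['y'],
                            'speed': det['speed'], 'heading': det['heading']},
            })
            next_id += 1
        # For radar detections nothing is done: the fusion target list is unchanged.
    return fusion_targets, next_id
```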
This technical scheme embodies how to create a new fusion target from the unassociated perception data in the data association result and store it in the fusion target list; creating fusion targets only from visual perception data captured in the target creation area or the data fusion area ensures the accuracy of the created fusion targets.
As a fourth optional embodiment of the second embodiment of the present invention, based on the foregoing embodiment, the fourth optional embodiment may optimize, according to an unassociated fusion target list in a data association result, deleting an expired fusion target in the fusion target list, including:
a4 If the unassociated fusion target does not participate in the association time is greater than the set time threshold, determining that the unassociated fusion target is an overdue fusion target.
In this embodiment, after the to-be-fused perception data is obtained, its category can be determined. If it is visual perception data, the duration for which the radar has not participated in updating the target attributes of the fusion target list is updated, which can be understood as increasing the radar's non-participation time. If it is radar perception data, the duration for which the camera has not participated in updating the target attributes of the fusion target list is updated, i.e. the camera's non-participation time is increased.
The fusion targets in the unassociated fusion target list are recorded as unassociated fusion targets, and these are traversed. If, for a given unassociated fusion target, the duration for which the radar has not participated in updating its target attributes exceeds the set time threshold, the associated radar attribute is set to invalid; likewise, if the duration for which the camera has not participated exceeds the set time threshold, the associated camera attribute is set to invalid. When the time for which an unassociated fusion target has not participated in association exceeds the set time threshold, which is equivalent to the target having neither a valid associated radar attribute nor a valid associated camera attribute, it is determined to be an expired fusion target.
b4 The expired fusion target is deleted from the fusion target list.
Specifically, if the expired fusion target is determined, the expired fusion target may be deleted from the fusion target list.
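A small sketch of this pruning step is shown below; the timeout value and the timestamp fields are assumptions.

```python
SENSOR_TIMEOUT_S = 1.0  # assumed threshold for a sensor's attributes to expire

def prune_targets(fusion_targets, now):
    """Drop fusion targets whose camera and radar attributes have both expired."""
    kept = []
    for t in fusion_targets:
        camera_valid = (now - t['last_camera_update']) <= SENSOR_TIMEOUT_S
        radar_valid = (now - t['last_radar_update']) <= SENSOR_TIMEOUT_S
        if camera_valid or radar_valid:
            kept.append(t)  # at least one sensor is still maintaining the target
        # Otherwise the target is expired and removed from the fusion target list.
    return kept
```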
This technical scheme embodies how to delete expired fusion targets from the fusion target list according to the unassociated fusion targets in the data association result. Expired fusion targets are identified by the time for which they have not participated in updating and are then deleted, which frees the resources they occupy, speeds up traversal of the fusion target list, and thereby improves the efficiency of data fusion.
Example III
Fig. 4 is a schematic structural diagram of a multi-sensor-based roadside sensing device according to a third embodiment of the present invention. The device is applicable to roadside sensing fusion based on a vision sensor and a millimeter wave radar and may be configured in an edge computing device. As shown in fig. 4, the device includes: a data acquisition module 31, a target association module 32, a data fusion module 33 and an information determination module 34; wherein:
the data acquisition module 31 is configured to acquire to-be-fused sensing data of a sensing area;
the target association module 32 is configured to determine a data association result of the to-be-fused perceived data and the fusion target list according to the to-be-fused perceived data and the pre-created fusion target list;
The data fusion module 33 is configured to update the fusion target list according to the data association result;
the information determining module 34 is configured to determine fusion target information of the sensing area according to associated sensing data corresponding to each fusion target in the fusion target list;
the sensing area is sequentially divided into a target creating area, a data fusion area and other areas according to the target moving direction, the target creating area corresponds to visual sensing data, the data fusion area corresponds to visual sensing data and radar sensing data, and the other areas correspond to radar sensing data.
According to the technical scheme, the sensing area is divided into three areas according to the detection precision conditions of different sensors, wherein only visual sensing data are captured in a target creation area, the visual sensing data and radar sensing data are captured in a data fusion area, and only radar sensing data are captured in other areas; the method is equivalent to the situation that the fusion of the visual perception data and the radar perception data is started only in a data fusion area with higher detection precision of the camera and the radar, so that the condition that the association of the radar and the camera is abnormal is avoided, the accuracy of target fusion information is ensured, and the target fusion effect is improved. And the radar sensing areas are set in other areas, so that the furthest detection distance of the sensing areas is not limited by the detection distance of the camera any more due to the fact that the radar detection distance is far, the range of target fusion is enlarged, and the target fusion effect is further improved.
Optionally, the target association module 32 may specifically include:
the first association unit is used for, if the fusion target list is empty, taking the to-be-fused sensing data as unassociated sensing data and using this as the data association result of the to-be-fused sensing data and the fusion target list;
and the second association unit is used for determining a data association result of the to-be-fused perceived data and the fusion target list according to a preset data association strategy if the fusion target list is non-empty.
Optionally, the second association unit may specifically include:
the first determining subunit is used for determining a first association result of the to-be-fused perceived data and the fusion target list according to a first data association strategy;
the second determining subunit is configured to determine, if the first association result includes the first unassociated sensing data and the first unassociated fusion target, a second association result of the first unassociated sensing data and the first unassociated fusion target according to a second data association policy;
the third determining subunit is configured to determine, if the second association result includes the second unassociated sensing data and the second unassociated fusion target, a third association result of the second unassociated sensing data and the second unassociated fusion target according to a third data association policy;
And the fourth determining subunit is used for determining the data association result according to the first association result, the second association result and the third association result.
Optionally, the first determining subunit may be specifically configured to:
traversing the to-be-associated identification numbers of all to-be-associated sensing targets in the to-be-fused sensing data and the associated identification numbers of the associated sensing targets of all fused targets in the fused target list;
if at least one identification number to be associated is the same as the associated identification number, judging whether the sensing data to be fused meets a preset check condition or not;
if yes, determining that a fusion target associated with the associated identification number with the same identification number and a perception target to be associated are a first target association pair;
taking the sensing data except for the sensing targets to be associated in the first target association pair in the sensing data to be fused as first unassociated sensing data;
taking fusion targets except the fusion targets in the first target association pair in the fusion target list as first unassociated fusion targets;
and taking the first target association pair, the first unassociated perception data and the first unassociated fusion target as a first association result.
Optionally, the second determining subunit may be specifically configured to:
Traversing the first unassociated awareness data and the first unassociated fusion target;
judging whether at least one of the frame of first unassociated sensing data and the data in the first unassociated fusion target belongs to the data fusion area;
if yes, judging whether the first unassociated sensing data meets a preset check condition;
if yes, carrying out cross-correlation ratio calculation on the first unassociated sensing data and heterogeneous sensing data in the first unassociated fusion target, and determining a first association matrix;
inputting the first association matrix into the Hungarian algorithm to obtain a second target association pair, second unassociated sensing data and a second unassociated fusion target, and taking the second target association pair, the second unassociated sensing data and the second unassociated fusion target as a second association result.
Optionally, the third determining subunit may be specifically configured to:
traversing the second unassociated awareness data and the second unassociated fusion target;
judging whether the second unassociated sensing data and the data in the second unassociated fusion target both belong to the other areas, and whether the second unassociated sensing data is radar sensing data;
if yes, judging whether the second unassociated sensing data meets a preset check condition;
If so, performing distance calculation on the second unassociated sensing data and the similar sensing data in the second unassociated fusion target, and determining a second association matrix;
inputting the second association matrix into the Hungarian algorithm to obtain a third target association pair, third unassociated sensing data and a third unassociated fusion target, and taking the third target association pair, the third unassociated sensing data and the third unassociated fusion target as a third association result.
Optionally, the fourth determining subunit may be specifically configured to:
acquiring a first target association pair in a first association result, a second target association pair in a second association result, a third target association pair in a third association result, third unassociated perception data and a third unassociated fusion target;
taking the first target association pair, the second target association pair and the third target association pair as target association pairs in the data association result;
taking the third unassociated sensing data as unassociated sensing data in the data association result;
and taking the third unassociated fusion target as the unassociated fusion target in the data association result.
Optionally, the data fusion module 33 may specifically include:
the attribute adding unit is used for updating the target attribute of the fusion target association in the fusion target list according to the target association pair in the data association result;
The target creation unit is used for creating a new fusion target according to the unassociated perceived data in the data association result and storing the new fusion target in the fusion target list;
and the target deleting unit is used for deleting the outdated fusion target in the fusion target list according to the unassociated fusion target in the data association result.
Optionally, the attribute adding unit may specifically include:
the attribute determining subunit is used for determining the attribute of the target to be added according to the to-be-fused perceived data corresponding to the target association pair;
the target determining subunit is used for determining a fusion target corresponding to the target association pair from the fusion target list as a target to be added;
and the adding subunit is used for adding the attribute of the target to be added to the target to be added.
Optionally, the attribute determination subunit may be specifically configured to:
if the to-be-fused sensing data corresponding to the target association pair is visual sensing data, determining a first occupation ratio of first sensing data in the to-be-fused sensing data according to association duration of the to-be-fused sensing data;
determining a first target attribute according to the first duty ratio and the first perception data, and taking the first target attribute and the second perception data in the perception data to be fused as target attributes to be added;
if the to-be-fused sensing data corresponding to the target association pair is radar sensing data, determining a second occupation ratio of third sensing data in the to-be-fused sensing data according to association duration of the to-be-fused sensing data;
and determining a third target attribute according to the second duty ratio and the third perception data, and taking the third target attribute as the target attribute to be added.
Optionally, the target creation unit may specifically be configured to:
determining the class of unassociated perception data;
if the unassociated perception data is the visual perception data of the target creation area or the data fusion area, creating a new fusion target and storing the new fusion target into a fusion target list;
adding the unassociated perception data as a target attribute to a new fusion target;
if the unassociated sensing data is radar sensing data, the fusion target list is not updated.
Optionally, the target deleting unit may specifically be configured to:
if the time for which the unassociated fusion target has not participated in association is greater than the set time threshold, determining that the unassociated fusion target is an expired fusion target;
and deleting the expired fusion target from the fusion target list.
Optionally, the information determining module 34 may specifically be configured to:
Acquiring associated perception data of each fusion target in a fusion target list;
and converting each associated perception data according to a set coordinate system to obtain fusion target information in the converted perception region.
The multi-sensor-based road side sensing device provided by the embodiment of the invention can execute the multi-sensor-based road side sensing method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 5 is a schematic structural diagram of an edge computing device according to a fourth embodiment of the present invention. Edge computing devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Edge computing devices may also represent various forms of mobile equipment, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing equipment. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 5, the edge computing device 40 includes at least one processor 41 and a memory communicatively connected to the at least one processor 41, such as a read-only memory (ROM) 42 and a random access memory (RAM) 43, in which a computer program executable by the at least one processor is stored. The processor 41 may perform various suitable actions and processes according to the computer program stored in the ROM 42 or the computer program loaded from the storage unit 48 into the RAM 43. Various programs and data required for the operation of the edge computing device 40 may also be stored in the RAM 43. The processor 41, the ROM 42 and the RAM 43 are connected to each other via a bus 44. An input/output (I/O) interface 45 is also connected to the bus 44.
The various components in edge computing device 40 are connected to I/O interface 45, including: an input unit 46 such as a keyboard, a mouse, etc.; an output unit 47 such as various types of displays, speakers, and the like; a storage unit 48 such as a magnetic disk, an optical disk, or the like; and a communication unit 49 such as a network card, modem, wireless communication transceiver, etc. The communication unit 49 allows the edge computing device 40 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The processor 41 may be various general and/or special purpose processing components with processing and computing capabilities. Some examples of processor 41 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 41 performs the various methods and processes described above, such as the multi-sensor-based roadside sensing method.
In some embodiments, the multi-sensor-based roadside sensing method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 48. In some embodiments, part or all of the computer program may be loaded and/or installed onto the edge computing device 40 via the ROM 42 and/or the communication unit 49. When the computer program is loaded into the RAM 43 and executed by the processor 41, one or more steps of the multi-sensor-based roadside sensing method described above may be performed. Alternatively, in other embodiments, the processor 41 may be configured to perform the multi-sensor-based roadside sensing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an edge computing device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or a trackball) through which a user can provide input to the edge computing device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (16)

1. A multi-sensor-based roadside sensing method, comprising:
obtaining to-be-fused perception data of a sensing area;
determining a data association result of the to-be-fused perception data and the fusion target list according to the to-be-fused perception data and a pre-created fusion target list;
updating the fusion target list according to the data association result;
determining fusion target information of the sensing area according to the associated perception data corresponding to each fusion target in the fusion target list;
wherein the sensing area is divided, in sequence along the target moving direction, into a target creation area, a data fusion area and other areas, the target creation area corresponding to visual perception data, the data fusion area corresponding to visual perception data and radar perception data, and the other areas corresponding to radar perception data.
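For orientation, the following minimal Python sketch shows one way the zone partition of claim 1 could be realized in software. The boundary distances, function names and the "visual"/"radar" labels are illustrative assumptions and are not taken from the patent.

```python
# A minimal sketch (not from the patent) of splitting the sensing area along the
# direction of travel into a target-creation zone (visual only), a data-fusion
# zone (visual + radar) and a far zone (radar only). Boundaries are assumed.
CREATE_ZONE_END_M = 40.0     # visual-only zone: 0-40 m from the roadside unit
FUSION_ZONE_END_M = 120.0    # visual + radar overlap: 40-120 m; beyond: radar only

def zone_of(distance_m: float) -> str:
    """Classify a detection by its distance along the target moving direction."""
    if distance_m < CREATE_ZONE_END_M:
        return "creation"
    if distance_m < FUSION_ZONE_END_M:
        return "fusion"
    return "other"

def accepted_sources(zone: str) -> set[str]:
    """Which sensor data each zone contributes, per claim 1."""
    return {"creation": {"visual"}, "fusion": {"visual", "radar"}, "other": {"radar"}}[zone]

if __name__ == "__main__":
    for d in (15.0, 80.0, 300.0):
        z = zone_of(d)
        print(f"{d:5.1f} m -> {z}: {accepted_sources(z)}")
```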
2. The method according to claim 1, wherein the determining a data association result of the to-be-fused perception data and the fusion target list according to the to-be-fused perception data and a pre-created fusion target list comprises:
if the fusion target list is empty, taking the to-be-fused perception data as unassociated perception data, and taking the unassociated perception data as the data association result of the to-be-fused perception data and the fusion target list;
and if the fusion target list is not empty, determining the data association result of the to-be-fused perception data and the fusion target list according to a preset data association strategy.
3. The method according to claim 2, wherein the determining, according to a preset data association strategy, the data association result of the to-be-fused perception data and the fusion target list comprises:
determining a first association result of the to-be-fused perception data and the fusion target list according to a first data association strategy;
if the first association result contains first unassociated perception data and a first unassociated fusion target, determining a second association result of the first unassociated perception data and the first unassociated fusion target according to a second data association strategy;
if the second association result contains second unassociated perception data and a second unassociated fusion target, determining a third association result of the second unassociated perception data and the second unassociated fusion target according to a third data association strategy;
and determining the data association result according to the first association result, the second association result and the third association result.
4. The method according to claim 3, wherein the determining a first association result of the to-be-fused perception data and the fusion target list according to a first data association strategy comprises:
traversing the to-be-associated identification numbers of all to-be-associated perception targets in the to-be-fused perception data and the associated identification numbers of the associated perception targets of all fusion targets in the fusion target list;
if at least one to-be-associated identification number is the same as an associated identification number, judging whether the to-be-fused perception data meets a preset check condition;
if yes, determining that the fusion target associated with the matching associated identification number and the corresponding to-be-associated perception target form a first target association pair;
taking the perception data in the to-be-fused perception data other than the to-be-associated perception targets in the first target association pair as first unassociated perception data;
taking the fusion targets in the fusion target list other than the fusion targets in the first target association pair as first unassociated fusion targets;
and taking the first target association pair, the first unassociated perception data and the first unassociated fusion targets as the first association result.
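As a rough illustration of the first data association strategy (matching by detector-assigned identification numbers), a sketch such as the following could be used; the dictionary-based data layout, the function names and the abstract check callable are assumptions, not part of the claims.

```python
def first_association(detections, fusion_targets, check):
    """ID-based matching in the spirit of claim 4 (illustrative sketch only).

    detections    : list of dicts carrying an 'id' assigned by the upstream detector/tracker
    fusion_targets: list of dicts, each remembering the 'id' of its associated detection
    check         : callable standing in for the 'preset check condition' (left abstract)
    """
    pairs, used_det, used_tgt = [], set(), set()
    by_id = {t["id"]: i for i, t in enumerate(fusion_targets)}
    for di, det in enumerate(detections):
        ti = by_id.get(det["id"])
        if ti is not None and ti not in used_tgt and check(det):
            pairs.append((di, ti))                 # first target association pair
            used_det.add(di)
            used_tgt.add(ti)
    unassoc_det = [d for i, d in enumerate(detections) if i not in used_det]
    unassoc_tgt = [t for i, t in enumerate(fusion_targets) if i not in used_tgt]
    return pairs, unassoc_det, unassoc_tgt         # the 'first association result'
```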
5. The method according to claim 3, wherein the determining a second association result of the first unassociated perception data and the first unassociated fusion target according to a second data association strategy comprises:
traversing the first unassociated perception data and the first unassociated fusion target;
judging whether at least one frame of the first unassociated perception data, or data in the first unassociated fusion target, belongs to the data fusion area;
if yes, judging whether the first unassociated perception data meets a preset check condition;
if yes, performing intersection-over-union (IoU) calculation on the first unassociated perception data and heterogeneous perception data in the first unassociated fusion target to determine a first association matrix;
and inputting the first association matrix into a Hungarian algorithm to obtain a second target association pair, second unassociated perception data and a second unassociated fusion target, and taking the second target association pair, the second unassociated perception data and the second unassociated fusion target as the second association result.
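The claim pairs an overlap-based cost matrix with the Hungarian algorithm. A minimal sketch of that combination, assuming axis-aligned boxes, SciPy's linear_sum_assignment as the Hungarian solver and an illustrative IoU gate of 0.3 (none of which are specified by the patent), might look like this:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2); an assumed box format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def second_association(det_boxes, target_boxes, min_iou=0.3):
    """IoU matrix + Hungarian assignment, in the spirit of claim 5 (sketch only)."""
    if not det_boxes or not target_boxes:
        return [], list(range(len(det_boxes))), list(range(len(target_boxes)))
    cost = np.array([[1.0 - iou(d, t) for t in target_boxes] for d in det_boxes])
    rows, cols = linear_sum_assignment(cost)          # minimise total (1 - IoU)
    pairs = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]
    matched_d = {r for r, _ in pairs}
    matched_t = {c for _, c in pairs}
    unassoc_det = [i for i in range(len(det_boxes)) if i not in matched_d]
    unassoc_tgt = [j for j in range(len(target_boxes)) if j not in matched_t]
    return pairs, unassoc_det, unassoc_tgt
```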
6. The method according to claim 3, wherein the determining a third association result of the second unassociated perception data and the second unassociated fusion target according to a third data association strategy comprises:
traversing the second unassociated perception data and the second unassociated fusion target;
judging whether the second unassociated perception data and data in the second unassociated fusion target belong to the other areas, wherein the second unassociated perception data is radar perception data;
if yes, judging whether the second unassociated perception data meets a preset check condition;
if yes, performing distance calculation on the second unassociated perception data and homogeneous perception data in the second unassociated fusion target to determine a second association matrix;
and inputting the second association matrix into a Hungarian algorithm to obtain a third target association pair, third unassociated perception data and a third unassociated fusion target, and taking the third target association pair, the third unassociated perception data and the third unassociated fusion target as the third association result.
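For the radar-only third strategy, the association matrix is built from point distances rather than box overlap. A comparable sketch, assuming 2-D points in metres and an illustrative 3 m gate (both assumptions, not patent values), is shown below:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def third_association(radar_points, target_points, max_dist_m=3.0):
    """Euclidean-distance matrix + Hungarian assignment for radar-only data,
    in the spirit of claim 6 (illustrative sketch only)."""
    if len(radar_points) == 0 or len(target_points) == 0:
        return [], list(range(len(radar_points))), list(range(len(target_points)))
    r = np.asarray(radar_points, dtype=float)
    t = np.asarray(target_points, dtype=float)
    cost = np.linalg.norm(r[:, None, :] - t[None, :, :], axis=-1)  # 'second association matrix'
    rows, cols = linear_sum_assignment(cost)
    pairs = [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist_m]
    matched_r = {i for i, _ in pairs}
    matched_t = {j for _, j in pairs}
    unassoc_r = [i for i in range(len(r)) if i not in matched_r]
    unassoc_t = [j for j in range(len(t)) if j not in matched_t]
    return pairs, unassoc_r, unassoc_t
```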
7. The method according to claim 3, wherein the determining the data association result according to the first association result, the second association result and the third association result comprises:
acquiring the first target association pair in the first association result, the second target association pair in the second association result, and the third target association pair, the third unassociated perception data and the third unassociated fusion target in the third association result;
taking the first target association pair, the second target association pair and the third target association pair as target association pairs in the data association result;
taking the third unassociated perception data as unassociated perception data in the data association result;
and taking the third unassociated fusion target as an unassociated fusion target in the data association result.
8. The method according to claim 1, wherein the updating the fusion target list according to the data association result comprises:
updating target attributes of the associated fusion targets in the fusion target list according to the target association pairs in the data association result;
creating new fusion targets according to the unassociated perception data in the data association result and storing the new fusion targets into the fusion target list;
and deleting expired fusion targets from the fusion target list according to the unassociated fusion targets in the data association result.
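Claims 8, 11 and 12 together describe maintenance of the fusion target list: update matched targets, create new targets only from unassociated visual data in the creation or fusion zone, and drop targets that have gone unassociated for too long. A hedged sketch of that bookkeeping, with an assumed dictionary layout and an illustrative 1 s expiry threshold, follows:

```python
import time

EXPIRY_S = 1.0                    # illustrative value for the claim-12 time threshold
_next_id = 0

def update_fusion_target_list(targets, assoc_result, now=None):
    """Fusion-target-list maintenance in the spirit of claims 8, 11 and 12 (sketch only).

    assoc_result is assumed to carry: .pairs (matched (detection, target) dict pairs),
    .unassoc_detections and .unassoc_targets, mirroring the claimed data association result.
    """
    global _next_id
    now = time.time() if now is None else now

    # 1) update attributes of associated fusion targets (claim 8, first step)
    for det, tgt in assoc_result.pairs:
        tgt["attributes"].update(det["attributes"])
        tgt["last_associated"] = now

    # 2) create new fusion targets only from unassociated visual data
    #    in the creation or fusion zone (claim 11)
    for det in assoc_result.unassoc_detections:
        if det["source"] == "visual" and det["zone"] in ("creation", "fusion"):
            _next_id += 1
            targets.append({"id": _next_id,
                            "attributes": dict(det["attributes"]),
                            "last_associated": now})

    # 3) delete fusion targets that have not been associated for too long (claim 12)
    targets[:] = [t for t in targets if now - t["last_associated"] <= EXPIRY_S]
```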
9. The method according to claim 8, wherein the updating target attributes of the associated fusion targets in the fusion target list according to the target association pairs in the data association result comprises:
determining a target attribute to be added according to the to-be-fused perception data corresponding to the target association pair;
determining the fusion target corresponding to the target association pair from the fusion target list as a target to be added;
and adding the target attribute to be added to the target to be added.
10. The method according to claim 9, wherein the determining a target attribute to be added according to the to-be-fused perception data corresponding to the target association pair in the data association result comprises:
if the target association pair corresponds to the to-be-fused perception data, determining a first proportion of first perception data in the to-be-fused perception data according to the association duration of the to-be-fused perception data;
determining a first target attribute according to the first proportion and the first perception data, and taking the first target attribute and second perception data in the to-be-fused perception data as the target attributes to be added;
if the to-be-fused perception data corresponding to the target association pair is radar perception target data, determining a second proportion of third perception data in the to-be-fused perception data according to the association duration of the to-be-fused perception data;
and determining a third target attribute according to the second proportion and the third perception data, and taking the third target attribute as the target attribute to be added.
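Claim 10 leaves the exact use of the association-duration proportion open. One plausible reading, offered purely as an assumption, is a duration-weighted blend of two sources of the same attribute; the ramp length and linear weighting below are not stated anywhere in the patent.

```python
def blended_attribute(value_a, value_b, assoc_duration_s, ramp_s=2.0):
    """One assumed interpretation of claim 10: the weight of one perception source
    grows with how long the target has been associated, blending, for example,
    two estimates of the same speed or position attribute."""
    w = min(assoc_duration_s / ramp_s, 1.0)        # the 'first proportion'
    return w * value_a + (1.0 - w) * value_b

# e.g. blended_attribute(12.0, 13.5, assoc_duration_s=0.5) == 13.125  (w = 0.25)
```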
11. The method according to claim 8, wherein the creating new fusion targets according to the unassociated perception data in the data association result and storing the new fusion targets into the fusion target list comprises:
determining a category of the unassociated perception data;
if the unassociated perception data is visual perception data of the target creation area or the data fusion area, creating a new fusion target and storing the new fusion target into the fusion target list;
adding the unassociated perception data as a target attribute to the new fusion target;
and if the unassociated perception data is radar perception target data, not updating the fusion target list.
12. The method according to claim 8, wherein the deleting expired fusion targets from the fusion target list according to the unassociated fusion targets in the data association result comprises:
if the time for which an unassociated fusion target has not participated in association is greater than a set time threshold, determining that the unassociated fusion target is an expired fusion target;
and deleting the expired fusion target from the fusion target list.
13. The method according to claim 1, wherein the determining fusion target information of the sensing area according to the associated perception data corresponding to each fusion target in the fusion target list comprises:
acquiring the associated perception data of each fusion target in the fusion target list;
and converting each piece of associated perception data according to a set coordinate system to obtain the converted fusion target information of the sensing area.
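Claim 13 only requires converting the associated perception data into a set coordinate system; the patent does not say which one. As an assumed example, a 2-D rigid transform from a sensor-local frame into a common map frame could look like this:

```python
import math

def sensor_to_map(x_s, y_s, sensor_x, sensor_y, sensor_yaw_rad):
    """Convert a point from a sensor-local frame to a common map frame by a 2-D
    rigid transform. The 'set coordinate system' of claim 13 is not specified;
    a map/world frame is assumed here purely for illustration."""
    c, s = math.cos(sensor_yaw_rad), math.sin(sensor_yaw_rad)
    x_m = sensor_x + c * x_s - s * y_s
    y_m = sensor_y + s * x_s + c * y_s
    return x_m, y_m

# e.g. a radar return 10 m ahead of a unit at (100, 50) facing 90 degrees:
# sensor_to_map(10.0, 0.0, 100.0, 50.0, math.pi / 2)  ->  approximately (100.0, 60.0)
```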
14. A multi-sensor-based roadside sensing device, comprising:
a data acquisition module, used for obtaining to-be-fused perception data of a sensing area;
a target association module, used for determining a data association result of the to-be-fused perception data and the fusion target list according to the to-be-fused perception data and a pre-created fusion target list;
a data fusion module, used for updating the fusion target list according to the data association result;
and an information determining module, used for determining fusion target information of the sensing area according to the associated perception data corresponding to each fusion target in the fusion target list;
wherein the sensing area is divided, in sequence along the target moving direction, into a target creation area, a data fusion area and other areas, the target creation area corresponding to visual perception data, the data fusion area corresponding to visual perception data and radar perception data, and the other areas corresponding to radar perception data.
15. An edge computing device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the multi-sensor-based roadside sensing method of any one of claims 1-13.
16. A storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the multi-sensor-based roadside sensing method of any one of claims 1-13.
CN202310899881.4A 2023-07-21 2023-07-21 Road side sensing method, device, equipment and medium based on multiple sensors Pending CN116935640A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310899881.4A CN116935640A (en) 2023-07-21 2023-07-21 Road side sensing method, device, equipment and medium based on multiple sensors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310899881.4A CN116935640A (en) 2023-07-21 2023-07-21 Road side sensing method, device, equipment and medium based on multiple sensors

Publications (1)

Publication Number Publication Date
CN116935640A true CN116935640A (en) 2023-10-24

Family

ID=88374991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310899881.4A Pending CN116935640A (en) 2023-07-21 2023-07-21 Road side sensing method, device, equipment and medium based on multiple sensors

Country Status (1)

Country Link
CN (1) CN116935640A (en)

Similar Documents

Publication Publication Date Title
CN109087510B (en) Traffic monitoring method and device
EP3951741B1 (en) Method for acquiring traffic state, relevant apparatus, roadside device and cloud control platform
CN114758502B (en) Dual-vehicle combined track prediction method and device, electronic equipment and automatic driving vehicle
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
JP2021099877A (en) Method, device, apparatus and storage medium for reminding travel on exclusive driveway
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
JP2021168174A (en) Method and apparatus for identifying vehicle alignment information, electronic device, roadside device, cloud control platform, storage medium, and computer program product
WO2023142814A1 (en) Target recognition method and apparatus, and device and storage medium
CN115879060B (en) Multi-mode-based automatic driving perception method, device, equipment and medium
CN114842445A (en) Target detection method, device, equipment and medium based on multi-path fusion
US20220172295A1 (en) Systems, methods, and devices for aggregating and quantifying telematics data
CN115953434B (en) Track matching method, track matching device, electronic equipment and storage medium
CN112507964B (en) Detection method and device for lane-level event, road side equipment and cloud control platform
CN114429631B (en) Three-dimensional object detection method, device, equipment and storage medium
Xiong et al. Fast and robust approaches for lane detection using multi‐camera fusion in complex scenes
CN116935640A (en) Road side sensing method, device, equipment and medium based on multiple sensors
CN115346374B (en) Intersection holographic perception method and device, edge computing equipment and storage medium
CN117962930B (en) Unmanned vehicle control method and device, unmanned vehicle and computer readable storage medium
CN115330042B (en) Conflict point determination method, device, equipment and readable storage medium
Dai Semantic Detection of Vehicle Violation Video Based on Computer 3D Vision
CN113721235B (en) Object state determining method, device, electronic equipment and storage medium
WO2023221848A1 (en) Vehicle starting behavior prediction method and apparatus, storage medium, and program product
CN118015559A (en) Object identification method and device, electronic equipment and storage medium
CN113807127B (en) Personnel archiving method and device and electronic equipment
CN118279860A (en) Method and device for determining non-passable area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination