CN113379805B - Multi-information resource fusion processing method for traffic nodes - Google Patents
- Publication number: CN113379805B (application CN202110922670.9A)
- Authority: CN (China)
- Prior art keywords: target, fusion, data, vehicle, tracking
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; no legal analysis has been performed)
Classifications
- G06T7/246, G06T7/248: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments; involving reference images or patches
- G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/22: Matching criteria, e.g. proximity measures
- G06F18/25: Fusion techniques
- G06T7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G06T7/73, G06T7/74: Determining position or orientation of objects or cameras using feature-based methods; involving reference images or patches
- G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
- G06T2207/10032, G06T2207/10044: Satellite or aerial image, remote sensing; radar image
- G06T2207/20081: Training; learning
- G06T2207/30241: Trajectory
- G06T2207/30244: Camera pose
- G06T2207/30248, G06T2207/30252: Vehicle exterior or interior; vehicle exterior, vicinity of vehicle
Abstract
The invention provides a multi-information resource fusion processing method for traffic nodes, together with a computer and a storage medium, and belongs to the technical field of artificial intelligence. First, each point-location sensor in the area performs target detection, and the sensed target coordinates are restored to the world coordinate system of that point location. Second, on the basis of time synchronization, the perception targets are fused into the tracking-target queue of the previous fusion time point in that coordinate system, updating the tracking queue. Next, the detection results of different point locations are matched to form target-perception fusion across the whole area. Finally, the fused data are converted from the GPS coordinate system to a coordinate system centered on the vehicle body, and early warning of surrounding obstacles is performed. By fusing the sensing results of the sensors, the method endows the fused traffic-participant portrait with as much information as possible, merges sensor data deployed at different positions in the area into a unified coordinate system, and provides that data to vehicle-mounted equipment.
Description
Technical Field
The application relates to an information resource fusion processing method, in particular to a multi-information resource fusion processing method for traffic nodes, and belongs to the technical field of artificial intelligence.
Background
The development of intelligent transportation has driven the intelligent transformation and upgrading of traffic infrastructure; today, more and more intersections are equipped with various perception devices, including cameras, laser radars, and millimeter-wave radars. However, the data received by these devices are isolated from one another: without effective combination they form information islands, and because each kind of sensing data has its own weaknesses and failure conditions, large-scale sensor construction cannot achieve the expected effect. Sensor fusion therefore needs to be realized in the traffic scene.
At present, multi-sensor fusion methods in traffic scenes are mostly vehicle-mounted schemes, because in automatic driving, adaptive cruise, active braking, lane keeping and the like are all realized through sensing devices such as image sensors, millimeter-wave radar, and laser radar. Intelligent sensing is the premise of intelligent decision-making, and in order to provide unambiguous sensing information to the decision layer, the sensor data must be fused before output.
In the data fusion of roadside sensors, the existing patented technology still has great defects, mainly because the positioning of roadside perception is currently vague and its advantages have not been exploited. Roadside sensors deployed on a large scale need to be fused; the main scene is an intersection with heavy traffic flow, where traffic participants move in complex ways and there is a large amount of mixed pedestrian-vehicle flow and occlusion. Human drivers, as well as the laser radars and cameras of unmanned vehicles, may have blind areas in their field of view, which can cause accidents. If the roadside sensors perceive and fuse the coordinates of recognized foreground obstacles, convert them into GPS coordinates, and push them to vehicles, each vehicle can determine its own position through high-precision differential positioning and restore the intersection obstacle information, achieving full-scene awareness of the intersection and early warning of dangerous situations.
CN202010361534 discloses a traffic behavior fusion system based on multi-source heterogeneous sensor information, which comprises an information acquisition unit, a multi-source information database, a traffic behavior matching unit, a behavior fusion unit and a traffic behavior output unit. The fusion method based on the fusion system utilizes various existing sensors as the acquisition equipment of the multi-source information, collects the detected traffic behaviors to a multi-source information database, and the multi-source information database carries out data time-space synchronization and redundancy processing on the acquired multi-source information to distinguish the detection target and the traffic behaviors thereof; and then, the traffic behaviors detected by the information sources are subjected to behavior fusion, so that the current traffic behavior is further determined, the speed and the accuracy of traffic behavior identification are improved, and the adaptability of the traffic behavior in a poor detection environment is enhanced.
The core idea of that technical scheme is to take the behavior category and probability of a detected target, within a certain time period of the target information acquired by each sensor, as combined input to the database, to correspond them with the results of other sensors, and to calculate a joint event probability. However, this method is poorly implementable: the behavior itself occupies a time range, and the time ranges in which different detectors detect the same event of the same target often fail to correspond. Moreover, each sensor needs its own algorithm program to realize the flow of target detection, target tracking, and event judgment, and due to the characteristics of the sensors, some of them cannot judge certain events at all, so fusion cannot be realized.
The prior art has the following two problems:
problem 1: how to grasp the motion state and trend of the traffic participants at the whole intersection, or even the whole area, through sensor deployment, and how to endow the fused traffic-participant portrait with as much information as possible by fusing the sensing results of the sensors.
Problem 2: how to incorporate sensor data deployed at different locations within an area into a unified coordinate system and provide the data to an on-board device.
Therefore, the invention provides a method for processing fusion of multiple information resources of traffic nodes to solve the problems in the prior art.
Disclosure of Invention
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. It should be understood that this summary is not an exhaustive overview of the invention. It is not intended to determine the key or critical elements of the present invention, nor is it intended to limit the scope of the present invention. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is discussed later.
In view of this, in order to solve the technical problems in the prior art that sensor data are mutually isolated and that roadside perception cannot be provided to vehicles in a unified coordinate system, the invention provides a multi-information resource fusion processing method for traffic nodes, which comprises the following steps:
s110, detecting the perception targets of the sensors at the intersection, and restoring the perception targets to a world coordinate system with the center of the intersection as an origin;
S120, on the basis of time synchronization, fusing the perception targets into the tracking-target queue of the previous fusion time point, thereby updating the tracking queue;
S130, performing time compensation on the tracking targets of all point locations so that they are synchronized to the same time point, then converting the position and direction angle of each current tracking target from the local coordinate system into the GPS coordinate system, the conversion being carried out by a vector method based on the local coordinates and the corresponding GPS coordinates of two calibration points, and finally matching the detection results of different point locations in the overlapping area to form target-perception fusion across the whole area;
s140, packaging the data of the tracking target queue and broadcasting the data through roadside vehicle-road cooperative equipment;
S150, receiving the roadside fusion data, acquiring the vehicle's own accurate positioning and pose information based on high-precision GPS positioning equipment, converting the fused data from the GPS coordinate system into the coordinate system centered on the vehicle body, and carrying out early warning of surrounding obstacles.
Preferably, in step S110, the specific method for detecting the sensing target of each sensor at the intersection is:
the image target detection method based on deep learning detects the image frames acquired by the camera and predicts the position of the center point of each object; based on the camera's calibration parameters and installation pose, the center point of the detection result is restored to the world coordinate system, and the structured result data together with the corresponding timestamp are encapsulated and pushed to the fusion module;
preferably, in step S110, the specific method for detecting the sensing target of each sensor at the intersection is:
the laser radar target detection method based on deep learning detects the point cloud set acquired by the laser radar, estimates the object type, and obtains the three-dimensional bounding box and accurate position of each detected object; based on the installation parameters of the laser radar, the bounding box of the detection result is restored to the world coordinate system, and the structured result data together with the corresponding timestamp are encapsulated and pushed to the fusion module;
preferably, in step S110, the specific method for detecting the sensing target of each sensor at the intersection is:
the millimeter-wave radar and the ultrasonic sensor acquire the angle and depth information of a target relative to the equipment; based on the equipment installation parameters, the center point of the detection result is restored to the world coordinate system, and the structured result data together with the corresponding timestamp are encapsulated and pushed to the fusion module.
Preferably, in step S120, on the basis of time synchronization, the perception targets are fused into the tracking-target queue of the previous fusion time point, and the specific method for updating the tracking queue comprises the following steps:
s210, receiving the data frame acquired from the sensing device, and putting the latest sensing target into the unmatched sensing target queue;
S220, comparing the unmatched perception target queue with the current tracking queue, matching pairwise in space, taking out the overlapping targets to update the corresponding tracking targets, and moving them into the matched tracking queue;
S230, generating a bipartite-graph association matrix between the remaining perception targets and tracking targets, wherein the association value R is a fusion of the spatial distance between the two targets, their inter-class difference, and their area difference after mapping to the two-dimensional plane, for example a weighted sum of the three terms whose weights are self-defined parameters with values chosen through experiments;
S240, taking the association distance threshold Thresh as the maximum matching threshold, matching the association matrix by the Hungarian algorithm, and then processing the unmatched results: unmatched tracking targets are taken out and stored in the unmatched tracking-target queue, and unmatched perception targets are taken out and stored in the unmatched perception-target queue;
S260, applying the KM algorithm to the association matrix so that the sum of the association values R is minimal after the perception targets and the tracking targets are completely matched, and updating and storing the matched tracking targets;
S270, deleting any tracking target that has remained unmatched for longer than a certain time threshold; otherwise, updating the tracking target based on three-dimensional-space Kalman filtering and keeping it in the tracking queue;
Preferably, in step S130, the specific method for matching the detection results of different point locations in the overlapping region to form target-perception fusion across the whole region comprises the following steps:
S310, the regional data fusion module obtains the latest data frame of the data pushed by each local point-location fusion module, and a new timestamp is recorded;
S330, based on the new timestamp, carrying out state estimation on the tracks of the past 5 time slices carried by each tracking target, realizing coordinate correction and reducing multi-module data fusion errors;
S340, transferring the positions of all current tracking targets into the GPS coordinate system;
S350, performing voxel allocation and labeling for all tracking results in space, wherein a tracking target lying on the boundary of several voxels is given the labels of all those voxels; two tracking targets bearing the same label are compared for spatial overlap, and when their intersection-over-union (IoU) value is larger than the configured threshold, the two targets are considered the same target and are merged in the regional fusion;
preferably, the specific method for encapsulating and broadcasting the data of the tracking target queue through the roadside vehicle-road cooperative device in S140 is as follows: and encapsulating all the target attributes into a data frame, broadcasting the data frame to the vehicle-road cooperative vehicle-mounted equipment, and analyzing the data frame by the vehicle-mounted end.
Preferably, in step S150, the fused data are converted from the GPS coordinate system into the coordinate system centered on the vehicle body, and the specific method for early warning of surrounding obstacles comprises the following steps:
S510, the vehicle acquires the current GPS position and direction angle through high-precision positioning equipment and records them in a trajectory queue;
s520, acquiring real-time acceleration and speed information of the vehicle through the vehicle-mounted IMU unit;
S530, taking the position and direction angle acquired by the current positioning equipment, the position and direction angle predicted from the vehicle trajectory, and the position and direction angle predicted from the IMU data (three state-prediction models) as fusion items, collecting the driving data of the vehicle in different scenes, and establishing a data set;
S540, based on the driving data set, training a fusion model with a blending model-fusion method to predict the vehicle's current position-state information;
S550, with the vehicle's own position and pose known, establishing a coordinate system centered on the vehicle itself, screening the surrounding targets, converting the fusion results obtained from the roadside into this coordinate system, and displaying them in the vehicle-mounted equipment; combined with a high-precision map carried by the vehicle or acquired from the RSU (Road Side Unit) end, dead-angle-free sensing and early warning of the surrounding environment is realized.
A computer comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the multi-information resource fusion processing method of the traffic node when executing the computer program.
A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements a multiple information resource fusion processing method of a traffic node.
The invention has the following beneficial effects: for intersections and key road sections where traffic accidents easily occur, the scheme realizes the perception of traffic participants through the deployment of perception equipment, realizes accurate tracking through the fusion of the perception information in the fusion module, and finally, through the design of the calibration and fusion scheme, converts the position, direction angle, and other attributes of all traffic participants into the GPS coordinate system and broadcasts them to surrounding vehicles, thereby providing the vehicles with beyond-line-of-sight perception information.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of step two according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of step three according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of step five according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a fusion process according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions and advantages of the embodiments of the present application more apparent, the following further detailed description of the exemplary embodiments of the present application with reference to the accompanying drawings makes it clear that the described embodiments are only a part of the embodiments of the present application, and are not exhaustive of all embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
In a first embodiment, the present embodiment is described with reference to fig. 1 to 5, and a method for processing multiple information resources fusion of a traffic node in the first embodiment includes the following steps:
s110, detecting the perception targets of the sensors at the intersection, and restoring the perception targets to a world coordinate system with the center of the intersection as an origin;
Specifically, the sensor comprises a camera, a laser radar, a millimeter wave radar and other equipment;
specifically, edge computing nodes are used to perform target detection for each point-location sensor in the area; the detected targets are the perception targets.
The present embodiment is illustrated by the following three target detection methods, but is not limited to the following three methods:
the image target detection method based on deep learning detects the image frames acquired by the camera and predicts the position of the center point of each object; based on the camera's calibration parameters and installation pose, the center point of the detection result is restored to the world coordinate system, and the structured result data together with the corresponding timestamp are encapsulated and pushed to the fusion module;
or, the laser radar target detection method based on deep learning detects the point cloud set acquired by the laser radar, estimates the object type, and obtains the three-dimensional bounding box and accurate position of each detected object; based on the installation parameters of the laser radar, the bounding box of the detection result is restored to the world coordinate system, and the structured result data together with the corresponding timestamp are encapsulated and pushed to the fusion module;
or, the millimeter-wave radar and the ultrasonic sensor acquire the angle and depth information of a target relative to the equipment; based on the equipment installation parameters, the center point of the detection result is restored to the world coordinate system, and the structured result data together with the corresponding timestamp are encapsulated and pushed to the fusion module.
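The restoration of an angle-and-depth detection to the point-location world coordinate system can be sketched as follows; the planar (2-D) simplification, the function name, and the sensor pose values are illustrative and not part of the patented method:

```python
import math

def radar_to_world(depth_m, azimuth_deg, sensor_x, sensor_y, sensor_yaw_deg):
    """Restore an (angle, depth) detection to the point-location world
    frame using the device installation parameters (position and yaw).
    A 2-D simplification of the restoration described above."""
    theta = math.radians(sensor_yaw_deg + azimuth_deg)
    return (sensor_x + depth_m * math.cos(theta),
            sensor_y + depth_m * math.sin(theta))

# A target 10 m straight ahead of a sensor mounted at the origin facing +y (90 deg):
x, y = radar_to_world(10.0, 0.0, 0.0, 0.0, 90.0)
```

The same pattern applies to the camera and laser radar branches, with the calibration parameters replacing the simple yaw angle used here.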
S120, on the basis of time synchronization, fusing the perception targets into the tracking-target queue of the previous fusion time point, thereby updating the current tracking queue;
specifically, the method comprises the following steps:
s210, receiving the data frame acquired from the sensing device, and putting the latest sensing target into the unmatched sensing target queue;
S220, comparing the unmatched perception target queue with the current tracking queue, matching pairwise in space, taking out the overlapping targets to update the corresponding tracking targets, and moving them into the matched tracking queue;
S230, generating a bipartite-graph association matrix between the remaining perception targets and tracking targets, wherein the association value R is a fusion of the spatial distance between the two targets, their inter-class difference, and their area difference after mapping to the two-dimensional plane, for example a weighted sum of the three terms whose weights are self-defined parameters with values chosen through experiments;
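A minimal sketch of such an association value, assuming the weighted-sum form; the weight values and field names are hypothetical stand-ins for the patent's experimentally tuned self-defined parameters:

```python
import math

# Hypothetical weights standing in for the experimentally tuned parameters.
ALPHA, BETA, GAMMA = 1.0, 5.0, 2.0

def association_value(det, trk):
    """Association value R: weighted fusion of the spatial distance, the
    inter-class difference, and the 2-D area difference between a
    perception target and a tracking target."""
    d_space = math.dist(det["xyz"], trk["xyz"])
    d_class = 0.0 if det["cls"] == trk["cls"] else 1.0
    d_area = abs(det["area"] - trk["area"])
    return ALPHA * d_space + BETA * d_class + GAMMA * d_area

det = {"xyz": (1.0, 2.0, 0.0), "cls": "car", "area": 8.0}
trk = {"xyz": (1.5, 2.0, 0.0), "cls": "car", "area": 8.5}
r = association_value(det, trk)  # 1.0*0.5 + 5.0*0.0 + 2.0*0.5
```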
S240, taking the association distance threshold Thresh as the maximum matching threshold, matching the association matrix by the Hungarian algorithm, and then processing the unmatched results: unmatched tracking targets are taken out and stored in the unmatched tracking-target queue, and unmatched perception targets are taken out and stored in the unmatched perception-target queue;
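The gated matching step can be illustrated as follows. The patent applies the Hungarian/KM algorithm to the association matrix; this brute-force version (adequate for a handful of targets, with comparably sized queues) shows the same gating and unmatched-queue logic:

```python
from itertools import permutations

def min_cost_match(cost, max_cost):
    """Minimum-cost detection-to-track assignment with a gating threshold.
    Returns the matched (det, trk) index pairs plus the unmatched
    detection and track indices."""
    n_det = len(cost)
    n_trk = len(cost[0]) if cost else 0
    best_pairs, best_score = [], float("inf")
    for perm in permutations(range(n_trk), min(n_det, n_trk)):
        # keep only pairs whose association value passes the threshold
        pairs = [(i, j) for i, j in enumerate(perm) if cost[i][j] <= max_cost]
        # prefer more matches first, then lower total cost
        score = sum(cost[i][j] for i, j in pairs) - 1e6 * len(pairs)
        if score < best_score:
            best_score, best_pairs = score, pairs
    matched_d = {i for i, _ in best_pairs}
    matched_t = {j for _, j in best_pairs}
    return (best_pairs,
            [i for i in range(n_det) if i not in matched_d],   # unmatched detections
            [j for j in range(n_trk) if j not in matched_t])   # unmatched tracks

cost = [[0.2, 9.0], [9.0, 0.3]]
pairs, unmatched_det, unmatched_trk = min_cost_match(cost, max_cost=1.0)
```

In a production system the inner search would be replaced by a proper Hungarian/KM implementation, which scales polynomially instead of factorially.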
S260, applying the KM algorithm to the association matrix so that the sum of the association values R is minimal after the perception targets and the tracking targets are completely matched, and updating and storing the matched tracking targets;
S270, deleting any tracking target that has remained unmatched for longer than a certain time threshold; otherwise, updating the tracking target based on three-dimensional-space Kalman filtering and keeping it in the tracking queue;
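The Kalman update in S270 can be sketched per axis; the patent's filter operates in three-dimensional space, while this one-axis constant-velocity version (with illustrative noise values) conveys the predict/update cycle:

```python
class Axis1DKalman:
    """Constant-velocity Kalman filter for one axis.  The 3-D update can
    be approximated by one such filter per axis; q and r are
    illustrative process/measurement noise values."""
    def __init__(self, x0, q=0.01, r=0.25):
        self.x, self.v = x0, 0.0                    # position, velocity
        self.p = [[1.0, 0.0], [0.0, 1.0]]           # state covariance
        self.q, self.r = q, r

    def step(self, z, dt):
        # predict with constant-velocity model F = [[1, dt], [0, 1]]
        x = self.x + self.v * dt
        p00 = self.p[0][0] + dt * (self.p[0][1] + self.p[1][0]) + dt * dt * self.p[1][1] + self.q
        p01 = self.p[0][1] + dt * self.p[1][1]
        p10 = self.p[1][0] + dt * self.p[1][1]
        p11 = self.p[1][1] + self.q
        # update with position measurement z (H = [1, 0])
        k0 = p00 / (p00 + self.r)
        k1 = p10 / (p00 + self.r)
        innov = z - x
        self.x = x + k0 * innov
        self.v = self.v + k1 * innov
        self.p = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x

f = Axis1DKalman(0.0)
for _ in range(40):          # stationary target repeatedly measured at x = 5 m
    est = f.step(5.0, dt=0.1)
```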
Because the coverage of the perception equipment is limited in most cases, a local point-location fusion module can only fuse the data of the perception equipment at the same point location, and multiple fusion-module instances may exist in the area. The results therefore need to be merged at the roadside vehicle-road cooperative equipment end to realize the conversion and pushing of the data:
S130, performing time compensation on the tracking targets of all point locations so that they are synchronized to the same time point, then converting the position and direction angle of each current tracking target from the local coordinate system into the GPS coordinate system, the conversion being carried out by a vector method based on the local coordinates and the corresponding GPS coordinates of two calibration points, and finally matching the detection results of different point locations in the overlapping area to form target-perception fusion across the whole area;
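The two-point vector method can be sketched as follows. Under a small-area flat-earth assumption, the two calibration points fix the rotation, scale, and translation between the local frame and GPS; the point names, the (lat, lon) ordering, and the metres-per-degree constant are illustrative:

```python
import math

M_PER_DEG_LAT = 111_320.0  # approximate metres per degree of latitude

def make_local_to_gps(a_local, a_gps, b_local, b_gps):
    """Build a local-to-GPS converter from two calibration points A and B,
    each given in the local frame and in (lat, lon)."""
    m_per_deg_lon = M_PER_DEG_LAT * math.cos(math.radians(a_gps[0]))
    # calibration vector A->B in the local frame and in metres east/north
    lx, ly = b_local[0] - a_local[0], b_local[1] - a_local[1]
    ge = (b_gps[1] - a_gps[1]) * m_per_deg_lon
    gn = (b_gps[0] - a_gps[0]) * M_PER_DEG_LAT
    ang = math.atan2(gn, ge) - math.atan2(ly, lx)   # rotation local -> east/north
    scale = math.hypot(ge, gn) / math.hypot(lx, ly)
    c, s = math.cos(ang), math.sin(ang)

    def to_gps(p):
        dx, dy = p[0] - a_local[0], p[1] - a_local[1]
        e = scale * (c * dx - s * dy)
        n = scale * (s * dx + c * dy)
        return (a_gps[0] + n / M_PER_DEG_LAT, a_gps[1] + e / m_per_deg_lon)

    return to_gps

to_gps = make_local_to_gps((0.0, 0.0), (40.0, 116.0), (100.0, 50.0), (40.001, 116.001))
lat_b, lon_b = to_gps((100.0, 50.0))   # should reproduce calibration point B's GPS fix
```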
specifically, the method comprises the following steps:
S310, the regional data fusion module obtains the latest data frame of the data pushed by each local point-location fusion module, and a new timestamp is recorded;
S330, based on the new timestamp, carrying out state estimation on the tracks of the past 5 time slices carried by each tracking target, realizing coordinate correction and reducing multi-module data fusion errors;
S340, transferring the positions of all current tracking targets into the GPS coordinate system;
S350, because the point locations partially cover one another, performing voxel allocation and labeling for all tracking results in space, wherein a tracking target lying on the boundary of several voxels is given the labels of all those voxels; two tracking targets bearing the same label are compared for spatial overlap, and when their intersection-over-union (IoU) value is larger than the configured threshold, the two targets are considered the same target and are merged in the regional fusion;
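The overlap comparison in S350 reduces to a standard IoU computation; a 2-D axis-aligned sketch (the threshold value here is a hypothetical configuration):

```python
def iou(box_a, box_b):
    """Axis-aligned intersection-over-union of two (x1, y1, x2, y2)
    boxes on the two-dimensional plane."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

IOU_MERGE_THRESHOLD = 0.5  # hypothetical configured threshold
# Two tracks bearing the same voxel label, offset by half a metre:
same_target = iou((0, 0, 4, 4), (0.5, 0.5, 4.5, 4.5)) > IOU_MERGE_THRESHOLD
```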
s140, packaging the data of the tracking target queue and broadcasting the data through roadside vehicle-road cooperative equipment;
specifically, the data of the tracked target queue includes target attributes such as a position, a three-dimensional bounding box, an acceleration, a direction angle, a track and the like.
Specifically, all target attributes are packaged into a data frame and broadcast to the vehicle-road cooperative vehicle-mounted equipment, and the vehicle-mounted end receives and analyzes data broadcast by the road-side vehicle-road cooperative equipment.
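The encapsulate/broadcast/parse round trip can be sketched as follows. The frame layout (a small binary header followed by a JSON payload) is a hypothetical illustration, not the patent's actual V2X message format:

```python
import json
import struct
import time

def pack_tracks(tracks):
    """Encapsulate a tracking-target queue into one broadcast frame:
    header = (version: u8, timestamp: f64, target count: u32) in network
    byte order, followed by a JSON payload."""
    payload = json.dumps(tracks).encode("utf-8")
    header = struct.pack("!BdI", 1, time.time(), len(tracks))
    return header + payload

def unpack_tracks(frame):
    """Vehicle-side parsing of the same frame."""
    version, ts, count = struct.unpack("!BdI", frame[:13])
    return version, ts, count, json.loads(frame[13:].decode("utf-8"))

tracks = [{"id": 7, "lat": 40.0, "lon": 116.0, "heading": 90.0, "speed": 8.3}]
version, ts, count, decoded = unpack_tracks(pack_tracks(tracks))
```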
S150, receiving the roadside fusion data, acquiring the vehicle's own accurate positioning and pose information based on high-precision GPS positioning equipment, converting the fused data from the GPS coordinate system into the coordinate system centered on the vehicle body, and carrying out early warning of surrounding obstacles.
Specifically, the method comprises the following steps:
S510, the vehicle acquires the current GPS position and direction angle through high-precision positioning equipment and records them in a trajectory queue;
s520, acquiring real-time acceleration and speed information of the vehicle through the vehicle-mounted IMU unit;
S530, taking the position and direction angle acquired by the current positioning equipment, the position and direction angle predicted from the vehicle trajectory, and the position and direction angle predicted from the IMU data (three state-prediction models) as fusion items, collecting the driving data of the vehicle in different scenes, and establishing a data set;
S540, based on the driving data set, training a fusion model with a blending model-fusion method to predict the vehicle's current position-state information;
S550, with the vehicle's own position and pose known, establishing a coordinate system centered on the vehicle itself, screening the surrounding targets, converting the fusion results obtained from the roadside into this coordinate system, and displaying them in the vehicle-mounted equipment; combined with a high-precision map carried by the vehicle or acquired from the RSU (Road Side Unit) end, dead-angle-free sensing and early warning of the surrounding environment is realized.
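The GPS-to-body conversion in S550 can be sketched under the same flat-earth assumption; the axis conventions (x forward, y left, heading clockwise from north) are illustrative:

```python
import math

M_PER_DEG_LAT = 111_320.0  # flat-earth approximation for a small area

def gps_to_body(target_gps, ego_gps, ego_heading_deg):
    """Convert a fused roadside target from (lat, lon) into a coordinate
    system centered on the vehicle body: x forward, y left, with the
    vehicle heading measured clockwise from north."""
    m_per_deg_lon = M_PER_DEG_LAT * math.cos(math.radians(ego_gps[0]))
    east = (target_gps[1] - ego_gps[1]) * m_per_deg_lon
    north = (target_gps[0] - ego_gps[0]) * M_PER_DEG_LAT
    h = math.radians(ego_heading_deg)
    return (north * math.cos(h) + east * math.sin(h),   # forward
            north * math.sin(h) - east * math.cos(h))   # left

# An obstacle 100 m due north of a vehicle that is heading north:
x, y = gps_to_body((40.0 + 100.0 / M_PER_DEG_LAT, 116.0), (40.0, 116.0), 0.0)
```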
Specifically, taking for example the obstacles within a radius of 50 m, the fusion results obtained from the roadside are converted from the GPS coordinate system into the vehicle-centered coordinate system and displayed in the vehicle-mounted equipment; combined with a high-precision map carried by the vehicle or acquired from the RSU end, dead-angle-free sensing and early warning of the surrounding environment is realized.
Specifically, the obstacle includes a frame, a position, a speed, a direction angle, and an acceleration.
Specifically, the establishment of the world coordinate system described in this embodiment may be implemented by calibrating the intersection center GPS value and calibrating the rectangular coordinate system, or may be implemented by selecting other centers and using a polar coordinate system.
Specifically, the method for detecting an image target and predicting the position of the center point of an object based on deep learning in this embodiment may be implemented by centernet, or by ssd, yolo v4, yolo v5, yolo tiny methods, and the like.
Specifically, the laser radar target detection based on deep learning in this embodiment estimates the object type, and obtains the three-dimensional bounding box of the detected object and the accurate position relative to the radar by means of pointpilars and a coordinate conversion module, or by means of a point cloud data detection method such as VoxelNet, RT3D, and the like in combination with other data processing methods;
The correlation value in this embodiment may be calculated by fusing the spatial distance between two targets, their inter-class difference, and their area difference, or by fusing other features such as the difference between feature vectors.
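As an illustration of the correlation-value fusion just described (claim 1 gives the weighted form R = ds·α + dl·β + da·γ), a minimal sketch follows; the dict field names, default weights, and the binary class-difference rule are assumptions for illustration, not the patented choices:

```python
import math

# Illustrative sketch of the correlation value R = ds*alpha + dl*beta + da*gamma.
# Field names, weights, and the 0/1 class-difference rule are assumptions.

def association_value(a, b, alpha=1.0, beta=0.5, gamma=0.3):
    """a, b: targets with world position 'x', 'y', class id 'cls', 2-D footprint 'area'."""
    ds = math.hypot(a["x"] - b["x"], a["y"] - b["y"])  # spatial distance
    dl = 0.0 if a["cls"] == b["cls"] else 1.0          # inter-class difference
    da = abs(a["area"] - b["area"])                    # area difference on the 2-D plane
    return ds * alpha + dl * beta + da * gamma
```

In practice α, β and γ would be tuned experimentally, as the claim states.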
Specifically, the embodiment only takes the perception fusion implemented at one intersection as an example, and the scheme is described.
Specifically, the perception target described in this embodiment refers to a target obtained by processing various sensor data by a detection algorithm.
Specifically, the tracked target described in this embodiment refers to fusion target data that is continuously updated by using a perception target according to a fusion algorithm with a spatial coordinate as a main basis.
Specifically, the device connection described in this embodiment refers to connection using an optical fiber ring network.
Specifically, the time synchronization in step S120 of this embodiment synchronizes all sensors, roadside edge computing nodes, and related devices through a unified clock source, ensuring that the time of all hardware devices is unified.
Specifically, the fusion method described in this embodiment needs to calibrate the local coordinate system of each point location for its sensors: two index points p1 and p2 are selected, and their local coordinate system Cp coordinates (x1, y1), (x2, y2) and the corresponding GPS coordinate system Cgps coordinates (lon1, lat1), (lon2, lat2) are calibrated.
Specifically, the fusion method described in this embodiment needs to calibrate the internal and external parameters of the camera, and calibrate the positions of other sensing devices and the quaternion rotation angles.
The technical key points of the invention are as follows:
(1) A fusion flow for multi-type sensing equipment at roadside point locations, matched to the perception targets of different sensors in a traffic scene.
(2) A technology for fusing the sensing results of multiple fixed sensing devices over a large range in a traffic scene, together with a time synchronization method.
(3) Overlapping the perception results of multiple devices to form target perception fusion over the whole area.
(4) Converting the roadside sensing results into the vehicle-mounted end coordinate system.
The computer device of the present invention may be a device including a processor and a memory, for example, a single chip microcomputer including a central processing unit. The processor, when executing the computer program stored in the memory, implements the steps of the above multi-information resource fusion processing method for traffic nodes.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to use of the device (such as audio data or a phonebook). In addition, the memory may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid state storage device.
Computer-readable storage medium embodiments
The computer readable storage medium of the present invention may be any form of storage medium readable by a processor of a computer device, including but not limited to non-volatile memory and ferroelectric memory, and has a computer program stored thereon; when the computer program stored thereon is read and executed by the processor of the computer device, the above steps of the multi-information resource fusion processing method for traffic nodes can be implemented.
The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), electrical carrier signals, telecommunications signals, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunications signals.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.
Claims (9)
1. A multi-information resource fusion processing method of a traffic node is characterized by comprising the following steps:
S110: detecting the perception targets of the sensors at the intersection, and restoring the perception targets to a world coordinate system Cp with the center of the intersection as the origin;
S120: on the basis of time synchronization, merging the perception targets in Cp into the tracking target queue of the previous fusion time point and updating the tracking queue; the specific method comprises the following steps:
S210: receiving data frames acquired from the sensing equipment, and putting the latest perception targets into an unmatched perception target queue Lobjs;
S220: matching Lobjs with the current tracking queue Ltracks pairwise in space, taking out overlapping targets to update the corresponding tracking targets, and moving them into the matched tracking queue Lassigned_tracks;
S230: generating a bipartite graph incidence matrix from Lobjs and Ltracks, wherein the incidence value R is the fusion of the spatial distance ds between two targets, the inter-class difference dl, and the area difference da after mapping to a two-dimensional plane: R = ds·α + dl·β + da·γ, wherein α, β and γ are user-defined parameters determined through experiments;
S240: taking the correlation distance threshold as the maximum matching threshold, matching the incidence matrix using the Hungarian algorithm, and then processing the unmatched results: taking out the unmatched tracking targets and storing them in Lassigned_tracks, and taking out the unmatched perception targets and storing them in the unmatched perception target queue Lunassigend_objs;
S250: repeating the above process of generating a bipartite graph incidence matrix from Lobjs and Ltracks;
S260: applying the KM algorithm to the incidence matrix so that the sum of the incidence values R is minimized when the perception targets and tracking targets are fully matched, updating the matched tracking targets, and storing them in Lassigned_tracks;
S270: if a tracking target in Lassigned_tracks has remained unmatched for longer than a certain time threshold, deleting it; otherwise, updating the tracking target based on three-dimensional spatial Kalman filtering and storing it in Lassigned_tracks;
S280: generating new tracking targets from Lunassigend_objs and storing them in Lassigned_tracks;
S290: when the update of Lassigned_tracks is finished, changing it to Ltracks and waiting for the next frame of sensing data;
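For illustration only, the association loop of steps S210–S290 can be sketched in miniature as follows. A greedy lowest-cost-first assignment stands in here for the Hungarian (S240) and KM (S260) algorithms (a production implementation could instead use an optimal assignment solver such as scipy.optimize.linear_sum_assignment); cost[i][j] is the association value R between perception target i and tracking target j, and thresh is the maximum matching threshold:

```python
# Greedy stand-in for the two-stage Hungarian/KM matching of S210-S290.
# cost[i][j]: association value R; thresh: maximum matching threshold.

def match_targets(cost, thresh):
    """Return (matches, unmatched_obj_indices, unmatched_track_indices)."""
    n_objs = len(cost)
    n_tracks = len(cost[0]) if cost else 0
    pairs = sorted((cost[i][j], i, j) for i in range(n_objs) for j in range(n_tracks))
    used_i, used_j, matches = set(), set(), []
    for r, i, j in pairs:
        if r <= thresh and i not in used_i and j not in used_j:
            matches.append((i, j))   # perception target i updates tracking target j
            used_i.add(i)
            used_j.add(j)
    unmatched_objs = [i for i in range(n_objs) if i not in used_i]      # -> new tracks (S280)
    unmatched_tracks = [j for j in range(n_tracks) if j not in used_j]  # aged out in S270
    return matches, unmatched_objs, unmatched_tracks
```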
S130: performing time compensation on the tracking targets of all point locations to synchronize them to the same time point, then converting the position and orientation angle of the current tracking targets from Cp to the GPS coordinate system Cgps, the conversion being performed with a vector method based on the Cp coordinates (x1, y1), (x2, y2) and Cgps coordinates (lon1, lat1), (lon2, lat2) of two index points p1 and p2; finally, matching the detection results of different point locations in the overlapping area to form target perception fusion over the whole area;
S140: encapsulating the data of the tracking target queue and broadcasting it through the roadside vehicle-road cooperative equipment;
S150: receiving the roadside fusion data, acquiring the vehicle's accurate positioning and pose information based on high-precision GPS positioning equipment, converting the fused data from the GPS coordinate system Cgps to a coordinate system Cc centered on the vehicle itself, and carrying out early warning of surrounding obstacles.
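The two-index-point "vector method" conversion of step S130 can be sketched as follows. Treating the Cp-to-Cgps mapping as a similarity transform (rotation, scale, and translation) solved with complex arithmetic, and treating (lon, lat) as a locally flat plane, are illustrative assumptions that hold only over a small area such as one intersection:

```python
# Sketch of the two-index-point Cp -> Cgps conversion (S130), under a
# locally-flat assumption: one complex factor encodes rotation + scale.

def make_cp_to_gps(p1, g1, p2, g2):
    """p1, p2: (x, y) of the index points in Cp; g1, g2: their (lon, lat) in Cgps."""
    zp1, zp2 = complex(*p1), complex(*p2)
    zg1, zg2 = complex(*g1), complex(*g2)
    factor = (zg2 - zg1) / (zp2 - zp1)   # rotation and scale as one complex number

    def convert(x, y):
        zg = zg1 + (complex(x, y) - zp1) * factor
        return zg.real, zg.imag          # (lon, lat)

    return convert
```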
2. The method according to claim 1, wherein the step S110 of detecting the sensing target of each sensor at the intersection specifically comprises:
The image target detection method based on deep learning detects the image frames acquired by a camera and predicts the position of the center point of each object; based on the camera calibration parameters and the camera's position in Cp, the center point of the detection result is restored to Cp, and the structured result data with the corresponding timestamp is encapsulated and pushed to the fusion module.
3. The method according to claim 1, wherein the step S110 of detecting the sensing target of each sensor at the intersection specifically comprises:
The laser radar target detection method based on deep learning detects the point cloud set acquired by the laser radar, estimates the object type, and obtains the three-dimensional bounding box and accurate position of the detected object; based on the installation parameters of the laser radar and its position in Cp, the bounding box of the detection result is restored to Cp, and the structured result data with the corresponding timestamp is encapsulated and pushed to the fusion module.
4. The method according to claim 1, wherein the step S110 of detecting the sensing target of each sensor at the intersection specifically comprises:
The millimeter-wave radar and ultrasonic sensor acquire the angle and depth information of the target relative to the equipment; based on the installation parameters of the equipment and its position in Cp, the center point of the detection result is restored to Cp, and the structured result data with the corresponding timestamp is encapsulated and pushed to the fusion module.
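The restoration of an angle-and-depth measurement to Cp described in claim 4 can be sketched as below; representing the device pose by its Cp position plus a single yaw angle (rather than a full quaternion orientation) is a simplifying assumption for illustration:

```python
import math

# Sketch: restore a radar/ultrasonic detection given as (angle, depth)
# relative to the device into the world coordinate system Cp.
# Device pose is simplified to (x, y, yaw); angles are in radians.

def restore_to_cp(angle_rad, depth_m, dev_x, dev_y, dev_yaw_rad):
    """Return the target center point (x, y) in Cp."""
    theta = dev_yaw_rad + angle_rad            # bearing of the target in Cp
    return (dev_x + depth_m * math.cos(theta),
            dev_y + depth_m * math.sin(theta))
```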
5. The method according to claim 4, wherein in S130 the specific method of matching the detection results of different point locations in the overlapping area to form target perception fusion over the whole area comprises the following steps:
S310: the regional data fusion module obtains the latest data frames pushed by the local point location fusion modules and records a new timestamp Tfusion;
S320: calculating the difference ΔTi between the timestamp Ti carried by each tracking target and Tfusion;
S330: based on ΔTi, performing state estimation on the tracks of the past 5 time slices carried by each tracking target, realizing coordinate correction and reducing multi-module data fusion errors;
S340: converting the positions of all tracking targets from the current Cp to the GPS coordinate system;
S350: performing voxel allocation and labeling for all tracking results in space, wherein a tracking target located at the boundary of multiple voxels is given the labels of all those voxels; comparing the spatial overlap of any two tracking targets with the same label to obtain the intersection-over-union (IoU) value ρ; when ρ is larger than a configured threshold, the two targets are considered to be the same target and are merged in region fusion.
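The overlap test of S350 can be sketched as follows; reducing each target's footprint to an axis-aligned 2-D box (x_min, y_min, x_max, y_max) is an illustrative simplification of the spatial overlap comparison:

```python
# Sketch of the S350 region-fusion test: axis-aligned 2-D IoU between two
# target footprints; targets sharing a voxel label are merged when the
# IoU value rho exceeds the configured threshold.

def iou(a, b):
    """a, b: boxes (x_min, y_min, x_max, y_max). Returns rho in [0, 1]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def same_target(a, b, rho_thresh=0.5):
    """True when the two tracked footprints should be merged in region fusion."""
    return iou(a, b) > rho_thresh
```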
6. The method according to claim 5, wherein the specific method in S140 of encapsulating the data of the tracking target queue and broadcasting it through the roadside vehicle-road cooperative equipment is as follows: all target attributes are encapsulated into a data frame and broadcast to the vehicle-road cooperative vehicle-mounted equipment, and the vehicle-mounted end parses the data frame.
7. The method according to claim 6, wherein in step S150 the specific method of converting the fused data from the GPS coordinate system Cgps to the coordinate system Cc centered on the vehicle itself and carrying out early warning of surrounding obstacles comprises the following steps:
S510: the vehicle acquires its current GPS position and heading angle through high-precision positioning equipment and records them in a trajectory queue;
S520: acquiring real-time acceleration and speed information of the vehicle through the vehicle-mounted IMU unit;
S530: taking the position and heading angle Sgps acquired by the current positioning equipment, the position and heading angle Strack from vehicle trajectory prediction, and the position and heading angle Simu predicted from the IMU data as the three state prediction models to be fused; collecting driving data of the vehicle under different scenes and establishing a data set;
S540: based on the driving data set, training a fusion model by using a blending model fusion method to predict the current position state information S(loc,angle) of the vehicle;
S550: with S(loc,angle) of the vehicle known, establishing a coordinate system Cc centered on the vehicle itself, screening surrounding targets, converting the fusion result obtained from the roadside from Cgps to Cc, and displaying it in the vehicle-mounted equipment; combined with a high-precision map carried by the vehicle or acquired from the RSU, dead-angle-free sensing and early warning of the surrounding environment are realized.
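The Cgps-to-Cc conversion in S550 can be sketched as below. The local equirectangular flat-Earth projection, the heading convention (radians counter-clockwise from east, with x forward along the heading and y to the left), and the WGS-84 radius are illustrative assumptions; the claim itself does not fix these conventions:

```python
import math

# Sketch of the S550 conversion: project a target's GPS coordinates into
# a vehicle-centred frame Cc using a local equirectangular approximation.

EARTH_R = 6378137.0  # WGS-84 equatorial radius, metres (assumed)

def gps_to_vehicle(tgt_lon, tgt_lat, ego_lon, ego_lat, heading_rad):
    """Return (x, y) of the target in Cc: x forward along heading, y to the left."""
    east = math.radians(tgt_lon - ego_lon) * EARTH_R * math.cos(math.radians(ego_lat))
    north = math.radians(tgt_lat - ego_lat) * EARTH_R
    x = east * math.cos(heading_rad) + north * math.sin(heading_rad)
    y = -east * math.sin(heading_rad) + north * math.cos(heading_rad)
    return x, y
```

The screening of surrounding targets (e.g. the 50 m radius mentioned in the description) would then be a simple distance test on (x, y).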
8. A computer comprising a memory storing a computer program and a processor, wherein the processor implements the steps of the method for the multiple information resource fusion processing of a traffic node according to any one of claims 1 to 7 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for the multiple information resource fusion processing of a traffic node according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110922670.9A CN113379805B (en) | 2021-08-12 | 2021-08-12 | Multi-information resource fusion processing method for traffic nodes |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113379805A CN113379805A (en) | 2021-09-10 |
CN113379805B true CN113379805B (en) | 2022-01-07 |
Family
ID=77576966
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113920735B (en) * | 2021-10-21 | 2022-11-15 | 中国第一汽车股份有限公司 | Information fusion method and device, electronic equipment and storage medium |
CN113724298B (en) * | 2021-11-01 | 2022-03-18 | 深圳市城市交通规划设计研究中心股份有限公司 | Multipoint perception fusion method and device and computer readable storage medium |
CN114333294B (en) * | 2021-11-30 | 2022-12-13 | 上海电科智能系统股份有限公司 | Multi-element multi-object perception identification tracking method based on non-full coverage |
CN115169452B (en) * | 2022-06-30 | 2023-04-28 | 北京中盛国芯科技有限公司 | Target information system and method based on space-time synchronous queue characteristic radar fusion |
CN115330922B (en) * | 2022-08-10 | 2023-08-15 | 小米汽车科技有限公司 | Data processing method, device, vehicle, readable storage medium and chip |
CN115186781B (en) * | 2022-09-14 | 2022-11-22 | 南京感动科技有限公司 | Real-time fusion method for multi-source roadside traffic observation data |
CN115240430B (en) * | 2022-09-15 | 2023-01-03 | 湖南众天云科技有限公司 | Method, system and medium for distributed cascade fusion of roadside device information |
CN115472014B (en) * | 2022-09-16 | 2023-10-10 | 苏州映赛智能科技有限公司 | Traffic tracing method, system, server and computer storage medium |
CN116166939A (en) * | 2023-02-09 | 2023-05-26 | 浙江九州云信息科技有限公司 | Data preprocessing method and system based on vehicle-road cooperation |
CN116304994B (en) * | 2023-05-22 | 2023-09-15 | 浙江交科交通科技有限公司 | Multi-sensor target data fusion method, device, equipment and storage medium |
CN117058510B (en) * | 2023-08-22 | 2024-02-13 | 聚米画沙(北京)科技有限公司 | Multi-source security data fusion method and system based on space calculation |
CN116824869B (en) * | 2023-08-31 | 2023-11-24 | 国汽(北京)智能网联汽车研究院有限公司 | Vehicle-road cloud integrated traffic fusion perception testing method, device, system and medium |
CN117649777B (en) * | 2024-01-24 | 2024-04-19 | 苏州万集车联网技术有限公司 | Target matching method, device and computer equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106340197A (en) * | 2016-08-31 | 2017-01-18 | 北京万集科技股份有限公司 | Auxiliary cooperative vehicle infrastructure driving system and method |
CN108010360A (en) * | 2017-12-27 | 2018-05-08 | 中电海康集团有限公司 | A kind of automatic Pilot context aware systems based on bus or train route collaboration |
CN111477010A (en) * | 2020-04-08 | 2020-07-31 | 图达通智能科技(苏州)有限公司 | Device for intersection holographic sensing and control method thereof |
CN111754798A (en) * | 2020-07-02 | 2020-10-09 | 上海电科智能系统股份有限公司 | Method for realizing detection of vehicle and surrounding obstacles by fusing roadside laser radar and video |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109829386B (en) * | 2019-01-04 | 2020-12-11 | 清华大学 | Intelligent vehicle passable area detection method based on multi-source information fusion |
FR3095401B1 (en) * | 2019-04-26 | 2021-05-07 | Transdev Group | Platform and method for supervising an infrastructure for transport vehicles, vehicle, transport system and associated computer program |
CN110379157A (en) * | 2019-06-04 | 2019-10-25 | 深圳市速腾聚创科技有限公司 | Road blind area monitoring method, system, device, equipment and storage medium |
CN111650604B (en) * | 2020-07-02 | 2023-07-28 | 上海电科智能系统股份有限公司 | Method for realizing accurate detection of self-vehicle and surrounding obstacle by using accurate positioning |
CN112598899A (en) * | 2020-12-03 | 2021-04-02 | 中国联合网络通信集团有限公司 | Data processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||