CN109996176B - Road side perception and vehicle terminal vehicle road cooperative fusion processing method and device - Google Patents


Info

Publication number
CN109996176B
Authority
CN
China
Prior art keywords
perception
information
sub
sensing
fusion processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910420656.1A
Other languages
Chinese (zh)
Other versions
CN109996176A (en)
Inventor
曹获
邓烽
胡星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910420656.1A
Publication of CN109996176A
Application granted
Publication of CN109996176B
Legal status: Active

Classifications

    • G: PHYSICS
        • G08: SIGNALLING
            • G08G: TRAFFIC CONTROL SYSTEMS
                • G08G 1/00: Traffic control systems for road vehicles
                    • G08G 1/01: Detecting movement of traffic to be counted or controlled
                        • G08G 1/0104: Measuring and analyzing of parameters relative to traffic conditions
                            • G08G 1/0108: Measuring and analyzing based on the source of data
                                • G08G 1/0116: Data from roadside infrastructure, e.g. beacons
                            • G08G 1/0125: Traffic data processing
                        • G08G 1/042: Detecting movement of traffic using inductive or magnetic detectors
    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04W: WIRELESS COMMUNICATION NETWORKS
                • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
                    • H04W 4/02: Services making use of location information
                        • H04W 4/025: Services using location-based information parameters
                            • H04W 4/027: Services using movement velocity, acceleration information
                    • H04W 4/30: Services specially adapted for particular environments, situations or purposes
                        • H04W 4/38: Services for collecting sensor information
                        • H04W 4/40: Services for vehicles, e.g. vehicle-to-pedestrians [V2P]
                            • H04W 4/44: Communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the invention disclose a method, apparatus, terminal, and storage medium for roadside perception and vehicle-terminal vehicle-road collaborative fusion processing. The method comprises: acquiring a perception information set of the driving environment through a roadside perception system, where the set comprises perception sub-information transmitted by different devices in the roadside perception system; and introducing virtual perception sub-information into the set and fusing the set with a fusion algorithm, where the virtual perception sub-information serves to associate the sub-information transmitted by the different devices. The embodiments address the limited perception range of on-board sensing devices during driving and the high cost of deploying additional sensing devices on each vehicle: they expand the perception and detection range during driving, reduce the per-vehicle sensor deployment cost, and help promote the adoption of intelligent transportation technology.

Description

Road side perception and vehicle terminal vehicle road cooperative fusion processing method and device
Technical Field
Embodiments of the invention relate to the field of intelligent transportation, and in particular to a method, apparatus, terminal, and storage medium for roadside perception and vehicle-terminal vehicle-road collaborative fusion processing.
Background
Intelligent driving, a product of the era of intelligent manufacturing and "Internet Plus", is comprehensively upgrading and reshaping the ecosystem and business models of the automotive industry, and is of great significance for promoting technological progress, economic development, social harmony, and comprehensive national strength.
Environmental perception is the foundation of intelligent driving technology. Sensing devices mounted on the vehicle perceive the surrounding environment to enable intelligent driver assistance. However, the perception range of on-board sensing devices is limited by factors such as fixed mounting position and effective viewing angle, so the acquired perception information often falls short of the needs of intelligent driving; automated driving in particular demands comprehensive environmental information to guarantee safety. Moreover, equipping every vehicle with multiple sensing devices imposes a high deployment cost on the vehicle owner.
Disclosure of Invention
Embodiments of the invention provide a method, apparatus, terminal, and storage medium for roadside perception and vehicle-terminal vehicle-road collaborative fusion processing, so as to expand the perception and detection range during driving and reduce the per-vehicle sensor deployment cost.
In a first aspect, an embodiment of the present invention provides a method for road side sensing and vehicle terminal vehicle-road collaborative fusion processing, where the method includes:
acquiring a perception information set of a driving environment through a roadside perception system, wherein the perception information set comprises perception sub-information transmitted by different devices in the roadside perception system;
and introducing virtual perception sub-information into the perception information set, and performing fusion processing on the perception information set by using a fusion algorithm, wherein the virtual perception sub-information is used for associating the perception sub-information transmitted by different devices.
In a second aspect, an embodiment of the present invention further provides a road side sensing and vehicle terminal vehicle-road collaborative fusion processing apparatus, where the apparatus includes:
a perception information set acquisition module, configured to acquire a perception information set of the driving environment through a roadside perception system, where the perception information set comprises perception sub-information transmitted by different devices in the roadside perception system;
and the fusion processing module is used for introducing virtual perception sub-information into the perception information set and carrying out fusion processing on the perception information set by using a fusion algorithm, wherein the virtual perception sub-information is used for associating the perception sub-information transmitted by different devices.
In a third aspect, an embodiment of the present invention further provides a terminal, including:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the roadside perception and vehicle-terminal vehicle-road collaborative fusion processing method according to any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a road side awareness and vehicle terminal vehicle-road collaborative fusion processing method according to any embodiment of the present invention.
In embodiments of the invention, a distributed roadside perception system deployed along the road transmits a perception information set of the driving environment to the vehicle through its various devices. The on-board terminal receives the set, introduces virtual perception sub-information, and fuses the set with a fusion algorithm, so that the fusion result can inform driving decisions. On one hand, offloading the collection of environmental perception information to the roadside system exploits the wider detection range of roadside sensors: it overcomes the limited perception range of on-board sensors, expands the perception and detection range during driving, fills on-board perception blind spots, and supplies the vehicle with richer environmental information, ensuring sound driving decisions; the virtual perception sub-information further ensures that the fusion result is accurate and complete, safeguarding the safe realization of intelligent driving. On the other hand, because the scheme does not rely on on-board sensing devices to collect environmental data, it avoids the high cost of deploying additional sensors on each vehicle, reduces the per-vehicle deployment cost, and helps promote the adoption of intelligent transportation technology.
Drawings
Fig. 1 is a flowchart of a roadside perception and vehicle-terminal vehicle-road collaborative fusion processing method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a roadside perception system in communication with a terminal according to an embodiment of the present invention;
Fig. 3 is a flowchart of a roadside perception and vehicle-terminal vehicle-road collaborative fusion processing method according to a second embodiment of the present invention;
Fig. 4 is a flowchart of a roadside perception and vehicle-terminal vehicle-road collaborative fusion processing method according to a third embodiment of the present invention;
Fig. 5 is a flowchart of another roadside perception and vehicle-terminal vehicle-road collaborative fusion processing method according to the third embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a roadside perception and vehicle-terminal vehicle-road collaborative fusion processing apparatus according to a fourth embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a terminal according to a fifth embodiment of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and do not limit it. Note also that, for ease of description, the drawings show only the structures related to the present invention rather than the complete structures.
Embodiment 1
Fig. 1 is a flowchart of a roadside perception and vehicle-terminal vehicle-road collaborative fusion processing method according to an embodiment of the present invention. The method is applicable to scenarios in which a roadside perception system and a terminal cooperate to fuse environmental perception information acquired from the roadside system in order to assist driving. It may be executed by a roadside perception and vehicle-terminal vehicle-road collaborative fusion processing apparatus, which may be implemented in software and/or hardware and integrated on any terminal capable of performing the fusion processing, including but not limited to on-board devices and mobile terminals, such as vehicle networking communication devices (Vehicle to X, V2X devices), smartphones, and personal computers.
In this embodiment, the roadside perception system transmits perception information about the driving environment to a terminal on the vehicle. The system may include various roadside devices installed along the road, such as sensing devices, computing devices, and communication devices. The computing devices include but are not limited to Road Side Computing Units (RSCUs); the communication devices include but are not limited to Road Side Units (RSUs); the sensing devices include but are not limited to image sensors (cameras), millimeter-wave radar, lidar, and ultrasonic sensors, and the number deployed of each type may be set according to actual needs. Each computing device may be connected to multiple sensing devices of at least one type and transmit environmental perception information to the on-board terminal through a connected communication device, which can broadcast the information externally. The computing device and communication device may also be integrated into a single unit.
The terminal on the vehicle can interact with at least one communication device to obtain perception information about the current driving environment and fuse it, so that the fusion result can inform driving decisions. For example, an On Board Unit (OBU) may be integrated on the terminal to communicate with the communication devices and receive the perception information. While the vehicle is moving, the terminal can obtain the required perception information from any communication device within communication range.
In the distributed deployment of the roadside perception system, a certain number of sensing, computing, and communication devices are deployed in each target area on the road (the area size can be set adaptively). Provided the perception information of the current driving environment can be fully acquired, the deployment density of each device may be chosen as needed and is not specifically limited by this embodiment; it may be determined by the communication distance between devices and/or the partitioning of the perception detection range. The distance between a computing device and its communication device must satisfy the communicable-distance requirement.
Fig. 2 shows a schematic structural diagram of a roadside perception system in communication with a terminal according to an embodiment of the present invention; it is an example and should not be construed as limiting this embodiment. As shown in Fig. 2, the roadside perception system comprises distributed sensing devices, roadside computing units, and roadside units. Within a target area, each roadside computing unit performs preliminary fusion on the environmental data collected by multiple sensing devices (that is, the environmental perception information that the roadside system transmits to the vehicle terminal has already been pre-processed) and sends the resulting perception information to the corresponding roadside unit, which forwards it to the terminal on the vehicle. The roadside unit may integrate a sending module that subscribes in advance to a perception-information topic and broadcasts the received perception information. The technical solution of the embodiments is explained in detail below with reference to the drawings:
as shown in fig. 1, the method for road side sensing and vehicle terminal vehicle-road collaborative fusion processing provided by this embodiment may include:
s110, acquiring a perception information set of the driving environment through a roadside perception system, wherein the perception information set comprises perception sub-information transmitted by different devices in the roadside perception system.
As the vehicle travels, the on-board terminal can acquire the perception information set of the current driving environment in real time through the roadside perception system. For example, the terminal may use multiple threads: some threads continuously listen for perception sub-information transmitted by different devices in the roadside perception system and place what they receive into a queue, while other threads periodically drain the queue to obtain the perception information set to be fused. The devices that transmit perception sub-information to the terminal are the communication devices that exchange data with it directly, e.g., roadside units; the sub-information transmitted by each device corresponds to a certain perception area, and the areas covered by different sub-information may overlap.
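The listen-and-drain pattern described above can be sketched as follows. This is an illustrative sketch, not code from the patent; the `receive_fn` callback and all names are hypothetical stand-ins for the terminal's actual V2X receive API.

```python
import queue
import threading

def listener(rsu_id, inbox, receive_fn):
    """Continuously monitor one roadside unit and enqueue each received
    perception sub-information item (intended to run in its own thread)."""
    while True:
        sub_info = receive_fn(rsu_id)      # blocking receive; assumed API
        inbox.put((rsu_id, sub_info))

def drain(inbox):
    """Called periodically by a consumer thread: empty the queue and return
    the perception information set to be fused."""
    batch = []
    while True:
        try:
            batch.append(inbox.get_nowait())
        except queue.Empty:
            return batch
```

Each listener would typically be started with `threading.Thread(target=listener, args=("rsu1", inbox, recv), daemon=True).start()`, while the fusion loop calls `drain(inbox)` on a timer.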
Acquiring the perception information of the current driving environment through a roadside perception system deployed on the road means that each vehicle only needs a terminal that interacts with the roadside system, saving the cost of equipping every vehicle with additional sensing devices. Moreover, the roadside system's sensing devices enjoy a wider and more flexible viewing angle when collecting environmental data; in particular, where a single vehicle's sensors would face perception blind spots, the comprehensive arrangement of sensing and computing devices in the roadside system can cover those blind spots and supply the missing perception information, improving the accuracy of driving decisions, enhancing driving safety, and reducing traffic accidents.
And S120, introducing virtual perception sub-information into the perception information set, and performing fusion processing on the perception information set by using a fusion algorithm, wherein the virtual perception sub-information is used for associating the perception sub-information transmitted by different equipment in the roadside perception system.
In this embodiment, the fusion algorithm may be any existing algorithm capable of fusing perception information, including but not limited to track-association algorithms, the Hungarian algorithm, and the maximum-weight bipartite matching (Kuhn-Munkres, KM) algorithm. During fusion, the virtual perception sub-information acts as an association bridge between the sub-information transmitted by different devices in the roadside perception system, so that the sub-information from any two or more devices becomes mutually associated. For example, the virtual sub-information can drive a cyclic association-fusion over the sub-information transmitted by the different devices, avoiding the situation in which some sub-information is left unfused, which would make the fusion result inaccurate or incomplete and in turn compromise driving decisions.
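The patent leaves the choice of fusion algorithm open (track association, Hungarian/KM, and so on). As a minimal illustration of the association step only, the sketch below finds the minimum-cost pairing between two small track lists by exhaustive search, which yields the same pairing the Hungarian algorithm would for small inputs; the 2-D positions, the Euclidean cost, and all names are illustrative assumptions, not the patent's implementation.

```python
from itertools import permutations

def associate(tracks_a, tracks_b):
    """Exhaustive minimum-cost association of 2-D track positions.
    Requires len(tracks_a) <= len(tracks_b); returns (index_a, index_b) pairs."""
    def cost(p, q):
        # Euclidean distance between two 2-D positions
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    best, best_cost = None, float("inf")
    # Try every assignment of a-tracks to distinct b-tracks; keep the cheapest.
    for perm in permutations(range(len(tracks_b)), len(tracks_a)):
        c = sum(cost(tracks_a[i], tracks_b[j]) for i, j in enumerate(perm))
        if c < best_cost:
            best, best_cost = list(enumerate(perm)), c
    return best
```

For realistic track counts one would substitute an O(n^3) Hungarian implementation for this factorial search.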
Optionally, the fusion processing of the perception information set includes:

de-duplicating the overlapping perception sub-information transmitted by different devices in the roadside perception system within the set; and

fusing the non-overlapping perception sub-information transmitted by different devices in the roadside perception system within the set.
Because the perception areas of the multiple sensing devices deployed in a target area overlap, the perception information set transmitted to the terminal contains some overlapping sub-information, which can be removed based on feature identification and matching; the non-overlapping sub-information is then integrated to obtain complete perception information about the vehicle's current driving environment.
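As an illustration of the de-duplication step, the sketch below treats two detections from different units as coincident when their positions agree within a tolerance; a real system would match on richer features (object class, size, velocity), and all names and thresholds here are assumptions.

```python
def fuse_deduplicate(sub_infos, eps=0.5):
    """Merge detection lists from several roadside units, keeping only one
    copy of detections whose 2-D positions coincide within eps (a stand-in
    for feature identification and matching)."""
    fused = []
    for detections in sub_infos:
        for pos in detections:
            coincident = any(abs(pos[0] - q[0]) <= eps and
                             abs(pos[1] - q[1]) <= eps for q in fused)
            if not coincident:
                fused.append(pos)
    return fused
```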
In the technical solution above, on one hand, offloading the collection of environmental perception information to the roadside perception system exploits the wide detection range of roadside sensors: it overcomes the limited perception range of on-board sensors during driving, expands the perception and detection range, fills on-board blind spots, and supplies the vehicle with richer environmental information, ensuring sound driving decisions; the introduction of virtual perception sub-information further ensures that the fusion result is accurate and complete, which safeguards decision accuracy and the safe realization of intelligent driving and can reduce traffic accidents. On the other hand, the solution avoids the high cost of deploying additional sensing devices on each vehicle, reduces the per-vehicle deployment cost, helps promote intelligent transportation technology, and contributes to building an intelligent transportation system for society.
Embodiment 2
Fig. 3 is a flowchart of a road side sensing and vehicle terminal vehicle-road collaborative fusion processing method according to a second embodiment of the present invention, which is further optimized based on the above-mentioned embodiment. As shown in fig. 3, the method may include:
s210, acquiring a perception information set of the driving environment through a roadside perception system, wherein the perception information set comprises perception sub-information transmitted by different devices in the roadside perception system.
S220, introducing virtual perception sub-information into the perception information set, determining the perception sub-information transmitted by any one device in the roadside perception system as target perception sub-information, and initializing the virtual perception sub-information with the target perception sub-information.
In this embodiment, under normal road-driving conditions, the devices in the roadside perception system may be time-synchronized, for example based on received GPS signals, so that the perception sub-information the terminal receives from different devices carries the same timestamp. For example, the terminal selects all perception sub-information within a certain time range of the current timestamp to form a perception information set {A, B, C}, comprising perception sub-information A, B, and C transmitted by three roadside units respectively. Virtual perception sub-information V0 is introduced into the set, giving {V0, A, B, C}; V0 is then initialized with perception sub-information A, at which point V0 contains the same information content as A.
S230, using a fusion algorithm to fuse the initialized virtual perception sub-information with perception sub-information, other than the target perception sub-information, transmitted by any device in the roadside perception system in the perception information set, to obtain a current fusion result.
Continuing the example above, any fusion algorithm may be applied to the perception information set {V0, A, B, C}: the initialized virtual sub-information V0 is fused with either of perception sub-information B and C. For example, fusing V0 with B yields a fusion result X; V0 is then updated with X to obtain the updated virtual sub-information V1, which now effectively contains the content of both A and B.
S240, updating the initialized virtual perception sub-information with the current fusion result, taking the updated virtual sub-information as the new initialized virtual sub-information, and repeating the fusion between it and the not-yet-fused perception sub-information transmitted by the devices in the roadside perception system, until all perception sub-information in the set has participated in the fusion.
In the example above, perception sub-information C in {A, B, C} has not yet participated in the fusion. The updated virtual sub-information V1 is therefore fused with C, which is equivalent to fusing A and B with C simultaneously, yielding a fusion result Y that contains the content of A, B, and C. If sub-information that has not participated still remains in the set, V1 is updated with Y to obtain V2, and the currently updated V2 is fused with the remaining unfused sub-information. In other words, for the sub-information transmitted by different devices in the set, the virtual perception sub-information must be dynamically updated after every fusion operation.
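The loop of steps S220 to S240 reduces to folding a pairwise fusion function over the set, with the virtual sub-information carrying the accumulated result. A minimal sketch, where `fuse` stands for whatever pairwise fusion algorithm is used and the list layout is an assumption for illustration:

```python
def fuse_with_virtual(perception_set, fuse):
    """Steps S220-S240: initialise the virtual sub-information V0 from the
    first (target) sub-information, then fuse it with each remaining
    sub-information in turn, updating the virtual item after every step."""
    virtual = perception_set[0]              # V0 initialised from A
    for sub_info in perception_set[1:]:      # fuse with B, then C, ...
        virtual = fuse(virtual, sub_info)    # V1, V2, ... accumulate content
    return virtual
```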
By contrast, the prior art typically fuses perception information pairwise and independently: for example, sub-information A is fused with B, and then B with C, and the two results are taken together as the fusion of A, B, and C, omitting the fusion between A and C. When the sub-information transmitted by different devices in the roadside perception system is strongly correlated, the terminal's fusion result then suffers from missing information and is inaccurate and incomplete.
In the technical solution above, on one hand, offloading the collection of environmental perception information to the roadside perception system exploits the wide detection range of roadside sensors: it overcomes the limited perception range of on-board sensors, expands the perception and detection range during driving, fills on-board blind spots, and supplies the vehicle with richer environmental information, ensuring sound driving decisions; the introduction of virtual perception sub-information further ensures the accuracy and completeness of the fusion result, and thereby the accuracy of driving decisions and the safe realization of intelligent driving. On the other hand, the solution avoids the high cost of deploying additional sensing devices on each vehicle, reduces the per-vehicle deployment cost, and helps promote intelligent transportation technology.
EXAMPLE III
Fig. 4 is a flowchart of a road side sensing and vehicle terminal vehicle-road collaborative fusion processing method provided by the third embodiment of the present invention; this embodiment is further optimized and expanded on the basis of the above embodiments. As shown in fig. 4, the method may include:
S310, acquiring a perception information set of the driving environment through a roadside perception system, wherein the perception information set comprises perception sub-information transmitted by different devices in the roadside perception system.
S320, traversing the perception sub-information transmitted by different equipment in the roadside perception system to obtain at least one group of track pairs, wherein the track pairs comprise any two target objects respectively corresponding to the different equipment in the roadside perception system.
The perception sub-information transmitted by different devices in the roadside perception system may include information on different target objects, including but not limited to obstacles, pedestrians, vehicles, buildings and traffic signs on the road. By traversing the perception information set, the target objects involved in the perception sub-information transmitted by different devices are grouped pairwise. For example, suppose two communication devices in the roadside perception system transmit perception sub-information A and B respectively, where A includes 2 target objects, m1 and m2, and B includes 3 target objects, n1, n2 and n3; traversing A and B then yields 6 track pairs: (m1, n1), (m1, n2), (m1, n3), (m2, n1), (m2, n2), (m2, n3).
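The pairwise grouping above can be sketched as a Cartesian product over the targets of two devices. This is a minimal illustration, not the patent's implementation; the names `build_track_pairs`, `sub_info_a` and `sub_info_b` are assumptions.

```python
from itertools import product

def build_track_pairs(sub_info_a, sub_info_b):
    """Pair every target object reported by one device with every
    target object reported by the other (Cartesian product)."""
    return [(a, b) for a, b in product(sub_info_a, sub_info_b)]

# Perception sub-information A carries 2 targets, B carries 3.
sub_info_a = {"m1": {}, "m2": {}}
sub_info_b = {"n1": {}, "n2": {}, "n3": {}}

pairs = build_track_pairs(sub_info_a, sub_info_b)  # 6 track pairs
```

With 2 and 3 targets this reproduces the 6 track pairs of the example in the text.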
Optionally, before the fusion processing is performed on the perception information set, the method further includes:
in the perception information set, allocating sub-identifiers to the target objects in the perception sub-information transmitted by different devices in the roadside perception system, wherein the sub-identifiers are used to distinguish target objects corresponding to different devices and include the sub-ID of each target object. In other words, each target object involved in the perception sub-information transmitted by different devices corresponds to a unique sub-identifier, and sub-identifiers do not repeat across different devices in the roadside perception system, so that target objects corresponding to different devices can be distinguished during fusion processing. For example, for any track pair, the two target objects in the track pair correspond to different sub-identifiers.
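A sketch of such sub-identifier allocation, assuming the perception set maps device IDs to target lists (the function name `assign_sub_ids` and the `rsu_*` device IDs are hypothetical):

```python
def assign_sub_ids(perception_set):
    """Allocate a globally unique sub-ID to every target object in
    every device's perception sub-information, so that objects from
    different devices can be told apart during fusion."""
    sub_ids = {}
    next_id = 0
    for device_id, targets in perception_set.items():
        for target in targets:
            sub_ids[(device_id, target)] = next_id
            next_id += 1
    return sub_ids

perception_set = {"rsu_1": ["m1", "m2"], "rsu_2": ["n1", "n2", "n3"]}
sub_ids = assign_sub_ids(perception_set)  # 5 targets, 5 distinct IDs
```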
S330, determining the distance between two target objects in each group of track pairs according to the respective information of the target objects in each group of track pairs.
The perception sub-information transmitted by different devices in the roadside perception system may include feature information of at least one target object, such as position, speed, motion direction, acceleration, geometric shape, rotation angle and color. The distance between the two target objects in a track pair can be determined from their position information.
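A minimal sketch of the distance computation, assuming planar `(x, y)` positions (the patent does not fix a distance metric; Euclidean distance is a natural choice):

```python
import math

def pair_distance(obj_a, obj_b):
    """Euclidean distance between two target objects, computed from
    the (x, y) position fields of their perception sub-information."""
    return math.hypot(obj_a["x"] - obj_b["x"], obj_a["y"] - obj_b["y"])

d = pair_distance({"x": 0.0, "y": 0.0}, {"x": 3.0, "y": 4.0})  # 5.0
```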
S340, determining a track pair set meeting a distance threshold in at least one group of track pairs according to the determined distance.
Specifically, when the distance between two target objects is smaller than a preset distance threshold (also called a tracking threshold), the corresponding track pair may be determined as a target track pair, and the determined target track pairs form a track pair set; that is, the obtained track pairs can be screened based on a greedy-algorithm idea. The distance threshold may be set as required; this embodiment does not limit it. Continuing the above example, among the 6 track pairs (m1, n1), (m1, n2), (m1, n3), (m2, n1), (m2, n2), (m2, n3), the distance between the target objects in each pair is determined and compared with the preset distance threshold. If only 2 pairs, (m1, n2) and (m2, n3), satisfy the threshold condition, these 2 pairs are then preferentially fused using a fusion algorithm. The closer the two target objects in a track pair are, the more likely they belong to the same target object in the driving environment and the more valuable their fusion is; preferentially fusing the screened track pairs therefore reduces the amount of data to be processed and improves fusion efficiency.
Optionally, determining, according to the determined distance, the track pair set satisfying the distance threshold among the at least one group of track pairs includes: storing the track pairs that satisfy the distance threshold into a sequence list (SeqList) according to the determined distance, to obtain the track pair set. That is, each time a target track pair is determined, its information is appended to the sequence list in order. As a kind of array-like structure, the sequence list has a fixed storage capacity but can store data of any type.
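The first-layer screening into a sequence list can be sketched as follows; the positions, threshold value and function name are illustrative, chosen so that only (m1, n2) and (m2, n3) survive, as in the example above:

```python
import math

def screen_track_pairs(pairs, positions, threshold):
    """First-layer screening: append every track pair whose
    inter-object distance is below the tracking threshold to a
    sequence list, together with that distance."""
    seq_list = []
    for a, b in pairs:
        (xa, ya), (xb, yb) = positions[a], positions[b]
        d = math.hypot(xa - xb, ya - yb)
        if d < threshold:
            seq_list.append(((a, b), d))
    return seq_list

positions = {"m1": (0, 0), "m2": (10, 0),
             "n1": (50, 50), "n2": (1, 0), "n3": (11, 0)}
pairs = [("m1", "n1"), ("m1", "n2"), ("m1", "n3"),
         ("m2", "n1"), ("m2", "n2"), ("m2", "n3")]
seq_list = screen_track_pairs(pairs, positions, threshold=5.0)
# Only (m1, n2) and (m2, n3) survive the tracking threshold.
```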
S350, introducing virtual perception sub-information into the track pair subset, and performing fusion processing on the track pair subset by using a fusion algorithm.
Illustratively, virtual perception sub-information is introduced into the track pair subset as follows: the information of one target object in any track pair of the subset is used to initialize the virtual perception sub-information; the initialized virtual perception sub-information is then fused with the information of the other target object in that pair; the initialized virtual perception sub-information is dynamically updated with the current fusion result, and the track pairs in the subset that have not yet participated in fusion are fused with it in sequence. In other words, the dynamic updating of the virtual perception sub-information and its fusion with the information of target objects that have not participated in fusion are repeated until the information of all target objects in the subset has been fused.
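The iterative fusion loop can be sketched as below. The patent leaves the concrete fusion algorithm open, so a simple averaging of state vectors stands in for it here; `fuse`, `fuse_subset` and the state values are all illustrative assumptions.

```python
def fuse(state_a, state_b):
    """Placeholder fusion algorithm: average two state vectors.
    The patent does not specify the algorithm (e.g. a track
    association algorithm); averaging merely stands in for it."""
    return tuple((a + b) / 2 for a, b in zip(state_a, state_b))

def fuse_subset(track_pairs, states):
    """Initialize the virtual perception sub-information from one
    target of the first pair, then repeatedly fuse it with every
    target that has not yet participated, updating it each time."""
    first_a, first_b = track_pairs[0]
    virtual = states[first_a]                 # initialization
    virtual = fuse(virtual, states[first_b])  # first fusion result
    done = {first_a, first_b}
    for a, b in track_pairs[1:]:
        for target in (a, b):
            if target not in done:
                virtual = fuse(virtual, states[target])  # dynamic update
                done.add(target)
    return virtual

states = {"m1": (0.0, 0.0), "n2": (2.0, 0.0),
          "m2": (4.0, 0.0), "n3": (6.0, 0.0)}
result = fuse_subset([("m1", "n2"), ("m2", "n3")], states)
```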
After the fusion result of the track pair subset is obtained, information from perception areas other than the one corresponding to the subset can be added on the basis of that result and the currently obtained perception information set, so as to obtain a complete fusion result for the vehicle's current driving environment.
On the basis of the foregoing technical solution, optionally, after determining, according to the determined distance, the track pair subset satisfying the distance threshold among the at least one group of track pairs, the method further includes: in the track pair subset, performing feature matching on each group of track pairs in sequence according to the distance between the target objects, and determining the track pairs whose features match successfully, so as to introduce virtual perception sub-information into those track pairs.
For example, feature matching may be performed on each track pair in ascending order of the distance between its two target objects. The distance between target objects serves as a first-layer screening condition; after the track pair set satisfying the distance threshold is determined, it is further screened according to the feature matching results between target objects, yielding preferred track pairs that are both close in distance and matched in features, which improves the efficiency and pertinence of the fusion processing.
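A sketch of the second-layer screening, walking the sequence list nearest-first. The patent does not specify which features are compared; a speed-difference tolerance stands in for the feature match, and all names and values are illustrative:

```python
def match_by_features(seq_list, speeds, tol=0.5):
    """Second-layer screening: walk the candidate pairs in ascending
    order of distance and keep those whose features also match
    (here: speed difference within a tolerance)."""
    matched = []
    for (a, b), _dist in sorted(seq_list, key=lambda item: item[1]):
        if abs(speeds[a] - speeds[b]) <= tol:
            matched.append((a, b))
    return matched

seq_list = [(("m2", "n3"), 1.2), (("m1", "n2"), 0.8)]
speeds = {"m1": 10.0, "n2": 10.1, "m2": 5.0, "n3": 9.0}
matched = match_by_features(seq_list, speeds)  # only (m1, n2) matches
```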
Further, the method further comprises:
if the feature matching fails in the process of sequentially performing feature matching on each group of track pairs, deleting the track pairs with the failed feature matching from the track pair set;
and if the characteristic matching is successful in the process of sequentially performing the characteristic matching on each group of track pairs, deleting other track pairs associated with any target object in the track pairs with the successful characteristic matching from the track pair subset.
In each feature matching step, a failed match indicates that the two target objects in the current track pair do not belong to the same target object in the driving environment, i.e. the pair does not meet the screening condition for preferred track pairs, so it is deleted from the track pair set. A successful match (i.e. a successful track pair association) indicates that the target objects in the current pair belong to the same object; any other track pair associated with either target object of the current pair then cannot contain the same object and can be deleted, after which association information is added to the successfully matched pair for fusion processing. In addition, if the track pairs of the subset are stored in the form of a sequence list, the pairs meeting the deletion condition are deleted from the list in turn, and the successfully matched pairs are stored in another area awaiting fusion, until the sequence list is empty.
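The deletion rules above can be sketched as a loop that drains the sequence list. This is an illustrative reading of the procedure; `prune_and_associate` and the example match oracle are assumptions, not the patent's code:

```python
def prune_and_associate(seq_list, features_match):
    """Process the sequence list until empty: a failed feature match
    drops just that pair; a success records the association and
    drops every remaining pair sharing either target object."""
    associated = []
    while seq_list:
        a, b = seq_list.pop(0)
        if not features_match(a, b):
            continue  # failed match: delete this pair only
        associated.append((a, b))
        seq_list = [(x, y) for x, y in seq_list
                    if x not in (a, b) and y not in (a, b)]
    return associated

ok = {("m1", "n2"), ("m2", "n3")}
pairs = [("m1", "n2"), ("m1", "n3"), ("m2", "n3")]
result = prune_and_associate(pairs, lambda a, b: (a, b) in ok)
# (m1, n3) is dropped because it shares m1 with the matched (m1, n2).
```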
Fig. 5 is a flowchart of another road side sensing and vehicle terminal vehicle-road collaborative fusion processing method provided in this embodiment, taking obstacles as the target objects. As shown in fig. 5, the terminal acquires the perception information set of the vehicle's driving environment through the roadside perception system, and then reassigns sub-IDs to the obstacles corresponding to different roadside units so as to distinguish them. The sub-IDs differ from the device IDs in the roadside perception system: when different roadside units transmit perception sub-information to the terminal, each piece of perception sub-information carries a different device ID, and perception sub-information transmitted by the same roadside unit corresponds to the same roadside unit ID. After the sub-IDs are reassigned, the distance between the two obstacles in each track pair is calculated, and the first-layer screening is performed according to the relation between this distance and the tracking threshold; the track pairs satisfying the tracking threshold are stored in a sequence list. The pairs in the sequence list are then sorted by distance, and feature matching is performed on them in ascending order of distance. This feature matching realizes the second-layer screening, determining the preferred track pairs (i.e. the successfully associated pairs), while the pairs that do not satisfy the conditions are deleted from the sequence list. Finally, deduplication is performed based on the determined preferred track pairs and the association information, and on the basis of the fusion result obtained by deduplication, the obstacle information of non-redundant areas, i.e. non-overlapping obstacle information, is added, so as to obtain the complete fusion result of the vehicle's current driving environment, on which a driving decision is made.
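The whole flow for two roadside units can be condensed into one end-to-end sketch. Everything here is an illustrative assumption: the function name, the use of speed as the matched feature, the thresholds, and the simplification of keeping device A's copy of each matched obstacle.

```python
import math
from itertools import product

def fuse_two_devices(dev_a, dev_b, dist_threshold=5.0, speed_tol=0.5):
    """End-to-end sketch of the Fig. 5 flow for two roadside units:
    build cross-device track pairs, screen by distance, match by a
    feature (speed), then de-duplicate the matched obstacles and
    append the non-overlapping ones."""
    # 1) traverse: one track pair per cross-device obstacle pair
    pairs = list(product(dev_a, dev_b))
    # 2) first-layer screening by distance into a sequence list
    seq = []
    for a, b in pairs:
        d = math.hypot(dev_a[a]["x"] - dev_b[b]["x"],
                       dev_a[a]["y"] - dev_b[b]["y"])
        if d < dist_threshold:
            seq.append(((a, b), d))
    seq.sort(key=lambda item: item[1])  # nearest pairs first
    # 3) second-layer screening by feature matching
    matched = [(a, b) for (a, b), _ in seq
               if abs(dev_a[a]["speed"] - dev_b[b]["speed"]) <= speed_tol]
    # 4) de-duplication: a matched pair is one physical obstacle;
    #    keep device A's copy, then add B's non-overlapping obstacles
    duplicates_in_b = {b for _, b in matched}
    fused = dict(dev_a)
    fused.update({b: obs for b, obs in dev_b.items()
                  if b not in duplicates_in_b})
    return fused

dev_a = {"m1": {"x": 0.0, "y": 0.0, "speed": 10.0}}
dev_b = {"n1": {"x": 1.0, "y": 0.0, "speed": 10.1},
         "n2": {"x": 50.0, "y": 0.0, "speed": 3.0}}
fused = fuse_two_devices(dev_a, dev_b)  # m1/n1 merged, n2 appended
```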
In the technical solution above, at least one group of track pairs is obtained by traversing the perception information set acquired through the roadside perception system, and the track pairs are screened based on the distance between the target objects in each pair and on feature matching to obtain preferred track pairs, so that the preferred pairs are fused first using fusion algorithms such as a track association algorithm. This improves the efficiency of the fusion processing, ensures the accuracy of the fusion result, and thus ensures reasonable and accurate driving decisions in the vehicle system. Meanwhile, the solution of this embodiment overcomes the small perception range of vehicle-mounted perception devices during driving and the high cost of deploying more perception devices on the vehicle: it expands the perception and detection range, supplements the blind areas of vehicle-mounted perception, provides the vehicle with more environment perception information, reduces the per-vehicle deployment cost, and helps promote the popularization of intelligent traffic technology.
EXAMPLE four
Fig. 6 is a schematic structural diagram of a roadside perception and vehicle terminal vehicle-road collaborative fusion processing device provided in the fourth embodiment of the present invention, which is applicable to a situation where the perception information acquired from the roadside perception system is subjected to fusion processing based on the cooperation between the roadside perception system and the terminal, so as to assist driving. The device can be implemented in a software and/or hardware manner, and can be integrated on any terminal capable of executing the perception information fusion processing operation, and the terminal includes but is not limited to a vehicle-mounted device, a mobile terminal and the like.
As shown in fig. 6, the roadside awareness and vehicle terminal vehicle-road collaborative fusion processing apparatus provided in this embodiment may include an awareness information set obtaining module 410 and a fusion processing module 420, where:
a perception information set obtaining module 410, configured to obtain a perception information set of a driving environment through a roadside sensing system, where perception sub-information in the perception information set includes perception sub-information transmitted by different devices in the roadside sensing system;
and the fusion processing module 420 is configured to introduce virtual sensing sub-information into the sensing information set, and perform fusion processing on the sensing information set by using a fusion algorithm, where the virtual sensing sub-information is used to associate the sensing sub-information transmitted by different devices.
Optionally, the fusion processing module 420 includes:
the virtual perception sub-information initialization unit is used for introducing virtual perception sub-information into the perception information set, determining perception sub-information transmitted by any equipment in the roadside perception system in the perception information set as target perception sub-information, and initializing the virtual perception sub-information by using the target perception sub-information;
the first fusion processing unit is used for carrying out fusion processing on the initialized virtual perception sub-information and perception sub-information transmitted by any equipment in the roadside perception system except the target perception sub-information in the perception information set by using a fusion algorithm to obtain a current fusion processing result;
and the fusion processing repeated execution unit is used for updating the initialized virtual perception sub-information with the current fusion processing result, taking the updated virtual perception sub-information as the new initialized virtual perception sub-information, and repeatedly executing the fusion processing between the initialized virtual perception sub-information and the perception sub-information, transmitted by any device in the roadside perception system, that has not participated in the fusion processing in the perception information set, until all perception sub-information in the perception information set has participated in the fusion processing.
Optionally, the apparatus further comprises:
and a sub-identifier allocating module, configured to allocate, in the sensing information set, sub-identifiers to target objects in the sensing sub-information transmitted by different devices in the roadside sensing system before the fusion processing module 420 performs the operation of performing fusion processing on the sensing information set, where the sub-identifiers are used to distinguish the target objects corresponding to different devices in the roadside sensing system.
Optionally, the apparatus further comprises:
a track pair determining module, configured to traverse the sensing sub-information transmitted by different devices in the roadside sensing system before the fusion processing module 420 performs an operation of introducing the virtual sensing sub-information into the sensing information set, to obtain at least one set of track pairs, where each track pair includes any two target objects respectively corresponding to different devices in the roadside sensing system;
the target object distance determining module is used for determining the distance between two target objects in each group of track pairs according to the respective information of the target objects in each group of track pairs;
and the track pair subset determining module is used for determining a track pair subset meeting a distance threshold in at least one group of track pairs according to the determined distance so as to introduce virtual perception sub-information into the track pair subset.
Optionally, the apparatus further comprises:
and the characteristic matching module is used for performing characteristic matching on each group of track pairs in the track pair subset according to the distance between the target objects after the track pair subset determining module determines the operation of the track pair subset meeting the distance threshold in at least one group of track pairs according to the determined distance, and determining the track pairs with successful characteristic matching so as to introduce virtual perception sub-information into the track pairs with successful characteristic matching.
Optionally, the apparatus further comprises:
the first track pair deleting module is used for deleting the track pairs with the failed characteristic matching from the track pair subset if the characteristic matching fails in the process of sequentially performing the characteristic matching on each group of track pairs;
and the second track pair deleting module is used for deleting other track pairs associated with any target object in the track pairs with successfully matched characteristics from the track pair subset if the characteristic matching is successful in the process of sequentially performing the characteristic matching on each group of track pairs.
Optionally, the track pair subset determining module is specifically configured to:
and storing the track pairs meeting the distance threshold in at least one group of track pairs into a sequence table according to the determined distance to obtain a track pair set.
Optionally, the fusion processing module 420 includes:
the perception sub-information duplication removing unit is used for removing duplication of the coincidence perception sub-information transmitted by different equipment in the roadside perception system in the perception information set;
and the perception sub-information fusion unit is used for fusing the non-coincident perception sub-information transmitted by different equipment in the roadside perception system in the perception information set.
The road side perception and vehicle terminal vehicle road collaborative fusion processing device provided by the embodiment of the present invention can execute the road side perception and vehicle terminal vehicle road collaborative fusion processing method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the executed method. For details not described in this embodiment, reference may be made to the description of any method embodiment of the present invention.
EXAMPLE five
Fig. 7 is a schematic structural diagram of a terminal according to a fifth embodiment of the present invention. Fig. 7 illustrates a block diagram of an exemplary terminal 512 suitable for use in implementing embodiments of the present invention. The terminal 512 shown in fig. 7 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention. The terminal 512 may typically be a vehicle-mounted device or a mobile terminal or the like.
As shown in fig. 7, the terminal 512 is represented in the form of a general-purpose terminal. The components of the terminal 512 may include, but are not limited to: one or more processors 516, a storage device 528, and a bus 518 that couples the various system components including the storage device 528 and the processors 516.
Bus 518 represents one or more of any of several types of bus structures, including a memory device bus or memory device controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
The terminal 512 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by terminal 512 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 528 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 530 and/or cache Memory 532. The terminal 512 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 534 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, and commonly referred to as a "hard drive"). Although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk such as a Compact disk Read-Only Memory (CD-ROM), Digital Video disk Read-Only Memory (DVD-ROM) or other optical media may be provided. In these cases, each drive may be connected to bus 518 through one or more data media interfaces. Storage 528 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 540 having a set (at least one) of program modules 542 may be stored, for example, in storage 528, such program modules 542 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may include an implementation of a network environment. The program modules 542 generally perform the functions and/or methods of the described embodiments of the invention.
The terminal 512 may also communicate with one or more external devices 514 (e.g., a keyboard, a pointing device, a display 524, etc.), with one or more devices that enable a user to interact with the terminal 512, and/or with any devices (e.g., a network card, a modem, etc.) that enable the terminal 512 to communicate with one or more other computing terminals. Such communication may occur via input/output (I/O) interfaces 522. Also, the terminal 512 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 520. As shown in fig. 7, the network adapter 520 communicates with the other modules of the terminal 512 via the bus 518. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the terminal 512, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (Redundant Arrays of Independent Disks) systems, tape drives, and data backup storage systems, among others.
The processor 516 executes various functional applications and data processing by running the program stored in the storage device 528, for example, implementing a road side sensing and vehicle terminal vehicle road collaborative fusion processing method provided by any embodiment of the present invention, which may include:
acquiring a perception information set of a driving environment through a roadside perception system, wherein the perception information set comprises perception sub-information transmitted by different devices in the roadside perception system;
and introducing virtual perception sub-information into the perception information set, and performing fusion processing on the perception information set by using a fusion algorithm, wherein the virtual perception sub-information is used for associating the perception sub-information transmitted by different devices.
EXAMPLE six
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a road side sensing and vehicle terminal vehicle-road collaborative fusion processing method provided in any embodiment of the present invention, where the method may include:
acquiring a perception information set of a driving environment through a roadside perception system, wherein the perception information set comprises perception sub-information transmitted by different devices in the roadside perception system;
and introducing virtual perception sub-information into the perception information set, and performing fusion processing on the perception information set by using a fusion algorithm, wherein the virtual perception sub-information is used for associating the perception sub-information transmitted by different devices.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or terminal. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A road side perception and vehicle terminal vehicle road collaborative fusion processing method, characterized by comprising:
acquiring a perception information set of a driving environment through a roadside perception system, wherein the perception information set comprises perception sub-information transmitted by different devices in the roadside perception system, and the perception sub-information in the perception information set is obtained from different perception areas;
introducing virtual perception sub-information into the perception information set, and performing fusion processing on the perception information set by using a fusion algorithm, wherein the virtual perception sub-information is used for associating the perception sub-information transmitted by the different devices;
wherein, before introducing the virtual perception sub-information into the perception information set, the method further comprises:
traversing the perception sub-information transmitted by the different devices in the roadside perception system to obtain at least one group of track pairs, wherein each track pair comprises two target objects respectively corresponding to different devices;
determining the distance between the two target objects in each group of track pairs according to the respective information of the target objects in each group of track pairs; and
determining, according to the distance, a track pair subset meeting a distance threshold in the at least one group of track pairs, so as to introduce the virtual perception sub-information into the track pair subset.
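The pre-processing steps of claim 1 — pairing target objects reported by different roadside devices and keeping only pairs within a distance threshold — can be sketched as follows. The data layout (`{device_id: [(target_id, x, y), ...]}`), the Euclidean distance metric, and all names are illustrative assumptions, not details specified by the patent.

```python
from itertools import combinations
from math import hypot

def candidate_track_pairs(tracks_by_device, distance_threshold):
    """Pair target objects reported by different roadside devices and keep
    only pairs whose positions lie within distance_threshold.

    tracks_by_device: {device_id: [(target_id, x, y), ...]} -- hypothetical layout.
    Returns a list of ((device_a, id_a), (device_b, id_b), distance) tuples,
    i.e. the "track pair subset" into which virtual sub-information is introduced.
    """
    pairs = []
    for dev_a, dev_b in combinations(tracks_by_device, 2):
        for id_a, xa, ya in tracks_by_device[dev_a]:
            for id_b, xb, yb in tracks_by_device[dev_b]:
                d = hypot(xa - xb, ya - yb)  # planar distance between the two targets
                if d <= distance_threshold:
                    pairs.append(((dev_a, id_a), (dev_b, id_b), d))
    return pairs
```

A camera target 1 m from a lidar target would survive a 5 m threshold, while one 50 m away would be filtered out.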
2. The method according to claim 1, wherein introducing the virtual perception sub-information into the perception information set and performing fusion processing on the perception information set by using the fusion algorithm comprises:
introducing the virtual perception sub-information into the perception information set, determining perception sub-information transmitted by any one device in the roadside perception system in the perception information set as target perception sub-information, and initializing the virtual perception sub-information by using the target perception sub-information;
performing, by using the fusion algorithm, fusion processing on the initialized virtual perception sub-information and perception sub-information, other than the target perception sub-information, transmitted by any device in the roadside perception system in the perception information set, to obtain a current fusion processing result; and
updating the initialized virtual perception sub-information by using the current fusion processing result, taking the updated virtual perception sub-information as newly initialized virtual perception sub-information, and repeatedly performing the fusion processing between the initialized virtual perception sub-information and perception sub-information in the perception information set that has not yet been involved in the fusion processing, until all the perception sub-information in the perception information set has been involved in the fusion processing.
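The iterative scheme of claim 2 — seed a virtual estimate from one device's sub-information, then fold in each remaining device's sub-information one at a time, updating the virtual estimate after every step — can be sketched as below. The pairwise fusion function (`average_fuse`, a toy midpoint) and the tuple representation are placeholders for whatever fusion algorithm and perception payload an implementation actually uses.

```python
def iterative_fusion(per_device_estimates, fuse):
    """Sequentially fuse per-device estimates through a running 'virtual'
    estimate. `fuse` is any pairwise fusion function; names are illustrative.
    """
    it = iter(per_device_estimates)
    virtual = next(it)               # initialize with one device's estimate
    for est in it:                   # each remaining device joins the fusion once
        virtual = fuse(virtual, est) # update the virtual sub-information
    return virtual                   # final fusion processing result

def average_fuse(a, b):
    # Toy fusion: midpoint of two position estimates (stand-in for a real algorithm).
    return tuple((x + y) / 2 for x, y in zip(a, b))
```

Fusing the estimates `(0, 0)`, `(2, 2)`, `(3, 3)` with the toy midpoint proceeds as `(0, 0) → (1, 1) → (2, 2)`.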
3. The method of claim 1, wherein, before the fusion processing on the perception information set, the method further comprises:
in the perception information set, assigning sub-identifiers to the target objects in the perception sub-information transmitted by the different devices in the roadside perception system, wherein the sub-identifiers are used for distinguishing the target objects corresponding to the different devices.
4. The method of claim 1, wherein, after determining, according to the distance, the track pair subset meeting the distance threshold in the at least one group of track pairs, the method further comprises:
in the track pair subset, sequentially performing feature matching on each group of track pairs according to the distance between the target objects, and determining the track pairs whose features are successfully matched, so as to introduce the virtual perception sub-information into the track pairs whose features are successfully matched.
5. The method of claim 4, further comprising:
if the feature matching fails in the process of sequentially performing the feature matching on each group of track pairs, deleting the track pairs whose feature matching failed from the track pair subset; and
if the feature matching succeeds in the process of sequentially performing the feature matching on each group of track pairs, deleting, from the track pair subset, the other track pairs associated with either target object in the track pairs whose features are successfully matched.
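Claims 4–6 together describe a greedy scan: candidate pairs are walked in ascending-distance order; a successful feature match claims both targets and evicts every remaining pair that involves either of them, while a failed match simply drops the pair. A minimal sketch, in which the `features_match` predicate and the target-key representation are assumptions:

```python
def greedy_match(sorted_pairs, features_match):
    """Walk candidate track pairs in ascending-distance order.

    sorted_pairs: [(key_a, key_b, distance), ...] already sorted by distance.
    features_match: hypothetical predicate deciding whether two targets'
    features agree. Returns the pairs whose features matched successfully.
    """
    matched, used = [], set()
    for a, b, _dist in sorted_pairs:
        if a in used or b in used:
            continue                 # a conflicting pair was already matched and evicted this one
        if features_match(a, b):
            matched.append((a, b))   # successful match: claim both targets
            used.update((a, b))
        # else: feature matching failed -> the pair is simply dropped
    return matched
```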
6. The method of claim 1, wherein determining, according to the distance, the track pair subset meeting the distance threshold in the at least one group of track pairs comprises:
storing, according to the distance, the track pairs meeting the distance threshold in the at least one group of track pairs into an ordered list, to obtain the track pair subset.
7. The method according to claim 1, wherein performing fusion processing on the perception information set comprises:
de-duplicating the overlapping perception sub-information, transmitted by different devices in the roadside perception system, in the perception information set; and
fusing the non-overlapping perception sub-information, transmitted by different devices in the roadside perception system, in the perception information set.
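The two branches of claim 7 can be sketched as one pass over the information set: targets linked by a matched pair are treated as duplicate observations of one physical object and emitted once (merged), while unmatched targets pass through unchanged. The payload-as-dict representation and the naive dict-union merge are illustrative assumptions.

```python
def fuse_information_set(sub_infos, matched_pairs):
    """De-duplicate overlapping sub-information and keep the rest.

    sub_infos: {target_key: payload_dict} -- hypothetical layout.
    matched_pairs: [(key_a, key_b), ...] pairs judged to be the same object.
    """
    merged, consumed = [], set()
    for a, b in matched_pairs:
        merged.append({**sub_infos[a], **sub_infos[b]})  # naive merge of duplicate observations
        consumed.update((a, b))
    for key, payload in sub_infos.items():
        if key not in consumed:
            merged.append(payload)   # non-overlapping sub-information passes through
    return merged
```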
8. A roadside perception and vehicle terminal vehicle-road cooperative fusion processing device, characterized by comprising:
a perception information set acquisition module, configured to acquire a perception information set of a driving environment through a roadside perception system, wherein the perception information set comprises perception sub-information transmitted by different devices in the roadside perception system, and the perception sub-information in the perception information set is obtained from different perception areas;
a fusion processing module, configured to introduce virtual perception sub-information into the perception information set and perform fusion processing on the perception information set by using a fusion algorithm, wherein the virtual perception sub-information is used for associating the perception sub-information transmitted by the different devices;
wherein the device further comprises:
a track pair determining module, configured to traverse the perception sub-information transmitted by the different devices in the roadside perception system before the fusion processing module performs the operation of introducing the virtual perception sub-information into the perception information set, to obtain at least one group of track pairs, wherein each track pair comprises two target objects respectively corresponding to different devices in the roadside perception system;
a target object distance determining module, configured to determine the distance between the two target objects in each group of track pairs according to the respective information of the target objects in each group of track pairs; and
a track pair subset determining module, configured to determine, according to the determined distance, a track pair subset meeting a distance threshold in the at least one group of track pairs, so as to introduce the virtual perception sub-information into the track pair subset.
9. A terminal, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the road side perception and vehicle terminal vehicle road collaborative fusion processing method according to any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the road side perception and vehicle terminal vehicle road collaborative fusion processing method according to any one of claims 1 to 7.
CN201910420656.1A 2019-05-20 2019-05-20 Road side perception and vehicle terminal vehicle road cooperative fusion processing method and device Active CN109996176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910420656.1A CN109996176B (en) 2019-05-20 2019-05-20 Road side perception and vehicle terminal vehicle road cooperative fusion processing method and device

Publications (2)

Publication Number Publication Date
CN109996176A CN109996176A (en) 2019-07-09
CN109996176B true CN109996176B (en) 2021-08-10

Family

ID=67136745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910420656.1A Active CN109996176B (en) 2019-05-20 2019-05-20 Road side perception and vehicle terminal vehicle road cooperative fusion processing method and device

Country Status (1)

Country Link
CN (1) CN109996176B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276972A (en) * 2019-07-16 2019-09-24 启迪云控(北京)科技有限公司 A kind of object cognitive method and system based on car networking
CN110299010A (en) * 2019-07-26 2019-10-01 交通运输部公路科学研究所 A kind of information processing method towards bus or train route collaboration roadside device
CN113064415A (en) * 2019-12-31 2021-07-02 华为技术有限公司 Method and device for planning track, controller and intelligent vehicle
CN111601266B (en) * 2020-03-31 2022-11-22 浙江吉利汽车研究院有限公司 Cooperative control method and system
CN111768621B (en) * 2020-06-17 2021-06-04 北京航空航天大学 Urban road and vehicle fusion global perception method based on 5G
CN111754798A (en) * 2020-07-02 2020-10-09 上海电科智能系统股份有限公司 Method for realizing detection of vehicle and surrounding obstacles by fusing roadside laser radar and video
CN114386481A (en) * 2021-12-14 2022-04-22 京东鲲鹏(江苏)科技有限公司 Vehicle perception information fusion method, device, equipment and storage medium
CN115273473A (en) * 2022-07-29 2022-11-01 阿波罗智联(北京)科技有限公司 Method and device for processing perception information of road side equipment and automatic driving vehicle

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7460951B2 (en) * 2005-09-26 2008-12-02 Gm Global Technology Operations, Inc. System and method of target tracking using sensor fusion
CN102831766B (en) * 2012-07-04 2014-08-13 武汉大学 Multi-source traffic data fusion method based on multiple sensors
CN105390029B (en) * 2015-11-06 2019-04-26 武汉理工大学 Ship collision prevention aid decision-making method and system based on Track Fusion and Trajectory Prediction
CN107798870B (en) * 2017-10-25 2019-10-22 清华大学 A kind of the track management method and system, vehicle of more vehicle target tracking
CN108010360A (en) * 2017-12-27 2018-05-08 中电海康集团有限公司 A kind of automatic Pilot context aware systems based on bus or train route collaboration
CN108803622B (en) * 2018-07-27 2021-10-26 吉利汽车研究院(宁波)有限公司 Method and device for processing target detection data


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant