CN110132290B - Intelligent driving road side equipment perception information fusion processing method, device and equipment


Info

Publication number
CN110132290B
CN110132290B (application CN201910420661.2A)
Authority
CN
China
Prior art keywords
information
perception
sub
fusion processing
road side
Prior art date
Legal status
Active
Application number
CN201910420661.2A
Other languages
Chinese (zh)
Other versions
CN110132290A (en
Inventor
曹获
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910420661.2A priority Critical patent/CN110132290B/en
Publication of CN110132290A publication Critical patent/CN110132290A/en
Application granted granted Critical
Publication of CN110132290B publication Critical patent/CN110132290B/en


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/251: Fusion techniques of input or preprocessed data

Abstract

The embodiment of the invention discloses a method, apparatus, device and storage medium for fusion processing of perception information from intelligent driving roadside devices. The method comprises: receiving a perception information set for a target area, where the set comprises perception sub-information transmitted by different roadside devices and the perception sub-information carries the same timestamp based on time calibration among those devices; and performing fusion processing on the perception information set using a fusion algorithm. The embodiment addresses the limited sensing range of on-board perception devices during driving and the high cost of deploying additional perception devices on each vehicle: it extends the perception and detection range during driving, reduces per-vehicle sensor deployment cost, ensures the accuracy of the fusion processing result, and helps promote the adoption of intelligent transportation technology.

Description

Intelligent driving road side equipment perception information fusion processing method, device and equipment
Technical Field
The embodiment of the invention relates to the technical field of intelligent traffic, in particular to a method, a device, equipment and a storage medium for fusion processing of perception information of intelligent driving road side equipment.
Background
Intelligent driving, a product of the era of intelligent manufacturing and "Internet Plus", is driving a comprehensive upgrade and reshaping of the automotive industry's ecosystem and business models, and is of great significance for promoting scientific and technological progress, economic development, social harmony, and comprehensive national strength.
Environmental perception is the foundation of intelligent driving technology. Perception devices mounted on a vehicle sense the surrounding environment to enable intelligent driver assistance. However, because an on-board perception device has a fixed position and a limited viewing angle, its sensing range is limited, and the perception information it collects can fall short of the requirements of intelligent driving; in automated driving especially, ensuring driving safety demands more comprehensive environmental information. Moreover, equipping every vehicle with multiple perception devices imposes a high deployment cost on vehicle owners.
Disclosure of Invention
The embodiment of the invention provides a method, an apparatus, a device and a storage medium for fusion processing of perception information of intelligent driving roadside devices, which expand the perception and detection range during driving, ensure the accuracy of the fusion processing result, and reduce the deployment cost of single-vehicle sensors.
In a first aspect, an embodiment of the present invention provides a method for fusion processing of perception information of an intelligent driving road side device, where the method includes:
receiving a perception information set in a target area, wherein the perception information set comprises perception sub-information transmitted by different roadside devices, and the perception sub-information has the same timestamp based on time calibration among the different roadside devices;
and performing fusion processing on the perception information set by using a fusion algorithm.
In a second aspect, an embodiment of the present invention further provides an intelligent driving road side device perception information fusion processing apparatus, where the apparatus includes:
the system comprises a perception information set acquisition module, a time alignment module and a time alignment module, wherein the perception information set acquisition module is used for receiving a perception information set in a target area, the perception information set comprises perception sub-information transmitted by different road side equipment, and the perception sub-information has the same timestamp based on time alignment among the different road side equipment;
and the fusion processing module is used for carrying out fusion processing on the perception information set by utilizing a fusion algorithm.
In a third aspect, an embodiment of the present invention further provides an apparatus, including:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the intelligent driving road side device perception information fusion processing method according to any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for processing perception information fusion of an intelligent driving road side device according to any embodiment of the present invention.
According to the embodiment of the invention, a distributed roadside perception system is deployed along the road. A pre-designated target device in the system receives perception sub-information about the vehicle driving environment transmitted by different roadside devices in the system, where the sub-information transmitted by the different devices carries the same timestamp; after fusion processing, the fusion result is transmitted to the vehicle for use in driver assistance. On one hand, the embodiment exploits the wider detection range of roadside perception devices: it solves the problem of the limited sensing range of on-board perception devices during driving, extends the perception and detection range, supplements the blind zones of on-board sensing, and can provide the vehicle with more environmental perception information, ensuring the rationality of driving decisions; meanwhile, the timestamp consistency of the perception sub-information guarantees the accuracy and real-time performance of the fusion result even in driving scenarios without GPS-based time synchronization. On the other hand, the embodiment does not rely on on-board perception devices to collect environmental data, which solves the problem of the high cost of deploying additional perception devices on a vehicle, reduces per-vehicle sensor deployment cost, and helps promote the adoption of intelligent transportation technology.
Drawings
Fig. 1 is a flowchart of a processing method for fusion of perception information of an intelligent driving road side device according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a roadside sensing system in communication with a terminal according to an embodiment of the present invention;
fig. 3 is a flowchart of a perception information fusion processing method for intelligent driving road side equipment according to a second embodiment of the present invention;
fig. 4 is a flowchart of a processing method for fusion of perception information of an intelligent driving road side device according to a third embodiment of the present invention;
fig. 5 is a flowchart of another method for processing perception information fusion of an intelligent driving road side device according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of an intelligent driving road side device perception information fusion processing device according to a fourth embodiment of the present invention;
fig. 7 is a schematic structural diagram of an apparatus according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a method for fusion processing of perception information of an intelligent driving roadside device according to an embodiment of the present invention, which may be applied to a situation where perception information of a vehicle driving environment is collected and fused at a roadside end, and then a fusion processing result is sent to a terminal on a vehicle, such as an on-board device or a mobile terminal, so as to assist driving. The method can be executed by an intelligent driving roadside device perception information fusion processing device, the device can be realized in a software and/or hardware mode, and can be integrated on any device capable of executing the intelligent driving roadside device perception information fusion processing operation, such as roadside devices with calculation processing capacity in a roadside perception system.
This embodiment is applicable to driving scenarios in which GPS signals cannot be received, such as tunnels and underground garages, where different devices cannot automatically perform time synchronization via GPS. Multiple target areas (which can be set adaptively) are determined based on the road layout in the specific driving scenario. When a roadside sensing system is deployed in each target area, the different devices in the system are time-calibrated using a preset calibration method; the vehicle's driving-environment data is then collected in real time and comprehensively, fused, and sent to a terminal on the vehicle.
Illustratively, the roadside sensing system comprises various sensing devices, computing devices, communication devices and the like in each target area. The computing devices include but are not limited to a Road Side Computing Unit (RSCU); the communication devices include but are not limited to a Road Side Unit (RSU); the types of sensing devices include but are not limited to image sensors (cameras), millimeter-wave radar, lidar and ultrasonic sensors, and the number deployed of each type may be set according to actual needs. Each computing device can be connected with a plurality of sensing devices of at least one type and can perform preliminary processing on the environmental data they collect to obtain corresponding perception sub-information, which may correspond to a part of the sensing area within the target area. Data interaction between different computing devices may be based on wired or wireless communication, for example via fiber-optic connections. The deployment density of computing devices in each target area may be set according to requirements; this embodiment imposes no particular limitation.
Furthermore, one or more of the computing devices in each target area may be pre-designated as target devices to implement the technical solution of this embodiment. That is, a target device not only performs preliminary processing on the environmental data collected by the sensing devices connected to it, but also receives in real time the perception sub-information transmitted by the other computing devices connected to it, and performs fusion processing across the sub-information transmitted by the multiple computing devices. The target device may send the final fusion result to a terminal on the vehicle through one or more communication devices; the number of communication devices may be less than the number of computing devices. While the vehicle is driving, the terminal can obtain the required fusion result directly from any communication device within communication range, without performing the fusion itself. Illustratively, an On Board Unit (OBU) may be integrated on the terminal to communicate with the communication devices and obtain the fusion processing result.
On the basis of ensuring that perception information about the current driving environment can be fully acquired, the deployment density of each device in the roadside sensing system may be set according to requirements; this embodiment imposes no particular limitation, and the density may be determined according to the communication distance between different devices and/or the number of partitions of the sensing detection range. The distance between the target device and a communication device must satisfy the communicable-distance requirement; further, the target device and the communication device may be integrated into one piece of equipment.
Fig. 2 is a schematic structural diagram of a roadside sensing system in communication with a terminal according to an embodiment of the present invention; it is given as an example and should not be construed as a specific limitation of this embodiment. As shown in fig. 2, the roadside sensing system includes distributed sensing devices, roadside computing units and roadside units. Within a target area, each roadside computing unit performs preliminary processing on the environmental data collected by a plurality of sensing devices to obtain perception sub-information and transmits it to the roadside computing unit serving as the target device; after the target device fuses the received perception sub-information, the fusion result is sent to a terminal on the vehicle through the roadside unit connected to the target device. A sending module may be integrated in the roadside unit; the sending module may subscribe in advance to the topic of the perception information fusion processing result and broadcast the received fusion result. The technical solution of the embodiment of the invention is explained in detail below with reference to the accompanying drawings:
as shown in fig. 1, the method for processing perception information fusion of an intelligent driving road side device provided in this embodiment may include:
s110, receiving a perception information set in the target area, wherein the perception information set comprises perception sub-information transmitted by different road side devices, and the perception sub-information has the same timestamp based on time calibration among the different road side devices.
For a specific driving environment, the environment may be divided in advance into multiple target areas, each of which may contain multiple sensing-range partitions and is deployed with a roadside perception system to acquire a perception information set. In this embodiment, the different roadside devices that transmit the perception sub-information are the devices that receive the environmental data collected by the sensing devices and perform preliminary processing on it to obtain the perception sub-information; specifically, they may be the computing devices in the roadside perception system, and the target device executing the technical solution of this embodiment may be one or more devices pre-designated among those roadside devices.
For example, when deploying the roadside sensing system, a switch may serve as the core switching device of the network, with the roadside devices and other equipment forming a distributed local area network. A local NTP (Network Time Protocol) server is set up on any one of the roadside devices; its local time serves as the reference time, and the other devices are calibrated against that reference, achieving time consistency across the roadside sensing system and, further, maintaining time synchronization of all devices deployed across the whole driving environment. NTP is a protocol for synchronizing the clocks of the computers in a network. When the local times of the different roadside devices agree, the perception sub-information in the perception information set carries the same timestamp, which ensures the accuracy and real-time performance of the fusion processing result.
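The calibration step above can be illustrated with a minimal sketch, assuming the classic four-timestamp NTP offset formula (the function names here are invented for illustration; a real deployment would run an actual NTP client against the local server):

```python
def ntp_offset(t1, t2, t3, t4):
    """Classic NTP clock-offset estimate.
    t1: client send time, t2: server receive time,
    t3: server send time, t4: client receive time (all in seconds)."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

def calibrate_timestamp(local_ts, offset):
    """Shift a locally generated sensor timestamp onto the reference clock,
    so sub-information from different devices carries a common time base."""
    return local_ts + offset

# In this example the device clock runs 0.5 s behind the reference server:
off = ntp_offset(t1=10.00, t2=10.51, t3=10.52, t4=10.03)
print(round(off, 2))  # estimated offset in seconds
```

Once every roadside device applies such an offset, sub-information collected at the same instant can be grouped by identical timestamps before fusion.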
And S120, carrying out fusion processing on the perception information set by using a fusion algorithm.
In this embodiment, the fusion algorithm may be any available prior-art algorithm capable of fusing perception information, including but not limited to a track association algorithm, the Hungarian algorithm, the maximum-weight bipartite matching algorithm (Kuhn-Munkres, KM algorithm), and the like. Further, the method also comprises: transmitting the fusion processing result to the vehicle, for example to terminals on the vehicle such as a Vehicle-to-X (V2X) unit, a smartphone or a personal computer, to assist the vehicle system in making driving decisions.
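As a concrete illustration of one of the algorithms named above, the following sketch associates target tracks reported by two roadside devices using the Hungarian/KM algorithm via `scipy.optimize.linear_sum_assignment`; the positions and the gating threshold are invented example values, not values from the patent:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks_a, tracks_b, gate=2.0):
    """Match tracks from device A to device B with minimum total distance;
    pairs farther apart than `gate` metres are rejected as non-matches."""
    cost = np.linalg.norm(tracks_a[:, None, :] - tracks_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # Hungarian/KM assignment
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= gate]

a = np.array([[0.0, 0.0], [10.0, 0.0]])               # objects seen by device A
b = np.array([[0.5, 0.1], [30.0, 30.0], [9.8, 0.2]])  # objects seen by device B
print(associate(a, b))  # A0 pairs with B0 and A1 with B2; B1 is outside the gate
```

Matched pairs would then be merged as coincident observations of the same physical object, while unmatched tracks are kept as distinct objects.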
Perception information about the vehicle's current driving environment is acquired and fused by the roadside perception system deployed along the road; each vehicle only needs a terminal that interacts with the roadside perception system to obtain the required fusion result, which saves the cost of deploying additional perception devices on the individual vehicle. Moreover, the roadside perception system's devices collect environmental data from wider and more flexible viewpoints. In particular, where a single vehicle's perception devices leave blind zones, the comprehensive placement of sensing devices and roadside equipment in the roadside perception system can overcome those blind zones and supplement the perception information within them, improving the accuracy of driving decisions, enhancing driving safety, and reducing traffic accidents.
Optionally, the fusion processing is performed on the perception information set by using a fusion algorithm, including: and introducing virtual perception sub-information into the perception information set, and performing fusion processing on the perception information set by using a fusion algorithm, wherein the virtual perception sub-information is used for associating the perception sub-information transmitted by different road side equipment.
During fusion processing, the virtual perception sub-information serves as an association bridge between the perception sub-information transmitted by different roadside devices in the roadside perception system, so that the sub-information transmitted by any two or more roadside devices becomes mutually associated. For example, by means of the virtual perception sub-information, the sub-information transmitted by different roadside devices undergoes cyclic association fusion, which avoids leaving part of the sub-information unfused and thereby producing an inaccurate, incomplete fusion result that would affect driving decisions.
Optionally, the fusion processing is performed on the perception information set, and includes:
removing the duplication of the coincident perception sub-information transmitted by different roadside equipment in the perception information set;
and fusing the non-coincident perception sub-information transmitted by different roadside equipment in the perception information set.
Because the sensing areas of the multiple sensing devices deployed in a target area overlap, coincident perception sub-information exists among the sub-information transmitted by different roadside devices; the coincident sub-information can be removed based on feature identification and matching, and the non-coincident sub-information is then integrated to obtain complete perception information of the vehicle's current driving environment.
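The de-duplication step can be sketched as follows; this is a minimal illustrative reading (simple distance gating stands in for the feature identification and matching the text mentions, and the coordinates are invented):

```python
# Detections pooled from different roadside devices that lie within `eps`
# metres of each other are treated as the same physical object; only one
# copy is kept, and non-coincident detections are all retained.

def fuse_detections(detections, eps=1.0):
    """detections: list of (x, y) positions pooled from all roadside devices."""
    fused = []
    for d in detections:
        if all((d[0] - f[0]) ** 2 + (d[1] - f[1]) ** 2 > eps ** 2 for f in fused):
            fused.append(d)   # non-coincident: keep
        # else: coincides with an already-kept detection, dropped as duplicate
    return fused

pooled = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0)]  # first two overlap
print(fuse_detections(pooled))
```

The surviving list plays the role of the integrated, complete perception of the driving environment described above.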
According to the technical solution of this embodiment, a distributed roadside perception system is deployed along the road; a pre-designated target device in the system receives perception sub-information about the vehicle driving environment transmitted by different roadside devices in the system, where the sub-information transmitted by the different devices carries the same timestamp; and after fusion processing, the fusion result is transmitted to the vehicle for use in driver assistance. On one hand, this solution exploits the wider detection range of roadside perception devices: it solves the problem of the limited sensing range of on-board perception devices during driving, extends the perception and detection range, supplements the blind zones of on-board sensing, can provide the vehicle with more environmental perception information, and ensures the rationality of driving decisions; meanwhile, the timestamp consistency of the perception sub-information guarantees the accuracy and real-time performance of the fusion result even in driving scenarios without GPS-based time synchronization. On the other hand, because this solution does not rely on on-board perception devices to collect environmental data, it solves the problem of the high cost of deploying additional perception devices on a vehicle, reduces per-vehicle sensor deployment cost, helps promote the adoption of intelligent transportation technology, and supports the construction of a society-wide intelligent transportation system.
Example two
Fig. 3 is a flowchart of an intelligent driving road side device perception information fusion processing method provided in the second embodiment of the present invention, and this embodiment is further optimized based on the above embodiments. As shown in fig. 3, the method may include:
s210, receiving a perception information set in the target area, wherein the perception information set comprises perception sub-information transmitted by different road side devices, and the perception sub-information has the same timestamp based on time calibration among the different road side devices.
S220, virtual perception sub-information is introduced into the perception information set, perception sub-information transmitted by any road side device in the perception information set is determined to be target perception sub-information, and the virtual perception sub-information is initialized by the aid of the target perception sub-information.
For example, the target device selects all perception sub-information within a certain time range according to the current timestamp to form a perception information set {A, B, C}, comprising perception sub-information A, B and C transmitted by three roadside devices respectively. Virtual perception sub-information V0 is introduced into the set, yielding {V0, A, B, C}; V0 is then initialized with perception sub-information A, at which point V0 contains the same information content as A.
And S230, performing fusion processing on the initialized virtual perception sub-information and perception sub-information transmitted by any road side equipment except the target perception sub-information in the perception information set by using a fusion algorithm to obtain a current fusion processing result.
Continuing with the above example, any fusion algorithm may be used on the perception information set {V0, A, B, C}: the initialized virtual perception sub-information V0 is fused with either of the perception sub-information B and C. For example, fusing the initialized V0 with B yields a fusion result X, and X is then used to update V0 into updated virtual perception sub-information V1; at this point V1 effectively contains the content of both perception sub-information A and B.
S240, updating the initialized virtual perception sub-information with the current fusion processing result, taking the updated virtual perception sub-information as the newly initialized virtual perception sub-information, and repeating the fusion between it and perception sub-information transmitted by any roadside device that has not yet participated in fusion, until all perception sub-information in the perception information set has participated in the fusion processing.
In the above example, perception sub-information C in the set {A, B, C} has not yet participated in fusion, so the updated virtual perception sub-information V1 is fused with C, which amounts to fusing perception sub-information A and B with C simultaneously; this yields a fusion result Y that contains the content of A, B and C at once. If perception sub-information that had not participated in fusion still remained in the set, Y would be used to update V1 into updated virtual perception sub-information V2, and the currently updated V2 would be fused with the remaining unfused sub-information. That is, for the perception sub-information transmitted by the different roadside devices in the set, the virtual perception sub-information must be dynamically updated after each fusion operation finishes.
By contrast, the prior art usually fuses perception information in isolated pairwise steps: for example, perception sub-information A is fused with B, then B with C, and the two results are taken together as the fusion of A, B and C, so the fusion between A and C is omitted. When the perception sub-information transmitted by different roadside devices is strongly correlated, such a result suffers from missing information and is therefore inaccurate and incomplete.
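The cyclic update of S220 through S240 can be sketched as follows, under the simplifying assumption that "fusing" two pieces of sub-information is just the union of the object labels they contain (the patent does not pin the fusion step down to this level, so this is only an illustrative reading):

```python
def fuse(v, s):
    """Placeholder fusion step: here, a set union of observed object labels."""
    return v | s

def fuse_set(perception_set):
    """V0 is initialized from the first sub-information (the target
    sub-information A), then re-fused with each remaining sub-information,
    updating V after every step, until all of them have participated."""
    sub_infos = list(perception_set)
    v = set(sub_infos[0])          # S220: initialize V0 with A
    for s in sub_infos[1:]:        # S230/S240: fuse with B, then C, ...
        v = fuse(v, set(s))        # ... dynamically updating V each time
    return v

A, B, C = {"m1", "m2"}, {"m2", "n1"}, {"n2"}
print(sorted(fuse_set([A, B, C])))  # final result carries A, B and C together
```

Because the running value V accumulates every previously fused sub-information, each new fusion step associates the newcomer with all earlier sub-information at once, which is exactly what the isolated pairwise scheme criticized above fails to do.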
According to the technical scheme, on one hand, the advantage that the roadside end sensing device is wide in detection range is utilized, the problem that the sensing range of the vehicle-mounted sensing device is small in the driving process is solved, the sensing detection range in the driving process is expanded, the blind area of vehicle-mounted sensing is supplemented, more environment sensing information can be provided for vehicles, and the rationality of driving decision making is guaranteed; meanwhile, the consistency of the timestamps of the sub-information is sensed, so that the accuracy and the real-time performance of the fusion processing result under the driving scene without GPS signal synchronization are ensured; the introduction of the virtual perception sub-information ensures the accuracy and the integrity of the fusion processing result, further ensures the accuracy of making a driving decision and ensures the realization safety of intelligent driving; on the other hand, because the technical scheme of the embodiment does not need to rely on the vehicle-mounted sensing device to collect the environmental data, the problem that the cost for deploying more sensing devices on the vehicle is high is solved, the deployment cost of the single-vehicle sensing device is reduced, the popularization of the intelligent traffic technology is facilitated, and the social intelligent traffic system is constructed.
Example Three
Fig. 4 is a flowchart of a processing method for fusion of perception information of an intelligent driving road side device according to a third embodiment of the present invention, and this embodiment is further optimized and expanded based on the above embodiments. As shown in fig. 4, the method may include:
S310, receiving a perception information set in the target area, wherein the perception information set comprises perception sub-information transmitted by different road side devices, and the perception sub-information has the same timestamp based on time calibration among the different road side devices.
S320, traversing the perception sub-information transmitted by the different road side devices to obtain at least one group of track pairs, wherein the track pairs comprise any two target objects respectively corresponding to the different road side devices.
The perception sub-information transmitted by different roadside devices in the roadside perception system may include information of different target objects, where the target objects include but are not limited to obstacles, pedestrians, vehicles, buildings and traffic signs on the road. By traversing the perception information set, the target objects in the perception sub-information transmitted by different road side devices are grouped pairwise. For example, suppose two roadside devices transmit perception sub-information A and B respectively, where perception sub-information A includes 2 target objects, m1 and m2, and perception sub-information B includes 3 target objects, n1, n2 and n3. Traversing perception sub-information A and B then yields 6 track pairs: (m1, n1), (m1, n2), (m1, n3), (m2, n1), (m2, n2), (m2, n3).
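The pairwise traversal described above amounts to taking the Cartesian product of the object lists reported by two devices. A minimal sketch (the object representation is an assumption for illustration; the patent does not fix a data format):

```python
from itertools import product

def build_track_pairs(objects_a, objects_b):
    """All cross-device pairings of target objects: every object reported
    by one road side device paired with every object from the other."""
    return list(product(objects_a, objects_b))

# The example from the text: 2 objects x 3 objects -> 6 track pairs.
pairs = build_track_pairs(["m1", "m2"], ["n1", "n2", "n3"])
```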
Optionally, before the fusion processing is performed on the perception information set, the method further includes:
in the perception information set, a sub-identifier is allocated to each target object in the perception sub-information transmitted by any road side device, wherein the sub-identifiers are used to distinguish target objects corresponding to different road side devices, and each sub-identifier includes a sub-ID of the target object. In other words, every target object involved in the perception sub-information transmitted by different road side devices corresponds to a unique sub-identifier, and sub-identifiers never coincide across road side devices, so that target objects from different road side devices can be distinguished during fusion processing. For example, for any track pair, the two target objects in the track pair correspond to different sub-identifiers.
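Sub-identifier allocation can be sketched as a single counter shared across all devices, so no two target objects ever receive the same sub-ID. The dictionary layout below is a hypothetical representation of the perception information set:

```python
def assign_sub_ids(perception_set):
    """Attach a globally unique sub-ID to every target object.

    perception_set: mapping from a road side device ID to the list of
    target objects in its perception sub-information (layout assumed).
    """
    next_id = 0
    labeled = {}
    for device_id, objects in perception_set.items():
        labeled[device_id] = []
        for obj in objects:
            # Sub-IDs never coincide across devices: one shared counter.
            labeled[device_id].append({"sub_id": next_id, "object": obj})
            next_id += 1
    return labeled

labeled = assign_sub_ids({"dev_A": ["m1", "m2"], "dev_B": ["n1", "n2", "n3"]})
```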
S330, determining the distance between two target objects in each group of track pairs according to the respective information of the target objects in each group of track pairs.
The perception sub-information transmitted by different roadside devices can include characteristic information such as position, speed, motion direction, acceleration, geometric shape, rotation angle and color of at least one target object. The distance between the two target objects in the track pair can be determined according to the position information of the two target objects.
And S340, determining a track pair subset satisfying a distance threshold among the at least one group of track pairs according to the determined distances.
Specifically, when the distance between two target objects is smaller than a preset distance threshold (also called a tracking threshold), the corresponding track pair may be determined as a target track pair, and the determined target track pairs form the track pair subset; that is, the obtained track pairs may be screened based on the greedy algorithm idea. The distance threshold may be set as required, and this embodiment places no particular limitation on it. Continuing the above example, among the 6 track pairs (m1, n1), (m1, n2), (m1, n3), (m2, n1), (m2, n2) and (m2, n3), the distance between the two target objects in each track pair is determined and compared with the preset distance threshold; if only 2 track pairs, (m1, n2) and (m2, n3), satisfy the threshold condition, these 2 track pairs are preferentially fused using the fusion algorithm. The closer the two target objects in a track pair are, the more likely they belong to the same target object in the driving environment and the higher the value of fusing them; preferentially fusing the screened track pairs therefore reduces the data volume of fusion processing and improves fusion efficiency.
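The first-layer screening can be sketched as follows. The 2-D positions, the threshold value and the tuple layout are illustrative assumptions; sorting the survivors by distance prepares the preferential fusion of the closest pairs described above:

```python
import math

def screen_track_pairs(pairs, distance_threshold):
    """Keep only track pairs whose inter-object distance is below the
    tracking threshold, sorted so the closest pairs are fused first."""
    kept = []
    for a, b in pairs:
        d = math.dist(a["pos"], b["pos"])  # Euclidean distance between positions
        if d < distance_threshold:
            kept.append((d, a, b))
    kept.sort(key=lambda item: item[0])  # closest pair first
    return kept

m1 = {"id": "m1", "pos": (0.0, 0.0)}
n2 = {"id": "n2", "pos": (0.5, 0.0)}
n3 = {"id": "n3", "pos": (10.0, 0.0)}
kept = screen_track_pairs([(m1, n2), (m1, n3)], distance_threshold=1.0)
# Only (m1, n2) survives: its distance 0.5 is below the threshold.
```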
Optionally, determining, according to the determined distance, the track pair subset satisfying the distance threshold among the at least one group of track pairs includes: storing the track pairs satisfying the distance threshold into a sequence list (SeqList) according to the determined distance, so as to obtain the track pair subset. That is, each time a target track pair is determined, its information is appended to the sequence list. As an array-like structure, the sequence list has a fixed storage capacity but may store data of any type.
And S350, performing fusion processing on the track pair subsets by using a fusion algorithm.
Illustratively, virtual perception sub-information (a virtual track) is introduced into the track pair subset. The virtual track may be initialized with the information of one target object in any track pair of the subset; the initialized virtual track is then fused with the information of the other target object in that track pair, and the virtual track is dynamically updated with the current fusion result. The remaining track pairs in the subset are then fused with the virtual track in sequence; that is, the dynamic updating of the virtual perception sub-information and its fusion with the information of target objects not yet involved in fusion processing are repeatedly executed, until the information of all target objects in the subset has been fused with the virtual track.
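The iterative role of the virtual track can be sketched with a running average standing in for the unspecified fusion algorithm (positions only; every numeric detail here is an illustrative assumption, not the patent's actual fusion step):

```python
def fuse(virtual, obj, fused_count):
    """Stand-in fusion step: running average of 2-D positions."""
    return tuple((v * fused_count + o) / (fused_count + 1)
                 for v, o in zip(virtual, obj))

def fuse_with_virtual_track(positions):
    """Initialize the virtual track from the first object, then fuse the
    remaining objects into it one by one, updating it after each step."""
    virtual = positions[0]            # initialization of the virtual track
    fused_count = 1
    for pos in positions[1:]:         # repeat until all objects are fused
        virtual = fuse(virtual, pos, fused_count)
        fused_count += 1
    return virtual

result = fuse_with_virtual_track([(0.0, 0.0), (2.0, 2.0), (4.0, 4.0)])
```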
After the fusion result of the track pair subset is obtained, the information of sensing areas other than the one corresponding to the track pair subset can be further added on the basis of this fusion result and the currently obtained perception information set, so as to obtain a complete fusion result of the current driving environment of the vehicle.
On the basis of the foregoing technical solution, optionally, after determining, according to the determined distance, the track pair subset satisfying the distance threshold among the at least one group of track pairs, the method further includes:
and in the track pair subset, sequentially performing feature matching on each group of track pairs according to the distance between the target objects, and determining the track pairs with successfully matched features so as to perform fusion processing on the track pairs with successfully matched features.
For example, feature matching may be performed on the track pairs in ascending order of the distance between their two target objects. The distance between target objects serves as the first-layer screening condition; after the track pair subset satisfying the distance threshold is determined, it is further screened according to the feature matching result between target objects, so as to obtain preferred track pairs that are both close in distance and matched in features, which improves the efficiency and pertinence of fusion processing.
Further, the method further comprises:
if the feature matching of a track pair fails in the process of sequentially performing feature matching on each group of track pairs, deleting the track pair whose feature matching failed from the track pair subset;
and if the characteristic matching is successful in the process of sequentially performing the characteristic matching on each group of track pairs, deleting other track pairs associated with any target object in the track pairs with the successful characteristic matching from the track pair subset.
In each feature matching process, if the matching fails, the two target objects in the current track pair do not belong to the same target object in the driving environment, i.e. the screening condition for a preferred track pair is not satisfied, so the track pair whose feature matching failed is deleted from the track pair subset. If the matching succeeds (i.e. the track pair is successfully associated), the two target objects in the current track pair belong to the same target object; since each real target object can be associated only once, the other track pairs involving either target object of the successfully matched pair no longer qualify and can be deleted, after which association information is added to the successfully matched track pair for fusion processing. In addition, if the track pairs of the subset are stored in the form of a sequence list, the track pairs meeting the deletion condition are deleted from the list in turn, and the successfully matched track pairs are stored in another area to await fusion, until the sequence list is empty.
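The second-layer screening and the two deletion rules above can be sketched over a distance-sorted sequence list; the tuple layout and the feature-matching predicate are assumptions for illustration:

```python
def match_and_prune(seq_list, features_match):
    """seq_list: (distance, obj_a, obj_b) tuples sorted by ascending
    distance. Returns the successfully associated (preferred) track pairs."""
    confirmed = []
    while seq_list:
        _, a, b = seq_list.pop(0)            # closest remaining pair
        if not features_match(a, b):
            continue                          # match failed: pair is discarded
        confirmed.append((a, b))
        # Match succeeded: each real object may be associated only once,
        # so drop every other pair involving either object.
        seq_list = [p for p in seq_list if p[1] != a and p[2] != b]
    return confirmed

seq = [(1.0, "m1", "n2"), (1.5, "m2", "n3"), (2.0, "m1", "n3")]
confirmed = match_and_prune(seq, lambda a, b: True)
# (m1, n3) is pruned once (m1, n2) is confirmed.
```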
Fig. 5 is a flowchart of another intelligent driving road side device perception information fusion processing method provided in this embodiment, taking an obstacle as the target object. As shown in fig. 5, in a target area, a target device in the roadside sensing system acquires a perception information set of the vehicle driving environment through different roadside devices, and then redistributes sub-IDs to the obstacles corresponding to the different roadside devices so as to distinguish them. These sub-IDs differ from the device IDs in the roadside sensing system: when perception sub-information is transmitted to the target device, it carries a device ID, and perception sub-information transmitted by the same roadside device carries the same device ID. After the sub-IDs are redistributed, the distance between the two obstacles in each track pair is calculated, the first-layer screening of track pairs is performed according to the relation between the distance and the tracking threshold, and the track pairs satisfying the tracking threshold are stored in a sequence list. The track pairs in the sequence list are then sorted by distance, feature matching is carried out on them in ascending order of distance, and this second-layer screening determines the preferred track pairs (i.e. the successfully associated track pairs), while the track pairs that do not satisfy the conditions are deleted from the sequence list. Finally, deduplication processing is carried out based on the determined preferred track pairs and the association information, and on the basis of the fusion result obtained by deduplication, the obstacle information of the non-redundant area, i.e. non-coincident obstacle information, is added, so as to obtain a complete fusion result of the current driving environment of the vehicle for making a driving decision.
According to the technical scheme, at least one group of track pairs is obtained by traversing the perception information set acquired by different roadside devices, and the track pairs are screened based on the distance and feature matching between the target objects in each pair to obtain the preferred track pairs, which are then preferentially fused using fusion algorithms such as a track association algorithm. This improves the efficiency of fusion processing, ensures the accuracy of the fusion result, and guarantees the rationality and accuracy of driving decisions in the vehicle system. Meanwhile, the technical scheme of this embodiment solves the problems of the limited sensing range of vehicle-mounted sensing devices during driving and the high cost of deploying more sensing devices on each vehicle: it expands the sensing detection range, supplements vehicle-mounted sensing blind areas, provides more environment perception information for vehicles, reduces the deployment cost per vehicle and helps promote the popularization of intelligent traffic technology.
Example Four
Fig. 6 is a schematic structural diagram of a perception information fusion processing apparatus for intelligent driving road side equipment according to a fourth embodiment of the present invention, which is applicable to the situation where perception information of the vehicle driving environment is collected and fused at the roadside, and the fusion result is then sent to a terminal on the vehicle, such as an on-board device or a mobile terminal, to assist driving. The apparatus can be implemented in software and/or hardware, and can be integrated on any device capable of performing the intelligent driving road side equipment perception information fusion processing operation, such as a road side device in a roadside perception system.
As shown in fig. 6, the intelligent driving road side device perception information fusion processing apparatus provided in this embodiment may include a perception information set obtaining module 410 and a fusion processing module 420, where:
a perception information set obtaining module 410, configured to receive a perception information set in a target area, where the perception information set includes perception sub-information transmitted by different roadside devices, and the perception sub-information has the same timestamp based on time calibration between the different roadside devices;
and the fusion processing module 420 is configured to perform fusion processing on the perception information set by using a fusion algorithm.
Optionally, the fusion processing module 420 is specifically configured to:
and introducing virtual perception sub-information into the perception information set, and performing fusion processing on the perception information set by using a fusion algorithm, wherein the virtual perception sub-information is used for associating the perception sub-information transmitted by different road side equipment.
Further, the fusion processing module 420 includes:
the virtual perception sub-information initialization unit is used for introducing virtual perception sub-information into the perception information set, determining the perception sub-information transmitted by any road side equipment in the perception information set as target perception sub-information, and initializing the virtual perception sub-information by using the target perception sub-information;
the first fusion processing unit is used for carrying out fusion processing on the initialized virtual perception sub-information and perception sub-information transmitted by any road side equipment except the target perception sub-information in the perception information set by using a fusion algorithm to obtain a current fusion processing result;
and the fusion processing repeated execution unit is used for updating the initialized virtual perception sub-information by using the current fusion processing result, taking the updated virtual perception sub-information as new initialized virtual sub-information, and repeatedly executing the fusion processing between the initialized virtual perception sub-information and the perception sub-information which is not involved in the fusion processing and is transmitted by any road side equipment in the perception information set until the perception sub-information in the perception information set is involved in the fusion processing.
Optionally, the apparatus further comprises:
and a sub-identifier allocating module, configured to allocate, in the sensing information set, a sub-identifier for a target object in the sensing sub-information transmitted by any roadside device before the fusion processing module 420 performs the operation of performing fusion processing on the sensing information set, where the sub-identifier is used to distinguish target objects corresponding to different roadside devices.
Optionally, the apparatus further comprises:
a track pair determining module, configured to traverse the sensing sub-information transmitted by the different roadside devices to obtain at least one set of track pairs before the fusion processing module 420 performs the operation of performing fusion processing on the sensing information set, where the track pairs include any two target objects respectively corresponding to the different roadside devices;
the target object distance determining module is used for determining the distance between two target objects in each group of track pairs according to the respective information of the target objects in each group of track pairs;
and the track pair subset determining module is used for determining a track pair subset meeting a distance threshold in at least one group of track pairs according to the determined distance so as to perform fusion processing on the track pair subset.
Optionally, the apparatus further comprises:
and the characteristic matching module is used for performing characteristic matching on each group of track pairs in the track pair subset in sequence according to the distance between the target objects after the track pair subset which meets the distance threshold is determined in at least one group of track pairs according to the determined distance in the track pair subset determining module, and determining the track pairs with successful characteristic matching so as to perform fusion processing on the track pairs with successful characteristic matching.
Optionally, the apparatus further comprises:
the first track pair deleting module is used for deleting the track pairs with the failed characteristic matching from the track pair subset if the characteristic matching fails in the process of sequentially performing the characteristic matching on each group of track pairs;
and the second track pair deleting module is used for deleting other track pairs associated with any target object in the track pairs with successfully matched characteristics from the track pair subset if the characteristic matching is successful in the process of sequentially performing the characteristic matching on each group of track pairs.
Optionally, the track pair subset determining module is specifically configured to:
and storing the track pairs satisfying the distance threshold among the at least one group of track pairs into a sequence list according to the determined distance, so as to obtain the track pair subset.
Optionally, the fusion processing module 420 includes:
the perception sub-information duplication removing unit is used for removing duplication of the coincident perception sub-information transmitted by different roadside devices in the perception information set;
and the perception sub-information fusion unit is used for fusing the non-coincident perception sub-information transmitted by different roadside devices in the perception information set.
Optionally, the apparatus further comprises:
and the result transmission module is used for transmitting the fusion processing result to the vehicle.
The perception information fusion processing apparatus for intelligent driving road side equipment provided in this embodiment of the present invention can execute the perception information fusion processing method for intelligent driving road side equipment provided in any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the executed method. For details not described in this embodiment, reference may be made to any method embodiment of the present invention.
Example Five
Fig. 7 is a schematic structural diagram of an apparatus according to a fifth embodiment of the present invention. FIG. 7 illustrates a block diagram of an exemplary device 512 suitable for use in implementing embodiments of the present invention. The device 512 shown in fig. 7 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention. The device 512 may typically be a device capable of performing an intelligent driving road side device perception information fusion processing operation, such as a computing device in a road side perception system.
As shown in fig. 7, the device 512 is in the form of a general purpose device. Components of device 512 may include, but are not limited to: one or more processors 516, a storage device 528, and a bus 518 that couples the various system components including the storage device 528 and the processors 516.
Bus 518 represents one or more of any of several types of bus structures, including a memory device bus or memory device controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Device 512 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by device 512 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 528 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 530 and/or cache Memory 532. The device 512 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 534 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, and commonly referred to as a "hard drive"). Although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk such as a Compact disk Read-Only Memory (CD-ROM), Digital Video disk Read-Only Memory (DVD-ROM) or other optical media may be provided. In these cases, each drive may be connected to bus 518 through one or more data media interfaces. Storage 528 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 540 having a set (at least one) of program modules 542 may be stored, for example, in storage 528, such program modules 542 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may include an implementation of a network environment. The program modules 542 generally perform the functions and/or methods of the described embodiments of the invention.
The device 512 may also communicate with one or more external devices 514 (e.g., a keyboard, a pointing device, a display 524, etc.), with one or more devices that enable a user to interact with the device 512, and/or with any device (e.g., a network card, modem, etc.) that enables the device 512 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 522. Also, the device 512 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 520. As shown in FIG. 7, the network adapter 520 communicates with the other modules of the device 512 via the bus 518. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the device 512, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID (Redundant Arrays of Independent Disks) systems, tape drives, and data backup storage systems, among others.
The processor 516 executes various functional applications and data processing by running the program stored in the storage device 528, for example, implementing the method for processing perception information fusion of the intelligent driving road side device provided by any embodiment of the present invention, where the method may include:
receiving a perception information set in a target area, wherein the perception information set comprises perception sub-information transmitted by different roadside devices, and the perception sub-information has the same timestamp based on time calibration among the different roadside devices;
and performing fusion processing on the perception information set by using a fusion algorithm.
Example Six
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for processing perception information fusion of an intelligent driving road side device according to any embodiment of the present invention, where the method includes:
receiving a perception information set in a target area, wherein the perception information set comprises perception sub-information transmitted by different roadside devices, and the perception sub-information has the same timestamp based on time calibration among the different roadside devices;
and performing fusion processing on the perception information set by using a fusion algorithm.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or terminal. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A perception information fusion processing method for intelligent driving roadside devices, characterized by comprising the following steps:
receiving a perception information set for a target area, wherein the perception information set comprises perception sub-information transmitted by different roadside devices, and the perception sub-information carries the same timestamp based on time calibration among the different roadside devices; the perception sub-information transmitted by the different roadside devices comprises information of different target objects;
assigning, in the perception information set, sub-identifiers to the target objects in the perception sub-information transmitted by each roadside device, wherein the sub-identifiers are used to distinguish the target objects corresponding to different roadside devices;
performing fusion processing on the perception information set by using a fusion algorithm;
wherein performing fusion processing on the perception information set by using the fusion algorithm comprises: introducing virtual perception sub-information into the perception information set and performing fusion processing on the perception information set by using the fusion algorithm, wherein the virtual perception sub-information is used to associate the perception sub-information transmitted by any two or more roadside devices in the perception information set;
and wherein introducing the virtual perception sub-information into the perception information set and performing fusion processing on the perception information set by using the fusion algorithm comprises:
introducing the virtual perception sub-information into the perception information set, determining the perception sub-information transmitted by one roadside device in the perception information set as target perception sub-information, and initializing the virtual perception sub-information with the target perception sub-information;
performing, by using the fusion algorithm, fusion processing on the initialized virtual perception sub-information and perception sub-information in the perception information set transmitted by a roadside device other than that of the target perception sub-information, to obtain a current fusion processing result;
and updating the initialized virtual perception sub-information with the current fusion processing result, taking the updated virtual perception sub-information as newly initialized virtual perception sub-information, and repeating the fusion processing between the initialized virtual perception sub-information and perception sub-information in the perception information set that has not yet participated in the fusion processing, until all perception sub-information in the perception information set has participated in the fusion processing.
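The iterative procedure recited in claim 1 can be sketched in a few lines. This is a minimal illustration only, not the patented implementation: it assumes each device's perception sub-information is a mapping from object identifiers to 2-D positions, and that fusing two overlapping observations means averaging them. All function and variable names are hypothetical.

```python
def fuse(virtual, sub_info):
    """Fuse one roadside device's sub-information into the virtual state."""
    merged = dict(virtual)
    for obj_id, pos in sub_info.items():
        if obj_id in merged:
            # overlapping observation of the same object: average the two
            merged[obj_id] = tuple((a + b) / 2 for a, b in zip(merged[obj_id], pos))
        else:
            # object seen only by this device: carry it over unchanged
            merged[obj_id] = pos
    return merged


def fuse_perception_set(perception_set):
    """perception_set: list of per-device sub-information dicts."""
    # initialize the virtual sub-information with one device's data
    # (the "target perception sub-information" of claim 1)
    virtual = dict(perception_set[0])
    # repeatedly fuse the remaining devices' sub-information until every
    # piece of sub-information has participated in the fusion
    for sub_info in perception_set[1:]:
        virtual = fuse(virtual, sub_info)
    return virtual
```

The loop structure mirrors the claim: initialize once, fuse one device at a time, and update the virtual state with each current fusion result.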
2. The method of claim 1, wherein, before the fusion processing of the perception information set, the method further comprises:
traversing the perception sub-information transmitted by the different roadside devices to obtain at least one group of track pairs, wherein each track pair comprises two target objects respectively corresponding to different roadside devices;
determining the distance between the two target objects in each group of track pairs according to the information of each target object in the pair;
and determining, according to the distance, a track pair subset satisfying a distance threshold within the at least one group of track pairs, so as to perform fusion processing on the track pair subset.
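The pairing and thresholding of claim 2 (and the distance-ordered sequence list of claim 5) can be illustrated as follows. This is a sketch under assumed data shapes, with objects represented as id-to-position dicts; the names are illustrative and not part of the claims.

```python
import itertools
import math


def candidate_track_pairs(device_a, device_b, threshold):
    """device_a / device_b: dicts mapping object id -> (x, y) position.

    Enumerates every cross-device track pair, computes the inter-object
    distance, and keeps only the pairs within the distance threshold,
    stored in a list ordered by distance.
    """
    pairs = []
    for (id_a, pos_a), (id_b, pos_b) in itertools.product(
            device_a.items(), device_b.items()):
        dist = math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])
        if dist <= threshold:
            pairs.append((dist, id_a, id_b))
    pairs.sort()  # sequence list ordered by distance, as in claim 5
    return pairs
```

Sorting by distance lets the later feature-matching step visit the most plausible associations first.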
3. The method of claim 2, wherein, after determining, according to the distance, the track pair subset satisfying the distance threshold within the at least one group of track pairs, the method further comprises:
performing, within the track pair subset, feature matching on each group of track pairs in order of the distance between their target objects, and determining the track pairs whose features are successfully matched, so as to perform fusion processing on those track pairs.
4. The method of claim 3, further comprising:
if the feature matching of a group of track pairs fails during the sequential feature matching, deleting the track pair whose feature matching failed from the track pair subset;
and if the feature matching of a group of track pairs succeeds during the sequential feature matching, deleting, from the track pair subset, the other track pairs associated with either target object of the successfully matched track pair.
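The pruning logic of claims 3 and 4 can be sketched as a greedy pass over the distance-ordered candidate pairs: a successful match binds both objects and discards competing pairs, a failed match simply drops that pair. The feature test used here (comparing object class labels) is a placeholder assumption; the claims do not specify the feature.

```python
def match_track_pairs(pairs, features_a, features_b):
    """pairs: (distance, id_a, id_b) tuples sorted by distance.

    features_a / features_b: dicts mapping object id -> feature value.
    Returns the list of successfully matched (id_a, id_b) pairs.
    """
    matched = []
    used_a, used_b = set(), set()
    for dist, id_a, id_b in pairs:
        if id_a in used_a or id_b in used_b:
            # either object is already bound by a closer matched pair,
            # so this competing pair is effectively deleted (claim 4)
            continue
        if features_a[id_a] == features_b[id_b]:
            # feature match succeeds: keep the pair for fusion
            matched.append((id_a, id_b))
            used_a.add(id_a)
            used_b.add(id_b)
        # feature match fails: the pair is simply dropped
    return matched
```

Because the input is ordered by distance, the first successful match for an object is also its nearest surviving candidate.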
5. The method of claim 2, wherein determining, according to the distance, the track pair subset satisfying the distance threshold within the at least one group of track pairs comprises:
storing the track pairs satisfying the distance threshold within the at least one group of track pairs into a sequence list ordered by the distance, to obtain the track pair subset.
6. The method of claim 1, wherein the fusion processing of the perception information set comprises:
de-duplicating coincident perception sub-information transmitted by different roadside devices in the perception information set;
and fusing non-coincident perception sub-information transmitted by different roadside devices in the perception information set.
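Claim 6 distinguishes coincident observations (the same physical object reported by several devices) from non-coincident ones. A minimal sketch, assuming the coincident pairs are already known from a prior association step and that de-duplication keeps one device's copy; all names are illustrative.

```python
def dedup_and_merge(device_a, device_b, matched_pairs):
    """device_a / device_b: dicts of object id -> position.

    matched_pairs: list of (id_a, id_b) pairs known to denote the same
    object. Coincident observations are de-duplicated (device A's copy is
    kept); non-coincident observations from device B are carried over.
    """
    result = dict(device_a)
    # ids on device B that duplicate an object already present via device A
    dup_b = {id_b for _, id_b in matched_pairs}
    for id_b, pos in device_b.items():
        if id_b not in dup_b:
            # non-coincident: a genuinely new object, keep it
            result[id_b] = pos
        # coincident: drop device B's duplicate observation
    return result
```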
7. The method of claim 1, further comprising:
and transmitting the fusion processing result to the vehicle.
8. A perception information fusion processing apparatus for intelligent driving roadside devices, characterized in that the apparatus comprises:
a perception information set acquisition module, configured to receive a perception information set for a target area, wherein the perception information set comprises perception sub-information transmitted by different roadside devices, and the perception sub-information carries the same timestamp based on time calibration among the different roadside devices; the perception sub-information transmitted by the different roadside devices comprises information of different target objects;
a fusion processing module, configured to perform fusion processing on the perception information set by using a fusion algorithm;
wherein the fusion processing module is specifically configured to introduce virtual perception sub-information into the perception information set and perform fusion processing on the perception information set by using the fusion algorithm, the virtual perception sub-information being used to associate the perception sub-information transmitted by any two or more roadside devices in the perception information set;
a sub-identifier assignment module, configured to assign, before the fusion processing module performs the fusion processing on the perception information set, sub-identifiers to the target objects in the perception sub-information transmitted by each roadside device in the perception information set, wherein the sub-identifiers are used to distinguish the target objects corresponding to different roadside devices;
wherein the fusion processing module comprises: a virtual perception sub-information initialization unit, configured to introduce the virtual perception sub-information into the perception information set, determine the perception sub-information transmitted by one roadside device in the perception information set as target perception sub-information, and initialize the virtual perception sub-information with the target perception sub-information;
a first fusion processing unit, configured to perform, by using the fusion algorithm, fusion processing on the initialized virtual perception sub-information and perception sub-information in the perception information set transmitted by a roadside device other than that of the target perception sub-information, to obtain a current fusion processing result;
and a fusion processing repetition unit, configured to update the initialized virtual perception sub-information with the current fusion processing result, take the updated virtual perception sub-information as newly initialized virtual perception sub-information, and repeat the fusion processing between the initialized virtual perception sub-information and perception sub-information in the perception information set that has not yet participated in the fusion processing, until all perception sub-information in the perception information set has participated in the fusion processing.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the intelligent driving roadside device perception information fusion processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the intelligent driving roadside device perception information fusion processing method according to any one of claims 1 to 7.
CN201910420661.2A 2019-05-20 2019-05-20 Intelligent driving road side equipment perception information fusion processing method, device and equipment Active CN110132290B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910420661.2A CN110132290B (en) 2019-05-20 2019-05-20 Intelligent driving road side equipment perception information fusion processing method, device and equipment

Publications (2)

Publication Number Publication Date
CN110132290A CN110132290A (en) 2019-08-16
CN110132290B (en) 2021-12-14

Family

ID=67571758

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110412595A (en) * 2019-06-04 2019-11-05 深圳市速腾聚创科技有限公司 Roadbed cognitive method, system, vehicle, equipment and storage medium
CN110796865B (en) * 2019-11-06 2022-09-23 阿波罗智联(北京)科技有限公司 Intelligent traffic control method and device, electronic equipment and storage medium
CN113064415A (en) * 2019-12-31 2021-07-02 华为技术有限公司 Method and device for planning track, controller and intelligent vehicle
CN111090087B (en) * 2020-01-21 2021-10-26 广州赛特智能科技有限公司 Intelligent navigation machine, laser radar blind area compensation method and storage medium
CN114078325B (en) * 2020-08-19 2023-09-05 北京万集科技股份有限公司 Multi-perception system registration method, device, computer equipment and storage medium
CN112590808B (en) * 2020-12-23 2022-05-17 东软睿驰汽车技术(沈阳)有限公司 Multi-sensor fusion method and system and automatic driving vehicle
CN112671935A (en) * 2021-03-17 2021-04-16 中智行科技有限公司 Method for remotely controlling vehicle, road side equipment and cloud platform
CN113223311A (en) * 2021-03-26 2021-08-06 南京市德赛西威汽车电子有限公司 Vehicle door opening anti-collision early warning method based on V2X
CN115331421A (en) * 2021-05-10 2022-11-11 北京万集科技股份有限公司 Roadside multi-sensing environment sensing method, device and system
CN113965879B (en) * 2021-05-13 2024-02-06 深圳市速腾聚创科技有限公司 Multi-sensor perception information fusion method and related equipment
CN113077632A (en) * 2021-06-07 2021-07-06 四川紫荆花开智能网联汽车科技有限公司 V2X intelligent network connection side system and realizing method
CN115457773A (en) * 2022-09-19 2022-12-09 智道网联科技(北京)有限公司 Road side equipment data processing method and device, electronic equipment and storage medium

Citations (9)

Publication number Priority date Publication date Assignee Title
CN102625428A (en) * 2012-04-24 2012-08-01 苏州摩多物联科技有限公司 Time synchronization method of wireless sensor networks
CN108182817A (en) * 2018-01-11 2018-06-19 北京图森未来科技有限公司 Automatic Pilot auxiliary system, trackside end auxiliary system and vehicle-mounted end auxiliary system
CN108535753A (en) * 2018-03-30 2018-09-14 北京百度网讯科技有限公司 Vehicle positioning method, device and equipment
CN108803622A (en) * 2018-07-27 2018-11-13 吉利汽车研究院(宁波)有限公司 A kind of method, apparatus for being handled target acquisition data
CN108877269A (en) * 2018-08-20 2018-11-23 清华大学 A kind of detection of intersection vehicle-state and V2X broadcasting method
CN109147369A (en) * 2018-08-27 2019-01-04 惠州Tcl移动通信有限公司 A kind of Vehicular automatic driving bootstrap technique, automatic driving vehicle and storage medium
CN109696172A (en) * 2019-01-17 2019-04-30 福瑞泰克智能系统有限公司 A kind of multisensor flight path fusion method, device and vehicle
CN109714730A (en) * 2019-02-01 2019-05-03 清华大学 For Che Che and bus or train route the cloud control plateform system cooperateed with and cooperative system and method
CN109920246A (en) * 2019-02-22 2019-06-21 重庆邮电大学 It is a kind of that local paths planning method is cooperateed with binocular vision based on V2X communication

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US8705797B2 (en) * 2012-03-07 2014-04-22 GM Global Technology Operations LLC Enhanced data association of fusion using weighted Bayesian filtering
US20160323715A1 (en) * 2015-04-30 2016-11-03 Huawei Technologies Co., Ltd. System and Method for Data Packet Scheduling and Delivery
CN106840242B (en) * 2017-01-23 2020-02-04 驭势科技(北京)有限公司 Sensor self-checking system and multi-sensor fusion system of intelligent driving automobile
US11025456B2 (en) * 2018-01-12 2021-06-01 Apple Inc. Time domain resource allocation for mobile communication
CN109756544A (en) * 2018-01-23 2019-05-14 启迪云控(北京)科技有限公司 The control of vehicle collaborative perception, method for subscribing, device and system based on cloud control platform
CN108491533B (en) * 2018-03-29 2019-04-02 百度在线网络技术(北京)有限公司 Data fusion method, device, data processing system and storage medium
CN109583505A (en) * 2018-12-05 2019-04-05 百度在线网络技术(北京)有限公司 A kind of object correlating method, device, equipment and the medium of multisensor
CN109739236B (en) * 2019-01-04 2022-05-03 腾讯科技(深圳)有限公司 Vehicle information processing method and device, computer readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant