CN114884994B - Vehicle-road cooperative information fusion method and system based on transfer learning - Google Patents


Info

Publication number
CN114884994B
CN114884994B (application CN202210498666.9A; earlier publication CN114884994A)
Authority
CN
China
Prior art keywords
vehicle
information
fusion
self
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210498666.9A
Other languages
Chinese (zh)
Other versions
CN114884994A (en)
Inventor
陆由付
俄广迅
王勇
战一源
朱猛
张岱峰
李研强
于良杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong High Speed Construction Management Group Co ltd
Institute of Automation Shandong Academy of Sciences
Original Assignee
Shandong High Speed Construction Management Group Co ltd
Institute of Automation Shandong Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong High Speed Construction Management Group Co ltd, Institute of Automation Shandong Academy of Sciences filed Critical Shandong High Speed Construction Management Group Co ltd
Priority to CN202210498666.9A
Publication of CN114884994A
Application granted
Publication of CN114884994B
Legal status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/12: Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/09: Arrangements for giving variable traffic instructions
    • G08G1/0962: Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967: Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708: Systems involving transmission of highway information, e.g. weather, speed limits, where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725: Systems involving transmission of highway information, e.g. weather, speed limits, where the received information generates an automatic action on the vehicle control
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Abstract

The invention belongs to the technical field of intelligent networked vehicle-road cooperation and provides a vehicle-road cooperative information fusion method and system based on transfer learning. The method comprises two information processing processes, self-vehicle deep learning and transfer learning, both of which operate on driving-state and environmental feature information brought to a unified reference. This replaces the conventional approach in which the road end digitizes its own information and sends it to the vehicle end through vehicle-road communication merely to supplement the dimensions of the vehicle's perception information. The self-vehicle deep learning model provides richer and more complete source-domain data for vehicle-road information fusion, and the vehicle states and environmental features acquired through transfer learning yield road-end observation results with confidence probabilities. Finally, a joint probability calculation fuses the results of the two processes, improving the precision and robustness of vehicle-road cooperative fusion and providing an effective basis for exploiting the functional complementarity between vehicle and road.

Description

Vehicle-road cooperative information fusion method and system based on transfer learning
Technical Field
The invention belongs to the technical field of intelligent networked vehicle-road cooperation, and particularly relates to a vehicle-road cooperative information fusion method and system based on transfer learning.
Background
With the rapid development of the Internet of Things, the field of automatic driving has entered a peak period of development. How to make an automatic driving automobile drive safely has become a crucial research direction, and the problem of perception blind areas in particular must be solved effectively. Because relying only on the radar, camera and other sensors of the automatic driving automobile itself gives low detection efficiency and suffers badly from occlusion, cooperative sensing between the automatic driving automobile and roadbed equipment is needed to improve the sensing effect. With the intelligent development of automatic driving automobiles and roadbed equipment, vehicle-road cooperation based on wireless mobile communication and Internet technologies has become a new development direction: by realizing omnidirectional, dynamic and sustained vehicle-roadbed interaction and information fusion, it promotes full cooperation between vehicle and road, improves the efficiency of vehicle environment perception, and thus forms a safe, green and sustainable intelligent traffic system.
The inventor finds that in the traditional vehicle-road cooperative fusion method, the road end digitizes its own information and then sends it to the vehicle end through vehicle-road communication to supplement the dimensions of the vehicle's perception information. However, this method responds slowly and is unsuitable for a vehicle in a high-speed running state. In addition, the functional complementarity between the vehicle and the roadbed is not effectively utilized, so the final fusion result of vehicle-road cooperation cannot greatly improve the vehicle's original perception performance.
Disclosure of Invention
To solve the above problems, the invention provides a vehicle-road cooperative information fusion method and system based on transfer learning, which fully embody the basic concept of vehicle-road cooperation: by effectively combining environment observation information with transfer learning on roadbed equipment, they realize multidimensional, multi-angle identification and observation of the driving environment under complex road conditions, effectively improve the utilization rate of the learning model, and markedly improve the precision and robustness of vehicle-road cooperative fusion.
In a first aspect, the invention provides a vehicle-road collaborative information fusion method based on transfer learning, which comprises the following steps:
acquiring driving state information of an automatic driving automobile and environment information of a road;
obtaining driving state information and environment information of a unified reference through space-time registration processing;
carrying out decision-level fusion on the driving state information and the environment information with unified reference, and calculating fusion posterior probability;
comparing the calculated fusion posterior probability with a preset threshold value, and if the calculated fusion posterior probability is greater than or equal to the preset threshold value, obtaining a fusion result;
if the calculated fusion posterior probability is smaller than a preset threshold value, clustering the acquired driving state information and environment information to obtain various characteristic information;
obtaining a self-vehicle motion state and a target motion state according to the clustered various feature information and a preset self-vehicle deep learning model; the self-vehicle deep learning model is a trained neural network;
performing transfer learning on various feature information to obtain estimated vehicle, target and environment features and confidence; when the transfer learning is carried out, the self-vehicle deep learning model provides source domain data, and the transfer learning is carried out on driving characteristic data in the environment information;
and carrying out joint probability calculation and fusion information screening on the obtained self-vehicle motion state, the target motion state, the estimated vehicle, the target and environment characteristics and the confidence coefficient by adopting a DS evidence theory.
Further, when decision-level fusion is carried out on the driving state information and environment information under the unified reference and the fusion posterior probability is calculated: data information features are collected and target attributes identified, giving characterization quantities of the target attributes; likelihood functions are calculated under assumed environmental attributes; and the fusion posterior probability is calculated according to the Bayes formula.
Further, the acquired driving state information and environment information are clustered into four classes: self-vehicle motion features, target motion features, visual information features and identification information.
Further, the self-vehicle motion features include self-vehicle running speed and acceleration features; the target motion features include target running speed and acceleration features; the visual information is visual information of the surrounding environment; and the identification information relates to identification of the target type.
Further, when the acquired driving state information and environment information are clustered, the self-vehicle motion features, target motion features, visual information features and identification information are first extracted from the raw data, and one data object is randomly selected from each class of the extracted information to serve as a cluster center; each item of multi-source information to be classified is assigned to a cluster according to its similarity to the cluster centers; the mean of all objects in each cluster is then recomputed and taken as the new cluster center; clustering repeats in this way and ends when the maximum number of iterations is reached.
Further, during neural network training and transfer learning, the self-vehicle motion characteristics and the target motion characteristics are subjected to characteristic combination to generate motion characteristics; combining the target motion characteristics, the visual information characteristics and the identification information to generate target characteristics; and superposing the self-vehicle motion characteristics, the target motion characteristics, the visual information characteristics, the identification information, the motion characteristics and the target characteristics in dimensions to serve as a training sample.
Further, the self-vehicle motion state, the target motion state, the visual identification information and the associated confidence probability are respectively obtained through a self-vehicle deep learning model and transfer learning, and the joint probability of the self-vehicle deep learning and transfer learning characteristic information is obtained after DS evidence theory calculation; and taking the identification classification and the characteristic information with the highest probability as the final result.
In a second aspect, the present invention further provides a vehicle-road collaborative information fusion system based on transfer learning, including:
a data acquisition module configured to: acquiring driving state information of an automatic driving automobile and environment information of a road;
a spatiotemporal registration module configured to: obtaining driving state information and environment information of a unified reference through space-time registration processing;
a fusion posterior probability calculation module configured to: carrying out decision-level fusion on the driving state information and the environment information with unified reference, and calculating fusion posterior probability;
a judgment module configured to: comparing the calculated fusion posterior probability with a preset threshold value, and if the calculated fusion posterior probability is greater than or equal to the preset threshold value, obtaining a fusion result;
a clustering module configured to: if the calculated fusion posterior probability is smaller than a preset threshold value, clustering the acquired driving state information and environment information to obtain various characteristic information;
the vehicle deep learning module is configured to: obtaining a self-vehicle motion state and a target motion state according to the clustered various feature information and a preset self-vehicle deep learning model; the self-vehicle deep learning model is a trained neural network;
a transfer learning module configured to: performing transfer learning on various feature information to obtain estimated vehicle, target and environment features and confidence; when the transfer learning is carried out, the self-vehicle deep learning model provides source domain data, and the transfer learning is carried out on driving characteristic data in the environment information;
an information fusion module configured to: and carrying out joint probability calculation and fusion information screening on the obtained self-vehicle motion state, the target motion state, the estimated vehicle, the target and environment characteristics and the confidence coefficient by adopting a DS evidence theory.
In a third aspect, the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the steps of the vehicle-road collaborative information fusion method based on transfer learning described in the first aspect are implemented when the processor executes the program.
In a fourth aspect, the present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the vehicle-road collaborative information fusion method based on transfer learning described in the first aspect.
Compared with the prior art, the invention has the beneficial effects that:
the invention comprises two information processing processes of self-vehicle deep learning and transfer learning, wherein the running state and the environmental characteristic information of the unified reference are used as the basis when the information is processed, and the way that the road digitizes the information of the self-vehicle deep learning and the transfer learning and then sends the information to the vehicle end through vehicle-road communication to supplement the dimension of the self-vehicle perception information is replaced; the self-vehicle deep learning model can provide richer and complete source domain data for vehicle-road information fusion, and the vehicle state and the environmental characteristics (target domain) acquired by combining transfer learning can give out road-end observation results and confidence probabilities; and finally, obtaining a fusion perception result of the two processing procedures by utilizing joint probability calculation so as to improve the cooperative fusion precision and robustness of the vehicle and the road and provide effective utilization basis for functional complementation between the vehicle and the road.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments and are incorporated in and constitute a part of this specification, illustrate and explain the embodiments and together with the description serve to explain the embodiments.
FIG. 1 is a flow chart of embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of decision-level fusion based on Bayesian inference in accordance with embodiment 1 of the present invention;
fig. 3 is a schematic diagram of a model for roadbed transfer learning established from a vehicle end according to embodiment 1 of the present invention.
The specific embodiment is as follows:
the invention will be further described with reference to the drawings and examples.
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the present application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
Automatic driving automobile: an automatic driving automobile (autonomous vehicle; self-driving car), also called an unmanned automobile, computer-driven automobile or wheeled mobile robot, is an intelligent automobile that realizes unmanned driving through a computer system.
Vehicle-road cooperation: a safe, efficient and environment-friendly road traffic system formed by adopting advanced wireless communication, new-generation Internet and other technologies to implement omnidirectional, dynamic, real-time vehicle-vehicle and vehicle-road information interaction; on the basis of full time-space dynamic traffic information acquisition and fusion, it develops active vehicle safety control and cooperative road management, fully realizing effective cooperation among people, vehicles and roads, ensuring traffic safety and improving traffic efficiency.
Self-vehicle: the vehicle in which the current driver sits, or the vehicle currently being controlled.
Roadbed: all intelligent facilities on the road that possess a communication function.
Space-time registration: first, the information obtained by each sensor is described in the same coordinate system, which is called space registration; second, the data from the different sensors are aligned in time, which is called time registration; together, space registration and time registration constitute space-time registration.
Example 1:
as shown in fig. 1, the invention provides a vehicle-road collaborative information fusion method based on transfer learning, which comprises the following steps:
s1, vehicle-road multi-source information decision-level fusion:
s1.1, vehicle-road multi-source information decision-level fusion based on Bayesian reasoning:
S1.1.1, the self-vehicle receives information acquired by the road end and combines it with data collected by the self-vehicle's own sensors; after space-time registration processing, multi-source information on running state and environmental features under a unified reference is obtained. It can be understood that the vehicle sensors may include a camera, laser radar, ultrasonic radar, millimeter-wave radar and the like, and may correspondingly collect self-vehicle information, other-vehicle information, pedestrian information and information on other objects on the road. In the space-time registration process, a similarity matching method is adopted to align the asynchronous time references of the vehicle-end and road-end state data; a spatial-reference distance compensation method is adopted to match the spatial coordinates of the multi-source data; and feature similarity matching is performed on the road-end image data, removing mismatched points, so as to obtain a unified space-time reference for vehicle positioning information.
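By way of illustration only, a minimal Python sketch of the two registration sub-steps is given below, assuming timestamped one-dimensional road-end channels and a planar rigid transform between the road and vehicle frames; the function names `time_align` and `to_vehicle_frame` are illustrative and do not appear in the patent:

```python
import numpy as np

def time_align(road_t, road_v, ego_t):
    """Align one asynchronous road-end channel to the ego-vehicle clock
    by linear interpolation (a simple form of time registration)."""
    return np.interp(ego_t, road_t, road_v)

def to_vehicle_frame(points_road, offset, yaw):
    """Map road-end 2-D observations into the ego-vehicle coordinate
    system via translation + planar rotation (space registration)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return (np.asarray(points_road) - offset) @ R.T
```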
In this embodiment, the roadbed equipment at the road end may include intelligent facilities such as roadside cameras, millimeter-wave radars, laser radars, intelligent traffic lights, intelligent labels, intelligent traffic cones, geomagnetic sensors and meteorological sensing. The roadside information can help the digital infrastructure realize road-vehicle interaction through an RSU (Road Side Unit): for example, a camera captures pedestrians, traffic-light states, whether there is a traffic accident ahead, weather and other information, which is shared through the RSU with vehicles running nearby.
S1.1.2, decision-level fusion can be carried out on the unified-reference data by Bayesian reasoning, and the fusion posterior probability calculated. Assuming the data have x acquisition sources, m independent environment attributes can be obtained, denoted A_i (i = 1, 2, ..., m). As shown in FIG. 2, the Bayesian reasoning can be performed in multiple stages: the first stage collects the data information features of each sensor source and identifies target attributes, giving a characterization quantity of each target attribute, denoted B_j (j = 1, 2, ..., x); the second stage calculates the likelihood functions P(B_j | A_i) of the multi-source sensors under each assumed environment attribute, where the multi-source sensors may be understood as the collection of sensors described above; the third stage calculates the fusion posterior probability of the multi-source data according to the Bayes formula.
S1.1.3, the decision logic determines whether the decision-level fusion result meets the perception-precision requirement.
In this embodiment, in the third-stage fusion process, the target fusion posterior probability may be calculated as follows:

$$P(A_i \mid B_1, B_2, \dots, B_x) = \frac{P(B_1, B_2, \dots, B_x \mid A_i)\, P(A_i)}{P(B_1, B_2, \dots, B_x)}$$

where P(A_i | B_1, B_2, ..., B_x) is the fusion posterior probability of hypothesis A_i given the unified-reference input information of all sensors; P(B_1, B_2, ..., B_x | A_i) = ∏_j P(B_j | A_i) is the joint statistical probability of the multi-source information under hypothesis A_i; P(A_i) is the prior probability of hypothesis A_i; and P(B_1, B_2, ..., B_x) = ∏_j P(B_j) is the joint statistical probability of the confidence of the multi-source acquired data.
S1.2, judging a confidence coefficient condition based on fusion posterior probability:
The maximum a posteriori decision logic may be selected, which directly selects the target attribute with the greatest posterior joint probability:

$$P^{*} = \max_{1 \le i \le m} P(A_i \mid B_1, B_2, \dots, B_x)$$

Let the preset decision threshold be P_0. If P* ≥ P_0, the fusion result A* = argmax_{1 ≤ i ≤ m} P(A_i | B_1, B_2, ..., B_x) is output; otherwise, the next step is executed to carry out deep fusion.
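For illustration, the following Python sketch shows how the staged Bayesian fusion and the maximum a posteriori threshold test could be realized, assuming the priors P(A_i) and likelihoods P(B_j | A_i) are already available as arrays; the threshold value in the usage comment is an arbitrary assumption:

```python
import numpy as np

def fuse_posterior(prior, likelihoods):
    """Fusion posterior P(A_i | B_1..B_x) for m attributes, x sources.

    prior:       shape (m,), the P(A_i)
    likelihoods: shape (x, m), entry [j, i] = P(B_j | A_i)
    """
    joint = prior * np.prod(likelihoods, axis=0)  # P(A_i) * prod_j P(B_j|A_i)
    return joint / joint.sum()                    # normalise (role of P(B_1..B_x))

def decide(prior, likelihoods, p0):
    """Return (attribute index, P*) if P* >= P_0, else None -> deep fusion."""
    post = fuse_posterior(prior, likelihoods)
    i = int(np.argmax(post))
    return (i, float(post[i])) if post[i] >= p0 else None

# e.g. decide(prior, likelihoods, p0=0.9)  # 0.9 is an assumed threshold
```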
S2, the multisource information is expanded into deep learning driving characteristics:
s2.1, classifying deep learning features of multi-source information:
The data acquired by the self-vehicle and the data acquired at the road end are classified separately: K-means clustering divides the multi-source information acquired by the self-vehicle and the road end into four classes, namely self-vehicle motion features, target motion features, visual information features and identification information features. The self-vehicle motion features may include the self-vehicle's running speed, acceleration and similar features; the target motion features may include the target's running speed, acceleration and similar features; the visual information may be visual information of the surrounding environment; and the identification information may relate to identification of the object type.
In the clustering work flow, the four classes of information (self-vehicle motion features, target motion features, visual information features and identification information features) are first extracted from the raw data, and one data object is randomly selected from each class to serve as a cluster center, i.e., 4 initial centers are selected for k-means clustering. Each object of the multi-source information to be classified is assigned to the cluster whose center is most similar to it according to the similarity between the object and the cluster centers. The mean of all objects in each cluster is then recomputed and used as the new cluster center. This process repeats and terminates when the maximum number of iterations is reached. The similarity can be described using the following Euclidean-distance criterion:

$$J_c = \sum_{i=1}^{k} \sum_{x \in S_i} \mathrm{dist}(C_i, x)^2$$

where J_c is the similarity function between the classified objects and their clusters; k is the number of clusters; x is a multi-source information object; S_i is the i-th cluster; C_i is the center point of the i-th cluster; and dist(C_i, x) is the distance from x to C_i.
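A minimal sketch of this clustering procedure follows, assuming the multi-source objects are stacked row-wise in a NumPy array and that `init_idx` holds the index of the one object drawn from each of the four feature classes; it is illustrative only:

```python
import numpy as np

def kmeans(X, init_idx, max_iter=50):
    """K-means as in S2.1: init_idx gives one randomly chosen object per
    feature class (the 4 initial centers); objects are assigned by
    Euclidean distance, means recomputed, stopping at max_iter."""
    centers = X[np.asarray(init_idx)].astype(float)
    for _ in range(max_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)             # nearest-center assignment
        for i in range(len(centers)):         # recompute each cluster mean
            members = X[labels == i]
            if len(members):
                centers[i] = members.mean(axis=0)
    return labels, centers
```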
S2.2, deep learning feature combination and sample generation:
S2.2.1, the self-vehicle motion features and the target motion features are combined to generate a new feature class, recorded as motion features; the target motion features, visual information features and identification information features are likewise combined, recorded as target features. This yields six feature classes: self-vehicle motion features, target motion features, visual information features, identification information features, motion features and target features.
S2.2.2, the vehicle motion state, the target motion state, and the target classification corresponding to the feature are marked as tag values.
S2.2.3, to develop deep learning of the self-vehicle state information and transfer learning of the roadbed information, the six feature classes captured by the self-vehicle are superposed in dimension and, together with the corresponding label values, used as the training sample set for self-vehicle deep learning, i.e., the source-domain samples; meanwhile, the six feature classes captured at the road end and received by the self-vehicle are superposed in dimension and, together with the corresponding label values, used as the training sample set for transfer learning, i.e., the target-domain samples.
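As a sketch of this sample-generation step, and assuming that "superposition in dimension" means vector concatenation (an interpretation, not stated explicitly in the patent), the six feature classes could be assembled as follows:

```python
import numpy as np

def build_sample(ego_motion, tgt_motion, visual, ident):
    """Stack the four clustered feature classes plus the two combined
    classes (motion = ego + target; target = target + visual + ident)
    into one training vector, per S2.2."""
    motion = np.concatenate([ego_motion, tgt_motion])
    target = np.concatenate([tgt_motion, visual, ident])
    return np.concatenate([ego_motion, tgt_motion, visual, ident,
                           motion, target])
```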
S3, deep learning and training of driving characteristics of the self-vehicle:
The superposed features from step S2 are used as the network input and processed by the self-vehicle deep learning model, which outputs the corresponding self-vehicle motion state, target motion state and the class probabilities of each target (for a predicted target: the probability X_1 that it belongs to class 1, the probability X_2 for class 2, ..., and the probability X_N for class N). The self-vehicle motion state includes the speed and acceleration in the X and Y directions. The self-vehicle deep learning model is a deep learning network; a BP neural network can be selected, in which the number of input-layer nodes equals the dimension of the input feature vector, each input node corresponding to one feature of the sample, while there is only one output-layer node.
For the visual information features and identification information features, which involve classification of perceived objects, the following cross-entropy loss function may be employed for training:
$$L_1 = -\sum_{i=1}^{N} Z_i \log \hat{Z}_i$$

where L_1 is the cross-entropy loss over the self-vehicle identification-class features; N is the number of classes of objects to be identified; Z_i is the marked true sample type; and Ẑ_i is the prediction probability for each type.
Mean-square-error loss functions may be selected for the self-vehicle motion state and the target motion state, respectively:

$$L_2 = (V_x - \hat{V}_x)^2 + (V_y - \hat{V}_y)^2 + (a_x - \hat{a}_x)^2 + (a_y - \hat{a}_y)^2$$

$$L_3 = \sum_{j=1}^{M} \left[ (V_{xj} - \hat{V}_{xj})^2 + (V_{yj} - \hat{V}_{yj})^2 + (a_{xj} - \hat{a}_{xj})^2 + (a_{yj} - \hat{a}_{yj})^2 \right]$$

where V_x, V_y, a_x, a_y are the true speed and acceleration of the self-vehicle in the X and Y directions; V̂_x, V̂_y, â_x, â_y are the predicted speed and acceleration of the self-vehicle in the X and Y directions; M is the number of targets to be identified; V_{xj}, V_{yj}, a_{xj}, a_{yj} are the true speed and acceleration of the j-th target in the X and Y directions; and V̂_{xj}, V̂_{yj}, â_{xj}, â_{yj} are the corresponding predicted speed and acceleration in the X and Y directions.
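A direct NumPy transcription of these three loss functions might look as follows; whether the patent intends sums or means over the squared errors is not specified, so plain sums are assumed here:

```python
import numpy as np

def ce_loss(z_true, z_pred):
    """Cross-entropy over N object classes (L1); z_true is one-hot."""
    return float(-np.sum(z_true * np.log(np.asarray(z_pred) + 1e-12)))

def ego_mse(v_true, v_pred):
    """Squared error over the ego vehicle's (Vx, Vy, ax, ay) (L2)."""
    return float(np.sum((np.asarray(v_true) - np.asarray(v_pred)) ** 2))

def target_mse(V_true, V_pred):
    """Squared error summed over M targets' (Vx, Vy, ax, ay) rows (L3)."""
    return float(np.sum((np.asarray(V_true) - np.asarray(V_pred)) ** 2))
```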
S4, roadbed driving characteristic transfer learning and training:
Feature-based transfer learning performs feature transfer by minimizing the difference between the source-domain and target-domain feature distributions. In this embodiment, the source-domain data are provided by the self-vehicle deep learning model, and transfer learning is performed on the driving feature data received by the roadbed equipment, giving the estimated vehicle, target and environment features at the roadbed end together with their probabilities, for use in deep cooperative information fusion.
The transfer-learning method for the road-end information uses transfer component analysis to generate a mapping under which the marginal probability distributions of the source-domain and target-domain data become similar. The maximum mean discrepancy method (Maximum Mean Discrepancy, MMD) is chosen here to represent the difference between the source-domain and target-domain distributions via the difference of sample means in a Hilbert space. Let the source-domain and target-domain data sets be

$$D_S = \{x_S^{(i)}\}_{i=1}^{n_S}, \qquad D_T = \{x_T^{(j)}\}_{j=1}^{n_T}$$

where x_S^{(i)} and x_T^{(j)} are the source-domain and target-domain samples, and n_S and n_T are the numbers of source-domain and target-domain samples, respectively. To establish the correlation between the source-domain and target-domain data, the following MMD is calculated to realize the mapping from the source-domain to the target-domain information:

$$\mathrm{MMD}(D_S, D_T) = \left\| \frac{1}{n_S} \sum_{i=1}^{n_S} \phi\!\left(x_S^{(i)}\right) - \frac{1}{n_T} \sum_{j=1}^{n_T} \phi\!\left(x_T^{(j)}\right) \right\|_{\mathcal{H}}^{2}$$

where φ(x) denotes the mapping of sample x into the Hilbert space H.
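For illustration, the empirical MMD under a generic feature map phi can be sketched as below; the identity default corresponds to a linear-kernel MMD and is an assumption made for the example:

```python
import numpy as np

def mmd(Xs, Xt, phi=lambda x: x):
    """Empirical MMD^2 between source and target sample sets under a
    feature map phi (identity map = linear-kernel MMD)."""
    mu_s = phi(np.asarray(Xs)).mean(axis=0)   # source mean embedding
    mu_t = phi(np.asarray(Xt)).mean(axis=0)   # target mean embedding
    diff = mu_s - mu_t
    return float(diff @ diff)
```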
And constructing a migration learning network consisting of an input layer, a hidden layer and an output layer for target domain data acquired from a roadbed end, wherein the input layer is used for receiving an input source domain sample and a target domain sample and obtaining vector characterization of the source domain sample and the target domain sample in the input layer, and the execution results of the source domain and the target domain tasks can be respectively obtained in the output layer through the processing of the hidden layer. The transfer learning loss function of the roadbed driving characteristics is basically the same as that of the self-vehicle deep learning, and the main difference is that the training data sets are different.
To avoid the negative-transfer problem, in this embodiment a sample selector is configured for the transfer-learning model. The sample selector clusters the source-domain and target-domain samples with K-means and comprises a clustering module and a weighting module: after the clustering module obtains the clustering result, it passes the result to the weighting module, which calculates weights for the source-domain samples and screens them, so that samples similar to the target-domain samples are selected from the source domain.
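One plausible reading of the clustering and weighting modules is sketched below: each source sample is weighted by its proximity to the nearest target-domain cluster center, and low-weight samples can be screened out. The exponential weighting form and the kernel width `tau` are assumptions for illustration, not specified in the patent:

```python
import numpy as np

def weight_source(Xs, target_centers, tau=1.0):
    """Weight source samples by distance to the nearest target-domain
    cluster center; small weights flag candidates for screening out
    (to curb negative transfer)."""
    d = np.linalg.norm(Xs[:, None, :] - target_centers[None, :, :], axis=2)
    return np.exp(-d.min(axis=1) / tau)   # weights in (0, 1]
```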
S5, calculating joint probability based on DS evidence theory:
DS evidence theory is adopted to perform joint probability calculation and fused-information screening on the states, features and confidence probabilities output by self-vehicle deep learning and by transfer learning of the roadbed driving information. The confidence probabilities of the motion states, recognition types and other information obtained by self-vehicle learning and roadbed learning are defined over a hypothesis space, denoted Θ. The driving-feature basic probability allocations (Basic Probability Allocation, BPA) of self-vehicle learning and roadbed learning over Θ are denoted P_1(·) ∈ (0, 1) and P_2(·) ∈ (0, 1), respectively. Then, for any associated driving-feature event A ⊆ Θ, the joint probability synthesis rule is:

$$(P_1 \oplus P_2)(A) = \frac{1}{E} \sum_{A_1 \cap A_2 = A} P_1(A_1)\, P_2(A_2)$$

where (P_1 ⊕ P_2)(A) is the joint probability of the self-vehicle deep-learning and roadbed transfer-learning feature information; ⊕ denotes the joint probability operation; A_1 is a target class identified by the self-vehicle model; A_2 is a target class in the set identified by the roadbed transfer model; and E is a normalization coefficient, obtained by:

$$E = \sum_{A_1 \cap A_2 \neq \phi} P_1(A_1)\, P_2(A_2)$$

where φ denotes the null event, and A_1 ∩ A_2 ≠ φ refers to the case in which the self-vehicle model and the roadbed transfer model simultaneously consider the target object to belong to the same class.
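The synthesis rule for the common special case in which both BPAs put mass only on singleton classes can be sketched as follows; the class labels in the usage comment are hypothetical:

```python
def ds_combine(p1, p2):
    """Dempster's rule for two BPAs over singleton classes: mass is kept
    only where both sources agree on the class, then renormalised by E."""
    classes = set(p1) & set(p2)
    E = sum(p1[c] * p2[c] for c in classes)   # mass with A1 ∩ A2 != null
    if E == 0:
        raise ValueError("total conflict between the two sources")
    return {c: p1[c] * p2[c] / E for c in classes}

# e.g. ds_combine({"car": 0.7, "truck": 0.3}, {"car": 0.6, "truck": 0.4})
# -> {"car": 0.778, "truck": 0.222} (approximately)
```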
S6, outputting the vehicle target identification and fusion confidence degree:
Through self-vehicle deep learning and roadbed transfer learning, the self-vehicle motion states, target motion states, visual identification information and associated confidence probabilities output by the two models are obtained; DS evidence theory calculation then yields the joint probabilities of the fused feature information. For the output, the identification class and the motion features with the highest joint probability are taken as the final result. For the motion-state processing, the speeds and accelerations of an object in the X and Y directions obtained by self-vehicle learning and by roadbed learning are taken, the components in each direction are averaged, and vector addition of the averaged components in the two directions gives the final motion-state judgment result.
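A minimal sketch of this final motion-state combination, assuming each model's estimate is a (Vx, Vy, ax, ay) array, is:

```python
import numpy as np

def fuse_motion(ego_est, road_est):
    """Average the per-axis (Vx, Vy, ax, ay) estimates from the two
    models; the fused planar velocity/acceleration vectors then follow
    by vector addition of the averaged X and Y components."""
    return (np.asarray(ego_est) + np.asarray(road_est)) / 2.0
```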
Example 2:
the embodiment provides a vehicle-road cooperative information fusion system based on transfer learning, which comprises the following steps:
a data acquisition module configured to: acquiring driving state information of an automatic driving automobile and environment information of a road;
a spatiotemporal registration module configured to: obtaining driving state information and environment information of a unified reference through space-time registration processing;
a fusion posterior probability calculation module configured to: carrying out decision-level fusion on the driving state information and the environment information with unified reference, and calculating fusion posterior probability;
a judgment module configured to: comparing the calculated fusion posterior probability with a preset threshold value, and if the calculated fusion posterior probability is greater than or equal to the preset threshold value, obtaining a fusion result;
a clustering module configured to: if the calculated fusion posterior probability is smaller than a preset threshold value, clustering the acquired driving state information and environment information to obtain various characteristic information;
the vehicle deep learning module is configured to: obtaining a self-vehicle motion state and a target motion state according to the clustered various feature information and a preset self-vehicle deep learning model; the self-vehicle deep learning model is a trained neural network;
a transfer learning module configured to: performing transfer learning on various feature information to obtain estimated vehicle, target and environment features and confidence; when the transfer learning is carried out, the self-vehicle deep learning model provides source domain data, and the transfer learning is carried out on driving characteristic data in the environment information;
an information fusion module configured to: and carrying out joint probability calculation and fusion information screening on the obtained self-vehicle motion state, the target motion state, the estimated vehicle, the target and environment characteristics and the confidence coefficient by adopting a DS evidence theory.
The working method of the system is the same as the vehicle-road collaborative information fusion method based on the transfer learning in embodiment 1, and is not described here again.
Example 3:
the present embodiment provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the steps in the vehicle-road collaborative information fusion method based on transfer learning described in embodiment 1 are implemented when the processor executes the program.
Example 4:
the present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps in the vehicle-road cooperative information fusion method based on transfer learning described in embodiment 1.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art can make various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in its protection scope.

Claims (8)

1. The vehicle-road cooperative information fusion method based on transfer learning is characterized by comprising the following steps of:
acquiring driving state information of an automatic driving automobile and environment information of a road;
the self-vehicle receiving information acquired by the road end and combining it with data collected by a self-vehicle sensor, and obtaining driving state information and environment information under a unified reference through space-time registration processing;
carrying out decision-level fusion on the driving state information and the environment information with unified reference, and calculating fusion posterior probability;
comparing the calculated fusion posterior probability with a preset threshold value, and if the calculated fusion posterior probability is greater than or equal to the preset threshold value, obtaining a fusion result;
if the calculated fusion posterior probability is smaller than a preset threshold value, clustering the acquired driving state information and environment information to obtain various characteristic information;
the method comprises the steps of respectively clustering data acquired by a self-vehicle and road end acquisition data, wherein the data are divided into self-vehicle movement characteristics, target movement characteristics, visual information characteristics and identification information;
combining the self-vehicle motion features and the target motion features to generate a new feature class, recorded as motion features; combining the target motion features, the visual information features and the identification information features, recorded as target features, so as to obtain six feature classes: self-vehicle motion features, target motion features, visual information features, identification information features, motion features and target features;
marking the self-vehicle motion state, the target motion state and the target classification corresponding to the characteristics as tag values;
overlapping six types of features captured by the vehicle in dimensions, and then using the six types of features and the corresponding tag values as a training sample set for deep learning of the vehicle, namely a source domain sample; meanwhile, six types of features captured by a road end received by a self-vehicle are overlapped in dimension, and finally the six types of features and the corresponding label value are used as a training sample set for transfer learning, namely a target domain sample;
obtaining a self-vehicle motion state and a target motion state according to the clustered various feature information and a preset self-vehicle deep learning model; the self-vehicle deep learning model is a trained neural network;
providing source domain data by a self-vehicle deep learning model, and performing migration learning on driving characteristic data received by road end equipment to obtain estimated vehicle, target and environment characteristics and confidence;
and carrying out joint probability calculation and fusion information screening on the obtained self-vehicle motion state, the target motion state, the estimated vehicle, the target and environment characteristics and the confidence coefficient by adopting a DS evidence theory.
2. The vehicle-road cooperative information fusion method based on transfer learning as claimed in claim 1, wherein, when decision-level fusion is performed on the driving state information and environment information under the unified reference and the fusion posterior probability is calculated: data information features are collected and target attributes identified, giving characterization quantities of the target attributes; likelihood functions are calculated under assumed environmental attributes; and the fusion posterior probability is calculated according to the Bayes formula.
3. The vehicle-road cooperative information fusion method based on transfer learning of claim 1, wherein the vehicle motion characteristics comprise a vehicle running speed characteristic and an acceleration characteristic; the target motion characteristics comprise target running speed characteristics and acceleration characteristics; the visual information is visual information of surrounding environment; the identification information is identification information about identification of the target type.
4. The vehicle-road cooperative information fusion method based on transfer learning as claimed in claim 1, wherein, when the acquired driving state information and environment information are clustered, the self-vehicle motion features, target motion features, visual information features and identification information are first extracted from the raw data, and one data object is randomly selected from each class of the extracted information to serve as a cluster center; each item of multi-source information to be classified is assigned to a cluster according to its similarity to the cluster centers; the mean of all objects in each cluster is then recomputed and taken as the new cluster center; clustering repeats in this way and ends when the maximum number of iterations is reached.
5. The vehicle-road cooperative information fusion method based on transfer learning of claim 1, wherein the vehicle motion state, the target motion state, the visual identification information and the associated confidence probability are respectively obtained through a vehicle deep learning model and transfer learning, and the joint probability of the vehicle deep learning and transfer learning characteristic information is obtained after DS evidence theory calculation; and taking the identification classification and the characteristic information with the highest probability as the final result.
6. A vehicle-road cooperative information fusion system based on transfer learning, which performs the vehicle-road cooperative information fusion method based on transfer learning as set forth in any one of claims 1 to 5, and is characterized by comprising:
a data acquisition module configured to: acquiring driving state information of an automatic driving automobile and environment information of a road;
a spatiotemporal registration module configured to: obtaining driving state information and environment information of a unified reference through space-time registration processing;
a fusion posterior probability calculation module configured to: carrying out decision-level fusion on the driving state information and the environment information with unified reference, and calculating fusion posterior probability;
a judgment module configured to: comparing the calculated fusion posterior probability with a preset threshold value, and if the calculated fusion posterior probability is greater than or equal to the preset threshold value, obtaining a fusion result;
a clustering module configured to: if the calculated fusion posterior probability is smaller than a preset threshold value, clustering the acquired driving state information and environment information to obtain various characteristic information;
the vehicle deep learning module is configured to: obtaining a self-vehicle motion state and a target motion state according to the clustered various feature information and a preset self-vehicle deep learning model; the self-vehicle deep learning model is a trained neural network;
a transfer learning module configured to: performing transfer learning on various feature information to obtain estimated vehicle, target and environment features and confidence; when the transfer learning is carried out, the self-vehicle deep learning model provides source domain data, and the transfer learning is carried out on driving characteristic data in the environment information;
an information fusion module configured to: and carrying out joint probability calculation and fusion information screening on the obtained self-vehicle motion state, the target motion state, the estimated vehicle, the target and environment characteristics and the confidence coefficient by adopting a DS evidence theory.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps in the vehicle-road cooperative information fusion method based on transfer learning as claimed in any one of claims 1 to 5.
8. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements the steps of the vehicle-road cooperative information fusion method based on transfer learning as claimed in any one of claims 1 to 5.
Application CN202210498666.9A; priority date 2022-05-09; filing date 2022-05-09; title: Vehicle-road cooperative information fusion method and system based on transfer learning; status: Active; granted publication: CN114884994B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210498666.9A CN114884994B (en) 2022-05-09 2022-05-09 Vehicle-road cooperative information fusion method and system based on transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210498666.9A CN114884994B (en) 2022-05-09 2022-05-09 Vehicle-road cooperative information fusion method and system based on transfer learning

Publications (2)

Publication Number Publication Date
CN114884994A CN114884994A (en) 2022-08-09
CN114884994B (en) 2023-06-27

Family

ID=82673905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210498666.9A Active CN114884994B (en) 2022-05-09 2022-05-09 Vehicle-road cooperative information fusion method and system based on transfer learning

Country Status (1)

Country Link
CN (1) CN114884994B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021202602A1 (en) * 2020-03-30 2021-10-07 Moove.Ai Vehicle-data analytics
CN114037948A (en) * 2021-10-08 2022-02-11 中铁第一勘察设计院集团有限公司 Vehicle-mounted road point cloud element vectorization method and device based on migration active learning

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764108A (en) * 2018-05-22 2018-11-06 湖北省专用汽车研究院 A kind of Foregut fermenters method based on Bayesian inference
CN109410564A (en) * 2018-12-10 2019-03-01 肇庆学院 A kind of vehicle platoon lonitudinal redundance control system based on information fusion technology
CN110208793B (en) * 2019-04-26 2022-03-11 纵目科技(上海)股份有限公司 Auxiliary driving system, method, terminal and medium based on millimeter wave radar
KR20200129457A (en) * 2019-05-08 2020-11-18 삼성전자주식회사 Neural network system for performing learning, learning method thereof and transfer learning method of neural network processor
CN111026127B (en) * 2019-12-27 2021-09-28 南京大学 Automatic driving decision method and system based on partially observable transfer reinforcement learning
CN112950678A (en) * 2021-03-25 2021-06-11 上海智能新能源汽车科创功能平台有限公司 Beyond-the-horizon fusion sensing system based on vehicle-road cooperation
CN114328448A (en) * 2021-12-01 2022-04-12 中交第二公路勘察设计研究院有限公司 Expressway vehicle following behavior reconstruction method based on simulated learning algorithm


Also Published As

Publication number Publication date
CN114884994A (en) 2022-08-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant