CN117459922A - Data transmission method, device, terminal and storage medium - Google Patents


Info

Publication number
CN117459922A
CN117459922A (application CN202311204324.2A)
Authority
CN
China
Prior art keywords
data
vehicle
collaborative
surrounding environment
collaborative awareness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311204324.2A
Other languages
Chinese (zh)
Inventor
周明宇
王芳
崔琪楣
张雪菲
梁高红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Baicells Technologies Co Ltd
Original Assignee
Beijing University of Posts and Telecommunications
Baicells Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications, Baicells Technologies Co Ltd filed Critical Beijing University of Posts and Telecommunications
Priority to CN202311204324.2A
Publication of CN117459922A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/40Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/44Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/10Pre-processing; Data cleansing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/38Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W8/00Network data management
    • H04W8/18Processing of user or subscriber data, e.g. subscribed services, user preferences or user profiles; Transfer of user or subscriber data
    • H04W8/20Transfer of user or subscriber data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a data transmission method, a device, a terminal and a storage medium. The method comprises the following steps: the vehicle end and the cloud end communicate with each other; the vehicle end acquires surrounding environment data and performs semantic extraction, screening and encoding on the surrounding environment data to obtain collaborative perception data; the cloud receives the collaborative perception data sent by a plurality of vehicle ends, obtains predicted global collaborative perception data based on the global collaborative perception data, and transmits the predicted global collaborative perception data to the vehicle ends. The invention introduces semantic data transmission, which reduces the data interaction flow and the amount of transmitted data, achieving efficient data transmission. In addition, through the design of the object screening network, the vehicle autonomously selects the objects carrying the most information for the cloud, which reduces the transmission of redundant data and improves the efficiency of data transmission.

Description

Data transmission method, device, terminal and storage medium
Technical Field
The application relates to the technical field of cooperative sensing of vehicles and cloud in the Internet of vehicles, in particular to a data transmission method, a device, a terminal and a storage medium.
Background
In the field of Internet-of-vehicles cooperative sensing, there are two major approaches: vehicle-to-vehicle cooperative sensing and edge-cloud-assisted vehicle-end cooperative sensing. When vehicles share data directly with each other, they face complex data processing, which increases the computing cost and computing delay at the vehicle end. Moreover, as the intelligent construction of roads is gradually perfected, schemes tend toward edge-cloud-assisted vehicle cooperative sensing. The edge cloud can provide more computing power and more reliable data processing services for the vehicle, thereby enabling more efficient, accurate and timely collective perception (CP).
Collaborative awareness means that a vehicle's ability to perceive its surroundings can be enhanced by sharing real-time information (based on vehicle sensor information or sensor data from the road side). To do so, the raw sensor data are typically analyzed to detect objects and to compute a mathematical representation of each detected object. A CPM (collective perception message) is then sent, carrying a list of detected objects, including high-level information on their dynamic state (e.g., heading, speed, acceleration), size, confidence level, estimated type, and the detecting sensors. The purpose of this information is to create an omnidirectional view and thereby help avoid accidents.
Collaborative awareness information sharing extends a vehicle's limited sensor view and supports several use cases of various safety applications (certain advanced driving features in 3GPP, also called fully automatic driving: cooperative collision avoidance, fully automatic driving information sharing, provision of safety information at urban intersections, etc.).
It also faces challenges: (1) the transmission data volume is large: future vehicles will be equipped with more than 200 sensors, and a vehicle will generate massive data at every moment; (2) data redundancy is high: when many vehicle ends share sensor data with the edge cloud, the object information contained in the CPMs generated by adjacent vehicles is highly redundant (the expected number of redundant transmissions of the same data may be about 6.5), and higher traffic density can occur in the uplink (UL), which challenges existing communication technology.
This situation ultimately leads to a large volume of data transmission, which causes network congestion; when congestion is severe, packet loss can occur at the receiving end. High reliability and low delay are critical for the reliable operation of autonomous driving. Therefore, research on reducing data redundancy between neighboring vehicles in edge-cloud-assisted vehicle cooperative awareness is very important.
For the method for reducing the data transmission quantity by cooperative sensing, various space partition technologies are mainly utilized to reduce the data redundancy between adjacent vehicles so as to reduce the data transmission quantity. The cloud end needs to collect information such as the position of the vehicle end and then issues partition decisions. And the vehicle end uploads the data of the object in the area according to the partition decision.
Existing methods for reducing the amount of data transmitted in cooperative sensing mainly use various spatial partitioning techniques to reduce data redundancy between adjacent vehicles: the cloud collects information such as the positions of the vehicle ends and then issues partition decisions, and each vehicle end uploads the data of objects in its assigned area according to the partition decision.
The prior art has the following defects: first, control signaling for partitioning needs to be sent alongside the data exchange, which increases communication overhead; second, the partitioning technique depends heavily on positioning. Although differential GPS can achieve centimeter-level accuracy, in heavily occluded environments such as urban roads, or in tunnels and underground scenes, positioning accuracy can drop to meter level or the signal can be lost entirely.
Disclosure of Invention
The main purpose of the present application is to provide a data transmission method, device, terminal and storage medium, so as to solve the problems of large communication overhead and low positioning accuracy in the related art.
To achieve the above object, in a first aspect, the present application provides a data transmission method, including:
the vehicle end and the cloud end are communicated with each other;
the vehicle end acquires the surrounding environment data, performs semantic extraction, screening and encoding on the surrounding environment data, and obtains collaborative perception data;
the cloud receives collaborative awareness data sent by a plurality of vehicle ends, and obtains predicted global collaborative awareness data based on the global collaborative awareness data;
and the cloud transmits the predicted global collaborative awareness data to the vehicle end.
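The four steps above can be sketched as a minimal round trip between the vehicle end and the cloud. Every function name, codec and data shape below is an illustrative assumption, not part of the claimed method.

```python
# Minimal sketch of the claimed vehicle-cloud round trip.
# All names, codecs and data structures here are hypothetical placeholders.

def extract_semantics(obj):
    # Placeholder: a real system would run a neural feature extractor here.
    return {"id": obj["id"], "feature": obj["state"]}

def vehicle_build_cp_data(environment_data, screen, encode):
    """Vehicle end: semantic extraction -> screening -> semantic encoding."""
    features = [extract_semantics(obj) for obj in environment_data]
    selected = [f for f in features if screen(f)]  # object screening network
    return encode(selected)

def cloud_round(cp_batches, decode, predict):
    """Cloud end: decode per-vehicle data, fuse, predict the next global view."""
    global_view = [item for batch in cp_batches for item in decode(batch)]
    return predict(global_view)  # transmitted back to the vehicle ends

# Toy run with identity codecs and a pass-through predictor.
env = [{"id": 1, "state": 0.5}, {"id": 2, "state": 0.9}]
cp = vehicle_build_cp_data(env, screen=lambda f: f["feature"] > 0.6,
                           encode=lambda xs: xs)
pred = cloud_round([cp], decode=lambda b: b, predict=lambda g: g)
```

Only the object that passes the screening step reaches the cloud, which is the mechanism the method relies on to cut redundant transmissions.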
In one possible implementation manner, the vehicle end obtains surrounding environment data, performs semantic extraction, screening and encoding on the surrounding environment data to obtain collaborative awareness data, and includes:
the vehicle end performs object semantic extraction on the surrounding environment data to obtain object semantic feature data;
obtaining action space information of the vehicle through an object screening network;
matching the action space information with all object feature data acquired by a vehicle end to obtain matched data;
and carrying out semantic coding on the matched data to obtain collaborative perception data.
In one possible implementation manner, the vehicle end performs object semantic extraction on surrounding environment data to obtain object semantic feature data, including:
the vehicle end pre-processes the surrounding environment data to obtain a target object and surrounding environment data corresponding to the target object;
and extracting semantic features of surrounding environment data corresponding to the target object to obtain object semantic feature data.
In one possible implementation manner, the obtaining, through the object filtering network, action space information of the vehicle includes:
acquiring predicted global collaborative awareness data from a cloud;
determining a collaborative awareness state based on the object semantic feature data and the predicted global collaborative awareness data, wherein the collaborative awareness state comprises a relative information entropy and a network congestion state;
and inputting the collaborative awareness state into an object screening network, and outputting the action space information of the vehicle.
In one possible implementation, the reward function of the object screening network is:
r_t = λ_local + μ_cpm · H_{t,ω} + μ_chanel · C_t
wherein λ_local indicates that the receiving party has not detected the target object ω, μ_cpm is a first preset threshold value, μ_chanel is a second preset threshold value, H_{t,ω} represents the relative information entropy containing the target object ω at time t, and C_t is the network congestion level at time t.
In one possible implementation manner, the cloud receives collaborative awareness data sent by a plurality of vehicle ends, obtains predicted global collaborative awareness data based on the global collaborative awareness data, and includes:
the cloud performs semantic decoding on the collaborative awareness data from the plurality of vehicle ends to obtain decoded global collaborative awareness data;
and inputting the decoded global cooperative sensing data into a prediction network to generate predicted global cooperative sensing data.
In one possible implementation, the prediction network is at least one of a recurrent neural network and a long short-term memory network.
In a second aspect, an embodiment of the present invention provides a data transmission apparatus, including:
the vehicle end and the cloud end are communicated with each other;
the data processing module is used for acquiring surrounding environment data by the vehicle end, and carrying out semantic extraction, screening and encoding on the surrounding environment data to obtain collaborative perception data;
the prediction module is used for receiving the collaborative awareness data sent by the plurality of vehicle ends by the cloud end and obtaining predicted global collaborative awareness data based on the global collaborative awareness data;
and the transmission module is used for transmitting the predicted global collaborative awareness data to the vehicle end by the cloud.
In a third aspect, an embodiment of the present invention provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of any one of the data transmission methods described above.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of any one of the data transmission methods described above.
The embodiment of the invention provides a data transmission method, a device, a terminal and a storage medium: the vehicle end acquires surrounding environment data and performs semantic extraction, screening and encoding on the surrounding environment data to obtain collaborative awareness data; the cloud receives the collaborative awareness data sent by a plurality of vehicle ends, obtains predicted global collaborative awareness data based on the global collaborative awareness data, and transmits the predicted global collaborative awareness data to the vehicle ends. The invention introduces semantic data transmission, which reduces the data interaction flow and the amount of transmitted data, achieving efficient data transmission. In addition, through the design of the object screening network, the vehicle autonomously selects the objects carrying the most information for the cloud, which reduces the transmission of redundant data and improves the efficiency of data transmission.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, are included to provide a further understanding of the application; its other features, objects and advantages will become more apparent from them. The drawings of the illustrative embodiments of the present application and their descriptions serve to explain the present application and are not to be construed as unduly limiting it. In the drawings:
fig. 1 is a flowchart of an implementation of a data transmission method according to an embodiment of the present invention;
fig. 2 is a flowchart of an implementation of a data transmission method according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a data transmission device according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a terminal according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not imply an order of execution; the order of execution should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
It should be understood that in the present invention, "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements that are expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present invention, "plurality" means two or more. "And/or" merely describes an association relationship between associated objects, and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship. "Comprising A, B and C" and "comprising A, B, C" mean that all three of A, B and C are comprised; "comprising A, B or C" means that one of A, B and C is comprised; and "comprising A, B and/or C" means that any one, any two, or all three of A, B and C are comprised.
It should be understood that in the present invention, "B corresponding to A" or "A corresponding to B" means that B is associated with A, and B can be determined from A. Determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information. A matches B when the similarity between A and B is greater than or equal to a preset threshold.
As used herein, "if" may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the following description will be made by way of specific embodiments with reference to the accompanying drawings.
In an embodiment, as shown in fig. 1, the present invention provides a data transmission method applied to a vehicle end and a cloud end which communicate with each other, including the following steps:
step S101: and the vehicle end acquires the surrounding environment data, performs semantic extraction, screening and encoding on the surrounding environment data, and obtains collaborative perception data.
For the vehicle end, acquiring surrounding environment data and performing semantic extraction, screening and encoding on the surrounding environment data to obtain collaborative awareness data comprises the following steps: the vehicle end performs object semantic extraction on the surrounding environment data to obtain object semantic feature data, obtains the action space information of the vehicle through the object screening network, matches the action space information with all object feature data obtained by the vehicle end to obtain the matched data, and performs semantic coding on the matched data to obtain the collaborative awareness data.
The method for extracting the object semantic features from the surrounding environment data by the vehicle end comprises the following steps: the vehicle end pre-processes the surrounding environment data to obtain a target object and surrounding environment data corresponding to the target object, and then performs semantic feature extraction on the surrounding environment data corresponding to the target object to obtain object semantic feature data.
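One way to read the two-stage extraction above is: first crop the region for each detected target object, then map each crop into a compact semantic feature space. The sketch below substitutes a fixed random projection for the learned encoder; the projection, dimensions and box format are illustrative assumptions, not the patent's network.

```python
import numpy as np

rng = np.random.default_rng(0)

def preprocess(frame, boxes):
    """Stage 1: crop the pixel region of each detected target object."""
    return [frame[y0:y1, x0:x1] for (x0, y0, x1, y1) in boxes]

def semantic_features(crops, dim=8):
    """Stage 2: map each crop into a low-dimensional semantic feature space.
    A learned CNN encoder is stood in for here by a fixed random projection."""
    out = []
    for crop in crops:
        flat = crop.astype(np.float64).ravel()
        proj = rng.standard_normal((dim, flat.size))  # stand-in encoder weights
        out.append(proj @ flat / flat.size)           # dim-dimensional feature
    return out

frame = rng.integers(0, 256, size=(32, 32))
feats = semantic_features(preprocess(frame, [(0, 0, 16, 16), (8, 8, 24, 24)]))
```

Each target object is thus reduced from a 256-pixel crop to an 8-dimensional vector, which is the kind of compression that makes the later semantic transmission cheap.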
The vehicle end, i.e. the vehicle, is typically equipped with various sensors, such as cameras, radar and lidar, which sense the surrounding environment so that the vehicle can acquire real-time surrounding environment data. Sensor data fusion techniques fuse data from different sensors together, and with computer vision and image processing techniques the vehicle can identify and track objects in the surrounding environment, such as other vehicles, pedestrians and bicycles. By analyzing camera images, the vehicle can extract features of an object such as its shape, color and motion trajectory.
In summary, the collaborative awareness system obtains and integrates object information through technologies such as sensor data fusion and target recognition, encodes it into collaborative awareness data, and shares that data between the vehicle end and the cloud end to achieve more accurate environmental awareness and decision-making.
With reference to fig. 2, after the vehicle acquires data, such as surrounding environment data, the vehicle performs object semantic extraction on the surrounding environment data to obtain object semantic feature data.
Object semantic extraction converts the important information and semantic meaning in the data into feature representations (e.g., objects, edges, textures) by mapping pixel-level image data into a high-dimensional feature space, so that the computer system can better understand and process the data. This high-dimensional feature representation helps uncover meaningful patterns and structures in complex data.
The method for obtaining the action space information of the vehicle through the object screening network comprises: acquiring the predicted global cooperative sensing data (i.e. the collaborative sensing data predicted at the previous moment in fig. 2) from the cloud; determining a cooperative sensing state (i.e. the collaborative sensing data at the current moment in fig. 2) based on the object semantic feature data and the predicted global cooperative sensing data, wherein the cooperative sensing state comprises the relative information entropy and the network congestion state; and inputting the cooperative sensing state into the object screening network and outputting the action space information of the vehicle.
The object screening network can be realized based on a deep reinforcement learning technology, and the specific collaborative awareness state, action space information and rewards are designed as follows:
collaborative awareness status: (1) Calculating information entropy difference between the object semantic feature data and the predicted global cooperative sensing data fed back by the cloud, namely, relative information entropy; (2) network congestion level (status).
Action space information: A = {transmit, discard}, where the data is uploaded when the action is transmit and is not sent when the action is discard.
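A hedged sketch of this state-action design: relative information entropy is computed here as the KL divergence between the locally observed and cloud-predicted distributions over an object's state, and a threshold rule stands in for the trained screening network. The patent does not fix either formula, so both are illustrative assumptions.

```python
import math

def relative_entropy(local_p, predicted_p, eps=1e-12):
    """KL(local || predicted) over discretized object-state distributions
    (one plausible reading of 'relative information entropy')."""
    return sum(p * math.log((p + eps) / (q + eps))
               for p, q in zip(local_p, predicted_p))

def awareness_state(local_p, predicted_p, congestion):
    """State fed to the screening network: (relative entropy, congestion)."""
    return (relative_entropy(local_p, predicted_p), congestion)

def policy(state, entropy_gate=0.1, congestion_gate=0.8):
    """Toy stand-in for the trained screening network's action choice:
    transmit only informative objects when the channel is not congested."""
    h, c = state
    return "transmit" if h > entropy_gate and c < congestion_gate else "discard"

# An object whose local view diverges strongly from the cloud's prediction.
s = awareness_state([0.7, 0.2, 0.1], [0.2, 0.5, 0.3], congestion=0.3)
```

Under this reading, an object the cloud already predicts well yields near-zero relative entropy and is discarded, which is exactly the redundancy-suppression behavior the patent attributes to the screening network.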
The reward function of the object screening network is:
r_t = λ_local + μ_cpm · H_{t,ω} + μ_chanel · C_t
wherein λ_local indicates that the receiving party has not detected the target object ω, μ_cpm is a first preset threshold value, μ_chanel is a second preset threshold value, H_{t,ω} represents the relative information entropy containing the target object ω at time t, and C_t is the network congestion level at time t.
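A minimal sketch of computing this reward. The operators between the terms did not survive the scan, so the additive combination, the reading of λ_local as a miss penalty, and all coefficient values below are assumptions for illustration only.

```python
def screening_reward(missed_by_receiver, rel_entropy, congestion,
                     lam_local=-1.0, mu_cpm=1.0, mu_chanel=-0.5):
    """r_t = λ_local·1[missed] + μ_cpm·H_{t,ω} + μ_chanel·C_t  (sketch).
    The indicator reading of λ_local and every coefficient value here
    are assumptions; the patent only names the terms."""
    penalty = lam_local if missed_by_receiver else 0.0
    return penalty + mu_cpm * rel_entropy + mu_chanel * congestion

# Transmitting a high-entropy (informative) object on a quiet channel pays off;
# missing an object on a congested channel is penalized.
r_good = screening_reward(missed_by_receiver=False, rel_entropy=0.8, congestion=0.1)
r_bad = screening_reward(missed_by_receiver=True, rel_entropy=0.0, congestion=0.9)
```

The sign choices encode the intent described in the text: reward grows with the information the object adds and shrinks with congestion and with detection failures at the receiver.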
Step S102: the cloud receives collaborative awareness data sent by a plurality of vehicle ends, and obtains predicted global collaborative awareness data based on the global collaborative awareness data.
The cloud receives cooperative sensing data sent by a plurality of vehicle ends, obtains predicted global cooperative sensing data based on the global cooperative sensing data, and comprises the following steps: the cloud performs semantic decoding on the collaborative awareness data from the plurality of vehicle ends to obtain decoded global collaborative awareness data, inputs the decoded global collaborative awareness data into a prediction network, and generates predicted global collaborative awareness data (i.e. collaborative awareness data predicted at the next moment in fig. 2).
Wherein the prediction network is at least one of a recurrent neural network and a long short-term memory network.
Based on the collaborative awareness data received by the cloud, the spatial state information of each target object at the next moment is inferred. A time-series prediction method, such as a recurrent neural network (RNN) or a long short-term memory network (LSTM), can be used to model and predict the collaborative awareness data, or the inference step of a Kalman filter can be adopted.
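As a stand-in for the RNN/LSTM mentioned above, the sketch below uses the prediction step of a constant-velocity motion model, the same role the Kalman-filter inference would play. The (position, velocity) state layout is an assumption made for illustration.

```python
def predict_next(objects, dt=0.1):
    """Propagate each object's (position, velocity) one time step forward.
    This is the constant-velocity prediction step; an RNN/LSTM would learn
    this state transition from data instead of assuming it."""
    out = []
    for obj in objects:
        x, v = obj["pos"], obj["vel"]
        out.append({"id": obj["id"], "pos": x + v * dt, "vel": v})
    return out

fused = [{"id": 7, "pos": 10.0, "vel": 20.0}]  # decoded global CP data (toy)
predicted = predict_next(fused)                 # sent back to the vehicle ends
```

The cloud would run this step once per cycle and broadcast the result, giving each vehicle a one-step-ahead global view against which its next local observations are compared.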
Step S103: and the cloud transmits the predicted global collaborative awareness data to the vehicle end.
The cloud sends the predicted global cooperative sensing data back to the corresponding vehicle, so that global sensing of the vehicle end is realized, and driving vision is expanded.
The embodiment of the invention provides a data transmission method comprising the following steps: the vehicle end acquires surrounding environment data and performs semantic extraction, screening and encoding on the surrounding environment data to obtain collaborative awareness data; the cloud receives the collaborative awareness data sent by a plurality of vehicle ends, obtains predicted global collaborative awareness data based on the global collaborative awareness data, and transmits the predicted global collaborative awareness data to the vehicle ends. The invention introduces semantic data transmission, which reduces the data interaction flow and the amount of transmitted data, achieving efficient data transmission. In addition, through the design of the object screening network, the vehicle autonomously selects the objects carrying the most information for the cloud, which reduces the transmission of redundant data and improves the efficiency of data transmission.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the order of execution should be determined by their functions and internal logic, and should not limit the implementation process of the embodiments of the present invention.
The following are device embodiments of the invention, for details not described in detail therein, reference may be made to the corresponding method embodiments described above.
Fig. 3 shows a schematic structural diagram of a data transmission device according to an embodiment of the present invention, and for convenience of explanation, only a portion related to the embodiment of the present invention is shown, and the data transmission device includes a data processing module 301, a prediction module 302, and a transmission module 303, which are specifically as follows:
the vehicle end and the cloud end are communicated with each other;
the data processing module 301 is configured to obtain ambient data from a vehicle end, and perform semantic extraction, screening, and encoding on the ambient data to obtain collaborative awareness data;
the prediction module 302 is configured to receive collaborative awareness data sent by a plurality of vehicle ends at the cloud end, and obtain predicted global collaborative awareness data based on the global collaborative awareness data;
and the transmission module 303 is used for transmitting the predicted global collaborative awareness data to the vehicle end by the cloud.
In a possible implementation manner, the data processing module 301 is further configured to perform object semantic extraction on surrounding environment data by using a vehicle end to obtain object semantic feature data;
obtaining action space information of the vehicle through an object screening network;
matching the action space information with all object feature data acquired by a vehicle end to obtain matched data;
and carrying out semantic coding on the matched data to obtain collaborative perception data.
In a possible implementation manner, the data processing module 301 is further configured to preprocess the surrounding environment data by the vehicle end, so as to obtain the target object and the surrounding environment data corresponding to the target object;
and extracting semantic features of surrounding environment data corresponding to the target object to obtain object semantic feature data.
In one possible implementation, the data processing module 301 is further configured to obtain predicted global collaborative awareness data from the cloud;
determining a collaborative awareness state based on the object semantic feature data and the predicted global collaborative awareness data, wherein the collaborative awareness state comprises a relative information entropy and a network congestion state;
and inputting the collaborative awareness state into an object screening network, and outputting the action space information of the vehicle.
In one possible implementation, the reward function of the object screening network is:
r_t = λ_local + μ_cpm · H_{t,ω} + μ_chanel · C_t
wherein λ_local indicates that the receiving party has not detected the target object ω, μ_cpm is a first preset threshold value, μ_chanel is a second preset threshold value, H_{t,ω} represents the relative information entropy containing the target object ω at time t, and C_t is the network congestion level at time t.
In one possible implementation manner, the prediction module 302 is further configured to perform semantic decoding on the collaborative awareness data from the plurality of vehicle ends by using the cloud to obtain decoded global collaborative awareness data;
and inputting the decoded global collaborative awareness data into a prediction network to generate the predicted global collaborative awareness data.
In one possible implementation, the prediction network is a recurrent neural network or a long short-term memory (LSTM) network.
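A minimal recurrent predictor can stand in for the cloud's prediction network. The sketch below uses a plain NumPy RNN cell with random, untrained weights purely to illustrate the data flow (a sequence of past global collaborative-awareness frames in, one predicted next frame out); a deployed system would use a trained RNN or LSTM.

```python
import numpy as np

class TinyRNNPredictor:
    # Minimal recurrent predictor standing in for the patent's prediction
    # network. Weights are random and untrained; only the data flow matters.
    def __init__(self, dim, hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0, 0.1, (hidden, dim))   # input-to-hidden
        self.Wh = rng.normal(0, 0.1, (hidden, hidden))  # hidden-to-hidden
        self.Wo = rng.normal(0, 0.1, (dim, hidden))   # hidden-to-output

    def predict(self, sequence):
        # Roll the recurrence over the past global collaborative-awareness
        # frames, then project the final hidden state to a predicted frame.
        h = np.zeros(self.Wh.shape[0])
        for x in sequence:
            h = np.tanh(self.Wx @ np.asarray(x, float) + self.Wh @ h)
        return self.Wo @ h

net = TinyRNNPredictor(dim=4, hidden=8)
next_frame = net.predict([[0.1, 0.2, 0.3, 0.4], [0.2, 0.3, 0.4, 0.5]])
```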
The embodiment of the invention provides a data transmission device applicable to a vehicle end and a cloud end that communicate with each other. The vehicle end acquires surrounding environment data and performs semantic extraction, screening and encoding on it to obtain collaborative awareness data; the cloud end receives the collaborative awareness data sent by a plurality of vehicle ends, obtains predicted global collaborative awareness data based on the global collaborative awareness data, and transmits the predicted global collaborative awareness data to the vehicle end. By introducing semantic data transmission, the invention reduces the data interaction traffic and the volume of transmitted data, achieving efficient data transmission. In addition, the design of the object screening network lets each vehicle autonomously select, for the cloud end, the objects carrying the most information, which reduces the transmission of redundant data and further improves transmission efficiency.
Fig. 4 is a schematic diagram of a terminal according to an embodiment of the present invention. As shown in fig. 4, the terminal 4 of this embodiment includes: a processor 401, a memory 402 and a computer program 403 stored in the memory 402 and executable on the processor 401. The steps of the above-described respective data transmission method embodiments, such as steps 101 to 103 shown in fig. 1, are implemented when the processor 401 executes the computer program 403. Alternatively, the processor 401 may implement the functions of the modules/units in the above-described embodiments of the data transmission device when executing the computer program 403, for example, the functions of the modules/units 301 to 303 shown in fig. 3.
The present invention also provides a readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the data transmission method provided in the above various embodiments, including:
the vehicle end and the cloud end are communicated with each other;
the vehicle end acquires the surrounding environment data, performs semantic extraction, screening and encoding on the surrounding environment data, and obtains collaborative perception data;
the cloud receives collaborative awareness data sent by a plurality of vehicle ends, and obtains predicted global collaborative awareness data based on the global collaborative awareness data;
and the cloud transmits the predicted global collaborative awareness data to the vehicle end.
In one possible implementation manner, the vehicle end obtains surrounding environment data, performs semantic extraction, screening and encoding on the surrounding environment data to obtain collaborative awareness data, and includes:
the vehicle end performs object semantic extraction on the surrounding environment data to obtain object semantic feature data;
obtaining action space information of the vehicle through an object screening network;
matching the action space information with all object feature data acquired by the vehicle end to obtain matched data;
and carrying out semantic coding on the matched data to obtain collaborative awareness data.
In one possible implementation manner, the vehicle end performs object semantic extraction on surrounding environment data to obtain object semantic feature data, including:
the vehicle end pre-processes the surrounding environment data to obtain a target object and surrounding environment data corresponding to the target object;
and extracting semantic features of surrounding environment data corresponding to the target object to obtain object semantic feature data.
In one possible implementation manner, the obtaining, through the object screening network, action space information of the vehicle includes:
acquiring predicted global collaborative awareness data from a cloud;
determining a collaborative awareness state based on the object semantic feature data and the predicted global collaborative awareness data, wherein the collaborative awareness state comprises a relative information entropy and a network congestion state;
and inputting the collaborative awareness state into an object screening network, and outputting the action space information of the vehicle.
In one possible implementation, the reward function of the object screening network is:
r_t = λ_local + μ_cpm · H_{t,ω} + μ_chanel · C_t
where λ_local is the reward term applied when the receiving party has not yet detected the target object ω, μ_cpm is a first preset threshold, μ_chanel is a second preset threshold, H_{t,ω} denotes the relative information entropy of the data containing the target object ω at time t, and C_t is the network congestion level at time t.
In one possible implementation manner, the cloud receives collaborative awareness data sent by a plurality of vehicle ends, obtains predicted global collaborative awareness data based on the global collaborative awareness data, and includes:
the cloud performs semantic decoding on the collaborative awareness data from the plurality of vehicle ends to obtain decoded global collaborative awareness data;
and inputting the decoded global collaborative awareness data into a prediction network to generate the predicted global collaborative awareness data.
In one possible implementation, the prediction network is a recurrent neural network or a long short-term memory (LSTM) network.
The readable storage medium may be a computer storage medium or a communication medium. Communication media includes any medium that facilitates transfer of a computer program from one place to another. Computer storage media can be any available media that can be accessed by a general purpose or special purpose computer. For example, a readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. In the alternative, the readable storage medium may be integral to the processor. The processor and the readable storage medium may reside in an application specific integrated circuit (Application Specific Integrated Circuits, ASIC for short). In addition, the ASIC may reside in a user device. The processor and the readable storage medium may reside as discrete components in a communication device. The readable storage medium may be read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tape, floppy disk, optical data storage device, etc.
The present invention also provides a program product comprising execution instructions stored in a readable storage medium. At least one processor of the apparatus may read the execution instructions from the readable storage medium, and execution of the execution instructions by the at least one processor causes the apparatus to implement the data transmission method provided by the various embodiments described above, including:
the vehicle end and the cloud end are communicated with each other;
the vehicle end acquires the surrounding environment data, performs semantic extraction, screening and encoding on the surrounding environment data, and obtains collaborative perception data;
the cloud receives collaborative awareness data sent by a plurality of vehicle ends, and obtains predicted global collaborative awareness data based on the global collaborative awareness data;
and the cloud transmits the predicted global collaborative awareness data to the vehicle end.
In one possible implementation manner, the vehicle end obtains surrounding environment data, performs semantic extraction, screening and encoding on the surrounding environment data to obtain collaborative awareness data, and includes:
the vehicle end performs object semantic extraction on the surrounding environment data to obtain object semantic feature data;
obtaining action space information of the vehicle through an object screening network;
matching the action space information with all object feature data acquired by the vehicle end to obtain matched data;
and carrying out semantic coding on the matched data to obtain collaborative awareness data.
In one possible implementation manner, the vehicle end performs object semantic extraction on surrounding environment data to obtain object semantic feature data, including:
the vehicle end pre-processes the surrounding environment data to obtain a target object and surrounding environment data corresponding to the target object;
and extracting semantic features of surrounding environment data corresponding to the target object to obtain object semantic feature data.
In one possible implementation manner, the obtaining, through the object screening network, action space information of the vehicle includes:
acquiring predicted global collaborative awareness data from a cloud;
determining a collaborative awareness state based on the object semantic feature data and the predicted global collaborative awareness data, wherein the collaborative awareness state comprises a relative information entropy and a network congestion state;
and inputting the collaborative awareness state into an object screening network, and outputting the action space information of the vehicle.
In one possible implementation, the reward function of the object screening network is:
r_t = λ_local + μ_cpm · H_{t,ω} + μ_chanel · C_t
where λ_local is the reward term applied when the receiving party has not yet detected the target object ω, μ_cpm is a first preset threshold, μ_chanel is a second preset threshold, H_{t,ω} denotes the relative information entropy of the data containing the target object ω at time t, and C_t is the network congestion level at time t.
In one possible implementation manner, the cloud receives collaborative awareness data sent by a plurality of vehicle ends, obtains predicted global collaborative awareness data based on the global collaborative awareness data, and includes:
the cloud performs semantic decoding on the collaborative awareness data from the plurality of vehicle ends to obtain decoded global collaborative awareness data;
and inputting the decoded global collaborative awareness data into a prediction network to generate the predicted global collaborative awareness data.
In one possible implementation, the prediction network is a recurrent neural network or a long short-term memory (LSTM) network.
In the above embodiment of the apparatus, it should be understood that the processor may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in a processor for execution.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. A data transmission method, comprising:
the vehicle end and the cloud end are communicated with each other;
the vehicle end acquires surrounding environment data, and performs semantic extraction, screening and encoding on the surrounding environment data to obtain collaborative perception data;
the cloud receives collaborative awareness data sent by a plurality of vehicle ends, and obtains predicted global collaborative awareness data based on the global collaborative awareness data;
and the cloud transmits the predicted global collaborative awareness data to the vehicle end.
2. The data transmission method of claim 1, wherein the vehicle side obtains surrounding environment data, performs semantic extraction, screening and encoding on the surrounding environment data to obtain collaborative awareness data, and comprises the following steps:
the vehicle end performs object semantic extraction on the surrounding environment data to obtain object semantic feature data;
obtaining action space information of the vehicle through an object screening network;
matching the action space information with all object feature data acquired by a vehicle end to obtain matched data;
and carrying out semantic coding on the matched data to obtain the collaborative awareness data.
3. The data transmission method of claim 2, wherein the vehicle end performs object semantic extraction on the surrounding environment data to obtain object semantic feature data, and the method comprises the following steps:
the vehicle end preprocesses the surrounding environment data to obtain a target object and surrounding environment data corresponding to the target object;
and extracting semantic features from the surrounding environment data corresponding to the target object to obtain object semantic feature data.
4. The data transmission method as claimed in claim 2, wherein the obtaining the motion space information of the vehicle through the object screening network comprises:
acquiring predicted global collaborative awareness data from a cloud;
determining a collaborative awareness state based on the object semantic feature data and the predicted global collaborative awareness data, wherein the collaborative awareness state comprises a relative information entropy and a network congestion state;
and inputting the collaborative awareness state into the object screening network, and outputting the action space information of the vehicle.
5. The data transmission method of claim 2, wherein the reward function of the object screening network is:
r_t = λ_local + μ_cpm · H_{t,ω} + μ_chanel · C_t
where λ_local is the reward term applied when the receiving party has not yet detected the target object ω, μ_cpm is a first preset threshold, μ_chanel is a second preset threshold, H_{t,ω} denotes the relative information entropy of the data containing the target object ω at time t, and C_t is the network congestion level at time t.
6. The data transmission method of claim 1, wherein the cloud receives collaborative awareness data sent by a plurality of vehicle ends, and obtains predicted global collaborative awareness data based on the global collaborative awareness data, and the method comprises:
the cloud end performs semantic decoding on collaborative awareness data from a plurality of vehicle ends to obtain decoded global collaborative awareness data;
and inputting the decoded global collaborative awareness data into a prediction network to generate the predicted global collaborative awareness data.
7. The data transmission method of claim 6, wherein the prediction network is a recurrent neural network or a long short-term memory network.
8. A data transmission apparatus, comprising:
the vehicle end and the cloud end are communicated with each other;
the data processing module is used for the vehicle end to acquire surrounding environment data and perform semantic extraction, screening and encoding on the surrounding environment data to obtain collaborative awareness data;
the prediction module is used for the cloud end to receive the collaborative awareness data sent by a plurality of vehicle ends and obtain predicted global collaborative awareness data based on the global collaborative awareness data;
and the transmission module is used for the cloud end to transmit the predicted global collaborative awareness data to the vehicle end.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the data transmission method according to any one of claims 1 to 7 when the computer program is executed.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program realizes the steps of the data transmission method according to any one of claims 1 to 7 when the computer program is executed by a processor.
CN202311204324.2A 2023-09-18 2023-09-18 Data transmission method, device, terminal and storage medium Pending CN117459922A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311204324.2A CN117459922A (en) 2023-09-18 2023-09-18 Data transmission method, device, terminal and storage medium


Publications (1)

Publication Number Publication Date
CN117459922A true CN117459922A (en) 2024-01-26



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127931A (en) * 2019-12-24 2020-05-08 国汽(北京)智能网联汽车研究院有限公司 Vehicle road cloud cooperation method, device and system for intelligent networked automobile
CN113743479A (en) * 2021-08-19 2021-12-03 东南大学 End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof
CN114091598A (en) * 2021-11-16 2022-02-25 北京大学 Multi-vehicle collaborative environment sensing method based on semantic level information fusion
CN116170779A (en) * 2023-04-18 2023-05-26 西安深信科创信息技术有限公司 Collaborative awareness data transmission method, device and system
CN116659524A (en) * 2023-06-02 2023-08-29 中国第一汽车股份有限公司 Vehicle positioning method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
X. LIU: "MISO-V: Misbehavior Detection for Collective Perception Services in Vehicular Communications", 2021 IEEE Intelligent Vehicles Symposium (IV), 17 July 2021 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination