CN113743479A - End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof - Google Patents

End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof

Info

Publication number
CN113743479A
CN113743479A
Authority
CN
China
Prior art keywords
perception
module
layer
vehicle
road
Prior art date
Legal status
Granted
Application number
CN202110954152.5A
Other languages
Chinese (zh)
Other versions
CN113743479B (en)
Inventor
王金湘
严永俊
彭湃
耿可可
林中盛
韩东明
陈锦鑫
方振伍
殷国栋
李普
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202110954152.5A priority Critical patent/CN113743479B/en
Publication of CN113743479A publication Critical patent/CN113743479A/en
Application granted granted Critical
Publication of CN113743479B publication Critical patent/CN113743479B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5083 Techniques for rebalancing the load in a distributed system

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an end-edge-cloud vehicle-road cooperative fusion perception architecture and a construction method thereof, relates to the technical field of intelligent traffic, and solves the technical problems of weak association, strong conflict, high dispersion and low compatibility among the information flows of the various agents under current vehicle-road cooperative architectures. It provides automatic-driving decision, planning and control with a multi-source perception system that adapts strongly to the environment, recognizes and understands it accurately, and responds robustly to scene changes.

Description

End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof
Technical Field
The application relates to the technical field of intelligent traffic, in particular to vehicle-road cooperative sensing and computing-resource scheduling, and more particularly to an end-edge-cloud vehicle-road cooperative fusion perception architecture and a construction method thereof.
Background
Automatic driving technology is an important means of addressing traffic safety, road congestion and similar problems, and the automatic driving automobile has become a strategic direction of the global automobile industry and an important engine of continued world economic growth. Environment perception is the premise and basis of the safe, reliable operation of an automatic driving automobile and determines the vehicle's level of intelligence. The terminal of a single automatic driving vehicle is limited by its hardware structure, storage resources and computing power; its perception system is neither comprehensive nor stable, and sudden, variable scenes are difficult to perceive accurately in real time. The development of end-edge-cloud vehicle-road cooperation technology in the intelligent networked environment provides a feasible way to solve these problems: through cooperative sensing, decision-making and computation based on V2X information interaction, vehicle-road cooperation improves the safety and reliability of vehicle driving in complex traffic environments. However, the information flows of the agents under current vehicle-road cooperative architectures exhibit weak association, strong conflict, high dispersion and low compatibility; such architectures resemble communication architectures rather than fusion architectures, so the effective utilization rate of multi-source perception information is low and the driving safety of the intelligent transportation system is difficult to guarantee.
Disclosure of Invention
The application provides an end-edge-cloud vehicle-road cooperative fusion perception architecture and a construction method thereof. The technical purpose is to construct an end-edge-cloud cooperative, highly reliable, multi-level environment perception and understanding architecture that fuses heterogeneous information streams to the maximum extent, resolves redundancy conflicts among information streams as well as highly concurrent conflicts over computing resources, and optimizes the overall effect of fusion perception, thereby improving the vehicle's capability to perceive and process its environment.
The technical purpose of the application is realized by the following technical scheme:
the end-edge-cloud cooperation and vehicle-road cooperation sensing system is a complex layered distributed system essentially consisting of heterogeneous multi-source sensing units, computing units and communication systems, and the relations of cooperation, competition and conflict exist among all the agents. A Multi-Agent System (referred to as a "Multi-Agent" System in this application) has reactivity, autonomy, and flexibility, and is characterized in that by organizing and cooperating each hierarchical Agent, a comprehensive function of higher flexibility, environmental adaptability, and extensibility is completed. The mode of hierarchical functional division and coordination of different granularities and different intelligent levels is very suitable for a multi-source heterogeneous complex end-edge-cloud vehicle road cooperative sensing system. The method based on multiple agents adopts a bottom-up hierarchical organization method, heterogeneous multi-source intelligence bodies with different functions can form an organic whole, and the method has more flexibility and expandability than a centralized or top-down supervision method. The method is used for building a terminal-side-cloud vehicle-road cooperative fusion perception framework based on a multi-Agent method.
An end-edge-cloud vehicle-road cooperative fusion perception architecture comprises a data acquisition layer, a bottom perception layer, a coordination decision layer, a calculation arrangement layer and a situation prediction layer; the data acquisition layer is connected with the bottom perception layer and the calculation arrangement layer, the coordination decision layer is connected with the bottom perception layer and the calculation arrangement layer, and the calculation arrangement layer is connected with the situation prediction layer;
the data acquisition layer comprises a multi-mode data acquisition module of a vehicle-end intelligent body and a road-end intelligent body and is used for acquiring single-end environment perception data;
the bottom sensing layer comprises a multi-mode fusion sensing intelligent agent of a vehicle-end intelligent agent and a road-end intelligent agent and is used for fusing single-end environment sensing data to realize single-end environment sensing;
the coordination decision layer comprises dynamic coordination decision agents and is used for coordinating the perception ranges and the perception accuracies of different agents and improving the overall performance of the vehicle-road cooperative perception network;
the calculation arrangement layer comprises distributed calculation unloading intelligent agents used for scheduling calculation resources and helping vehicle ends/road ends to process perception data;
the situation prediction layer comprises a multi-dimensional situation prediction intelligent agent and is used for establishing a full-element real-time accurate semantic map of a road traffic physical space and realizing vehicle-road cooperative large-range risk situation prediction.
A construction method of an end-edge-cloud vehicle-road cooperative fusion perception architecture comprises the following steps:
the method comprises the steps that a vehicle-end intelligent body and a road-end intelligent body are built, wherein the vehicle-end intelligent body and the road-end intelligent body both comprise a multi-mode data acquisition module and a multi-mode fusion perception intelligent body, the multi-mode data acquisition module forms a data acquisition layer, and the multi-mode fusion perception intelligent body forms a bottom layer perception layer;
constructing an integral perception framework of a multi-modal fusion perception agent, wherein the integral perception framework comprises a same-target space-time registration module, a weight self-adaptive distribution module and a distributed fusion perception module;
establishing a decision and coordination mechanism, namely a coordination decision layer, suitable for an end-edge-cloud vehicle-road collaborative fusion perception system, wherein the coordination decision layer comprises a dynamic coordination decision agent, and the dynamic coordination decision agent comprises a dynamic target evaluation module, a coordination mechanism module and a decision reasoning module;
constructing a computing arrangement layer, wherein the computing arrangement layer comprises a distributed computing offload agent;
and constructing a situation prediction layer, wherein the situation prediction layer comprises a behavior state analysis module, a dynamic semantic map module and a risk situation prediction module.
The beneficial effect of this application lies in:
the distributed vehicle-road cooperative sensing architecture based on multiple agents is mainly used for solving the problems of function conflict, poor compatibility, contradiction of digital mapping, dimension explosion and the like among all cross-domain heterogeneous sensing and computing units under an end-edge-cloud vehicle-road cooperative architecture, establishing space-time information interaction, association, computation and understanding mechanisms with strong inclusion and expansibility, providing an optimal allocation method of computing resources among applications under high concurrency, and realizing robust and accurate understanding of space-time scenes under a network connection environment. The application provides a vehicle-road collaborative multi-mode cross-domain perception fusion framework, and aims to provide a multi-source perception system which is strong in environmental adaptability, high in environmental recognition and understanding accuracy and strong in robustness in responding to scene changes for decision, planning and control of automatic driving. The method has very important scientific significance and engineering application value for breaking through basic theory and key common technology bottleneck of automatic driving, improving traffic efficiency and guaranteeing driving safety.
Drawings
FIG. 1 is a schematic structural diagram of a fusion framework according to the present application;
fig. 2 is a schematic diagram of the end-edge-cloud vehicle-road cooperative sensing architecture according to the present application;
fig. 3 is a schematic diagram of an internal structure of the vehicle end/road end intelligent agent according to the present application.
Detailed Description
The technical solution of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of the fusion framework of the present application; the fusion framework includes a data acquisition layer, a bottom perception layer, a coordination decision layer, a calculation arrangement layer, and a situation prediction layer. The data acquisition layer is connected with the bottom perception layer and the calculation arrangement layer, the coordination decision layer is connected with the bottom perception layer and the calculation arrangement layer, and the calculation arrangement layer is connected with the situation prediction layer.
The data acquisition layer comprises a multi-mode data acquisition module of a vehicle-end intelligent body and a road-end intelligent body and is used for acquiring single-end environment perception data. Specifically, the multi-modal data acquisition module comprises a color camera module, a laser radar module, a millimeter wave radar module, a traffic signal module, a Beidou positioning module and the like, and is used for acquiring images, point clouds, traffic light phases, traffic identification, positioning coordinates and the like.
The bottom perception layer comprises the multi-modal fusion perception agents of the vehicle-end agent and the road-end agent and is used for fusing single-end environment sensing data to realize single-end environment perception.
Specifically, the vehicle-end agent and the road-end agent each comprise a data acquisition layer and a bottom perception layer; the bottom perception layer comprises a multi-modal fusion perception agent, which in turn includes a same-target space-time registration module, a weight adaptive allocation module and a distributed fusion perception module. The same-target space-time registration module is used for registering the environment data and converting the environment data into the same space-time coordinate system; the weight adaptive allocation module is used for allocating weight and computing capacity; the distributed fusion perception module is used for determining the requirement of fusion perception according to the local perception information.
The coordination decision layer comprises dynamic coordination decision intelligent agents and is used for coordinating perception ranges and precisions of different intelligent agents and improving the overall performance of the vehicle-road cooperative perception network.
Specifically, the dynamic coordination decision agent comprises a dynamic target evaluation module, a decision reasoning module and a coordination mechanism module. The dynamic target evaluation module is used for standardizing the output of a single intelligent agent and determining the coupling relation between the intelligent agents; the decision reasoning module is used for calculating the action weight of each agent for different regional perceptions and determining the distribution of the perception tasks; the coordination mechanism module is used for coordinating the conflict of all perception agents, maximizing the global perception domain of the system and optimizing the perception accuracy of the system in a local important area.
The calculation arrangement layer comprises distributed calculation unloading intelligent agents used for scheduling calculation resources and helping the vehicle end/road end to process perception data. The distributed computing offload agent, in turn, includes a distributed computing offload module for equitably allocating computing resources to nearby edge computing units.
The situation prediction layer comprises a multi-dimensional situation prediction intelligent agent and is used for establishing a full-element real-time accurate semantic map of a road traffic physical space and realizing vehicle-road cooperative large-range risk situation prediction.
Specifically, the multidimensional situation prediction agent further comprises a behavior state analysis module, a dynamic semantic map module and a risk situation prediction module. The behavior state analysis module is used for analyzing the behavior state of the surrounding environment target; the dynamic semantic map module is used for mapping the environmental information of the road traffic physical space to an information space; and the risk situation prediction module is used for predicting the large-scale risk situation of vehicle-road cooperation.
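The layered data flow described above (acquisition, single-end fusion, coordination, computing arrangement, situation prediction) can be sketched as a minimal Python pipeline. All class and method names here are illustrative assumptions; the patent specifies the layers and their connections, not a concrete API.

```python
# Minimal sketch of the five-layer architecture; names are illustrative only.

class DataAcquisitionLayer:
    def collect(self):
        # Multi-modal single-end data: images, point clouds, phases, coordinates.
        return {"camera": "img", "lidar": "pts", "gnss": "xy"}

class BottomPerceptionLayer:
    def fuse(self, raw):
        # Fuse single-end data into a single-end environment perception.
        return {"objects": [], "modalities": sorted(raw)}

class CoordinationDecisionLayer:
    def coordinate(self, perception):
        # Allocate perception ranges and accuracy targets across agents.
        return {"task_allocation": {"vehicle": 0.6, "road": 0.4}}

class ComputingArrangementLayer:
    def offload(self, perception, allocation):
        # Schedule computing resources for overflow perception workloads.
        return {"processed": perception, "placement": "edge"}

class SituationPredictionLayer:
    def predict(self, processed):
        # Build the semantic map and a wide-range risk estimate.
        return {"semantic_map": processed["processed"], "risk": 0.0}

def run_pipeline():
    raw = DataAcquisitionLayer().collect()
    perception = BottomPerceptionLayer().fuse(raw)
    allocation = CoordinationDecisionLayer().coordinate(perception)
    processed = ComputingArrangementLayer().offload(perception, allocation)
    return SituationPredictionLayer().predict(processed)
```

In practice the coordination and calculation arrangement layers would exchange request/response messages rather than plain function calls, as the embodiment below describes.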
As a specific embodiment, the application establishes a multi-Agent theory-based end-edge-cloud vehicle-road collaborative fusion perception architecture, and the main construction steps include:
(1) establishing each subsystem intelligent agent of a vehicle end and a road end, which specifically comprises the following steps:
establishing each subsystem Agent is the basis for constructing a vehicle-road cooperative intelligent fusion perception architecture, and therefore, considering the perception target and range of each vehicle-end/road-end heterogeneous subsystem Agent, the Agent internal structure shown in figure 3 is designed for each subsystem. Firstly, joint calibration of multi-modal target information is required, a multi-modal data acquisition module comprises a plurality of sensors for acquiring various environmental data, a single sensor is taken as a heterogeneous sensing unit, a fuzzy self-adaptive volume Kalman filtering algorithm is adopted to carry out primary filtering on the sensor data, noise and interference in each modal sensing data are weakened, key information characteristics are highlighted, and a three-dimensional normal distribution transformation method is adopted to carry out interactive dynamic distortion correction on each modal data and a vehicle motion state in consideration of the motion sensing characteristics of an automatic driving vehicle; on the other hand, the same-target space-time registration module analyzes the influence degree of the sensor drift on the accuracy and stability of the sensing system, constructs a reinforcement learning reward mechanism, obtains information characteristic points in the sensing unit by using a radial basis function neural network clustering algorithm as the input of reinforcement learning, adopts a multi-mode covariance cross fusion method for the same target estimation state, provides an environment optimal three-dimensional characteristic understanding strategy of multi-mode data in the same vehicle body coordinate system, and realizes the space-time consistent calibration of the sensor data.
Second, aiming at the heterogeneous multi-source and information-overload characteristics of the data sensed by the automatic driving vehicle, an open intelligent fusion system (namely the distributed fusion perception module) with high compatibility and extensibility, centered on distributed cooperative sensing, is established according to the data characteristics, suitable scenes and respective sensing advantages of the multi-modal heterogeneous sensors. Finally, a weight adaptive allocation module is built: the dominant modal information for fusion perception in a dynamic environment is evaluated and analyzed, and an attention-based multi-modal weight adaptive allocation mechanism for perception of complex time-varying environments is constructed. Key perception modalities are fused with all of their features, while for inferior modalities only spatio-temporal characteristic key points and clustering feature key points are retained to assist fusion, improving perception robustness in low-recognizability environments.
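One minimal way to realize the attention-based weight allocation just described is a softmax over per-modality confidence scores, with full features kept only for modalities whose weight clears a threshold. The scores, the threshold value, and the feature layout below are illustrative assumptions:

```python
import math

def allocate_modal_weights(confidence, temperature=1.0):
    """Softmax attention weights over per-modality confidence scores."""
    exps = {m: math.exp(c / temperature) for m, c in confidence.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

def select_fusion_inputs(features, weights, full_threshold=0.3):
    """Dominant modalities contribute all features; inferior ones only key points."""
    selected = {}
    for m, w in weights.items():
        if w >= full_threshold:
            selected[m] = features[m]                              # all-feature fusion
        else:
            selected[m] = {"keypoints": features[m]["keypoints"]}  # assist only
    return selected
```

A camera with high confidence would thus be fused in full, while a rain-degraded lidar or radar contributes only its key points, matching the mechanism described above.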
A single Agent determines its need for fusion perception from local perception information. If computation-offloading assistance is needed, it sends a request signal to the upper coordination decision layer; after coordinating with the calculation arrangement layer on the basis of the overall perception information, the coordination decision layer sets the fusion perception data volumes of the two parties through a response signal, the perception data overflowing from the single Agent are offloaded to the edge side, and finally the combined output of the single Agent and of edge computing in the calculation arrangement layer is obtained.
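The request/response handshake in the paragraph above can be sketched as two small functions: the single Agent raises a request only when its perception data overflow local capacity, and the coordination side answers with a split between edge and local processing. All message fields are invented for illustration:

```python
def agent_offload_request(data_volume, local_capacity):
    """Return None if local processing suffices, else a help request."""
    if data_volume <= local_capacity:
        return None
    return {"type": "offload_request",
            "overflow": data_volume - local_capacity}

def coordination_response(request, edge_free_capacity):
    """Grant as much of the overflow as the edge side can currently absorb."""
    granted = min(request["overflow"], edge_free_capacity)
    return {"type": "offload_response",
            "to_edge": granted,
            "kept_local": request["overflow"] - granted}
```

When the edge side cannot absorb the whole overflow, the remainder stays with the requesting Agent, which is one plausible reading of the "data volumes of the two parties" set by the response signal.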
(2) The method for constructing the vehicle-road cooperative overall perception system architecture specifically comprises the following steps:
the vehicle-road cooperative dynamic intelligent fusion sensing structure shown in the figures 1 and 2 is built, the unity and the partial unity of the whole body are maintained, the dialectical relationship between the whole body and the local elements is ensured, and the mutual coordination among heterogeneous multi-agents of the vehicle-road is completed. The whole multi-Agent system is used as an autonomous intelligent Agent capable of reacting to an external environment, the whole internal function is composed of related modules (a same-target space-time registration module, a weight self-adaptive distribution module, a distributed fusion sensing module and the like), each module is a local optimal multi-Agent system, and sensing targets are achieved by mutual cooperation of subunits (motion distortion correction and the like of the target space-time registration module). According to the method, the mixed topological structure of each intelligent main body is designed by adopting a method of combining deliberate Agent and reaction Agent, and perception information transmission among heterogeneous multi-intelligent main bodies is realized by adopting a 5G communication mode. Because each module (sub-Agent) inevitably generates conflict and cooperation, each module (sub-Agent) is necessarily coordinated to select the optimal scheme, and the optimal perception effect is realized.
(3) Establishing a decision and coordination mechanism suitable for the vehicle-road collaborative fusion perception system, which specifically comprises the following steps:
In the vehicle-road cooperative sensing architecture, the vehicle-end and road-end Agents are the foundation of the perception architecture, and the decision-making and coordination mechanism is its core. First, starting from the joint optimization of real-time performance, sensing range, mapping precision and computing power, a dynamic evaluation system (namely the dynamic target evaluation module) is established to standardize the outputs of the agents according to their coupling mechanism. When the perception areas of heterogeneous vehicle ends, or of vehicle ends and road ends, overlap over a large range, the action weights of the heterogeneous agents for perception of the different areas are allocated through multi-party game adjustment, and perception-system conflicts are dynamically coordinated and resolved; on the premise of guaranteeing the real-time performance of the perception system, the overall perception domain of the system is maximized and its perception accuracy and mapping accuracy in locally important areas are optimized. Second, qualitative coordination rules (namely the coordination rules of the coordination mechanism module) are designed in combination with a knowledge base, and the action weight of each agent with respect to a coordination object is calculated quantitatively from the rules, the perception data and the computation-offloading requests of the agents. A fuzzy relational network method expresses the relations between agents through fuzzy weight values, realizing their coordination and cooperation, and fuzzy inference (namely the decision reasoning module) dynamically determines the action weight value of each local perception agent.
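A toy version of the fuzzy inference step: two complementary fuzzy sets over the overlap ratio between an agent's perception area and its neighbours', a two-rule base, and weighted-average defuzzification. The membership shapes and the rules are illustrative assumptions, not the patent's actual rule base:

```python
def mu_low(overlap):
    """Membership in 'overlap is low' (complementary linear fuzzy sets)."""
    return 1.0 - overlap

def mu_high(overlap):
    """Membership in 'overlap is high'."""
    return overlap

def action_weight(overlap, accuracy):
    """Two-rule fuzzy inference with weighted-average defuzzification:
    IF overlap is low  THEN weight = accuracy        (agent covers the area alone)
    IF overlap is high THEN weight = accuracy / 2.0  (area shared with a neighbour)
    """
    w_low, w_high = mu_low(overlap), mu_high(overlap)
    return (w_low * accuracy + w_high * accuracy / 2.0) / (w_low + w_high)
```

An agent with no overlap keeps its full accuracy-based weight; as overlap grows, its weight decays smoothly toward the shared value, which is the kind of graded coordination a crisp rule could not express.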
Meanwhile, because optimizing the global perception effect cannot define the weights by a single evaluation index, the mesh-like coordination mechanism is complex, and the influence of the sub-units of the other sub-Agents must be included in the decision. In addition, uncertainty exists in the fusion perception modules and in the processing speed of each agent, which makes the traditional mathematical-programming solution difficult. The design of the decision scheme depends only on the current perception information of the vehicle-road cooperative system, not on past task characteristics or past capability information of the Agents; that is, the decision on perception-task allocation has the Markov (memoryless) property. Therefore, the dynamic target evaluation module adopts a Markov decision process (MDP) model to handle the uncertainty and the complex mesh coordination among the heterogeneous multi-agents, defining a decision function, a reward function and a perception-state value function, and finally the overall coordination problem of the perception system is solved by dynamic programming or by distributed Nash-Q reinforcement learning.
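The memoryless formulation above can be made concrete with a toy two-state MDP for a single agent's perception workload, solved by dynamic programming (value iteration). The states, actions, transition probabilities, and rewards are invented solely for illustration:

```python
# state: "idle" or "overloaded"; action: "keep" (process locally) or "offload".
# P[state][action] = [(prob, next_state, reward), ...]  (illustrative numbers)
P = {
    "idle": {
        "keep":    [(0.9, "idle", 1.0), (0.1, "overloaded", 0.0)],
        "offload": [(1.0, "idle", 0.5)],
    },
    "overloaded": {
        "keep":    [(1.0, "overloaded", -1.0)],
        "offload": [(0.8, "idle", 0.5), (0.2, "overloaded", 0.0)],
    },
}

def value_iteration(gamma=0.9, iters=500):
    """Solve the toy MDP by dynamic programming; return state values and policy."""
    v = {s: 0.0 for s in P}
    for _ in range(iters):
        v = {s: max(sum(p * (r + gamma * v[s2]) for p, s2, r in P[s][a])
                    for a in P[s])
             for s in P}
    policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * v[s2])
                                             for p, s2, r in P[s][a]))
              for s in P}
    return v, policy
```

Under these numbers the optimal policy keeps work local while idle and offloads once overloaded; the distributed Nash-Q variant mentioned above would instead learn such values jointly across the interacting agents.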
(4) Establishing a distributed computing offload agent, specifically comprising:
according to the computing task arrangement of the dynamic coordination decision-making intelligent agent, the urgency of computing requirements, the request sequence, and the randomness and heterogeneity of computing resources are fully considered, a multi-objective function is adopted as a deployment function, deployment logic is determined, the computing resources are reasonably distributed to nearby edge computing units to process real-time computing tasks by minimizing the objective function, and non-real-time tasks are processed on the cloud side in a unified mode to maximize the total utility of a computing program which runs reliably in the current state of the system. Finally, the overall sensing results detected by the vehicle end and the road end are put in a unified coordinate system and output to a situation prediction layer, the global physical space human, vehicle and road are mapped to an information space, an intelligent cooperative fusion sensing system full-element real-time accurate dynamic semantic map is constructed, and multi-dimensional risk situation prediction of the vehicle-road cooperative large range is achieved.
The foregoing has described the general principles, principal features, and advantages of the invention. It will be understood by those skilled in the art that the present application is not limited by the foregoing examples, which are presented solely for purposes of illustrating the principles of the application, and that various changes and modifications may be made without departing from the spirit and scope of the application, which are intended to be covered by the claims. The scope of the claims herein is defined by the appended claims and equivalents thereof.

Claims (6)

1. An end-edge-cloud vehicle-road cooperative fusion perception architecture is characterized by comprising a data acquisition layer, a bottom layer perception layer, a coordination decision layer, a calculation arrangement layer and a situation prediction layer; the data acquisition layer is connected with the bottom sensing layer and the calculation arrangement layer, the coordination decision layer is connected with the bottom sensing layer and the calculation arrangement layer, and the calculation arrangement layer is connected with the situation prediction layer;
the data acquisition layer comprises a multi-mode data acquisition module of a vehicle-end intelligent body and a road-end intelligent body and is used for acquiring single-end environment perception data;
the bottom sensing layer comprises a multi-mode fusion sensing intelligent agent of a vehicle-end intelligent agent and a road-end intelligent agent and is used for fusing single-end environment sensing data to realize single-end environment sensing;
the coordination decision layer comprises dynamic coordination decision agents and is used for coordinating the perception ranges and the perception accuracies of different agents and improving the overall performance of the vehicle-road cooperative perception network;
the calculation arrangement layer comprises distributed calculation unloading intelligent agents used for scheduling calculation resources and helping vehicle ends/road ends to process perception data;
the situation prediction layer comprises a multi-dimensional situation prediction intelligent agent and is used for establishing a full-element real-time accurate semantic map of a road traffic physical space and realizing vehicle-road cooperative large-range risk situation prediction.
2. The architecture of claim 1, wherein the multi-modal fusion awareness agents each comprise:
the same-target space-time registration module is used for registering the environment data and converting the environment data into the same space-time coordinate system;
the weight self-adaptive distribution module is used for distributing the weight and the computing capacity;
and the distributed fusion sensing module is used for determining the requirement of fusion sensing according to the local sensing information.
3. The architecture of claim 1, wherein the dynamically coordinated decision agent comprises:
the dynamic target evaluation module is used for standardizing the output of a single intelligent agent and determining the coupling relation between the intelligent agents;
the coordination mechanism module is used for coordinating the conflict of all perception agents, maximizing the global perception domain of the system and optimizing the perception precision of the system in a local important area;
and the decision reasoning module is used for calculating the action weight of each agent for different regional perceptions and determining the distribution of the perception tasks.
4. The architecture of claim 1, wherein the distributed computing offload agent comprises:
and the distributed computing unloading module is used for reasonably distributing computing resources to the nearby edge computing units.
5. The architecture of claim 1, wherein the multi-dimensional situation prediction agent comprises:
a behavior state analysis module for analyzing the behavior states of targets in the surrounding environment;
a dynamic semantic map module for mapping environmental information of the road traffic physical space into the information space;
and a risk situation prediction module for predicting large-range vehicle-road cooperative risk situations.
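The behavior analysis and risk prediction modules can be sketched in their simplest form (an assumption, not the patent's algorithm): constant-velocity extrapolation of target states plus a proximity check over a short prediction horizon.

```python
def predict_position(state, t):
    """Behavior-state sketch: constant-velocity extrapolation of a
    target state (x, y, vx, vy) over t seconds."""
    x, y, vx, vy = state
    return x + vx * t, y + vy * t

def collision_risk(ego, other, horizon=3.0, step=0.5, radius=2.0):
    """Risk-situation sketch: flag a risk when two predicted
    trajectories come within `radius` metres inside the horizon."""
    steps = int(horizon / step)
    for i in range(steps + 1):
        t = i * step
        ex, ey = predict_position(ego, t)
        ox, oy = predict_position(other, t)
        if (ex - ox) ** 2 + (ey - oy) ** 2 <= radius ** 2:
            return True
    return False
```

A real module would use richer behavior models (lane intent, interaction) and aggregate many pairwise risks into a large-range situation map.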
6. A construction method for an end-edge-cloud vehicle-road cooperative fusion perception architecture, characterized by comprising the following steps:
constructing a vehicle-end agent and a road-end agent, wherein each comprises a multi-modal data acquisition module and a multi-modal fusion perception agent; the multi-modal data acquisition modules form the data acquisition layer, and the multi-modal fusion perception agents form the bottom perception layer;
constructing the overall perception framework of the multi-modal fusion perception agent, comprising a same-target spatio-temporal registration module, a weight adaptive allocation module, and a distributed fusion perception module;
establishing a decision and coordination mechanism suitable for the end-edge-cloud vehicle-road cooperative fusion perception system, namely the coordination decision layer, which comprises a dynamic coordination decision agent consisting of a dynamic objective evaluation module, a coordination mechanism module, and a decision reasoning module;
constructing the computation orchestration layer, which comprises a distributed computation offloading agent;
and constructing the situation prediction layer, which comprises a behavior state analysis module, a dynamic semantic map module, and a risk situation prediction module.
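The five construction steps above can be sketched as a small builder that assembles the layers in the claimed order (layer and module names are assumptions mirroring the claim text, not identifiers from the patent):

```python
def build_architecture():
    """Assemble the perception architecture following the five steps of
    the construction method; returns a mapping of layer -> modules."""
    architecture = {}
    # Step 1: vehicle-end and road-end agents -> acquisition + perception layers
    architecture["data_acquisition"] = ["multi_modal_data_acquisition"]
    architecture["bottom_perception"] = ["multi_modal_fusion_perception"]
    # Step 2: overall perception framework of the fusion perception agent
    architecture["bottom_perception"] += [
        "same_target_spatio_temporal_registration",
        "weight_adaptive_allocation",
        "distributed_fusion_perception",
    ]
    # Step 3: coordination decision layer
    architecture["coordination_decision"] = [
        "dynamic_objective_evaluation",
        "coordination_mechanism",
        "decision_reasoning",
    ]
    # Step 4: computation orchestration layer
    architecture["computation_orchestration"] = ["distributed_computation_offloading"]
    # Step 5: situation prediction layer
    architecture["situation_prediction"] = [
        "behavior_state_analysis",
        "dynamic_semantic_map",
        "risk_situation_prediction",
    ]
    return architecture
```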
CN202110954152.5A 2021-08-19 2021-08-19 End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof Active CN113743479B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110954152.5A CN113743479B (en) 2021-08-19 2021-08-19 End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof

Publications (2)

Publication Number Publication Date
CN113743479A 2021-12-03
CN113743479B 2022-05-24

Family

ID=78731957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110954152.5A Active CN113743479B (en) 2021-08-19 2021-08-19 End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof

Country Status (1)

Country Link
CN (1) CN113743479B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114530041A (en) * 2022-02-16 2022-05-24 交通运输部公路科学研究所 New vehicle-road cooperative fusion perception method based on accuracy
CN115086374A (en) * 2022-06-14 2022-09-20 河南职业技术学院 Scene complexity self-adaptive multi-agent layered cooperation method
WO2024045248A1 (en) * 2022-08-31 2024-03-07 广州软件应用技术研究院 Vehicle-road cooperative driving method and system based on end-edge-cloud collaborative computing

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005558A (en) * 2015-08-14 2015-10-28 武汉大学 Multi-modal data fusion method based on crowd sensing
EP3300046A1 (en) * 2016-09-26 2018-03-28 Kyland Technology Co., Ltd. Intelligent traffic cloud control server
CN109714421A (en) * 2018-12-28 2019-05-03 国汽(北京)智能网联汽车研究院有限公司 Intelligent network based on bus or train route collaboration joins automobilism system
CN110083163A (en) * 2019-05-20 2019-08-02 三亚学院 A kind of 5G C-V2X bus or train route cloud cooperation perceptive method and system for autonomous driving vehicle
EP3525157A1 (en) * 2018-02-09 2019-08-14 Volkswagen Aktiengesellschaft Method and system for cooperative operation
CN111127931A (en) * 2019-12-24 2020-05-08 国汽(北京)智能网联汽车研究院有限公司 Vehicle-road-cloud cooperation method, device and system for intelligent networked automobiles
CN112099510A (en) * 2020-09-25 2020-12-18 东南大学 Intelligent agent control method based on end edge cloud cooperation
WO2021038294A1 (en) * 2019-08-26 2021-03-04 Mobileye Vision Technologies Ltd. Systems and methods for identifying potential communication impediments
CN113096442A (en) * 2021-03-29 2021-07-09 上海智能新能源汽车科创功能平台有限公司 Intelligent bus control system based on vehicle-road-cloud cooperation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JINXIANG WANG ET AL: "Agent-based coordination framework for integrated vehicle chassis control", Automobile Engineering *
LIU WEI ET AL: "Research on C-V2X vehicle-road cooperation applications for road traffic", Integrated Circuit Applications *
LIU ZHI: "Research on an intelligent bus vehicle-road cooperative control system based on multi-source information fusion", Transportation Science & Technology *

Similar Documents

Publication Publication Date Title
CN113743479B (en) End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof
Yang et al. Edge intelligence for autonomous driving in 6G wireless system: Design challenges and solutions
Tong et al. Artificial intelligence for vehicle-to-everything: A survey
CN109753047B (en) System and method for autonomous vehicle behavior control and advanced controller
Savazzi et al. Opportunities of federated learning in connected, cooperative, and automated industrial systems
CN109116349B (en) Multi-sensor cooperative tracking joint optimization decision method
Wang Agent-based control for networked traffic management systems
CN112099510B (en) Intelligent agent control method based on end edge cloud cooperation
Boukezzoula et al. Multi-sensor information fusion: Combination of fuzzy systems and evidence theory approaches in color recognition for the NAO humanoid robot
Dong et al. Collaborative autonomous driving: Vision and challenges
Jingnan et al. Data logic structure and key technologies on intelligent high-precision map
Martelli et al. An outlook on the future marine traffic management system for autonomous ships
Rihan et al. Deep-VFog: When artificial intelligence meets fog computing in V2X
CN113159403A (en) Method and device for predicting pedestrian track at intersection
CN115840892B (en) Multi-agent layering autonomous decision-making method and system in complex environment
Xiao et al. Mobile-edge-platooning cloud: a lightweight cloud in vehicular networks
Blasch et al. Artificial intelligence in use by multimodal fusion
CN111445125B (en) Agricultural robot computing task cooperation method, system, medium and equipment
CN112926952A (en) Cloud computing-combined big data office business processing method and big data server
Chen et al. Semantic Interaction Strategy of Multiagent System in Large-Scale Intelligent Sensor Network Environment
Wang et al. A Diffusion-Based Reactive Approach to Road Network Cooperative Persistent Surveillance
Wang et al. Distributed Target Tracking in Sensor Networks by Consistency Algorithm and Semantic Moving Computing of Internet of Things
Singh Trajectory-Prediction with Vision: A Survey
Fiosina et al. Big data processing and mining for the future ICT-based smart transportation management system
Guo et al. Semantic Consensus Model and Behavioural Control Model for Visual Data Link

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant