CN112099510A - Intelligent agent control method based on end edge cloud cooperation

Intelligent agent control method based on end edge cloud cooperation

Info

Publication number
CN112099510A
CN112099510A (application number CN202011021858.8A)
Authority
CN
China
Prior art keywords
control
edge
cloud
intelligent agent
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011021858.8A
Other languages
Chinese (zh)
Other versions
CN112099510B (en)
Inventor
孙长银
王乐
曹向辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202011021858.8A
Publication of CN112099510A
Application granted
Publication of CN112099510B
Active legal status (current)
Anticipated expiration legal status

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0212 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0257 - Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 - Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Abstract

An intelligent agent control method based on end edge cloud cooperation. First, local-side and edge-side sensing modules of the agent are established, and the sensing information is sent to the cloud. The local side pre-judges the behavior intentions and trajectories of peripheral agents, the edge side performs real-time learning and training on features extracted from the local-side and edge-side sensing information, and the cloud performs offline path planning and control based on historical data. During operation, the control scheduler at the local end decides, from the real-time positions of the agents, whether the local end, the edge end or the cloud takes over control. The method has good scene adaptability: control can be switched conveniently among the local end, the edge end and the cloud during application, so that an effective control effect is obtained with the least computing resources. Because part of the common computing tasks are migrated to roadside edge servers and cloud servers, the method can also reduce the cost of a single agent.

Description

Intelligent agent control method based on end edge cloud cooperation
Technical Field
The invention belongs to the field of intelligent agent control and relates to an intelligent agent control method based on end edge cloud cooperation, in which a framework combining local control, edge computing and cloud computing with multi-agent cooperative control achieves effective management of computing resource utilization, computation and control effect.
Background
An agent is a computing entity that can function continuously and autonomously in a certain environment and has characteristics such as residency, reactivity, sociality and proactivity. Common agents include mobile robots, unmanned vehicles and the like. Controlling an agent requires information interaction with the surrounding environment, collection of key information and extraction of knowledge, followed by adaptive control so that the agent autonomously completes its assigned tasks. In overview, this comprises three layers: environment perception, path planning, and control and decision-making.
In terms of perception, the question is how to coordinate the perception of an agent with that of peripheral edge devices to acquire complex environmental data, and how to predict changes in the environment from the perceived information. Current research includes computation offloading schemes and transmission scheduling rules, cooperative interaction of sensors in adjacent regions under cloud control, and multimodal data fusion such as fusing image and velocity information. Path planning and control decision-making are generally integrated; existing research covers trajectory control of a single agent and among multiple agents, improved model predictive control algorithms for better trajectory tracking, and the effects of delay and packet loss caused by larger latencies and congestion in cloud-control communication. Although research in this field is fruitful, two problems remain: 1. analysis and decision-making over large data volumes are limited by the agent's local computing power; combining advanced edge/cloud processing can free local computing capacity while achieving the effect of crowd decision-making, provided the computing content is reasonably offloaded between the local side and the edge/cloud and reasonably allocated while meeting real-time requirements; 2. in more complex environments, most current agents are controlled individually; using the edge end to cooperatively control multiple agents can avoid congestion-like phenomena.
Disclosure of Invention
Aiming at problems such as inaccurate agent dynamic models and limited local computing capacity, the invention provides an intelligent agent control method based on end edge cloud cooperation. Edge devices and remote cloud devices are used to predict, from real-time information on peripheral obstacles, the behavior intentions and driving trajectories of other agents in the next time period. On this basis, control is divided into two layers, namely local obstacle-avoidance control at the local end and global trajectory tracking control under edge cloud cooperative control, and scheduling and decision-making at the local end are performed by a control scheduler;
the first stage is the construction stage of the end edge cloud collaborative mobile agent control method;
step S11, a driving intention module for peripheral agents is established at the local end; the module obtains movement-related data from the mobile agent's own sensors, processes historical data through a POMDP model, and pre-judges behavior intentions;
step S12, according to the pre-judgment of peripheral mobile agent behaviors by the movement intention module at the local end, a trajectory prediction module is constructed; the module is realized through an artificial potential field path planning component and a neural network path planning component;
step S13, a suitable deep reinforcement learning module is selected according to the local end's prediction of the running trajectories of peripheral agents; the generated optimal path and the path tracking and obstacle avoidance control are evaluated, and the parameters of the deep reinforcement learning module are adjusted;
step S14, according to the features extracted in real time at the local end, a tracking control module for a single agent is constructed in a peripheral edge server; the module is trained online and runs in the edge server;
step S15-A, a cooperative control module for multiple agents is constructed at the edge end based on the edge-end tracking control module for a single agent; acting as a control center, the module interacts with other peripheral agents, collects road conditions and agent information, uses edge computing resources to build a cooperation algorithm, namely a multi-objective optimization model, and calculates the control quantity corresponding to each agent;
step S15-B, an offline tracking control module is constructed in the cloud; first a high-precision cloud map is constructed, and the cloud is set up to collect the historical perception information of all agents and edge devices; because the cloud control module transmits data over the Internet, with long and unstable delays, it is constructed as an offline tracking control module;
step S16, a scheduler module is built at the local end; the scheduler module comprises a processing module for the agent's perception information, senses the complexity of the surrounding environment, and includes a switching module for control by the local end, the edge end and the cloud;
the second stage is the execution stage of the end edge cloud collaborative mobile agent control method;
step S21, the local sensors and the edge sensors observe the environment, and the data are stored in the cloud as historical data; the mobile agent observes the surrounding environment through its equipped sensors and determines its own position, the positions of surrounding vehicles, the positions of obstacles and the like;
step S22, the control scheduler at the local end performs scheduling control; the local end first processes the perception information and comprehensively judges whether the local end, the edge end or the cloud should take over the current control;
step S23, local control: an external environment model is established for path planning according to step S22; POMDP is used to model the movement intentions of other surrounding mobile agents, perception data features are extracted and sent to the edge end, neural network path planning is used to comprehensively calculate the risk posed by nearby obstacles and the movement distance, the trajectories of surrounding vehicles are predicted, and finally the vehicle's optimal path and the tracking control quantity are calculated and returned to the local control scheduler;
step S24, the edge end cooperatively controls single and multiple agents: it collects more global perception data than the local end, receives the features extracted from a single agent sent by the local end, calculates the control quantity through a deep learning module trained in real time, and obtains the optimal path and optimal tracking control quantity in a complex environment; if the environment is still more complex, cooperative scheduling control of multiple agents is carried out, and the optimal paths and optimal control quantities of the cooperating agents are solved by multi-objective optimization and transmitted to the corresponding agents;
step S25, cloud offline single-agent control: based on the historical perception information from steps S22 and S24, a personalized path planning scheme is refined for simpler road conditions, and the local tracking control module drives navigation control to track the optimal path;
in the real-time execution process, the scheduler runs one of S23, S24 and S25 in each period according to the sensing state.
As a further improvement of the present invention, the sensors of the mobile agent itself in step S11 include a vision sensor, a lidar sensor, and a positioning sensor.
As a further improvement of the present invention, the historical perception information in step S15-B includes the current positions of the agents and the ranges of fixed obstacles.
As a further improvement of the present invention, observing the surrounding environment in step S21 includes determining the agent's own position, the positions of nearby vehicles and the positions of obstacles.
As a further improvement of the present invention, the processing of the perception information in step S22 includes processing the video signal using the vision sensor processing component; processing the distance signal and the positioning signal using the lidar/positioning sensor processing component to obtain, respectively, perceived environment information such as color information of the external environment and relative position information of obstacles; and perceiving network signals by sending ping packets to measure the network delay.
Beneficial effects:
the intelligent agent control method based on end edge cloud cooperation has good scene adaptability, and can conveniently switch the control of the local end, the edge end and the cloud end in the application process, so that the least consumed computing resource obtains an effective control effect.
Because the intelligent agent control method based on end edge cloud cooperation migrates part of the common computing tasks to edge servers and cloud servers, using the method can also reduce the manufacturing cost of a single agent.
Additional features and advantages of the invention will be set forth in the description which follows. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof.
Drawings
FIG. 1 is a flow chart of the construction steps of the end edge cloud collaborative agent control method according to the present invention;
FIG. 2 is a flow chart of the execution steps of the end edge cloud collaborative agent control method according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the following detailed description and accompanying drawings:
the invention provides an intelligent agent control method based on edge cloud coordination, which utilizes edge equipment and remote cloud equipment to predict according to real-time peripheral obstacles and behavior intentions and driving tracks of other intelligent agents in the next time period, on the basis, the control is divided into two layers to carry out control, namely local end local obstacle avoidance control and global track tracking control of edge cloud coordination control, and local end scheduling and decision are carried out by a control scheduler.
The specific implementation of the end edge cloud collaborative mobile robot control method is divided into two stages: the first stage is the construction stage of the method and the second stage is its execution stage. Each is described below with reference to the accompanying drawings.
In the present embodiment, the method is described by taking as an example a task in which a plurality of mobile robots move to a target location in an environment where the mobile robots are surrounded by base stations equipped with edge computing servers; the mobile robots may also use the service resources of a remote cloud server.
As shown in FIG. 1, the construction of the end edge cloud coordination control method provided by the present invention is as follows:
Step S11, a driving intention module for peripheral robots is established at the local end. The module obtains movement-related data from the mobile robot's own sensors, such as a vision sensor, a lidar sensor and a positioning sensor, processes historical data through a POMDP model, and pre-judges behavior intentions.
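As a non-limiting illustration, the following sketch shows one possible realization of the POMDP-based pre-judgment of step S11: a discrete Bayesian belief filter over a small set of candidate behavior intentions of a peripheral robot. The intention set, transition matrix and observation model are assumptions of this sketch, not values specified by the patent.

```python
import numpy as np

INTENTIONS = ["keep_lane", "change_lane", "brake"]   # hidden behavior intentions
T = np.array([[0.90, 0.05, 0.05],                    # T[i, j] = P(next intention j | current i)
              [0.10, 0.85, 0.05],
              [0.10, 0.05, 0.85]])

def observation_likelihood(obs, intention_idx):
    """P(o | s): likelihood of an observed (lateral_speed, deceleration) pair
    under each intention, modeled here with simple unnormalized Gaussians."""
    lateral_speed, decel = obs
    mean = [(0.0, 0.0), (0.8, 0.0), (0.0, 2.0)][intention_idx]
    return np.exp(-((lateral_speed - mean[0]) ** 2 + (decel - mean[1]) ** 2))

def update_belief(belief, obs):
    """One predict-then-correct step of the belief over intentions."""
    predicted = T.T @ belief                          # prediction through the transition model
    likelihood = np.array([observation_likelihood(obs, i) for i in range(len(INTENTIONS))])
    corrected = likelihood * predicted                # correction with the new observation
    return corrected / corrected.sum()

belief = np.ones(3) / 3                               # uniform prior over intentions
for obs in [(0.1, 0.0), (0.6, 0.1), (0.9, 0.0)]:      # features derived from sensor history
    belief = update_belief(belief, obs)
print(dict(zip(INTENTIONS, belief.round(3))))         # pre-judged behavior intention
```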
Step S12, a trajectory prediction module is constructed according to the pre-judgment of peripheral mobile robot behaviors by the movement intention module at the local end. The module is realized by an artificial potential field path planning component and a neural network path planning component. For example, based on a predicted avoidance behavior and the positions of other vehicles in the lane, the calculated possible trajectory may belong to the lane-change category.
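For illustration, the sketch below shows the artificial potential field component of the step-S12 trajectory prediction: the peripheral robot is rolled down a combined attractive/repulsive potential toward its inferred goal, which yields a predicted track. The gains, influence range and step size are assumed values of this sketch.

```python
import numpy as np

K_ATT, K_REP, REP_RANGE = 1.0, 50.0, 2.0     # assumed attractive/repulsive gains and range (m)

def apf_step(pos, goal, obstacles, step=0.1):
    """One step along the gradient of the attractive + repulsive potential field."""
    force = K_ATT * (goal - pos)                               # attraction toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff) + 1e-6
        if d < REP_RANGE:                                      # repulsion only near obstacles
            force += K_REP * (1.0 / d - 1.0 / REP_RANGE) * diff / d ** 3
    return pos + step * force / (np.linalg.norm(force) + 1e-6)

def predict_trajectory(start, goal, obstacles, horizon=30):
    """Roll the field forward to obtain the predicted track of the peripheral robot."""
    pos = np.array(start, float)
    traj = [pos.copy()]
    for _ in range(horizon):
        pos = apf_step(pos, np.array(goal, float), [np.array(o, float) for o in obstacles])
        traj.append(pos.copy())
    return np.array(traj)

track = predict_trajectory(start=(0, 0), goal=(10, 2), obstacles=[(5, 1)])
print(track[-1])    # predicted position at the end of the horizon
```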
Step S13, a suitable deep reinforcement learning module is selected according to the local end's prediction of the running trajectories of peripheral robots; the generated optimal path and the path tracking and obstacle avoidance control are evaluated, and the parameters of the deep reinforcement learning module are adjusted.
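For illustration, the sketch below shows a minimal deep reinforcement learning module of the kind step S13 refers to: a small DQN whose state is assumed to be (cross-track error, heading error, distance to the nearest obstacle) and whose discrete actions are steering commands. The network size, reward convention and hyper-parameters are illustrative assumptions and stand in for the parameters that step S13 adjusts.

```python
import random
import torch
import torch.nn as nn

N_STATE, N_ACTION = 3, 5        # state features; 5 steering bins from hard-left to hard-right
GAMMA = 0.99
q_net = nn.Sequential(nn.Linear(N_STATE, 64), nn.ReLU(), nn.Linear(64, N_ACTION))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state, epsilon=0.1):
    """Epsilon-greedy steering choice used for path tracking / obstacle avoidance."""
    if random.random() < epsilon:
        return random.randrange(N_ACTION)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())

def td_update(state, action, reward, next_state, done):
    """One temporal-difference step that adjusts the module's parameters."""
    q = q_net(torch.tensor(state, dtype=torch.float32))[action]
    with torch.no_grad():
        next_max = q_net(torch.tensor(next_state, dtype=torch.float32)).max()
        target = torch.tensor(reward, dtype=torch.float32) + (0.0 if done else GAMMA) * next_max
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```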
Step S14, according to the features extracted in real time at the local end, a tracking control module for a single robot is constructed in a peripheral edge server. Compared with the local end it achieves a better training effect: it is trained online and uses more computing resources in the edge server than are available locally.
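For illustration, the sketch below mimics the division of labor in step S14: the local end pushes real-time feature vectors and observed control outcomes over a link (a queue stands in for the network), and the edge server updates a single-robot tracking law online. The linear model, feature layout and learning rate are assumptions of this sketch.

```python
import queue
import threading
import numpy as np

feature_link = queue.Queue()       # placeholder for the local-to-edge network link

def local_side(n_samples=200):
    """Local end: extract feature vectors in real time together with the observed control."""
    rng = np.random.default_rng(0)
    for _ in range(n_samples):
        state = rng.standard_normal(4)                          # extracted features
        control = np.array([0.5 * state[0] - 0.2 * state[2]])   # observed control outcome
        feature_link.put((state, control))
    feature_link.put(None)                                      # end of stream

def edge_online_training(lr=0.05):
    """Edge server: incremental least-squares update of a linear tracking law."""
    w = np.zeros(4)
    while True:
        item = feature_link.get()
        if item is None:
            break
        state, control = item
        error = control[0] - w @ state
        w += lr * error * state                                 # one online gradient step
    return w

threading.Thread(target=local_side, daemon=True).start()
print("learned tracking law:", edge_online_training().round(2))
```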
Step S15-A, a cooperative control module for multiple robots is constructed at the edge end based on the edge-end tracking control module for a single robot. Acting as a control center, the module interacts with other peripheral robots, collects road conditions and the information of the robots, uses edge computing resources to build a cooperation algorithm, namely a multi-objective optimization model, and computes the control quantity corresponding to each robot.
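For illustration, the sketch below scalarizes a toy instance of the multi-objective optimization model mentioned in step S15-A: the edge server chooses one speed command per robot, trading tracking of reference speeds against keeping the predicted gaps between consecutive robots safe. The weighted-sum formulation, weights, horizon and bounds are assumptions of this sketch; the patent only states that a multi-objective model is solved at the edge to obtain the control quantity of each robot.

```python
import numpy as np
from scipy.optimize import minimize

desired_speed = np.array([1.2, 1.0, 0.8])   # per-robot reference speeds (m/s)
gaps_now = np.array([4.0, 3.0])             # current gaps between consecutive robots (m)
SAFE_GAP, HORIZON = 2.0, 2.0                # safety distance (m) and look-ahead (s)
W_TRACK, W_SAFE = 1.0, 10.0                 # weights of the two objectives

def cost(speeds):
    tracking = np.sum((speeds - desired_speed) ** 2)               # objective 1: speed tracking
    future_gaps = gaps_now + HORIZON * (speeds[:-1] - speeds[1:])  # leader speed minus follower speed
    safety = np.sum(np.maximum(0.0, SAFE_GAP - future_gaps) ** 2)  # objective 2: collision avoidance
    return W_TRACK * tracking + W_SAFE * safety                    # weighted-sum scalarization

result = minimize(cost, x0=desired_speed, bounds=[(0.0, 2.0)] * 3)
print("per-robot control quantities:", result.x.round(2))
```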
Step S15-B, an offline tracking control module is constructed in the cloud. First a high-precision cloud map is constructed, and the cloud is set up to collect the global historical perception information of all robots and edge devices, including the current positions of the robots, the ranges of fixed obstacles and the like. Compared with the local end and the edge end, the cloud control module transmits data over the Internet, with long and unstable delays, so it is constructed as an offline control module.
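For illustration, the sketch below shows an offline planner of the kind the step-S15-B cloud module could host: an A*-style search over an occupancy grid aggregated from historical perception data (fixed obstacle ranges). The toy 6x6 map and the choice of planner are assumptions of this sketch.

```python
import heapq

GRID = ["......",
        ".####.",
        "......",
        ".##.#.",
        ".....#",
        "......"]       # '#' marks a fixed obstacle range taken from historical data

def plan(start, goal):
    """A*-style grid search with a Manhattan heuristic over the historical map."""
    open_set, seen = [(0, start, [start])], {start}
    while open_set:
        _, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == "." \
                    and (r, c) not in seen:
                seen.add((r, c))
                h = abs(goal[0] - r) + abs(goal[1] - c)          # Manhattan heuristic
                heapq.heappush(open_set, (len(path) + h, (r, c), path + [(r, c)]))
    return None

print(plan((0, 0), (5, 5)))   # offline path that the local tracking controller later follows
```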
In step S16, a scheduler module is built at the local end. The scheduler module comprises a processing module for the robot's perception information, senses the complexity of the surrounding environment, and constructs the switching module for control by the local end, the edge end and the cloud.
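For illustration, the sketch below shows one possible switching rule for the step-S16 scheduler: it fuses the perceived scene complexity with the measured network delays and decides which side takes over control in the current period. The thresholds and the complexity measure (counts of obstacles and nearby agents) are assumed values, not values given in the patent.

```python
from enum import Enum

class Controller(Enum):
    LOCAL = "local obstacle-avoidance control (S23)"
    EDGE = "edge cooperative control (S24)"
    CLOUD = "cloud offline control (S25)"

def schedule(n_obstacles, n_nearby_agents, edge_delay_ms, cloud_delay_ms):
    """Pick the controller for this period from the perception and network state."""
    complex_scene = n_obstacles > 5 or n_nearby_agents > 2
    if not complex_scene and cloud_delay_ms < 200:
        return Controller.CLOUD    # simple road conditions: the offline cloud plan suffices
    if complex_scene and edge_delay_ms < 20:
        return Controller.EDGE     # complex scene with a usable edge link
    return Controller.LOCAL        # otherwise fall back to local real-time control

print(schedule(n_obstacles=8, n_nearby_agents=3, edge_delay_ms=12, cloud_delay_ms=80))
```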
As shown in FIG. 2, the method for navigating a mobile robot using end edge cloud cooperation provided by the invention comprises the following steps:
and step S21, the local sensor observation environment and the edge sensor observation environment are observed, and the data are stored in the cloud as historical data. The mobile robot observes the surrounding environment by a sensor provided therein, and determines the position of the mobile robot, the position of a neighboring vehicle, the position of an obstacle, and the like. In this embodiment, a video signal of an environment can be obtained through a vision sensor, a distance signal is obtained through a laser radar sensor, a positioning signal is obtained through a positioning sensor, and the position of a moving obstacle is positioned through a plurality of base stations and a mobile phone signal.
In step S22, the control scheduler at the local end performs scheduling control. The local end first processes the perception information, including processing the video signal with the vision sensor processing component and processing the distance signal and the positioning signal with the lidar/positioning sensor processing component, thereby obtaining perceived environment information such as color information of the external environment and relative position information of obstacles; network signals are perceived by sending ping packets to measure the network delay and the like, so that it can be comprehensively judged whether the local end, the edge end or the cloud takes over the current control.
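For illustration, the sketch below shows one way the network-signal perception mentioned here could be realized: the local end sends a ping packet to the edge and cloud servers and parses the reported round-trip time. The host addresses are placeholders and the command flags assume a Linux-style ping utility.

```python
import re
import subprocess

def ping_delay_ms(host, timeout_s=1):
    """Return one ping round-trip time in milliseconds, or None on failure."""
    try:
        out = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), host],
                             capture_output=True, text=True, timeout=timeout_s + 1)
    except subprocess.TimeoutExpired:
        return None
    match = re.search(r"time[=<]([\d.]+)\s*ms", out.stdout)
    return float(match.group(1)) if match else None

edge_delay = ping_delay_ms("192.168.1.10")        # placeholder edge-server address
cloud_delay = ping_delay_ms("cloud.example.com")  # placeholder cloud-server address
print(edge_delay, cloud_delay)                    # fed into the scheduling judgment
```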
Step S23, local control. An external environment model is established according to step S22 for path planning. POMDP is used to model the movement intentions of other surrounding mobile robots, perception data features are extracted and sent to the edge end, neural network path planning is used to comprehensively calculate the risk posed by nearby obstacles and the movement distance, the trajectories of surrounding vehicles are predicted, and finally the vehicle's optimal path and the tracking control quantity are calculated and returned to the local control scheduler.
Step S24, the edge end cooperatively controls single and multiple robots. It collects more global perception data than the local end, receives the features extracted from a single robot sent by the local end, and calculates the control quantity through a deep learning module trained in real time, obtaining the optimal path and optimal tracking control quantity in a complex environment. If the environment is still more complex, cooperative scheduling control of multiple robots is carried out; the optimal paths and optimal control quantities of the cooperating robots are solved by multi-objective optimization and transmitted to the corresponding vehicles.
Step S25, cloud offline single-robot control. Based on the historical perception information from steps S22 and S24, a personalized path planning scheme is refined for simpler road conditions, and the local tracking control module drives navigation control to track the optimal path.
During real-time execution, the scheduler runs one of S23, S24 and S25 in each period according to the sensing state.
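For illustration, the sketch below shows the per-period dispatch described in this paragraph: in every control period the scheduler reads the sensing state and runs exactly one of S23, S24 and S25. It reuses Controller and schedule() from the step-S16 sketch above; the handler functions, the sensing stub and the 100 ms period are placeholders.

```python
import time

handlers = {
    Controller.LOCAL: lambda: print("running S23: local control"),
    Controller.EDGE:  lambda: print("running S24: edge cooperative control"),
    Controller.CLOUD: lambda: print("running S25: cloud offline control"),
}

def control_loop(read_sensing_state, period_s=0.1, n_periods=10):
    """Run one of S23/S24/S25 per control period according to the sensing state."""
    for _ in range(n_periods):
        state = read_sensing_state()     # perception features plus measured network delays
        handlers[schedule(**state)]()    # exactly one controller takes over this period
        time.sleep(period_s)

control_loop(lambda: {"n_obstacles": 3, "n_nearby_agents": 1,
                      "edge_delay_ms": 15, "cloud_delay_ms": 90})
```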
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention in any way; any modifications or equivalent variations made according to the technical spirit of the present invention fall within the scope of the present invention as claimed.

Claims (5)

1. An end edge cloud cooperation-based intelligent agent control method, characterized by comprising a construction stage of the end edge cloud cooperation-based intelligent agent control method and an execution stage of the end edge cloud cooperation-based intelligent agent navigation method, the specific steps being as follows;
the first stage is the construction stage of the end edge cloud collaborative mobile agent control method;
step S11, a driving intention module for peripheral agents is established at the local end; the module obtains movement-related data from the mobile agent's own sensors, processes historical data through a POMDP model, and pre-judges behavior intentions;
step S12, according to the pre-judgment of peripheral mobile agent behaviors by the movement intention module at the local end, a trajectory prediction module is constructed; the module is realized through an artificial potential field path planning component and a neural network path planning component;
step S13, a suitable deep reinforcement learning module is selected according to the local end's prediction of the running trajectories of peripheral agents; the generated optimal path and the path tracking and obstacle avoidance control are evaluated, and the parameters of the deep reinforcement learning module are adjusted;
step S14, according to the features extracted in real time at the local end, a tracking control module for a single agent is constructed in a peripheral edge server; the module is trained online and runs in the edge server;
step S15-A, a cooperative control module for multiple agents is constructed at the edge end based on the edge-end tracking control module for a single agent; acting as a control center, the module interacts with other peripheral agents, collects road conditions and agent information, uses edge computing resources to build a cooperation algorithm, namely a multi-objective optimization model, and calculates the control quantity corresponding to each agent;
step S15-B, an offline tracking control module is constructed in the cloud; first a high-precision cloud map is constructed, and the cloud is set up to collect the historical perception information of all agents and edge devices; because the cloud control module transmits data over the Internet, with long and unstable delays, it is constructed as an offline tracking control module;
step S16, a scheduler module is built at the local end; the scheduler module comprises a processing module for the agent's perception information, senses the complexity of the surrounding environment, and includes a switching module for control by the local end, the edge end and the cloud;
the second stage is the execution stage of the end edge cloud collaborative mobile agent control method;
step S21, the local sensors and the edge sensors observe the environment, and the data are stored in the cloud as historical data; the mobile agent observes the surrounding environment through its equipped sensors and determines its own position, the positions of surrounding vehicles, the positions of obstacles and the like;
step S22, the control scheduler at the local end performs scheduling control; the local end first processes the perception information and comprehensively judges whether the local end, the edge end or the cloud should take over the current control;
step S23, local control: an external environment model is established for path planning according to step S22; POMDP is used to model the movement intentions of other surrounding mobile agents, perception data features are extracted and sent to the edge end, neural network path planning is used to comprehensively calculate the risk posed by nearby obstacles and the movement distance, the trajectories of surrounding vehicles are predicted, and finally the vehicle's optimal path and the tracking control quantity are calculated and returned to the local control scheduler;
step S24, the edge end cooperatively controls single and multiple agents: it collects more global perception data than the local end, receives the features extracted from a single agent sent by the local end, calculates the control quantity through a deep learning module trained in real time, and obtains the optimal path and optimal tracking control quantity in a complex environment; if the environment is still more complex, cooperative scheduling control of multiple agents is carried out, and the optimal paths and optimal control quantities of the cooperating agents are solved by multi-objective optimization and transmitted to the corresponding agents;
step S25, cloud offline single-agent control: based on the historical perception information from steps S22 and S24, a personalized path planning scheme is refined for simpler road conditions, and the local tracking control module drives navigation control to track the optimal path;
in the real-time execution process, the scheduler runs one of S23, S24 and S25 in each period according to the sensing state.
2. The end edge cloud coordination-based intelligent agent control method according to claim 1, wherein the sensors of the mobile agent in step S11 comprise a vision sensor, a lidar sensor and a positioning sensor.
3. The end edge cloud coordination-based intelligent agent control method according to claim 1, wherein the historical perception information in step S15-B comprises the current positions of the agents and the ranges of fixed obstacles.
4. The end edge cloud coordination-based intelligent agent control method according to claim 1, wherein observing the surrounding environment in step S21 comprises determining the agent's own position, the positions of surrounding vehicles and the positions of obstacles.
5. The end edge cloud coordination-based intelligent agent control method according to claim 1, wherein the processing of the perception information in step S22 comprises processing the video signal using a vision sensor processing component; processing the distance signal and the positioning signal using a lidar/positioning sensor processing component to obtain, respectively, perceived environment information such as color information of the external environment and relative position information of obstacles; and perceiving network signals by sending ping packets to measure the network delay.
CN202011021858.8A 2020-09-25 2020-09-25 Intelligent agent control method based on end edge cloud cooperation Active CN112099510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011021858.8A CN112099510B (en) 2020-09-25 2020-09-25 Intelligent agent control method based on end edge cloud cooperation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011021858.8A CN112099510B (en) 2020-09-25 2020-09-25 Intelligent agent control method based on end edge cloud cooperation

Publications (2)

Publication Number Publication Date
CN112099510A true CN112099510A (en) 2020-12-18
CN112099510B CN112099510B (en) 2022-10-18

Family

ID=73755298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011021858.8A Active CN112099510B (en) 2020-09-25 2020-09-25 Intelligent agent control method based on end edge cloud cooperation

Country Status (1)

Country Link
CN (1) CN112099510B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112946603A (en) * 2021-03-08 2021-06-11 安徽乐道信息科技有限公司 Road maintenance detection system based on laser radar and detection method thereof
CN113034718A (en) * 2021-03-01 2021-06-25 启若人工智能研究院(南京)有限公司 Subway pipeline inspection system based on multiple agents
CN113743479A (en) * 2021-08-19 2021-12-03 东南大学 End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof
CN113985475A (en) * 2021-10-28 2022-01-28 北京石油化工学院 Micro-seismic monitoring data transmission method based on terminal edge cloud cooperation of Internet of things
CN114137956A (en) * 2021-10-28 2022-03-04 华人运通(上海)云计算科技有限公司 Vehicle cloud collaborative path planning method and system
CN114493164A (en) * 2021-12-30 2022-05-13 重庆特斯联智慧科技股份有限公司 Robot task analysis method and system based on edge calculation
WO2023005389A1 (en) * 2021-07-30 2023-02-02 International Business Machines Corporation Edge function-guided artificial intelligence request routing
CN116744368A (en) * 2023-07-03 2023-09-12 北京理工大学 Intelligent collaborative heterogeneous air-ground unmanned system based on cloud side end architecture and implementation method
WO2024001302A1 (en) * 2022-06-30 2024-01-04 华为云计算技术有限公司 Mapping system and related method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150102945A1 (en) * 2011-12-16 2015-04-16 Pragmatek Transport Innovations, Inc. Multi-agent reinforcement learning for integrated and networked adaptive traffic signal control
CN108407808A (en) * 2018-04-23 2018-08-17 安徽车鑫保汽车销售有限公司 A kind of running car intelligent predicting system
CN111127931A (en) * 2019-12-24 2020-05-08 国汽(北京)智能网联汽车研究院有限公司 Vehicle road cloud cooperation method, device and system for intelligent networked automobile
CN111367657A (en) * 2020-02-21 2020-07-03 重庆邮电大学 Computing resource collaborative cooperation method based on deep reinforcement learning
CN111756812A (en) * 2020-05-29 2020-10-09 华南理工大学 Energy consumption perception edge cloud cooperation dynamic unloading scheduling method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034718A (en) * 2021-03-01 2021-06-25 启若人工智能研究院(南京)有限公司 Subway pipeline inspection system based on multiple agents
CN112946603A (en) * 2021-03-08 2021-06-11 安徽乐道信息科技有限公司 Road maintenance detection system based on laser radar and detection method thereof
CN112946603B (en) * 2021-03-08 2024-03-26 安徽乐道智能科技有限公司 Road maintenance detection system based on laser radar and detection method thereof
WO2023005389A1 (en) * 2021-07-30 2023-02-02 International Business Machines Corporation Edge function-guided artificial intelligence request routing
CN113743479B (en) * 2021-08-19 2022-05-24 东南大学 End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof
CN113743479A (en) * 2021-08-19 2021-12-03 东南大学 End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof
CN113985475A (en) * 2021-10-28 2022-01-28 北京石油化工学院 Micro-seismic monitoring data transmission method based on terminal edge cloud cooperation of Internet of things
CN113985475B (en) * 2021-10-28 2023-09-05 北京石油化工学院 Microseismic monitoring data transmission method based on Internet of things terminal edge cloud cooperation
CN114137956B (en) * 2021-10-28 2023-11-10 华人运通(上海)云计算科技有限公司 Vehicle cloud cooperative path planning method and system
CN114137956A (en) * 2021-10-28 2022-03-04 华人运通(上海)云计算科技有限公司 Vehicle cloud collaborative path planning method and system
CN114493164A (en) * 2021-12-30 2022-05-13 重庆特斯联智慧科技股份有限公司 Robot task analysis method and system based on edge calculation
CN114493164B (en) * 2021-12-30 2024-04-09 重庆特斯联智慧科技股份有限公司 Robot task analysis method and system based on edge calculation
WO2024001302A1 (en) * 2022-06-30 2024-01-04 华为云计算技术有限公司 Mapping system and related method
CN116744368A (en) * 2023-07-03 2023-09-12 北京理工大学 Intelligent collaborative heterogeneous air-ground unmanned system based on cloud side end architecture and implementation method
CN116744368B (en) * 2023-07-03 2024-01-23 北京理工大学 Intelligent collaborative heterogeneous air-ground unmanned system based on cloud side end architecture and implementation method

Also Published As

Publication number Publication date
CN112099510B (en) 2022-10-18

Similar Documents

Publication Publication Date Title
CN112099510B (en) Intelligent agent control method based on end edge cloud cooperation
Alsamhi et al. Survey on artificial intelligence based techniques for emerging robotic communication
Balico et al. Localization prediction in vehicular ad hoc networks
US10168674B1 (en) System and method for operator control of heterogeneous unmanned system teams
Vidal et al. Pursuit-evasion games with unmanned ground and aerial vehicles
CN111907527A (en) Interpretable learning system and method for autonomous driving
CN103256931B (en) Visual navigation system of unmanned planes
CN102707675A (en) Swarm-robot controller, swarm-robot control method and controller terminal
CN112235808B (en) Multi-agent distributed collaborative dynamic coverage method and system
KR20190109324A (en) Method, apparatus and system for recommending location of robot charging station
Gil et al. Cooperative scheduling of tasks for networked uninhabited autonomous vehicles
CN110488843A (en) Barrier-avoiding method, mobile robot and computer readable storage medium
Mishra et al. A high-end IoT devices framework to foster beyond-connectivity capabilities in 5G/B5G architecture
CN105043379A (en) Scenic spot visiting path planning method and device based on space-time constraint
CN110210806A (en) A kind of the cloud base unmanned vehicle framework and its control evaluation method of 5G edge calculations
CN113743479B (en) End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof
Liu et al. Robotic communications for 5g and beyond: Challenges and research opportunities
Guo et al. V2V task offloading algorithm with LSTM-based spatiotemporal trajectory prediction model in SVCNs
Cui et al. UAV target tracking algorithm based on task allocation consensus
CN112857370A (en) Robot map-free navigation method based on time sequence information modeling
Zhao et al. Automated vehicle traffic control tower: A solution to support the next level automation
Huang et al. Multi-agent vehicle formation control based on mpc and particle swarm optimization algorithm
CN112731914A (en) Cloud AGV application system of 5G smart factory
CN115314850A (en) Intelligent motion system based on cloud edge cooperative control
Zema et al. 3D trajectory optimization for multimission UAVs in smart city scenarios

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant