CN112099510B - Intelligent agent control method based on end edge cloud cooperation

Intelligent agent control method based on end edge cloud cooperation

Info

Publication number
CN112099510B
CN112099510B (application CN202011021858.8A)
Authority
CN
China
Prior art keywords
control
edge
intelligent agent
cloud
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011021858.8A
Other languages
Chinese (zh)
Other versions
CN112099510A (en)
Inventor
孙长银
王乐
曹向辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202011021858.8A
Publication of CN112099510A
Application granted
Publication of CN112099510B
Legal status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Abstract

An agent control method based on end-edge-cloud cooperation. First, a local-side perception module and an edge-side perception module of the agent are established, and the perception information is sent to the cloud. The local end predicts the behavior intentions and trajectories of surrounding agents; the edge end learns and trains in real time on features extracted from the local and edge perception information; and the cloud performs offline path planning and control based on historical data. During operation, the control scheduler at the local end decides, from the real-time positions of the agents, whether the local end, the edge end, or the cloud takes over control. The method adapts well to different scenes: control can be switched conveniently among the local end, the edge end, and the cloud during application, so that an effective control result is obtained with the least computing resources. Because part of the common computing tasks is migrated to roadside edge servers and cloud servers, the method also reduces the cost of a single agent.

Description

Intelligent agent control method based on end edge cloud cooperation
Technical Field
The invention belongs to the field of agent control and relates to an agent control method based on end-edge-cloud cooperation. It combines local control, edge computing, and cloud computing with cooperative multi-agent control in a single framework, so that computing-resource usage, computation, and control performance can be managed effectively.
Background
An agent is a computing entity that can act continuously and autonomously in an environment and has characteristics such as persistence, reactivity, sociality, and proactiveness. Common agents include mobile robots and unmanned vehicles. Controlling an agent requires interaction with the surrounding environment, collection of key information, extraction of knowledge, and adaptive control so that the agent can complete its assigned tasks autonomously. Broadly, this comprises three layers: environment perception, path planning, and control and decision-making.
On the perception side, the questions are how to coordinate the perception of an agent with that of the surrounding edge when collecting complex environmental data, and how to predict environmental changes from the perceived information. Current research covers computation-offloading schemes and transmission-scheduling rules, cooperative interaction of perceptrons in adjacent regions under cloud control, and multimodal data fusion such as combining image and speed information. Path planning and control decisions are usually treated together; existing work studies trajectory control of a single agent and trajectory coordination among agents, improves model predictive control to raise trajectory-tracking performance, and accounts for the delay and packet loss caused by the larger latency and congestion of cloud-control communication. Although this research is fruitful, two problems remain: 1. Analysis and decision-making over large volumes of data are limited by the agent's local computing power; combining advanced edge/cloud processing can free local computing power while achieving a crowd-decision effect, provided the computing load is offloaded reasonably between the local side and the edge/cloud while still meeting real-time requirements. 2. In more complex environments, most current agents are still controlled individually; using the edge end for cooperative control of multiple agents can avoid congestion-like phenomena.
Disclosure of Invention
To address problems such as inaccurate agent dynamic models and limited local computing capacity, the invention provides an agent control method based on end-edge-cloud cooperation. Edge devices and remote cloud devices are used to predict, from real-time data, the surrounding obstacles and the behavior intentions and travel trajectories of other agents in the next time period. On this basis, control is split into two layers: local obstacle-avoidance control at the local end, and global trajectory-tracking control under edge-cloud cooperation, with scheduling and decisions made by a control scheduler at the local end.
The first stage is the construction stage of the end-edge-cloud cooperative mobile agent control method.
Step S11, a travel-intention module for surrounding agents is established at the local end; movement-related data are obtained from the mobile agent's own sensors, historical data are processed through a POMDP model, and behavior intentions are predicted.
Step S12, based on the predictions of the local-end intention module for the behaviors of surrounding mobile agents, a trajectory-prediction module is constructed; it is realized by an artificial potential field path-planning component and a neural network path-planning component.
Step S13, according to the local end's prediction of the travel trajectories of surrounding agents, a suitable deep reinforcement learning module is selected; the generated optimal path and the path-tracking and obstacle-avoidance control are evaluated, and the parameters of the deep reinforcement learning module are adjusted.
Step S14, using the features extracted in real time by the local end, a tracking control module for a single agent is constructed in a nearby edge server; it is trained online and runs in the edge server.
Step S15-A, a cooperative control module for multiple agents is constructed at the edge end, based on the edge end's single-agent tracking control module. The module acts as a control center that interacts with the other surrounding agents, collects road conditions and information about the multiple agents, uses the edge end's computing resources to build a cooperative algorithm, namely a multi-objective optimization model, and computes the corresponding control quantity for each agent.
Step S15-B, an offline tracking control module is constructed in the cloud. A high-precision map is first built in the cloud, and the cloud is set up to collect the global historical perception information of all agents and edge devices. Because the control quantities of the cloud module are transmitted over the Internet, with long and unstable delay, the module is constructed as an offline one.
Step S16, a scheduler module is constructed at the local end. It contains a processing module for the agent's perception information, senses the complexity of the surrounding environment, and provides the switching module among local-end, edge-end, and cloud control.
The second stage is the execution stage of the end-edge-cloud cooperative mobile agent control method.
Step S21, the local sensors and the edge sensors observe the environment, and the data are stored in the cloud as historical data. The mobile agent observes its surroundings through the equipped sensors and determines its own position, the positions of surrounding vehicles, the positions of obstacles, and so on.
Step S22, the control scheduler at the local end performs scheduling control. The local end first processes the perception information and then decides which of the local end, the edge end, or the cloud takes over the current control.
Step S23, local-end control. An external environment model is established from step S22 for path planning; POMDP is used to model the movement intentions of the other surrounding mobile agents; perception-data features are extracted and sent to the edge end; neural network path planning is used to jointly evaluate the risk of nearby obstacles and the travel distance and to predict the trajectories of surrounding vehicles; finally, the agent's optimal path and the tracking-control quantity are computed and returned to the local control scheduler.
Step S24, the edge end performs cooperative control of single and multiple agents. It collects perception data that are more global than those of the local end, receives the single-agent features sent by the local end, and computes control quantities through a deep learning module trained in real time, yielding the optimal path and optimal tracking-control quantity in a complex environment. If the environment is still more complex, cooperative scheduling control of multiple agents is performed: the optimal paths and optimal control quantities of the cooperating agents are solved by multi-objective optimization and transmitted to the corresponding agents.
Step S25, offline single-agent control in the cloud. Based on the historical perception information from steps S22 and S24, the cloud provides a personalized path-planning scheme for simpler road conditions, and the local-end tracking control module carries out the navigation control that tracks the optimal path.
During real-time execution, the scheduler runs one of steps S23, S24, or S25 in each period according to the perceived state.
As a further improvement of the invention, the mobile agent's own sensors in step S11 include a vision sensor, a lidar sensor, and a positioning sensor.
As a further improvement of the invention, the historical perception information in step S15-B includes the current positions of the agents and the ranges of fixed obstacles.
As a further improvement of the invention, observing the surrounding environment in step S21 includes determining the agent's own position, the positions of nearby vehicles, and the positions of obstacles.
As a further improvement of the invention, processing the perception information in step S22 includes processing the video signal with a vision-sensor processing component, and processing the range and positioning signals with lidar and positioning-sensor processing components, which yield, respectively, perception of the environment such as the color information of the external surroundings and the relative positions of obstacles; network signals are perceived by sending ping packets to measure network delay.
Advantageous effects:
The agent control method based on end-edge-cloud cooperation adapts well to different scenes; during application, control can be switched conveniently among the local end, the edge end, and the cloud, so that an effective control result is obtained with the least computing resources.
Because the method migrates part of the common computing tasks to edge servers and cloud servers, using it can also reduce the manufacturing cost of a single agent.
Additional features and advantages of the invention will be set forth in the description which follows. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof.
Drawings
FIG. 1 is a flowchart of the construction steps of the end-edge-cloud cooperative agent control method according to the invention;
FIG. 2 is a flowchart of the execution steps of the end-edge-cloud cooperative agent control method according to the invention.
Detailed Description
The invention is described in further detail below with reference to the specific embodiments and the accompanying drawings.
The invention provides an agent control method based on end-edge-cloud cooperation. Edge devices and remote cloud devices are used to predict, from real-time data, the surrounding obstacles and the behavior intentions and travel trajectories of other agents in the next time period. On this basis, control is split into two layers: local obstacle-avoidance control at the local end, and global trajectory-tracking control under edge-cloud cooperation, with scheduling and decisions made by a control scheduler at the local end.
The specific embodiment of the end-edge-cloud cooperative mobile robot control method has two stages: a construction stage and an execution stage. Each is described below with reference to the drawings.
In this embodiment, the example task is a group of mobile robots moving to target locations in an environment whose base stations carry edge computing servers; the mobile robots can also use the service resources of a remote cloud server.
As shown in FIG. 1, the end-edge-cloud cooperative control method is constructed as follows:
and step S11, establishing a driving intention module of the peripheral robot at the local end. The module obtains movement-related data based on sensors of the mobile robot, such as a vision sensor, a laser radar sensor and a positioning sensor, processes historical data through a POMDP model, and prejudges behavior intentions.
Step S12, a trajectory-prediction module is constructed from the predictions that the local-end intention module makes about the behaviors of surrounding mobile robots. The module is realized by an artificial potential field path-planning component and a neural network path-planning component. For example, based on a predicted avoidance behavior and the positions of other vehicles in the lane, the computed candidate trajectory may fall into the lane-change category.
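As one hedged illustration of the artificial-potential-field component, the sketch below rolls a point forward under an attractive force toward the goal and repulsive forces from obstacles; the gains `k_att` and `k_rep`, the influence radius `d0`, and the step size are assumed values, not parameters given in the patent.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, d0=2.0, step=0.05):
    """One gradient step of a basic artificial potential field.

    pos, goal: (x, y); obstacles: list of (x, y). Gains and the influence
    radius d0 are placeholder values.
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                     # attraction toward the goal
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                            # repulsion inside radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

# Roll the field forward to get a candidate trajectory for a nearby agent.
p = np.array([0.0, 0.0])
trajectory = [p]
for _ in range(200):
    p = apf_step(p, goal=(10.0, 0.0), obstacles=[(5.0, 0.3)])
    trajectory.append(p)
print(trajectory[-1])
```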
Step S13, a suitable deep reinforcement learning module is selected according to the local end's prediction of the travel trajectories of surrounding robots; the generated optimal path and the path-tracking and obstacle-avoidance control are evaluated, and the parameters of the deep reinforcement learning module are adjusted.
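The patent does not define how the generated path and the tracking/obstacle-avoidance control are scored. A minimal evaluation signal that could drive such parameter adjustment is sketched below; the weights `w_track` and `w_obs` and the safety radius `d_safe` are assumptions.

```python
import numpy as np

def tracking_reward(pos, path, obstacles, w_track=1.0, w_obs=0.5, d_safe=1.0):
    """Stand-in reward for evaluating a tracking/obstacle-avoidance policy.

    Penalizes distance to the nearest path point and intrusion inside
    a safety radius d_safe around obstacles. Weights are assumptions.
    """
    pos = np.asarray(pos, float)
    path = np.asarray(path, float)
    track_err = np.min(np.linalg.norm(path - pos, axis=1))
    penalty = 0.0
    for obs in obstacles:
        d = np.linalg.norm(pos - np.asarray(obs, float))
        if d < d_safe:
            penalty += d_safe - d
    return -w_track * track_err - w_obs * penalty

r = tracking_reward([1.0, 0.2], [[0, 0], [1, 0], [2, 0]], obstacles=[(1.5, 0.3)])
print(r)
```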
Step S14, a tracking control module for a single robot is constructed in a nearby edge server from the features extracted in real time by the local end. Compared with the local end, its training effect is better: it is trained online and uses more computing resources in the edge server than are available locally.
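The patent states only that this edge module is trained online on features streamed from the local end; it does not name the learning algorithm or network. A minimal sketch of one incremental update, assuming a small fully connected network and a regression loss (PyTorch), follows.

```python
import torch
import torch.nn as nn

# Minimal online-trained tracking controller for the edge server.
# The network size, optimizer, and loss are assumptions.
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def online_update(features, target_control):
    """One incremental update from a (features, control) pair streamed
    by the local end; returns the loss value for monitoring."""
    pred = policy(features)
    loss = nn.functional.mse_loss(pred, target_control)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(online_update(torch.randn(16), torch.zeros(2)))
```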
Step S15-A, a cooperative control module for multiple robots is constructed at the edge end, based on the edge end's single-robot tracking control module. The module acts as a control center that interacts with the other surrounding robots, collects road conditions and information about the multiple robots, uses the edge computing resources to build a cooperative algorithm, namely a multi-objective optimization model, and computes the corresponding control quantity for each robot.
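The patent names a multi-objective optimization model without fixing its objectives or solver. One possible reading, sketched below, scalarizes two toy objectives (total travel time and congestion at a shared junction) with assumed weights and solves for per-robot speed commands with SciPy; the distances, weights, and objective forms are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

# Each robot i chooses a speed v_i; the objectives are total travel time and
# a "congestion" term when robots would reach a shared junction too close
# together in time. All numbers below are illustrative assumptions.
dists = np.array([8.0, 10.0, 6.0])           # distance of each robot to the junction

def cost(v, w_time=1.0, w_gap=5.0, min_gap=1.0):
    t = dists / v                            # arrival times
    travel = t.sum()
    congestion = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            gap = abs(t[i] - t[j])
            if gap < min_gap:
                congestion += (min_gap - gap) ** 2
    return w_time * travel + w_gap * congestion  # weighted-sum scalarization

res = minimize(cost, x0=np.ones(3), bounds=[(0.5, 2.0)] * 3)
print(res.x)   # per-robot speed commands returned as control quantities
```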
Step S15-B, an offline tracking control module is constructed in the cloud. A high-precision map is first built in the cloud, and the cloud is set up to collect the global historical perception information of all robots and edge devices, including the robots' current positions, the ranges of fixed obstacles, and the like. Compared with the local end and the edge end, the control quantities of the cloud module are transmitted over the Internet, and the delay is long and unstable, so the cloud module is constructed as an offline control module.
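The patent does not name the offline planner used on the high-precision map. As an assumption only, an occupancy grid built from the map and the historical fixed-obstacle ranges could be searched with A*; the minimal sketch below illustrates that choice.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on an occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                 # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                      # reconstruct the planned path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None

grid = [[0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```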
Step S16, a scheduler module is constructed at the local end. The scheduler module contains a processing module for the robot's perception information, senses the complexity of the surrounding environment, and provides the switching module among local-end, edge-end, and cloud control.
As shown in FIG. 2, the end-edge-cloud cooperative mobile robot navigation method provided by the invention comprises the following steps:
Step S21, the local sensors and the edge sensors observe the environment, and the data are stored in the cloud as historical data. The mobile robot observes its surroundings with the sensors it carries and determines its own position, the positions of nearby vehicles, the positions of obstacles, and so on. In this embodiment, a video signal of the environment is obtained by the vision sensor, a range signal by the lidar sensor, and a positioning signal by the positioning sensor; the positions of moving obstacles are located from cellular signals through several base stations.
Step S22, the control scheduler at the local end performs scheduling control. The local end first processes the perception information: the vision-sensor processing component processes the video signal, and the lidar and positioning-sensor processing components process the range and positioning signals, which yield, respectively, perception of the environment such as the color information of the external surroundings and the relative positions of obstacles; network signals are perceived by sending ping packets to measure network delay and the like. From these, the scheduler decides which of the local end, the edge end, or the cloud takes over the current control.
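A minimal sketch of that decision is given below. It assumes a coarse round-trip time obtained by timing one system `ping`, hand-picked complexity scores, and latency budgets; the function names, hosts, and thresholds are illustrative and do not come from the patent.

```python
import subprocess
import time

def ping_rtt_ms(host):
    """Coarse RTT estimate: time one Linux `ping -c 1 -W 1` invocation.

    A stand-in for the "send a ping packet" measurement; parsing the ping
    output would give a more precise figure.
    """
    start = time.monotonic()
    try:
        ok = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL).returncode == 0
    except FileNotFoundError:                 # no ping binary available
        ok = False
    return (time.monotonic() - start) * 1000.0 if ok else float("inf")

def choose_controller(env_complexity, edge_host, cloud_host,
                      edge_budget_ms=50.0, cloud_budget_ms=200.0):
    """Pick which end takes over control; scores and budgets are assumptions."""
    edge_rtt, cloud_rtt = ping_rtt_ms(edge_host), ping_rtt_ms(cloud_host)
    if env_complexity >= 2 and edge_rtt <= edge_budget_ms:
        return "edge"      # complex scene, edge reachable in time (step S24)
    if env_complexity == 0 and cloud_rtt <= cloud_budget_ms:
        return "cloud"     # simple road condition, offline cloud plan (step S25)
    return "local"         # default: local obstacle-avoidance control (step S23)

print(choose_controller(env_complexity=2,
                        edge_host="192.0.2.10",      # placeholder edge server
                        cloud_host="example.com"))   # placeholder cloud host
```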
Step S23, local-end control. An external environment model is established from step S22 for path planning. POMDP is used to model the movement intentions of the other surrounding mobile robots, perception-data features are extracted and sent to the edge end, neural network path planning is used to jointly evaluate the risk of nearby obstacles and the travel distance and to predict the trajectories of surrounding vehicles, and finally the robot's optimal path and the tracking-control quantity are computed and returned to the local-end control scheduler.
Step S24, the edge end performs cooperative control of single and multiple robots. It collects perception data that are more global than those of the local end, receives the single-robot features sent by the local end, and computes control quantities through a deep learning module trained in real time, yielding the optimal path and optimal tracking-control quantity in a complex environment. If the environment is still more complex, cooperative scheduling control of multiple robots is performed: the optimal paths and optimal control quantities of the cooperating robots are solved by multi-objective optimization and transmitted to the corresponding vehicles.
Step S25, offline single-robot control in the cloud. Based on the historical perception information from steps S22 and S24, the cloud provides a personalized path-planning scheme for simpler road conditions, and the local-end tracking control module carries out the navigation control that tracks the optimal path.
During real-time execution, the scheduler runs one of steps S23, S24, or S25 in each period according to the perceived state.
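The per-period dispatch can be pictured with the sketch below; the handler functions stand in for steps S23/S24/S25, and the loop body is illustrative rather than taken from the patent.

```python
import time

# Stand-in handlers for steps S23 (local), S24 (edge), S25 (cloud); the real
# modules are those constructed in steps S11 to S16.
def local_control(sensed):  return {"v": 0.5, "w": 0.0}
def edge_control(sensed):   return {"v": 0.8, "w": 0.1}
def cloud_control(sensed):  return {"v": 0.6, "w": 0.0}

def run_control_loop(choose, read_sensors, apply_control, period_s=0.1, cycles=3):
    """Each period: perceive (S21), let the scheduler pick an end (S22),
    then run exactly one of S23/S24/S25 and apply its control quantity."""
    handlers = {"local": local_control, "edge": edge_control, "cloud": cloud_control}
    for _ in range(cycles):
        sensed = read_sensors()
        command = handlers[choose(sensed)](sensed)
        apply_control(command)
        time.sleep(period_s)

run_control_loop(choose=lambda s: "local",
                 read_sensors=lambda: {},
                 apply_control=print)
```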
The above is only a preferred embodiment of the present invention and is not intended to limit the invention in any way; any modification or equivalent variation made in accordance with the technical spirit of the present invention falls within the scope of protection claimed.

Claims (5)

1. An agent control method based on end-edge-cloud cooperation, characterized by comprising a construction stage of the end-edge-cloud cooperative agent control method and an execution stage of the end-edge-cloud cooperative agent navigation method, with the following specific steps:
the first stage is the construction stage of the end-edge-cloud cooperative mobile agent control method;
step S11, a travel-intention module for surrounding agents is established at the local end; movement-related data are obtained from the mobile agent's own sensors, historical data are processed through a POMDP model, and behavior intentions are predicted;
step S12, based on the predictions of the local-end intention module for the behaviors of surrounding mobile agents, a trajectory-prediction module is constructed and realized by an artificial potential field path-planning component and a neural network path-planning component;
step S13, according to the local end's prediction of the travel trajectories of surrounding agents, a suitable deep reinforcement learning module is selected, the generated optimal path and the path-tracking and obstacle-avoidance control are evaluated, and the parameters of the deep reinforcement learning module are adjusted;
step S14, using the features extracted in real time by the local end, a tracking control module for a single agent is constructed in a nearby edge server; it is trained online and runs in the edge server;
step S15-A, a cooperative control module for multiple agents is constructed at the edge end, based on the edge end's single-agent tracking control module; the module acts as a control center that interacts with the other surrounding agents, collects road conditions and information about the multiple agents, uses the edge end's computing resources to build a cooperative algorithm, namely a multi-objective optimization model, and computes the corresponding control quantity for each agent;
step S15-B, an offline tracking control module is constructed in the cloud; a high-precision map is first built in the cloud, and the cloud is set up to collect the global historical perception information of all agents and edge devices; because the control quantities of the cloud module are transmitted over the Internet, with long and unstable delay, the module is constructed as an offline one;
step S16, a scheduler module is constructed at the local end; it contains a processing module for the agent's perception information, senses the complexity of the surrounding environment, and provides the switching module among local-end, edge-end, and cloud control;
the second stage is the execution stage of the end-edge-cloud cooperative mobile agent control method;
step S21, the local sensors and the edge sensors observe the environment, and the data are stored in the cloud as historical data; the mobile agent observes its surroundings through the equipped sensors and determines its own position, the positions of surrounding vehicles, and the positions of obstacles;
step S22, the control scheduler at the local end performs scheduling control; the local end first processes the perception information and then decides which of the local end, the edge end, or the cloud takes over the current control;
step S23, local-end control; an external environment model is established from step S22 for path planning, POMDP is used to model the movement intentions of the other surrounding mobile agents, perception-data features are extracted and sent to the edge end, neural network path planning is used to jointly evaluate the risk of nearby obstacles and the travel distance and to predict the trajectories of surrounding vehicles, and finally the agent's optimal path and the tracking-control quantity are computed and returned to the local control scheduler;
step S24, the edge end performs cooperative control of single and multiple agents; it collects perception data that are more global than those of the local end, receives the single-agent features sent by the local end, and computes control quantities through a deep learning module trained in real time, yielding the optimal path and optimal tracking-control quantity in a complex environment; if the environment is still more complex, cooperative scheduling control of multiple agents is performed, the optimal paths and optimal control quantities of the cooperating agents are solved by multi-objective optimization, and the results are transmitted to the corresponding agents;
step S25, offline single-agent control in the cloud; based on the historical perception information from steps S22 and S24, a personalized path-planning scheme is provided for simpler road conditions, and the local-end tracking control module carries out the navigation control that tracks the optimal path;
during real-time execution, the scheduler runs one of steps S23, S24, and S25 in each period according to the perceived state.
2. The agent control method based on end-edge-cloud cooperation according to claim 1, wherein the mobile agent's sensors in step S11 comprise a vision sensor, a lidar sensor, and a positioning sensor.
3. The agent control method based on end-edge-cloud cooperation according to claim 1, wherein the historical perception information in step S15-B comprises the current positions of the agents and the ranges of fixed obstacles.
4. The agent control method based on end-edge-cloud cooperation according to claim 1, wherein observing the surrounding environment in step S21 comprises determining the agent's own position, the positions of surrounding vehicles, and the positions of obstacles.
5. The agent control method based on end-edge-cloud cooperation according to claim 1, wherein processing the perception information in step S22 comprises processing the video signal with a vision-sensor processing component, and processing the range and positioning signals with lidar and positioning-sensor processing components, which yield, respectively, the color information of the external environment and the relative positions of obstacles in the perceived environment; network signals are perceived by sending ping packets to measure network delay.
CN202011021858.8A 2020-09-25 2020-09-25 Intelligent agent control method based on end edge cloud cooperation Active CN112099510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011021858.8A CN112099510B (en) 2020-09-25 2020-09-25 Intelligent agent control method based on end edge cloud cooperation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011021858.8A CN112099510B (en) 2020-09-25 2020-09-25 Intelligent agent control method based on end edge cloud cooperation

Publications (2)

Publication Number Publication Date
CN112099510A (en) 2020-12-18
CN112099510B true CN112099510B (en) 2022-10-18

Family

ID=73755298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011021858.8A Active CN112099510B (en) 2020-09-25 2020-09-25 Intelligent agent control method based on end edge cloud cooperation

Country Status (1)

Country Link
CN (1) CN112099510B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034718A (en) * 2021-03-01 2021-06-25 启若人工智能研究院(南京)有限公司 Subway pipeline inspection system based on multiple agents
CN112946603B (en) * 2021-03-08 2024-03-26 安徽乐道智能科技有限公司 Road maintenance detection system based on laser radar and detection method thereof
US20230033818A1 (en) * 2021-07-30 2023-02-02 International Business Machines Corporation Edge function-guided artifical intelligence request routing
CN113743479B (en) * 2021-08-19 2022-05-24 东南大学 End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof
CN113985475B (en) * 2021-10-28 2023-09-05 北京石油化工学院 Microseism monitoring data transmission method based on Internet of things terminal Bian Yun cooperation
CN114137956B (en) * 2021-10-28 2023-11-10 华人运通(上海)云计算科技有限公司 Vehicle cloud cooperative path planning method and system
CN114493164B (en) * 2021-12-30 2024-04-09 重庆特斯联智慧科技股份有限公司 Robot task analysis method and system based on edge calculation
WO2024001302A1 (en) * 2022-06-30 2024-01-04 华为云计算技术有限公司 Mapping system and related method
CN116744368B (en) * 2023-07-03 2024-01-23 北京理工大学 Intelligent collaborative heterogeneous air-ground unmanned system based on cloud side end architecture and implementation method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013086629A1 (en) * 2011-12-16 2013-06-20 El-Tantawy Samah Multi-agent reinforcement learning for integrated and networked adaptive traffic signal control
CN108407808A (en) * 2018-04-23 2018-08-17 安徽车鑫保汽车销售有限公司 A kind of running car intelligent predicting system
CN111127931B (en) * 2019-12-24 2021-06-11 国汽(北京)智能网联汽车研究院有限公司 Vehicle road cloud cooperation method, device and system for intelligent networked automobile
CN111367657B (en) * 2020-02-21 2022-04-19 重庆邮电大学 Computing resource collaborative cooperation method based on deep reinforcement learning
CN111756812B (en) * 2020-05-29 2021-09-21 华南理工大学 Energy consumption perception edge cloud cooperation dynamic unloading scheduling method

Also Published As

Publication number Publication date
CN112099510A (en) 2020-12-18

Similar Documents

Publication Publication Date Title
CN112099510B (en) Intelligent agent control method based on end edge cloud cooperation
Chang et al. Reinforcement based mobile robot path planning with improved dynamic window approach in unknown environment
Balico et al. Localization prediction in vehicular ad hoc networks
Leung et al. Active SLAM using model predictive control and attractor based exploration
CN103926925B (en) Improved VFH algorithm-based positioning and obstacle avoidance method and robot
JP2021533036A (en) Multi-view system and method for action policy selection by autonomous agents
CN112235808B (en) Multi-agent distributed collaborative dynamic coverage method and system
CN110488843A (en) Barrier-avoiding method, mobile robot and computer readable storage medium
Gil et al. Cooperative scheduling of tasks for networked uninhabited autonomous vehicles
CN110989352A (en) Group robot collaborative search method based on Monte Carlo tree search algorithm
Liu et al. Robotic communications for 5g and beyond: Challenges and research opportunities
CN113743479B (en) End-edge-cloud vehicle-road cooperative fusion perception architecture and construction method thereof
CN112857370A (en) Robot map-free navigation method based on time sequence information modeling
Adamey et al. A decentralized approach for multi-UAV multitarget tracking and surveillance
CN113219506A (en) Positioning method for multimode fusion seamless switching
Cui et al. UAV target tracking algorithm based on task allocation consensus
Qiao et al. Dynamic self-organizing leader-follower control in a swarm mobile robots system under limited communication
Shangguan et al. Motion planning for autonomous grain carts
CN112731914A (en) Cloud AGV application system of 5G smart factory
CN115314850A (en) Intelligent motion system based on cloud edge cooperative control
CN115657676A (en) Centralized multi-AGV multi-path channel change decision planning method based on priority
Zema et al. 3D trajectory optimization for multimission UAVs in smart city scenarios
De Freitas et al. Decentralized task distribution among cooperative UAVs in surveillance systems applications
Lin et al. Service-oriented dynamic data driven application systems to urban traffic management in resource-bounded environment
CN111897348A (en) Control method and system of cloud robot, cloud robot and cloud server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant