CN113610271B - Multi-Agent airport surface taxi path planning method based on historical data analysis - Google Patents

Multi-Agent airport surface taxi path planning method based on historical data analysis

Info

Publication number
CN113610271B
Authority
CN
China
Prior art keywords
conflict
agent
aircraft
intersection
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110749433.7A
Other languages
Chinese (zh)
Other versions
CN113610271A (en)
Inventor
韩云祥
张建伟
何爱平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202110749433.7A priority Critical patent/CN113610271B/en
Publication of CN113610271A publication Critical patent/CN113610271A/en
Application granted granted Critical
Publication of CN113610271B publication Critical patent/CN113610271B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 10/047 Optimisation of routes or paths, e.g. travelling salesman problem
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 5/00 Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G 5/06 Traffic control systems for aircraft, e.g. air-traffic control [ATC] for control when on the ground
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

The invention discloses a multi-Agent airport surface taxi path planning method based on historical data analysis. Based on the analysis of the historical surface taxiing data of aircraft, and combined with the Q-learning algorithm from reinforcement learning, the method dynamically plans aircraft surface taxi paths in real time. It can greatly reduce the possibility of ground conflicts between aircraft and improve surface operation efficiency. The method analyzes the historical taxiing data along different dimensions, including time-series data and surface resources, to obtain the conflict hot-spot areas and conflict peak periods of historical operations, and then performs intelligent learning by setting up corresponding Agents. A shortest-path search strategy is adopted so that the taxi path is shortest under constraints such as minimum safety separation and taxi speed limits, and a corresponding priority-based conflict avoidance method is used to resolve conflicts.

Description

Multi-Agent airport surface taxi path planning method based on historical data analysis
Technical Field
The invention relates to a multi-Agent airport surface taxi path planning method based on historical data analysis, and belongs to the field of airport surface path planning.
Background
With the rapid development of civil aviation in recent years, passengers pay increasing attention to flight punctuality and expect an ever better travel experience, while surface operation efficiency affects flight progress to a certain extent. To cope with continuously growing demand and extremely limited hardware resources (such as runway and apron resources), an intelligent surface management system must be established to maximize airport resource utilization, optimize surface operation efficiency and maximize economic benefit.
The key to improving surface operation efficiency is how to plan aircraft ground taxi trajectories effectively. For a busy large airport, various conflicts are inevitably encountered during taxi path planning, especially at intersections and junction nodes in the airport maneuvering area. Incomplete airport radar coverage, an unreasonable surface layout, or unreasonable guidance signs or ground lighting at intersections and junction nodes easily leave flight crews uncertain during taxiing, which in turn causes ground taxi conflicts. Planning must therefore be prepared to handle conflicts at any time, which is also a precondition for ensuring the safety of path planning.
At present, there are many studies on airport surface operations at home and abroad, but research on aircraft surface taxi path planning is rare; most researchers focus on conflict resolution while neglecting planning. Path planning here is real-time local planning rather than static global planning and involves a huge path search space. Researchers have proposed the A* algorithm, genetic algorithms, Monte Carlo algorithms and so on to solve this problem; to reduce its complexity, an intelligent method must be adopted to meet airport requirements.
Disclosure of Invention
In view of the above problems, the invention provides a multi-Agent airport surface taxi path planning method based on historical data analysis.
The multi-Agent airport surface taxi path planning method based on historical data analysis disclosed by the invention has the following beneficial effects:
(a) It can alleviate airport surface congestion, effectively relieve surface operation pressure and improve efficiency;
(b) It reduces the taxiing time of flights in the arrival or departure phase, thereby increasing the flight punctuality rate;
(c) Efficient path planning can reduce the probability of aircraft conflicts to a certain extent;
(d) It reduces the pressure on captains and controllers and can improve airport surface management efficiency.
Technical scheme: to achieve the above purpose, the invention adopts the following technical scheme:
A multi-Agent airport surface taxi path planning method based on historical data analysis comprises the following specific steps:
Step one, analyze the historical taxiing data of aircraft, set a conflict hot-spot threshold according to the historical number of conflicts at each surface intersection, and obtain the distribution of surface conflict hot-spot areas; at the same time, collect the taxi paths of aircraft from start point to end point, including all intersections passed along the way, so that the model can learn from them;
Step two, set up two kinds of Agents, namely aircraft Agents and taxiway-intersection Agents. Agents of the same kind compete with each other for surface resources: aircraft Agents each search for the shortest path to their destination, and if they apply for the same runway or taxiway resource at the same time, a conflict necessarily arises. Agents of different kinds cooperate with each other: a taxiway-intersection Agent gives an action strategy according to the environment of the aircraft Agent, and the aircraft then takes a reasonable action according to the constraint conditions;
Step three, train the model on the taxiing data from step one. First initialize the surface environment, set m aircraft Agents, define n taxiway intersections, and fix the initial positions of the aircraft. Then, referring to the historical taxi trajectories, avoid the conflict hot-spot areas according to their conflict probabilities, adopt a shortest-path strategy to search for the next node, and record the action taken at each intersection. If there is no conflict, a Q value is returned as a reward; if there is a conflict, the corresponding Q value is fed back as a punishment, until a conflict-free scheme with a short path is found. After all the data have been used for training, a Q-value network is obtained, and the Q value directly reflects the probability of conflict at each intersection;
Step four, complete the path planning using the Q-value network constructed in step three. The aircraft Agent first sorts the Q values of the adjacent reachable intersections and selects the intersection with the largest Q value as the next candidate node. If a conflict occurs, the intersection Agent gives different conflict resolution strategies according to the conflict type; if there is no conflict, this step is repeated until the aircraft Agent safely reaches the end point;
Further, the model involved in step three is expressed as follows:
Q(O_t, A_t) ← Q(O_t, A_t) + α[ r_t + γ·max_{A*} Q(O_{t+1}, A*) - Q(O_t, A_t) ]
where α represents the learning rate, i.e. the step size of each update, taken as α = 0.001, and r_t denotes the reward obtained when the aircraft takes action A_t in the current state O_t. When an aircraft Agent passes through an intersection, if the action suggested by the intersection Agent lets the aircraft pass safely without conflict, a reward value is fed back to the intersection Agent. The reward value is determined by the time the aircraft takes to taxi through that path segment, according to the following standard:
(a) If the transit time through the intersection is within the standard range of 15 to 30 seconds, the reward value is 5;
(b) If the time is less than 15 seconds, a conflict alarm is triggered, deceleration is suggested, and the reward value is 1;
(c) If the time is more than 30 seconds, a conflict alarm is triggered, acceleration is suggested, and the reward value is 2;
(d) If conflict feedback is encountered, the lowest reward value, 0, is given.
The action set {A} includes {accelerate, decelerate, turn, go straight, wait}. γ denotes the discount factor and represents the degree to which the value fed back by the action taken in the aircraft's state at the next moment affects the current value. A* denotes the action in the action set that maximizes the value given the environment O_{t+1} at the next moment, also called the optimal action. As the number of iterations increases, the values of all nodes converge.
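For illustration only, a minimal Python sketch of this tabular Q-learning update is given below; the dictionary-based Q table, the function name and the value of the discount factor are assumptions, not part of the patent (α = 0.001 follows the text).

```python
# Minimal sketch of the tabular Q-learning update described above.
def q_update(q_table, state, action, reward, next_state, next_actions,
             alpha=0.001, gamma=0.9):
    """One step of Q(O_t,A_t) <- Q(O_t,A_t) + alpha*(r_t + gamma*max Q(O_t+1,A*) - Q(O_t,A_t))."""
    best_next = max((q_table.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    current = q_table.get((state, action), 0.0)
    q_table[(state, action)] = current + alpha * (reward + gamma * best_next - current)
    return q_table[(state, action)]
```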
Further, in step four, the trained Q-value network is used as prior knowledge. Combined with the current environment O_t of the aircraft, the next adjacent nodes are searched for and added to the candidate node set, recorded as N = {n_1, n_2, ..., n_i, ..., n_k}, where n_i represents the i-th candidate node and k represents the number of nodes in the whole candidate set. The cumulative values of the k nodes are then sorted from largest to smallest. The node with the largest value is considered first and checked for conflict; if a conflict exists, its value is updated first, and then the remaining conflict-free nodes with the largest values are considered in turn. After all nodes have been traversed, the conflict-free nodes are added to the candidate set; finally, sorted by the shortest path from each node to the apron from smallest to largest, the node with the shortest distance is selected as the node to be considered next in the path planning.
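A sketch of this candidate-node selection might look as follows; has_conflict and shortest_distance_to_apron are hypothetical placeholders standing in for the intersection-Agent conflict check and the Dijkstra distance computed later in the detailed description.

```python
def select_next_node(current_node, graph, q_value, has_conflict,
                     shortest_distance_to_apron):
    """Rank the reachable intersections by Q value, drop those in conflict,
    then take the conflict-free one closest to the destination apron."""
    candidates = list(graph[current_node])          # N = {n_1, ..., n_k}
    candidates.sort(key=q_value, reverse=True)      # largest Q value first
    conflict_free = [n for n in candidates if not has_conflict(n)]
    if not conflict_free:
        return None                                 # wait until a node becomes conflict-free
    return min(conflict_free, key=shortest_distance_to_apron)
```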
Based on the analysis of historical taxiing data of surface aircraft, the invention can solve the problem of path planning for aircraft on the surface and provides decision support for surface aircraft scheduling.
The planning scheme of the invention matches the command habits of surface scheduling personnel and can easily be embedded into existing automated surface scheduling systems.
Drawings
FIG. 1 is a schematic diagram of surface conflict categories;
FIG. 2 is a schematic diagram of the aircraft Agent module;
FIG. 3 is a schematic diagram of path planning on a surface;
FIG. 4 is a directed graph obtained by abstracting the surface into nodes.
Detailed Description
A multi-Agent airport surface taxi path planning method based on historical data analysis comprises: analysis and processing of historical taxiing data, establishment of the Agent model, construction of the Q-value network, and a real-time path planning strategy.
based on the conflict types shown in fig. 1, the taxi history data analysis and processing module mainly processes the taxi records of the current airport in the past year, including the taxi track of the aircraft, key node speed parameters and time stamp records, and provides basic data for later Agent learning.
The Agent model comprises an aircraft Agent and an intersection Agent, as shown in fig. 2. The aircraft Agent is mainly responsible for shortest-path searching in the model: it sorts the Q values of the adjacent nodes in the constructed Q-value network, selects the node that is currently conflict-free and has the highest Q value, and finally performs the shortest-path search.
The intersection Agent is mainly responsible for tracking in real time the states of the aircraft near its intersection and for raising conflict alarms according to the relevant constraint conditions. If a conflict occurs at the intersection, it broadcasts an alarm message to all aircraft on the surface and gives corresponding actions. Finally, the intersection Agent evaluates each aircraft passing through the node against the specified standard transit time and gives specific speed suggestions.
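A minimal sketch of the conflict check such an intersection Agent could perform is shown below; the request structure and the numeric safety thresholds are illustrative assumptions only.

```python
def check_conflict(requests, min_separation_m=200.0, speed_band_mps=(2.0, 15.0)):
    """Conflict check for one intersection.
    requests: list of dicts such as {"aircraft": "CCA1234", "speed_mps": 8.0,
    "separation_m": 350.0} for the aircraft currently requesting the intersection."""
    if len(requests) > 1:
        return True                                  # simultaneous requests count as a conflict
    for req in requests:
        lo, hi = speed_band_mps
        if not lo <= req["speed_mps"] <= hi:
            return True                              # speed outside the assumed safe band
        if req["separation_m"] < min_separation_m:
            return True                              # below the minimum safety separation
    return False
```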
Q-value network construction: the constructed network serves as prior knowledge for the aircraft; the larger a node's value, the smaller the probability that conflicts occurred at that node in the historical taxiing data. The specific construction method is as follows:
initializing scene related data, including the number m of aircrafts and the number n of intersections, abstracting the scene into a directed graph G between nodes and endpoints, wherein the specific method comprises the following steps: regarding each intersection as a node, each node is connected through a directional arrow, the path of the aircraft from the sliding runway to the apron can be regarded as the directional connection between the nodes, taking the schematic diagram of the field plane of fig. 3 as an example, and describing the description with reference to fig. 4, wherein 001, 010, …,110 are the number of the divided intersections, the directional line segments between the nodes represent the subsections between the intersections, and the arrow direction represents the direction of the path in which the aircraft can operate;
step two, inputting historical data, reading in nodes passed by the aircraft, and training, wherein specific standards are set as follows:
(a) If the transit time through the intersection is within the standard range of 15 to 30 seconds, the reward value is 5;
(b) If the time is less than 15 seconds, a conflict alarm is triggered, deceleration is suggested, and the reward value is 1;
(c) If the time is more than 30 seconds, a conflict alarm is triggered, acceleration is suggested, and the reward value is 2;
(d) If a conflict occurs, the reward value is 0.
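The reward rule listed above could be sketched as follows; treating the transit time as a value in seconds and the conflict flag as supplied by the intersection Agent are assumptions of this sketch.

```python
def intersection_reward(transit_time_s, in_conflict):
    """Reward fed back for one aircraft passage, following the 15 s / 30 s standard."""
    if in_conflict:
        return 0        # (d) conflict: lowest reward
    if transit_time_s < 15:
        return 1        # (b) too fast: conflict alarm, suggest deceleration
    if transit_time_s > 30:
        return 2        # (c) too slow: conflict alarm, suggest acceleration
    return 5            # (a) within the 15-30 s standard window
```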
Step three, perform the iterative updating; once the cumulative values of all nodes have converged, the construction of the surface Q-value network is complete. The specific iterative update formula is as follows:
Q(O_t, A_t) ← Q(O_t, A_t) + α[ r_t + γ·max_{A*} Q(O_{t+1}, A*) - Q(O_t, A_t) ]
where α represents the learning rate, i.e. the step size of each update, taken as α = 0.001, and r_t denotes the reward obtained when the aircraft takes action A_t in the current state O_t. The action set {A} includes {accelerate, decelerate, turn, go straight, wait}. γ denotes the discount factor and represents the degree to which the value fed back by the action taken in the aircraft's state at the next moment affects the current value. A* denotes the action in the action set that maximizes the value given the environment O_{t+1} at the next moment, also called the optimal action. As the number of iterations increases, the values of all nodes converge.
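Putting the update rule and the reward standard together, a training loop over the historical taxi records might be sketched as below; the record format (each run a list of (state, action, reward, next_state) tuples) and the convergence tolerance are assumptions for illustration.

```python
def build_q_network(history, actions, alpha=0.001, gamma=0.9,
                    tol=1e-6, max_epochs=1000):
    """Iterate the Q-learning update over the historical taxi runs until no
    node value changes by more than `tol`.
    history: list of runs, each run a list of (state, action, reward, next_state)."""
    q_table = {}
    for _ in range(max_epochs):
        max_delta = 0.0
        for run in history:
            for state, action, reward, next_state in run:
                best_next = max((q_table.get((next_state, a), 0.0) for a in actions),
                                default=0.0)
                old = q_table.get((state, action), 0.0)
                new = old + alpha * (reward + gamma * best_next - old)
                q_table[(state, action)] = new
                max_delta = max(max_delta, abs(new - old))
        if max_delta < tol:          # all node values have converged
            break
    return q_table
```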
Real-time path planning strategy: the constructed network serves as prior knowledge for the aircraft and provides a reference when the aircraft searches for the next node. The specific steps are as follows:
initializing a starting point position of an aircraft Agent, searching according to an abstract scene node directed graph, searching nodes adjacent to the current position, adding the nodes into a candidate set N, and marking all elements in the set as legal candidate nodes as N= { N 1 ,n 2 ,...,n i ,...,n k N is }, where n i Representing the ith optional next planned path node, and k represents the number of nodes in the whole candidate set;
step two, sorting the node profit values in the set N from large to small according to the constructed Q value network, wherein the Q value reflects the probability of the node encountering conflict in the history, the larger the profit value is, the smaller the node flow is, the less the node is easy to collide, otherwise, the node is easy to collide, then the node with the current collision is eliminated according to the alarm information of the Agent at the intersection, the collision-free node is left, and the node is added into a new set R, and is recorded as R= { N i ,...,n j },n i ,n j Respectively representing the ith and the jth optional transition nodes, wherein all elements in R represent all conflict-free transition node sets in the current scene state;
Step three, compute the shortest paths from all nodes in set R to the end point; the specific shortest-path algorithm is Dijkstra's algorithm, described in detail as follows:
step 1, select the designated node and list the weights from this node to the other nodes, with non-adjacent nodes set to infinity;
step 2, select the minimum among these weights, which is the shortest path from the start point to the corresponding vertex, and mark that vertex;
step 3, for each remaining unmarked vertex, compare the direct distance from the start point with the sum of the distance from the start point to the just-marked vertex and the distance from that marked vertex to the unmarked vertex; if the sum is smaller, update the corresponding weight;
step 4, return to step 2;
Finally, the node in set R with the shortest path to the destination apron is selected as the next node in the path planning.
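For completeness, a compact heap-based variant of the Dijkstra computation used in step three is sketched below; it assumes the same adjacency-list graph format as the surface_graph sketch earlier and is not the literal label-marking procedure of steps 1-4.

```python
import heapq

def dijkstra(graph, start):
    """Shortest taxi distance from `start` to every node of the directed graph.
    graph: {node: {neighbour: segment_length}}; every node must appear as a key."""
    dist = {node: float("inf") for node in graph}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue                                  # stale heap entry
        for nxt, length in graph[node].items():
            nd = d + length
            if nd < dist[nxt]:
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist
```

In the planning loop, dijkstra(surface_graph, n) would then be evaluated for each conflict-free node n in R, and the node with the smallest distance to the destination apron chosen as the next node.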

Claims (1)

1. A multi-Agent airport surface taxi path planning method based on historical data analysis, characterized by comprising an analysis and processing module for historical aircraft taxiing data, a Q-value network training module, a real-time path planning module, a conflict detection module and a conflict avoidance module, and specifically comprising the following steps:
(1) Processing and analyzing the historical taxiing data of aircraft, specifically including the arrival and departure path sequence of each single aircraft and the relationship between the number of surface aircraft and the time series, and analyzing the conflict hot zones during peak-flow periods;
(2) Based on the determined conflict hot-spot layout and peak-flow time series, setting up aircraft Agents and taxiway-intersection Agents, wherein the aircraft Agent is responsible for searching for the shortest taxi path under conflict-free conditions, and the taxiway-intersection Agent is responsible for the surface taxi conflict alarm service and the conflict resolution strategy service;
(3) Constructing a Q-value network: combining prior knowledge through the Q-learning algorithm and using the network as the reference for the next moment; if conflict nodes exist in the planned shortest path, a corresponding Q value is generated as feedback for every conflict produced during training, until the values of all nodes converge;
(4) According to the current surface situation, performing real-time path planning for the aircraft Agent: at an intersection, first finding the directly adjacent intersections according to the prompts of the intersection Agents, then searching for a path according to the constraint rules, sorting the Q values of the currently searched intersections from largest to smallest, and selecting the intersection with the largest value as the candidate node while also satisfying the shortest-path constraint;
the specific steps of the step (1) are as follows:
the method comprises the steps of (1.1) analyzing historical sliding data of all aircrafts in the whole scene to obtain conflict hot spot areas and scene flow peak time sequence data, wherein the data are used as priori knowledge of an Agent of the aircrafts after feature extraction;
(1.2) analyzing single aircraft taxiing data, namely an incoming and outgoing path sequence, comprising a runway number, a taxiway intersection and an apron number, which are used for Agent training;
the specific steps of the step (2) are as follows:
(2.1) the aircraft Agent is responsible for searching the current shortest path under the condition of no conflict, and determining a behavior including acceleration, deceleration, straight running and turning according to the current environment and combining with the taxi-way intersection Agent policy function;
(2.2) the agents at the intersections of the taxiways give conflict alarms for the states of the aircrafts on the scene, firstly, each Agent at the intersection checks the aircrafts requesting the intersection, including whether the speed and the interval meet the safety standard or not and whether the conflict exists or not, and a plurality of aircrafts initiate the request at the same time to be regarded as the conflict;
the specific steps of the step (3) are as follows:
training by adopting a Q learning algorithm according to historical sliding data, firstly, carrying out path search by each aircraft Agent according to a starting point and an ending point by combining the current state, generating an alternative path by adopting a greedy algorithm of the current shortest path, feeding back a benefit value at a corresponding intersection if sliding conflict exists, and continuously searching for a suboptimal shortest path until no conflict reaches the ending point;
a single historical taxiing process for each aircraft Agent is represented by a sequence of behaviors, denoted (O) 1 ,R 1 ,A 1 ,...,O i ,R i ,A i ) Wherein O is t ,R t ,A t Respectively representing the observation state of the Agent at the time t, the obtained benefits and the corresponding actions taken, (O) t ,R t ,A t ) Representing the complete action of the Agent at one time; two classes of agents are involved on the scene, all Agent sets being denoted { A } 1 ,A 2 …,A i ,…,A m ,B 1 ,B 2 ,…B j ,…B n }, wherein A i 、B j Respectively representing the ith aircraft Agent and the jth taxiway intersection Agent, and m and n respectively representing the number of the aircraft and the taxiway intersections; the Q learning iterative update formula is as follows:
Q(O_t, A_t) ← Q(O_t, A_t) + α[ r_t + γ·max_{A*} Q(O_{t+1}, A*) - Q(O_t, A_t) ]
where α represents the learning rate, i.e. the step size of each update, taken as α = 0.001; r_t represents the reward obtained by taking action A_t in the current state; γ represents the discount factor, i.e. the weight of the value fed back by the action taken at the next moment; A* represents, among the actions that can be taken at the next moment according to the environment O_{t+1}, the action that maximizes the value; as the number of iterations increases, the values of all nodes converge;
(3.2) When the values of all intersections have converged, the initialization of the Q-value network is complete; the Q value reflects the conflict probability of each intersection, and the smaller the Q value, the larger the probability that a conflict occurs at the intersection; during path planning the value is continuously updated with real-time feedback on the basis of this prior probability;
The specific steps of step (4) are as follows:
(4.1) The aircraft Agent searches for all adjacent reachable intersection nodes from its current position and sorts their Q values in descending order; the node with the largest Q value is considered as the next candidate node; if a conflict exists, that node is deleted from the candidate set and the node with the next-best Q value is considered, until a conflict-free node is found and added to the candidate set, continuing until all candidate nodes have been traversed;
(4.2) From the nodes in the candidate set, selecting the next candidate node according to the shortest path to the destination.
CN202110749433.7A 2021-07-01 2021-07-01 Multi-Agent airport surface taxi path planning method based on historical data analysis Active CN113610271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110749433.7A CN113610271B (en) Multi-Agent airport surface taxi path planning method based on historical data analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110749433.7A CN113610271B (en) Multi-Agent airport surface taxi path planning method based on historical data analysis

Publications (2)

Publication Number Publication Date
CN113610271A CN113610271A (en) 2021-11-05
CN113610271B true CN113610271B (en) 2023-05-02

Family

ID=78337202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110749433.7A Active CN113610271B (en) Multi-Agent airport surface taxi path planning method based on historical data analysis

Country Status (1)

Country Link
CN (1) CN113610271B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114254567A (en) * 2021-12-29 2022-03-29 北京博能科技股份有限公司 Airport fusion simulation method based on Multi-Agent and reinforcement learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537431A (en) * 2014-12-16 2015-04-22 南京航空航天大学 Taxiway path optimizing method based on collision detection
CN109361601A (en) * 2018-10-31 2019-02-19 浙江工商大学 A kind of SDN route planning method based on intensified learning
CN109540151A (en) * 2018-03-25 2019-03-29 哈尔滨工程大学 A kind of AUV three-dimensional path planning method based on intensified learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2818850B1 (en) * 2000-12-22 2003-01-31 Commissariat Energie Atomique REFLEX ADAPTIVE ROUTING METHOD WITH REINFORCEMENT LEARNING
US20210103286A1 (en) * 2019-10-04 2021-04-08 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Systems and methods for adaptive path planning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537431A (en) * 2014-12-16 2015-04-22 南京航空航天大学 Taxiway path optimizing method based on collision detection
CN109540151A (en) * 2018-03-25 2019-03-29 哈尔滨工程大学 A kind of AUV three-dimensional path planning method based on intensified learning
CN109361601A (en) * 2018-10-31 2019-02-19 浙江工商大学 A kind of SDN route planning method based on intensified learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A New Traffic Flow Control Method for Terminal Control Area Using Dioid Algebra; HAN YUN-XIANG et al.; IEEE Transactions on Aerospace and Electronic Systems; Vol. 57, No. 4; pp. 2459-2468 *
Multi-Agent-based optimal taxiing path algorithm for airport surface (基于多Agent的机场场面最优滑行路径算法); You Jie (尤杰) et al.; Journal of Traffic and Transportation Engineering (交通运输工程学报); Vol. 9, No. 1; pp. 109-112 *

Also Published As

Publication number Publication date
CN113610271A (en) 2021-11-05

Similar Documents

Publication Publication Date Title
Lin et al. Deep learning based short-term air traffic flow prediction considering temporal–spatial correlation
Ikli et al. The aircraft runway scheduling problem: A survey
Benlic et al. Heuristic search for the coupled runway sequencing and taxiway routing problem
Zhang et al. A bi-level cooperative operation approach for AGV based automated valet parking
CN112489426A (en) Urban traffic flow space-time prediction scheme based on graph convolution neural network
Ai et al. A deep learning approach to predict the spatial and temporal distribution of flight delay in network
CN113610271B (en) Multi-Agent airport scene sliding path planning method based on historical data analysis
CN114664122B (en) Conflict minimized flight path planning method considering high altitude wind uncertainty
Yin et al. Joint apron-runway assignment for airport surface operations
CN114117700A (en) Urban public transport network optimization research method based on complex network theory
Lin et al. Approach for 4-d trajectory management based on HMM and trajectory similarity
Jiang et al. A collaborative optimization model for ground taxi based on aircraft priority
Guclu et al. Analysis of aircraft ground traffic flow and gate utilisation using a hybrid dynamic gate and taxiway assignment algorithm
Tariq et al. Combining machine learning and fuzzy rule-based system in automating signal timing experts’ decisions during non-recurrent congestion
CN114253975B (en) Load-aware road network shortest path distance calculation method and device
CN116050245A (en) Highway automatic driving commercial vehicle track prediction and decision method and system based on complex network theory
Zhang et al. Direction-decision learning based pedestrian flow behavior investigation
Patil Machine Learning for Traffic Management in Large-Scale Urban Networks: A Review
CN111582592A (en) Regional airport group navigation line network optimization method
CN115187093B (en) Airport scene operation optimization method, device, equipment and readable storage medium
Ye et al. Data-driven distributionally robust generation of time-varying flow corridor networks under demand uncertainty
Yao et al. A path planning model based on spatio-temporal state vector from vehicles trajectories
CN114118578A (en) Calculation method for predicting flight arrival time based on air trajectory and big data
Chu et al. Hierarchical Method for Mining a Prevailing Flight Pattern in Airport Terminal Airspace
Wang et al. A review of flight delay prediction methods

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant