CN113610271A - Multi-Agent airport scene sliding path planning method based on historical data analysis

Multi-Agent airport scene sliding path planning method based on historical data analysis

Info

Publication number
CN113610271A
Authority
CN
China
Prior art keywords
agent
conflict
aircraft
intersection
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110749433.7A
Other languages
Chinese (zh)
Other versions
CN113610271B (en)
Inventor
Han Yunxiang
Zhang Jianwei
He Aiping
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202110749433.7A priority Critical patent/CN113610271B/en
Publication of CN113610271A publication Critical patent/CN113610271A/en
Application granted granted Critical
Publication of CN113610271B publication Critical patent/CN113610271B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047Optimisation of routes or paths, e.g. travelling salesman problem
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/06Traffic control systems for aircraft, e.g. air-traffic control [ATC] for control when on the ground
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a multi-Agent airport surface taxi path planning method based on historical data analysis. The method can greatly reduce the possibility of ground conflicts between aircraft and improve surface operation efficiency. Historical taxiing data are analyzed along several dimensions, including time-series data and surface resources, to obtain the conflict hotspot regions and conflict peak periods of historical operations; corresponding agents are then set up for intelligent learning. A shortest-path search strategy is adopted so that the taxi path is shortest under constraints such as minimum safety separation and taxi speed limits, and a corresponding priority-based conflict avoidance method is applied when a conflict arises.

Description

Multi-Agent airport scene sliding path planning method based on historical data analysis
Technical Field
The invention relates to a multi-Agent airport surface taxi path planning method based on historical data analysis, and belongs to the field of airport surface path planning.
Background
In recent years, with the rapid development of the civil aviation and air traffic management industry, passengers pay increasing attention to flight punctuality and expect a better travel experience, while surface operation efficiency affects flight progress to a certain extent. To cope with continuously growing demand under extremely limited hardware resources (such as runways and aprons), an intelligent surface management system must be established, so as to realize an intelligent scheme that maximizes the utilization of airport resources, optimizes surface operation efficiency and maximizes economic benefit.
The key to improving surface operation efficiency lies in systematically and effectively planning the taxi trajectories of landing aircraft. For a busy large airport, various conflicts are difficult to avoid during taxi path planning, mainly at intersections and connecting nodes in the airport maneuvering area. Because radar coverage of the airport is incomplete, the surface layout may be unreasonable, and the guidance signs or ground lights at intersections and connecting nodes may be poorly placed, flight crews can become confused while taxiing, leading to wrong turns and other irregular taxiing behavior and ultimately to ground taxi conflicts. Therefore, the planner must be prepared to handle conflicts at any time, which is also a precondition for ensuring the safety of path planning.
At present there are many studies on the airport surface at home and abroad, but research on aircraft surface taxi path planning is rare; many researchers focus on conflict resolution and neglect planning. The path planning addressed here is real-time local planning rather than static global planning, and it involves a huge path search space. Researchers have proposed the A* algorithm, genetic algorithms, Monte Carlo methods and the like to solve the problem.
Disclosure of Invention
To address these problems, the invention provides a multi-Agent airport surface taxi path planning method based on historical data analysis.
The multi-Agent airport surface taxi path planning method based on historical data analysis disclosed by the invention has the following beneficial effects:
(a) congestion on the airport surface can be relieved, the operating pressure of the surface is effectively reduced, and efficiency is improved;
(b) taxi time during arrival and departure is reduced, so flight punctuality is improved;
(c) efficient path planning reduces the probability of aircraft conflicts to a certain extent;
(d) the workload of pilots and controllers is reduced, and airport surface management efficiency is improved.
To achieve the above purpose, the invention adopts the following technical scheme:
a multi-Agent airport surface taxi path planning method based on historical data analysis comprises the following specific steps:
Step one, analyzing historical taxiing data of aircraft, setting a conflict hotspot threshold according to the historical number of conflicts at each intersection on the surface, and finally obtaining the distribution of surface conflict hotspot regions; meanwhile, collecting aircraft taxi paths from start point to end point, including the intersections passed along the way, to facilitate model learning (a sketch of this hotspot analysis follows the step list below);
Step two, setting two types of agents, namely aircraft agents and taxiway-intersection agents; agents of the same type play games against each other and compete for surface resources, and aircraft search for the shortest path to their end points, so a conflict inevitably arises if they apply for runway and taxiway resources at the same time; agents of different types cooperate with each other: the taxiway-intersection Agent gives an action strategy according to the environment in which the aircraft Agent is located, and the aircraft then takes a reasonable action subject to the constraint conditions;
Step three, performing model training with the taxiing data from step one: initialize the surface environment, set m aircraft agents and n taxiway-intersection agents, define the initial aircraft positions, and then, with reference to the historical taxi tracks, avoid conflict hotspot regions according to their conflict probabilities. A shortest-path strategy is first adopted to search for the next node, and the action taken at each intersection is recorded; if there is no conflict, a Q value is returned as a reward, and if there is a conflict, a corresponding Q value is fed back as a penalty, until a conflict-free scheme with a short path is found. After all data have been trained, the Q-value network is constructed; the Q value directly reflects the probability of a conflict at an intersection;
Step four, completing path planning with the Q-value network constructed in step three: the aircraft Agent first sorts the Q values of its adjacent reachable intersections and selects the intersection with the largest Q value as the next candidate node; if a conflict occurs, the intersection Agent gives a different conflict-resolution strategy according to the conflict type; if no conflict occurs, this step is repeated until the aircraft Agent safely reaches the end point.
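As a minimal illustration of the hotspot analysis in step one, the sketch below counts historical conflicts per intersection and flags those above a threshold; the record layout, the function name and the threshold value are assumptions added for illustration, not values taken from the patent.

```python
from collections import Counter

def find_conflict_hotspots(conflict_records, threshold=10):
    """Count historical conflicts per intersection and flag hotspots.

    conflict_records: iterable of (timestamp, intersection_id) pairs drawn from
    historical taxiing data. Both the record layout and the default threshold
    are illustrative assumptions.
    """
    counts = Counter(intersection for _, intersection in conflict_records)
    return {node: c for node, c in counts.items() if c >= threshold}

# Hypothetical usage: three historical conflicts at node "010", one at "001".
records = [("08:01", "010"), ("08:05", "010"), ("09:12", "001"), ("09:30", "010")]
print(find_conflict_hotspots(records, threshold=2))   # -> {'010': 3}
```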
Further, the model used in step three is formulated as follows:
$$Q(O_t, A_t) \leftarrow Q(O_t, A_t) + \alpha \left[ R_t + \gamma \max_{A^*} Q\left(O_{t+1}, A^*\right) - Q(O_t, A_t) \right]$$
where α denotes the learning rate, i.e. the step size of each update, taken as 0.001, and R_t denotes the reward obtained when the aircraft takes action A_t in the current state O_t. When an aircraft Agent passes an intersection, if the action recommended by the intersection Agent lets the aircraft pass safely and without conflict, a reward value is fed back to the intersection Agent. The reward value is determined by the time the aircraft takes to taxi through that segment, according to the following standard:
(a) if the elapsed time is within the standard range of 15 to 30 seconds, the reward value is 5;
(b) if the elapsed time is less than 15 seconds, a conflict alarm is triggered, deceleration is recommended, and the reward value is 1;
(c) if the elapsed time is more than 30 seconds, a conflict alarm is triggered, acceleration is recommended, and the reward value is 2;
(d) if a conflict is encountered, a lower reward of 0 is fed back.
The action set {A} includes {accelerate, decelerate, turn, go straight, wait}. γ denotes the discount factor, i.e. the degree to which the reward fed back for the action taken in the state encountered at the next moment influences the current value. A* denotes the action in the action set that maximizes the value given the environment O_{t+1} at the next moment, also referred to as the optimal action. As the number of iterations increases, the values of all nodes converge.
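For illustration only, the sketch below implements the tabular Q-learning update above together with the reward standard (a)-(d); the state encoding, the discount-factor value and the epsilon-greedy exploration are assumptions added to make the example self-contained, not parameters given in the patent.

```python
import random
from collections import defaultdict

ACTIONS = ["accelerate", "decelerate", "turn", "straight", "wait"]
ALPHA = 0.001          # learning rate, as stated in the description
GAMMA = 0.9            # discount factor; this value is an assumption

def segment_reward(elapsed_seconds, conflict):
    """Reward standard (a)-(d) for taxiing through one segment."""
    if conflict:
        return 0       # (d) conflict encountered
    if elapsed_seconds < 15:
        return 1       # (b) too fast: conflict alarm, deceleration recommended
    if elapsed_seconds > 30:
        return 2       # (c) too slow: conflict alarm, acceleration recommended
    return 5           # (a) within the standard 15-30 s window

Q = defaultdict(float)  # Q[(state, action)] -> value

def q_update(state, action, reward, next_state):
    """Tabular Q-learning update: Q <- Q + alpha*(r + gamma*max_a' Q(s',a') - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy selection over the action set (the exploration scheme is assumed)."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])
```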
Further, in step four, the trained Q-value network is used as prior knowledge and combined with the current environment O_t of the aircraft. The adjacent next nodes are searched and added to the candidate node set, denoted N = {n_1, n_2, ..., n_i, ..., n_k}, where n_i denotes the i-th optional node and k denotes the number of nodes in the candidate set. The accumulated values of the k nodes are then sorted from large to small: the node with the largest value is considered first and checked for conflicts; if a conflict exists, its value is updated, and the remaining nodes are considered in decreasing order of value. All nodes are traversed in turn and the conflict-free ones are added to the candidate set; finally, the candidates are sorted by their shortest path to the apron from small to large, and the node with the shortest distance is selected as the next node to consider in path planning.
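A minimal sketch of this selection step, assuming a trained Q table, a conflict-check callback from the intersection agents and a precomputed shortest-distance-to-apron table (all three helpers are placeholders, not names from the patent):

```python
def select_next_node(current, graph, q_value, has_conflict, dist_to_apron):
    """Choose the next taxi node for an aircraft Agent.

    graph[current]       -> iterable of adjacent reachable intersections
    q_value(node)        -> learned Q value of a node (assumed helper)
    has_conflict(node)   -> conflict check from the intersection Agent (assumed helper)
    dist_to_apron[node]  -> precomputed shortest remaining distance (assumed table)
    """
    # Traverse candidates in decreasing order of Q value, as in the description.
    candidates = sorted(graph[current], key=q_value, reverse=True)
    conflict_free = [n for n in candidates if not has_conflict(n)]
    if not conflict_free:
        return None                     # no safe move: wait at the current node
    # Among conflict-free nodes, pick the one closest to the apron.
    return min(conflict_free, key=lambda n: dist_to_apron[n])
```

In this sketch the Q ordering only fixes the traversal order; the final choice among conflict-free nodes is by remaining distance, mirroring the text above.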
Based on the analysis of historical taxiing data of surface aircraft, the method can solve the surface path planning problem and provide decision support for surface aircraft scheduling.
The planning scheme of the invention conforms to the command habits of surface scheduling personnel and can readily be embedded into existing surface scheduling automation systems.
Drawings
FIG. 1 is a schematic illustration of surface conflict types;
FIG. 2 is a schematic diagram of the aircraft Agent module;
FIG. 3 is a schematic diagram of surface path planning;
FIG. 4 is a schematic diagram of the directed graph obtained by abstracting the surface into nodes.
Detailed Description
A multi-Agent airport surface taxi path planning method based on historical data analysis comprises the following modules: analysis and processing of historical taxiing data, establishment of the Agent model, construction of the Q-value network, and real-time path planning.
Based on the conflict types shown in FIG. 1, the taxi history data analysis and processing module processes the taxi records of the current airport over the past year, including aircraft taxi tracks, speed parameters at key nodes and timestamp records, and provides basic data for later Agent learning.
The Agent model comprises aircraft agents and intersection agents. As shown in FIG. 2, the aircraft Agent is mainly responsible for shortest-path search: it sorts the Q values of adjacent nodes in the constructed Q-value network, selects the node that is currently conflict-free and has the largest Q value, and then performs the shortest-path search.
The intersection Agent is mainly responsible for tracking, in real time, the state of aircraft near its intersection and raising conflict alarms according to the relevant constraints; if a conflict occurs at the intersection, it broadcasts the alarm to all aircraft on the surface and gives corresponding actions, and it finally evaluates aircraft passing the node against the specified standard passing time and gives a concrete speed suggestion.
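Purely as an illustration of these responsibilities, the sketch below models an intersection Agent that treats simultaneous requests as a conflict, broadcasts an alarm, and checks segment time against the 15-30 s standard; the class and method names are assumptions, not part of the patent.

```python
class IntersectionAgent:
    """Illustrative intersection Agent: conflict alarm plus speed advice."""

    def __init__(self, node_id, min_transit_s=15, max_transit_s=30):
        self.node_id = node_id
        self.min_transit_s = min_transit_s   # standard passing-time window from the text
        self.max_transit_s = max_transit_s
        self.pending_requests = []           # aircraft currently requesting this node

    def request_passage(self, aircraft_id):
        """More than one simultaneous request for the node is treated as a conflict."""
        self.pending_requests.append(aircraft_id)
        if len(self.pending_requests) > 1:
            self.broadcast_alarm(f"conflict at node {self.node_id}: {self.pending_requests}")
            return False
        return True

    def clear_request(self, aircraft_id):
        """Called once the aircraft has passed the intersection."""
        self.pending_requests.remove(aircraft_id)

    def evaluate_transit(self, elapsed_seconds):
        """Speed suggestion against the standard 15-30 s passing time."""
        if elapsed_seconds < self.min_transit_s:
            return "decelerate"
        if elapsed_seconds > self.max_transit_s:
            return "accelerate"
        return "maintain speed"

    def broadcast_alarm(self, message):
        print(message)                       # stand-in for broadcasting to all surface aircraft
```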
The Q-value network is constructed and used as the prior knowledge of the aircraft; the larger a node's value, the smaller the probability that the node conflicted in the historical taxiing data. The specific construction method is as follows:
Step one, initializing the relevant surface data, including the number of aircraft m and the number of intersections n, and abstracting the surface into a directed graph G between nodes and end points. Specifically, each intersection is regarded as a node, nodes are connected by directed arrows, and an aircraft's path from the landing runway to the apron is regarded as a directed connection between nodes. Taking the surface schematic of FIG. 3 as an example and referring to FIG. 4, 001, 010, ..., 110 are the assigned intersection numbers, the directed segments between nodes represent the taxi segments between intersections, and the arrow direction represents the direction in which an aircraft may travel (an adjacency-list sketch of this abstraction follows step three below);
Step two, inputting the historical data, reading in the nodes passed by each aircraft, and training; the specific reward standard is as follows:
(a) if the elapsed time is within the standard range of 15 to 30 seconds, the reward value is 5;
(b) if the elapsed time is less than 15 seconds, a conflict alarm is triggered, deceleration is recommended, and the reward value is 1;
(c) if the elapsed time is more than 30 seconds, a conflict alarm is triggered, acceleration is recommended, and the reward value is 2;
(d) if a conflict is encountered, the feedback reward is 0.
Step three, performing iterative updates; the construction of the surface Q-value network is complete when the accumulated values of all nodes have converged. The iterative update formula is:
$$Q(O_t, A_t) \leftarrow Q(O_t, A_t) + \alpha \left[ R_t + \gamma \max_{A^*} Q\left(O_{t+1}, A^*\right) - Q(O_t, A_t) \right]$$
where α denotes the learning rate, i.e. the step size of each update, taken as 0.001, and R_t denotes the reward obtained when the aircraft takes action A_t in the current state O_t. The action set {A} includes {accelerate, decelerate, turn, go straight, wait}. γ denotes the discount factor, i.e. the degree to which the reward fed back for the action taken in the state encountered at the next moment influences the current value. A* denotes the action in the action set that maximizes the value given the environment O_{t+1} at the next moment, also referred to as the optimal action. As the number of iterations increases, the values of all nodes converge.
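As a minimal sketch of the directed-graph abstraction described in step one above, the adjacency list below uses the node labels of FIG. 4; the specific edges and segment lengths are illustrative assumptions only.

```python
# Directed graph of the surface: intersection -> [(next intersection, segment length in metres)].
# Node labels follow FIG. 4 (001 ... 110); the edges and lengths are illustrative assumptions.
SURFACE_GRAPH = {
    "001": [("010", 420.0), ("011", 365.0)],
    "010": [("100", 510.0)],
    "011": [("100", 280.0), ("101", 330.0)],
    "100": [("110", 250.0)],
    "101": [("110", 400.0)],
    "110": [],                # apron node: no outgoing taxi segments
}

def neighbours(node):
    """Reachable next intersections from a given node."""
    return [nxt for nxt, _ in SURFACE_GRAPH[node]]
```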
In the real-time path planning strategy, the constructed network is used as the aircraft's prior knowledge and provides a reference when the aircraft searches for the next node. The specific steps are as follows:
Step one, initializing the starting position of the aircraft Agent, searching in the abstracted directed graph of surface nodes for the nodes adjacent to the current position, and adding them to the candidate set N; all elements of the set are legal candidate nodes, denoted N = {n_1, n_2, ..., n_i, ..., n_k}, where n_i denotes the i-th optional next path node and k denotes the number of nodes in the candidate set;
Step two, according to the constructed Q-value network, sorting the node values in set N from large to small. The Q value reflects the probability that the node historically encountered conflicts: the larger the value, the smaller the node's traffic and the less likely a conflict; conversely, a small value indicates a conflict-prone node. Nodes that currently conflict are then eliminated according to the alarm information from the intersection agents, and the currently conflict-free nodes are added to a new set R, denoted R = {n_i, ..., n_j}, where n_i and n_j denote the i-th and j-th optional transition nodes; the elements of R together form the set of conflict-free transition nodes in the current surface state;
Step three, calculating the shortest path from every node in R to the end point using Dijkstra's algorithm, which proceeds as follows (a code sketch follows at the end of this procedure):
Step 1: select the designated start node and list its weights to all other nodes; the weight to a non-adjacent node is infinite;
Step 2: select the minimum of these weights; it is the shortest path from the start node to the corresponding vertex, and mark that vertex;
Step 3: for every unmarked vertex, compare the direct distance from the start node with the sum of the distance from the start node to the just-marked vertex and the distance from that vertex onward; if the sum is smaller, update the corresponding weight;
Step 4: return to step 2.
Finally, the node in R with the shortest path to the terminal apron is selected as the next node in the path plan.
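A compact sketch of this procedure over an adjacency-list graph like the one shown earlier; the heap-based formulation is an implementation choice, not part of the patent text.

```python
import heapq

def dijkstra_distance(graph, source, target):
    """Shortest taxi distance from source to target over a directed
    adjacency list of the form {node: [(neighbour, segment_length), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue                           # stale heap entry
        for nxt, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")                        # target unreachable
```

With the illustrative SURFACE_GRAPH above, dijkstra_distance(SURFACE_GRAPH, "001", "110") returns the shortest taxi distance from node 001 to the apron node 110; in the planner this would be evaluated for every node in R and the node with the smallest distance chosen.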

Claims (5)

1. A multi-Agent airport surface taxi path planning method based on historical data analysis, characterized by comprising an aircraft taxiing historical data analysis and processing module, a Q-value network training module, a real-time path planning module, a conflict detection module and a conflict avoidance module, and specifically comprising the following steps:
(1) processing and analyzing historical aircraft taxiing data, specifically including the arrival and departure path sequence of each aircraft and the relation between the number of aircraft on the surface and the time series, and analyzing the conflict hotspot areas during flow peak periods;
(2) setting aircraft agents and taxiway-intersection agents based on the determined conflict hotspot layout and the flow peak time series, wherein the aircraft Agent is responsible for searching the shortest taxi route under conflict-free conditions, and the taxiway-intersection Agent is responsible for the surface taxi conflict warning service and the conflict-resolution strategy service;
(3) constructing a Q-value network by combining the prior knowledge through a Q-learning algorithm and using the network as the reference for the next moment; if conflict nodes exist in the planned shortest path, corresponding Q values are generated as feedback for every conflict that occurs in training, until the reward values of all nodes converge;
(4) planning the aircraft agents in real time according to the current surface situation: at an intersection, the aircraft first finds the directly adjacent, reachable intersections according to the prompts of the intersection agents, and then performs the path search according to the constraint rules.
2. The multi-Agent airport surface taxi path planning method based on historical data analysis of claim 1, wherein: the step (1) comprises the following specific steps:
(1.1) analyzing the historical taxiing data of all aircraft on the whole surface to obtain the conflict hotspot areas and the surface flow peak time-series data, which after feature extraction serve as the aircraft's prior knowledge;
(1.2) analyzing the taxiing data of each individual aircraft, namely the sequence of its arrival and departure paths, including the runway number, taxiway intersections and apron number, for Agent training.
3. The multi-Agent airport surface taxi path planning method based on historical data analysis of claim 1, wherein: the step (2) comprises the following specific steps:
(2.1) the aircraft Agent is responsible for searching the current shortest conflict-free path and determining a behavior, including acceleration, deceleration, going straight and turning, according to the current environment in combination with the policy function of the taxiway-intersection Agent;
(2.2) the taxiway-intersection agents give conflict alarms according to the states of aircraft on the surface: each intersection Agent first checks the aircraft requesting the intersection, including whether their speed and separation meet the safety standard and whether conflicts exist, and several aircraft initiating requests at the same time is regarded as a conflict.
4. The multi-Agent airport surface taxi path planning method based on historical data analysis of claim 1, wherein: the step (3) comprises the following specific steps:
(3.1) training with a Q-learning algorithm on the historical taxiing data of claim 2: first, each aircraft Agent performs a path search from its start point to its end point in combination with the current state, and an alternative path is generated by a greedy algorithm on the current shortest path; if a taxi conflict exists, a reward value is fed back at the corresponding intersection and the suboptimal shortest path is searched, until the end point is reached without conflict. A single historical taxi process of an aircraft Agent is represented by a behavior sequence, denoted (O_1, R_1, A_1, ..., O_i, R_i, A_i), where O_t, R_t and A_t respectively denote the state observed by the Agent, the reward obtained and the action taken at time t, and (O_t, R_t, A_t) represents one complete action of the Agent. Two types of agents are involved on the surface, and the set of all agents is expressed as {A_1, A_2, ..., A_i, ..., A_m, B_1, B_2, ..., B_j, ..., B_n}, where A_i and B_j respectively denote the i-th aircraft Agent and the j-th taxiway-intersection Agent, and m and n respectively denote the numbers of aircraft and taxiway intersections; the Q-learning iterative update formula is:
$$Q(O_t, A_t) \leftarrow Q(O_t, A_t) + \alpha \left[ R_t + \gamma \max_{A^*} Q\left(O_{t+1}, A^*\right) - Q(O_t, A_t) \right]$$
where α denotes the learning rate, i.e. the step size of each update, taken as 0.001, R_t denotes the reward obtained by taking action A_t in the current state, γ denotes the discount factor, i.e. the weight of the reward fed back for the action taken at the next moment, and A* denotes the action in the action set that, given the environment O_{t+1} at the next moment, maximizes the reward value; as the number of iterations increases, the reward values of all nodes converge;
and (3.2) when the reward values of all intersections have converged, the initialization of the Q-value network is complete; the Q value reflects each intersection's conflict probability, and the smaller the Q value, the higher the probability that a conflict occurs at the intersection and the more it should be avoided according to this prior probability during path planning, while the reward value is continuously updated according to real-time feedback.
5. The multi-Agent airport surface taxi path planning method based on historical data analysis of claim 1, wherein: the step (4) comprises the following specific steps:
(4.1) the aircraft Agent searches, from its current position, for all adjacent reachable intersection nodes, sorts their Q values from large to small, and considers the node with the largest Q value as the next candidate; if that node has a conflict, it is deleted from the candidate set and the nodes with suboptimal Q values are searched in turn, adding every conflict-free node to the candidate set, until all nodes to be selected have been traversed;
and (4.2) selecting, as the next node, the candidate in the candidate set with the shortest path to the end point.
CN202110749433.7A 2021-07-01 2021-07-01 Multi-Agent airport scene sliding path planning method based on historical data analysis Active CN113610271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110749433.7A CN113610271B (en) 2021-07-01 2021-07-01 Multi-Agent airport scene sliding path planning method based on historical data analysis


Publications (2)

Publication Number Publication Date
CN113610271A 2021-11-05
CN113610271B (en) 2023-05-02

Family

ID=78337202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110749433.7A Active CN113610271B (en) 2021-07-01 2021-07-01 Multi-Agent airport scene sliding path planning method based on historical data analysis

Country Status (1)

Country Link
CN (1) CN113610271B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030145111A1 (en) * 2000-12-22 2003-07-31 Dominique Derou-Madeline Adaptive routing process by deflection with training by reinforcement
CN104537431A (en) * 2014-12-16 2015-04-22 南京航空航天大学 Taxiway path optimizing method based on collision detection
CN109540151A (en) * 2018-03-25 2019-03-29 哈尔滨工程大学 A kind of AUV three-dimensional path planning method based on intensified learning
CN109361601A (en) * 2018-10-31 2019-02-19 浙江工商大学 A kind of SDN route planning method based on intensified learning
US20210103286A1 (en) * 2019-10-04 2021-04-08 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Systems and methods for adaptive path planning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAN YUN-XIANG et al.: "A New Traffic Flow Control Method for Terminal Control Area Using Dioid Algebra", IEEE Transactions on Aerospace and Electronic Systems *
YOU JIE et al.: "Multi-Agent-based optimal taxiing path algorithm for airport surface", Journal of Traffic and Transportation Engineering *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114254567A (en) * 2021-12-29 2022-03-29 北京博能科技股份有限公司 Airport fusion simulation method based on Muti-Agent and reinforcement learning

Also Published As

Publication number Publication date
CN113610271B (en) 2023-05-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant