CN112465199A - Airspace situation evaluation system - Google Patents

Airspace situation evaluation system

Info

Publication number
CN112465199A
CN112465199A
Authority
CN
China
Prior art keywords
airspace
operation complexity
image
sector
air traffic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011291878.7A
Other languages
Chinese (zh)
Other versions
CN112465199B (en)
Inventor
谢华
张明华
袁立罡
朱永文
毛继志
董欣放
唐治理
王长春
蒲钒
陈海燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202011291878.7A priority Critical patent/CN112465199B/en
Publication of CN112465199A publication Critical patent/CN112465199A/en
Application granted granted Critical
Publication of CN112465199B publication Critical patent/CN112465199B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40 Business processes related to the transportation industry
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Marketing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Tourism & Hospitality (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to an airspace situation assessment system, comprising: a labeling module, which extracts sector dynamic traffic data of a target airspace sector and labels the airspace operation complexity level; a gridding module, which delimits a circumscribed rectangle of the target airspace sector and grids it; an image construction module, which constructs multi-channel air traffic situation images and builds an air traffic situation image library according to the airspace operation complexity level labels; a model construction module, which constructs an airspace operation complexity hierarchical network model from the multi-channel air traffic situation images; a model training module, which trains the airspace operation complexity hierarchical network model; and an evaluation module, which evaluates airspace operation complexity with the trained airspace operation complexity hierarchical network model, thereby greatly reducing both the workload of airspace complexity assessment and the expertise required to use it.

Description

Airspace situation evaluation system
Technical Field
The invention belongs to the technical field of airspace situation assessment in air traffic control, and particularly relates to an airspace situation assessment system.
Background
With the development of the air transportation industry, sharply increasing air traffic flow and limited airspace resources place an enormous workload and pressure on air traffic controllers. Airspace operation complexity is a key index for evaluating controller workload, and it also provides decision support for strategic and tactical air traffic management; establishing a scientific, accurate and reliable method for evaluating airspace operation complexity is therefore a widely researched problem. In recent years, some scholars have adopted machine learning methods based on hand-crafted features to address airspace operation complexity evaluation. The main idea is to construct a feature system that influences the operation complexity of an airspace sector, collect airspace operation complexity labels under different air traffic scenarios, train a mapping model between the complexity features and the complexity labels with a machine learning algorithm, and then use the trained model to evaluate airspace operation complexity. However, the characteristics influencing airspace operation complexity are numerous and complex, no unified and widely recognized feature system exists at present, the selection of features is in practice strongly affected by subjectivity, and an incomplete or improper feature system seriously degrades the performance of machine-learning-based airspace operation complexity evaluation models.
In view of these problems, the lack of a unified and comprehensive airspace operation complexity feature system makes it difficult for a machine learning model to learn a high-performance evaluation model from a defective feature set, and it is desirable for a computer to replace the researcher in automatically generating or selecting airspace operation complexity features. In deep learning, the features of a data set need not be known in advance: the goal is to perform automatic feature learning with a deep convolutional neural network under the guidance of labels, mining rich feature information from the raw data and providing good feature vectors as input for a subsequent classification or regression model.
In the prior art, airspace operation complexity features are selected manually. Although a machine learning model can learn the mapping between the different features and airspace operation complexity, this mapping is imperfect because of possible defects in the feature set, and the selection and use of the feature set are strongly influenced by the actual scenario and by expert knowledge.
Therefore, a new airspace situation assessment system needs to be designed to address the above technical problems.
Disclosure of Invention
The invention aims to provide an airspace situation evaluation system.
In order to solve the above technical problem, the present invention provides an airspace situation assessment system, including:
a labeling module, which extracts sector dynamic traffic data of a target airspace sector and labels the airspace operation complexity level;
a gridding module, which delimits a circumscribed rectangle of the target airspace sector and grids it;
an image construction module, which constructs multi-channel air traffic situation images and builds an air traffic situation image library according to the airspace operation complexity level labels;
a model construction module, which constructs an airspace operation complexity hierarchical network model from the multi-channel air traffic situation images;
a model training module, which trains the airspace operation complexity hierarchical network model; and
an evaluation module, which evaluates airspace operation complexity using the trained airspace operation complexity hierarchical network model.
Further, the labeling module is adapted to extract the sector dynamic traffic data of the target airspace sector and label the airspace operation complexity level, namely:
acquiring raw air traffic operation data of the target airspace sector, and extracting the sector dynamic traffic data of the target airspace sector from the raw operation data by periods of a preset time granularity; and
dividing the sector dynamic traffic data into preset time periods, and labeling the sector dynamic traffic data of each time period with an airspace operation complexity level.
Further, the gridding module is adapted to delimit the circumscribed rectangle of the target airspace sector and grid it, namely:
obtaining the longitude and latitude data of the sector boundary points of the target airspace sector, determining the minimum circumscribed rectangle of the target airspace sector, extending each side of the minimum circumscribed rectangle outward by a preset length to form the circumscribed rectangle of the target airspace sector, and gridding the circumscribed rectangle of the target airspace sector at preset length intervals.
Further, the image construction module is adapted to construct the multi-channel air traffic situation images and build the air traffic situation image library according to the airspace operation complexity level labels, namely:
according to the sector dynamic traffic data of each time period, taking the longitude and latitude of each aircraft as coordinates, locating the corresponding position in the gridded circumscribed rectangle of the target airspace sector, filling the aircraft altitude parameters in the sector dynamic traffic data of each time period into the corresponding grids as pixel values to generate the altitude historical track image channel of that time period, and filling the aircraft speed parameters into the gridded circumscribed rectangle of the target airspace sector to generate the speed historical track image channel;
obtaining, from the longitude, latitude, speed and heading of the last track point of each aircraft in a time period, the predicted point that the aircraft will reach after a preset time, connecting the last track point and the predicted point to obtain a predicted track, mapping the predicted track through longitude and latitude onto the gridded circumscribed rectangle of the target airspace sector, and, each time the track enters a new grid, decreasing the filled value by a preset step until the last point of the predicted track is filled, thereby generating the conflict prediction track image channel; and
forming a multi-channel air traffic situation image from the altitude historical track image channel, the speed historical track image channel and the conflict prediction track image channel, associating the multi-channel air traffic situation images generated in different time periods with their airspace operation complexity levels to obtain the air traffic situation image library, and dividing the air traffic situation image library into a training data set and a test data set.
Further, the model construction module is adapted to construct the airspace operation complexity hierarchical network model from the multi-channel air traffic situation images, namely to construct the airspace operation complexity hierarchical network model from the multi-channel air traffic situation images through a deep convolutional neural network, namely:
constructing a fourteen-layer deep convolutional neural network model, wherein
the first layer is the input layer, which takes the multi-channel air traffic situation image as input; the second, third, fifth, sixth, eighth and ninth layers are convolutional layers; the fourth, seventh and tenth layers are pooling layers; the eleventh layer is a flatten layer; the twelfth, thirteenth and fourteenth layers are fully connected layers, and the output is the airspace operation complexity level vector;
the second and third convolutional layers contain 32 convolution kernels, the fifth and sixth convolutional layers contain 64 convolution kernels, and the eighth and ninth convolutional layers contain 128 convolution kernels; convolution is computed with SAME padding according to a preset convolution kernel size and a preset convolution stride;
the fourth, seventh and tenth pooling layers use max pooling;
the twelfth, thirteenth and fourteenth layers are fully connected layers; the output of the fourteenth layer is converted into probabilities by a Softmax function, and the class with the largest probability is selected as the final classification result;
the Softmax function is:
P(i) = exp(z_i) / Σ_{k=1}^{N} exp(z_k), i = 1, 2, ..., N
wherein z_i is the i-th component of the fourteenth-layer output, i is the airspace operation complexity level category (i = 1 indicates that the airspace operation complexity is level 1), k is the summation index (a natural number greater than zero), and N is the total number of airspace operation complexity levels;
the output of the fourteenth layer is a 5-dimensional vector, each dimension of which represents the probability that the airspace operation complexity belongs to the corresponding level; and
a nonlinear activation function is applied after the second, third, fifth, sixth, eighth and ninth convolutional layers and after the twelfth, thirteenth and fourteenth fully connected layers to perform the nonlinear transformation.
Further, the model training module is adapted to train the airspace operation complexity hierarchical network model, namely:
preprocessing the images in the training data set by standardizing their pixel values:
x' = (x − μ) / max(σ, 1/√P)
wherein μ is the mean value of the image, x is the image matrix, σ is the standard deviation, and P is the number of pixels of the image;
putting the preprocessed training data set into the airspace operation complexity hierarchical network model for training;
in the training process, the target loss function is cross entropy:
H(y, ŷ) = − Σ_{j=1}^{5} y_j · log(ŷ_j)
wherein y represents the true probability distribution of the image class, ŷ represents the probability distribution computed by the neural network, and y_j and ŷ_j respectively represent the probability value of the j-th dimension of the 5-dimensional vector;
the objective loss function is continuously optimized during the training process.
Further, the evaluation module is adapted to evaluate airspace operation complexity according to the trained airspace operation complexity hierarchical network model, namely:
preprocessing the images in the test data set, and inputting the preprocessed images into the trained airspace operation complexity hierarchical network model to obtain the airspace operation complexity grading result, thereby completing the airspace operation complexity evaluation.
The beneficial effects of the invention are as follows: the labeling module extracts the sector dynamic traffic data of the target airspace sector and labels the airspace operation complexity level; the gridding module delimits a circumscribed rectangle of the target airspace sector and grids it; the image construction module constructs multi-channel air traffic situation images and builds an air traffic situation image library according to the airspace operation complexity level labels; the model construction module constructs the airspace operation complexity hierarchical network model from the multi-channel air traffic situation images; the model training module trains the airspace operation complexity hierarchical network model; and the evaluation module evaluates airspace operation complexity according to the trained airspace operation complexity hierarchical network model, thereby greatly reducing both the workload of airspace complexity assessment and the expertise required to use it.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic block diagram of the airspace situation assessment system according to the present invention;
FIG. 2(a) is a schematic diagram of the altitude historical track image channel according to the present invention;
FIG. 2(b) is a schematic diagram of the speed historical track image channel according to the present invention;
FIG. 3(a) is a schematic diagram of a conflict-free scenario in the conflict prediction track image channel according to the present invention;
FIG. 3(b) is a schematic diagram of a conflict scenario in the conflict prediction track image channel according to the present invention;
FIG. 4 is a schematic diagram of a deep convolutional neural network architecture in accordance with the present invention;
FIG. 5 is a graph of evaluation accuracy performance index and loss function convergence according to the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic block diagram of the airspace situation assessment system according to the present invention.
As shown in fig. 1, the present embodiment provides an airspace situation assessment system, comprising: a labeling module, which extracts sector dynamic traffic data of a target airspace sector and labels the airspace operation complexity level; a gridding module, which delimits a circumscribed rectangle of the target airspace sector and grids it; an image construction module, which constructs multi-channel air traffic situation images and builds an air traffic situation image library according to the airspace operation complexity level labels; a model construction module, which constructs the airspace operation complexity hierarchical network model from the multi-channel air traffic situation images; a model training module, which trains the airspace operation complexity hierarchical network model; and an evaluation module, which evaluates airspace operation complexity according to the trained airspace operation complexity hierarchical network model. The evaluation therefore no longer depends on manually selecting complexity-related features: the most relevant features are learned directly from the raw data in an end-to-end manner, which assists the construction of the airspace operation complexity hierarchical network model and greatly reduces both the workload of airspace complexity assessment and the expertise required to use it. As part of an air traffic control decision-making system, the method can be flexibly applied to sectors with different route structure configurations.
In this embodiment, the labeling module is adapted to extract the sector dynamic traffic data of the target airspace sector and label the airspace operation complexity level. Specifically, the raw air traffic operation data of the target airspace sector are obtained, and the sector dynamic traffic data of the target airspace sector are extracted from the raw operation data in time slices of a preset time granularity (here 1 min). The sector dynamic traffic data mainly include the aircraft position information (longitude and latitude) and the aircraft motion parameters (flight altitude, heading, speed) within the sector. The sector dynamic traffic data are divided into the preset time periods, and the sector dynamic traffic data of each time period are labeled with an airspace operation complexity level: the sector dynamic traffic data extracted for each 1-minute period form one sample corresponding to one air traffic scenario, and each sample is labeled by experienced air traffic controllers with an airspace operation complexity level (label for short) that indicates the magnitude of the airspace operation complexity. There are 5 label categories (levels 1-5), and a larger number indicates a higher airspace operation complexity.
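For illustration, the time-slicing step of the labeling module might look like the following minimal Python sketch (this is not code from the patent; the column names `timestamp`, `flight_id`, `lon`, `lat`, `alt`, `heading` and `speed` are hypothetical, and the raw operation data are assumed to be available as a pandas DataFrame):

```python
import pandas as pd

def slice_into_samples(raw_df: pd.DataFrame, granularity: str = "1min") -> list:
    """Split raw sector traffic data into fixed-length time slices (one sample per slice)."""
    raw_df = raw_df.copy()
    raw_df["timestamp"] = pd.to_datetime(raw_df["timestamp"])
    samples = []
    for window_start, window in raw_df.groupby(pd.Grouper(key="timestamp", freq=granularity)):
        if window.empty:
            continue
        samples.append({
            "start": window_start,
            "traffic": window[["flight_id", "timestamp", "lon", "lat", "alt", "heading", "speed"]],
        })
    return samples

# Each returned sample corresponds to one air traffic scenario; the complexity level (1-5)
# assigned by controllers would be stored alongside it, e.g. labels[i] = 3 for sample i.
```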
In this embodiment, the gridding module is adapted to delimit the circumscribed rectangle of the target airspace sector and grid it. The longitude and latitude data of the sector boundary points of the target airspace sector are obtained and the minimum circumscribed rectangle of the target airspace sector is determined. Assuming an aircraft speed of 900 km/h (15 km/min) and an air traffic situation scene time granularity of 1 min, and to ensure that positions near the sector boundary can still reflect the predicted flight path information of the aircraft, each side of the minimum circumscribed rectangle is extended outward by a preset length (45 km, the distance of 3 min of flight) to form the circumscribed rectangle of the target airspace sector. The circumscribed rectangle of the target airspace sector is then gridded at a preset length interval of 2 km, yielding a set of square grids with a side length of 2 km that serve as the basis for the subsequent generation of the air traffic situation images.
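A minimal sketch of the gridding step is given below (not part of the patent; it uses a flat-earth approximation of roughly 111 km per degree to convert the 45 km margin and 2 km cell size into degrees, which is an assumption made purely for illustration):

```python
import numpy as np

def build_grid(boundary_lonlat, margin_km=45.0, cell_km=2.0, km_per_deg=111.0):
    """Delimit the expanded circumscribed rectangle of a sector and grid it into square cells."""
    pts = np.asarray(boundary_lonlat, dtype=float)      # array of (lon, lat) boundary points
    lon_min, lat_min = pts.min(axis=0)
    lon_max, lat_max = pts.max(axis=0)
    margin_deg = margin_km / km_per_deg                 # extend each side by ~45 km
    lon_min, lon_max = lon_min - margin_deg, lon_max + margin_deg
    lat_min, lat_max = lat_min - margin_deg, lat_max + margin_deg
    cell_deg = cell_km / km_per_deg                     # ~2 km square cells
    n_cols = int(np.ceil((lon_max - lon_min) / cell_deg))
    n_rows = int(np.ceil((lat_max - lat_min) / cell_deg))
    return lon_min, lat_min, cell_deg, n_rows, n_cols

def lonlat_to_cell(lon, lat, grid):
    """Map a longitude/latitude position to its (row, col) cell in the grid."""
    lon_min, lat_min, cell_deg, n_rows, n_cols = grid
    row = min(max(int((lat - lat_min) / cell_deg), 0), n_rows - 1)
    col = min(max(int((lon - lon_min) / cell_deg), 0), n_cols - 1)
    return row, col
```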
FIG. 2(a) is a schematic diagram of the altitude historical track image channel according to the present invention;
FIG. 2(b) is a schematic diagram of the speed historical track image channel according to the present invention;
FIG. 3(a) is a schematic diagram of a conflict-free scenario in the conflict prediction track image channel according to the present invention;
FIG. 3(b) is a schematic diagram of a conflict scenario in the conflict prediction track image channel according to the present invention.
In this embodiment, the image construction module is adapted to construct the multi-channel air traffic situation images and build the air traffic situation image library according to the airspace operation complexity level labels. According to the sector dynamic traffic data of each time period, the longitude and latitude of each aircraft are taken as coordinates and the corresponding position in the gridded circumscribed rectangle of the target airspace sector is located; the aircraft altitude parameters in the sector dynamic traffic data of each time period are filled into the corresponding grids as pixel values to generate the altitude historical track image channel of that time period (as shown in fig. 2(a)), and likewise the aircraft speed parameters are filled into a new gridded circumscribed rectangle of the target airspace sector to generate the speed historical track image channel (as shown in fig. 2(b)). From the longitude, latitude, speed and heading of the last track point of each aircraft in the time period, the predicted point (the position the aircraft is predicted to reach) after a preset time (3 min) is obtained; the last track point and the predicted point are connected to obtain a predicted track, which is mapped through longitude and latitude onto a new gridded circumscribed rectangle of the target airspace sector. The fill value at the starting point of each aircraft's predicted track is set to 10000, and each time the track enters a new grid the fill value is decreased by a preset step (100), reflecting the growing uncertainty of the future aircraft position and its conflict influence, until the last point of the predicted track has been filled, thereby generating the conflict prediction track image channel (as shown in fig. 3(a) and fig. 3(b)). The altitude historical track image channel, the speed historical track image channel and the conflict prediction track image channel form a multi-channel air traffic situation image of size 173 × 173 × 3. The multi-channel air traffic situation images generated in different time periods are associated with their airspace operation complexity levels to obtain the air traffic situation image library, which is divided into a training data set and a test data set at a ratio of 8:2 for training the airspace operation complexity hierarchical network model and testing its airspace operation complexity evaluation performance.
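The channel-filling logic described above might be sketched as follows (illustrative only; it reuses the hypothetical `lonlat_to_cell` helper and grid tuple from the gridding sketch, assumes aircraft speed in km/h, and approximates the predicted track as a straight segment along the last heading):

```python
import numpy as np

def rasterize_situation(sample, grid, dt_min=3.0, start_val=10000.0, step=100.0):
    """Build one 3-channel situation image: altitude, speed and conflict-prediction channels."""
    _, _, _, n_rows, n_cols = grid
    img = np.zeros((n_rows, n_cols, 3), dtype=np.float32)
    for _, ac in sample["traffic"].groupby("flight_id"):
        ac = ac.sort_values("timestamp")
        # Channels 0 and 1: historical track cells filled with altitude and speed values.
        for _, p in ac.iterrows():
            r, c = lonlat_to_cell(p.lon, p.lat, grid)
            img[r, c, 0] = p.alt
            img[r, c, 1] = p.speed
        # Channel 2: predicted track whose fill value decays by `step` at each new cell.
        last = ac.iloc[-1]
        dist_deg = last.speed * (dt_min / 60.0) / 111.0      # rough km -> degree conversion
        end_lon = last.lon + dist_deg * np.sin(np.radians(last.heading))
        end_lat = last.lat + dist_deg * np.cos(np.radians(last.heading))
        val, prev_cell = start_val, None
        for t in np.linspace(0.0, 1.0, 50):                  # sample points along the segment
            cell = lonlat_to_cell(last.lon + t * (end_lon - last.lon),
                                  last.lat + t * (end_lat - last.lat), grid)
            if prev_cell is not None and cell != prev_cell:  # track entered a new grid cell
                val = max(val - step, 0.0)
            prev_cell = cell
            img[cell[0], cell[1], 2] = max(img[cell[0], cell[1], 2], val)
    return img
```

Stacking one such image per time period and attaching the controller label would yield the entries of the air traffic situation image library.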
Fig. 4 is a schematic diagram of the structure of the deep convolutional neural network according to the present invention.
In this embodiment, the model construction module is adapted to construct the airspace operation complexity hierarchical network model from the multi-channel air traffic situation images, that is, the airspace operation complexity hierarchical network model is constructed from the multi-channel air traffic situation images through a deep convolutional neural network, namely:
a fourteen-layer deep convolutional neural network model is constructed with the TensorFlow framework;
the first layer is the input layer, which takes the multi-channel air traffic situation image as input; the second, third, fifth, sixth, eighth and ninth layers are convolutional layers, each followed by a nonlinear transformation with the ReLU function; the fourth, seventh and tenth layers are pooling layers; the eleventh layer is a flatten layer, which converts all two-dimensional feature maps into a one-dimensional vector; the twelfth, thirteenth and fourteenth layers are fully connected layers, and the output is the airspace operation complexity level vector (5 levels);
the second and third convolutional layers contain 32 convolution kernels, the fifth and sixth convolutional layers contain 64 convolution kernels, and the eighth and ninth convolutional layers contain 128 convolution kernels; convolution is computed with SAME padding according to a preset convolution kernel size (for example, 3 × 3) and a preset convolution stride (for example, 1);
the fourth, seventh and tenth pooling layers use max pooling with a pooling kernel size of (2 × 2), a stride of 2, and no padding;
the twelfth, thirteenth and fourteenth layers are fully connected layers with dimensions (1 × 320), (1 × 160) and (1 × 5), where 5 is the number of airspace operation complexity levels; the output of the fourteenth layer is converted into probabilities by a Softmax function, and the class with the largest probability is selected as the final classification result;
the Softmax function is:
P(i) = exp(z_i) / Σ_{k=1}^{N} exp(z_k), i = 1, 2, ..., N
wherein z_i is the i-th component of the fourteenth-layer output, i is the airspace operation complexity level category (i = 1 indicates that the airspace operation complexity is level 1), k is the summation index (a natural number greater than zero), and N is the total number of airspace operation complexity levels;
the output of the fourteenth layer is a 5-dimensional vector, and each dimension represents the probability that the airspace operation complexity belongs to the corresponding level;
the nonlinear function used after the second, third, fifth, sixth, eighth and ninth convolutional layers and after the twelfth, thirteenth and fourteenth fully connected layers is the ReLU function f(x) = max(0, x), which performs the nonlinear transformation. Feature learning is carried out automatically from the provided multi-channel air traffic situation image data set by the deep convolutional neural network, which removes the dependence on expert-constructed manual features found in existing airspace operation complexity evaluation, realizes end-to-end evaluation, and eliminates the cost of constructing manual features. Compared with manually constructed shallow features, the deep features extracted by the deep neural network are closer to a controller's understanding of the real connotation of airspace operation complexity, cross the barrier of the 'semantic gap', better represent the internal information of air traffic, and exceed existing manual-feature-based machine learning methods and systems in airspace operation complexity evaluation performance;
In the present embodiment, specifically, as shown in fig. 4, [1] is the input layer; the input data is a generated multi-channel air traffic situation image of size (173 × 173 × 3), where the width and height of the image are 173 and 3 is the number of channels;
[2] is a convolutional layer in which the neuron of each receptive field is locally connected to the neurons of [1] and shares weights; in the present embodiment, 32 convolution kernels are used for the convolution over layer [1], each convolution kernel has a size of (3 × 3), the convolution kernel stride is 2, and zero padding is applied to the [1] layer image so that the image size is unchanged after convolution; the convolution is expressed by the formula:
Conv_l(x, y) = Filter_l * PreLayer(x, y)[filter area], l = 1, 2, ..., 32;
wherein Conv_l(x, y) represents the convolved value of the neuron at position (x, y) on the l-th slice of the convolutional layer, and Filter_l represents the corresponding convolution kernel; the neurons on the same slice l share one convolution kernel and are convolved in turn with the neurons of the previous layer;
[3] is a convolutional layer identical to layer [2]; layers [5] and [6] follow the same convolutional principle as layer [2] but use 64 convolution kernels; similarly, layers [8] and [9] follow the same principle as layer [2] but use 128 convolution kernels;
each convolutional layer is followed by a nonlinear operation that applies a nonlinear transformation to its values, thereby enhancing the generality of the neural network:
Output_l(x, y) = nonlinear(Conv_l(x, y) + bias_l)
wherein nonlinear denotes a nonlinear function, Conv_l(x, y) is the value of the neuron at position (x, y) of the previous layer, and bias_l is a bias value shared by all neurons of the l-th slice of the previous layer; common activation functions include the sigmoid, tanh, ELU and ReLU activation functions; the activation function in this embodiment is the ReLU activation function, whose expression is f(x) = max(0, x), where x represents the output of each convolutional layer; the ReLU activation function sets the output of some neurons to 0, which makes the network sparse, reduces the interdependence of parameters, alleviates the overfitting problem, and greatly reduces the amount of computation;
layers [4], [7] and [10] are down-sampling layers, which reduce the dimensionality of the previous layer's output, prevent overfitting and simplify computation; the down-sampling operation can be max pooling, average pooling or the like; in this embodiment max pooling is adopted with a pooling kernel size of (2 × 2) and a stride of 2, so the image dimensions after pooling become half of the original;
layers [12], [13] and [14] are fully connected layers with dimensions (1 × 320), (1 × 160) and (1 × 5), respectively, where 5 is the number of airspace operation complexity levels; there are five levels in total, and a higher level indicates a higher operation complexity of the corresponding air traffic scenario; finally, the output of layer [14] is converted into probabilities by the softmax function
P(i) = exp(z_i) / Σ_{k=1}^{N} exp(z_k), where z is the output vector of layer [14] and N = 5,
so that the final output is a 5-dimensional vector, each dimension of which represents the probability that the airspace operation complexity belongs to the corresponding level.
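A compact Keras sketch of the fourteen-layer network as described above is shown below (this is not the authors' code; it follows the layer sizes from the description, uses a convolution stride of 1 — one of the two stride values mentioned in the text — and an input of 173 × 173 × 3):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_complexity_model(input_shape=(173, 173, 3), num_levels=5):
    """Fourteen-layer CNN: input, 6 conv layers, 3 max-pool layers, flatten, 3 dense layers."""
    model = models.Sequential([
        layers.Input(shape=input_shape),                                      # [1] input layer
        layers.Conv2D(32, 3, strides=1, padding="same", activation="relu"),   # [2]
        layers.Conv2D(32, 3, strides=1, padding="same", activation="relu"),   # [3]
        layers.MaxPooling2D(pool_size=2, strides=2),                          # [4]
        layers.Conv2D(64, 3, strides=1, padding="same", activation="relu"),   # [5]
        layers.Conv2D(64, 3, strides=1, padding="same", activation="relu"),   # [6]
        layers.MaxPooling2D(pool_size=2, strides=2),                          # [7]
        layers.Conv2D(128, 3, strides=1, padding="same", activation="relu"),  # [8]
        layers.Conv2D(128, 3, strides=1, padding="same", activation="relu"),  # [9]
        layers.MaxPooling2D(pool_size=2, strides=2),                          # [10]
        layers.Flatten(),                                                     # [11]
        layers.Dense(320, activation="relu"),                                 # [12]
        layers.Dense(160, activation="relu"),                                 # [13]
        layers.Dense(num_levels, activation="softmax"),                       # [14]
    ])
    return model
```

Under these assumptions, the three pooling stages reduce 173 × 173 to 21 × 21, so the flatten layer produces 21 × 21 × 128 = 56448 features ahead of the 320-160-5 fully connected head.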
FIG. 5 is a graph of evaluation accuracy performance index and loss function convergence according to the present invention.
In this embodiment, the model training module is adapted to train the airspace operation complexity hierarchical network model. The images in the training data set are preprocessed, and image standardization is applied to their pixel values so that the pixel values of every image are brought to a common scale:
x' = (x − μ) / max(σ, 1/√P)
wherein μ is the mean value of the image, x is the image matrix, σ is the standard deviation, and P is the number of pixels of the image;
the preprocessed training data set is fed into the airspace operation complexity hierarchical network model for training. During training, a class-balanced sampling method is adopted so that the samples of each batch are balanced across the different classes, which addresses the imbalance of the air traffic sample data. At the same time, during each sampling step, data augmentation (for example rotation or flipping) is applied to the images to generate a large number of new images and ensure a sufficient number of training samples: training samples are drawn from the data sets of the different classes in fixed amounts and then randomly augmented; considering the actual problem conditions, image flipping is adopted in this embodiment;
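The class-balanced sampling and augmentation step might be sketched as follows (illustrative only; `images_by_class`, which maps each complexity level to its image array, is a hypothetical data structure, and `tf.image.per_image_standardization` is used here as one concrete realization of the pixel standardization described above):

```python
import numpy as np
import tensorflow as tf

def balanced_batch(images_by_class, per_class=8):
    """Draw an equal number of images per complexity level, flip-augment and standardize them."""
    xs, ys = [], []
    for level, imgs in images_by_class.items():                # level in 1..5, imgs: (n, 173, 173, 3)
        idx = np.random.randint(0, len(imgs), size=per_class)  # sample with replacement
        batch = np.asarray(imgs)[idx].copy()
        flip = np.random.rand(per_class) < 0.5
        batch[flip] = batch[flip, :, ::-1, :]                  # random horizontal flip augmentation
        batch = tf.convert_to_tensor(batch, dtype=tf.float32)
        batch = tf.map_fn(tf.image.per_image_standardization, batch)  # (x - mean) / adjusted stddev
        xs.append(batch)
        ys.append(tf.one_hot([level - 1] * per_class, depth=5))       # one-hot complexity label
    return tf.concat(xs, axis=0), tf.concat(ys, axis=0)
```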
in the training process, the target loss function is cross entropy:
H(y, ŷ) = − Σ_{j=1}^{5} y_j · log(ŷ_j)
wherein y represents the true probability distribution of the image class, ŷ represents the probability distribution computed by the neural network, and y_j and ŷ_j respectively represent the probability value of the j-th dimension of the 5-dimensional vector;
the target loss function is continuously optimized during training by a stochastic optimization method; in this embodiment an Adam optimizer is used, a stochastic optimization method that requires only first-order gradients, has a small memory footprint, and works well for models with a large number of parameters; the key steps of the Adam optimization method are:
t ← t + 1
g_t ← ∇_p f(p_{t−1})
m_t ← β1 · m_{t−1} + (1 − β1) · g_t
v_t ← β2 · v_{t−1} + (1 − β2) · g_t²
m̂_t ← m_t / (1 − β1^t)
v̂_t ← v_t / (1 − β2^t)
p_t ← p_{t−1} − α · m̂_t / (√v̂_t + ε)
where α is the learning rate, β1 and β2 are the exponential decay rates of the moment estimates, m and v are the first- and second-moment estimates, ε is a small constant that prevents division by zero, f(p) is the target loss function, and t is the iteration number of the algorithm. The evaluation performance iteration curves of the model during training and testing are shown in fig. 5. Using class-balanced sampling and random image augmentation (rotation/flipping) in the model training stage prevents the model from favoring the class that contains the most samples, alleviates the influence of the imbalance problem to a certain extent, increases the richness of the training data set, and further improves the accuracy of the network's airspace operation complexity evaluation.
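Putting the pieces together, a minimal sketch of the training loop could look like this (the learning rate, epoch and step counts are illustrative assumptions, and `build_complexity_model`, `balanced_batch` and `images_by_class` refer to the sketches above, not to code from the patent):

```python
import tensorflow as tf

model = build_complexity_model()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),   # Adam: first-order gradients only
    loss="categorical_crossentropy",                          # cross-entropy target loss
    metrics=["accuracy"],
)

EPOCHS, STEPS_PER_EPOCH = 50, 100                             # illustrative values
for epoch in range(EPOCHS):
    for _ in range(STEPS_PER_EPOCH):
        x_batch, y_batch = balanced_batch(images_by_class)    # class-balanced, augmented batch
        model.train_on_batch(x_batch, y_batch)
```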
In this embodiment, the evaluation module is adapted to evaluate airspace operation complexity according to the trained airspace operation complexity hierarchical network model: the images in the test data set are preprocessed and then input into the trained airspace operation complexity hierarchical network model to obtain the airspace operation complexity grading result, thereby completing the airspace operation complexity evaluation.
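For completeness, the evaluation step might reduce to something like the following (illustrative; `test_images` is assumed to be an array of multi-channel situation images of shape (n, 173, 173, 3) built exactly as for training, and `model` is the trained network from the sketch above):

```python
import numpy as np
import tensorflow as tf

test_batch = tf.map_fn(tf.image.per_image_standardization,
                       tf.convert_to_tensor(test_images, dtype=tf.float32))  # same preprocessing
probs = model.predict(test_batch)            # per-level probabilities from the softmax output
levels = np.argmax(probs, axis=1) + 1        # predicted airspace operation complexity levels 1-5
```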
In summary, the labeling module extracts the sector dynamic traffic data of the target airspace sector and labels the airspace operation complexity level; the gridding module delimits a circumscribed rectangle of the target airspace sector and grids it; the image construction module constructs multi-channel air traffic situation images and builds an air traffic situation image library according to the airspace operation complexity level labels; the model construction module constructs the airspace operation complexity hierarchical network model from the multi-channel air traffic situation images; the model training module trains the airspace operation complexity hierarchical network model; and the evaluation module evaluates airspace operation complexity according to the trained airspace operation complexity hierarchical network model. The evaluation therefore no longer depends on manually selecting complexity-related features: the most relevant features are learned directly from the raw data in an end-to-end manner, which assists the construction of the airspace operation complexity evaluation model and greatly reduces both the workload of airspace complexity assessment and the expertise required to use it.
In the embodiments provided herein, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the system functions according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (7)

1. An airspace situation assessment system, comprising:
a labeling module, which extracts sector dynamic traffic data of a target airspace sector and labels the airspace operation complexity level;
a gridding module, which delimits a circumscribed rectangle of the target airspace sector and grids it;
an image construction module, which constructs multi-channel air traffic situation images and builds an air traffic situation image library according to the airspace operation complexity level labels;
a model construction module, which constructs an airspace operation complexity hierarchical network model from the multi-channel air traffic situation images;
a model training module, which trains the airspace operation complexity hierarchical network model; and
an evaluation module, which evaluates airspace operation complexity using the trained airspace operation complexity hierarchical network model.
2. The airspace situation assessment system according to claim 1, wherein
the labeling module is adapted to extract the sector dynamic traffic data of the target airspace sector and label the airspace operation complexity level, namely:
acquiring raw air traffic operation data of the target airspace sector, and extracting the sector dynamic traffic data of the target airspace sector from the raw operation data by periods of a preset time granularity; and
dividing the sector dynamic traffic data into preset time periods, and labeling the sector dynamic traffic data of each time period with an airspace operation complexity level.
3. The airspace situation assessment system according to claim 2, wherein
the gridding module is adapted to delimit the circumscribed rectangle of the target airspace sector and grid it, namely:
obtaining the longitude and latitude data of the sector boundary points of the target airspace sector, determining the minimum circumscribed rectangle of the target airspace sector, extending each side of the minimum circumscribed rectangle outward by a preset length to form the circumscribed rectangle of the target airspace sector, and gridding the circumscribed rectangle of the target airspace sector at preset length intervals.
4. The airspace situation assessment system according to claim 3, wherein
the image construction module is adapted to construct the multi-channel air traffic situation images and build the air traffic situation image library according to the airspace operation complexity level labels, namely:
according to the sector dynamic traffic data of each time period, taking the longitude and latitude of each aircraft as coordinates, locating the corresponding position in the gridded circumscribed rectangle of the target airspace sector, filling the aircraft altitude parameters in the sector dynamic traffic data of each time period into the corresponding grids as pixel values to generate the altitude historical track image channel of that time period, and filling the aircraft speed parameters into the gridded circumscribed rectangle of the target airspace sector to generate the speed historical track image channel;
obtaining, from the longitude, latitude, speed and heading of the last track point of each aircraft in a time period, the predicted point that the aircraft will reach after a preset time, connecting the last track point and the predicted point to obtain a predicted track, mapping the predicted track through longitude and latitude onto the gridded circumscribed rectangle of the target airspace sector, and, each time the track enters a new grid, decreasing the filled value by a preset step until the last point of the predicted track is filled, thereby generating the conflict prediction track image channel; and
forming a multi-channel air traffic situation image from the altitude historical track image channel, the speed historical track image channel and the conflict prediction track image channel, associating the multi-channel air traffic situation images generated in different time periods with their airspace operation complexity levels to obtain the air traffic situation image library, and dividing the air traffic situation image library into a training data set and a test data set.
5. The airspace situation assessment system according to claim 4, wherein
the model construction module is adapted to construct the airspace operation complexity hierarchical network model from the multi-channel air traffic situation images, namely to construct the airspace operation complexity hierarchical network model from the multi-channel air traffic situation images through a deep convolutional neural network, namely:
constructing a fourteen-layer deep convolutional neural network model, wherein
the first layer is the input layer, which takes the multi-channel air traffic situation image as input; the second, third, fifth, sixth, eighth and ninth layers are convolutional layers; the fourth, seventh and tenth layers are pooling layers; the eleventh layer is a flatten layer; the twelfth, thirteenth and fourteenth layers are fully connected layers, and the output is the airspace operation complexity level vector;
the second and third convolutional layers contain 32 convolution kernels, the fifth and sixth convolutional layers contain 64 convolution kernels, and the eighth and ninth convolutional layers contain 128 convolution kernels; convolution is computed with SAME padding according to a preset convolution kernel size and a preset convolution stride;
the fourth, seventh and tenth pooling layers use max pooling; and
the twelfth, thirteenth and fourteenth layers are fully connected layers; the output of the fourteenth layer is converted into probabilities by a Softmax function, and the class with the largest probability is selected as the final classification result;
the Softmax function is:
P(i) = exp(z_i) / Σ_{k=1}^{N} exp(z_k), i = 1, 2, ..., N
wherein z_i is the i-th component of the fourteenth-layer output, i is the airspace operation complexity level category (i = 1 indicates that the airspace operation complexity is level 1), k is the summation index (a natural number greater than zero), and N is the total number of airspace operation complexity levels;
the output of the fourteenth layer is a 5-dimensional vector, each dimension of which represents the probability that the airspace operation complexity belongs to the corresponding level; and
a nonlinear activation function is applied after the second, third, fifth, sixth, eighth and ninth convolutional layers and after the twelfth, thirteenth and fourteenth fully connected layers to perform the nonlinear transformation.
6. The airspace situation assessment system according to claim 5, wherein
the model training module is adapted to train the airspace operation complexity hierarchical network model, namely:
preprocessing the images in the training data set by standardizing their pixel values:
x' = (x − μ) / max(σ, 1/√P)
wherein μ is the mean value of the image, x is the image matrix, σ is the standard deviation, and P is the number of pixels of the image;
putting the preprocessed training data set into the airspace operation complexity hierarchical network model for training;
in the training process, the target loss function is cross entropy:
H(y, ŷ) = − Σ_{j=1}^{5} y_j · log(ŷ_j)
wherein y represents the true probability distribution of the image class, ŷ represents the probability distribution computed by the neural network, and y_j and ŷ_j respectively represent the probability value of the j-th dimension of the 5-dimensional vector;
the objective loss function is continuously optimized during the training process.
7. The airspace situation assessment system according to claim 6, wherein
the evaluation module is adapted to evaluate airspace operation complexity according to the trained airspace operation complexity hierarchical network model, namely:
preprocessing the images in the test data set, and inputting the preprocessed images into the trained airspace operation complexity hierarchical network model to obtain the airspace operation complexity grading result, thereby completing the airspace operation complexity evaluation.
CN202011291878.7A 2020-11-18 2020-11-18 Airspace situation assessment system Active CN112465199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011291878.7A CN112465199B (en) 2020-11-18 2020-11-18 Airspace situation assessment system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011291878.7A CN112465199B (en) 2020-11-18 2020-11-18 Airspace situation assessment system

Publications (2)

Publication Number Publication Date
CN112465199A true CN112465199A (en) 2021-03-09
CN112465199B CN112465199B (en) 2024-03-12

Family

ID=74837627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011291878.7A Active CN112465199B (en) 2020-11-18 2020-11-18 Airspace situation assessment system

Country Status (1)

Country Link
CN (1) CN112465199B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113299120A (en) * 2021-05-25 2021-08-24 中国电子科技集团公司第二十八研究所 Intelligent sensing system for air traffic situation supported by edge cloud in cooperation
CN113592341A (en) * 2021-08-10 2021-11-02 南京航空航天大学 Measurement loss function, sector complexity evaluation method and system
CN113656670A (en) * 2021-08-23 2021-11-16 南京航空航天大学 Flight data-oriented space-time trajectory data management analysis method and device
CN113688172A (en) * 2021-10-26 2021-11-23 中国地质大学(武汉) Landslide susceptibility evaluation model training method, landslide susceptibility evaluation device and medium
CN115512221A (en) * 2022-09-22 2022-12-23 中国人民解放军海军航空大学 GNN-based synchronous track robustness correlation method
CN115527397A (en) * 2022-09-30 2022-12-27 中国民用航空飞行学院 Air traffic control situation feature extraction method and device based on multimode neural network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855778A (en) * 2012-09-10 2013-01-02 南京航空航天大学 Space-domain sector classification method based on complexity assessment
CN109993225A (en) * 2019-03-29 2019-07-09 北京航空航天大学 A kind of airspace complexity classification method and device based on unsupervised learning
CN110991502A (en) * 2019-11-21 2020-04-10 北京航空航天大学 Airspace security situation assessment method based on category activation mapping technology
CN111047182A (en) * 2019-12-10 2020-04-21 北京航空航天大学 Airspace complexity evaluation method based on deep unsupervised learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855778A (en) * 2012-09-10 2013-01-02 南京航空航天大学 Space-domain sector classification method based on complexity assessment
CN109993225A (en) * 2019-03-29 2019-07-09 北京航空航天大学 A kind of airspace complexity classification method and device based on unsupervised learning
CN110991502A (en) * 2019-11-21 2020-04-10 北京航空航天大学 Airspace security situation assessment method based on category activation mapping technology
CN111047182A (en) * 2019-12-10 2020-04-21 北京航空航天大学 Airspace complexity evaluation method based on deep unsupervised learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王红勇; 赵嶷飞; 王飞; 温瑞英: "Research on complexity evaluation of air traffic control sectors", Journal of Transportation Systems Engineering and Information Technology, no. 06, 15 December 2013 (2013-12-15) *
高伟; 马岚; 王涛波: "Research on airspace complexity of crossing air routes based on speed regulation", Journal of Civil Aviation Flight University of China, no. 06, 15 November 2016 (2016-11-15) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113299120A (en) * 2021-05-25 2021-08-24 中国电子科技集团公司第二十八研究所 Intelligent sensing system for air traffic situation supported by edge cloud in cooperation
CN113299120B (en) * 2021-05-25 2022-05-13 中国电子科技集团公司第二十八研究所 Intelligent sensing system for air traffic situation supported by edge cloud in cooperation
CN113592341A (en) * 2021-08-10 2021-11-02 南京航空航天大学 Measurement loss function, sector complexity evaluation method and system
CN113656670A (en) * 2021-08-23 2021-11-16 南京航空航天大学 Flight data-oriented space-time trajectory data management analysis method and device
CN113688172A (en) * 2021-10-26 2021-11-23 中国地质大学(武汉) Landslide susceptibility evaluation model training method, landslide susceptibility evaluation device and medium
CN115512221A (en) * 2022-09-22 2022-12-23 中国人民解放军海军航空大学 GNN-based synchronous track robustness correlation method
CN115512221B (en) * 2022-09-22 2024-02-27 中国人民解放军海军航空大学 GNN-based synchronous track robustness association method
CN115527397A (en) * 2022-09-30 2022-12-27 中国民用航空飞行学院 Air traffic control situation feature extraction method and device based on multimode neural network

Also Published As

Publication number Publication date
CN112465199B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN112489497B (en) Airspace operation complexity evaluation method based on deep convolutional neural network
CN112465199B (en) Airspace situation assessment system
CN107909206B (en) PM2.5 prediction method based on deep structure recurrent neural network
CN109508360B (en) Geographical multivariate stream data space-time autocorrelation analysis method based on cellular automaton
CN111695731B (en) Load prediction method, system and equipment based on multi-source data and hybrid neural network
CN111160311A (en) Yellow river ice semantic segmentation method based on multi-attention machine system double-flow fusion network
CN114092832B (en) High-resolution remote sensing image classification method based on parallel hybrid convolutional network
CN112949828B (en) Graph convolution neural network traffic prediction method and system based on graph learning
CN112180471B (en) Weather forecasting method, device, equipment and storage medium
CN113808396B (en) Traffic speed prediction method and system based on traffic flow data fusion
CN112766283B (en) Two-phase flow pattern identification method based on multi-scale convolution network
CN116110022B (en) Lightweight traffic sign detection method and system based on response knowledge distillation
CN114283285A (en) Cross consistency self-training remote sensing image semantic segmentation network training method and device
US20230222768A1 (en) Multiscale point cloud classification method and system
CN115862324A (en) Space-time synchronization graph convolution neural network for intelligent traffic and traffic prediction method
CN117390407B (en) Fault identification method, system, medium and equipment of substation equipment
CN114332075A (en) Rapid structural defect identification and classification method based on lightweight deep learning model
CN114494777A (en) Hyperspectral image classification method and system based on 3D CutMix-transform
CN117636183A (en) Small sample remote sensing image classification method based on self-supervision pre-training
Tian et al. Flight maneuver intelligent recognition based on deep variational autoencoder network
CN113128769A (en) Intelligent flight delay prediction method based on deep learning
CN113076686A (en) Aircraft trajectory prediction method based on social long-term and short-term memory network
CN117152427A (en) Remote sensing image semantic segmentation method and system based on diffusion model and knowledge distillation
Yang et al. A data-driven method for flight time estimation based on air traffic pattern identification and prediction
CN113962424A (en) Performance prediction method based on PCANet-BiGRU, processor, readable storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant