CN113989747A - Terminal area meteorological scene recognition system - Google Patents


Info

Publication number
CN113989747A
Authority
CN
China
Prior art keywords
scene
meteorological
coding
convolution
clustering
Prior art date
Legal status
Pending
Application number
CN202111323000.1A
Other languages
Chinese (zh)
Inventor
袁立罡
曾杨
谢华
张立东
王兵
陈海燕
李杰
张颖
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202111323000.1A
Publication of CN113989747A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/23213 — Pattern recognition; clustering techniques; non-hierarchical techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/045 — Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N 3/084 — Neural networks; learning methods; backpropagation, e.g. using gradient descent

Abstract

The invention belongs to the technical field of meteorological scene analysis for airport terminal area operations in air traffic management, and specifically relates to a terminal area meteorological scene recognition system, comprising: a deep convolutional self-encoding embedded clustering scene recognition module, which constructs an embedded clustering method based on improved deep convolutional self-encoding and performs image dimensionality reduction and meteorological scene recognition; an evaluation module, which selects appropriate unsupervised clustering evaluation indices and evaluates the meteorological scene recognition; and a verification module, which verifies the recognized meteorological scenes and determines their characteristics. The system thus classifies and recognizes meteorological scenes, provides controllers with more intuitive historical results, and offers a more effective a-priori analysis tool for on-site control operations.

Description

Terminal area meteorological scene recognition system
Technical Field
The invention belongs to the technical field of meteorological scene analysis for airport terminal area operations in air traffic management, and specifically relates to a terminal area meteorological scene recognition system.
Background
In the air traffic field, highly dynamic meteorological conditions are an important factor affecting control operations and a major object of academic and industrial research. With the continuous improvement of weather radar data products, visualized weather data (especially visualizations of convective weather) are rapidly being applied to flight planning and control decisions. Although visualized meteorological data give controllers intuitive situational awareness, complex and changeable meteorological influences cannot be converted directly into control decisions, and differences in experience among controllers lead to different decisions and implementation effects. To improve the effectiveness and efficiency of control strategies and to provide a rapid, objective assessment of how weather affects air traffic operations, researchers have proposed using the similarity of historical operations to support current decisions; the concept of meteorological scene recognition is the core of this idea.
Meteorological scene recognition comprises meteorological image feature extraction and scene clustering. In 2015 and 2016, Kuhn et al. used machine learning methods (mainly PCA) to extract features and then applied classical clustering methods to the extracted features to address meteorological scene recognition, but traditional machine learning methods have shortcomings in image feature extraction. The development of deep learning has accelerated the wide application of image data and can retain data information more completely without prior knowledge of the data. Convective weather scene recognition based on deep learning therefore has substantial application demand and research space. Current research on terminal area meteorological scene recognition has the following gaps: no effective comparative study of dimensionality reduction methods for high-dimensional terminal area meteorological image data; little research on terminal area meteorological scenes that considers both severity and spatial distribution, with some studies focusing mainly on area sectors; and no application of deep convolutional self-encoding embedded clustering to terminal area meteorological scene recognition.
A terminal area meteorological scene recognition method based on unsupervised dimensionality reduction and clustering can therefore fill this gap, assisting controllers in analyzing historical convective weather scenes and thereby supporting decision-making. Comparing data dimensionality reduction methods for terminal area meteorological scene recognition can guide the future development of practical recognition applications and related auxiliary analysis tools, and identifying the dimensionality reduction method best suited to convective weather images allows subsequent research to proceed more accurately and scientifically.
On the other hand, owing to the excellent results of deep learning in many fields, numerous researchers have applied it to civil aviation, mainly for prediction. In actual operations, however, relevant labels are rarely available and labeling is very laborious. To reduce unnecessary workload, researchers have proposed learning from a subset of labeled samples, i.e. semi-supervised learning; deciding how to label and which samples to label requires a sound basis, and unsupervised learning can address this problem well, so unsupervised learning plays an important role in preliminary research.
Therefore, it is necessary to design a new terminal area weather scene identification system based on the above technical problems.
Disclosure of Invention
The invention aims to provide a terminal area meteorological scene identification system.
In order to solve the above technical problem, the present invention provides a terminal area meteorological scene recognition system, including:
a deep convolutional self-encoding embedded clustering scene recognition module, which constructs an embedded clustering method based on improved deep convolutional self-encoding and performs image dimensionality reduction and meteorological scene recognition;
an evaluation module, which selects appropriate unsupervised clustering evaluation indices and evaluates the meteorological scene recognition; and
a verification module, which verifies the recognized meteorological scenes and determines their characteristics.
Further, the deep convolutional self-encoding embedded clustering scene recognition module is adapted to construct an embedded clustering method based on improved deep convolutional self-encoding for image dimensionality reduction and meteorological scene recognition, namely:
The convolutional self-encoding neural network is trained to minimize its loss function. For an input convective weather image x = {x_1, x_2, ..., x_i}, with k convolution kernels whose parameters consist of W^k and b^k, the convolutional layer is expressed as:
h^k = σ(x * W^k + b^k);
where σ is the ReLU activation function and * denotes 2-D convolution;
Each feature map h is convolved with the flipped (transposed) version of the corresponding kernel, the results are summed, and a bias is added, giving the deconvolution operation:
y = σ( Σ_{k∈H} h^k * W̃^k + c );
where y = {y_1, y_2, ..., y_i} is the reconstructed image; H is the whole feature map group; W̃^k denotes the kernel weights flipped in both dimensions; and c is a constant bias term;
The Euclidean distance between the input samples and the final feature reconstruction is compared, and the BP algorithm gives the complete convolutional self-encoder loss function:
E(θ) = (1/2n) Σ_{i=1}^{n} ||x_i − y_i||²;
The gradient is obtained through convolution operations:
∂E(θ)/∂W^k = x * δh^k + h̃^k * δy;
where δh and δy are the deltas of the hidden state and the reconstruction, respectively;
The weights are updated by stochastic gradient descent to train the convolutional self-encoding network, completing the dimensionality reduction of the image data.
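The encode/decode pass and reconstruction loss above can be sketched in NumPy. This is a minimal illustration, not the patented implementation: the image size, the single 3x3 kernel, the zero biases, and the padding choice are all assumptions made for the example.

```python
import numpy as np

def conv2d(x, w):
    """'Valid' 2-D cross-correlation of image x with kernel w (pure NumPy)."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
x = rng.random((8, 8))                   # toy stand-in for a convective weather image
W_k = rng.standard_normal((3, 3)) * 0.1  # one 3x3 convolution kernel (illustrative)
b_k = 0.0

# Encoder: h^k = sigma(x * W^k + b^k)
h = relu(conv2d(x, W_k) + b_k)           # 6x6 feature map

# Decoder: convolve with the flipped kernel (padded so the output is 8x8 again)
W_flip = W_k[::-1, ::-1]
y = relu(conv2d(np.pad(h, 2), W_flip) + 0.0)   # reconstruction

# Loss: E = (1/2n) * sum ||x_i - y_i||^2
loss = 0.5 * np.mean((x - y) ** 2)
```

In a real CAE the gradient of this loss with respect to W^k would be computed by backpropagation and the weights updated by stochastic gradient descent, as the text describes.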
Further, the deep convolutional self-encoding embedded clustering scene recognition module is adapted to construct an improved deep convolutional self-encoding embedded clustering method for image dimensionality reduction and meteorological scene recognition, namely:
The fully connected encoding and decoding layers of the deep autoencoder are replaced with convolutional layers, a flattening operation is used to flatten the feature vector, and clustering loss plus reconstruction loss serve as the loss function; the encoding and decoding layers are modified so that convolutional and pooling layers jointly perform image feature extraction, and the model is then trained with clustering loss and reconstruction loss as the loss function.
Further, the evaluation module is adapted to select appropriate unsupervised clustering evaluation indices to evaluate the meteorological scene recognition, namely:
The meteorological scene recognition is evaluated by the DBI index, the average silhouette coefficient, and the CH score;
The DB index is:
DBI = (1/k) Σ_{i=1}^{k} max_{j≠i} [ (avg(C_i) + avg(C_j)) / d_cen(C_i, C_j) ];
where avg(C_i), avg(C_j) are the average distances between samples within clusters C_i, C_j, and d_cen(C_i, C_j) is the distance between the center points of clusters C_i, C_j;
The average silhouette coefficient is:
S = (1/n) Σ_{i=1}^{n} (b_i − a_i) / max(a_i, b_i);
where a_i is the average distance between point i and all other points in its own cluster, and b_i is the minimum, over all other clusters, of the average distance between point i and the points of that cluster;
The CH index is:
CH = [ SS_B / (k − 1) ] / [ SS_W / (n − k) ];
where k is the number of clusters, n is the sample size, SS_B is the between-class variance, and SS_W is the within-class variance;
The DBI index is greater than or equal to 0, and the closer it is to 0 the better;
the average silhouette coefficient lies between −1 and 1, and the closer it is to 1 the better;
the CH score is greater than 0, and the higher it is the better.
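The three indices can be implemented directly from the formulas above. In this NumPy sketch the function names and the toy two-blob data are the author's own illustration (a library such as scikit-learn provides equivalent `davies_bouldin_score`, `silhouette_score`, and `calinski_harabasz_score` functions); note that avg(C_i) is approximated here as the mean distance to the cluster centroid, a common variant.

```python
import numpy as np

def dbi(X, labels):
    """Davies-Bouldin index: >= 0, closer to 0 is better."""
    ks = np.unique(labels)
    cents = np.array([X[labels == k].mean(axis=0) for k in ks])
    # avg(C_i) taken as mean distance of cluster members to their centroid
    s = np.array([np.linalg.norm(X[labels == k] - cents[i], axis=1).mean()
                  for i, k in enumerate(ks)])
    total = 0.0
    for i in range(len(ks)):
        total += max((s[i] + s[j]) / np.linalg.norm(cents[i] - cents[j])
                     for j in range(len(ks)) if j != i)
    return total / len(ks)

def avg_silhouette(X, labels):
    """Average silhouette coefficient: in [-1, 1], closer to 1 is better."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distances
    vals = []
    for i in range(len(X)):
        own = (labels == labels[i])
        own[i] = False                                     # exclude the point itself
        a = D[i, own].mean()
        b = min(D[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        vals.append((b - a) / max(a, b))
    return float(np.mean(vals))

def ch_score(X, labels):
    """Calinski-Harabasz score: > 0, higher is better."""
    n, ks = len(X), np.unique(labels)
    k, mean = len(ks), X.mean(axis=0)
    ssb = sum((labels == c).sum() * ((X[labels == c].mean(axis=0) - mean) ** 2).sum()
              for c in ks)
    ssw = sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
              for c in ks)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Two well-separated toy clusters should score well on all three indices.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
```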
Further, the verification module is adapted to verify the recognized meteorological scenes and determine their characteristics, namely:
The recognized meteorological scenes are verified through visualization and actual operation data, and the characteristics of the recognized scenes are determined.
The invention is advantageous in that the deep convolutional self-encoding embedded clustering scene recognition module constructs an embedded clustering method based on improved deep convolutional self-encoding and performs image dimensionality reduction and meteorological scene recognition; the evaluation module selects appropriate unsupervised clustering evaluation indices and evaluates the meteorological scene recognition; and the verification module verifies the recognized meteorological scenes and determines their characteristics. The system thus classifies and recognizes meteorological scenes, provides controllers with more intuitive historical results, and offers a more effective a-priori analysis tool for on-site control operations.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a terminal area weather scene identification system in accordance with the present invention;
FIG. 2 is a schematic diagram of the dimensionality reduction of PCA in the present invention;
FIG. 3 is a schematic representation of HOG dimension reduction in accordance with the present invention;
FIG. 4 is a CAE dimension reduction diagram of the present invention;
FIG. 5 is a schematic diagram of the improved deep convolutional self-coding embedded clustering method in the present invention;
FIG. 6 is a schematic diagram of PCA-KMS scene recognition results in the present invention;
FIG. 7 is a diagram illustrating a HOG-KMS scene recognition result in the present invention;
FIG. 8 is a schematic diagram of CAE-KMS scene recognition results in the present invention;
FIG. 9 is a diagram illustrating IDCEC scene recognition results in the present invention;
FIG. 10 is a diagram of the actual traffic flow distribution of the terminal area during busy periods under the five meteorological scenes identified by IDCEC in the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a flow chart of a terminal region weather scene identification system in accordance with the present invention.
As shown in fig. 1, the present embodiment provides a terminal area meteorological scene recognition system, including: a deep convolutional self-encoding embedded clustering scene recognition module, which constructs an embedded clustering method based on improved deep convolutional self-encoding and performs image dimensionality reduction and meteorological scene recognition; an evaluation module, which selects appropriate unsupervised clustering evaluation indices and evaluates the meteorological scene recognition; and a verification module, which verifies the recognized meteorological scenes and determines their characteristics. This realizes a preliminary unsupervised recognition that lays a research foundation for subsequent semi-supervised recognition, achieves classification and recognition of meteorological scenes, provides controllers with more intuitive historical results, and offers a more effective a-priori analysis tool for on-site control operations.
In this embodiment, many data dimensionality reduction methods exist, but for image data the widely used and more effective ones are mainly PCA, HOG and CAE; the convective weather image data are therefore reduced in dimension by PCA, HOG and CAE.
FIG. 2 is a schematic diagram of the dimensionality reduction of PCA in the present invention.
In this embodiment, the image data are reduced in dimension by PCA, namely:
PCA is a statistical dimensionality reduction method. By means of an orthogonal transformation it converts the original random vector, whose components are correlated, into a new random vector whose components are uncorrelated. Algebraically, this transforms the covariance matrix of the original random vector into a diagonal matrix; geometrically, it transforms the original coordinate system into a new orthogonal coordinate system pointing in the p orthogonal directions along which the sample points are most spread out. The multidimensional variable system is then reduced in dimension so that it can be converted into a low-dimensional variable system with high precision, and by constructing a suitable value function the low-dimensional system can be further converted into a one-dimensional system;
assuming that p indices are provided, X ═ X (X) is represented by a vector1,X2,...,Xp);
Wherein Xi=(x1i,x2i,...,xni)′,xniRepresenting the observed value of the nth sample on the ith (i ═ 1, 2., p) index, the sample is a single convection weather image, and then the ith principal component is:
Pi=a1iX1+a2iX2+...+apiXp
subject to:
a_{1i}² + a_{2i}² + ... + a_{pi}² = 1;
P_i and P_j (i ≠ j; i, j = 1, 2, ..., p) are uncorrelated;
Var(P_1) ≥ Var(P_2) ≥ ... ≥ Var(P_p);
The ith principal component P_i has the ith largest variance among all linear combinations of X_1, ..., X_p, and the corresponding coefficient vector (a_{1i}, a_{2i}, ..., a_{pi}) is the eigenvector of the ith largest eigenvalue of the covariance matrix of X. In practice, singular value decomposition is often used instead of the eigenvalue decomposition of the covariance matrix. The dimensionality reduction results are shown in FIG. 2.
FIG. 3 is a schematic view of HOG dimension reduction in the present invention.
In this embodiment, the image data are reduced in dimension by HOG. A convective weather image is a relatively simple image, and the HOG feature (Histogram of Oriented Gradients), a classical image feature extraction method, forms a descriptor by computing and accumulating histograms of gradient directions over local regions of the image, thereby achieving dimensionality reduction of the image data;
Features are constructed by computing and accumulating gradient direction histograms over local regions of the convective weather image, reducing the dimensionality of the image data:
the convective weather image is converted to grayscale;
the color space of the input convective weather image is normalized using gamma correction;
calculating the gradient of each pixel of the convection weather image, wherein the gradient of the pixel point (x, y) is as follows:
Gx(x,y)=H(x+1,y)-H(x-1,y);
Gy(x,y)=H(x,y+1)-H(x,y-1);
where G_x(x, y), G_y(x, y) and H(x, y) are the horizontal gradient, the vertical gradient and the pixel value at pixel (x, y) of the input convective weather image, respectively;
The gradient magnitude and direction at pixel (x, y) are, respectively:
G(x, y) = sqrt( G_x(x, y)² + G_y(x, y)² );
α(x, y) = arctan( G_y(x, y) / G_x(x, y) );
The convective weather image is divided into small cells; the gradient histogram of each cell is accumulated to form the cell's descriptor; a preset number of cells form a block, and the descriptors of all cells in a block are concatenated to give the block's HOG feature; the HOG features of all blocks in the image are concatenated to give the HOG feature of the convective weather image, completing the dimensionality reduction. The dimensionality reduction results are shown in FIG. 3.
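The gradient and cell-histogram steps above can be sketched in NumPy. This is a simplified illustration (16x16 toy image, 8x8 cells, 9 orientation bins, no block normalization), not the exact configuration used in the patent.

```python
import numpy as np

def pixel_gradients(H):
    """Central-difference gradients G_x, G_y plus magnitude and direction."""
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    Gx[:, 1:-1] = H[:, 2:] - H[:, :-2]   # G_x(x,y) = H(x+1,y) - H(x-1,y)
    Gy[1:-1, :] = H[2:, :] - H[:-2, :]   # G_y(x,y) = H(x,y+1) - H(x,y-1)
    mag = np.sqrt(Gx ** 2 + Gy ** 2)
    ang = np.degrees(np.arctan2(Gy, Gx)) % 180   # unsigned orientation in [0, 180)
    return Gx, Gy, mag, ang

def cell_histogram(mag, ang, bins=9):
    """Magnitude-weighted orientation histogram for one cell."""
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist

rng = np.random.default_rng(2)
img = rng.random((16, 16))               # toy grayscale convective weather image
Gx, Gy, mag, ang = pixel_gradients(img)

# Four 8x8 cells -> four 9-bin histograms, concatenated into the descriptor
cells = [cell_histogram(mag[r:r + 8, c:c + 8], ang[r:r + 8, c:c + 8])
         for r in (0, 8) for c in (0, 8)]
hog = np.concatenate(cells)              # 36-dim feature for this toy image
```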
In this embodiment, the unsupervised clustering meteorological scene recognition module is adapted to feed the reduced-dimension image data into an unsupervised clusterer for unsupervised meteorological scene recognition, namely:
K-MEANS is used for clustering: k cluster centers μ_1, μ_2, ..., μ_k are selected at random;
For each reduced-dimension sample i, calculating the class to which the sample i belongs:
Figure BSA0000257263470000101
For each class j, its cluster center is recomputed:
μ_j := ( Σ_{i: c_i = j} x_i ) / |{i : c_i = j}|;
Unsupervised clustering meteorological scene recognition proceeds until convergence or until the maximum number of training iterations is reached.
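The two update steps above can be sketched as a plain K-MEANS loop in NumPy; the toy two-group data stand in for the reduced-dimension image samples.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-MEANS following the two update steps above."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=k, replace=False)]   # random initial centers
    c = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assignment step: c_i = argmin_j ||x_i - mu_j||^2
        c = np.argmin(((X[:, None] - mu[None]) ** 2).sum(-1), axis=1)
        # update step: mu_j = mean of the samples assigned to class j
        new_mu = np.array([X[c == j].mean(axis=0) if np.any(c == j) else mu[j]
                           for j in range(k)])
        if np.allclose(new_mu, mu):                     # converged
            break
        mu = new_mu
    return c, mu

# toy reduced-dimension samples: two well-separated groups of 20 points
X = np.vstack([np.zeros((20, 2)), 10 + np.zeros((20, 2))]) + \
    np.random.default_rng(3).normal(0, 0.5, (40, 2))
labels, centers = kmeans(X, k=2)
```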
FIG. 4 is a CAE dimension reduction diagram of the present invention;
In this embodiment, the CAE-reduced meteorological image data are fed into the unsupervised clusterer for unsupervised meteorological scene recognition, so as to compare against the deep convolutional self-encoding embedded clustering method. FIG. 5 is a schematic diagram of the improved deep convolutional self-encoding embedded clustering method in the present invention;
In this embodiment, the deep convolutional self-encoding embedded clustering scene recognition module is adapted to construct an Improved Deep Convolutional Embedded Clustering (IDCEC) method for image dimensionality reduction and meteorological scene recognition, namely:
The autoencoder is a neural network based on unsupervised learning, used to extract intrinsic features of samples. It consists of an encoder and a decoder and is usually used for feature learning or data dimensionality reduction. The encoder encodes the input data into latent variables, and the decoder reconstructs the latent variables into the original data. Since the autoencoder can reduce the dimensionality of data and effectively filter redundant information, it has great advantages in image retrieval and is widely adopted.
The convolutional autoencoder (CAE) follows the same idea as the autoencoder: it first encodes and then decodes, trains on the difference between the decoded data and the original data, and finally obtains stable parameters. The convolutional self-encoding neural network is trained to minimize its loss function. For an input convective weather image x = {x_1, x_2, ..., x_i}, suppose there are k convolution kernels, each with parameters W^k and b^k; h^k denotes the convolutional layer:
h^k = σ(x * W^k + b^k);
where σ is the ReLU activation function and * denotes 2-D convolution;
The bias is broadcast over the whole map, with a single bias per latent map, so each filter specializes in features of the whole input; the input is then reconstructed by the same means. Each feature map h is convolved with the flipped (transposed) version of the corresponding kernel, the results are summed, and a bias is added, giving the deconvolution operation:
y = σ( Σ_{k∈H} h^k * W̃^k + c );
where y = {y_1, y_2, ..., y_i} is the reconstructed image; H is the whole feature map group; W̃^k denotes the kernel weights flipped in both dimensions; and c is a constant bias term;
The Euclidean distance between the input samples and the final feature reconstruction is compared, and optimization by the BP algorithm gives the complete convolutional self-encoder loss function:
E(θ) = (1/2n) Σ_{i=1}^{n} ||x_i − y_i||²;
As with standard networks, the backpropagation algorithm is used to compute the gradient of the error function with respect to the parameters, obtained through convolution as:
∂E(θ)/∂W^k = x * δh^k + h̃^k * δy;
in the formula, δ h and δ y are respectively the increment of the hidden state and the reconstructed state;
The weights are updated by stochastic gradient descent to train the convolutional self-encoding network, completing the dimensionality reduction of the image data. The dimensionality reduction results are shown in FIG. 4. As shown in fig. 5, in the deep embedded clustering algorithm, in order to better process image data and reduce its dimensionality, the fully connected encoding and decoding layers of the deep autoencoder are replaced with convolutional layers; finally a flattening operation flattens the feature vector, and clustering loss plus reconstruction loss serve as the loss function. However, since the final flattening operation that preserves the feature layer easily causes feature loss, the encoding and decoding layers are modified so that convolutional and pooling layers jointly perform image feature extraction, and the model is then trained with clustering loss and reconstruction loss as the loss function. This model is the improved deep self-encoding embedded clustering model, i.e. the improved model is then trained.
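The patent does not write out the clustering loss explicitly. One standard choice, borrowed here from the DEC/DCEC family of algorithms as an assumption rather than the patented formulation, is a KL divergence between a Student's-t soft assignment q and a sharpened target distribution p, added to the reconstruction loss. A NumPy sketch (all shapes and the weight gamma are illustrative):

```python
import numpy as np

def soft_assign(Z, mu, alpha=1.0):
    """DEC-style Student's-t soft assignment q_ij of embeddings to cluster centers."""
    d2 = ((Z[:, None] - mu[None]) ** 2).sum(-1)
    q = (1 + d2 / alpha) ** (-(alpha + 1) / 2)
    return q / q.sum(axis=1, keepdims=True)

def target_dist(q):
    """Sharpened auxiliary target distribution p_ij."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

def joint_loss(x, y, q, p, gamma=0.1):
    """Reconstruction MSE plus gamma-weighted KL clustering loss."""
    recon = np.mean((x - y) ** 2)
    cluster = np.sum(p * np.log(p / q))   # KL(p || q) >= 0
    return recon + gamma * cluster

rng = np.random.default_rng(4)
Z = rng.standard_normal((30, 5))      # embeddings from the convolutional encoder
mu = rng.standard_normal((3, 5))      # 3 cluster centers (illustrative)
x = rng.random((30, 64))              # flattened input images
y = x + rng.normal(0, 0.01, x.shape)  # decoder reconstructions

q = soft_assign(Z, mu)
p = target_dist(q)
L = joint_loss(x, y, q, p)
```

In the full IDCEC training loop both the network weights and the cluster centers would be updated by backpropagating this joint loss; the sketch only evaluates it once.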
In this embodiment, the evaluation module is adapted to select appropriate unsupervised clustering evaluation indices to evaluate the meteorological scene recognition. Since the current samples are difficult to label, similar-scene recognition is an unsupervised clustering process, so the clustering effect must be evaluated by internal clustering indices. Using the classical internal indices, namely the DBI index (Davies-Bouldin Index: greater than or equal to 0, the closer to 0 the better), the average silhouette coefficient (Average Silhouette Coefficient, ASC: between −1 and 1, the closer to 1 the better) and the CH score (Calinski-Harabasz Score: greater than 0, the higher the better), the similar-scene recognition results with two and five clusters under the different methods are evaluated; the results are shown in Table 1;
table 1: evaluation results
(The values of Table 1 appear as an image in the original publication.)
The DB index is:
DBI = (1/k) Σ_{i=1}^{k} max_{j≠i} [ (avg(C_i) + avg(C_j)) / d_cen(C_i, C_j) ];
where avg(C_i), avg(C_j) are the average distances between samples within clusters C_i, C_j, and d_cen(C_i, C_j) is the distance between the center points of clusters C_i, C_j;
The average silhouette coefficient is:
S = (1/n) Σ_{i=1}^{n} (b_i − a_i) / max(a_i, b_i);
where a_i is the average distance between point i and all other points in its own cluster, and b_i is the minimum, over all other clusters, of the average distance between point i and the points of that cluster;
The CH index is:
CH = [ SS_B / (k − 1) ] / [ SS_W / (n − k) ];
where k is the number of clusters, n is the sample size, SS_B is the between-class variance, and SS_W is the within-class variance;
The DBI index is greater than or equal to 0, and the closer it is to 0 the better;
the average silhouette coefficient lies between −1 and 1, and the closer it is to 1 the better;
the CH score is greater than 0, and the higher it is the better.
FIG. 6 is a schematic diagram of PCA-KMS scene recognition results in the present invention;
FIG. 7 is a diagram illustrating a HOG-KMS scene recognition result in the present invention;
FIG. 8 is a schematic diagram of CAE-KMS scene recognition results in the present invention;
FIG. 9 is a diagram illustrating IDCEC scene recognition results in the present invention;
FIG. 10 is a diagram of the actual traffic flow distribution of the terminal area during busy periods under the five meteorological scenes identified by IDCEC in the present invention.
In this embodiment, the verification module is adapted to verify the recognized meteorological scenes and determine their characteristics, i.e. the recognized scenes are verified through visualization and actual operation data and their characteristics are determined:
Fig. 6 shows the PCA-KMS recognition results for five classes of terminal area meteorological scenes, class 1 to class 5 from top to bottom. The PCA-KMS method can identify five classes of terminal area meteorological scenes, but class 2 mixes convective weather southeast of the airport with convective weather near the airport, and classes 3 and 4 are similar, both convective weather south of the airport, with class 3 slightly less severe than class 4; class 5 is severe convective weather covering most of the terminal area. The PCA-KMS method can thus identify terminal area meteorological scenes, but its results correlate mainly with numerical intensity and it identifies the spatial distribution of convective weather poorly.
Fig. 7 shows the HOG-KMS recognition results for the five classes, class 1 to class 5 from top to bottom. Compared with PCA-KMS, HOG-KMS recognizes terminal area meteorological scenes better: classes 1 to 5 represent no/weak convective weather, convective weather near the airport, convective weather south of the airport, convective weather north of the airport, and severe convective weather covering most of the terminal area. However, convective weather north and south of the airport with contours similar to that near the airport appears in class 2, and convective weather near the airport appears in classes 3 and 4. HOG-KMS thus identifies the spatial distribution of meteorological scenes better than PCA-KMS, but meteorological contours cause poor separation of some scenes.
Fig. 8 shows the recognition results of the CAE-KMS method, from top to bottom class 1 through class 5, where classes 1 to 5 again represent no/weak convection weather, convection weather near the airport, convection weather south of the airport, convection weather north of the airport, and severe convection weather covering most of the terminal area. However, convection weather south of the airport appears in class 2, and near-airport convection weather appears in class 4. The CAE-KMS method thus behaves like the HOG-KMS method: it identifies the distribution position of terminal area weather scenes better than PCA-KMS, but the separation of some scenes is still poor.
Fig. 9 shows the identification results of the IDCEC method, from top to bottom class 1 through class 5, where classes 1 to 5 represent no/weak convection weather, convection weather near the airport, convection weather south of the airport, convection weather north of the airport, and severe convection weather covering most of the terminal area.
To further verify the scene recognition results, they are analysed using aircraft operation data.
As can be seen from fig. 10, traffic flow is highest in class 1 and lowest in class 5, consistent with their identification as weak/no convection weather and severe convection weather covering most of the terminal area. Class 2, which is covered by convection weather near the airport, has a modal flow of 8 flights/10 min, higher only than class 5. The flow distributions of classes 3 and 4 show that terminal area traffic is concentrated in the north, so flow drops more when the north is affected by convection weather. The IDCEC method therefore identifies terminal area weather scenes effectively, and its identification results correlate strongly with actual flight operations.
In conclusion, the invention realizes the classification and recognition of meteorological scenes through three modules: the deep convolutional self-coding embedded clustering scene recognition module constructs an embedded clustering method based on improved deep convolutional self-coding and performs image dimension reduction and meteorological scene recognition; the evaluation module selects corresponding unsupervised clustering evaluation indices and evaluates the scene recognition; and the verification module verifies the identified meteorological scenes and determines their characteristics. This provides controllers with a more intuitive view of historical results and a more effective means of prior analysis for on-site control operation.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (5)

1. A terminal area weather scene identification system, comprising:
the deep convolutional self-coding embedded clustering scene recognition module, used for constructing an embedded clustering method based on improved deep convolutional self-coding to perform image dimension reduction and meteorological scene recognition;
the evaluation module selects a corresponding unsupervised clustering effect evaluation index and evaluates the meteorological scene identification; and
the verification module, which verifies the identified meteorological scene and determines the characteristics of the identified scene.
2. The terminal area weather scene identification system of claim 1,
the deep convolutional self-coding embedded clustering scene recognition module is suitable for constructing an embedded clustering method based on improved deep convolutional self-coding to perform image dimension reduction and meteorological scene recognition; that is, the convolutional self-coding neural network learns by minimizing a loss function, and for an input convection weather image $x=\{x_1,x_2,\dots,x_i\}$ there are k convolution kernels, each parameterized by a weight $W^k$ and a bias $b^k$, the convolution layer being represented as:
$h^k=\sigma(x*W^k+b^k)$;
wherein σ is the ReLU activation function and $*$ denotes 2D convolution;
each feature map $h^k$ is convolved with the transposed (flipped) version of its convolution kernel, the results are summed, and a bias is added, giving the deconvolution operation:
$y=\sigma\Big(\sum_{k\in H} h^k * \tilde{W}^k + c\Big)$;
wherein $y=\{y_1,y_2,\dots,y_i\}$ is the reconstructed image; H is the set of all feature maps; and $\tilde{W}^k$ denotes the kernel weights flipped over both spatial dimensions;
comparing the Euclidean distance between the input samples and the final feature reconstruction, the complete convolutional self-encoder loss function is obtained and trained by the BP algorithm:
$E(\theta)=\frac{1}{2n}\sum_{i=1}^{n}(x_i-y_i)^2$;
obtaining gradient values through convolution operations:
$\frac{\partial E(\theta)}{\partial W^k}=x*\delta h^k+\tilde{h}^k*\delta y$;
in the formula, δ h and δ y are respectively the increment of the hidden state and the reconstructed state;
and the weights are updated by stochastic gradient descent to train the convolutional self-coding network and complete the dimension reduction of the image data.
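As a rough illustration (not the patent's implementation), the encoding, deconvolution and reconstruction-loss steps of claim 2 can be sketched in plain numpy; the image size, kernel count, padding scheme and variable names here are all illustrative assumptions:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def conv2d(img, kernel):
    """'Valid' 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def conv2d_full(img, kernel):
    """'Full' convolution, used by the decoder (transposed convolution)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    return conv2d(padded, kernel)

rng = np.random.default_rng(0)
x = rng.random((8, 8))                    # toy stand-in for a convection weather image
W = rng.standard_normal((2, 3, 3)) * 0.1  # k = 2 convolution kernels
b = np.zeros(2)
c = 0.0

# Encoder: h^k = sigma(x * W^k + b^k)
h = np.stack([relu(conv2d(x, W[k]) + b[k]) for k in range(2)])

# Decoder: y = sigma(sum_k h^k * W-tilde^k + c); flipping both axes gives W-tilde
y = relu(sum(conv2d_full(h[k], W[k, ::-1, ::-1]) for k in range(2)) + c)

# Reconstruction loss E(theta) = (1/2n) * sum (x_i - y_i)^2
loss = 0.5 * np.mean((x - y) ** 2)
```

Training would then update `W`, `b` and `c` by stochastic gradient descent on `loss`, as the claim describes; the gradient loops are omitted here for brevity.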
3. The terminal area weather scene identification system of claim 2,
the deep convolution self-coding embedded clustering scene recognition module is suitable for constructing an improved deep convolution self-coding embedded clustering method to perform image dimension reduction and meteorological scene recognition, namely
The fully connected encoding and decoding layers of the deep self-encoder are replaced with convolutional layers, a flattening operation is used to flatten the feature vector, and the encoding and decoding layers are modified into convolutional and pooling layers that jointly extract image features; the model is then trained with the clustering loss and the reconstruction loss together as the loss function.
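Claim 3 combines a clustering loss with the reconstruction loss but does not give the clustering loss in closed form. A common choice in deep embedded clustering (assumed here, not stated in the patent) is the KL divergence between a Student-t soft assignment and a sharpened target distribution; a minimal numpy sketch under that assumption, with illustrative data:

```python
import numpy as np

def soft_assignment(z, mu):
    """Student-t kernel soft assignment q_ij of sample i to cluster centre j."""
    d2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    q = 1.0 / (1.0 + d2)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened auxiliary target p_ij = (q_ij^2 / f_j), row-normalised."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
z = rng.random((10, 4))   # encoded (dimension-reduced) features from the CAE
mu = rng.random((3, 4))   # 3 cluster centres

q = soft_assignment(z, mu)
p = target_distribution(q)

cluster_loss = np.sum(p * np.log(p / q))        # KL(P || Q), >= 0
recon_loss = 0.1                                # stands in for the AE MSE term
gamma = 0.1                                     # weighting, an arbitrary choice
total_loss = recon_loss + gamma * cluster_loss  # joint objective of claim 3
```

In training, `total_loss` would be minimised jointly over the encoder weights and the cluster centres `mu`, which is what "clustering loss and reconstruction loss as the loss functions" amounts to.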
4. The terminal area weather scene identification system of claim 3,
the evaluation module is suitable for selecting a corresponding unsupervised clustering effect evaluation index to evaluate the meteorological scene identification, namely evaluating the meteorological scene identification according to the DBI index, the average contour coefficient and the CH score;
the DB index is:
$DBI=\frac{1}{k}\sum_{i=1}^{k}\max_{j\neq i}\frac{avg(C_i)+avg(C_j)}{d_{cen}(\mu_i,\mu_j)}$;
wherein $avg(C_i)$ and $avg(C_j)$ represent the average distance between samples within clusters $C_i$ and $C_j$, and $d_{cen}(\mu_i,\mu_j)$ represents the distance between the centre points of clusters $C_i$ and $C_j$;
the average profile coefficient is:
$S=\frac{1}{n}\sum_{i=1}^{n}\frac{b_i-a_i}{\max(a_i,b_i)}$;
wherein $a_i$ represents the average distance between point i and all other points in the same cluster, and $b_i$ represents the minimum, taken over all other clusters, of the average distance between point i and the points of that cluster;
the CH index is:
$CH=\frac{SS_B/(k-1)}{SS_W/(n-k)}$;
wherein k represents the number of clusters; n represents the sample size; $SS_B$ is the between-class variance; $SS_W$ is the within-class variance;
the DBI index is greater than or equal to 0, and the closer it is to 0, the better the evaluation;
the average contour coefficient is between-1 and 1, and the evaluation is better when the average contour coefficient is closer to 1;
the CH score is greater than 0, and the higher the score, the better the evaluation.
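The three indices of claim 4 can be computed directly from the definitions above. A minimal numpy sketch on a toy two-cluster dataset (all data and names are illustrative, not from the patent):

```python
import numpy as np

def evaluate(clusters):
    """Return (DBI, mean silhouette, CH) for a list of per-cluster sample arrays."""
    k = len(clusters)
    X = np.vstack(clusters)
    n = len(X)
    mus = np.array([c.mean(axis=0) for c in clusters])

    # avg(C_i): mean pairwise distance between samples inside each cluster
    def intra(c):
        d = np.linalg.norm(c[:, None] - c[None, :], axis=-1)
        return d.sum() / (len(c) * (len(c) - 1))
    avg = [intra(c) for c in clusters]

    # Davies-Bouldin index: >= 0, lower (closer to 0) is better
    dbi = np.mean([max((avg[i] + avg[j]) / np.linalg.norm(mus[i] - mus[j])
                       for j in range(k) if j != i) for i in range(k)])

    # Mean silhouette coefficient: in [-1, 1], higher is better
    sil = []
    for i, ci in enumerate(clusters):
        for x in ci:
            # exclude the point's zero distance to itself from the average
            a = np.mean(np.linalg.norm(ci - x, axis=1)) * len(ci) / (len(ci) - 1)
            b = min(np.mean(np.linalg.norm(cj - x, axis=1))
                    for j, cj in enumerate(clusters) if j != i)
            sil.append((b - a) / max(a, b))
    sil = float(np.mean(sil))

    # Calinski-Harabasz score: > 0, higher is better
    mu = X.mean(axis=0)
    ssb = sum(len(c) * np.linalg.norm(m - mu) ** 2 for c, m in zip(clusters, mus))
    ssw = sum(np.sum(np.linalg.norm(c - m, axis=1) ** 2) for c, m in zip(clusters, mus))
    ch = (ssb / (k - 1)) / (ssw / (n - k))
    return float(dbi), sil, float(ch)

rng = np.random.default_rng(2)
c1 = rng.normal(0.0, 0.2, (20, 2))   # two well-separated toy clusters
c2 = rng.normal(3.0, 0.2, (20, 2))
dbi, sil, ch = evaluate([c1, c2])
```

For well-separated clusters such as these, DBI is near 0, the mean silhouette is near 1, and the CH score is large, matching the directions stated in the claim.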
5. The terminal area weather scene identification system of claim 4,
the verification module is adapted to verify the identified meteorological scene and determine the characteristics of the identified scene, namely the identified meteorological scene is verified through a visualization method and actual operation data, and the characteristics of the identified scene are determined.
CN202111323000.1A 2021-11-09 2021-11-09 Terminal area meteorological scene recognition system Pending CN113989747A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111323000.1A CN113989747A (en) 2021-11-09 2021-11-09 Terminal area meteorological scene recognition system


Publications (1)

Publication Number Publication Date
CN113989747A true CN113989747A (en) 2022-01-28

Family

ID=79747484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111323000.1A Pending CN113989747A (en) 2021-11-09 2021-11-09 Terminal area meteorological scene recognition system

Country Status (1)

Country Link
CN (1) CN113989747A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114139063A (en) * 2022-01-30 2022-03-04 北京淇瑀信息科技有限公司 User tag extraction method and device based on embedded vector and electronic equipment
CN114882263A (en) * 2022-05-18 2022-08-09 南京智慧航空研究院有限公司 Convection weather similarity identification method based on CNN image mode
CN114882263B (en) * 2022-05-18 2024-03-08 南京智慧航空研究院有限公司 Convection weather similarity identification method based on CNN image mode

Similar Documents

Publication Publication Date Title
WO2022041678A1 (en) Remote sensing image feature extraction method employing tensor collaborative graph-based discriminant analysis
CN109815357B (en) Remote sensing image retrieval method based on nonlinear dimension reduction and sparse representation
CN113989747A (en) Terminal area meteorological scene recognition system
CN105574548A (en) Hyperspectral data dimensionality-reduction method based on sparse and low-rank representation graph
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN105678261B (en) Based on the direct-push Method of Data with Adding Windows for having supervision figure
CN110659665A (en) Model construction method of different-dimensional features and image identification method and device
CN105184298A (en) Image classification method through fast and locality-constrained low-rank coding process
CN106874862B (en) Crowd counting method based on sub-model technology and semi-supervised learning
CN112883839A (en) Remote sensing image interpretation method based on adaptive sample set construction and deep learning
CN110941734A (en) Depth unsupervised image retrieval method based on sparse graph structure
CN110889865A (en) Video target tracking method based on local weighted sparse feature selection
CN113239839B (en) Expression recognition method based on DCA face feature fusion
CN110991554B (en) Improved PCA (principal component analysis) -based deep network image classification method
CN115732034A (en) Identification method and system of spatial transcriptome cell expression pattern
CN106803105B (en) Image classification method based on sparse representation dictionary learning
CN113989676A (en) Terminal area meteorological scene identification method for improving deep convolutional self-coding embedded clustering
CN108388918B (en) Data feature selection method with structure retention characteristics
CN108520539B (en) Image target detection method based on sparse learning variable model
CN113378021A (en) Information entropy principal component analysis dimension reduction method based on semi-supervision
CN111414958B (en) Multi-feature image classification method and system for visual word bag pyramid
CN112580575A (en) Electric power inspection insulator image identification method
CN109886352B (en) Non-supervision assessment method for airspace complexity
CN111325158A (en) CNN and RFC-based integrated learning polarized SAR image classification method
CN116523877A (en) Brain MRI image tumor block segmentation method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination