CN113989676A - Terminal area meteorological scene identification method for improving deep convolutional self-coding embedded clustering - Google Patents

Terminal area meteorological scene identification method for improving deep convolutional self-coding embedded clustering Download PDF

Info

Publication number
CN113989676A
Authority
CN
China
Prior art keywords
scene
coding
meteorological
self
clustering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111323107.6A
Other languages
Chinese (zh)
Inventor
袁立罡
陈海燕
毛继志
胡明华
谢华
王兵
张颖
李杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202111323107.6A priority Critical patent/CN113989676A/en
Publication of CN113989676A publication Critical patent/CN113989676A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01W METEOROLOGY
    • G01W 1/00 Meteorology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q 10/06393 Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/26 Government or public services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Tourism & Hospitality (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Strategic Management (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Marketing (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Environmental & Geological Engineering (AREA)
  • Evolutionary Biology (AREA)
  • General Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Game Theory and Decision Science (AREA)
  • Quality & Reliability (AREA)
  • Primary Health Care (AREA)
  • Atmospheric Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Ecology (AREA)

Abstract

The invention belongs to the technical field of meteorological scene analysis for airport terminal area operations in air traffic operation management, and particularly relates to a terminal area meteorological scene identification method based on improved deep convolutional self-coding embedded clustering, which comprises the following steps: constructing an embedded clustering method based on improved deep convolutional self-coding, and performing dimensionality reduction and meteorological scene identification on the images; selecting corresponding unsupervised clustering evaluation indices and evaluating the meteorological scene identification; and verifying the identified meteorological scenes and determining the characteristics of each identified scene. The method realizes the classification and identification of meteorological scenes, provides controllers with a more intuitive view of historical outcomes, and offers a more effective a priori analysis means for on-site control operations.

Description

Terminal area meteorological scene identification method for improving deep convolutional self-coding embedded clustering
Technical Field
The invention belongs to the technical field of airport terminal area operation meteorological scene analysis in air traffic operation management, and particularly relates to a terminal area meteorological scene identification method for improving deep convolutional self-coding embedded clustering.
Background
In the air traffic field, highly dynamic meteorological conditions are an important factor influencing control operations and an important subject of academic and industry research. With the continuous improvement of weather radar data products, visualized weather data (especially visualizations of convective weather) are rapidly being applied to flight planning and control decisions. Although visualized meteorological data give air traffic controllers an intuitive picture, complex and changeable meteorological influences cannot be converted directly into control decisions, and differences in controller experience lead to different decisions and implementation effects. To improve the effectiveness and efficiency of control strategies and to provide a rapid, objective assessment of how weather affects air traffic operations, researchers have proposed supporting current decisions with the similarity of historical operations; meteorological scene recognition is the core of this concept.
The meteorological scene identification process comprises meteorological image feature extraction and scene clustering. In 2015 and 2016, Kuhn et al. used machine learning methods (mainly PCA) to extract features and then applied classical clustering methods to the extracted features to address meteorological scene identification, but traditional machine learning methods have shortcomings in extracting image features. The development of deep learning has accelerated the wide application of image data and can retain data information more completely without prior knowledge of the data. Convective weather scene identification based on deep learning therefore has large application demand and research space. The current state of research on terminal area meteorological scene identification is as follows: dimensionality-reduction methods for high-dimensional terminal area meteorological image data have not been compared effectively; little research considers both the severity and the spatial distribution of terminal area meteorological scenes, with part of the existing work focusing on area sectors; and deep convolutional self-coding embedded clustering has not been applied to terminal area meteorological scene identification.
Therefore, a terminal area meteorological scene identification method based on unsupervised dimensionality reduction and clustering can fill this gap, assisting controllers in analyzing historical convective weather scenes and thereby supporting decision making. Comparing data dimensionality-reduction methods for terminal area meteorological scene identification can provide methodological guidance for future practical scene-identification applications and related auxiliary analysis tools, and identifying the dimensionality-reduction method best suited to convective weather images allows subsequent research to proceed more accurately and scientifically.
On the other hand, thanks to the excellent achievements of deep learning in many fields, numerous researchers have applied it to civil aviation, mainly for prediction-related studies. In actual operations, however, relevant labels are rarely available, and labeling is heavy work. To reduce unnecessary workload, researchers have proposed learning from partial sample labels, i.e. semi-supervised learning; deciding how to label and which samples to label is an important basis for this, and unsupervised learning can address this problem well, so it plays an important role in preliminary research.
Therefore, a new and improved terminal area meteorological scene identification method based on deep convolutional self-coding embedded clustering needs to be designed to address the above technical problems.
Disclosure of Invention
The invention aims to provide a terminal region meteorological scene identification method for improving deep convolutional self-coding embedded clustering.
In order to solve the above technical problem, the invention provides a terminal area meteorological scene identification method based on improved deep convolutional self-coding embedded clustering, which comprises the following steps:
constructing an embedded clustering method based on improved deep convolutional self-coding, and performing dimensionality reduction and meteorological scene identification on the images;
selecting corresponding unsupervised clustering evaluation indices and evaluating the meteorological scene identification; and
the identified weather scene is verified and the characteristics of the identified scene are determined.
Further, constructing the embedded clustering method based on improved deep convolutional self-coding and performing image dimensionality reduction and meteorological scene identification comprises the following steps:
the convolutional self-coding neural network is trained to minimize its loss function; for an input set of convective weather images x = {x_1, x_2, ..., x_i} and k convolution kernels, each kernel is parameterized by weights W^k and bias b^k, and h^k denotes the k-th feature map of the convolutional layer:
h^k = \sigma(x * W^k + b^k);
where \sigma is the ReLU activation function and * denotes 2D convolution;
each feature map h^k is convolved with the flipped (transposed) version of its kernel, the results are summed, and a bias is added, giving the deconvolution (reconstruction) operation:
y = \sigma\left( \sum_{k \in H} h^k * \tilde{W}^k + c \right);
where y = {y_1, y_2, ..., y_i} is the reconstructed image, H is the set of all feature maps, \tilde{W}^k denotes the weights flipped in both spatial dimensions, and c is a constant bias term;
the Euclidean distance between the input samples and the final feature reconstruction is compared, and the complete convolutional autoencoder loss function follows from the back-propagation (BP) algorithm:
E(\theta) = \frac{1}{2n} \sum_{i=1}^{n} (x_i - y_i)^2;
the gradient values are obtained through convolution operations:
\frac{\partial E(\theta)}{\partial W^k} = x * \delta h^k + \tilde{h}^k * \delta y;
where \delta h and \delta y are the deltas of the hidden state and the reconstructed state, respectively;
and the weights are updated by stochastic gradient descent to train the convolutional self-coding network, completing the dimensionality reduction of the image data.
Further, constructing the embedded clustering method based on improved deep convolutional self-coding and performing image dimensionality reduction and meteorological scene identification further comprises the following steps:
the fully-connected encoding and decoding layers of the deep autoencoder are replaced with convolutional layers and a flattening operation is used to obtain the feature vector, with clustering loss and reconstruction loss as the loss functions; the encoding and decoding layers are then modified so that convolutional and pooling layers jointly perform image feature extraction, and the model is trained with the clustering loss and reconstruction loss as its loss function.
Further, the method for selecting the corresponding unsupervised clustering effect evaluation index and evaluating the weather scene identification comprises the following steps:
evaluating the meteorological scene identification according to the DBI index, the average silhouette coefficient and the CH score;
the DBI (Davies-Bouldin index) is:
DBI = \frac{1}{k} \sum_{i=1}^{k} \max_{j \neq i} \left( \frac{avg(C_i) + avg(C_j)}{d_{cen}(C_i, C_j)} \right);
where avg(C_i) and avg(C_j) are the average distances between samples within clusters C_i and C_j, and d_{cen}(C_i, C_j) is the distance between the centers of clusters C_i and C_j;
the average silhouette coefficient is:
S = \frac{1}{N} \sum_{i=1}^{N} \frac{b_i - a_i}{\max(a_i, b_i)};
where a_i is the average distance from point i to all other points in its own cluster, and b_i is the minimum, over all other clusters, of the average distance from point i to the points of that cluster;
the CH index is:
CH = \frac{SS_B}{SS_W} \times \frac{N - k}{k - 1};
where k is the number of clusters, N is the sample size, SS_B is the between-class variance and SS_W is the within-class variance;
the DBI is greater than or equal to 0, and values closer to 0 indicate a better clustering;
the average silhouette coefficient lies between -1 and 1, and values closer to 1 indicate a better clustering;
the CH score is greater than 0, and higher scores indicate a better clustering.
Further, the method of validating the identified weather scene and determining the characteristics of the identified scene includes:
and verifying the identified meteorological scene through a visualization method and actual operation data, and determining the characteristics of the identified scene.
The method has the advantages that the method is used for identifying the image dimensionality reduction and the meteorological scene by constructing an embedded clustering method based on improved depth convolution self-coding; selecting a corresponding unsupervised clustering effect evaluation index, and evaluating the meteorological scene identification; and the identified meteorological scene is verified, the characteristics of the identified scene are determined, the classification and identification of the meteorological scene are realized, a more visual historical result is provided for a controller, and a more effective prior analysis means is provided for field control operation.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a method for improving the terminal region weather scene identification of deep convolutional self-coding embedded clusters according to the present invention;
FIG. 2 is a schematic diagram of the dimensionality reduction of PCA in the present invention;
FIG. 3 is a schematic representation of HOG dimension reduction in accordance with the present invention;
FIG. 4 is a CAE dimension reduction diagram of the present invention;
FIG. 5 is a schematic diagram of the improved deep convolutional self-coding embedded clustering method in the present invention;
FIG. 6 is a schematic diagram of PCA-KMS scene recognition results in the present invention;
FIG. 7 is a diagram illustrating a HOG-KMS scene recognition result in the present invention;
FIG. 8 is a schematic diagram of CAE-KMS scene recognition results in the present invention;
FIG. 9 is a diagram illustrating IDCEC scene recognition results in the present invention;
FIG. 10 is a diagram of the actual flow distribution of the terminal area of the busy period in the five meteorological scenes identified by IDCEC in the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a flow chart of a method for identifying a meteorological scene in a terminal region by improving embedded cluster of deep convolutional self-coding according to the invention.
As shown in fig. 1, the present embodiment provides a terminal area meteorological scene identification method based on improved deep convolutional self-coding embedded clustering, comprising: constructing an embedded clustering method based on improved deep convolutional self-coding, and performing dimensionality reduction and meteorological scene identification on the images; selecting corresponding unsupervised clustering evaluation indices and evaluating the meteorological scene identification; and verifying the identified meteorological scenes and determining the characteristics of each identified scene. This realizes a preliminary unsupervised identification and lays a research foundation for subsequent semi-supervised identification, achieves the classification and identification of meteorological scenes, provides controllers with a more intuitive view of historical outcomes, and offers a more effective a priori analysis means for on-site control operations.
In this embodiment, although many data dimensionality-reduction methods exist, the methods that are widely used and perform well on image data are mainly PCA, HOG and CAE; the convective weather image data are therefore reduced in dimension with PCA, HOG and CAE.
FIG. 2 is a schematic diagram of the dimensionality reduction of PCA in the present invention.
In this embodiment, the method for reducing the dimension of the image data by PCA includes:
PCA is a statistical dimensionality-reduction method. Through an orthogonal transformation, it converts the original random vector, whose components are correlated, into a new random vector with uncorrelated components. Algebraically, this diagonalizes the covariance matrix of the original random vector; geometrically, it transforms the original coordinate system into a new orthogonal coordinate system pointing in the p orthogonal directions along which the sample points are most spread out. The multidimensional variable system is then reduced in dimension so that it can be converted into a low-dimensional variable system with high precision, and by constructing a suitable value function the low-dimensional system can be further converted into a one-dimensional system;
assume there are p indices, represented by the vector X = (X_1, X_2, ..., X_p);
where X_i = (x_{1i}, x_{2i}, ..., x_{ni})' and x_{ni} is the observed value of the n-th sample (a single convective weather image) on the i-th index (i = 1, 2, ..., p); the i-th principal component is then:
P_i = a_{1i} X_1 + a_{2i} X_2 + ... + a_{pi} X_p;
subject to
a_{1i}^2 + a_{2i}^2 + \cdots + a_{pi}^2 = 1;
P_i and P_j (i \neq j; i, j = 1, 2, ..., p) are uncorrelated and Var(P_i) > Var(P_{i+1}), with
Var(P_i) = \max_{a'a = 1,\; Cov(a'X, P_l) = 0\ (l < i)} Var(a'X);
the i-th principal component P_i therefore has the i-th largest variance among all such linear combinations of X_1, ..., X_p, and the corresponding coefficient vector (a_{1i}, a_{2i}, ..., a_{pi}) is the eigenvector associated with the i-th largest eigenvalue of the covariance matrix of X; in practice, singular value decomposition is often used instead of an explicit eigendecomposition of the covariance matrix. The dimension reduction results are shown in FIG. 2.
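As an illustration of this dimensionality-reduction step, the following is a minimal sketch using scikit-learn's PCA (which uses singular value decomposition internally, matching the note above); the image size, sample count and number of retained components are assumptions made for the example, not values taken from the patent.

```python
# Minimal PCA dimensionality-reduction sketch for convective weather images.
import numpy as np
from sklearn.decomposition import PCA

# Placeholder for real radar data: n grayscale images of 100 x 100 pixels.
n_samples, h, w = 500, 100, 100
images = np.random.rand(n_samples, h, w)

X = images.reshape(n_samples, -1)        # flatten each image into a row vector

pca = PCA(n_components=50)               # retain 50 principal components (assumed)
X_reduced = pca.fit_transform(X)         # n_samples x 50 low-dimensional features

print(X_reduced.shape, pca.explained_variance_ratio_.sum())
```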
FIG. 3 is a schematic view of HOG dimension reduction in the present invention.
In this embodiment, the method for reducing the dimensionality of the image data with HOG includes: the convective weather image is a relatively simple image, and HOG (Histogram of Oriented Gradients) features are formed by computing and counting histograms of gradient directions over local regions of the image, thereby reducing the dimensionality of the image data; it is a classical image feature extraction method;
features are constructed by computing and counting histograms of gradient directions over local regions of the convective weather image, so as to reduce the dimensionality of the image data;
the convective weather image is converted to grayscale;
the color space of the input convective weather image is normalized using gamma correction;
the gradient of each pixel of the convective weather image is calculated, the gradient at a pixel being:
G_x(x, y) = H(x + 1, y) - H(x - 1, y);
G_y(x, y) = H(x, y + 1) - H(x, y - 1);
where G_x(x, y), G_y(x, y) and H(x, y) are, respectively, the horizontal gradient, the vertical gradient and the pixel value at pixel (x, y) of the input convective weather image;
the gradient magnitude and gradient direction at the pixel are:
G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2};
\alpha(x, y) = \arctan\left( \frac{G_y(x, y)}{G_x(x, y)} \right);
the convective weather image is divided into small cells; the gradient histogram of each cell is counted to form the descriptor of that cell; a preset number of cells form a block, and the descriptors of all cells in the block are concatenated to obtain the HOG feature of the block; the HOG features of all blocks in the convective weather image are concatenated to obtain the HOG feature of the image, completing the dimensionality reduction of the image data. The dimension reduction results are shown in FIG. 3.
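A minimal sketch of this pipeline using scikit-image's hog function is given below; the cell size, block size and number of orientation bins are illustrative assumptions, since the patent does not specify them.

```python
# HOG feature extraction sketch for a single grayscale convective weather image.
import numpy as np
from skimage.feature import hog

image = np.random.rand(100, 100)          # placeholder grayscale radar image

features = hog(
    image,
    orientations=9,                       # bins of the gradient-direction histogram
    pixels_per_cell=(8, 8),               # the "small cells"
    cells_per_block=(2, 2),               # cells concatenated into one block
    block_norm="L2-Hys",
)
print(features.shape)                     # one flattened HOG descriptor per image
```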
In this embodiment, the method of feeding the dimension-reduced image data into the unsupervised clusterer for unsupervised meteorological scene identification includes:
K-means clustering is adopted, and k cluster centers \mu_1, \mu_2, ..., \mu_k are selected at random;
for each dimension-reduced sample i, the class to which it belongs is calculated:
c^{(i)} = \arg\min_j \left\| x^{(i)} - \mu_j \right\|^2;
for each class j, its cluster center is recalculated:
\mu_j = \frac{ \sum_{i=1}^{m} 1\{ c^{(i)} = j \}\, x^{(i)} }{ \sum_{i=1}^{m} 1\{ c^{(i)} = j \} };
and the unsupervised clustering meteorological scene identification is carried out until convergence or the number of training iterations is reached.
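The following is a minimal sketch of this clustering step with scikit-learn's KMeans; the feature array is a random stand-in for the PCA/HOG/CAE-reduced features, and k = 5 mirrors the five scene classes discussed later rather than being prescribed at this point.

```python
# K-means scene-clustering sketch on dimension-reduced features.
import numpy as np
from sklearn.cluster import KMeans

X_reduced = np.random.rand(500, 50)       # placeholder for reduced image features

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(X_reduced)    # scene class assigned to each sample
centers = kmeans.cluster_centers_         # the cluster centers mu_1 ... mu_k
```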
FIG. 4 is a CAE dimension reduction diagram of the present invention;
In this embodiment, the meteorological image data reduced by CAE are also fed into the unsupervised clusterer for unsupervised meteorological scene identification, so as to provide a comparison for the deep convolutional self-coding embedded clustering method.
FIG. 5 is a schematic diagram of the improved deep convolutional self-coding embedded clustering method in the present invention.
In this embodiment, the method of constructing an improved deep convolutional self-coding embedded clustering (IDCEC) method to perform image dimensionality reduction and meteorological scene identification includes:
the autoencoder is a neural network based on unsupervised learning, used to extract intrinsic features of samples; it consists of an encoder and a decoder and is commonly used for feature learning or data dimensionality reduction. The encoder encodes the input data into latent variables, and the decoder reconstructs the original data from the latent variables. Because the autoencoder can reduce the dimensionality of data and effectively filter out redundant information, it has great advantages in image retrieval and is widely adopted.
The convolutional autoencoder (CAE) follows the same idea as the autoencoder: the data are first encoded and then decoded, the difference between the decoded data and the original data is used for training, and stable parameters are finally obtained. The convolutional self-coding neural network is trained to minimize its loss function; for an input set of convective weather images x = {x_1, x_2, ..., x_i}, suppose there are k convolution kernels, each parameterized by weights W^k and bias b^k, and let h^k denote the k-th feature map of the convolutional layer:
h^k = \sigma(x * W^k + b^k);
where \sigma is the ReLU activation function and * denotes 2D convolution;
the bias is broadcast over the whole feature map, with a single bias per latent map, so that each filter specializes in features of the entire input, which is then reconstructed as follows: each feature map h^k is convolved with the flipped (transposed) version of its kernel, the results are summed, and a bias is added, giving the deconvolution (reconstruction) operation:
y = \sigma\left( \sum_{k \in H} h^k * \tilde{W}^k + c \right);
where y = {y_1, y_2, ..., y_i} is the reconstructed image, H is the set of all feature maps, \tilde{W}^k denotes the weights flipped in both spatial dimensions, and c is a constant bias term;
the Euclidean distance between the input samples and the final feature reconstruction is compared, and optimization via the BP algorithm yields the complete convolutional autoencoder loss function:
E(\theta) = \frac{1}{2n} \sum_{i=1}^{n} (x_i - y_i)^2;
as with standard networks, the back-propagation algorithm is used to compute the gradient of the error function with respect to the parameters, which can be obtained by convolution:
\frac{\partial E(\theta)}{\partial W^k} = x * \delta h^k + \tilde{h}^k * \delta y;
where \delta h and \delta y are the deltas of the hidden state and the reconstructed state, respectively;
the weights are updated by stochastic gradient descent to train the convolutional self-coding network and complete the dimensionality reduction of the image data; the dimension reduction result is shown in fig. 4.
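As a concrete illustration, the sketch below implements a convolutional autoencoder trained with a mean-squared reconstruction loss in PyTorch; the layer sizes, kernel counts, 100x100 input resolution and optimizer choice are assumptions made for the example and are not disclosed in the patent.

```python
# Convolutional autoencoder sketch: conv + ReLU encoder, transposed-conv decoder,
# MSE reconstruction loss, stochastic gradient updates via backpropagation.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder: h^k = relu(x * W^k + b^k)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 100 -> 50
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 50 -> 25
            nn.ReLU(),
        )
        # decoder: transposed convolutions reconstruct y from the feature maps
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)      # low-dimensional feature maps (reduced representation)
        y = self.decoder(z)      # reconstructed image
        return y, z

model = ConvAutoencoder()
criterion = nn.MSELoss()         # Euclidean reconstruction loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(8, 1, 100, 100)   # dummy batch of convective weather images
for _ in range(10):              # abbreviated training loop
    y, _ = model(x)
    loss = criterion(y, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```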
As shown in fig. 5, in the deep embedded clustering algorithm, in order to better process image data and reduce its dimensionality, the fully-connected encoding and decoding layers of the deep autoencoder are replaced with convolutional layers, a flattening operation is finally used to obtain the feature vector, and clustering loss and reconstruction loss serve as the loss functions. However, because the final flattening operation that preserves the feature layer easily causes feature loss, the encoding and decoding layers are modified so that convolutional and pooling layers jointly perform image feature extraction, and the model is then trained with the clustering loss and reconstruction loss as its loss function; this is the improved deep self-coding embedded clustering model, i.e. the model obtained after the deep self-coding improvement is trained.
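The patent states that a clustering loss is combined with the reconstruction loss but does not spell out the clustering loss itself. The sketch below assumes a DEC/DCEC-style formulation (Student-t soft assignments against a sharpened target distribution, compared via KL divergence); the weighting factor gamma and the function names are illustrative assumptions rather than details from the patent.

```python
# Joint reconstruction + clustering loss sketch for the embedded-clustering stage.
import torch
import torch.nn.functional as F

def soft_assignment(z, centers, alpha=1.0):
    """Student-t similarity q_ij between embedded point z_i and center mu_j."""
    dist_sq = torch.cdist(z, centers) ** 2
    q = (1.0 + dist_sq / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    """Sharpened auxiliary distribution p_ij used as the clustering target."""
    weight = q ** 2 / q.sum(dim=0)
    return weight / weight.sum(dim=1, keepdim=True)

def idcec_loss(x, x_recon, z, centers, gamma=0.1):
    q = soft_assignment(z, centers)
    p = target_distribution(q).detach()
    recon_loss = F.mse_loss(x_recon, x)                          # reconstruction loss
    cluster_loss = F.kl_div(q.log(), p, reduction="batchmean")   # clustering loss
    return recon_loss + gamma * cluster_loss
```

In training, z would come from the flattened or pooled feature maps of the convolutional encoder and x_recon from the decoder, so that the encoder is optimized jointly for reconstruction fidelity and cluster compactness.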
In this embodiment, the method of selecting corresponding unsupervised clustering evaluation indices and evaluating the meteorological scene identification includes: considering that the current samples are difficult to label, similar-scene recognition is an unsupervised clustering process, so the clustering effect must be evaluated with internal clustering indices. Using classical internal indices, the meteorological scene identification is evaluated with the DBI (Davies-Bouldin Index; greater than or equal to 0, the closer to 0 the better), the average silhouette coefficient (ASC; between -1 and 1, the closer to 1 the better) and the CH score (Calinski-Harabasz score; greater than 0, the higher the better). The two-class and five-class similar-scene results under the different methods are evaluated, and the results are shown in Table 1;
Table 1: Evaluation results (provided as an image in the original publication)
The DBI (Davies-Bouldin index) is:
DBI = \frac{1}{k} \sum_{i=1}^{k} \max_{j \neq i} \left( \frac{avg(C_i) + avg(C_j)}{d_{cen}(C_i, C_j)} \right);
where avg(C_i) and avg(C_j) are the average distances between samples within clusters C_i and C_j, and d_{cen}(C_i, C_j) is the distance between the centers of clusters C_i and C_j;
the average silhouette coefficient is:
S = \frac{1}{N} \sum_{i=1}^{N} \frac{b_i - a_i}{\max(a_i, b_i)};
where a_i is the average distance from point i to all other points in its own cluster, and b_i is the minimum, over all other clusters, of the average distance from point i to the points of that cluster;
the CH index is:
CH = \frac{SS_B}{SS_W} \times \frac{N - k}{k - 1};
where k is the number of clusters, N is the sample size, SS_B is the between-class variance and SS_W is the within-class variance;
the DBI is greater than or equal to 0, and values closer to 0 indicate a better clustering;
the average silhouette coefficient lies between -1 and 1, and values closer to 1 indicate a better clustering;
the CH score is greater than 0, and higher scores indicate a better clustering.
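All three indices are available in scikit-learn, so the evaluation step can be sketched as follows; the feature array and labels are random placeholders standing in for the reduced features and scene assignments produced by the preceding steps.

```python
# Unsupervised clustering-evaluation sketch using the three internal indices.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (
    davies_bouldin_score,      # DBI: >= 0, closer to 0 is better
    silhouette_score,          # ASC: in [-1, 1], closer to 1 is better
    calinski_harabasz_score,   # CH:  > 0, higher is better
)

X_reduced = np.random.rand(500, 50)       # placeholder reduced features
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_reduced)

print("DBI:", davies_bouldin_score(X_reduced, labels))
print("ASC:", silhouette_score(X_reduced, labels))
print("CH :", calinski_harabasz_score(X_reduced, labels))
```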
FIG. 6 is a schematic diagram of PCA-KMS scene recognition results in the present invention;
FIG. 7 is a diagram illustrating a HOG-KMS scene recognition result in the present invention;
FIG. 8 is a schematic diagram of CAE-KMS scene recognition results in the present invention;
FIG. 9 is a diagram illustrating IDCEC scene recognition results in the present invention;
FIG. 10 is a diagram of the actual flow distribution of the terminal area of the busy period in the five meteorological scenes identified by IDCEC in the present invention.
In this embodiment, the method for verifying the identified meteorological scenes and determining their characteristics includes: the identified meteorological scenes are verified through visualization and actual operation data, and the characteristics of each identified scene are determined.
Fig. 6 shows the identification results of the PCA-KMS method for the five types of terminal area meteorological scenes (class 1 to class 5 from top to bottom). PCA-KMS can identify five scene types, but the second class mixes convective weather to the southeast of the airport with convective weather near the airport; the third and fourth classes are similar, both corresponding to convective weather south of the airport, with the third class slightly less severe than the fourth; and the fifth class is severe convective weather covering most of the terminal area. The PCA-KMS method can therefore identify terminal area meteorological scenes, but its results correlate more with intensity values, and it separates the spatial distribution of convective weather poorly.
Fig. 7 shows the recognition results of the HOG-KMS method for the five types of terminal area meteorological scenes (class 1 to class 5 from top to bottom). Compared with PCA-KMS, HOG-KMS identifies terminal area meteorological scenes better: classes 1 to 5 represent no/weak convective weather, convective weather near the airport, convective weather south of the airport, convective weather north of the airport, and severe convective weather covering most of the terminal area. However, convective weather north and south of the airport with contours similar to that near the airport appears in class 2, and convective weather near the airport appears in classes 3 and 4. HOG-KMS thus identifies the spatial distribution of terminal area meteorological scenes better than PCA-KMS, but meteorological contours cause poor separation of some scenes.
Fig. 8 shows the recognition results of the CAE-KMS method for the five types of terminal area meteorological scenes (class 1 to class 5 from top to bottom). Classes 1 to 5 again represent no/weak convective weather, convective weather near the airport, convective weather south of the airport, convective weather north of the airport, and severe convective weather covering most of the terminal area, but convective weather south of the airport appears in class 2 and convective weather near the airport appears in class 4. Like HOG-KMS, CAE-KMS identifies the spatial distribution of terminal area meteorological scenes better than PCA-KMS, yet some scenes are still poorly separated.
Fig. 9 shows the identification results of the IDCEC method for the five types of terminal area meteorological scenes (class 1 to class 5 from top to bottom); classes 1 to 5 respectively represent no/weak convective weather, convective weather near the airport, convective weather south of the airport, convective weather north of the airport, and severe convective weather covering most of the terminal area.
In order to further verify the scene recognition results, they are analyzed using aircraft operation data.
As can be seen from fig. 10, traffic flow is highest in class 1 and lowest in class 5, corresponding respectively to the identified scenes of weak/no convective weather and severe convective weather covering most of the terminal area. Class 2 is covered by convective weather near the airport, and its modal flow of 8 aircraft per 10 minutes is higher only than that of class 5. The flow distributions of classes 3 and 4 show that terminal area traffic is mainly concentrated in the north, so the flow decreases more when the north is affected by convective weather. IDCEC can therefore effectively identify terminal area meteorological scenes, and the identification results correlate strongly with actual flight operations.
In conclusion, the invention performs image dimensionality reduction and meteorological scene identification by constructing an embedded clustering method based on improved deep convolutional self-coding; selects corresponding unsupervised clustering evaluation indices to evaluate the meteorological scene identification; and verifies the identified meteorological scenes and determines their characteristics. The classification and identification of meteorological scenes are thus realized, controllers are given a more intuitive view of historical outcomes, and a more effective a priori analysis means is provided for on-site control operations.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (5)

1. A terminal region meteorological scene identification method for improving deep convolutional self-coding embedded clustering is characterized by comprising the following steps:
constructing an embedded clustering method based on improved deep convolutional self-coding, and performing dimensionality reduction and meteorological scene identification on the image; selecting corresponding unsupervised clustering effect evaluation indices, and evaluating the meteorological scene identification; and
the identified weather scene is verified and the characteristics of the identified scene are determined.
2. The method for improved terminal-region weather scene identification of deep convolutional self-coding embedded clusters as claimed in claim 1,
the method for constructing the embedded clustering method based on improved deep convolutional self-coding and performing image dimensionality reduction and meteorological scene identification comprises the following steps:
the convolutional self-coding neural network is trained to minimize its loss function; for an input set of convective weather images
x = {x_1, x_2, ..., x_i} with k convolution kernels, each kernel is parameterized by weights W^k and bias b^k, and h^k denotes the k-th feature map of the convolutional layer:
h^k = \sigma(x * W^k + b^k);
where \sigma is the ReLU activation function and * denotes 2D convolution;
each feature map h^k is convolved with the flipped (transposed) version of its kernel, the results are summed, and a bias is added, giving the deconvolution (reconstruction) operation:
y = \sigma\left( \sum_{k \in H} h^k * \tilde{W}^k + c \right);
where y = {y_1, y_2, ..., y_i} is the reconstructed image, H is the set of all feature maps, and
\tilde{W}^k denotes the weights flipped in both spatial dimensions;
the Euclidean distance between the input samples and the final feature reconstruction is compared, and the complete convolutional autoencoder loss function follows from the BP algorithm:
E(\theta) = \frac{1}{2n} \sum_{i=1}^{n} (x_i - y_i)^2;
the gradient values are obtained through convolution operations:
\frac{\partial E(\theta)}{\partial W^k} = x * \delta h^k + \tilde{h}^k * \delta y;
where \delta h and \delta y are the deltas of the hidden state and the reconstructed state, respectively;
and the weights are updated by stochastic gradient descent to train the convolutional self-coding network, completing the dimensionality reduction of the image data.
3. The method for improved terminal-region weather scene identification of deep convolutional self-coding embedded clusters as claimed in claim 2,
the method for constructing the embedded clustering method based on improved deep convolutional self-coding and performing image dimensionality reduction and meteorological scene identification further comprises the following steps:
replacing the fully-connected encoding and decoding layers of the deep autoencoder with convolutional layers and using a flattening operation to obtain the feature vector, with clustering loss and reconstruction loss as the loss functions; modifying the encoding and decoding layers so that convolutional and pooling layers jointly perform image feature extraction; and then training the model with the clustering loss and reconstruction loss as its loss function.
4. The method for improved terminal-region weather scene identification of deep convolutional self-coding embedded clusters as claimed in claim 3,
the method for selecting the corresponding unsupervised clustering effect evaluation index and evaluating the meteorological scene identification comprises the following steps:
evaluating the meteorological scene identification according to the DBI index, the average silhouette coefficient and the CH score;
the DBI (Davies-Bouldin index) is:
DBI = \frac{1}{k} \sum_{i=1}^{k} \max_{j \neq i} \left( \frac{avg(C_i) + avg(C_j)}{d_{cen}(C_i, C_j)} \right);
where avg(C_i) and avg(C_j) are the average distances between samples within clusters C_i and C_j;
d_{cen}(C_i, C_j) is the distance between the centers of clusters C_i and C_j;
the average silhouette coefficient is:
S = \frac{1}{N} \sum_{i=1}^{N} \frac{b_i - a_i}{\max(a_i, b_i)};
where a_i is the average distance from point i to all other points in its own cluster, and b_i is the minimum, over all other clusters, of the average distance from point i to the points of that cluster;
the CH index is:
CH = \frac{SS_B}{SS_W} \times \frac{N - k}{k - 1};
where k is the number of clusters, N is the sample size, SS_B is the between-class variance and SS_W is the within-class variance;
the DBI is greater than or equal to 0, and values closer to 0 indicate a better clustering;
the average silhouette coefficient lies between -1 and 1, and values closer to 1 indicate a better clustering;
the CH score is greater than 0, and higher scores indicate a better clustering.
5. The method for improved terminal-region weather scene identification of deep convolutional self-coding embedded clusters as claimed in claim 4,
the method of verifying the identified weather scene and determining the characteristics of the identified scene comprises:
and verifying the identified meteorological scene through a visualization method and actual operation data, and determining the characteristics of the identified scene.
CN202111323107.6A 2021-11-09 2021-11-09 Terminal area meteorological scene identification method for improving deep convolutional self-coding embedded clustering Pending CN113989676A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111323107.6A CN113989676A (en) 2021-11-09 2021-11-09 Terminal area meteorological scene identification method for improving deep convolutional self-coding embedded clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111323107.6A CN113989676A (en) 2021-11-09 2021-11-09 Terminal area meteorological scene identification method for improving deep convolutional self-coding embedded clustering

Publications (1)

Publication Number Publication Date
CN113989676A (en) 2022-01-28

Family

ID=79747500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111323107.6A Pending CN113989676A (en) 2021-11-09 2021-11-09 Terminal area meteorological scene identification method for improving deep convolutional self-coding embedded clustering

Country Status (1)

Country Link
CN (1) CN113989676A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882263A (en) * 2022-05-18 2022-08-09 南京智慧航空研究院有限公司 Convection weather similarity identification method based on CNN image mode
CN114882263B (en) * 2022-05-18 2024-03-08 南京智慧航空研究院有限公司 Convection weather similarity identification method based on CNN image mode

Similar Documents

Publication Publication Date Title
CN107451545B (en) The face identification method of Non-negative Matrix Factorization is differentiated based on multichannel under soft label
CN105574548A (en) Hyperspectral data dimensionality-reduction method based on sparse and low-rank representation graph
CN113989747A (en) Terminal area meteorological scene recognition system
CN107957946B (en) Software defect prediction method based on neighborhood embedding protection algorithm support vector machine
CN105678261B (en) Based on the direct-push Method of Data with Adding Windows for having supervision figure
CN105184298A (en) Image classification method through fast and locality-constrained low-rank coding process
CN106874862B (en) Crowd counting method based on sub-model technology and semi-supervised learning
CN107679509A (en) A kind of small ring algae recognition methods and device
CN104392231A (en) Block and sparse principal feature extraction-based rapid collaborative saliency detection method
CN110941734A (en) Depth unsupervised image retrieval method based on sparse graph structure
CN110889865A (en) Video target tracking method based on local weighted sparse feature selection
CN106780639A (en) Hash coding method based on the sparse insertion of significant characteristics and extreme learning machine
CN115732034A (en) Identification method and system of spatial transcriptome cell expression pattern
CN110991554B (en) Improved PCA (principal component analysis) -based deep network image classification method
CN113989676A (en) Terminal area meteorological scene identification method for improving deep convolutional self-coding embedded clustering
CN108985346B (en) Existing exploration image retrieval method fusing low-level image features and CNN features
CN108388918B (en) Data feature selection method with structure retention characteristics
CN113378021A (en) Information entropy principal component analysis dimension reduction method based on semi-supervision
CN111325158B (en) CNN and RFC-based integrated learning polarized SAR image classification method
Zhang et al. Point clouds classification of large scenes based on blueprint separation convolutional neural network
CN113316080B (en) Indoor positioning method based on Wi-Fi and image fusion fingerprint
CN115424050A (en) Method and system for detecting and positioning ceramic tile surface defects and storage medium
CN112733925A (en) Method and system for constructing light image classification network based on FPCC-GAN
CN112101405A (en) Robust depth self-encoder and density peak value-based track clustering and abnormal value identification method
CN112560894A (en) Improved 3D convolutional network hyperspectral remote sensing image classification method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination