CN113807254A - Intelligent clustering method based on hierarchical self-organizing mapping digital signal modulation mode - Google Patents


Info

Publication number
CN113807254A
CN113807254A (application CN202111093382.3A)
Authority
CN
China
Prior art keywords
layer
neurons
self
neuron
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111093382.3A
Other languages
Chinese (zh)
Inventor
邢座程
李泽润
王庆林
张洋
隋兵才
朱满
史红发
郭阳
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202111093382.3A
Publication of CN113807254A

Classifications

    • G06F 2218/02 Preprocessing; G06F 2218/04 Denoising (aspects of pattern recognition specially adapted for signal processing)
    • G06F 18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23 Clustering techniques
    • G06N 3/045 Combinations of networks
    • G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G06F 2218/08 Feature extraction


Abstract

The invention discloses an intelligent clustering method for digital signal modulation modes based on hierarchical self-organizing maps, which comprises the following steps: step S1: acquire a target data sequence; step S2: extract the normalized higher-order cumulant and amplitude-moment features of the signal to obtain a high-dimensional feature space; step S3: process the high-dimensional feature data with a hierarchical self-organizing map model, clustering the feature vectors of MPSK (M-ary phase shift keying) and MQAM (M-ary quadrature amplitude modulation) signals of different orders in a hierarchical process. The method has the advantages of a simple principle, simple and convenient operation, savings in computing resources, and a shorter time to converge to the expected clustering result.

Description

Intelligent clustering method based on hierarchical self-organizing mapping digital signal modulation mode
Technical Field
The invention mainly relates to the technical field of wireless communication, and in particular to an intelligent clustering method for digital signal modulation modes based on hierarchical self-organizing maps.
Background
Automatic Modulation Classification (AMC) is the practical and accurate identification of a modulated signal when no a priori information is available. When an over-the-air signal is received, the modulation type of the signal is first identified, and the information in the received signal is then decoded. In both the civilian and military fields, automatic modulation classification plays an important role in complex non-cooperative wireless communication environments.
The received signals whose modulation type is to be identified usually come from non-cooperative systems and are invariably affected by various disturbances, which undoubtedly makes the identification task more challenging. Subtractive clustering can classify different types of modulation signals without setting the number of clusters in advance, but the process of adjusting the rejection and acceptance thresholds of the cluster density is cumbersome. In addition, the feature quantities of the modulation signals to be identified are complex, and the support vector machine, a method suited to linear classification, struggles to realize nonlinear classification on such signals. As a popular unsupervised learning model, the self-organizing map (SOM) is a network that can form selective responses to input data. The self-organizing map can be used to study the topology of the input samples and the distribution of sample features, identifying the inherent differences among the input samples. Moreover, it is a self-learning network formed by a fully connected neuron array that requires no large training data sets or labels.
The hierarchical self-organizing map is an improved model of the self-organizing map. It contains several self-organizing map layers in a hierarchy. The purpose of this model is to generate a small number of neurons in the root layer and further neurons in the sub-layers to form new mappings, producing a clustering effect on signals of many different-order modulation types. To obtain discriminative features, higher-order cumulants and amplitude moments capture the inherent differences among the MPSK (M-ary phase shift keying) and MQAM (M-ary quadrature amplitude modulation) digital signals to be identified, and the signals are separated in the proposed two-layer self-organizing map network.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the technical problems in the prior art, the invention provides an intelligent clustering method for digital signal modulation modes based on hierarchical self-organizing maps, which has a simple principle, is simple and convenient to operate, saves computing resources, and converges to the expected clustering result in a shorter time.
In order to solve the technical problems, the invention adopts the following technical scheme:
an intelligent clustering method based on a layered self-organizing mapping digital signal modulation mode comprises the following steps:
step S1: acquiring a target data sequence;
step S2: extracting the normalized high-order cumulant and amplitude moment characteristics of the signal to obtain a characteristic space with a high-dimensional vector;
step S3: and processing high-dimensional characteristic data by adopting a layered self-organizing mapping model, and clustering characteristic vectors of MPSK and MQAM signals with different orders in a layering process.
As a further improvement of the process of the invention: in the hierarchical self-organizing map model, the root layer is used for coarse clustering of the original data, serves network growth, and controls the hierarchical mapping of the proposed model; when a sub-layer is generated from the root layer, the length of the sub-layer is set to a specific value according to the data characteristics; the sub-layers are typically initialized with samples so as to exploit the data in the original feature space; after the weight vectors are initialized, the model executes the competition step and is trained with the original vectors applied by the root layer; if the neurons on the sub-layer finally satisfy the network layering condition, the hierarchical self-organizing algorithm terminates and the whole clustering task is finished.
As a further improvement of the process of the invention: when the number of layers is 2, the layer with index 1 is the root layer of the hierarchical self-organizing map model and processes all five modulation signals; the sub-layer with index 2-1 processes the two MQAM signals, and the sub-layer with index 2-2 processes the three MPSK signals; the two sub-layers differ in the number of cluster objects and samples, so the clusters handling different modulation types can be distinguished according to the number of samples in the two sub-layers.
As a further improvement of the process of the invention: the self-organizing process in step S3 includes:
step S301: initializing; all connection weights are initialized by small random values;
step S302: competition; for each input mode, the neurons calculate their respective discrimination function values, providing a basis for cooperation;
step S303: cooperation; the winning neuron of the neural network affects other peripheral neurons from near to far, and gradually changes from nerve excitation to nerve inhibition; the winning neuron determines the spatial position of the excitatory neuron in the topological neighborhood, and provides a basis for the adjustment of the weight vector.
Step S304: adjusting; the stimulated neurons reduce the discrimination function values related to the input mode by properly adjusting the related connection weights, so that the response of winning neurons to subsequent application of similar input modes is enhanced;
step S305: layering; after the training of the root layer is finished, whether all neurons on the root layer meet the network layering condition is checked; for the neuron j which does not meet the condition, generating a new layer on the basis of the root layer; the feature samples of the sub-layers are relatively finely clustered on the basis of the root layer.
As a further improvement of the process of the invention: in the step S302, for the input vectors x_i (i = 1, 2, …, M), where M is the number of input vectors, the weight vector of output neuron j is w_j (j = 1, 2, …, N), where N is the number of neurons in the output layer; the competitive discriminant function is defined as the minimum Euclidean distance between the input vector x_i and the weight vectors w_j:
min_j ||x_i − w_j||, j = 1, 2, …, N
for an input vector, each neuron in the SOM neural network computes the value of a discriminant function, where the so-called discriminant function is the basis for competition between neurons; the particular neuron with the smallest discriminant function value is declared the winner.
As a further improvement of the process of the invention: in the step S303, N_{j,i} is a topological neighborhood function centered on the winning neuron i and comprising a set of cooperative neurons, one of which is neuron j; d_{j,i} is the lateral distance between the winning neuron i and the excited neuron j; N_{j,i} attains its maximum when d_{j,i} equals zero and is symmetric about that point; the amplitude of the neighborhood function N_{j,i} decreases monotonically as the lateral distance d_{j,i} increases.
As a further improvement of the process of the invention: in step S303, the topological neighborhood function includes one or more of a gaussian function, a mexican hat function, a bubble function, and a triangular function.
As a further improvement of the process of the invention: in step S305, for the neurons of each network on the sub-layer, the input data is a subsequence of the original data; the network layering conditions are as follows:
qe_j < τ · qe_0, j = 1, 2, …, N
τ is a parameter controlling the size of the model, qe_0 is the initial quantization error of the root layer, and N is the number of neurons on the root layer; after the training of the root layer is finished, the algorithm checks whether all neurons on the root layer satisfy the network layering condition; for any neuron j that does not satisfy the condition, a new layer is generated on the basis of the root layer; the feature samples of the sub-layers are clustered relatively finely on the basis of the root layer.
Compared with the prior art, the invention has the advantages that:
1. the intelligent clustering method based on the layered self-organizing mapping digital signal modulation mode has rough clustering capability on digital signals of different modulation types, and realizes the clustering effect of signals of different orders in a sub-layer network corresponding to the same signal; computational resources may also be saved and the time required to converge to a desired clustering result is shorter.
2. The invention discloses an intelligent clustering method based on a hierarchical self-organizing mapping digital signal modulation mode. The method overcomes the defects of large structure size and much resource consumption of the traditional single-layer self-organizing mapping model. The main advantages of using the hierarchical self-organizing mapping model are that the clustering process is easy to explain and understand, has enough credibility, the training process is reasonable, less computing resources are needed, and the operation speed is high. The model takes all input data into consideration, influences are generated on the position of data mapping through data reduction and a weight updating algorithm based on similarity, MQAM and MPSK signals are divided into two layers for clustering, and an understandable data clustering visualization effect is generated. The model is based on the idea of coarse-grained clustering and then fine-grained clustering, the required weight vectors are fewer, less computing resources are occupied, and a layered self-organizing mapping model can be trained in a shorter time.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Fig. 2 is a schematic diagram of the self-organizing map model constructed in a specific application example of the invention.
FIG. 3 is a diagram illustrating final clustering in a specific application example of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and specific examples.
As shown in fig. 1, the intelligent clustering method based on the hierarchical self-organizing map digital signal modulation scheme of the present invention includes:
step S1: acquiring a target data sequence;
step S2: extracting the normalized high-order cumulant and amplitude moment characteristics of the signal to obtain a characteristic space with a high-dimensional vector;
step S3: and processing high-dimensional characteristic data by adopting a layered self-organizing mapping model, and clustering characteristic vectors of MPSK and MQAM signals with different orders in a layering process.
The self-organizing map used in the present invention is an unsupervised artificial network, unlike convolutional neural networks based on back-propagation algorithms. It simulates the excitation, inhibition and coordination among neurons in the human brain nervous system and the competition among neurons, and realizes the biological dynamics principle of information processing. On the basis of the self-organizing mapping structure with only one competition layer, a layered self-organizing mapping structure is introduced to overcome the defects, and input vectors are more finely clustered on the basis of a root layer.
Specifically, the method adopts an unsupervised clustering mode to estimate the number of categories, and an adopted algorithm is self-organizing mapping. Self-organizing maps are competitive learning, with competing activations between output neurons, with the result that only one neuron is activated at any time, this activated neuron being referred to as the winner neuron. The competition process is implemented by laterally suppressed connections (negative feedback paths) between neurons, and this transformation is performed adaptively in a topologically ordered manner. During the competitive learning process described above, neurons are selectively fine-tuned to accommodate various input patterns (stimuli) or input pattern classes. The positions of the neurons thus adjusted (i.e., the winning neurons) become ordered and a coordinate system meaningful to the input features is created on the grid. Thus, the self-organizing map forms the topological map required for the input pattern.
In a specific application example, the invention can divide MQAM and MPSK signals into two layers for clustering by utilizing a layered self-organizing mapping model based on the high-order cumulant and amplitude moment characteristics of input digital signals, thereby saving calculation resources.
As a preferred embodiment, in order to reduce the training process of the hierarchical self-organizing mapping model, the invention further uses the root layer to cluster the two major classes to obtain the minor classes with different orders. The main purpose of self-organizing maps is to store a large set of input vectors by finding a smaller set of prototypes to provide a good approximation to the original input space. The theoretical basis of the self-organizing mapping model is rooted in the vector quantization theory, and the algorithm idea is derived from dimension reduction or data compression. And a self-organizing mapping model of the input vector is reconstructed by adopting two competition layers, and the original space is approximated by two times of dimensionality reduction, so that the required computing resources are reduced.
In a specific application example, in step S1, the sampled original digital signal may be represented as:
r(n) = a · e^{j(2π f_0 n + θ_0)} · s(n) + w(n)
where a is the attenuation factor, n is the index of the sampled signal, f_0 is the frequency offset of the signal, θ_0 is the phase offset, s(n) is the transmit-end signal, and w(n) is additive white Gaussian noise of constant power.
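As an illustrative sketch (not the patent's implementation), the sampled-signal model above can be simulated in NumPy; the parameter values, the SNR-based noise scaling, and the QPSK transmit sequence are all assumptions for demonstration:

```python
import numpy as np

def sampled_signal(s, a=0.8, f0=0.01, theta0=0.3, snr_db=20, seed=None):
    """Simulate r(n) = a * exp(j*(2*pi*f0*n + theta0)) * s(n) + w(n)."""
    rng = np.random.default_rng(seed)
    n = np.arange(len(s))
    sig = a * np.exp(1j * (2 * np.pi * f0 * n + theta0)) * s
    # constant-power complex AWGN at the requested SNR
    noise_power = np.mean(np.abs(sig) ** 2) / 10 ** (snr_db / 10)
    w = np.sqrt(noise_power / 2) * (rng.standard_normal(len(s))
                                    + 1j * rng.standard_normal(len(s)))
    return sig + w

rng = np.random.default_rng(0)
s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 1000)))  # QPSK symbols
r = sampled_signal(s, seed=1)
print(r.shape, r.dtype)
```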
The higher-order cumulant of a zero-mean, k-th-order stationary random process x(t) is defined as:
C_{kx}(τ_1, τ_2, …, τ_{k−1}) = Cum(x(t), x(t+τ_1), …, x(t+τ_{k−1}))
where Cum(·) denotes the cumulant of x(t) and the τ_i are time delays of x(t). The p-th-order mixed moment of a random process r(t) is
M_{pq} = E[r(t)^{p−q} (r*(t))^{q}]
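The mixed moment M_pq can be estimated from samples by replacing the expectation with a sample mean; this is a minimal sketch, with an illustrative unit-power QPSK sequence:

```python
import numpy as np

def mixed_moment(r, p, q):
    """Sample estimate of M_pq = E[r^(p-q) * conj(r)^q]."""
    return np.mean(r ** (p - q) * np.conj(r) ** q)

rng = np.random.default_rng(0)
x = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 50000)))  # unit-power QPSK
m21 = mixed_moment(x, 2, 1)  # average power, close to 1
m20 = mixed_moment(x, 2, 0)  # close to 0 for QPSK
print(abs(m21), abs(m20))
```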
Higher-order cumulants of order greater than two suppress Gaussian noise [4]. The adopted higher-order cumulants, expressed through the mixed moments M_pq, are:
C40 = Cum(x, x, x, x) = M40 − 3M20^2
C41 = Cum(x, x, x, x*) = M41 − 3M20M21
C42 = Cum(x, x, x*, x*) = M42 − |M20|^2 − 2M21^2
C60 = M60 − 15M40M20 + 30M20^3
C61 = M61 − 5M21M40 − 10M20M41 + 30M20^2 M21
C62 = M62 − 6M20M42 − 8M21M41 − M22M40 + 6M20^2 M22 + 24M21^2 M20
C63 = M63 − 9M21M42 + 12M21^3 − 3M20M43 − 3M22M41 + 18M20M21M22
C80 = M80 − 35M40^2 − 28M60M20 + 420M20^2 M40 − 630M20^4
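A hedged sketch of computing fourth-order cumulants from estimated mixed moments. Only the expansion of C41 appears legibly in the text; the expansions used for C40 and C42 follow the standard textbook convention and are assumptions here:

```python
import numpy as np

def moments(x, pairs):
    """Sample mixed moments M_pq for the requested (p, q) pairs."""
    return {(p, q): np.mean(x ** (p - q) * np.conj(x) ** q) for p, q in pairs}

def fourth_order_cumulants(x):
    M = moments(x, [(2, 0), (2, 1), (4, 0), (4, 1), (4, 2)])
    C40 = M[(4, 0)] - 3 * M[(2, 0)] ** 2
    C41 = M[(4, 1)] - 3 * M[(2, 0)] * M[(2, 1)]         # matches the patent's expansion
    C42 = M[(4, 2)] - abs(M[(2, 0)]) ** 2 - 2 * M[(2, 1)] ** 2
    return C40, C41, C42

rng = np.random.default_rng(0)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 100000)))
C40, C41, C42 = fourth_order_cumulants(qpsk)
print(abs(C40), abs(C42))  # both close to 1 for unit-power QPSK
```

Distinct constellations give distinct theoretical cumulant values, which is what makes these quantities usable as clustering features.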
as a preferred embodiment, the invention further provides two amplitude-moment features, μ42 and Ac, to distinguish different digital signals. The expressions of μ42 and Ac are as follows:
μ42 = E{A^4(n)} / (E{A^2(n)})^2
Ac = (1/N) Σ_{n=1}^{N} |A(n)/m_a − 1|
where A(n) is the instantaneous amplitude of the received signal r(n) and m_a is the average amplitude of a fixed-length signal sample. μ42 reflects the degree of aggregation of the received signal and can suppress phase noise. Ac represents the average amplitude of the received signal and suppresses the interference effect in the absence of a direct-current component. The constructed feature space is a ten-dimensional vector space, and the feature expression is
F = [C40, C41, C42, C60, C61, C62, C63, C80, μ42, Ac]
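Assembling the feature vector F can be sketched as follows. The exact expressions for μ42 and Ac are given in figures not reproduced in this text, so the forms below (amplitude kurtosis and a DC-free amplitude spread) are common stand-ins, not the patent's definitions:

```python
import numpy as np

def amplitude_features(r):
    A = np.abs(r)                                    # instantaneous amplitude A(n)
    mu42 = np.mean(A ** 4) / np.mean(A ** 2) ** 2    # amplitude kurtosis (assumed form)
    ma = np.mean(A)                                  # average amplitude m_a
    Ac = np.mean(np.abs(A / ma - 1))                 # DC-free spread (assumed form)
    return mu42, Ac

def feature_vector(r, cumulants):
    """Stack cumulant magnitudes with the two amplitude moments into F."""
    mu42, Ac = amplitude_features(r)
    return np.array([abs(c) for c in cumulants] + [mu42, Ac])

rng = np.random.default_rng(0)
r = np.exp(1j * 2 * np.pi * rng.random(10000))   # constant-modulus toy signal
F = feature_vector(r, cumulants=[0.9, -1.0])     # placeholder cumulant values
print(F.shape, F[-2], F[-1])  # mu42 = 1 and Ac = 0 for a constant modulus
```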
The self-organizing map completes the dimension-reducing mapping from the feature space to the output plane. The mapping makes each output cell correspond to one type of pattern of the input signal, and the patterns corresponding to adjacent cells are similar; in other words, the mapping preserves topological features. Unlike traditional clustering methods, the self-organizing map is an unsupervised clustering algorithm that can map input patterns of any dimensionality onto the two-dimensional grid of the output layer while keeping the topological structure of the input patterns unchanged. The main factors affecting clustering accuracy and clustering performance are feature similarity, the distance or similarity measure, and imbalance of the data sets.
In a specific application example, the self-organizing process in step S3 includes:
step S301: initialization: all connection weights are initialized with small random values. The initial weights are typically generated using relatively small randomly generated constants so that the weight vectors are more fully distributed in the sample space. During the network training process, the initial weight closer to the sample is continuously adjusted. The weight vectors that are far away from the sample adjust relatively slowly due to the effect of lateral suppression. The worst case is that the overall samples are too concentrated, then the weight vectors initially located far from the samples will never adjust, eventually grouping the samples into one class. Furthermore, it has to be considered that the probability distribution of the initial weights is approximately close to the distribution of the input samples.
Step S302: competition: for each input mode, the neurons compute their respective discrimination function values, providing a basis for cooperation.
For the input vectors x_i (i = 1, 2, …, M), where M is the number of input vectors, the weight vector of output neuron j is w_j (j = 1, 2, …, N), where N is the number of neurons in the output layer. The competitive discriminant function is defined as the minimum Euclidean distance between the input vector x_i and the weight vectors w_j:
min_j ||x_i − w_j||, j = 1, 2, …, N
for the input vector, each neuron in the SOM neural network computes the value of a discriminant function, where the so-called discriminant function is the basis for competition between neurons. The particular neuron with the smallest discriminant function value is declared the winner.
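The competition step, declaring as winner the neuron whose weight vector has the minimum Euclidean distance to the input, can be sketched as (toy weight matrix for illustration):

```python
import numpy as np

def winner(x, W):
    """Index of the neuron whose weight vector is nearest to x (Euclidean)."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))

W = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [2.0, 0.0]])
print(winner(np.array([0.9, 1.1]), W))  # -> 1
```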
Step S303: cooperation: the winning neuron of the neural network influences the surrounding neurons from near to far, the effect changing gradually from neural excitation to neural inhibition. The winning neuron determines the spatial position of the excited neurons in its topological neighborhood, thus providing a basis for the adjustment of the weight vectors.
N_{j,i} is a topological neighborhood function centered on the winning neuron i and comprising a set of cooperative neurons, one of which is neuron j. d_{j,i} is the lateral distance between the winning neuron i and the excited neuron j; N_{j,i} attains its maximum when d_{j,i} equals zero and is symmetric about that point, and the amplitude of the neighborhood function decreases monotonically as the lateral distance d_{j,i} increases. Four common topological neighborhood functions are the Gaussian, Mexican-hat, bubble, and triangle functions.
The expression of the gaussian topology neighborhood function is as follows:
N_{j,i}(k) = exp(−d_{j,i}^2 / (2σ^2(k)))
the expression of the mexican hat topology neighborhood function is as follows:
N_{j,i} = (1 − d_{j,i}^2/σ^2) · exp(−d_{j,i}^2 / (2σ^2))
the expression of the bubble topology neighborhood function is as follows:
N_{j,i} = 1 if d_{j,i} ≤ σ, and 0 otherwise
the expression of the triangle neighborhood function is as follows:
N_{j,i} = 1 − d_{j,i}/σ if d_{j,i} ≤ σ, and 0 otherwise
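The four neighborhood functions can be sketched as follows, with σ the neighborhood radius; the exact parameterizations in the patent's figures are not reproduced in this text, so these follow common conventions:

```python
import numpy as np

def gaussian(d, sigma):
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def mexican_hat(d, sigma):
    return (1 - d ** 2 / sigma ** 2) * np.exp(-d ** 2 / (2 * sigma ** 2))

def bubble(d, sigma):
    return np.where(d <= sigma, 1.0, 0.0)

def triangle(d, sigma):
    return np.where(d <= sigma, 1 - d / sigma, 0.0)

d = np.array([0.0, 1.0, 2.0])
print(gaussian(d, 1.0))  # maximum 1 at d = 0, decaying with distance
print(bubble(d, 1.5))    # 1 inside the radius, 0 outside
```

All four are maximal at d = 0 and shrink (or cut off) with lateral distance, matching the cooperation behavior described above.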
step S304: adjusting: the excited neurons reduce the discrimination function values associated with the input pattern by appropriately adjusting the associated connection weights, such that the winning neurons respond more to subsequent applications of similar input patterns. Self-organizing maps involve adaptive or learning processes that self-organize through output nodes to form a feature map between input and output layers. Not only will the winning neuron update the weights, but its neighbors will also update their weights. The k-th step neighborhood weight vector updating formula is
w_j(k+1) = w_j(k) + η(k) · N_{j,i}(k) · (x_i − w_j(k))
where η(k) is the learning rate and N_{j,i}(k) is the topological neighborhood function at step k. With the initial learning rate set to η(0), the iterative expression of η(k) is:
η(k) = η(0) · (1 − k/M)
where M is the number of iterations. As k increases, η (k) gradually decreases. The reduction in learning rate allows the SOM model to converge faster.
The method changes the discrimination function value of the input vector by utilizing the adjusting weight of the exciting neuron and enhances the response of the winning neuron to the subsequent similar input vector. On the representation of the input data, the synaptic weight vector tends to follow the distribution of the input vector due to the update of the neighborhood. Then, the tuning algorithm maps the features into the input space for topological ordering, and neurons near the output plane will have similar synaptic weight vectors. This adaptive process provides an accurate statistical quantification of the input space. Furthermore, the number of iterations of convergence depends strongly on the dimension of the input space. If there is no significant change in the output mapping in one iteration, the adjustment will end.
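One adaptation step combining the neighborhood function, the decaying learning rate, and the weight update can be sketched as follows. The linear decay schedule and the 1-D map layout are assumptions, since the patent's exact decay expression is in an unreproduced figure:

```python
import numpy as np

def decayed_lr(eta0, k, M):
    """Linear decay eta(k) = eta0 * (1 - k/M); the exact schedule is an assumption."""
    return eta0 * (1 - k / M)

def update_weights(W, x, win, k, M, eta0=0.5, sigma=1.0):
    """One adaptation step on a 1-D map: every neuron moves toward x,
    scaled by the learning rate and a Gaussian neighborhood around the winner."""
    d = np.abs(np.arange(len(W)) - win)       # lateral distance d_{j,i}
    h = np.exp(-d ** 2 / (2 * sigma ** 2))    # topological neighborhood N_{j,i}
    return W + decayed_lr(eta0, k, M) * h[:, None] * (x - W)

W = np.zeros((3, 2))
x = np.array([1.0, 1.0])
W1 = update_weights(W, x, win=1, k=0, M=100)
print(W1[1])  # the winner moves furthest toward x: [0.5 0.5]
```

Neighbors of the winner move too, only less, which is what pulls similar inputs onto adjacent output neurons.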
Step S305: layering: when the root layer training is finished, the algorithm checks whether all neurons on the root layer meet the network layering condition. For neurons j that do not satisfy this condition, a new layer is generated on the basis of the root layer. The feature samples of the sub-layers are relatively finely clustered on the basis of the root layer.
For each network neuron on the sub-layer, the input data is a subsequence of the raw data, mapped from the corresponding neuron in the root layer. Higher accuracy can be achieved for data clustered in the sub-layers. The network layering condition is as follows:
qe_j < τ · qe_0, j = 1, 2, …, N
τ is a parameter controlling the size of the model, qe_0 is the initial quantization error of the root layer, and N is the number of neurons on the root layer.
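The layering check, computing each root neuron's quantization error qe_j and comparing it against τ·qe_0, can be sketched as follows; the helper names and toy data are illustrative:

```python
import numpy as np

def quantization_errors(X, W, assign):
    """Mean distance from each neuron's weight vector to the inputs mapped to it."""
    return np.array([np.linalg.norm(X[assign == j] - W[j], axis=1).mean()
                     if np.any(assign == j) else 0.0
                     for j in range(len(W))])

def needs_sublayer(qe, qe0, tau=0.3):
    """Neuron j spawns a sub-layer when qe_j >= tau * qe0 (condition violated)."""
    return qe >= tau * qe0

X = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [9.0, 9.0]])
W = np.array([[0.1, 0.0], [7.0, 7.0]])
assign = np.array([0, 0, 1, 1])
qe = quantization_errors(X, W, assign)
print(needs_sublayer(qe, qe0=1.0))  # only neuron 1's error exceeds the threshold
```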
In the hierarchical self-organizing mapping model proposed by the invention, the root layer is used for rough clustering of the original data, serves for network growth, and controls the hierarchical mapping of the proposed model. When a sublayer is generated from the root layer, the length of the sublayer is set to a specific value according to the data characteristics. The initialization of the sub-layers is typically initialized using samples to exploit the data in the original feature space. After initializing the weight vectors, the model performs a competition step, trained with the original vectors applied by the root layer. If the neurons on the sub-layer finally meet the network layering condition, the layering self-organizing algorithm is terminated, and the whole clustering task is finished.
When the number of layers is 2, the index of the layer 1 is the root layer of the hierarchical self-organizing map model that processes all five modulation signals. Index 2-1 is a sublayer that handles two types of MQAM signals, and index 2-2 is a sublayer that handles three types of MPSK signals. The two sub-layers differ in the number of cluster objects and samples. The number of signal samples for layer 2-1 is 4000 and the number of signal samples for layer 2-2 is 6000. Clusters dealing with different modulation types can be distinguished according to the number of samples in the two sub-layers. The final clustering results are shown in fig. 3.
As shown in fig. 2, the hierarchical self-organizing map model more finely clusters the input vectors on the basis of the root competition layer and forms two sub competition layers. For each neuron in the sub-competition layer, the input data is the original input vector connected to the winning neuron in the root competition layer. These input data in the same winning neuron usually have similar coarse-grained features, and can be clustered into a new competition layer, so as to distinguish fine-grained features. The neurons of the sub-competition layer are connected to all the winning neurons of the root competition layer.
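Routing the original input vectors of each winning root neuron to its own sub-competition layer, as described above, can be sketched as (toy root weights and data for illustration):

```python
import numpy as np

def route_to_sublayers(X, W_root):
    """Group the original input vectors by their winning root-layer neuron;
    each non-empty group becomes the training set of one sub-layer."""
    dists = np.linalg.norm(X[:, None, :] - W_root[None, :, :], axis=2)
    win = np.argmin(dists, axis=1)
    return {int(j): X[win == j] for j in np.unique(win)}

W_root = np.array([[0.0, 0.0], [10.0, 10.0]])
X = np.array([[0.1, 0.2], [0.3, 0.1], [9.5, 10.2], [10.4, 9.8]])
groups = route_to_sublayers(X, W_root)
print({j: len(v) for j, v in groups.items()})  # -> {0: 2, 1: 2}
```

Each group shares the coarse-grained features of its root neuron, so a sub-layer trained on it can separate the finer-grained (different-order) structure.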
The invention adopts a hierarchical self-organizing mapping network to distinguish the modulation signals to be identified with different orders, and the output of the sub competition layer is shown in figure 3 and comprises two types of data to be identified, namely MPSK and MQAM. In the left image, the circle mapping point is the mapping position of BPSK signal, the pentagram mapping point is the mapping position of QPSK signal, the triangle mapping point is the mapping position of 8PSK signal, and the clustering effect of the three types of signals is good. In the right graph, the circle mapping points are the positions mapped by the 16QAM signals, the pentagram mapping points are the positions mapped by the 64QAM signals, and the clustering effect of the two types of signals is good.
An example of an application of the invention is the identification of radiation sources. Airborne signal-interception equipment intercepts and captures different types of radiation sources and then demodulates them to obtain their parameters, so as to distinguish the types of radar signals (such as surveillance, search, guidance, or navigation radars) or of navigation and communication transmitting equipment (such as short-wave transmitters, microwave communication stations, satellite earth stations, and mobile communication stations). The method performs unsupervised clustering of the designated feature values with self-organizing maps, clusters the data based on their similarity and topological structure, and is able to assign the corresponding data to the designated category. The unsupervised model uses a competitive learning algorithm in which the output neurons compete with each other, so that only one neuron is activated at any time. This competitive learning is realized by laterally suppressive connections (negative feedback paths) between neurons, and a neighborhood function maintains the topology of the input space, which ensures that the two-dimensional map preserves the relative distances between data points. As a result of the competition, each neuron is forced to reorganize itself, and adjacent samples in the input space are mapped to adjacent output neurons; through this dividing operation, signals of different orders but the same modulation system (such as 16QAM and 64QAM) are gathered into one large class in the root competition layer, and the number of classes of signals with unknown modulation systems can be roughly obtained.
On the basis of this coarse-grained clustering, a hierarchical self-organizing map algorithm then performs fine-grained clustering on signals with similar characteristics and the same modulation system, clustering signals of different orders separately and obtaining the number of classes in each sub-competition layer.
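The coarse-to-fine scheme described above can be sketched as a two-level pipeline: a root self-organizing map clusters the feature vectors coarsely, and each root neuron that wins enough samples is then refined by its own sub-map. The sketch below is illustrative only: the `tiny_som` helper, the one-dimensional grid, and the learning schedule are assumptions, not the patented implementation.

```python
import numpy as np

def tiny_som(data, n_neurons, epochs=40, seed=0):
    """A one-dimensional SOM, used here only to sketch the two-level pipeline."""
    rng = np.random.default_rng(seed)
    # initialize weights from random samples (sample-based initialization)
    w = data[rng.choice(len(data), n_neurons, replace=False)].astype(float)
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                    # decaying learning rate
        sigma = max(0.5, 1.5 * (1 - t / epochs))       # shrinking neighborhood
        for x in data:
            j = np.argmin(np.linalg.norm(w - x, axis=1))   # winner neuron
            h = np.exp(-(np.arange(n_neurons) - j) ** 2 / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)                 # pull neighbors toward x
    return w

def hierarchical_cluster(data, n_root=2, n_sub=2):
    """Coarse root-layer clustering, then one sub-layer per root neuron."""
    root = tiny_som(data, n_root)
    # assign every sample to its nearest root neuron
    assign = np.argmin(np.linalg.norm(data[:, None] - root[None], axis=2), axis=1)
    # refine each sufficiently populated root cluster with its own sub-map
    sublayers = {j: tiny_som(data[assign == j], n_sub)
                 for j in range(n_root) if np.sum(assign == j) >= n_sub}
    return root, assign, sublayers
```

In use, the root layer would separate MPSK from MQAM features, and the sub-layers would then distinguish the individual orders within each family.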
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment; all technical solutions falling under the idea of the present invention belong to its protection scope. It should be noted that those skilled in the art may make modifications and refinements without departing from the principle of the invention, and such modifications and refinements also fall within the protection scope of the invention.

Claims (8)

1. An intelligent clustering method based on a layered self-organizing mapping digital signal modulation mode is characterized by comprising the following steps:
step S1: acquiring a target data sequence;
step S2: extracting the normalized higher-order cumulant and amplitude-moment features of the signal to obtain a feature space of high-dimensional vectors;
step S3: processing the high-dimensional feature data with a hierarchical self-organizing map model, and clustering the feature vectors of MPSK and MQAM signals of different orders in a layered process.
2. The intelligent clustering method based on the hierarchical self-organizing map digital signal modulation scheme as claimed in claim 1, wherein in the hierarchical self-organizing map model, the root layer performs rough clustering of the original data, serves network growth, and controls the hierarchical mapping of the model; when a sub-layer is generated from the root layer, its size is set to a specific value according to the data characteristics; the sub-layer weights are typically initialized from samples, so as to exploit the data in the original feature space; after the weight vectors are initialized, the model executes the competition step and is trained with the original vectors applied to the root layer; if the neurons on the sub-layers finally satisfy the network layering condition, the hierarchical self-organizing algorithm terminates and the whole clustering task is finished.
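The sample-based initialization named in claim 2 can be sketched as follows: each new sub-layer weight vector is drawn from the samples mapped to the parent neuron, so the sub-layer starts inside the region of feature space it must refine. The function name and default grid size are assumptions for illustration.

```python
import numpy as np

def init_sublayer(samples, grid=(2, 2), seed=0):
    """Sample-based initialization of a sub-layer: draw each new weight
    vector from the samples assigned to the parent root-layer neuron."""
    rng = np.random.default_rng(seed)
    n = grid[0] * grid[1]
    # sample with replacement only if there are fewer samples than neurons
    idx = rng.choice(len(samples), size=n, replace=len(samples) < n)
    return samples[idx].astype(float)
```

Compared with small-random initialization, this keeps the sub-layer's starting weights within the parent neuron's data region, which usually shortens sub-layer training.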
3. The intelligent clustering method based on the hierarchical self-organizing map digital signal modulation mode according to claim 2, wherein when the number of layers is 2, the index of the 1 st layer is the root layer of the hierarchical self-organizing map model for processing all five modulation signals; the index 2-1 sublayer is a sublayer for processing two MQAM signals, and the index 2-2 sublayer is a sublayer for processing three MPSK signals; the sub-layer of index 2-1 differs from the sub-layer of index 2-2 in the number of objects and samples to cluster, according to the number of samples in the two sub-layers, to distinguish clusters handling different modulation types.
4. The intelligent clustering method based on the hierarchical self-organizing map digital signal modulation scheme as claimed in claim 1, 2 or 3, wherein the self-organizing process in step S3 comprises:
step S301: initializing; all connection weights are initialized by small random values;
step S302: competition; for each input pattern, the neurons compute their respective discriminant function values, which provide the basis for cooperation;
step S303: cooperation; the winning neuron affects the surrounding neurons from near to far, the effect changing gradually from excitation to inhibition; the winning neuron determines the spatial positions of the excited neurons within its topological neighborhood, providing the basis for adjusting the weight vectors;
step S304: adaptation; the excited neurons reduce their discriminant function values with respect to the input pattern by suitably adjusting the associated connection weights, so that the winning neuron's response to subsequent presentations of similar input patterns is enhanced;
step S305: layering; after root-layer training is finished, check whether all neurons on the root layer satisfy the network layering condition; for each neuron j that does not satisfy it, generate a new layer on the basis of the root layer; the feature samples of the sub-layers are then clustered relatively finely on the basis of the root layer.
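Steps S301–S304 can be sketched as a minimal SOM training loop. The grid size, learning-rate schedule and Gaussian neighborhood below are illustrative assumptions, not values fixed by the claims.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal sketch of steps S301-S304 (parameter names are assumptions)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # S301 initialization: small random connection weights
    w = rng.normal(0.0, 0.1, size=(rows * cols, data.shape[1]))
    # grid coordinates of each output neuron, for the topological neighborhood
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # learning rate decays over time
        sigma = sigma0 * np.exp(-t / epochs)  # neighborhood radius shrinks
        for x in data:
            # S302 competition: winner has the smallest Euclidean distance
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            # S303 cooperation: Gaussian topological neighborhood of the winner
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            # S304 adaptation: pull the winner and its neighbors toward the input
            w += lr * h[:, None] * (x - w)
    return w
```

After training, nearby inputs win nearby neurons, which is what lets same-family modulation signals land in adjacent map regions.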
5. The intelligent clustering method based on the hierarchical self-organizing map digital signal modulation scheme as claimed in claim 4, wherein in step S302, the input vectors are x_i (i = 1, 2, …, M), where M is the number of input vectors, and the weight vectors are w_ji (j = 1, 2, …, N; i = 1, 2, …, M), where N is the number of neurons in the output layer; the competitive discriminant function is defined as the minimum Euclidean distance between an input vector x_i and the weight vectors w_ji:
min_j ||x_i − w_ji||, j = 1, 2, …, N
for an input vector, each neuron in the SOM neural network computes the value of this discriminant function, which is the basis for the competition between neurons; the neuron with the smallest discriminant function value is declared the winner.
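The competition step can be sketched directly from this definition: the winner is the neuron whose weight vector minimizes the Euclidean distance to the input (the function name is an assumption).

```python
import numpy as np

def best_matching_unit(x, w):
    """Competition step of claim 5: the discriminant is the Euclidean
    distance ||x - w_j||; the neuron with the smallest value wins."""
    return int(np.argmin(np.linalg.norm(w - x, axis=1)))
```

For example, with three neurons at (0,0), (1,1) and (5,5), the input (0.9, 1.2) is closest to the second neuron, so index 1 wins.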
6. The intelligent clustering method based on the hierarchical self-organizing map digital signal modulation scheme as claimed in claim 4, wherein in step S303, N_{j,i} is a topological neighborhood function centered on the winning neuron i and comprises a set of cooperating neurons, one of which is neuron j; d_{j,i} is the lateral distance between the winning neuron i and the excited neuron j; N_{j,i} attains its maximum when d_{j,i} equals zero and is symmetric about zero; the amplitude of the neighborhood function N_{j,i} decreases monotonically as the lateral distance d_{j,i} increases.
7. The intelligent clustering method based on the hierarchical self-organizing map digital signal modulation scheme as claimed in claim 4, wherein in step S303, the topological neighborhood function comprises one or more of a Gaussian function, a Mexican hat function, a bubble function and a triangular function.
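The four neighborhood shapes named in claim 7 can be sketched as functions of the lateral distance d and a radius σ; each peaks at the winning neuron (d = 0) and falls off with distance, as claim 6 requires (the parameter names are assumptions).

```python
import numpy as np

def gaussian(d, sigma):
    """Smooth exponential decay with lateral distance."""
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def bubble(d, sigma):
    """Constant excitation inside the radius, zero outside."""
    return np.where(np.abs(d) <= sigma, 1.0, 0.0)

def mexican_hat(d, sigma):
    """Excitatory centre with an inhibitory (negative) surround."""
    r = d ** 2 / sigma ** 2
    return (1 - r) * np.exp(-r / 2)

def triangular(d, sigma):
    """Linear decay that reaches zero at the radius."""
    return np.maximum(0.0, 1 - np.abs(d) / sigma)
```

The Mexican hat variant is the one that makes the "excitation near, inhibition far" behavior of step S303 explicit, since it goes negative beyond the central region.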
8. The intelligent clustering method based on the hierarchical self-organizing map digital signal modulation scheme as claimed in claim 4, wherein in step S305, for each network neuron on a sub-layer, the input data is a subsequence of the original data; the network layering condition is:
qe_j < τ·qe_0, j = 1, 2, …, N
where τ is a parameter controlling the size of the model, qe_0 is the initial quantization error of the root layer, and N is the number of neurons on the layer; after root-layer training is finished, the algorithm checks whether all neurons on the root layer satisfy this condition; for each neuron j that does not satisfy it, a new layer is generated on the basis of the root layer; the feature samples of the sub-layers are then clustered relatively finely on the basis of the root layer.
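The layering check of claim 8 can be sketched as follows: each neuron's quantization error qe_j (mean distance from its weight vector to the samples it wins) is compared against τ·qe_0, and neurons violating qe_j < τ·qe_0 are expanded into sub-layers. The function names, and taking qe_0 as the mean deviation of the data from its centroid, are assumptions for illustration.

```python
import numpy as np

def quantization_errors(data, w):
    """Mean distance from each neuron's weight vector to the samples it wins."""
    bmus = np.argmin(np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2), axis=1)
    qe = np.zeros(len(w))
    for j in range(len(w)):
        hits = data[bmus == j]
        if len(hits):
            qe[j] = np.mean(np.linalg.norm(hits - w[j], axis=1))
    return qe

def neurons_to_expand(data, w, tau=0.3):
    """Return the neurons whose quantization error violates qe_j < tau * qe_0;
    each of them would receive a new sub-layer in step S305."""
    qe0 = np.mean(np.linalg.norm(data - data.mean(axis=0), axis=1))  # root error
    qe = quantization_errors(data, w)
    return [j for j in range(len(w)) if qe[j] >= tau * qe0]
```

A smaller τ makes the condition harder to satisfy, so more neurons are expanded and the resulting hierarchy grows deeper.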
Publication of CN113807254A: 2021-12-17