CN113098664B - MDMSFN-based space-time block code automatic identification method and device - Google Patents


Info

Publication number
CN113098664B
CN113098664B (application CN202110348741.9A)
Authority
CN
China
Prior art keywords
time
delay
feature
fusion
dimensional
Prior art date
Legal status
Active
Application number
CN202110348741.9A
Other languages
Chinese (zh)
Other versions
CN113098664A (en)
Inventor
闫文君
张聿远
凌青
方君
张兵强
王萌
付宇鹏
Current Assignee
School Of Aeronautical Combat Service Naval Aeronautical University Of Pla
Original Assignee
School Of Aeronautical Combat Service Naval Aeronautical University Of Pla
Priority date
Filing date
Publication date
Application filed by School Of Aeronautical Combat Service Naval Aeronautical University Of Pla
Priority to CN202110348741.9A
Publication of CN113098664A
Application granted
Publication of CN113098664B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/0001: Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L 1/0036: Arrangements specific to the receiver
    • H04L 1/02: Detecting or preventing errors by diversity reception
    • H04L 1/06: Using space diversity
    • H04L 1/0618: Space-time coding
    • H04L 1/0637: Properties of the code
    • H04L 1/0643: Block codes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Error Detection And Correction (AREA)

Abstract

The application provides an automatic space-time block code (STBC) identification method based on a multi-delay multi-time-sequence feature fusion network (MDMSFN), and relates to the technical field of space-time block code identification. The method comprises the following steps: first, after a merging convolution layer maps the STBC time-domain samples into one-dimensional feature vectors, dilated convolutions at multiple dilation rates extract intra-code features over discontinuous time windows, realizing self-extraction of the multi-delay features; next, a multi-time-sequence feature self-extraction module is constructed to extract inter-code timing features, further enlarging the variety of mapped features; finally, a multi-delay splicing layer extracts the maximum delay features as deep fusion features, and a residual layer with skip connections is added to improve the utilization of the fusion features, thereby realizing identification of the space-time block codes. The scheme significantly improves identification performance and adapts better to low signal-to-noise ratios.

Description

MDMSFN-based space-time block code automatic identification method and device
Technical Field
The present application relates to the technical field of space-time block code identification, and in particular to an automatic space-time block code identification method and device based on the multi-delay multi-time-sequence feature fusion network MDMSFN.
Background
Space-Time Block Code (STBC) identification plays an important role in military communication fields such as spectrum management, communication reconnaissance and electromagnetic countermeasures, and is an important research topic in the field of communication signal identification. In recent years, electromagnetic environments and information technologies have grown increasingly dense and complex, and the traditional STBC identification approach based on hand-crafted feature extraction and threshold decision can no longer meet the practical requirement of accurate and rapid identification in real communication environments. Exploiting the differences between STBC coding modes, traditional algorithms derive statistical features that reflect the essential information of the signal, such as cyclic statistics, higher-order statistics and virtual channel correlation matrix features; however, these features must be extracted manually and a decision threshold must be set, leading to problems such as a complex parameter-tuning process and high sensitivity to noise. Therefore, identifying space-time block codes under the low signal-to-noise-ratio conditions caused by large signal attenuation is of great significance.
Consider a MIMO wireless communication system with N_t transmit antennas and 1 receive antenna. The transmitting terminal encodes the v-th group of transmitted signals with K symbols, s_v = [s_1, s_2, …, s_K]^T, into an N_t × L transmission matrix, specifically expressed as:

C(s_v) = Σ_i A_i s̆_i

where the A_i are the coding matrices determined by the STBC coding mode, and the s̆_i are formed from the real and imaginary components of the transmitted signal s_v. The most commonly used Spatial Multiplexing (SM) and Alamouti (AL) codes are selected here, together with the two easily confused pairs STBC3-1/STBC3-2 and STBC3-3/STBC4, so that 6 types of orthogonal space-time block codes (SM, AL, the 3 types of STBC3, and STBC4) are distinguished. The specific coding modes are as follows:
(1) The transmission matrix of the SM code is given as an equation image in the original document.

(2) The transmission matrix of the AL code is the standard Alamouti block:

C_AL = [ s_1  -s_2* ; s_2  s_1* ]

(3) The transmission matrix of the STBC3-1 code is given as an equation image in the original document.

(4) The transmission matrix of the STBC3-2 code is given as an equation image in the original document.

(5) The transmission matrix of the STBC3-3 code is given as an equation image in the original document.

(6) The transmission matrix of the STBC4 code is given as an equation image in the original document.
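As a concrete illustration of the AL coding mode listed above, the standard Alamouti transmission matrix and its defining orthogonality property can be sketched as follows (a minimal NumPy sketch; the helper name is ours, not the patent's):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Standard Alamouti (AL) transmission matrix: rows are the two
    transmit antennas, columns are the two time slots of one block."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

# Orthogonality of the AL code: C C^H = (|s1|^2 + |s2|^2) I
C = alamouti_encode(1 + 1j, 1 - 1j)
gram = C @ C.conj().T
```

This orthogonality is exactly what gives the AL correlation structure that traditional identification algorithms test for.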
in recent years, with the rapid development of deep learning technology in the field of Computer Vision (CV), due to the improvement of the parallel computing capability of the GPU, the deep learning model obtains stronger classification performance with its strong mapping capability on mass data. By combining the performance advantages of the technology, scholars at home and abroad gradually apply the technology to the field of STBC recognition. The space-time block code identification method based on the Convolutional Neural Network (CNN) firstly introduces deep learning into the field by using a CNN frame and a data set pattern which are successfully applied in modulation identification, and realizes the identification of two most common STBC (space Multiplexing, SM) of spatial Multiplexing (Special Multiplexing, alamouti (AL)). The multi-STBC blind identification algorithm based on deep learning inputs preprocessed samples into a CNN for identification by calculating a Frequency Domain Self-Correlation Function (FDSCF) of a received signal. A serial sequence space-time block code identification method of a convolutional-cyclic neural network is utilized to introduce a Long Short-Term Memory (LSTM) layer into space-time block code identification, and an STBC identification method which directly adopts a time domain I/Q signal as a training sample and constructs a convolutional-cyclic neural network to extract space and time sequence characteristics is provided, so that the processing flow of signal samples is optimized, but the method only can identify SM and AL codes, and the identification performance is not ideal. 
These deep learning methods improve recognition performance and efficiency, but they are limited to transfer learning of existing networks or the construction of simple deep learning frameworks; the study of model structure is not deep enough, and the network architecture is not optimized in combination with the coding characteristics of the STBC itself.
Existing STBC identification algorithms consider only a single feature, so feature diversity is insufficient. The actual communication environment is complex and changeable, and a single signal feature cannot comprehensively and accurately characterize the differences between STBC coding modes; identification performance is therefore limited at low signal-to-noise ratio, and such algorithms have clear limitations.
Most traditional algorithms cannot distinguish the two pairs of coding modes STBC3-1/STBC3-2 and STBC3-3/STBC4, because they rely on differences in STBC correlation for identification, and the correlations of each pair are identically distributed. Assuming the transmission channel H is a flat fading channel, the received signal r(k) at the k-th time after passing through the channel can be expressed as:

r(k) = H G(k) + n(k)

where G(k) is the N_t × 1-dimensional transmitted signal at the k-th time, H = [h_1, h_2, …, h_{N_t}] is the 1 × N_t vector of transmission channel coefficients, h_i is the channel coefficient between the i-th transmit antenna and the receive antenna, and n(k) is complex white Gaussian noise with mean 0 and variance σ_n².
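The flat-fading reception model above can be simulated directly; the sketch below assumes the channel is constant over one block and sets the noise variance from a target SNR (function and variable names are illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def receive(G, h, snr_db):
    """Simulate r(k) = H G(k) + n(k) for one transmission block.

    G: N_t x L transmission matrix (column k is the signal at time k).
    h: length-N_t channel vector, h[i] between transmit antenna i and
       the single receive antenna (flat fading, constant over the block).
    Returns the received row sequence with complex Gaussian noise n(k).
    """
    r_clean = h @ G                      # H G(k) for every time slot k
    p = np.mean(np.abs(r_clean) ** 2)    # received signal power
    sigma2 = p / 10 ** (snr_db / 10)     # noise variance sigma_n^2
    n = np.sqrt(sigma2 / 2) * (rng.standard_normal(r_clean.shape)
                               + 1j * rng.standard_normal(r_clean.shape))
    return r_clean + n

# Example: a one-slot block over an assumed 2-antenna channel
G = np.array([[1 + 1j], [1 - 1j]])
h = np.array([0.8, 0.5 + 0.1j])
r = receive(G, h, snr_db=60)
```

At high SNR the received sequence is close to H G(k), which is the regime where correlation-based features are most reliable.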
In fact, in signal classification and identification fields such as modulation identification, communication emitter identification and radar emitter identification, feature fusion has become an important method in the design of deep learning models and is increasingly widely applied. The successful practice of these methods illustrates the feasibility of applying multi-feature-fusion deep learning models to this field.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present application is to provide a space-time block code identification method based on the multi-delay multi-time-sequence feature fusion MDMSFN network model.

A second objective of the present application is to provide a space-time block code identification device based on the multi-delay multi-time-sequence feature fusion MDMSFN network model.

A third objective of the present application is to provide a non-transitory computer-readable storage medium.
To achieve the above objectives, an embodiment of the first aspect of the present application provides a space-time block code identification method based on the multi-delay multi-time-sequence feature fusion MDMSFN network model, including:

extracting the real part and imaginary part of the received signal to be tested to form a 2 × N space-time block code (STBC) sample, performing merged convolution on the STBC sample to generate one-dimensional feature vectors, and performing dilated convolution on the one-dimensional feature vectors at multiple dilation rates equal to the multi-delay parameters to generate delay feature vectors;

extracting deep multi-delay features of the delay feature vectors by successive convolution, splicing the deep multi-delay features to generate a first splicing matrix, and extracting multi-time-step inter-code features of the first splicing matrix;

splitting the multi-time-step inter-code features into one-dimensional feature vectors, splicing the one-dimensional feature vectors to generate a second splicing matrix, performing feature fusion on the second splicing matrix by traversing it column by column and selecting the maximum delay feature in each column as the deep fusion feature, generating a fusion feature vector from the deep fusion features, inputting the fusion feature vector into a residual layer to improve the utilization of the fusion information, and outputting a one-hot-coded probability distribution vector through a fully connected layer with Softmax as the activation function;

and identifying the STBC code according to the STBC probability distribution vector, so as to determine the category of the received signal to be tested.
Further, in an embodiment of the present application, the merged convolution of the STBC sample to generate one-dimensional feature vectors specifically merges the real and imaginary parts of the STBC sample into one-dimensional feature vectors, expressed as:

h_l = f(x_I/Q * W_l + b_l)

where h_l is the feature vector extracted by the l-th convolution kernel, W_l and b_l are the weight and bias to be learned of the l-th convolution kernel, * denotes the convolution operation, f(·) is the activation function, and x_I/Q is the input feature sample.
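Because the merging kernel is 2 × 1, each output position depends only on the I and Q values at the same time index; a minimal sketch of this step (names are ours, not the patent's):

```python
import numpy as np

def merged_conv(x_iq, W, b, f=np.tanh):
    """Merged convolution h_l = f(x_I/Q * W_l + b_l) with a 2x1 kernel:
    the real (I) and imaginary (Q) rows of the 2xN sample are collapsed
    into a single 1xN feature vector, position by position."""
    return f(W[0] * x_iq[0] + W[1] * x_iq[1] + b)

x = np.array([[1.0, 2.0, 3.0],    # I row of the 2xN STBC sample
              [4.0, 5.0, 6.0]])   # Q row
# With an identity activation and kernel [1, 0], the I row passes through.
h = merged_conv(x, W=np.array([1.0, 0.0]), b=0.0, f=lambda z: z)
```

Because the kernel never spans two time indices, the correlation structure of the sample is preserved, which is the property the method relies on.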
Further, in an embodiment of the present application, performing dilated convolution on the one-dimensional feature vectors at multiple dilation rates equal to the multi-delay parameters to generate the delay feature vectors is expressed as:

F_τ^k(i) = f( Σ_{l=1}^{L} Σ_{s=1}^{S} h_l(i + τ·s) · W_k(s) + b_k )

where F_τ^k(i) is the value at position i of the delay vector F_τ^k extracted by the k-th convolution kernel at dilation rate τ (i.e., time delay τ); h_l(i + τ·s) is the value at position i + τ·s of the one-dimensional feature vector h_l output by the merged convolution layer; S is the length of the convolution kernel; L is the number of feature maps of the merged convolution layer; W_k(s) is the weight to be learned of the k-th convolution kernel at position s; and b_k is the bias to be learned.
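The dilated convolution just defined can be sketched in pure NumPy (illustrative names; "valid" output positions assumed, i.e. the window never runs past the end of the vector):

```python
import numpy as np

def dilated_conv(h, W, b, tau, f=np.tanh):
    """Dilated convolution over L merged feature vectors:
    F_tau^k(i) = f( sum_l sum_s h_l(i + tau*s) W_k(s) + b_k ).

    h: (L, N) feature vectors; W: (L, S) kernel; tau: dilation rate,
    i.e. the time delay between samples seen by adjacent kernel taps.
    """
    L, N = h.shape
    _, S = W.shape
    out_len = N - tau * (S - 1)               # 'valid' output positions
    out = np.array([
        sum(np.dot(h[:, i + tau * s], W[:, s]) for s in range(S)) + b
        for i in range(out_len)
    ])
    return f(out)

h = np.array([[1.0, 2.0, 3.0, 4.0, 5.0]])     # L = 1 feature vector
W = np.array([[1.0, 1.0]])                     # S = 2 kernel taps
# tau = 2 pairs sample i with sample i + 2: a discontinuous time window
F = dilated_conv(h, W, b=0.0, tau=2, f=lambda z: z)
```

Running the same kernel at τ = 1, 2, 4 yields the three delay feature vectors whose statistics differ between STBC coding modes.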
Further, in an embodiment of the present application, extracting the deep multi-delay features of the delay feature vectors by successive convolution is expressed as:

D_τ^g = f(F_τ * W_g + b_g)

where D_τ^g is the one-dimensional vector output by the g-th convolution kernel at time delay τ, W_g and b_g are the weight and bias to be learned of that convolution kernel, and F_τ is the delay feature vector.

Splicing the deep multi-delay features to generate the first splicing matrix is expressed as:

P_τ = reshape([D_τ^1; D_τ^2; …; D_τ^G])

where the operation converts the 3-channel output of the deep convolution layer into 2 channels, and P_τ is the splicing matrix after dimension reshaping.
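The splicing step simply stacks the one-dimensional deep-delay vectors row-wise so the LSTM layer receives a 2-D input; a minimal sketch (helper name is ours):

```python
import numpy as np

def splice_deep_features(deep_vectors):
    """Stack the G one-dimensional vectors output by the deep convolution
    at one delay tau row-wise into the 2-D splicing matrix P_tau, turning
    the 3-channel convolutional output into the 2-channel LSTM input."""
    return np.vstack(deep_vectors)

P = splice_deep_features([np.array([1.0, 2.0]),
                          np.array([3.0, 4.0])])   # G = 2 vectors
```

Each row of P_τ then plays the role of one input feature across the LSTM's time steps.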
Further, in an embodiment of the present application, extracting the multi-time-step inter-code features of the first splicing matrix is expressed as:

h_t^τ = o_t^τ ⊙ tanh(c_t^τ)

where h_t^τ is the output of the hidden layer at time t, o_t^τ is the output gate state at the current time, and c_t^τ is the updated memory cell state, whose expression is:

c_t^τ = f_t^τ ⊙ c_{t-1}^τ + i_t^τ ⊙ tanh(W_l [h_{t-1}^τ, x_t^τ] + b_l)

where the update is controlled by the forget gate f_t^τ and the input gate i_t^τ of the current time step, c_{t-1}^τ is the memory cell state at time t-1, and W_l and b_l are the weight and bias of the current memory cell state.

The states of the forget gate f_t^τ, input gate i_t^τ and output gate o_t^τ are jointly determined by the input x_t^τ of the current time step and the hidden-layer state h_{t-1}^τ at time t-1, respectively expressed as:

f_t^τ = σ(W_f [h_{t-1}^τ, x_t^τ] + b_f)

i_t^τ = σ(W_i [h_{t-1}^τ, x_t^τ] + b_i)

o_t^τ = σ(W_o [h_{t-1}^τ, x_t^τ] + b_o)

where W_f and b_f, W_i and b_i, and W_o and b_o are the weights and biases of the forget gate, input gate and output gate, and σ(·) is the sigmoid function.
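One time step of these gate equations can be sketched in NumPy as follows (parameter names and the dictionary layout are ours; each W_* acts on the concatenation of the previous hidden state and the current input):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM time step matching the gate equations above."""
    z = np.concatenate([h_prev, x_t])           # [h_{t-1}, x_t]
    f_t = sigmoid(p["W_f"] @ z + p["b_f"])      # forget gate
    i_t = sigmoid(p["W_i"] @ z + p["b_i"])      # input gate
    o_t = sigmoid(p["W_o"] @ z + p["b_o"])      # output gate
    c_cand = np.tanh(p["W_l"] @ z + p["b_l"])   # candidate cell state
    c_t = f_t * c_prev + i_t * c_cand           # memory cell update
    h_t = o_t * np.tanh(c_t)                    # hidden-layer output
    return h_t, c_t

# Tiny example: hidden size 1, input size 1, all weights zero, so every
# gate opens halfway (sigma(0) = 0.5) and the candidate state is 0.
p = {k: np.zeros((1, 2)) for k in ("W_f", "W_i", "W_o", "W_l")}
p.update({k: np.zeros(1) for k in ("b_f", "b_i", "b_o", "b_l")})
h_t, c_t = lstm_step(np.array([1.0]), np.zeros(1), np.array([2.0]), p)
```

With zero weights the forget gate halves the stored state, illustrating how the gates jointly decide how much past information survives each step.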
Further, in an embodiment of the present application, splitting the multi-time-step inter-code features into one-dimensional feature vectors is expressed as:

[y_τ^1, y_τ^2, …, y_τ^Q] = split(y_τ)

where the multi-time-step inter-code feature y_τ at time delay τ is divided by rows into one-dimensional feature vectors, and Q is the number of one-dimensional vectors.

Splicing the one-dimensional feature vectors to generate the second splicing matrix is expressed as:

S_q = [y_1^q; y_2^q; y_4^q]

where the 1 × N-dimensional feature vectors of the same index q under the 3 types of time delay are spliced by rows, and S_q is the q-th 3 × N-dimensional matrix obtained by splicing.
Further, in an embodiment of the present application, feature fusion is performed on the second splicing matrix by traversing it column by column and selecting the maximum delay feature in each column as the deep fusion feature, expressed as:

U_q(j) = max S_q(j)

where S_q(j) is the j-th column of the q-th splicing matrix.
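The split-regroup-fuse chain above can be sketched end to end (illustrative names; τ ∈ {1, 2, 4} as in the dilation rates used earlier):

```python
import numpy as np

def fuse_delays(y_by_delay):
    """Regroup rows of the same index q across the three delays into
    3 x N matrices S_q, then take the column-wise maximum
    U_q(j) = max S_q(j) as the deep fusion feature."""
    taus = (1, 2, 4)
    Q = y_by_delay[1].shape[0]
    fused = []
    for q in range(Q):
        S_q = np.vstack([y_by_delay[t][q] for t in taus])  # 3 x N matrix
        fused.append(S_q.max(axis=0))                      # fused vector
    return fused

# Toy example with Q = 1 row per delay and N = 2 positions
y = {1: np.array([[1.0, 5.0]]),
     2: np.array([[2.0, 4.0]]),
     4: np.array([[3.0, 3.0]])}
U = fuse_delays(y)
```

Taking the per-column maximum keeps, at every position, whichever delay responded most strongly, which is why the fused feature is less sensitive to noise in any single delay branch.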
Further, in an embodiment of the present application, generating the fusion feature vector from the deep fusion features is expressed as:

R_q = f(h(U_q) + F(U_q, W_q))

where R_q is the output of the q-th fusion feature vector after passing through the residual layer, h(·) is the skip-connection mapping function, for which h(U_q) = U_q is adopted in the present application, F(·) is the residual mapping function, W_q is the weight to be learned, and U_q is the deep fusion feature.
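With h(U_q) = U_q the residual layer reduces to an identity skip connection around a learned mapping F; a minimal sketch, assuming for illustration a single linear layer as the residual branch (the patent does not specify F's internal form):

```python
import numpy as np

def residual_layer(U_q, W_q, f=np.tanh):
    """R_q = f(h(U_q) + F(U_q, W_q)) with identity skip h(U_q) = U_q
    and, as an assumed example, a linear residual mapping F."""
    F = W_q @ U_q                 # assumed residual branch F(U_q, W_q)
    return f(U_q + F)             # skip connection preserves U_q

U = np.array([0.1, -0.2])
# With W_q = 0 and identity activation the layer passes U through
# unchanged: the skip path guarantees the fusion features are not lost.
R = residual_layer(U, W_q=np.zeros((2, 2)), f=lambda z: z)
```

This pass-through behaviour at initialization is exactly the "improved utilization of fusion information" the skip connection provides.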
To achieve the above objectives, an embodiment of the second aspect of the present invention provides a device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor, when executing the instructions, implements the above space-time block code identification method based on the multi-delay multi-time-sequence feature fusion MDMSFN network model.
To achieve the above objectives, an embodiment of the third aspect of the present invention provides a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor, the above space-time block code identification method based on the multi-delay multi-time-sequence feature fusion MDMSFN network model is implemented.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a space-time block code identification method based on the multi-delay multi-time-sequence feature fusion MDMSFN network model according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the multi-delay multi-time-sequence MDMSFN network model structure according to an embodiment of the present application;
fig. 3 is a space-time block code STBC correlation distribution diagram according to an embodiment of the present application;
fig. 4 is a space-time block code STBC multi-delay correlation distribution diagram according to an embodiment of the present application;
fig. 5 is a schematic diagram of the parameter settings of the MDMSFN network according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The space-time block code identification method and device based on the multi-delay multi-time-sequence feature fusion MDMSFN network model according to the embodiments of the present application are described below with reference to the accompanying drawings.

Fig. 1 is a schematic flowchart of the space-time block code identification method based on the multi-delay multi-time-sequence feature fusion MDMSFN network model according to an embodiment of the present application.

As shown in fig. 1, the method includes the following steps:
step 101, extracting a 2 × N space-time block code STBC sample composed of a real part and an imaginary part of the received signal to be tested, performing a combining convolution on the STBC sample to generate a one-dimensional feature vector, and performing an expanding convolution on the one-dimensional feature vector by using a multi-expansion rate same as a multi-delay parameter to generate a delay feature vector.
And 102, extracting deep multi-delay features of the delay feature vector by using a continuous convolution method, performing splicing operation on the deep multi-delay features to generate a first splicing matrix, and extracting multi-time inter-step features of the first splicing matrix.
Step 103, splitting the multi-time inter-code features into one-dimensional feature vectors, splicing the one-dimensional feature vectors to generate a second splicing matrix, performing feature fusion on the second splicing matrix, traversing the second feature matrix according to columns, selecting the maximum time delay features in the second feature matrix as deep fusion features, and generating fusion feature vectors according to the deep fusion features.
And 104, inputting the feature fusion vector into a residual error layer to improve the utilization rate of fusion information, and identifying the STBC code through a full connection layer taking Softmax as an activation function to determine the category of the received signal to be tested.
According to the space-time block code identification method based on the multi-delay multi-time-sequence feature fusion MDMSFN network model, the real part and imaginary part of the received signal to be tested are extracted to form a 2 × N-dimensional space-time block code STBC sample; the STBC sample undergoes merged convolution to generate one-dimensional feature vectors, and the one-dimensional feature vectors undergo dilated convolution at multiple dilation rates equal to the multi-delay parameters to generate delay feature vectors;

deep multi-delay features of the delay feature vectors are extracted by successive convolution and spliced to generate a first splicing matrix, from which the multi-time-step inter-code features are extracted;

the multi-time-step inter-code features are split into one-dimensional feature vectors, which are spliced to generate a second splicing matrix; feature fusion is performed on the second splicing matrix by traversing it column by column and selecting the maximum delay feature in each column as the deep fusion feature, from which a fusion feature vector is generated;

the fusion feature vector is input into a residual layer to improve the utilization of the fusion information, and the STBC code is identified through a fully connected layer with Softmax as the activation function, so as to determine the category of the signal to be tested.

The MDMSFN network model makes full use of the complementarity of the various delay features under multiple dilation rates, so the fusion features have stronger discriminability and stability and noise sensitivity is effectively reduced. It overcomes shortcomings of existing deep learning algorithms, which design architectures only by transferring models or stacking recurrent layers, study neither the network architecture nor the STBC coding characteristics in depth, and use a single model architecture. The method significantly improves on the low signal-to-noise-ratio performance of typical traditional algorithms and existing deep learning algorithms, has good identification efficiency and robustness, and has excellent prospects for engineering application.
Further, in the embodiment of the present application, the merged convolution combines the STBC sample into one-dimensional feature vectors without changing its correlation, expressed as:

h_l = f(x_I/Q * W_l + b_l)

where h_l is the feature vector extracted by the l-th convolution kernel, W_l and b_l are the weight and bias to be learned of the l-th convolution kernel, * denotes the convolution operation, f(·) is the activation function, and x_I/Q is the input feature sample. Since the convolution kernel of the merging layer is 2 × 1-dimensional, only the real and imaginary parts of the same signal are merged during convolution, and the correlation of the L merged one-dimensional feature vectors h_l (l = 1, 2, …, L) remains consistent with that of the input sample x_I/Q.
Further, in this embodiment of the present application, performing dilated convolution on the one-dimensional feature vectors at multiple dilation rates equal to the multi-delay parameters to generate the delay feature vectors is expressed as:

F_τ^k(i) = f( Σ_{l=1}^{L} Σ_{s=1}^{S} h_l(i + τ·s) · W_k(s) + b_k )

where F_τ^k(i) is the value at position i of the delay vector F_τ^k extracted by the k-th convolution kernel at dilation rate τ (i.e., time delay τ); h_l(i + τ·s) is the value at position i + τ·s of the one-dimensional feature vector h_l output by the merged convolution layer; S is the length of the convolution kernel; L is the number of feature maps of the merged convolution layer; W_k(s) is the weight to be learned of the k-th convolution kernel at position s; and b_k is the bias to be learned. For a given STBC signal, the statistical characteristics under different time delays differ greatly, so the feature vectors F_τ obtained by convolution at the 3 types of delay with dilation rates τ = 1, 2, 4 will also differ significantly.
It is worth noting that although the correlation distributions of the two pairs of space-time block codes with the same coding matrix length, STBC3-1 and STBC3-2, and STBC3-3 and STBC4, are consistent, the algorithm herein still achieves excellent recognition performance for these two highly similar pairs of STBCs, thanks to the strong mapping capability of the MDMSFN network on signal features.
Further, in this embodiment of the present application, extracting the deep multi-delay features of the delay feature vectors by successive convolution is expressed as:

D_τ^g = f(F_τ * W_g + b_g)

where D_τ^g is the one-dimensional vector output by the g-th convolution kernel at time delay τ, W_g and b_g are the weight and bias to be learned of that convolution kernel, and F_τ is the delay feature vector. To match the input dimension of the LSTM layer, the 3 channels output by the deep convolution layer must be converted into 2 channels. Considering that the convolution layer extracts one-dimensional feature vectors, the module further splices the G output one-dimensional vectors.

Splicing the deep multi-delay features to generate the first splicing matrix is expressed as:

P_τ = reshape([D_τ^1; D_τ^2; …; D_τ^G])

where the operation converts the 3-channel output of the deep convolution layer into 2 channels, and P_τ is the splicing matrix after dimension reshaping.
Further, in this embodiment of the present application, extracting the multi-time-step inter-code features of the first splicing matrix is expressed as:

h_t^τ = o_t^τ ⊙ tanh(c_t^τ)

where h_t^τ is the output of the hidden layer at time t, o_t^τ is the output gate state at the current time, and c_t^τ is the updated memory cell state, whose expression is:

c_t^τ = f_t^τ ⊙ c_{t-1}^τ + i_t^τ ⊙ tanh(W_l [h_{t-1}^τ, x_t^τ] + b_l)

where the update is controlled by the forget gate f_t^τ and the input gate i_t^τ of the current time step, c_{t-1}^τ is the memory cell state at time t-1, and W_l and b_l are the weight and bias of the current memory cell state.

The states of the forget gate f_t^τ, input gate i_t^τ and output gate o_t^τ are jointly determined by the input x_t^τ of the current time step and the hidden-layer state h_{t-1}^τ at time t-1, respectively expressed as:

f_t^τ = σ(W_f [h_{t-1}^τ, x_t^τ] + b_f)

i_t^τ = σ(W_i [h_{t-1}^τ, x_t^τ] + b_i)

o_t^τ = σ(W_o [h_{t-1}^τ, x_t^τ] + b_o)

where W_f and b_f, W_i and b_i, and W_o and b_o are the weights and biases of the forget gate, input gate and output gate, and σ(·) is the sigmoid function.
Through the joint control of the three gates and the long-term storage of information in the memory cells, the LSTM unit can exploit features from multiple preceding and following time steps, thereby realizing self-extraction of multi-time-sequence features and further enhancing the feature mapping capability of the MDMSFN network for STBC signals.
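The gate equations above can be checked with a minimal NumPy LSTM step (a sketch with randomly initialized toy weights; the hidden size, input size and weight layout are assumptions for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step following the gate equations above; W and b hold
    the forget (f), input (i), output (o) and cell (l) blocks, each acting
    on the concatenation [h_prev, x_t]."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + b["f"])                       # forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])                       # input gate
    o_t = sigmoid(W["o"] @ z + b["o"])                       # output gate
    c_t = f_t * c_prev + i_t * np.tanh(W["l"] @ z + b["l"])  # memory update
    h_t = o_t * np.tanh(c_t)                                 # hidden output
    return h_t, c_t

rng = np.random.default_rng(1)
H, D = 4, 2                                  # assumed hidden and input sizes
W = {k: 0.1 * rng.standard_normal((H, H + D)) for k in "fiol"}
b = {k: np.zeros(H) for k in "fiol"}
h, c = np.zeros(H), np.zeros(H)
for t in range(5):                           # a short input sequence
    h, c = lstm_step(rng.standard_normal(D), h, c, W, b)
print(h.shape, c.shape)
```

Because h_t = o_t ⊙ tanh(c_t) with o_t in (0, 1), the hidden output stays strictly inside (-1, 1), which the test below exploits.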
Further, in this embodiment of the present application, the splitting of the multi-time-step inter-code features into one-dimensional feature vectors is expressed as:

[v_1^τ, v_2^τ, …, v_Q^τ] = Split(y_τ)

where Split(·) divides the multi-time-step inter-code feature y_τ at time delay τ into one-dimensional feature vectors by rows, and Q is the number of one-dimensional vectors; to realize the splicing of the delay features, the 2 channels output by the LSTM layer need to be converted back into 3 channels.
And the one-dimensional feature vectors are spliced to generate a second splicing matrix, expressed as:

S_q = [v_q^{τ=1}; v_q^{τ=2}; v_q^{τ=4}]

where the 1×N-dimensional feature vectors v_q^τ with the same sequence number q under the 3 classes of time delay are spliced by rows, and S_q is the resulting q-th 3×N-dimensional matrix.
Further, in this embodiment of the present application, feature fusion is performed on the second splicing matrix: the second splicing matrix is traversed by columns, and the maximum time-delay feature in each column is selected as the deep fusion feature, expressed as:

U_q(j) = max S_q(j)

where S_q(j) is the j-th column of the q-th splicing matrix.
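Column-wise maximum fusion is a one-liner in NumPy; the 3×N matrix below is a made-up S_q for illustration:

```python
import numpy as np

# Rows of S_q are the 1 x N features of the same index q under the three
# delays tau = 1, 2, 4; the fused vector keeps the strongest delay response
# in every column: U_q(j) = max S_q(j).
S_q = np.array([[0.1, 0.9, 0.2],
                [0.4, 0.3, 0.8],
                [0.7, 0.2, 0.5]])
U_q = S_q.max(axis=0)      # column-wise maximum over the three delays
print(U_q)
```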
Further, in this embodiment of the present application, generating a fusion feature vector from the deep fusion features is expressed as:

R_q = f(h(U_q) + F(U_q, W_q))

where R_q represents the output of the q-th fusion feature vector after passing through the residual layer, h(·) is the skip-connection mapping function (h(U_q) = U_q is adopted in the invention), F(·) is the residual mapping function, W_q is the weight to be learned, and h(U_q) is the deep fusion feature.
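A sketch of the residual layer R_q = f(h(U_q) + F(U_q, W_q)) with the identity skip h(U) = U; the residual branch here is a single toy linear map F(U, W) = W·U and f is ReLU, both assumptions for illustration:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_layer(U_q, W_q):
    """R_q = relu(h(U_q) + F(U_q, W_q)) with identity skip h(U_q) = U_q
    and a toy linear residual mapping F(U_q, W_q) = W_q @ U_q."""
    return relu(U_q + W_q @ U_q)

U_q = np.array([1.0, -2.0, 3.0])
W_q = np.zeros((3, 3))            # zero residual branch: the skip path dominates
print(residual_layer(U_q, W_q))   # reduces to relu(U_q)
```

The identity skip guarantees the fused feature U_q reaches the output unchanged even when the learned residual branch contributes nothing, which is the stated motivation for adding the layer.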
In order to implement the above embodiments, the present application further provides a space-time block code recognition apparatus based on a multi-delay multi-timing feature fusion mdmsfn network model.
The device extracts multi-delay and multi-time-sequence features by utilizing the correlation differences of the STBC coding matrices, deeply fuses the various delay features with a maximum-delay-feature fusion module, and proposes the MDMSFN network model, whose structure is shown in fig. 2. The model mainly comprises 3 modules: multi-delay feature self-extraction, multi-time-sequence feature self-extraction and maximum-delay feature fusion. It has the following characteristics: (1) considering the STBC correlation differences at the receiving end, dilated convolution is introduced to extract multi-delay features, enhancing the intra-code feature mapping capability of the network; (2) an LSTM layer is introduced to extract inter-code information over multiple time steps, enhancing the self-extraction capability for time-sequence features; (3) drawing on the idea of traditional STBC identification algorithms, the maximum delay feature is used for feature fusion, highlighting the distinguishability of the deep fusion features; (4) a residual layer with a skip connection is added to increase the utilization and characterization capability of the deep fusion features.
Fig. 2 is a schematic structural diagram of a space-time block code recognition apparatus based on a multi-delay multi-timing feature fusion mdmsfn network model according to an embodiment of the present application.
As shown in fig. 2, the space-time block code recognition apparatus based on the mdmmsfn network model with multi-latency and multi-timing feature fusion includes: a multi-time delay characteristic self-extracting module, a multi-time sequence characteristic self-extracting module, a maximum time delay characteristic fusion module and an identification module, wherein,
the multi-delay feature self-extraction module is used for forming a 2×N-dimensional space-time block code (STBC) sample from the real part and the imaginary part of the signal to be tested, performing merging convolution on the STBC sample to generate a one-dimensional feature vector, and performing dilated convolution on the one-dimensional feature vector by adopting multiple dilation rates identical to the multiple delay parameters to generate a delay feature vector;
considering that traditional algorithms usually complete recognition by calculating statistics of the STBC under different time delays and then comparing each order of statistics against a theoretical threshold, this way of calculating delay features coincides exactly with the dilated convolution process in deep learning: both extract information at interval points rather than signal features over a continuous time window. Inspired by this, the invention applies the idea of calculating multi-delay statistical features in traditional identification methods to a deep learning framework through dilated convolution, designs the network framework in combination with the autocorrelation characteristics of the STBC, and proposes a multi-delay feature self-extraction (MDFSE) module, which fully utilizes the multi-delay feature information of the space-time block code signal and improves the feature mapping capability of the model. At the receiving end, the correlation distributions of the various types of STBC across the channel and noise are shown in fig. 3.
As can be seen from fig. 3, since the received signals within the same coding matrix are correlated while signals between different matrices are uncorrelated, the correlation distribution of an STBC signal at the receiving end is consistent with the length of its transmission matrix. The differences in the correlation distributions of the 6 types of STBC under different time delays are shown in fig. 4.
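The correlation contrast that figs. 3 and 4 visualize can be reproduced with a toy experiment (a sketch only: the "coded" stream below just repeats each BPSK symbol to create intra-block correlation at lag 1, a loose stand-in for the intra-matrix correlation a real STBC induces):

```python
import numpy as np

rng = np.random.default_rng(0)

def delay_corr(y, tau):
    """Magnitude of the lag-tau correlation statistic |E[y(t) * y(t + tau)]|."""
    return abs(np.mean(y[:-tau] * y[tau:]))

n = 4096
s = rng.choice([-1.0, 1.0], n // 2)          # BPSK symbols
coded = np.repeat(s, 2)                      # correlated inside 2-symbol blocks
white = rng.choice([-1.0, 1.0], n)           # uncoded reference stream
print(delay_corr(coded, 1), delay_corr(white, 1))
```

The coded stream shows a lag-1 correlation near 0.5 while the uncoded one stays near 0; a detector, or a dilated-convolution branch, keyed to the right lag separates the two.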
The multi-time-sequence feature self-extraction module is used for extracting deep multi-time-delay features of the time-delay feature vector by using a continuous convolution method, performing splicing operation on the deep multi-time-delay features to generate a first splicing matrix, and extracting multi-time inter-step features of the first splicing matrix;
although the combination of CNN and recurrent neural network (RNN) has been applied in this field, that method extracts signal features only by stacking recurrent layers, and its model framework was not studied deeply enough, causing the recognition accuracy to deteriorate rapidly under severe channel conditions. In addition, it can only identify the two most basic coding modes, SM and AL, so the identifiable STBC types are even fewer than those of traditional algorithms. To enhance the feature mapping capability of the network and explore a time-sequence feature extraction method better suited to STBC identification, this module first performs deep convolution with continuous-sampling-point convolution kernels on the basis of the multi-delay feature extraction framework, and then uses an LSTM layer to extract the preceding and following multi-time-step features of the STBC signal, thereby realizing multi-time-sequence feature self-extraction (MSFSE).
For the MDMSFN network framework designed herein, the multi-time-sequence coding features extracted by the recurrent layer are exactly complementary to the multi-delay convolution features obtained by the multi-dilation-rate convolution layers, so the model can learn deep features with stronger distinguishability, enhancing the mapping capability of the MDMSFN network on STBC features.
The maximum delay feature fusion module splits the multi-time-step inter-code features into one-dimensional feature vectors, splices the one-dimensional feature vectors to generate a second splicing matrix, performs feature fusion on the second splicing matrix, traverses the second splicing matrix by columns, selects the maximum time-delay feature in the second splicing matrix as the deep fusion feature, and generates a fusion feature vector according to the deep fusion feature;
compared with single signal characteristics, the fusion information has stronger characterization capability due to the full utilization of complementarity of various characteristics. In order to fully fuse the multi-delay multi-timing characteristics, the idea of identifying STBC by using a traditional algorithm is used for reference, namely, high-order statistics (HOS) under various delays is calculated according to the correlation of the STBC, and the High-order delay statistic with the largest distance measurement is used as the identification characteristic. On the basis of the model framework, a maximum time delay feature fusion MDFF module is further added, the features under multiple time delays are spliced, the maximum time delay feature is extracted to serve as a deep fusion feature, a residual layer with spanning connection is added, fusion information is fully utilized, and therefore the problems that single feature characterization capability is poor and complementarity of all time delay information is not fully utilized are effectively solved.
The identification module is used for identifying the STBC code according to an STBC classification table so as to determine the category of the received signal to be tested.
Because the model needs to deeply fuse the multi-delay multi-time-sequence features to enhance the inter-code and intra-code feature mapping capability of the MDMSFN network and improve the distinguishability of the deep fusion features of STBC samples, it has 3 parallel multi-delay branch structures with consistent dimensions, and a splicing fusion module merges the delay branches. Specific parameters of the network are shown in fig. 5: the input layer dimension is set to 2×128 to match the sample size; after a merging convolution with 2×1 convolution kernels, 3 classes of dilated convolution with dilation rates τ = 1, 2, 4 extract the multi-delay features; the LSTM layer extracts the multi-time-sequence features, with return_sequences set to True to return the hidden values of all time steps; the output layer is a fully-connected layer of 6 neurons with a Softmax activation function corresponding to the 6 classes of STBC; all layers except the fully-connected layer use the ReLU activation function, and all convolution layers use stride = (1, 1).
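The output stage can be sketched as follows (illustrative only: the logits are invented, and the 6-class label set is assembled from the STBC types named in this document):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the fully-connected layer's logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Assumed 6-class label set, matching the STBC types discussed herein.
classes = ["SM", "AL", "STBC3-1", "STBC3-2", "STBC3-3", "STBC4"]
logits = np.array([0.2, 2.5, 0.1, -0.3, 0.4, 0.0])   # toy dense-layer output
p = softmax(logits)                                   # probability distribution
print(classes[int(np.argmax(p))])
```

The probabilities sum to 1, and the arg-max entry is taken as the identified STBC category.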
It should be noted that the explanation of the foregoing embodiment of the space-time block code identification method based on the mdmsfn network model with multi-delay and multi-timing characteristics fusion is also applicable to the space-time block code identification device based on the mdmsfn network model with multi-delay and multi-timing characteristics fusion in this embodiment, and is not described here again.
In order to implement the above embodiments, the present invention also proposes a non-transitory computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the method of the above embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (3)

1. A space-time block code identification method based on a multi-time-delay multi-time-sequence feature fusion MDMSFN network model is characterized by comprising the following steps:
extracting a real part and an imaginary part of a received signal to be tested to form a 2×N-dimensional space-time block code (STBC) sample, performing merging convolution on the STBC sample to generate a one-dimensional feature vector, and performing dilated convolution on the one-dimensional feature vector by adopting multiple dilation rates identical to the multiple time-delay parameters to generate a delay feature vector;
extracting deep multi-delay features of the delay feature vector by using a continuous convolution method, performing splicing operation on the deep multi-delay features to generate a first splicing matrix, and extracting multi-time inter-step features of the first splicing matrix;
splitting the multi-time step inter-code features into one-dimensional feature vectors, splicing the one-dimensional feature vectors to generate a second splicing matrix, performing feature fusion on the second splicing matrix, traversing the second splicing matrix according to columns, selecting the maximum time delay features in the second splicing matrix as deep fusion features, and generating fusion feature vectors according to the deep fusion features;
inputting the fusion feature vector into a residual layer to improve the utilization rate of the fused information, and identifying the STBC code through a fully-connected layer with Softmax as the activation function, so as to determine the category of the received signal to be tested;
the merging and convolution of the STBC samples generates a one-dimensional feature vector, specifically, the real and imaginary parts of the STBC samples are merged into a one-dimensional feature vector, which is expressed as:
h_l = f(x_{I/Q} * W_l + b_l)

where h_l is the feature vector extracted by the l-th convolution kernel, W_l and b_l respectively represent the weight and bias to be learned of the l-th convolution kernel, * represents the convolution operation, f(·) is an activation function, and x_{I/Q} is the input feature sample,
performing dilated convolution on the one-dimensional feature vector by using multiple dilation rates identical to the multiple time-delay parameters to generate a delay feature vector, expressed as:

F_k^τ(i) = f( Σ_{l=1}^{L} Σ_{s=1}^{S} h_l(i + τ·s)·w_k(s) + b_k )

where F_k^τ(i) is the value at position i of the delay vector extracted by the k-th convolution kernel at dilation rate τ, h_l(i + τ·s) is the value at i + τ·s of the one-dimensional feature vector h_l output by the merged convolution layer, S is the length of the convolution kernel, L is the number of feature maps of the merged convolution layer, w_k(s) is the weight to be learned of the k-th convolution kernel at s, and b_k is the bias to be learned,
the deep multi-delay features of the delay feature vector are extracted by using a continuous convolution method, expressed as:

d_g^τ = f(F^τ * W_g + b_g)

where d_g^τ is the one-dimensional vector output by the g-th convolution kernel at time delay τ, W_g and b_g are respectively the weight to be learned and the bias of the convolution kernel, and F^τ is the delay feature vector,
the stitching operation on the deep multi-delay features generates a first splicing matrix, expressed as:

P_τ = Reshape(d_1^τ, d_2^τ, …, d_G^τ)

where Reshape(·) denotes the conversion of the 3-channel output of the deep convolution layer into 2 channels, and P_τ is the splicing matrix after dimension reshaping,
the extraction of the multi-time-step inter-code features of the first splicing matrix is expressed as:

h_t^τ = o_t ⊙ tanh(c_t)

where h_t^τ represents the output of the hidden layer at time t, o_t is the output-gate state at the current time, and c_t is the updated memory-cell state, whose expression is:

c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_l·[h_{t-1}, x_t] + b_l)

where the update of c_t is controlled by the forget gate f_t and the input gate i_t of the current time step, c_{t-1} is the memory-cell state at time t-1, and W_l and b_l represent the weight and bias of the current memory-cell state; the states of the forget gate f_t, the input gate i_t and the output gate o_t are jointly determined by the input x_t of the current time step and the hidden-layer state h_{t-1} at time t-1, respectively expressed as:

f_t = σ(W_f·[h_{t-1}, x_t] + b_f)

i_t = σ(W_i·[h_{t-1}, x_t] + b_i)

o_t = σ(W_o·[h_{t-1}, x_t] + b_o)

where W_f and b_f, W_i and b_i, and W_o and b_o represent the weights and biases of the forget gate, the input gate and the output gate, and σ(·) is a sigmoid function,
the splitting of the multi-time-step inter-code features into one-dimensional feature vectors is expressed as:

[v_1^τ, v_2^τ, …, v_Q^τ] = Split(y_τ)

where Split(·) divides the multi-time-step inter-code feature y_τ at time delay τ into one-dimensional feature vectors by rows, and Q is the number of one-dimensional vectors,
and the one-dimensional feature vectors are spliced to generate a second splicing matrix, expressed as:

S_q = [v_q^{τ=1}; v_q^{τ=2}; v_q^{τ=4}]

where the 1×N-dimensional feature vectors v_q^τ with the same sequence number q under the 3 classes of time delay are spliced by rows, and S_q is the resulting q-th 3×N-dimensional matrix,
performing feature fusion on the second splicing matrix: the second splicing matrix is traversed by columns, and the maximum time-delay feature in the second splicing matrix is selected as the deep fusion feature, expressed as:

U_q(j) = max S_q(j)

where S_q(j) is the j-th column of the q-th splicing matrix,
generating a fusion feature vector from the deep fusion features, expressed as:

R_q = f(h(U_q) + F(U_q, W_q))

where R_q represents the output of the q-th fusion feature vector after passing through the residual layer, h(·) is the skip-connection mapping function, with h(U_q) = U_q adopted, F(·) is the residual mapping function, W_q is the weight to be learned, and h(U_q) is the deep fusion feature.
2. A space-time block code recognition device based on a multi-time-delay multi-time-sequence feature fusion MDMSFN network model is characterized by comprising a multi-time-delay feature self-extraction module, a multi-time-sequence feature self-extraction module, a maximum time-delay feature fusion module and a recognition module, wherein,
the multi-delay feature self-extraction module is used for extracting a real part and an imaginary part of a received signal to be tested to form a 2×N-dimensional space-time block code (STBC) sample, performing merging convolution on the STBC sample to generate a one-dimensional feature vector, and performing dilated convolution on the one-dimensional feature vector by adopting multiple dilation rates identical to the multiple time-delay parameters to generate a delay feature vector;
the multi-time-sequence feature self-extraction module is used for extracting deep multi-time-delay features of the time-delay feature vector by using a continuous convolution method, performing splicing operation on the deep multi-time-delay features to generate a first splicing matrix, and extracting multi-time step inter-code features of the first splicing matrix;
the maximum delay feature fusion module splits the multi-time-step inter-code features into one-dimensional feature vectors, splices the one-dimensional feature vectors to generate a second splicing matrix, performs feature fusion on the second splicing matrix, traverses the second splicing matrix by columns, selects the maximum time-delay feature in the second splicing matrix as the deep fusion feature, generates a fusion feature vector according to the deep fusion feature, inputs the fusion feature vector into a residual layer to improve the utilization rate of the fused information, and outputs a One-Hot coded probability distribution vector through a fully-connected layer with Softmax as the activation function;
the identification module is used for identifying the STBC code according to the STBC probability distribution vector so as to determine the category of the received signal to be tested;
the merging convolution of the STBC sample generates a one-dimensional feature vector; specifically, the real part and the imaginary part of the STBC sample are merged into a one-dimensional feature vector, expressed as:

h_l = f(x_{I/Q} * W_l + b_l)

where h_l is the feature vector extracted by the l-th convolution kernel, W_l and b_l respectively represent the weight and bias to be learned of the l-th convolution kernel, * represents the convolution operation, f(·) is an activation function, and x_{I/Q} is the input feature sample,
performing dilated convolution on the one-dimensional feature vector by using multiple dilation rates identical to the multiple time-delay parameters to generate a delay feature vector, expressed as:

F_k^τ(i) = f( Σ_{l=1}^{L} Σ_{s=1}^{S} h_l(i + τ·s)·w_k(s) + b_k )

where F_k^τ(i) is the value at position i of the delay vector extracted by the k-th convolution kernel at dilation rate τ, h_l(i + τ·s) is the value at i + τ·s of the one-dimensional feature vector h_l output by the merged convolution layer, S is the length of the convolution kernel, L is the number of feature maps of the merged convolution layer, w_k(s) is the weight to be learned of the k-th convolution kernel at s, and b_k is the bias to be learned,
The deep multi-delay features of the delay feature vector are extracted by using a continuous convolution method, expressed as:

d_g^τ = f(F^τ * W_g + b_g)

where d_g^τ is the one-dimensional vector output by the g-th convolution kernel at time delay τ, W_g and b_g are respectively the weight to be learned and the bias of the convolution kernel, and F^τ is the delay feature vector,
the stitching operation on the deep multi-delay features generates a first splicing matrix, expressed as:

P_τ = Reshape(d_1^τ, d_2^τ, …, d_G^τ)

where Reshape(·) denotes the conversion of the 3-channel output of the deep convolution layer into 2 channels, and P_τ is the splicing matrix after dimension reshaping,
the extraction of the multi-time-step inter-code features of the first splicing matrix is expressed as:

h_t^τ = o_t ⊙ tanh(c_t)

where h_t^τ represents the output of the hidden layer at time t, o_t is the output-gate state at the current time, and c_t is the updated memory-cell state, whose expression is:

c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_l·[h_{t-1}, x_t] + b_l)

where the update of c_t is controlled by the forget gate f_t and the input gate i_t of the current time step, c_{t-1} is the memory-cell state at time t-1, and W_l and b_l represent the weight and bias of the current memory-cell state; the states of the forget gate f_t, the input gate i_t and the output gate o_t are jointly determined by the input x_t of the current time step and the hidden-layer state h_{t-1} at time t-1, respectively expressed as:

f_t = σ(W_f·[h_{t-1}, x_t] + b_f)

i_t = σ(W_i·[h_{t-1}, x_t] + b_i)

o_t = σ(W_o·[h_{t-1}, x_t] + b_o)

where W_f and b_f, W_i and b_i, and W_o and b_o represent the weights and biases of the forget gate, the input gate and the output gate, and σ(·) is a sigmoid function,
the splitting of the multi-time-step inter-code features into one-dimensional feature vectors is expressed as:

[v_1^τ, v_2^τ, …, v_Q^τ] = Split(y_τ)

where Split(·) divides the multi-time-step inter-code feature y_τ at time delay τ into one-dimensional feature vectors by rows, and Q is the number of one-dimensional vectors,
and the one-dimensional feature vectors are spliced to generate second splicing matrices, expressed as:

S_q = Concat(y_{τ_1}^q, y_{τ_2}^q, y_{τ_3}^q)

wherein Concat(·) splices by row the 1×N-dimensional feature vectors y_τ^q with the same sequence number q under the 3 classes of delay, and S_q is the resulting q-th 3×N-dimensional spliced matrix,
performing feature fusion on the second splicing matrices by traversing each matrix by column and selecting the maximum delay feature in each column as the deep fusion feature, expressed as:

U_q(j) = max S_q(j)

wherein S_q(j) is the j-th column of the q-th spliced matrix,
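The split-splice-fuse steps above can be sketched in NumPy as follows; the vector length N, the count Q, the three delay values, and the random features are illustrative assumptions:

```python
import numpy as np

N, Q = 6, 4                       # assumed vector length and count
rng = np.random.default_rng(1)

# Hypothetical multi-time-step inter-code features y_tau for the
# 3 classes of delay, each already split by row into Q 1xN vectors.
y = {tau: rng.standard_normal((Q, N)) for tau in (1, 2, 4)}

# Second splicing matrix S_q: stack the q-th 1xN vector of each
# delay class by row, giving a 3xN matrix.
S = [np.vstack([y[tau][q] for tau in (1, 2, 4)]) for q in range(Q)]

# Deep fusion feature U_q: traverse S_q by column and keep the
# maximum delay feature in each column (column-wise max pooling).
U = [S_q.max(axis=0) for S_q in S]

print(S[0].shape, U[0].shape)  # (3, 6) (6,)
```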
and generating a fusion feature vector from the deep fusion feature, expressed as:

R_q = f(h(U_q) + F(U_q, W_q))

wherein R_q represents the output of the q-th fusion feature vector after passing through the residual layer, h(·) is the skip-connection mapping with the identity mapping h(U_q) = U_q adopted, F(·) is the residual mapping function, W_q are the weights to be learned, and U_q is the deep fusion feature.
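A minimal sketch of the residual fusion step above; the claim only fixes that h is the identity skip connection and that W_q is learned, so the concrete form of F (a single linear map followed by ReLU) and the activation f are illustrative assumptions:

```python
import numpy as np

def residual_fusion(U_q, W_q, relu=lambda v: np.maximum(v, 0.0)):
    """Sketch of R_q = f(h(U_q) + F(U_q, W_q)) with h(U_q) = U_q."""
    F = relu(W_q @ U_q)            # assumed residual mapping F(U_q, W_q)
    return relu(U_q + F)           # f(h(U_q) + F(U_q, W_q)), h identity

rng = np.random.default_rng(2)
U_q = rng.standard_normal(6)       # deep fusion feature
W_q = rng.standard_normal((6, 6))  # weights to be learned
R_q = residual_fusion(U_q, W_q)
print(R_q.shape)  # (6,)
```

The identity skip connection lets the residual layer learn only the correction F(U_q, W_q) on top of the deep fusion feature itself.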
3. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the MDMSFN-based space-time block code automatic identification method with multi-delay and multi-timing feature fusion as claimed in claim 1.
CN202110348741.9A 2021-03-31 2021-03-31 MDMSFN-based space-time block code automatic identification method and device Active CN113098664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110348741.9A CN113098664B (en) 2021-03-31 2021-03-31 MDMSFN-based space-time block code automatic identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110348741.9A CN113098664B (en) 2021-03-31 2021-03-31 MDMSFN-based space-time block code automatic identification method and device

Publications (2)

Publication Number Publication Date
CN113098664A CN113098664A (en) 2021-07-09
CN113098664B true CN113098664B (en) 2022-10-11

Family

ID=76671753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110348741.9A Active CN113098664B (en) 2021-03-31 2021-03-31 MDMSFN-based space-time block code automatic identification method and device

Country Status (1)

Country Link
CN (1) CN113098664B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107562784A * 2017-07-25 2018-01-09 Tongji University Short text classification method based on ResLCNN models
CN110659684A * 2019-09-23 2020-01-07 Naval Aeronautical University of PLA Convolutional neural network-based STBC signal identification method
CN111507299A * 2020-04-24 2020-08-07 Naval Aeronautical University of PLA Method for identifying STBC (space-time block coding) signals in the frequency domain using a convolutional neural network
CN112232165A * 2020-10-10 2021-01-15 Tencent Technology (Shenzhen) Co., Ltd. Data processing method and device, computer and readable storage medium
CN112365040A * 2020-11-03 2021-02-12 Harbin Institute of Technology Short-term wind power prediction method based on multi-channel convolutional neural network and temporal convolutional network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100913873B1 * 2004-09-13 2009-08-26 Samsung Electronics Co., Ltd. Apparatus and method for higher rate differential space-time block codes
CN109820525A * 2019-01-23 2019-05-31 Wuyi University Driving fatigue recognition method based on a CNN-LSTM deep learning model
CN111667445B * 2020-05-29 2021-11-16 Hubei University of Technology Image compressed sensing reconstruction method based on attention multi-feature fusion
CN112187413B * 2020-08-28 2022-05-03 School of Aeronautical Combat Service, Naval Aeronautical University of PLA SFBC (space-frequency block code) identification method and device based on CNN-LSTM (convolutional neural network-long short-term memory)


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
AL-OFDM and SM-OFDM space-frequency block code signal blind identification method; Ling Qing et al.; Systems Engineering and Electronics; 20210131; full text *
Blind identification of space–time block codes based; Limin ZHANG et al.; Chinese Journal of Aeronautics; 20210111; full text *
Hierarchical space–time block codes signals classification using higher order cumulants; Ling Qing et al.; Chinese Journal of Aeronautics; 20160615; full text *
Modulation recognition of space-time block codes based on fourth-order delay matrix; Yu Keyuan et al.; 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP); 20191211; full text *
Serial-sequence space-time block code identification method using convolutional-recurrent neural networks; Zhang Yuyuan et al.; Journal of Signal Processing; 20200917; full text *


Similar Documents

Publication Publication Date Title
CN112702294B (en) Modulation recognition method for multi-level feature extraction based on deep learning
CN106059972B Modulation identification method under MIMO correlated channels based on machine learning algorithms
CN113269077B (en) Underwater acoustic communication signal modulation mode identification method based on improved gating network and residual error network
CN108169708B (en) Direct positioning method of modular neural network
CN110532932B (en) Method for identifying multi-component radar signal intra-pulse modulation mode
CN110598530A (en) Small sample radio signal enhanced identification method based on ACGAN
CN112910811B (en) Blind modulation identification method and device under unknown noise level condition based on joint learning
CN110929842B (en) Accurate intelligent detection method for non-cooperative radio signal burst time region
CN116866129A (en) Wireless communication signal detection method
CN114764577A (en) Lightweight modulation recognition model based on deep neural network and method thereof
CN114943245A (en) Automatic modulation recognition method and device based on data enhancement and feature embedding
KR102407835B1 (en) Method and apparatus for classifying pulse radar signal properties based on machine learning
CN114285545B (en) Side channel attack method and system based on convolutional neural network
CN114298086A (en) STBC-OFDM signal blind identification method and device based on deep learning and fourth-order lag moment spectrum
CN113098664B (en) MDMSFN-based space-time block code automatic identification method and device
CN115712867A (en) Multi-component radar signal modulation identification method
CN116243248A (en) Multi-component interference signal identification method based on multi-label classification network
CN113962261B (en) Deep network model construction method for radar signal sorting
CN115329821A (en) Ship noise identification method based on pairing coding network and comparison learning
CN113343924B (en) Modulation signal identification method based on cyclic spectrum characteristics and generation countermeasure network
CN112346056B (en) Resolution characteristic fusion extraction method and identification method of multi-pulse radar signals
CN115270891A (en) Method, device, equipment and storage medium for generating signal countermeasure sample
CN114724245A (en) CSI-based incremental learning human body action identification method
CN113869238A (en) Cognitive Internet of vehicles intelligent frequency spectrum sensing method and system
CN114915526B (en) Communication signal modulation identification method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant