CN113271272B - Single-channel time-frequency aliasing signal blind separation method based on residual error neural network - Google Patents

Single-channel time-frequency aliasing signal blind separation method based on residual error neural network

Info

Publication number
CN113271272B
CN113271272B CN202110520047.0A
Authority
CN
China
Prior art keywords
signal
neural network
feature
residual error
separation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110520047.0A
Other languages
Chinese (zh)
Other versions
CN113271272A (en)
Inventor
侯小琪
曾泓然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202110520047.0A priority Critical patent/CN113271272B/en
Publication of CN113271272A publication Critical patent/CN113271272A/en
Application granted granted Critical
Publication of CN113271272B publication Critical patent/CN113271272B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/03 Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
    • H04L25/03006 Arrangements for removing intersymbol interference
    • H04L25/03165 Arrangements for removing intersymbol interference using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Power Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Digital Transmission Methods That Use Modulated Carrier Waves (AREA)
  • Monitoring And Testing Of Transmission In General (AREA)

Abstract

The invention discloses a single-channel time-frequency aliasing signal blind separation method based on a residual error neural network, which comprises the following steps: step 1: acquiring a single-channel time-frequency aliasing signal; step 2: constructing a data set from the aliasing signal obtained in step 1, the data set comprising a training set and a test set; step 3: building a residual error neural network, and training and testing it with the data set obtained in step 2; step 4: creating a network loss function; step 5: inputting the aliasing signal into the residual error neural network obtained in step 3 to obtain two separated source signal waveforms; step 6: sending the source signal waveforms obtained in step 5 into a demodulator for demodulation, completing the blind separation of the time-frequency aliasing signal. By adopting a data-driven deep learning method, the method overcomes the drawback of manually characterizing the separation problem model: deep features of the signals are learned and fitted from a large number of samples, complex and tedious manual design is avoided, and the training features are simple to construct.

Description

Single-channel time-frequency aliasing signal blind separation method based on residual error neural network
Technical Field
The invention relates to the technical field of signal processing, in particular to a single-channel time-frequency aliasing signal blind separation method based on a residual error neural network.
Background
Blind signal separation is a relatively new signal processing technique. In cooperative communication, the growing density of communication equipment increases the interference between signals; single-channel blind separation can be used to suppress interference in the mixed signal and improve the receiving performance of communication equipment. In non-cooperative communication, single-channel blind signal separation serves as a key module for acquisition, analysis and interference suppression. Several source signals at the transmitting end are transmitted through antennas and pass through a mixing model, so that the receiving end obtains a mixture of the source signals under certain conditions; the blind separation task is to recover the source signals of the transmitting end from the mixed signal at the receiving end without prior knowledge of the mixed signal or the transmission channel.
Single-channel blind signal separation refers to the scenario with multiple transmitting antennas and a single receiving antenna; mathematically it is an underdetermined problem with no unique solution. Existing methods fall mainly into two categories, each with its own limitations. The first category converts the single-channel separation model into a multi-channel separation model, turning underdetermined blind separation into adaptive blind separation, or separates the signals by exploiting parameter differences of the mixed signal. Independent component analysis, FastICA and the wavelet transform are representative methods of this kind; their drawback is that the mixed signal must first be processed to obtain a certain amount of information or to construct corresponding separation conditions, and the subsequent multi-channel separation model brings high complexity and a large amount of data processing. The second category jointly estimates the symbol sequences and the transmission channel parameters; representative methods are the particle filter algorithm and the PSP (per-survivor processing) algorithm. To guarantee the separation effect, the particle filter algorithm must simulate the system state with a large number of particles, and the computation grows exponentially with the number of particles, so the algorithm complexity and computational load are very high. The PSP method must traverse the symbol sequences to find the optimal solution of the system, and its computational load is also large.
Disclosure of Invention
The invention provides a single-channel time-frequency aliasing signal blind separation method based on a residual error neural network, which has a simple algorithm, high single-channel blind separation efficiency and high separation accuracy under low signal-to-noise ratio conditions.
The technical scheme adopted by the invention is as follows: a single-channel time-frequency aliasing signal blind separation method based on a residual neural network comprises the following steps:
step 1: acquiring a single-channel time-frequency aliasing signal;
step 2: constructing a data set from the aliasing signal obtained in step 1, the data set comprising a training set and a test set;
step 3: building a residual error neural network, wherein the residual error neural network is a multi-scale stacked time domain residual error neural network comprising a feature extraction module, a separation coefficient calculation module and a waveform recovery module;
the feature extraction module performs feature extraction on the input one-dimensional mixed signal, maps the one-dimensional mixed signal into a high-dimensional feature space from a signal space, and obtains a feature representation corresponding to the signal, wherein the feature representation is represented as x feature
x feature =h feature-mapping (x)
Wherein h is feature-mapping The method comprises the following steps of 1 × 1 convolution, layer normalization, a PRelu activation function, depth separable convolution and a ReLu activation function for characteristic mapping operation;
the separation coefficient calculation module comprises three Time-domain Residual Stacked blocks (Stacked Time-domain Residual blocks, Stacked-TRBs), wherein each Time-domain Residual Stacked Block comprises three Multi-scale Residual unit blocks (MRBs) and a one-dimensional deconvolution layer; according to the extracted feature x feature Estimating the separation coefficient beta of each source, and then performing point multiplication on the characteristics and the separation coefficient to obtain the characteristics s corresponding to each source sep-feature
β=f sep (x feature )
Figure GDA0003793907350000028
Wherein f is sep A separation coefficient calculation module for a stack block containing time domain residuals;
waveform recovery module completion slaveMapping the feature space to the signal space, and mapping the features s corresponding to the sources sep-feature Mapping back to signal space to recover source signal waveform s *
s * =h signal-mapping (s sep-feature )
Wherein: h is signal-mapping The method comprises the steps of performing signal mapping operation, including one-dimensional deconvolution operation, layer normalization and a PReLu activation function;
and 4, step 4: creating a self-defined network loss function, and outputting a loss value for the network to optimize by using a gradient descent method:
Figure GDA0003793907350000021
Figure GDA0003793907350000022
Figure GDA0003793907350000023
where θ is a network optimizable parameter, l MSE The value of the loss is the value of the loss,
Figure GDA0003793907350000024
and
Figure GDA0003793907350000025
respectively representing a first path of separation signal and a second path of separation signal output by the multi-scale stacked time domain residual error neural network,
Figure GDA0003793907350000026
and
Figure GDA0003793907350000027
respectively representing a first path of label source signals and a second path of label source signals; l. the MSE1 Represent
Figure GDA0003793907350000031
Correspond to
Figure GDA0003793907350000032
Correspond to
Figure GDA0003793907350000033
l MSE2 Exactly with l MSE1 Is in reverse correspondence to (2) represents
Figure GDA0003793907350000034
Correspond to
Figure GDA0003793907350000035
Corresponding label
Figure GDA0003793907350000036
step 5: inputting the aliasing signal into the residual error neural network obtained in step 3, training and testing the residual error neural network with the data set obtained in step 2 to obtain the two separated source signal waveforms, and completing the blind separation of the time-frequency aliasing signal;
step 6: sending the source signal waveforms obtained in step 5 into a demodulator for demodulation to obtain the bit data of the separated signals.
Further, the training set and test set constructing process in step 2 is as follows:
extracting a real part and an imaginary part of the mixed signal as network training characteristics;
respectively extracting respective real parts and imaginary parts of the two paths of source signals as target samples;
and constructing the network training characteristics and the target sample into a two-dimensional matrix to form a data set.
Further, the aliasing signal in step 1 is obtained by processing the following method:
x(t)=As(t)+n(t)
wherein: x (t) is the mixing signal, A is the mixing matrix, s (t) is the source signal, and n (t) is the channel noise.
Further, the step 5 includes the following processes:
dividing data into a plurality of sections of equal-length sequences according to fixed length;
combining a fixed number of sequences into a mini_batch, and inputting the mini_batch into the neural network;
updating the parameters of each part through a reverse gradient descent algorithm, and performing iterative optimization by using an Adam optimization algorithm;
and the network outputs the two separated source signal waveforms.
A separation device for the single-channel time-frequency aliasing signal blind separation method based on a residual error neural network comprises:
a signal acquisition module: used for acquiring the single-channel time-frequency aliasing signal;
a construction module: used for constructing the data set required by the residual error neural network from the single-channel time-frequency aliasing signal;
a signal separation module: used for obtaining the source signal waveforms with the trained residual error neural network;
a demodulation module: used for demodulating the source signal waveforms.
A control apparatus, characterized by comprising:
at least one memory for storing program instructions;
at least one processor for calling program instructions stored in said memory and for executing the steps of the method according to any one of claims 1 to 4 in accordance with the program instructions obtained.
A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
The invention has the beneficial effects that:
(1) by adopting a data-driven deep learning method, the method overcomes the drawback of manually characterizing the separation problem model: deep features of the signals are learned and fitted from a large number of samples, and complex and tedious manual design is avoided;
(2) the input of the residual error neural network in the invention is a two-dimensional matrix formed by the real part and the imaginary part of the mixed signal, so the training features are simple to construct;
(3) the loss function provided by the invention solves the label ordering problem, namely that the correspondence between the two separated signals output by the network and the two label source signals cannot be determined; the method judges, according to the loss value, how close the two separated signals output by the network are to the label source signals, and selects the correspondence with the smallest loss value for optimization, thereby accelerating the convergence of the network;
(4) by learning from a mixed-signal set containing noise, the method can adapt to the noise environment and obtain a good separation effect at low signal-to-noise ratio, effectively reducing the difficulty of separating single-channel time-frequency aliasing signals, improving the separation performance and improving the robustness of the separation system.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of a multi-scale stacked time domain residual error neural network structure employed in the present invention.
FIG. 3 is a diagram illustrating a multi-scale residual unit block structure according to the present invention.
FIG. 4 is a diagram of the effect of the waveform separation according to the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific examples.
A single-channel time-frequency aliasing signal blind separation method based on a residual error neural network comprises the following steps:
Step 1: acquiring a single-channel time-frequency aliasing signal. Digital modulation signals are generated and received with an NI USRP-2930 software radio platform after passing through two channels; after sampling, the two obtained source signals are linearly superimposed into one mixed signal.
Sampling the collected mixed signal x (t) after passing through a channel to obtain x, wherein a mixed signal blind separation mathematical model adopts a linear instantaneous mixed model:
x(t)=As(t)+n(t)
wherein: x (t) is the mixing signal, A is the mixing matrix, s (t) is the source signal, and n (t) is the channel noise.
The received mixed signal is x(t) = {x_1(t)}^T, the source signal is s(t) = {s_1(t), s_2(t)}^T, the estimated signal is s*(t) = {s*_1(t), s*_2(t)}^T, and A is the mixing matrix; the single-channel blind source separation instantaneous mixing model is then
x_1(t) = a_1·s_1(t) + a_2·s_2(t) + n(t)
where a_1 and a_2 are the elements of the mixing matrix A.
Step 2: constructing a data set from the aliasing signal obtained in step 1, the data set comprising a training set and a test set. The real part and the imaginary part of the mixed signal are two simple but important features, and they are extracted as the network training features; the real parts and imaginary parts of the two source signals are extracted respectively as the target samples; the network training features and the target samples are assembled into two-dimensional matrices and used as the data set for network training (training set) and testing (test set).
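As an illustration of this data set construction, a minimal Python/NumPy sketch is given below. The function name, the default mixing coefficients and the noise level are assumptions made for the example only; the patent does not prescribe them.

import numpy as np

def build_sample(s1, s2, a1=1.0, a2=1.0, noise_std=0.1):
    # Sketch of step 2: linearly mix two complex source signals according to
    # x(t) = A s(t) + n(t) and build one (feature, target) pair.
    noise = noise_std * (np.random.randn(s1.size) + 1j * np.random.randn(s1.size))
    x = a1 * s1 + a2 * s2 + noise                     # single-channel mixed signal
    # Network training feature: real and imaginary parts of the mixed signal
    feature = np.stack([x.real, x.imag])              # shape (2, T)
    # Target sample: real and imaginary parts of the two source signals
    target = np.stack([s1.real, s1.imag, s2.real, s2.imag])  # shape (4, T)
    return feature.astype(np.float32), target.astype(np.float32)

A collection of such pairs, split for example 80%/20%, would then serve as the training set and the test set.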
Step 3: building the residual error neural network, and training and testing it with the data set obtained in step 2.
The multi-scale stacked time domain residual error neural network is composed of a feature extraction (feature mapping) module, a separation coefficient calculation module and a waveform recovery module, as shown in FIG. 2.
The feature extraction module comprises a 1×1 convolution, layer normalization, an activation function (PReLU), a depthwise separable convolution, layer normalization and an activation function (ReLU). A residual connection is added to the module, the number of channels is set to 512, the depthwise convolution kernel size in the depthwise separable convolution is L, and the sliding stride is L/2. The function of the module is to extract features from the one-dimensional mixed signal and map it from the signal space into a high-dimensional feature space, obtaining the feature representation corresponding to the signal. The depthwise separable convolution decomposes an ordinary convolution along the channel dimension, splitting the convolution kernel into an independent depthwise convolution and a point-wise convolution, which effectively reduces the number of parameters, the computational cost and the model size. Layer normalization normalizes all neurons in an intermediate layer so that their input distribution remains consistent during training, alleviating the internal covariate shift caused by the parameter update at each gradient descent step, thereby enhancing the generalization ability of the network and avoiding vanishing and exploding gradients.
x_feature = h_feature-mapping(x)
where h_feature-mapping is the feature mapping operation.
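A compact PyTorch sketch of such a feature extraction module is shown below. The class name, the choice of GroupNorm(1, C) as the layer-normalization variant and the concrete kernel size are illustrative assumptions; the residual connection mentioned in the text is omitted here because the strided depthwise convolution changes the sequence length.

import torch.nn as nn

class FeatureExtraction(nn.Module):
    # Sketch of h_feature-mapping: 1x1 conv -> layer norm -> PReLU ->
    # depthwise separable conv -> layer norm -> ReLU.
    def __init__(self, in_ch=2, channels=512, kernel_size=16):
        super().__init__()
        self.conv1x1 = nn.Conv1d(in_ch, channels, kernel_size=1)
        self.norm1 = nn.GroupNorm(1, channels)   # layer-norm style normalization over channels
        self.act1 = nn.PReLU()
        # depthwise separable convolution = depthwise conv + point-wise (1x1) conv
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   stride=kernel_size // 2, groups=channels,
                                   padding=kernel_size // 2)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)
        self.norm2 = nn.GroupNorm(1, channels)
        self.act2 = nn.ReLU()

    def forward(self, x):                         # x: (batch, 2, T) real/imag of the mixed signal
        y = self.act1(self.norm1(self.conv1x1(x)))
        y = self.pointwise(self.depthwise(y))
        return self.act2(self.norm2(y))           # high-dimensional feature x_feature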
and the separation coefficient calculation module is composed of three time domain residual Stacked blocks (Stacked-TRBs), and each TRB is composed of 3 multi-scale residual unit blocks (MRBs) and one-dimensional deconvolution layer, as shown in fig. 3. The multi-scale residual error unit block is used for estimating a separation coefficient, and the deconvolution layer is used for recovering the number of characteristic channels and the characteristic dimensionality; the convolution kernels are expansion convolution kernels, the expansion convolution can enlarge the receptive field under the condition that pooling operation loss information is not used, multi-scale context information is obtained, and long-term dependency modeling of signal separation is fully achieved. The output of each stacking block is used as the next input, and the network layer number is further deepened.
An n-MRB is composed of five convolutional layers, wherein the convolutional layer 1 is equivalent to a bottleneck layer and used for reducing dimensions, the number of channels of the convolutional layers 2 and 3 is n/2, the number of channels of the convolutional layers 4 and 5 is n, the sizes of convolutional cores are 1, 3 and 3 in sequence, and residual errors are used for connection between layers. The convolution kernel expansion ratios for 3 stacked n-MRBs increase exponentially, being 1, 2, 4 in order. The number of output channels of the one-dimensional deconvolution layer is 512.
The separation coefficient β of each source is estimated from the extracted feature x_feature, and the feature is then multiplied point-wise by the separation coefficient to obtain the feature s_sep-feature corresponding to each source:
β = f_sep(x_feature)
s_sep-feature = β ⊙ x_feature
where f_sep denotes the separation coefficient calculation module containing the stacked time-domain residual blocks and ⊙ denotes point-wise multiplication.
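One plausible PyTorch reading of the n-MRB described above (a 1×1 bottleneck layer followed by two pairs of dilated convolutions with n/2 and n channels, joined by residual connections) is sketched below; the exact placement of the residual connections and activations is an assumption, not taken from the patent.

import torch.nn as nn

class MultiScaleResidualBlock(nn.Module):
    # Sketch of an n-MRB: five convolutional layers with dilated kernels
    # and residual connections between layers.
    def __init__(self, n=512, dilation=1):
        super().__init__()
        self.bottleneck = nn.Conv1d(n, n // 2, kernel_size=1)   # layer 1: dimension reduction
        self.conv2 = nn.Conv1d(n // 2, n // 2, 3, padding=dilation, dilation=dilation)
        self.conv3 = nn.Conv1d(n // 2, n // 2, 3, padding=dilation, dilation=dilation)
        self.conv4 = nn.Conv1d(n // 2, n, 3, padding=dilation, dilation=dilation)
        self.conv5 = nn.Conv1d(n, n, 3, padding=dilation, dilation=dilation)
        self.act = nn.PReLU()

    def forward(self, x):                                       # x: (batch, n, T)
        y = self.act(self.bottleneck(x))
        y = self.act(self.conv3(self.act(self.conv2(y))) + y)   # residual inside the n/2 branch
        y = self.act(self.conv5(self.act(self.conv4(y))))
        return y + x                                             # residual connection to the block input

Three such blocks with dilation rates 1, 2 and 4 followed by a one-dimensional deconvolution layer would then form one stacked time-domain residual block (Stacked-TRB).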
An activation layer with ReLU as the activation function is placed between the separation coefficient calculation module and the waveform recovery module; it adds nonlinearity and outputs the separation coefficients.
The waveform recovery module consists, in order, of a one-dimensional deconvolution layer, layer normalization and an activation function (PReLU). It completes the mapping of the output from the feature space back to the signal space and recovers the length of the signal sequence.
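A corresponding sketch of the waveform recovery module and of how the three modules could be chained for one forward pass is given below. The assumption that the separation coefficient module returns one coefficient tensor per source, as well as all class and argument names, are illustrative and not taken from the patent.

import torch.nn as nn

class WaveformRecovery(nn.Module):
    # Sketch of h_signal-mapping: 1-D deconvolution -> layer normalization -> PReLU.
    def __init__(self, channels=512, out_ch=2, kernel_size=16):
        super().__init__()
        self.deconv = nn.ConvTranspose1d(channels, out_ch, kernel_size, stride=kernel_size // 2)
        self.norm = nn.GroupNorm(1, out_ch)
        self.act = nn.PReLU()

    def forward(self, s_sep_feature):
        return self.act(self.norm(self.deconv(s_sep_feature)))

def separate(mixture, feature_net, sep_net, recovery_net):
    # Forward pass sketch: mixture (batch, 2, L) -> two separated waveforms.
    x_feature = feature_net(mixture)          # feature extraction module
    beta1, beta2 = sep_net(x_feature)         # separation coefficients for the two sources (assumed output form)
    s1_hat = recovery_net(beta1 * x_feature)  # point-wise multiplication, then map back to the signal space
    s2_hat = recovery_net(beta2 * x_feature)
    return s1_hat, s2_hat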
Step 4: inputting the aliasing signal into the residual error neural network obtained in step 3 to obtain the two separated source signal waveforms.
The mixed signal x used for training is divided into several equal-length segments x_k of length L, where k denotes the segment index. A fixed number of segments is combined into one mini_batch, which is fed batch by batch into the feature extraction module to output the corresponding features x_feature. The features are then input into the separation coefficient calculation module, which outputs the corresponding separation coefficients β. Finally, the feature of each source is mapped back to the signal space by the waveform recovery module, yielding the two separated waveform signals s*_1 and s*_2.
A self-defined network loss function is used to output the loss value with which the network is optimized by gradient descent. Because the correct correspondence between the two separated signals output by the network and the label source signals cannot be determined in advance, the optimization direction of the network cannot be determined, which is the label ordering problem. The self-defined loss function calculates the mean square error values under the different correspondences and selects the correspondence with the smallest error value as the optimization direction of the network, thereby solving the label ordering problem and accelerating network convergence.
The loss function here is defined as follows:
l_MSE1(θ) = MSE(s*_1, s_1) + MSE(s*_2, s_2)
l_MSE2(θ) = MSE(s*_1, s_2) + MSE(s*_2, s_1)
l_MSE(θ) = min(l_MSE1, l_MSE2)
where θ denotes the optimizable network parameters, l_MSE is the loss value, MSE(·,·) denotes the mean square error, s*_1 and s*_2 respectively represent the first and second separated signals output by the multi-scale stacked time domain residual error neural network, and s_1 and s_2 respectively represent the first and second label source signals. l_MSE1 represents the correspondence in which s*_1 corresponds to label s_1 and s*_2 corresponds to label s_2; l_MSE2 is exactly the reverse correspondence, in which s*_1 corresponds to label s_2 and s*_2 corresponds to label s_1.
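The label-ordering handling described above can be expressed in a few lines of PyTorch; the following sketch (with illustrative names) evaluates both possible pairings and keeps the smaller mean square error:

import torch
import torch.nn.functional as F

def permutation_mse_loss(s1_hat, s2_hat, s1, s2):
    # Mean square error under the direct and the reversed correspondence;
    # the smaller one defines the optimization direction.
    l_mse1 = F.mse_loss(s1_hat, s1) + F.mse_loss(s2_hat, s2)   # s1_hat<->s1, s2_hat<->s2
    l_mse2 = F.mse_loss(s1_hat, s2) + F.mse_loss(s2_hat, s1)   # reversed pairing
    return torch.minimum(l_mse1, l_mse2)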
The parameters θ_feature-mapping, θ_sep and θ_signal-mapping of the three parts are updated by the back-propagated gradient descent algorithm, and the Adam optimization algorithm is used for iterative optimization to improve the training speed:
θ_feature-mapping ← θ_feature-mapping − η·∂l_MSE/∂θ_feature-mapping
θ_sep ← θ_sep − η·∂l_MSE/∂θ_sep
θ_signal-mapping ← θ_signal-mapping − η·∂l_MSE/∂θ_signal-mapping
where η is the learning rate and l_MSE is the loss value.
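A minimal training loop combining the Adam optimizer with the loss sketch above could look as follows; the data loader is assumed to yield (mixture, s1, s2) batches, and the model is assumed to wrap the three modules and return the two separated waveforms:

import torch

def train(model, loader, epochs=200, lr=1e-3, device="cpu"):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device)
    for epoch in range(epochs):
        for mixture, s1, s2 in loader:                     # one mini_batch per step
            mixture, s1, s2 = mixture.to(device), s1.to(device), s2.to(device)
            s1_hat, s2_hat = model(mixture)
            loss = permutation_mse_loss(s1_hat, s2_hat, s1, s2)
            optimizer.zero_grad()
            loss.backward()                                # back-propagated gradients for all three parts
            optimizer.step()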
Step 5: sending the source signal waveforms obtained in step 4 into a demodulator for demodulation, completing the blind separation of the time-frequency aliasing signals.
Example 1
1. Simulation conditions
The hardware used is an NVIDIA GeForce RTX 2080 GPU and an Intel(R) Core(TM) i7-8700K CPU, with PyTorch as the deep learning framework. A multi-scale stacked time domain residual error neural network with an input sequence length seq_len of 80 is designed; the learning rate is set to 0.001 and the batch size mini_batch of each training step is set to 16.
2. Simulation content
The digitally modulated signals used for training are generated and received with the NI USRP-2930 software radio platform. The carrier frequency is 915 MHz, the IQ sampling frequency fs is set to 1 MHz, the shaping filter is a root-raised-cosine filter with a roll-off factor of 0.35, the symbol rate is 500 KBaud, and the signal-to-noise ratio SNR is 5 dB. The mixed signal is obtained by linearly superimposing source signals with different modulation modes, and there are 4×10^6 samples in total for each type of modulated signal. The modulation modes are BPSK and QPSK. 80% of the data set is used as the training set and 20% as the test set; the data are fed into the designed multi-scale stacked time domain residual error neural network and trained for 200 epochs.
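For reference, the simulation settings above can be collected into a small configuration block; the values are copied from the text, and the dictionary itself is only an illustrative convenience.

SIMULATION_CONFIG = {
    "platform": "NI USRP-2930",
    "carrier_frequency_hz": 915e6,
    "iq_sampling_rate_hz": 1e6,
    "pulse_shaping": "root-raised cosine, roll-off 0.35",
    "symbol_rate_baud": 500e3,
    "snr_db": 5,
    "modulations": ["BPSK", "QPSK"],
    "samples_per_modulation": 4_000_000,
    "train_test_split": (0.8, 0.2),
    "seq_len": 80,
    "learning_rate": 0.001,
    "mini_batch": 16,
    "epochs": 200,
}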
3. Simulation result
The data set corresponding to the mixed signal is fed into the multi-scale stacked time domain residual error neural network for a performance test to obtain the separation performance curves of the mixed signal. The curve in FIG. 4-(a) shows the separation effect for the first BPSK signal, and the curve in FIG. 4-(b) shows the separation effect for the second QPSK signal; the horizontal axis is the signal length (in sampling points) and the vertical axis is the signal amplitude. The degree of fit between the network output waveform and the label signal waveform in the figure indicates the separation accuracy.
The invention applies the multi-scale stacked time domain residual error neural network to single-channel time-frequency aliasing signal blind separation for feature extraction. Without channel parameters, it can effectively obtain the effective features corresponding to each source signal from the complex mixed signal, generate the separation coefficient corresponding to each source signal, complete the blind separation of the source signal waveforms, and then demodulate with a demodulator to obtain the bit streams. This alleviates the complex algorithm design of the prior art, reduces the tedium of single-channel blind separation, and improves separation efficiency while guaranteeing high accuracy. Meanwhile, training data containing noise enables the neural network to adapt to the noise environment and improves the robustness of the separation system; compared with traditional methods, it has better separation performance under low signal-to-noise ratio conditions.
The neural network is chosen mainly for three reasons. First, a neural network can fit a complex model through a large number of parameters and thus avoid manual design; in theory, as long as the data set is designed correctly and the parameter count is large enough, a neural network can fit any model and achieve an ideal effect. Second, the multi-scale stacked time domain residual error neural network is well suited to processing sequence data with long-term dependencies. Third, other methods all require more complex data processing, manual design and feature extraction than this method; the multi-scale stacked time domain residual error neural network avoids that drawback.

Claims (5)

1. A single-channel time-frequency aliasing signal blind separation method based on a residual error neural network is characterized by comprising the following steps:
step 1: acquiring a single-channel time-frequency aliasing signal;
step 2: constructing a data set according to the aliasing signals obtained in the step 1, wherein the data set comprises a training set and a test set;
the training set and test set construction process is as follows:
extracting a real part and an imaginary part of the mixed signal as network training characteristics;
respectively extracting respective real parts and imaginary parts of the two paths of source signals as target samples;
constructing the network training characteristics and the target sample into a two-dimensional matrix to form a data set;
step 3: building a residual error neural network, wherein the residual error neural network is a multi-scale stacked time domain residual error neural network and comprises a feature extraction module, a separation coefficient calculation module and a waveform recovery module;
the feature extraction module performs feature extraction on the input one-dimensional mixed signal, mapping it from the signal space into a high-dimensional feature space to obtain the feature representation x_feature corresponding to the signal:
x_feature = h_feature-mapping(x)
where h_feature-mapping is the feature mapping operation, comprising a 1×1 convolution, layer normalization, a PReLU activation function, a depthwise separable convolution and a ReLU activation function;
the separation coefficient calculation module comprises three time domain residual error stacking blocks, wherein each time domain residual error stacking block comprises three multi-scale residual error unit blocks and a one-dimensional deconvolution layer; the separation coefficient β of each source is estimated from the extracted feature x_feature, and the feature is then multiplied point-wise by the separation coefficient to obtain the feature s_sep-feature corresponding to each source:
β = f_sep(x_feature)
s_sep-feature = β ⊙ x_feature
where f_sep denotes the separation coefficient calculation module containing the time domain residual error stacking blocks, and ⊙ denotes point-wise multiplication;
the waveform recovery module completes the mapping from the feature space to the signal space, mapping the feature s_sep-feature corresponding to each source back to the signal space to recover the source signal waveform s*:
s* = h_signal-mapping(s_sep-feature)
where h_signal-mapping is the signal mapping operation, comprising a one-dimensional deconvolution operation, layer normalization and a PReLU activation function;
step 4: creating a self-defined network loss function, and outputting a loss value with which the network is optimized by gradient descent:
l_MSE1(θ) = MSE(s*_1, s_1) + MSE(s*_2, s_2)
l_MSE2(θ) = MSE(s*_1, s_2) + MSE(s*_2, s_1)
l_MSE(θ) = min(l_MSE1, l_MSE2)
where θ denotes the optimizable network parameters, l_MSE is the loss value, MSE(·,·) denotes the mean square error, s*_1 and s*_2 respectively represent the first and second separated signals output by the multi-scale stacked time domain residual error neural network, and s_1 and s_2 respectively represent the first and second label source signals; l_MSE1 represents the correspondence in which s*_1 corresponds to label s_1 and s*_2 corresponds to label s_2; l_MSE2 is exactly the reverse correspondence, in which s*_1 corresponds to label s_2 and s*_2 corresponds to label s_1;
step 5: inputting the aliasing signal into the residual error neural network obtained in step 3, training and testing the residual error neural network with the data set obtained in step 2 to obtain the two separated source signal waveforms, and completing the blind separation of the time-frequency aliasing signal;
which comprises the following processes:
dividing the data into several equal-length sequences of fixed length;
combining a fixed number of sequences into a mini_batch, and inputting the mini_batch into the neural network;
updating the parameters of each part by a back-propagated gradient descent algorithm, and performing iterative optimization with the Adam optimization algorithm;
the network outputting the two separated source signal waveforms;
step 6: sending the source signal waveforms obtained in step 5 into a demodulator for demodulation to obtain the bit data of the separated signals.
2. The single-channel time-frequency aliasing signal blind separation method based on the residual neural network according to claim 1, wherein the aliasing signals in the step 1 are obtained by processing according to the following method:
x(t)=As(t)+n(t)
wherein: x (t) is the mixing signal, A is the mixing matrix, s (t) is the source signal, and n (t) is the channel noise.
3. A separation apparatus according to any one of claims 1 to 2, comprising:
a signal acquisition module: used for acquiring the single-channel time-frequency aliasing signal;
a construction module: used for constructing the data set required by the residual error neural network from the single-channel time-frequency aliasing signal;
a signal separation module: used for obtaining the source signal waveforms with the trained residual error neural network;
a demodulation module: used for demodulating the source signal waveforms.
4. A control apparatus, characterized by comprising:
at least one memory for storing program instructions;
at least one processor for calling program instructions stored in said memory and for executing the steps of the method according to any one of claims 1 to 2 in accordance with the program instructions obtained.
5. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, carries out the steps of the method according to any one of claims 1-2.
CN202110520047.0A 2021-05-13 2021-05-13 Single-channel time-frequency aliasing signal blind separation method based on residual error neural network Active CN113271272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110520047.0A CN113271272B (en) 2021-05-13 2021-05-13 Single-channel time-frequency aliasing signal blind separation method based on residual error neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110520047.0A CN113271272B (en) 2021-05-13 2021-05-13 Single-channel time-frequency aliasing signal blind separation method based on residual error neural network

Publications (2)

Publication Number Publication Date
CN113271272A CN113271272A (en) 2021-08-17
CN113271272B true CN113271272B (en) 2022-09-13

Family

ID=77230830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110520047.0A Active CN113271272B (en) 2021-05-13 2021-05-13 Single-channel time-frequency aliasing signal blind separation method based on residual error neural network

Country Status (1)

Country Link
CN (1) CN113271272B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114330420B (en) * 2021-12-01 2022-08-05 南京航空航天大学 Data-driven radar communication aliasing signal separation method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034070A (en) * 2018-07-27 2018-12-18 河南师范大学 A kind of displacement aliased image blind separating method and device
CN110520875A (en) * 2017-04-27 2019-11-29 日本电信电话株式会社 Learning type signal separation method and learning type signal separator
CN111404849A (en) * 2020-03-20 2020-07-10 北京航空航天大学 OFDM channel estimation and signal detection method based on deep learning
CN111583954A (en) * 2020-05-12 2020-08-25 中国人民解放军国防科技大学 Speaker independent single-channel voice separation method
CN111915007A (en) * 2020-07-29 2020-11-10 厦门大学 Magnetic resonance spectrum noise reduction method based on neural network
CN112001122A (en) * 2020-08-26 2020-11-27 合肥工业大学 Non-contact physiological signal measuring method based on end-to-end generation countermeasure network
CN112017686A (en) * 2020-09-18 2020-12-01 中科极限元(杭州)智能科技股份有限公司 Multichannel voice separation system based on gating recursive fusion depth embedded features
CN112163574A (en) * 2020-11-23 2021-01-01 南京航天工业科技有限公司 ETC interference signal transmitter identification method and system based on deep residual error network
CN112418014A (en) * 2020-11-09 2021-02-26 南京信息工程大学滨江学院 Modulation signal identification method based on wavelet transformation and convolution long-short term memory neural network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7231227B2 (en) * 2004-08-30 2007-06-12 Kyocera Corporation Systems and methods for blind source separation of wireless communication signals
CN106847302B (en) * 2017-02-17 2020-04-14 大连理工大学 Single-channel mixed voice time domain separation method based on convolutional neural network
FR3076410B1 (en) * 2017-12-29 2020-09-11 Avantix BLIND DEMODULATION OR RESEARCH SYSTEM OF THE CHARACTERISTICS OF DIGITAL TELECOMMUNICATION SIGNALS

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110520875A (en) * 2017-04-27 2019-11-29 日本电信电话株式会社 Learning type signal separation method and learning type signal separator
CN109034070A (en) * 2018-07-27 2018-12-18 河南师范大学 A kind of displacement aliased image blind separating method and device
CN111404849A (en) * 2020-03-20 2020-07-10 北京航空航天大学 OFDM channel estimation and signal detection method based on deep learning
CN111583954A (en) * 2020-05-12 2020-08-25 中国人民解放军国防科技大学 Speaker independent single-channel voice separation method
CN111915007A (en) * 2020-07-29 2020-11-10 厦门大学 Magnetic resonance spectrum noise reduction method based on neural network
CN112001122A (en) * 2020-08-26 2020-11-27 合肥工业大学 Non-contact physiological signal measuring method based on end-to-end generation countermeasure network
CN112017686A (en) * 2020-09-18 2020-12-01 中科极限元(杭州)智能科技股份有限公司 Multichannel voice separation system based on gating recursive fusion depth embedded features
CN112418014A (en) * 2020-11-09 2021-02-26 南京信息工程大学滨江学院 Modulation signal identification method based on wavelet transformation and convolution long-short term memory neural network
CN112163574A (en) * 2020-11-23 2021-01-01 南京航天工业科技有限公司 ETC interference signal transmitter identification method and system based on deep residual error network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"DEEP NEURAL NETWORKS FOR SINGLE CHANNEL SOURCE SEPARATION";Emad M. Grais, Mehmet Umut Sen, Hakan Erdogan;《2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP)》;20140509;第3734-3738页 *
"Learning lightweight Multi-Scale Feedback Residual network for single image super-resolution";Wenjie Xu, Huihui Song ∗, Kaihua Zhang, Qingshan Liu, Jia Liu,;《Comput. Vis. Image Underst. 197-198: 103005 (2020)》;20200813;第1-8页 *
"基于Stacked-TCN的空间混叠信号单通道盲源分离方法";赵孟晨;《系统工程与电子技术》;20210302;第1-5节 *
"基于残差神经网络的通信混合信号识别";董聪,张传武,高勇;《无线电工程》;20200828;第727-731页 *
Jiai He•Wei Chen."Single Channel Blind Source Separation Under Deep Recurrent Neural Network".《Springer Science+Business Media》.2020, *

Also Published As

Publication number Publication date
CN113271272A (en) 2021-08-17

Similar Documents

Publication Publication Date Title
CN113014524B (en) Digital signal modulation identification method based on deep learning
CN113472706A (en) MIMO-OFDM system channel estimation method based on deep neural network
CN114268388B (en) Channel estimation method based on improved GAN network in large-scale MIMO
CN111192211A (en) Multi-noise type blind denoising method based on single deep neural network
CN113271272B (en) Single-channel time-frequency aliasing signal blind separation method based on residual error neural network
CN113259283B (en) Single-channel time-frequency aliasing signal blind separation method based on recurrent neural network
CN115250216A (en) Underwater sound OFDM combined channel estimation and signal detection method based on deep learning
CN114157539A (en) Data-aware dual-drive modulation intelligent identification method
CN114745233B (en) Joint channel estimation method and device based on pilot frequency design
CN113726711B (en) OFDM receiving method and device, and channel estimation model training method and device
Nithya et al. Pilot based channel estimation of OFDM systems using deep learning techniques
CN114912486A (en) Modulation mode intelligent identification method based on lightweight network
CN114124168A (en) MIMO-NOMA system signal detection method and system based on deep learning
CN114826832B (en) Channel estimation method, neural network training method, device and equipment
CN111404856B (en) High-order modulation signal demodulation method based on deep learning network
CN104270328B (en) A kind of signal to noise ratio real-time estimation method
Li et al. Modulation recognition network of multi-scale analysis with deep threshold noise elimination
CN113364535B (en) Method, system, device and storage medium for mathematical form multiple-input multiple-output detection
CN114584441A (en) Digital signal modulation identification method based on deep learning
CN113259289B (en) Single-channel aliasing signal modulation mode identification method based on residual error neural network
CN112232120B (en) Radar radiation source signal classification system and method based on software radio
CN117060952A (en) Signal detection method and device in MIMO system
CN113037409B (en) Large-scale MIMO system signal detection method based on deep learning
CN114118145B (en) Method and device for reducing noise of modulation signal, storage medium and equipment
Aer et al. Modulation Recognition of MIMO Systems Based on Dimensional Interactive Lightweight Network.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant