CN112491442A - Self-interference elimination method and device - Google Patents

Self-interference elimination method and device

Info

Publication number
CN112491442A
CN112491442A (application CN202011289525.3A; granted as CN112491442B)
Authority
CN
China
Prior art keywords
layer
convolution
imaginary part
output
real part
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011289525.3A
Other languages
Chinese (zh)
Other versions
CN112491442B (en)
Inventor
唐燕群
魏玺章
伍哲舜
黄海风
赖涛
王青松
王小青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
National Sun Yat Sen University
Original Assignee
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Sun Yat Sen University
Priority to CN202011289525.3A
Publication of CN112491442A
Application granted
Publication of CN112491442B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B 1/00: Details of transmission systems, not covered by a single one of groups H04B 3/00 - H04B 13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B 1/38: Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B 1/40: Circuits
    • H04B 1/50: Circuits using different frequencies for the two directions of communication
    • H04B 1/52: Hybrid arrangements, i.e. arrangements for transition from single-path two-direction transmission to single-direction transmission on each of two paths or vice versa
    • H04B 1/525: Hybrid arrangements with means for reducing leakage of transmitter signal into the receiver
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08: Learning methods


Abstract

The invention discloses a self-interference cancellation method and device. The method comprises: obtaining a baseband signal x(t) at time t and feeding it into a training model consisting of an input layer and a convolutional layer to obtain an output signal, where the training model either introduces a three-dimensional tensor at the input layer or arranges multiple convolutional-layer structures at the convolutional layer; inputting the output signal into an LSTM layer, which processes the time-ordered sequence and passes its result to a fully connected layer, the fully connected layer performing a data-dimension conversion on that result to obtain a dimension-conversion result; and feeding the dimension-conversion result to an output layer, which outputs two neurons. By introducing a three-dimensional tensor at the input layer or arranging multiple convolutional-layer structures at the convolutional layer, the invention designs two network structures for reconstructing self-interference signals. They fully exploit the local-perception and weight-sharing advantages of convolutional neural networks to learn more abstract low-dimensional features from high-dimensional features, improving the self-interference cancellation effect.

Description

Self-interference elimination method and device
Technical Field
The invention relates to the technical field of full-duplex communication, in particular to a self-interference elimination method and device.
Background
To handle the nonlinear effects that radio-frequency devices introduce into the self-interference signal, existing approaches build a mathematical model from relevant prior knowledge to characterize the nonlinearity, estimate the model parameters through channel estimation, and then reconstruct the self-interference signal.
Because such methods depend heavily on prior knowledge, a model mismatch severely degrades the cancellation performance, and manually designing a model to estimate the relevant parameters is inefficient. A fully connected neural network relies only on the capabilities of the multilayer perceptron and nonlinear activation functions; it reduces the feature-extraction effort and approximates the target function to a certain degree, but it cannot exploit structure specific to high-dimensional data such as space-frequency correlation and time correlation.
Disclosure of Invention
The invention aims to provide a self-interference cancellation method and device. By introducing a three-dimensional tensor at the input layer or arranging multiple convolutional-layer structures at the convolutional layer, two network structures for reconstructing self-interference signals are designed. They fully exploit the local-perception and weight-sharing advantages of convolutional neural networks and can capture the space-frequency characteristics of the data, so that more abstract low-dimensional features are learned from high-dimensional features and the self-interference cancellation effect is improved.
To achieve the above object, an embodiment of the present invention provides a self-interference cancellation method, including:
obtaining a baseband signal x(t) at time t and feeding it into a training model consisting of an input layer and a convolutional layer to obtain an output signal, wherein the training model either introduces a three-dimensional tensor at the input layer or arranges multiple convolutional-layer structures at the convolutional layer;
inputting the output signal into an LSTM layer, wherein the LSTM layer processes the time-ordered sequence and outputs its result to a fully connected layer, and the fully connected layer performs a data-dimension conversion on that result to obtain a dimension-conversion result;
and inputting the dimension-conversion result to an output layer, which outputs two neurons.
Preferably, introducing a three-dimensional tensor at the input layer includes:
dividing the baseband signal x(t) into a real part and an imaginary part;
constructing a three-dimensional tensor from the real part and the imaginary part, the tensor's remaining dimensions being the sample size of the original data and the memory length;
and inputting the three-dimensional tensor into the convolutional layer to complete the training model and obtain an output signal.
Preferably, for the baseband signal x(t), the input layer is divided into a real-part input layer and an imaginary-part input layer;
and the real-part input layer and the imaginary-part input layer are each fed into convolutional layers that perform complex convolution, yielding a real-part complex convolutional layer and an imaginary-part complex convolutional layer, which are then cascaded to complete the training model and obtain an output signal.
Preferably, feeding the real part and the imaginary part into the convolutional layers for complex convolution to obtain a real-part convolutional layer and an imaginary-part convolutional layer, and then cascading them to complete the training model and obtain an output signal, further includes:
the complex convolution is as follows:
(x + iy) * (A + iB) = (x * A - y * B) + i(x * B + y * A), where * denotes the convolution operation,
wherein x and y respectively represent the real part and the imaginary part of the sample, and A and B represent the real and imaginary parts of the complex convolution kernel.
An embodiment of the present invention further provides a self-interference cancellation apparatus, which applies the self-interference cancellation method of any one of the above embodiments and includes:
a convolution module, which obtains a baseband signal x(t) at time t and feeds it into a training model consisting of an input layer and a convolutional layer to obtain an output signal; the convolution module comprises a first submodule or a second submodule, the first submodule introducing a three-dimensional tensor at the input layer and the second submodule arranging multiple convolutional-layer structures at the convolutional layer;
a processing module, which inputs the output signal into an LSTM layer; the LSTM layer processes the time-ordered sequence and outputs its result to a fully connected layer, and the fully connected layer performs a data-dimension conversion on that result to obtain a dimension-conversion result;
and an output module, which inputs the dimension-conversion result to an output layer; the output layer outputs two neurons.
Preferably, the first sub-module:
divides the baseband signal x(t) into a real part and an imaginary part;
constructs a three-dimensional tensor from the real part and the imaginary part, the tensor's remaining dimensions being the sample size of the original data and the memory length;
and inputs the three-dimensional tensor into the convolutional layer to complete the training model and obtain an output signal.
Preferably, in the second sub-module:
the input layer for the baseband signal x(t) is divided into a real-part input layer and an imaginary-part input layer;
and the real-part input layer and the imaginary-part input layer are each fed into convolutional layers that perform complex convolution, yielding a real-part complex convolutional layer and an imaginary-part complex convolutional layer, which are then cascaded to complete the training model and obtain an output signal.
Preferably, in the second sub-module:
the complex convolution is as follows:
(x + iy) * (A + iB) = (x * A - y * B) + i(x * B + y * A), where * denotes the convolution operation,
wherein x and y respectively represent the real part and the imaginary part of the sample, and A and B represent the real and imaginary parts of the complex convolution kernel.
In the embodiment of the invention, two network structures for reconstructing self-interference signals are designed by introducing a three-dimensional tensor at the input layer or arranging multiple convolutional-layer structures at the convolutional layer. These structures fully exploit the local-perception and weight-sharing advantages of convolutional neural networks and can capture the space-frequency characteristics of the data, so that more abstract low-dimensional features are learned from high-dimensional features and the self-interference cancellation effect is improved.
Drawings
To illustrate the technical solution of the present invention more clearly, the drawings needed in the embodiments are briefly described below. The following drawings show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a self-interference cancellation method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a self-interference cancellation method according to another embodiment of the present invention;
FIG. 3 is a diagram of a network architecture provided by yet another embodiment of the present invention;
fig. 4 is a flowchart illustrating a self-interference cancellation method according to an embodiment of the present invention;
FIG. 5 is a diagram of a network architecture provided by another embodiment of the present invention;
FIG. 6 is a power spectral density plot provided by yet another embodiment of the present invention;
FIG. 7 is a power spectral density plot provided by an embodiment of the present invention;
FIG. 8 is a power spectral density plot provided by another embodiment of the present invention;
FIG. 9 is a power spectral density plot provided by yet another embodiment of the present invention;
fig. 10 is a schematic structural diagram of a self-interference cancellation apparatus according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a self-interference cancellation apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be understood that the step numbers used herein are for convenience of description only and are not intended as limitations on the order in which the steps are performed.
It is to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The terms "comprises" and "comprising" indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term "and/or" refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, an embodiment of the present invention provides a self-interference cancellation method, including the following steps:
s101, obtaining a baseband signal x (t) at the moment t, and leading the baseband signal x (t) into training models of an input layer and a convolutional layer to obtain an output signal, wherein the training models comprise: introducing a three-dimensional tensor into the input layer or arranging a plurality of convolution layer structures on the convolution layers;
in a specific embodiment, simultaneous co-frequency full duplex communication refers to a set of communication devices or apparatuses simultaneously transmitting and receiving electromagnetic signals using the same time and frequency resources in the same medium resource. Meanwhile, the signal transmitted by the co-frequency full duplex communication transmitter and transmitted to the receiving branch of the receiver is called a self-interference signal, and the high-power co-frequency transmitting signal can influence the weak signal receiving capability of the receiver.
The transmitted and received digital baseband signal x(t) is input into a neural network for training to obtain a new network, as follows:
referring to fig. 2 and fig. 3, in an embodiment, a baseband signal x (t) is obtained and divided into a real part and an imaginary part, a three-dimensional tensor is constructed according to the real part and the imaginary part, the three-dimensional tensor further includes a sample size and a memory length of original data, the three-dimensional tensor is input to the convolutional layer, the training model is completed, and an output signal is obtained.
CLDNN is a CNN + LSTM + DNN neural network structure, in which the CNN reduces frequency variation, the LSTM is well suited to modeling time series, and the DNN nonlinearly maps features into an abstract space where they are easier to separate. A conventional CLDNN network is a cascade of a convolutional neural network (CNN), a long short-term memory network (LSTM), and a fully connected network (DNN). The design of the real/imaginary two-dimensional CLDNN (2D-CLDNN) borrows the multi-channel image scheme from image processing and introduces a three-dimensional tensor into the algorithm, whose dimensions are the sample size of the original data, the memory length M + L (M and L denote the device's time correlation and the multipath length, respectively), and the real/imaginary part. The 2D-CLDNN does not need separate input layers for the real and imaginary parts of the baseband signal, which greatly reduces the number of network parameters to be estimated and markedly speeds up each training epoch. Moreover, the convolutional layer can make full use of both the real and imaginary parts of the complex baseband signal, and hence of its phase information, bringing the model closer to the nature of the self-interference signal and allowing a better approximation of it. Feeding the convolutional layer's output into the LSTM layer then exploits the LSTM's strength at sequence tasks and captures the time correlation of the self-interference signal.
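The CNN, LSTM, fully connected, and two-neuron output stages described above can be sketched at the shape level as a minimal NumPy forward pass with random weights. Every layer size below is an assumption chosen for illustration; none of the values come from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, w):
    """Valid 1-D convolution over time: x is (T, C_in), w is (K, C_in, C_out)."""
    K = w.shape[0]
    return np.stack([np.einsum('kc,kco->o', x[t:t + K], w)
                     for t in range(x.shape[0] - K + 1)])

def lstm(x, Wx, Wh, b):
    """Single-layer LSTM over a (T, D) sequence; returns the last hidden state."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b          # gates packed as (4*H,)
        i, f, g, o = np.split(z, 4)
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
    return h

x = rng.standard_normal((13, 2))                       # one sample: (M+L, real/imag)
h = conv1d(x, rng.standard_normal((3, 2, 8)))          # convolutional layer -> (11, 8)
h = lstm(h, rng.standard_normal((8, 64)),
         rng.standard_normal((16, 64)), np.zeros(64))  # LSTM layer, hidden size 16
h = np.tanh(h @ rng.standard_normal((16, 32)))         # fully connected layer
out = h @ rng.standard_normal((32, 2))                 # two output neurons (real, imag)
print(out.shape)  # (2,)
```

The two output neurons correspond to the real and imaginary parts of the reconstructed self-interference sample, mirroring the output layer described in the text.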
Referring to fig. 4 and fig. 5, in another embodiment, a baseband signal x(t) is obtained, the input layer is divided into a real-part input layer and an imaginary-part input layer, the two are each fed into convolutional layers that perform complex convolution to obtain a real-part complex convolutional layer and an imaginary-part complex convolutional layer, and the two are cascaded to complete the training model and obtain an output signal.
Specifically, on top of a conventional deep neural network, a complex convolutional layer structure is added that performs a complex convolution on the outputs of the two input layers for the real part and the imaginary part:
(x + iy) * (A + iB) = (x * A - y * B) + i(x * B + y * A), where * denotes the convolution operation,
wherein x and y respectively represent the real part and the imaginary part of the sample, and A and B represent the real and imaginary parts of the complex convolution kernel.
The complex-convolution CLDNN (CC-CLDNN) uses a complex convolution kernel to perform, on the real and imaginary parts of each input sample, the several real convolutions that emulate complex multiplication. It likewise fully accounts for the phase information in the complex baseband signal and therefore describes and reconstructs the self-interference signal better.
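The complex convolution can be checked numerically: performing the four real convolutions from the formula above and recombining them should match a direct convolution of the complex sequences. This is a minimal sketch with arbitrary signal and kernel lengths; the names x, y, A, B follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(16), rng.standard_normal(16)  # real/imag parts of the sample
A, B = rng.standard_normal(5), rng.standard_normal(5)    # real/imag parts of the kernel

# Four real convolutions, recombined per (x+iy)*(A+iB) = (x*A - y*B) + i(x*B + y*A).
real_out = np.convolve(x, A) - np.convolve(y, B)
imag_out = np.convolve(x, B) + np.convolve(y, A)

# Reference: one direct complex-valued convolution.
direct = np.convolve(x + 1j * y, A + 1j * B)
assert np.allclose(real_out, direct.real)
assert np.allclose(imag_out, direct.imag)
```

This is exactly the decomposition a complex convolutional layer implements with real-valued hardware: two real-part and two imaginary-part convolutions whose outputs are combined and cascaded.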
Referring to Table 1, for the two embodiments above, the algorithms were simulated in software on a data set obtained from a measured channel, taking into account the effect of different channel conditions on the memory length M + L (this parameter affects the dimension of the network's input tensor and the number of neurons in its input layer). Three settings, M + L = 8, 13, and 20, were simulated, and the nonlinear interference cancellation of the conventional network structures (a real-valued neural network, RVNN, and a real/imaginary-split complex neural network, CVNN-split) was compared with that of the embodiments of the invention (2D-CLDNN and CC-CLDNN):
TABLE 1 comparison of the nonlinear interference effects of each network
[Table 1 is provided as an image in the original document and is not reproduced here.]
Referring to FIG. 6 and Table 1, with the memory length set to 13, the nonlinear self-interference cancellation of 2D-CLDNN and CC-CLDNN reaches 7.7 dB and 7.89 dB, respectively. The other three methods (the polynomial model, the real-valued neural network, and the real/imaginary-split complex neural network) all perform worse than the CLDNNs, and with the memory length set to 8 the cancellation of the real-valued RVNN drops sharply, reaching only 5.94 dB. After self-interference cancellation, the two CLDNN networks reach residual levels of -88.49 dBm and -88.3 dBm, respectively, very close to the noise floor of -90.8 dBm.
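As a quick check on the reported levels, dBm values subtract directly to give a power ratio in dB, so the gap between each residual and the noise floor can be computed as:

```python
# Residual levels and noise floor as reported in the text (dBm).
noise_floor_dbm = -90.8
residual_2d_cldnn_dbm = -88.49
residual_cc_cldnn_dbm = -88.30

# Subtracting dBm values yields the remaining gap above the noise floor in dB.
gap_2d = residual_2d_cldnn_dbm - noise_floor_dbm
gap_cc = residual_cc_cldnn_dbm - noise_floor_dbm
print(round(gap_2d, 2), round(gap_cc, 2))  # 2.31 2.5
```

Both residuals thus sit within about 2.5 dB of the noise floor, which is the sense in which the text calls them "very close".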
Referring to fig. 7, with the memory length M + L set to 13, the nonlinear interference cancellation of the two CLDNN networks approaches 8 dB after 60 training epochs, though with slight fluctuation on the test set. The real-valued RVNN reaches slightly above 6 dB but is relatively stable on the test set, while the cancellation of the real/imaginary-split complex neural network CVNN-split is slightly higher than the RVNN's but still below both CLDNN networks. The CLDNNs' fluctuation on the test set stems from their comparatively complex structure, which can cause some overfitting; this can be overcome by further tuning the number of training epochs, the learning rate, and the batch size.
Referring to fig. 8, with the memory length M + L set to 8, the self-interference cancellation of every network degrades to varying degrees, but the two CLDNN networks still hold the cancellation at 7 dB, whereas the real-valued neural network only approaches 6 dB, a more severe degradation.
Referring to fig. 9, with the memory length M + L set to 20, the cancellation of the various networks is close to its behavior at a memory length of 13, but the test-set performance of the real/imaginary-split complex neural network deteriorates sharply.
S102, inputting the output signal into an LSTM layer, wherein the LSTM layer processes the time-ordered sequence and outputs its result to a fully connected layer, and the fully connected layer performs a data-dimension conversion on that result to obtain a dimension-conversion result;
in a specific embodiment, an output signal of the convolutional layer is received, wherein the output signal is introduced into an LSTM layer after feature extraction and dimension reduction, and the LSTM layer is connected with a full connection layer and used for receiving an output result of the LSTM layer and performing data dimension conversion on the output result to obtain a dimension conversion result. The fully-connected layer and the convolutional layer are equivalent, and the output result is converted into the output result of the convolutional layer in the fully-connected layer, so that the effect of training the model is achieved.
S103, inputting the dimension-conversion result to the output layer, which outputs two neurons.
The fully connected layer is connected to two output layers: a real-part output layer and an imaginary-part output layer.
In the embodiment of the invention, two network structures for reconstructing self-interference signals are designed by introducing a three-dimensional tensor at the input layer or arranging multiple convolutional-layer structures at the convolutional layer. They fully exploit the local-perception and weight-sharing advantages of convolutional neural networks and can capture the space-frequency characteristics of the data, so that more abstract low-dimensional features are learned from high-dimensional features and the self-interference cancellation effect is improved.
Referring to fig. 10 and fig. 11, an embodiment of the present invention provides a self-interference cancellation apparatus, which applies the self-interference cancellation method of any of the foregoing embodiments and includes:
a convolution module, which obtains a baseband signal x(t) at time t and feeds it into a training model consisting of an input layer and a convolutional layer to obtain an output signal; the convolution module comprises a first submodule or a second submodule, the first submodule introducing a three-dimensional tensor at the input layer and the second submodule arranging multiple convolutional-layer structures at the convolutional layer;
in a specific embodiment, simultaneous co-frequency full duplex communication refers to a set of communication devices or apparatuses simultaneously transmitting and receiving electromagnetic signals using the same time and frequency resources in the same medium resource. Meanwhile, the signal transmitted by the co-frequency full duplex communication transmitter and transmitted to the receiving branch of the receiver is called a self-interference signal, and the high-power co-frequency transmitting signal can influence the weak signal receiving capability of the receiver.
The transmitted and received digital baseband signal x(t) is input into a neural network for training to obtain a new network. The convolution module comprises the first sub-module 11 or the second sub-module 12:
referring to fig. 2 and fig. 3, in an embodiment, in the first sub-module 11, a baseband signal x (t) is obtained and divided into a real part and an imaginary part, a three-dimensional tensor is constructed according to the real part and the imaginary part, the three-dimensional tensor further includes a sample size and a memory length of original data, the three-dimensional tensor is input to the convolutional layer, the training model is completed, and an output signal is obtained.
CLDNN is a CNN + LSTM + DNN neural network structure, in which the CNN reduces frequency variation, the LSTM is well suited to modeling time series, and the DNN nonlinearly maps features into an abstract space where they are easier to separate. A conventional CLDNN network is a cascade of a convolutional neural network (CNN), a long short-term memory network (LSTM), and a fully connected network (DNN). The design of the real/imaginary two-dimensional CLDNN (2D-CLDNN) borrows the multi-channel image scheme from image processing and introduces a three-dimensional tensor into the algorithm, whose dimensions are the sample size of the original data, the memory length M + L (M and L denote the device's time correlation and the multipath length, respectively), and the real/imaginary part. The 2D-CLDNN does not need separate input layers for the real and imaginary parts of the baseband signal, which greatly reduces the number of network parameters to be estimated and markedly speeds up each training epoch. Moreover, the convolutional layer can make full use of both the real and imaginary parts of the complex baseband signal, and hence of its phase information, bringing the model closer to the nature of the self-interference signal and allowing a better approximation of it. Feeding the convolutional layer's output into the LSTM layer then exploits the LSTM's strength at sequence tasks and captures the time correlation of the self-interference signal.
Referring to fig. 4 and 5, in another embodiment, in the second sub-module 12, a baseband signal x(t) is obtained, the input layer is divided into a real-part input layer and an imaginary-part input layer, the two are each fed into convolutional layers that perform complex convolution to obtain a real-part complex convolutional layer and an imaginary-part complex convolutional layer, and the two are cascaded to complete the training model and obtain an output signal.
Specifically, on top of a conventional deep neural network, a complex convolutional layer structure is added that performs a complex convolution on the outputs of the two input layers for the real part and the imaginary part:
(x + iy) * (A + iB) = (x * A - y * B) + i(x * B + y * A), where * denotes the convolution operation,
wherein x and y respectively represent the real part and the imaginary part of the sample, and A and B represent the real and imaginary parts of the complex convolution kernel.
The complex-convolution CLDNN (CC-CLDNN) uses a complex convolution kernel to perform, on the real and imaginary parts of each input sample, the several real convolutions that emulate complex multiplication. It likewise fully accounts for the phase information in the complex baseband signal and therefore describes and reconstructs the self-interference signal better.
Referring to table 1, on the basis of the above two embodiments, a software simulation of the algorithm is performed on a data set obtained from a measured channel, comprehensively considering the influence of different channel conditions through the memory length M + L (the memory-length parameter affects the dimension of the network input tensor and the number of neurons in the input layer of the network structure). Three values of M + L (8, 13 and 20) are selected for the simulation, and the nonlinear self-interference cancellation performance of the conventional network structures (a real-valued neural network, RVNN, and a real/imaginary-part-split complex-valued neural network, CVNN-split) is compared with that of the embodiments of the invention (2D-CLDNN and CC-CLDNN):
Table 1. Comparison of the nonlinear self-interference cancellation performance of each network
[Table 1 is rendered as an image in the original filing; it lists the cancellation results, in dB, of RVNN, CVNN-split, 2D-CLDNN and CC-CLDNN at memory lengths 8, 13 and 20.]
Referring to FIG. 6 and Table 1, when the memory length is set to 13, the nonlinear self-interference cancellation of 2D-CLDNN and CC-CLDNN reaches 7.7 dB and 7.89 dB, respectively. The other three methods (the polynomial model, the real-valued neural network and the real/imaginary-part-split complex-valued neural network) all perform worse than the CLDNN networks; when the memory length is set to 8, the cancellation achieved by the real-valued neural network RVNN drops sharply to only 5.94 dB. After self-interference cancellation, the two CLDNN networks reach residual levels of -88.49 dBm and -88.3 dBm, respectively, very close to the noise floor of -90.8 dBm.
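For orientation, the cancellation figures quoted above are power ratios expressed in decibels. A toy computation of this metric (all signal values below are synthetic stand-ins, not the patent's measured data):

```python
import numpy as np

def cancellation_db(before, after):
    """Cancellation gain in dB: ratio of mean signal power before
    and after the nonlinear canceller."""
    return 10.0 * np.log10(np.mean(np.abs(before) ** 2) / np.mean(np.abs(after) ** 2))

# Toy residuals: the canceller removes most of the self-interference power.
rng = np.random.default_rng(2)
si = rng.standard_normal(10000)                        # residual SI before nonlinear stage
after = 0.17 * si + 0.01 * rng.standard_normal(10000)  # residual after (synthetic)
print(round(cancellation_db(si, after), 1))            # roughly 15 dB for these factors
```

The same formula, applied to measured residual powers, yields the 7-8 dB figures discussed for the nonlinear stage.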
Referring to fig. 7, when the memory length M + L is set to 13, the nonlinear interference cancellation of the two CLDNN networks approaches 8 dB after 60 training epochs, though it shows slight fluctuation on the test set. The real-valued neural network RVNN reaches slightly above 6 dB but is comparatively stable on the test set, while the cancellation of the real/imaginary-part-split complex-valued neural network CVNN-split is slightly higher than that of RVNN but still below both CLDNN networks. The fluctuation of the CLDNN networks on the test set stems from the relative complexity of the network structure, which can cause a certain degree of overfitting; this can be overcome by further tuning the number of training epochs, the learning rate, and the batch size.
Referring to fig. 8, when the memory length M + L is set to 8, the self-interference cancellation of every network deteriorates to a different degree, but the two CLDNN networks still maintain about 7 dB of cancellation, whereas the real-valued neural network can only approach 6 dB, its cancellation degrading more severely in this case.
Referring to fig. 9, when the memory length M + L is set to 20, the cancellation of the various networks is close to their behavior at memory length 13, but the performance of the real/imaginary-part-split complex-valued neural network on the test set deteriorates greatly.
The processing module 121 is configured to input the output signal to an LSTM layer, where the LSTM layer is configured to process time-ordered sequences and output its result to a fully-connected layer, and the fully-connected layer is configured to perform data dimension conversion on that result to obtain a dimension conversion result;
in a specific embodiment, the output signal of the convolutional layer, obtained after feature extraction and dimensionality reduction, is fed into the LSTM layer; the LSTM layer is connected to a fully-connected layer, which receives the output of the LSTM layer and performs data dimension conversion on it to obtain the dimension conversion result. In this role the fully-connected layer acts as the counterpart of the convolutional layer, converting the LSTM output back to the dimensionality of the convolutional-layer output, thereby completing the training of the model.
The output module 131 inputs the dimension conversion result to an output layer, the output layer comprising two output neurons.
The fully-connected layer is connected to two output layers, namely a real-part output layer and an imaginary-part output layer.
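The flow from the convolutional-layer features through the LSTM layer, the fully-connected layer, and the two-neuron output layer can be sketched with a minimal NumPy forward pass (all layer sizes and weights here are illustrative assumptions, not values from the patent):

```python
import numpy as np

def lstm_forward(seq, Wx, Wh, b):
    """Minimal single-layer LSTM over a (T, F) sequence; returns the last hidden state."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x_t in seq:
        z = Wx.T @ x_t + Wh.T @ h + b           # all four gate pre-activations, (4H,)
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)              # cell state carries the time correlation
        h = o * np.tanh(c)
    return h

rng = np.random.default_rng(3)
T, F, H = 13, 8, 16                              # time steps (M + L), conv features, hidden units
seq = rng.standard_normal((T, F))                # stand-in for the convolutional-layer output
Wx = 0.1 * rng.standard_normal((F, 4 * H))
Wh = 0.1 * rng.standard_normal((H, 4 * H))
b = np.zeros(4 * H)

h_last = lstm_forward(seq, Wx, Wh, b)            # LSTM-layer output
W_fc = 0.1 * rng.standard_normal((H, 2))         # fully-connected layer: dimension conversion
out = h_last @ W_fc                              # two output neurons: real and imaginary part
print(out.shape)  # (2,)
```

The final matrix multiplication is the "data dimension conversion" of the fully-connected layer, collapsing the LSTM state onto the real-part and imaginary-part output neurons.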
The embodiments of the invention have the advantage that, by introducing a three-dimensional tensor at the input layer or by providing complex convolutional layers at the convolutional stage, two network structures for reconstructing the self-interference signal are designed. They make full use of the local-perception and weight-sharing properties of the convolutional neural network and can capture the space-frequency characteristics of the data, so that more abstract low-dimensional features can be learned from high-dimensional features and the self-interference cancellation is improved.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (8)

1. A method for self-interference cancellation, comprising:
obtaining a baseband signal x(t) at a time t, and feeding the baseband signal x(t) into a training model of an input layer and a convolutional layer to obtain an output signal, wherein the training model comprises: introducing a three-dimensional tensor at the input layer, or providing a complex convolutional layer structure in the convolutional layer;
inputting the output signal into an LSTM layer, wherein the LSTM layer is configured to process time-ordered sequences and output a result to a fully-connected layer, and the fully-connected layer is configured to perform data dimension conversion on the output result to obtain a dimension conversion result;
and inputting the dimension conversion result to an output layer, the output layer comprising two output neurons.
2. The method of claim 1, wherein the introducing a three-dimensional tensor in the input layer comprises:
dividing the baseband signal x(t) into a real part and an imaginary part;
constructing a three-dimensional tensor from the real part and the imaginary part, wherein the three-dimensional tensor further comprises the sample size and the memory length of the original data;
and inputting the three-dimensional tensor into the convolutional layer to complete the training model and obtain the output signal.
3. The method of claim 1, wherein the providing a complex convolutional layer structure in the convolutional layer comprises:
dividing the input layer for the baseband signal x(t) into a real-part input layer and an imaginary-part input layer;
and feeding the real-part input layer and the imaginary-part input layer respectively into the convolutional layer for complex convolution to obtain a real-part complex convolutional layer and an imaginary-part complex convolutional layer, and then cascading them to complete the training model and obtain the output signal.
4. The method according to claim 3, wherein the feeding of the real part and the imaginary part into the convolutional layer for complex convolution to obtain a real-part complex convolutional layer and an imaginary-part complex convolutional layer, followed by cascading to complete the training model and obtain the output signal, further comprises:
the complex convolution is as follows:
(x + jy) * (A + jB) = (x * A - y * B) + j(x * B + y * A)
wherein * denotes convolution, x and y respectively represent the real part and the imaginary part of the sample, and A and B respectively represent the real part and the imaginary part of the complex convolution kernel.
5. A self-interference cancellation apparatus, comprising:
the convolution module acquires a baseband signal x(t) at a time t and feeds the baseband signal x(t) into a training model of an input layer and a convolutional layer to obtain an output signal; the convolution module comprises a first sub-module or a second sub-module, the first sub-module introducing a three-dimensional tensor at the input layer, and the second sub-module providing a complex convolutional layer structure in the convolutional layer;
the processing module is configured to input the output signal into an LSTM layer, the LSTM layer being configured to process time-ordered sequences and output a result to a fully-connected layer, and the fully-connected layer being configured to perform data dimension conversion on the output result to obtain a dimension conversion result;
and the output module is configured to input the dimension conversion result to an output layer, the output layer comprising two output neurons.
6. The self-interference cancellation apparatus of claim 5, wherein the first sub-module comprises:
dividing the baseband signal x(t) into a real part and an imaginary part;
constructing a three-dimensional tensor from the real part and the imaginary part, wherein the three-dimensional tensor further comprises the sample size and the memory length of the original data;
and inputting the three-dimensional tensor into the convolutional layer to complete the training model and obtain the output signal.
7. The self-interference cancellation apparatus of claim 5, wherein the second sub-module comprises:
dividing the input layer for the baseband signal x(t) into a real-part input layer and an imaginary-part input layer;
and feeding the real-part input layer and the imaginary-part input layer respectively into the convolutional layer for complex convolution to obtain a real-part complex convolutional layer and an imaginary-part complex convolutional layer, and then cascading them to complete the training model and obtain the output signal.
8. The self-interference cancellation apparatus of claim 7, wherein the second sub-module further comprises:
the complex convolution is as follows:
(x + jy) * (A + jB) = (x * A - y * B) + j(x * B + y * A)
wherein * denotes convolution, x and y respectively represent the real part and the imaginary part of the sample, and A and B respectively represent the real part and the imaginary part of the complex convolution kernel.
CN202011289525.3A 2020-11-17 2020-11-17 Self-interference elimination method and device Active CN112491442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011289525.3A CN112491442B (en) 2020-11-17 2020-11-17 Self-interference elimination method and device

Publications (2)

Publication Number Publication Date
CN112491442A true CN112491442A (en) 2021-03-12
CN112491442B CN112491442B (en) 2021-12-28

Family

ID=74931180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011289525.3A Active CN112491442B (en) 2020-11-17 2020-11-17 Self-interference elimination method and device

Country Status (1)

Country Link
CN (1) CN112491442B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8625566B1 (en) * 2009-12-22 2014-01-07 Qualcomm Incorporated Detection of transmission in collocated wireless devices
US20170278513A1 (en) * 2016-03-23 2017-09-28 Google Inc. Adaptive audio enhancement for multichannel speech recognition
CN108509911A (en) * 2018-04-03 2018-09-07 电子科技大学 Interference signal recognition methods based on convolutional neural networks
CN108599809A (en) * 2018-03-14 2018-09-28 中国信息通信研究院 Full duplex self-interference signal number removing method and device
CN109921822A (en) * 2019-02-19 2019-06-21 哈尔滨工程大学 The method that non-linear, digital self-interference based on deep learning is eliminated
CN109995449A (en) * 2019-03-15 2019-07-09 北京邮电大学 A kind of millimeter-wave signal detection method based on deep learning
CN110996343A (en) * 2019-12-18 2020-04-10 中国人民解放军陆军工程大学 Interference recognition model based on deep convolutional neural network and intelligent recognition algorithm
CN111277312A (en) * 2020-02-26 2020-06-12 电子科技大学 Fixed subarray space-based millimeter wave beam forming method based on deep complex network
WO2020154972A1 (en) * 2019-01-30 2020-08-06 Baidu.Com Times Technology (Beijing) Co., Ltd. Lidar localization using 3d cnn network for solution inference in autonomous driving vehicles
CN111638488A (en) * 2020-04-10 2020-09-08 西安电子科技大学 Radar interference signal identification method based on LSTM network
CN111769844A (en) * 2020-06-24 2020-10-13 中国电子科技集团公司第三十六研究所 Single-channel co-channel interference elimination method and device
CN111898583A (en) * 2020-08-13 2020-11-06 华中科技大学 Communication signal modulation mode identification method and system based on deep learning

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
HONGLI GAO ET AL.: "Tool Wear Monitoring Based on Localized Fuzzy Neural Networks for Turning Operation", 《2009 SIXTH INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY》 *
JIALANG XU ET AL.: "A Spatiotemporal Multi-Channel Learning Framework for Automatic Modulation Recognition", 《IEEE WIRELESS COMMUNICATIONS LETTERS》 *
LI TINGPENG ET AL.: "Identification of Jamming Factors in Electronic Information System Based on Deep Learning", 《 2018 IEEE 18TH INTERNATIONAL CONFERENCE ON COMMUNICATION TECHNOLOGY (ICCT)》 *
MERIMA ET AL.: "End-to-End Learning From Spectrum Data: A Deep Learning Approach for Wireless Signal Identification in Spectrum Monitoring Applications", 《IEEE ACCESS 》 *
YUWANG JI ET AL.: "A Survey on Tensor Technique and Applications in Machine Learning", 《IEEE ACCESS》 *
REN YAN: "Research on the Application of Neural Networks in Modulation Recognition of Communication Signals", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY SERIES》 *
TANG YANQUN ET AL.: "Endogenous Security Communication Technology Based on Wireless Channel Characteristics and Its Applications", 《RADIO COMMUNICATIONS TECHNOLOGY》 *
ZHANG BIN, LIU KAI, ZHAO MENGWEI: "A Deep-Learning Modulation Recognition Algorithm Based on Time-Frequency Analysis", 《INDUSTRIAL CONTROL COMPUTER》 *
ZHANG JING ET AL.: "Recent Research Progress in Artificial-Intelligence-Based Wireless Transmission Technology", 《TELECOMMUNICATIONS SCIENCE》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113325375A (en) * 2021-05-25 2021-08-31 哈尔滨工程大学 Self-adaptive cancellation method based on deep neural network
CN113325375B (en) * 2021-05-25 2022-12-13 哈尔滨工程大学 Self-adaptive cancellation method based on deep neural network
CN113726350A (en) * 2021-08-09 2021-11-30 哈尔滨工程大学 Deep neural network-based strong correlation self-interference cancellation method
CN114938232A (en) * 2022-06-15 2022-08-23 北京邮电大学 LSTM-based simultaneous co-frequency full-duplex digital domain self-interference suppression method
CN115664898A (en) * 2022-10-24 2023-01-31 四川农业大学 OFDM system channel estimation method and system based on complex convolution neural network
CN115664898B (en) * 2022-10-24 2023-09-08 四川农业大学 OFDM system channel estimation method and system based on complex convolution neural network
CN115836867A (en) * 2023-02-14 2023-03-24 中国科学技术大学 Dual-branch fusion deep learning electroencephalogram noise reduction method, device and medium
CN115836867B (en) * 2023-02-14 2023-06-16 中国科学技术大学 Deep learning electroencephalogram noise reduction method, equipment and medium with double-branch fusion

Also Published As

Publication number Publication date
CN112491442B (en) 2021-12-28

Similar Documents

Publication Publication Date Title
CN112491442B (en) Self-interference elimination method and device
Li et al. Spatio-temporal representation with deep neural recurrent network in MIMO CSI feedback
CN113748614B (en) Channel estimation model training method and device
CN110837842A (en) Video quality evaluation method, model training method and model training device
CN113326930B (en) Data processing method, neural network training method, related device and equipment
US11477060B2 (en) Systems and methods for modulation classification of baseband signals using attention-based learned filters
CN110263909A (en) Image-recognizing method and device
CN111224905B (en) Multi-user detection method based on convolution residual error network in large-scale Internet of things
CN108510982A (en) Audio event detection method, device and computer readable storage medium
CN115022193B (en) Local area network flow prediction method based on deep learning model
CN113673260A (en) Model processing method, device, storage medium and processor
Guo et al. Deep learning for joint channel estimation and feedback in massive MIMO systems
CN116628566A (en) Communication signal modulation classification method based on aggregated residual transformation network
CN114925720A (en) Small sample modulation signal identification method based on space-time mixed feature extraction network
CN112836822A (en) Federal learning strategy optimization method and device based on width learning
CN113242201B (en) Wireless signal enhanced demodulation method and system based on generation classification network
CN110516566B (en) Filtering method and device based on convolutional layer
Wei et al. A multi-resolution channel structure learning estimation method of geometry-based stochastic model with multi-scene
Yang et al. Conventional Neural Network‐Based Radio Frequency Fingerprint Identification Using Raw I/Q Data
CN113037411B (en) Multi-user signal detection method and device based on deep learning
CN116248229B (en) Packet loss compensation method for real-time voice communication
Liu et al. Deep-learning-based OFDM channel compensation hardware implementation algorithm design
CN116761223B (en) Method for realizing 4G radio frequency communication by using 5G baseband chip and vehicle-mounted radio frequency system
CN116578674B (en) Federal variation self-coding theme model training method, theme prediction method and device
WO2023070675A1 (en) Data processing method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant