CN114565077A - Deep neural network generalization modeling method for power amplifier - Google Patents


Info

Publication number
CN114565077A
Authority
CN
China
Prior art keywords
layer
neural network
power amplifier
ofdm
data
Prior art date
Legal status
Granted
Application number
CN202210126012.3A
Other languages
Chinese (zh)
Other versions
CN114565077B (en)
Inventor
胡欣
谢树宾
刘志军
冀昕
常旭明
邱翊
王卫东
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202210126012.3A
Publication of CN114565077A
Application granted
Publication of CN114565077B
Legal status: Active
Anticipated expiration

Classifications

    • G06N 3/045 Combinations of networks
    • G06N 3/063 Physical realisation of neural networks using electronic means
    • G06N 3/08 Learning methods
    • Y02D 30/70 Reducing energy consumption in wireless communication networks


Abstract

The invention provides a deep neural network generalization modeling method for a power amplifier, belonging to the technical field of wireless communication. First, data with different bandwidths and power levels are acquired, modulated onto an OFDM system with K subcarriers to generate OFDM signals, and sampled to obtain sampling signals, which are uploaded to the cloud based on cloud-edge cooperation. Then, the bandwidth and power level of each OFDM signal are one-hot encoded, and the encoded information is output as a code vector through a fully-connected neural network layer. A DNN model is then constructed from each OFDM signal, its code vector and the corresponding sampling signal, and trained with the acquired data of different bandwidths and power levels to realize generalization modeling based on multi-state data. The invention models multiple groups of power amplifier signals with different bandwidths and power levels in a single network, avoids the need to train a dedicated network model for every bandwidth and power condition of the OFDM signals, reduces the number of models required, and greatly improves modeling efficiency.

Description

Deep neural network generalization modeling method for power amplifier
Technical Field
The invention belongs to the technical field of wireless communication, and particularly relates to a power amplifier-oriented deep neural network generalization modeling method.
Background
With the continuous development of fifth-generation mobile communication technology (5G), communication systems place higher demands on data rate, communication latency and related metrics, and increasing emphasis falls on wide bandwidth and low energy consumption.
A power amplifier (PA), as the main module providing power gain, is an essential component of a wireless communication system, and its ability to operate with both high efficiency and high linearity determines whether the whole communication system can operate with high quality. When a power amplifier operates at high efficiency it approaches its saturation region, and signals passing through it are severely distorted, exhibiting strong nonlinear characteristics; this causes gain compression, adjacent-channel spectral regrowth and in-band distortion, degrading communication system performance, so the distortion must be corrected with predistortion techniques.
Meanwhile, due to rapid increase of data bandwidth and data amount of a communication system, in order to improve spectrum utilization, non-constant envelope modulation schemes such as Orthogonal Frequency Division Multiplexing (OFDM) technology are widely applied to wireless communication technology, but the nonlinear characteristics of a power amplifier become more prominent due to a high peak-to-average power ratio of the modulation schemes.
Therefore, accurate power amplifier modeling is important for better pre-distortion study of the communication system and improvement of communication efficiency.
Because the input-output relationship of a power amplifier can be expressed as a function, a neural network trained on a large amount of data can approximate an arbitrary functional relationship and can also effectively compensate the memory effect of the power amplifier; neural network models are therefore widely applied in power amplifier modeling to better compensate the nonlinear distortion of ultra-wideband power amplifiers.
The augmented real-valued time-delay neural network (ARVTDNN) builds a deep neural network (DNN) from the in-phase component, quadrature component and amplitude of the current and past input signals of the power amplifier; it can effectively track the memory effect of the PA, its linearization performance improves as hidden layers are added, and it generalizes well, so the ARVTDNN is widely used with excellent performance.
However, in practical applications, the method for modeling the power amplifier by using the deep neural network model can only model the power amplifier signal with fixed bandwidth and power, and once there are OFDM signals with various bandwidth and power conditions, the DNN modeling effect is rapidly reduced. In order to improve the modeling effect of the nonlinear characteristic of the power amplifier, the generalization capability of the DNN model needs to be improved.
Disclosure of Invention
The invention aims to solve the following problem: to address the poor bandwidth and power generalization of deep neural network modeling of power amplifiers in broadband communication systems, and to meet the practical need to model power amplifier signals with different bandwidths and powers, a deep neural network generalization modeling method for power amplifiers is provided.
The deep neural network generalization modeling method facing the power amplifier specifically comprises the following steps:
step one, collecting data with different bandwidths and power levels, and respectively using the data as an input signal x (n) of a power amplifier and an output sampling signal y (n);
firstly, generating each group of modulation data symbols by Quadrature Phase Shift Keying (QPSK) or Quadrature Amplitude Modulation (QAM);
the ith group of data symbolsNumber is represented by Xi=[Xi,0,Xi,1,…,Xi,K-1]K is the number of the ith group of data symbols; n, N is the total number of bandwidth and power state information for acquiring data.
Each group of data symbols is then modulated respectively onto an OFDM system with K subcarriers by inverse discrete Fourier transform to generate the corresponding OFDM signals;
the OFDM signal corresponding to X_i is x_i(n), computed as:

x_i(n) = (1/√K) · Σ_{k=0}^{K-1} X_{i,k} · e^{j2πkn/(LK)}, n = 0, 1, ..., LK-1

where L is the oversampling factor, X_{i,k} is the k-th data symbol generated by QPSK or QAM, and n indexes the sampling points;
then, each OFDM signal is passed through the power amplifier and its output is sampled, obtaining the sampling signal corresponding to each OFDM signal;

x_i(n) corresponds to the sampling signal y_i(n), (i = 1, 2, ..., N);
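As an illustration of the signal-generation step above, the following is a minimal numpy sketch of oversampled OFDM generation by inverse DFT; the symbol count K = 64, the oversampling factor L = 4 and the QPSK mapping are illustrative choices, not values fixed by the patent.

```python
import numpy as np

def generate_ofdm(symbols, L):
    """Oversampled OFDM signal x(n) = (1/sqrt(K)) * sum_k X_k * exp(j*2*pi*k*n/(L*K))."""
    K = len(symbols)
    N = L * K                       # number of time-domain sampling points
    k = np.arange(K)
    n = np.arange(N)[:, None]       # column of sample indices for broadcasting
    return (symbols * np.exp(1j * 2 * np.pi * k * n / N)).sum(axis=1) / np.sqrt(K)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(64, 2))
# QPSK symbols with unit average power
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
x = generate_ofdm(qpsk, L=4)
print(x.shape)                               # (256,)
print(round(np.mean(np.abs(x) ** 2), 6))     # average power stays 1.0
```

In practice the inner sum is just an inverse FFT of the zero-padded symbol vector; the explicit loop-free DFT here keeps the correspondence with the formula visible.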
Secondly, uploading the acquired OFDM signals and corresponding sampling signals to a cloud terminal respectively based on cloud-edge cooperation;
thirdly, one-hot coding is carried out on the corresponding bandwidth and power level of each OFDM signal, and coding information is marked and output as respective coding vectors through a layer of fully-connected neural network layer;
first, for an OFDM signal x_i(n), its bandwidth and power information is recorded as a set of states (state 1, state 2, ……) and one-hot encoded, denoted s_i^{oh};

then a fully-connected neural network layer reduces the dimension of s_i^{oh}, and the output is recorded as the code vector c_i(n), which represents the bandwidth and power information of the OFDM signal, namely:

c_i(n) = w_c^T · s_i^{oh} + b_c

where w_c denotes the weights and b_c the biases of the neurons in this fully-connected neural network layer, and n_c is the number of neurons in the layer.
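The one-hot-plus-embedding step can be sketched as below; the six states and three-dimensional code vector (N = 6, n_c = 3) are assumed sizes for illustration, and the weights would in reality be learned jointly with the rest of the DNN.

```python
import numpy as np

N_states, n_c = 6, 3                     # assumed: 6 bandwidth/power states, 3-dim code
rng = np.random.default_rng(1)
W_c = rng.normal(size=(N_states, n_c))   # trainable weights of the embedding layer
b_c = np.zeros(n_c)                      # trainable bias

def encode_state(state_index):
    """One-hot encode a bandwidth/power state, then reduce it to an n_c-dim code vector."""
    s = np.zeros(N_states)
    s[state_index] = 1.0                 # one-hot state vector
    return W_c.T @ s + b_c               # c_i(n) = w_c^T s + b_c

c = encode_state(2)
print(c.shape)                           # (3,)
```

Because the state vector is one-hot, this layer is equivalent to looking up one row of W_c and adding the bias, i.e. the usual embedding-layer view of a learned code vector.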
Fourthly, constructing a DNN model by utilizing each OFDM signal, the corresponding coding vector and the corresponding sampling signal;
the DNN model comprises an Input Layer, fully-connected hidden layers (Fully Connected Layers) and an Output Layer; the input data include the code vector and the OFDM signal;

the number of neurons in the input layer is 5*(M+1)+n_c, where M is the memory depth of the power amplifier and n_c is the dimension of the code vector; the input expression is:

X_in(n) = [I_i(n), Q_i(n), |x_i(n)|, |x_i(n)|^2, |x_i(n)|^3, ..., I_i(n-M), Q_i(n-M), |x_i(n-M)|, |x_i(n-M)|^2, |x_i(n-M)|^3, c_i(n)]

where |x_i(n)| denotes the amplitude term of the i-th group of power amplifier input signals, c_i(n) is the code vector representing the bandwidth and power level of the input signal x_i(n), and I_i(n-m), Q_i(n-m), |x_i(n-m)|, |x_i(n-m)|^2, |x_i(n-m)|^3 are the in-phase, quadrature and envelope terms of the signal x_i(n-m), m ∈ {0, 1, ..., M}.
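The input-vector construction can be sketched as follows; the memory depth M = 4 and code dimension n_c = 3 are assumed purely for illustration.

```python
import numpy as np

def build_input(x, n, M, c):
    """Stack I, Q, |x|, |x|^2, |x|^3 for delays 0..M, then append the code vector c,
    giving the 5*(M+1)+n_c input features of the DNN at time n."""
    feats = []
    for m in range(M + 1):
        s = x[n - m]
        a = abs(s)
        feats += [s.real, s.imag, a, a ** 2, a ** 3]
    return np.concatenate([feats, c])

rng = np.random.default_rng(2)
x = rng.normal(size=32) + 1j * rng.normal(size=32)   # stand-in PA input signal
c = np.array([0.1, -0.2, 0.3])                       # stand-in 3-dim code vector
v = build_input(x, n=10, M=4, c=c)
print(v.shape)                                       # 5*(4+1)+3 = (28,)
```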
the number of fully-connected hidden layers is 3, with 25, 15 and 15 neurons respectively, and the activation function is the Tanh function; the output a_j of the j-th hidden layer is:

a_j = Tanh(w_j^T · a_{j-1} + b_j), (j = 1, 2, 3)

where w_j is the weight matrix and b_j the bias vector of the j-th hidden layer, and l_j is the number of neurons in the j-th hidden layer; in particular, l_0 = 5*(M+1)+n_c is defined as the number of input-layer neurons, and a_0 = X_in(n) is the input data of the neural network.
the output layer has 2 neurons, corresponding respectively to the in-phase and quadrature components of the power amplifier output signal; its activation function is linear, so the output Y_out of the output layer is expressed as:

Y_out = w_out^T · a_3 + b_out = [y'_{i,I}(n), y'_{i,Q}(n)]

where w_out is the weight matrix and b_out the bias vector of the output layer; y'_{i,I}(n) is the in-phase component and y'_{i,Q}(n) the quadrature component of the output signal at the current time.
the target output expression of the DNN model is:

Y_target = [y_{i,I}(n), y_{i,Q}(n)]

where y_{i,I}(n) represents the in-phase component and y_{i,Q}(n) the quadrature component of the power amplifier output signal;
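A forward pass through the architecture described above (three Tanh hidden layers of 25, 15 and 15 neurons, a 2-neuron linear output for I and Q) might be sketched as below; the input size l_0 = 28 follows from the assumed M = 4 and n_c = 3, and the random initialization is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [28, 25, 15, 15, 2]        # l_0 = 5*(M+1)+n_c, three hidden layers, I/Q output
params = [(rng.normal(scale=1 / np.sqrt(fi), size=(fi, fo)), np.zeros(fo))
          for fi, fo in zip(sizes[:-1], sizes[1:])]

def forward(v):
    """a_j = tanh(w_j^T a_{j-1} + b_j) for the hidden layers; linear output layer."""
    a = v
    for W, b in params[:-1]:
        a = np.tanh(a @ W + b)     # Tanh hidden layers
    W_out, b_out = params[-1]
    return a @ W_out + b_out       # Y_out = [y'_I(n), y'_Q(n)]

y = forward(rng.normal(size=28))
print(y.shape)                     # (2,)
```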
fifthly, training the DNN model by using the collected data with different bandwidths and power levels to realize generalization modeling based on multiple state data;
firstly, data with different bandwidths and powers are mixed together to train the DNN model; as the number of iterations increases, the weights and biases producing the code vector c(n) are continuously adjusted, realizing the update and optimization of the code vector.
Then, after training is finished, respectively carrying out performance test on the DNN model by utilizing OFDM signals under different bandwidths and powers, and determining the normalized mean square error performance of each group of data;
the cost function is the mean square error (MSE) function, and the parameters of the constructed DNN model are updated and trained with the Adam optimization algorithm; the optimization objective function of the model is:

MSE = (1/N_total) · Σ_n [ (y_{i,I}(n) - y'_{i,I}(n))^2 + (y_{i,Q}(n) - y'_{i,Q}(n))^2 ]

where N_total represents the number of samples used for training.
Finally, the trained DNN model is delivered to the edge terminal, where model inference is carried out to reduce the transmission delay of the model output;
when receiving OFDM signals under new bandwidth and power conditions, the OFDM data are merged into a training set, updating training of the DNN model is carried out, the updated model is issued to the edge end again to replace the original model, and timely updating of the model is achieved.
The invention has the advantages and positive effects that:
according to the deep neural network generalization modeling method for the power amplifier, a DNN model with a certain generalization capability over bandwidth and power is established by embedding a group of code vectors representing the bandwidth and power states of the signals; multiple groups of power amplifier signals with different bandwidths and power levels are modeled with a single network, avoiding the problem of having to train a dedicated network model for every bandwidth and power condition of the OFDM signals, reducing the number of models required and greatly improving modeling efficiency.
Drawings
FIG. 1 is a flow chart of a power amplifier-oriented deep neural network generalization modeling method according to the present invention;
FIG. 2 is a schematic diagram of encoding the bandwidth and power level of each OFDM signal into its respective code vector according to the present invention;
FIG. 3 is a schematic diagram of a network structure for constructing a DNN model according to the present invention.
Fig. 4 shows the normalized power spectra of the model output signal of the present invention and the actual output signal of the PA.

FIG. 5 is a graph comparing the NMSE performance of the method proposed by the present invention with that of the ARVTDNN model.
Detailed Description
The following detailed description and the accompanying drawings are included to provide a more detailed and clear description of the embodiments of the present invention.
The invention provides a power amplifier-oriented deep neural network generalization modeling method, which comprises the following steps as shown in figure 1:
step one, collecting data with different bandwidths and power levels, and respectively using the data as an input signal x (n) of a power amplifier and an output sampling signal y (n);
collecting data of different bandwidths and power levels to establish a model of input and output signals of the power amplifier, wherein the collecting process comprises the following steps:
firstly, generating each group of modulation data symbols by Quadrature Phase Shift Keying (QPSK) or Quadrature Amplitude Modulation (QAM);
the i-th group of data symbols is denoted X_i = [X_{i,0}, X_{i,1}, ..., X_{i,K-1}], where K is the number of data symbols in the i-th group; i = 1, 2, ..., N, and N is the total number of bandwidth and power states of the acquired data.
Then, each group of data symbols is modulated respectively onto an OFDM system with K subcarriers by inverse discrete Fourier transform, generating the corresponding OFDM signals;

the OFDM signal corresponding to X_i is x_i(n), computed as:

x_i(n) = (1/√K) · Σ_{k=0}^{K-1} X_{i,k} · e^{j2πkn/(LK)}, n = 0, 1, ..., LK-1

where L is the oversampling factor, X_{i,k} is the k-th data symbol generated by QPSK or QAM, and n indexes the sampling points;
then, each OFDM signal is passed through the power amplifier and its output is sampled, obtaining the sampling signal corresponding to each OFDM signal;

x_i(n) and y_i(n) denote the input and sampled output signals of the i-th group of data, i.e., x_i(n) corresponds to the sampling signal y_i(n), (i = 1, 2, ..., N);
Secondly, uploading each OFDM signal and corresponding sampling signals acquired by an edge terminal to a cloud terminal respectively based on a cloud edge cooperation method;
then training a DNN model of the power amplifier at the cloud end;
thirdly, one-hot coding is carried out on the corresponding bandwidth and power level of each OFDM signal, and the coded information is marked to be output as respective coding vectors through a layer of fully-connected neural network layer;
as shown in FIG. 2, first, for an OFDM signal x_i(n), its bandwidth and power information is recorded as a set of states (state 1, state 2, ……) and one-hot encoded, denoted s_i^{oh};

then a fully-connected neural network layer reduces the dimension of s_i^{oh}, and the output is recorded as the code vector c_i(n), which represents the bandwidth and power information of the OFDM signal, namely:

c_i(n) = w_c^T · s_i^{oh} + b_c

where w_c denotes the weights and b_c the biases of the neurons in this fully-connected neural network layer, and n_c is the number of neurons in the layer.
The code vector c_i(n) is then fed into the DNN model in the form of an embedded vector; when the DNN model is trained, the parameters of the neurons in this layer also participate in the training, and their weights and biases are adjusted as the number of iterations increases, realizing the update and optimization of the code vector.
Fourthly, constructing a DNN model by utilizing each OFDM signal, the corresponding coding vector and the corresponding sampling signal;
all OFDM signals are: x (n) ═ x1(n),x2(n),...,xi(n),...,xN(n)];
The corresponding code vector is: c (n) ═ c1(n),c2(n),...,ci(n),...,cN(n)];
Corresponding sampling signal y (n) ═ y1(n),y2(n),...,yi(n),...,yN(n)];
As shown in fig. 3, the DNN model includes an Input Layer, fully-connected hidden layers (Fully Connected Layers) and an Output Layer; two types of input data are included: the code vector c_i(n), output by a fully-connected layer from the one-hot encoding of the bandwidth and power state of the OFDM signal x_i(n), and the in-phase component, quadrature component, envelope terms and corresponding delayed versions of the signal x_i(n);
the number of neurons in the input layer is 5*(M+1)+n_c, where M is the memory depth of the power amplifier and n_c is the dimension of the code vector; the input expression is:

X_in(n) = [I_i(n), Q_i(n), |x_i(n)|, |x_i(n)|^2, |x_i(n)|^3, ..., I_i(n-M), Q_i(n-M), |x_i(n-M)|, |x_i(n-M)|^2, |x_i(n-M)|^3, c_i(n)]

where |x_i(n)| denotes the amplitude term of the i-th group of power amplifier input signals and c_i(n) is the code vector representing the bandwidth and power level of the input signal x_i(n); I_i(n), Q_i(n), |x_i(n)|, |x_i(n)|^2, |x_i(n)|^3 form a set of input components of the signal x_i(n), followed by the corresponding set for the time-delayed signal x_i(n-1), and so on, with the set for the time-delayed signal x_i(n-m) in the middle and the set for the time-delayed signal x_i(n-M) last, m ∈ {0, 1, ..., M}.
the number of fully-connected hidden layers is 3, with 25, 15 and 15 neurons respectively, and the activation function is the Tanh function; the output a_j of the j-th hidden layer is:

a_j = Tanh(w_j^T · a_{j-1} + b_j), (j = 1, 2, 3)

where w_j is the weight matrix and b_j the bias vector of the j-th hidden layer, and l_j is the number of neurons in the j-th hidden layer; in particular, l_0 = 5*(M+1)+n_c is defined as the number of input-layer neurons, and a_0 = X_in(n) is the input data of the neural network.
the output layer has 2 neurons, corresponding respectively to the in-phase and quadrature components of the power amplifier output signal; the activation function is linear, so the output Y_out of the output layer is expressed as:

Y_out = w_out^T · a_3 + b_out = [y'_{i,I}(n), y'_{i,Q}(n)]

where w_out is the weight matrix and b_out the bias vector of the output layer; Y_out represents the output of the DNN model and comprises the in-phase component y'_{i,I}(n) and the quadrature component y'_{i,Q}(n) of the model output signal at the current time.
the target output expression of the DNN model is:

Y_target = [y_{i,I}(n), y_{i,Q}(n)]

where y_{i,I}(n) represents the in-phase component and y_{i,Q}(n) the quadrature component of the power amplifier output signal;
fifthly, training the DNN model by using the collected data with different bandwidths and power levels to realize generalization modeling based on multiple state data;
After obtaining the input data X_in(n) and the target output data Y_target(n), the data are divided into a training set and a test set in a certain proportion, used for network training and final testing respectively.
Firstly, data with different bandwidths and powers are mixed together to train the DNN model; as the number of iterations increases, the weights and biases producing the code vector c(n) are continuously adjusted, realizing the update and optimization of the code vector.
Then, after training is finished, the DNN model is tested with OFDM signals at different bandwidths and powers, and the normalized mean square error (NMSE) performance is determined for each group of data; this realizes generalization modeling applicable to power amplifier signals at various bandwidth and power levels.
The cost function is the mean square error (MSE) function, and the parameters of the constructed DNN model are updated and trained with the Adam optimization algorithm; the optimization objective function of the model is:

MSE = (1/N_total) · Σ_n [ (y_{i,I}(n) - y'_{i,I}(n))^2 + (y_{i,Q}(n) - y'_{i,Q}(n))^2 ]

where N_total represents the number of samples used for training.
Finally, the trained DNN model is delivered to the edge terminal, where model inference is carried out to reduce the transmission delay of the model output;
and collecting the OFDM signal at an edge end, and inputting the OFDM signal into the DNN model to obtain an output signal. When receiving OFDM signals under new bandwidth and power conditions, the OFDM data are merged into a training set, updating training of the DNN model is carried out, the updated model is issued to the edge end again to replace the original model, and timely updating of the model is achieved.
The invention uses a data set consisting of power amplifier input and output signals at different bandwidths and powers when training the DNN model.
After the model training is completed, fig. 4 shows the normalized power spectra of the network output signal and the actual output signal of the PA for the data in the test set; it can be seen that the DNN model has fully learned the nonlinear characteristics of the PA. FIG. 5 compares the NMSE performance of the method proposed by the present invention with that of the ARVTDNN model: Model 1 is the modeling result of the proposed method on each group of data; Model 2 is the result of training an ARVTDNN model of the same scale on each group of data; Model 3 is the NMSE test result of directly mixing the unencoded groups of data and feeding them into a single ARVTDNN model for training.
As can be seen from FIG. 5, the NMSE of the model output signals provided by the invention is lower than-32 dB, and the modeling effect on each group of data is better. The experimental result shows that the method greatly reduces the number of required models, improves the generalization capability of the models and meets the actual requirements on power amplifier signal modeling under various bandwidth and power conditions under the condition of ensuring good modeling performance of the power amplifier.
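The NMSE figure quoted above is conventionally computed as the ratio of error energy to signal energy in dB; the small sketch below uses synthetic data, not the patent's measurements.

```python
import numpy as np

def nmse_db(y_true, y_pred):
    """Normalized mean square error in dB between measured and modeled PA output."""
    err = np.sum(np.abs(y_true - y_pred) ** 2)
    ref = np.sum(np.abs(y_true) ** 2)
    return 10.0 * np.log10(err / ref)

y_true = np.array([1 + 1j, 2 - 1j, -1 + 0.5j])   # synthetic complex baseband samples
y_pred = y_true * 1.01                           # a model with a 1% amplitude error
print(nmse_db(y_true, y_pred))                   # about -40 dB for a 1% error
```

An NMSE below -32 dB, as reported for the invention, thus corresponds to a relative modeling error energy below about 0.06%.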

Claims (6)

1. A deep neural network generalization modeling method for a power amplifier is characterized by comprising the following specific steps:
firstly, generating each group of modulation data symbols through quadrature phase shift keying or quadrature amplitude modulation, and respectively modulating the modulation data symbols to an OFDM system with K subcarriers to generate respective corresponding OFDM signals;
meanwhile, sampling is carried out through a power amplifier to obtain sampling signals corresponding to all OFDM signals;
then, based on cloud-edge cooperation, uploading the acquired OFDM signals and corresponding sampling signals to a cloud end respectively, performing one-hot coding on corresponding bandwidth and power level of each OFDM signal, and marking and outputting coding information as respective coding vectors through a layer of fully-connected neural network layer;
then, constructing a DNN model by utilizing each OFDM signal, the corresponding coding vector and the corresponding sampling signal;
and finally, training the DNN model by using the acquired data with different bandwidths and power levels to realize generalization modeling based on the data with various states.
2. The power amplifier-oriented deep neural network generalization modeling method of claim 1, wherein the i-th group of data symbols is denoted X_i = [X_{i,0}, X_{i,1}, ..., X_{i,K-1}], where K is the number of data symbols in the i-th group; i = 1, 2, ..., N, and N is the total number of bandwidth and power states of the acquired data.
3. The power amplifier-oriented deep neural network generalization modeling method according to claim 1, wherein each group of data symbols is modulated by inverse discrete Fourier transform to generate the corresponding OFDM signal on the OFDM system;

the OFDM signal corresponding to the i-th group of data symbols X_i is x_i(n), computed as:

x_i(n) = (1/√K) · Σ_{k=0}^{K-1} X_{i,k} · e^{j2πkn/(LK)}, n = 0, 1, ..., LK-1

where L is the oversampling factor, X_{i,k} is the k-th data symbol generated by QPSK or QAM, and n indexes the sampling points;

the OFDM signal x_i(n) corresponds to the sampling signal y_i(n).
4. The power amplifier-oriented deep neural network generalization modeling method according to claim 1, wherein the specific calculation process of the coding vector is as follows:
first, for an OFDM signal x_i(n), its bandwidth and power information is recorded as a set of states (state 1, state 2, ……) and one-hot encoded, denoted s_i^{oh};

then a fully-connected neural network layer reduces the dimension of s_i^{oh}, and the output is recorded as the code vector c_i(n), which represents the bandwidth and power information of the OFDM signal, namely:

c_i(n) = w_c^T · s_i^{oh} + b_c

where w_c denotes the weights and b_c the biases of the neurons in this fully-connected neural network layer, and n_c is the number of neurons in the layer.
5. The power amplifier-oriented deep neural network generalization modeling method of claim 1, wherein the DNN model comprises an input layer, a fully-connected neural network hidden layer and an output layer; the input data includes: encoding the vector and the OFDM signal;
the number of neurons in the input layer is 5*(M+1)+n_c, where M is the memory depth of the power amplifier and n_c is the dimension of the code vector; the input expression is:

X_in(n) = [I_i(n), Q_i(n), |x_i(n)|, |x_i(n)|^2, |x_i(n)|^3, ..., I_i(n-M), Q_i(n-M), |x_i(n-M)|, |x_i(n-M)|^2, |x_i(n-M)|^3, c_i(n)]

where |x_i(n)| denotes the amplitude term of the i-th group of power amplifier input signals, c_i(n) is the code vector representing the bandwidth and power level of the input signal x_i(n), and I_i(n-m), Q_i(n-m), |x_i(n-m)|, |x_i(n-m)|^2, |x_i(n-m)|^3 are the in-phase, quadrature and envelope terms of the signal x_i(n-m), m ∈ {0, 1, ..., M};
the number of fully-connected hidden layers is 3, and the activation function is the Tanh function;

the output a_j of the j-th hidden layer is:

a_j = Tanh(w_j^T · a_{j-1} + b_j), (j = 1, 2, 3)

where w_j is the weight matrix and b_j the bias vector of the j-th hidden layer, and l_j is the number of neurons in the j-th hidden layer; in particular, l_0 = 5*(M+1)+n_c is defined as the number of input-layer neurons, and a_0 = X_in(n) is the input data of the neural network;
the output layer has 2 neurons, corresponding respectively to the in-phase and quadrature components of the power amplifier output signal, with a linear activation function; the output Y_out of the output layer is then expressed as:

Y_out = [y'_{i,I}(n), y'_{i,Q}(n)]^T = w_out^T · a_3 + b_out

wherein w_out is the weight of the output layer, b_out is the bias of the output layer, y'_{i,I}(n) is the in-phase component of the output signal at the current moment, and y'_{i,Q}(n) is the quadrature component of the output signal at the current moment;
the target output expression of the DNN model is:

Y_target = [y_{i,I}(n), y_{i,Q}(n)]^T

wherein y_{i,I}(n) represents the in-phase component of the power amplifier output signal and y_{i,Q}(n) represents the quadrature component of the power amplifier output signal.
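The full forward pass described in this claim (3 Tanh hidden layers, 2-neuron linear output for I and Q) can be sketched as follows. The layer widths (10 neurons per hidden layer) and random weights are hypothetical placeholders for the trained parameters.

```python
import numpy as np

def forward(X_in, weights, biases):
    """3 Tanh hidden layers, then a linear 2-neuron output layer
    predicting the I and Q components of the PA output."""
    a = X_in
    for w, b in zip(weights[:-1], biases[:-1]):  # hidden layers j = 1..3
        a = np.tanh(w.T @ a + b)
    w_out, b_out = weights[-1], biases[-1]
    return w_out.T @ a + b_out                   # linear activation

# Hypothetical sizes: input l_0 = 18, hidden l_1..l_3 = 10, output 2.
rng = np.random.default_rng(1)
sizes = [18, 10, 10, 10, 2]
weights = [rng.standard_normal((m, k)) * 0.1
           for m, k in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(k) for k in sizes[1:]]

y = forward(rng.standard_normal(18), weights, biases)
print(y.shape)  # (2,) -> predicted [I, Q]
```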
6. The power amplifier-oriented deep neural network generalization modeling method of claim 1, wherein the training of the DNN model specifically comprises:
firstly, data of different bandwidths and powers are mixed together to train the DNN model; as the number of iterations increases, the weights and biases producing the coding vector c(n) are continuously adjusted, realizing update and optimization of the coding vector;
then, after training is finished, the DNN model is tested separately with OFDM signals of different bandwidths and powers, and the normalized mean square error performance of each group of data is determined;
the cost function adopts the mean square error function MSE, the constructed DNN model parameters are updated and trained using the Adam optimization algorithm, and the optimization objective function of the model is:

MSE = (1/N_total) · Σ_{n=1}^{N_total} [ (y_{i,I}(n) − y'_{i,I}(n))^2 + (y_{i,Q}(n) − y'_{i,Q}(n))^2 ]

wherein N_total represents the number of samples used for training;
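The objective function above, summing squared I and Q errors and averaging over all N_total training samples, can be sketched as:

```python
import numpy as np

def mse_cost(y_true, y_pred):
    """MSE over I and Q components, averaged across N_total samples.

    y_true, y_pred : arrays of shape (N_total, 2), columns = [I, Q].
    """
    return np.mean(np.sum((y_true - y_pred) ** 2, axis=1))

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = np.array([[0.9, 0.1], [0.1, 0.9]])
print(mse_cost(y_true, y_pred))  # 0.02
```

In practice this scalar would be minimized with an Adam optimizer, as the claim specifies.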
finally, the trained DNN model is deployed to the edge terminal, where model inference is performed to reduce the transmission delay of the model output;
when OFDM signals under new bandwidth and power conditions are received, these OFDM data are merged into the training set, the DNN model is retrained, and the updated model is deployed to the edge terminal again to replace the original model, realizing timely updating of the model.
CN202210126012.3A 2022-02-10 2022-02-10 Deep neural network generalization modeling method for power amplifier Active CN114565077B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210126012.3A CN114565077B (en) 2022-02-10 2022-02-10 Deep neural network generalization modeling method for power amplifier


Publications (2)

Publication Number Publication Date
CN114565077A true CN114565077A (en) 2022-05-31
CN114565077B CN114565077B (en) 2023-04-18

Family

ID=81714173


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI830276B (en) * 2022-07-04 2024-01-21 聯發科技股份有限公司 Method of compensating for power amplifier distortions and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200295975A1 (en) * 2019-03-15 2020-09-17 The Research Foundation For The State University Of New York Integrating volterra series model and deep neural networks to equalize nonlinear power amplifiers
CN111988254A (en) * 2020-04-29 2020-11-24 北京邮电大学 Low-complexity peak-to-average ratio compression and predistortion joint optimization method
CN113221308A (en) * 2021-06-11 2021-08-06 北京邮电大学 Transfer learning rapid low-complexity modeling method facing power amplifier
CN113472706A (en) * 2021-07-12 2021-10-01 南京大学 MIMO-OFDM system channel estimation method based on deep neural network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SUN ZHANG,ET AL.: "Deep Neural Network Behavioral Modeling Based on Transfer Learning for Broadband Wireless Power Amplifier", 《IEEE MICROWAVE AND WIRELESS COMPONENTS LETTERS》 *
ZHIJUN LIU,ET AL.: "A joint PAPR reduction and digital predistortion based on real-valued neural networks for OFDM Systems", 《IEEE MICROWAVE AND WIRELESS COMPONENTS LETTERS》 *
TANG KE: "Research on Predistortion Methods Based on Volterra Series and Neural Networks", 《CNKI Master's Electronic Journal Publication Information》 *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant