CN113221308A - Transfer learning rapid low-complexity modeling method facing power amplifier - Google Patents


Info

Publication number
CN113221308A
Authority
CN
China
Prior art keywords
layer, model, power amplifier, output, input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110659703.5A
Other languages
Chinese (zh)
Inventor
胡欣
张孙
刘志军
孙琳琳
韩康
王卫东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202110659703.5A
Publication of CN113221308A
Legal status: Pending

Classifications

    • G06F30/18 Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06F2119/06 Power analysis or power optimisation


Abstract

The invention discloses a fast low-complexity transfer learning modeling method for power amplifiers, belonging to the technical field of wireless communication. The method comprises the following steps: first, a DNN model of a power amplifier is trained; using transfer learning, part of the parameters and network structure of the DNN model are copied and fixed as a pre-designed filter; a transfer learning neural network (TLNN) model comprising the pre-designed filter and transfer learning layers is then constructed; by training TLNN models built from pre-designed filters with different numbers of layers, the model with good performance and short training time is selected. The number of pre-designed filter layers is fixed in the final model, and when a power amplifier needs to be modeled again, the TLNN model with the fixed number of pre-designed filter layers is trained directly to obtain the model. The invention effectively reduces model training time while ensuring a good modeling effect of the PA model, and meets the need to model power amplifiers frequently in practical applications.

Description

Transfer learning rapid low-complexity modeling method facing power amplifier
Technical Field
The invention belongs to the technical field of wireless communication, and particularly relates to a transfer learning rapid low-complexity modeling method for a power amplifier.
Background
Currently, with the gradual popularization of the fifth generation mobile communication system (5G), supporting faster data transmission rates and better communication quality has become a major challenge for modern mobile communication systems. To improve data rate and communication quality, modulation schemes such as Orthogonal Frequency Division Multiplexing (OFDM) have been introduced; these schemes gradually increase the bandwidth and peak-to-average power ratio of the signals, which places higher demands on the power amplifier. The power amplifier is a key component in a wireless communication system; its function is to amplify the modulated signal to the desired power for transmission to the antenna. As the transmission bandwidth of wireless communication systems increases, the nonlinearity and memory effect of the power amplifier become more prominent, and the distortion of the communication signal becomes more significant. Therefore, to better analyze the nonlinear characteristics of the power amplifier and to study predistortion for the communication system, establishing a power amplifier behavioral model is increasingly important.
The neural network model is regarded as an effective behavioral model for the power amplifier because of its strong nonlinear fitting capability, and it is widely used in the field of Power Amplifier (PA) modeling. Methods that build a Shallow Neural Network (SNN) from the in-phase component, the quadrature component, and the associated envelope terms of the power amplifier input signal have accordingly emerged. With the popularization of 5G mobile communication technology, the signal bandwidth of wireless communication systems is gradually increasing, and this increase gives the PA stronger nonlinearity and memory effects. In this case, the dimension of the PA neural network model's input signal must be increased to maintain the modeling effect. However, as the input dimension grows, the input-output relationship of the PA model becomes complex, and using an SNN limits the accuracy of the PA model. To improve the performance of neural network models, the number of neurons and the number of layers can generally be increased; it has been found that, of two neural networks with the same number of parameters, the one with more hidden layers performs better. Thus, Deep Neural Networks (DNNs) have received extensive attention and research in the field of PA modeling.
However, in practical applications, operating conditions (such as temperature) may change the characteristics of the power amplifier. To track these changes, it is important to update the DNN model adaptively. Since a DNN requires a long training time, modeling the PA directly with a DNN each time would incur a large time cost.
Disclosure of Invention
To address the long training time of deep neural network modeling of power amplifiers in broadband communication systems, and to meet the need to model power amplifiers frequently in practical applications, the invention applies transfer learning to the deep neural network modeling of the power amplifier, establishes a Transfer Learning Neural Network (TLNN), and realizes a fast low-complexity transfer learning modeling method for power amplifiers that substantially reduces power amplifier modeling time while preserving the modeling effect.
The invention provides a fast low-complexity modeling method for transfer learning of a power amplifier, which comprises the following steps:
(1) collecting the input and output signals of a power amplifier, and constructing and training a DNN model of the power amplifier; the DNN model comprises an input layer, N fully connected layers and an output layer; N is a positive integer greater than 3;
(2) copying and fixing the weights, the biases and the network structure of the first k fully connected layers (1 ≤ k < N) of the trained DNN model, and using them as a pre-designed filter to extract PA features;
(3) constructing a transfer learning neural network model of the power amplifier and training;
the TLNN model of the power amplifier comprises an input layer, the pre-designed filter and an adaptation layer; the transfer learning layers and the output layer form the adaptation layer; there are l transfer learning layers, the output of the pre-designed filter is the input of the first transfer learning layer, the output of the i-th transfer learning layer is the input of the next transfer learning layer, i = 1, 2, ..., l, and the output of the l-th transfer learning layer is the input of the output layer;
the input layer of the TLNN model is the same as that of the DNN model, and the number of the neurons is set to be 5M +5 according to the memory depth M of the power amplifier;
(4) constructing TLNN models using pre-designed filters with different numbers of layers, and selecting the network with the best overall performance as the final model. Train the different TLNN models; after training, record the training time of the models built from the different pre-designed filters, and test the performance of each network. Comparing the training time and NMSE performance of the different networks, the model with the shorter training time among those meeting the modeling-performance requirement is the model finally established by the invention. At this point, the number of fully connected layers in the pre-designed filter is fixed. When a new power amplifier model needs to be constructed, the TLNN model is trained and obtained directly using the pre-designed filter with the fixed number of layers.
When training the DNN model and the TLNN model, the method uses different training data sets, formed by collecting the input and output signals of different power amplifiers.
Compared with the prior art, the method has the following advantages and positive effects. Part of the network parameters and structure of the power amplifier DNN model are reused as the pre-designed filter, and different TLNN models can be established for the PA with pre-designed filters of different depths; the model with the best overall performance is then selected as the final power amplifier model. When the PA needs to be modeled again, only the collected PA input and output signals are needed to retrain the TLNN model with the fixed number of pre-designed filter layers. Good modeling performance of the power amplifier is thus ensured while the training time of the power amplifier model is greatly reduced, meeting the need to model the power amplifier frequently in practical applications.
Drawings
FIG. 1 is a schematic flow chart of a fast low-complexity modeling method for transfer learning of a power amplifier according to the present invention;
FIG. 2 is a schematic diagram of a power amplifier DNN model established by the present invention;
FIG. 3 is a schematic diagram of a pre-designed filter constructed in accordance with the present invention;
FIG. 4 is a schematic diagram of a power amplifier TLNN model built using a pre-designed filter in accordance with the present invention;
FIG. 5 is a graph of the modeling time and modeling performance of the TLNN model of the present invention as a function of the number of layers of a pre-designed filter;
FIG. 6 is a schematic diagram of the SSPA 200-MHz output spectrum of the TLNN model finally established in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a fast low-complexity transfer learning modeling method for power amplifiers, which reduces the training complexity of the model by reusing part of the network structure of a pre-trained DNN model through transfer learning. First, a DNN model of the power amplifier is trained with the Adam (Adaptive Moment Estimation) algorithm, and transfer learning is used to copy and fix part of the parameters and network structure of the DNN model as a pre-designed filter (Pre-designed Filter). The adaptation layers (Adaptation Layers) of the model are then trained with the LM (Levenberg-Marquardt) algorithm to fit the true output of the PA. The invention realizes a PA modeling method based on a transfer learning deep neural network, and effectively reduces model training time while ensuring a good modeling effect of the PA model.
As shown in fig. 1, the fast low-complexity modeling method for transfer learning of a power amplifier according to an embodiment of the present invention includes the following five steps.
Step one: acquire the input and output signals of the power amplifier.
In the embodiment of the invention, models are established from the input and output signals of two power amplifiers. For power amplifier 1, Quadrature Phase Shift Keying (QPSK) or Quadrature Amplitude Modulation (QAM) generates modulated data symbols Xns; an inverse discrete Fourier transform modulates the data symbols Xns onto an OFDM system with K subcarriers to generate the OFDM signal xns(n), and sampling the output of power amplifier 1 yields the sampled signal yns(n). Likewise for power amplifier 2, QPSK or QAM generates modulated data symbols Xnt, the OFDM signal xnt(n) is generated on an OFDM system, and sampling the output of power amplifier 2 yields the sampled signal ynt(n). Here n denotes the sample index.
To verify the performance of the transfer learning neural network, power amplifier 1 and power amplifier 2 in the embodiment of the invention are two power amplifiers of different models. Alternatively, the two power amplifiers may be of the same model, with the bandwidths of their input and output signals set to differ.
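The signal generation in step one can be sketched as follows. This is a minimal Python/NumPy illustration, not the embodiment's actual test setup: the QPSK constellation scaling, the number of symbols, and the random bit source are all assumptions.

```python
import numpy as np

def generate_ofdm_signal(num_symbols, K, seed=0):
    """Generate a baseband OFDM sample stream x(n): QPSK data symbols are
    mapped onto K subcarriers and converted to the time domain with an
    inverse discrete Fourier transform, as in step one."""
    rng = np.random.default_rng(seed)
    # QPSK data symbols, one per subcarrier: (+/-1 +/- 1j) / sqrt(2)
    bits = rng.integers(0, 2, size=(num_symbols, K, 2))
    symbols = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)
    # IDFT of each K-symbol block gives one K-sample OFDM symbol
    time_domain = np.fft.ifft(symbols, axis=1)
    return time_domain.reshape(-1)  # serialized sample stream

x_ns = generate_ofdm_signal(num_symbols=4, K=64)
```

In a real setup this stream would drive the power amplifier, and the amplifier output would be sampled to obtain yns(n).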
Step two: construct and train a DNN model of the power amplifier using the OFDM signal xns(n) and the sampled signal yns(n). The input signal of the DNN model is defined as the source input (Source Input) and the output signal of the DNN model as the source output (Source Output); that is, the source input of the DNN model is xns(n) and the source output is yns(n).
As shown in fig. 2, the DNN model includes an Input Layer, a Fully Connected Layer, and an Output Layer.
The number of neurons in the input layer is 5M +5, and M is the memory depth of PA.
The number of fully connected layers is N, where N is greater than 3, and the activation function is the Tanh activation function.
The number of the neurons of the output layer is 2, the neurons correspond to the in-phase component and the quadrature component of the PA output signal respectively, and the activation function is a linear activation function.
In the embodiment of the invention, the input Xns of the DNN model is expressed as:

Xns = [Ins(n), Qns(n), |xns(n)|, |xns(n)|^2, |xns(n)|^3, ..., Ins(n-M), Qns(n-M), |xns(n-M)|, |xns(n-M)|^2, |xns(n-M)|^3]   (1)

wherein Ins(n-m), Qns(n-m), |xns(n-m)|, |xns(n-m)|^2, |xns(n-m)|^3, for m = 0, 1, ..., M, are a group of components of the delayed DNN input signal xns(n-m): Ins(n-m) and Qns(n-m) are its in-phase and quadrature components, and |xns(n-m)| is its amplitude term. For m = 0 these correspond to the inputs Xns(1), ..., Xns(5) of the DNN model; with memory depth M, the input vector has 5(M+1) = 5M+5 elements in total.
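The 5M+5-element input vector defined above can be assembled as in the following sketch (NumPy; the sample values used at the end are placeholders):

```python
import numpy as np

def build_input_vector(x, n, M):
    """Build the 5M+5 input features for sample n: for each delay m = 0..M,
    the in-phase component, quadrature component, and the first three powers
    of the amplitude of the delayed sample x(n-m)."""
    feats = []
    for m in range(M + 1):
        s = x[n - m]
        a = abs(s)
        feats.extend([s.real, s.imag, a, a ** 2, a ** 3])
    return np.array(feats)

x = np.exp(1j * np.linspace(0.0, 1.0, 10))  # placeholder complex samples, |x| = 1
X = build_input_vector(x, n=5, M=3)         # 5*3 + 5 = 20 features
```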
The output (Source Output) of the DNN model is expressed as:

Yns = [yIs(n), yQs(n)]   (2)

wherein yIs(n) and yQs(n) represent the in-phase and quadrature components of the DNN model output signal yns(n), respectively. In FIG. 2, y'Is(n) and y'Qs(n) represent the outputs of the current DNN network model.
Training the constructed DNN model: the cost function of the network model is the Mean Square Error (MSE) function, and the optimization algorithm of the network is the Adam optimization algorithm. The training data are divided into a training set and a test set in a 3:2 ratio, used respectively for network training and for the final test of the network. After the model is trained, its performance is tested using the Normalized Mean Square Error (NMSE).
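The 3:2 data split and the NMSE test metric described above can be sketched as follows. The split ratio is from the text; the NMSE formula shown is the standard dB definition, which the patent does not spell out explicitly.

```python
import numpy as np

def split_3_2(data):
    """Divide samples into training and test sets in a 3:2 ratio."""
    cut = (3 * len(data)) // 5
    return data[:cut], data[cut:]

def nmse_db(y_true, y_pred):
    """Normalized mean square error in dB: total error energy divided by
    total signal energy (standard definition)."""
    err = np.sum(np.abs(y_true - y_pred) ** 2)
    ref = np.sum(np.abs(y_true) ** 2)
    return 10.0 * np.log10(err / ref)

train, test = split_3_2(np.arange(10))  # 6 training samples, 4 test samples
```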
Step three: save part of the DNN model network as the pre-designed filter.
As shown in FIG. 3, the weights and bias parameters of the first k fully connected layers of the DNN model (1 ≤ k < N) are saved, the network structure of those first k fully connected layers is copied, the saved parameters are loaded into the copied structure and fixed, and the result is used as a pre-designed filter to extract PA features.
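The copy-and-fix operation of step three can be illustrated with the following framework-free sketch, in which a trained network is modeled as a list of (W, b) tuples and the layer sizes are arbitrary illustrative choices:

```python
import copy
import numpy as np

def extract_predesigned_filter(dnn_layers, k):
    """Copy the weights and biases of the first k fully connected layers of
    a trained DNN and keep them fixed: the returned closure is the frozen
    pre-designed filter used for PA feature extraction."""
    assert 1 <= k < len(dnn_layers)
    frozen = copy.deepcopy(dnn_layers[:k])  # detached from further training

    def apply_filter(x):
        # Forward pass with Tanh activations; parameters are never updated
        for W, b in frozen:
            x = np.tanh(W @ x + b)
        return x

    return apply_filter

rng = np.random.default_rng(0)
# A stand-in "trained" DNN with N = 5 fully connected layers of width 4
layers = [(rng.standard_normal((4, 4)), rng.standard_normal(4)) for _ in range(5)]
pre_filter = extract_predesigned_filter(layers, k=3)
S_n = pre_filter(np.ones(4))  # feature vector extracted from one input
```

In a deep learning framework the same effect is obtained by copying the first k layers and disabling gradient updates on their parameters.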
Step four: construct and train the power amplifier Transfer Learning Neural Network (TLNN) model using the OFDM signal xnt(n) and the sampled signal ynt(n). The input signal of the TLNN model is defined as the target input (Target Input) and the output signal as the target output (Target Output); that is, the target input of the TLNN model is xnt(n) and the target output is ynt(n).
As shown in fig. 4, the TLNN model includes an input layer, a predesigned filter, and Adaptation Layers (Adaptation Layers). The input layer of the TLNN model is the same as that of the DNN model, with 5M +5 neurons.
In the embodiment of the invention, the input Xnt of the TLNN model is expressed as:

Xnt = [Int(n), Qnt(n), |xnt(n)|, |xnt(n)|^2, |xnt(n)|^3, ..., Int(n-M), Qnt(n-M), |xnt(n-M)|, |xnt(n-M)|^2, |xnt(n-M)|^3]   (3)

wherein Int(n-m), Qnt(n-m), |xnt(n-m)|, |xnt(n-m)|^2, |xnt(n-m)|^3, for m = 0, 1, ..., M, are a group of components of the power amplifier input signal xnt(n-m): Int(n-m) and Qnt(n-m) are its in-phase and quadrature components, and |xnt(n-m)| is its amplitude.
The weight and bias parameters of the pre-designed filter are fixed, and its activation function is Tanh. The pre-designed filter performs PA feature extraction on the input data Xnt; the output signal Sn of the pre-designed filter is expressed as:

Sn = f_Pre-designed Filter(Xnt)   (4)

wherein f_Pre-designed Filter(.) represents the input-output relationship of the pre-designed filter.
In the adaptation layer, l fully connected layers are defined as Transfer Learning Layers (TL Layers); the transfer learning layers and the output layer (Output Layer) constitute the adaptation layer. The output a_i of the i-th transfer learning layer TL_i is expressed as:

a_i = Tanh(w_i a_(i-1) + b_i), i = 1, 2, ..., l   (5)

wherein w_i and b_i are the weight and bias of TL_i, respectively. The output of the pre-designed filter is the input of the first transfer learning layer TL_1, i.e., a_0 = Sn.
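The adaptation-layer forward pass described above (a_0 = Sn, the transfer learning recurrence, then a linear output layer with 2 neurons) can be sketched as follows; the stand-in pre-designed filter and the layer sizes are illustrative assumptions:

```python
import numpy as np

def tlnn_forward(pre_filter, tl_layers, w_out, b_out, x):
    """TLNN forward pass: the frozen pre-designed filter yields a_0 = S_n,
    each transfer learning layer applies a_i = tanh(w_i a_{i-1} + b_i),
    and a linear 2-neuron output layer yields [y_It, y_Qt]."""
    a = pre_filter(x)               # a_0 = S_n
    for w_i, b_i in tl_layers:      # transfer learning layers
        a = np.tanh(w_i @ a + b_i)
    return w_out @ a + b_out        # linear output layer

rng = np.random.default_rng(1)
pre = lambda x: np.tanh(x)          # stand-in frozen pre-designed filter
tls = [(rng.standard_normal((4, 4)), rng.standard_normal(4)) for _ in range(2)]
w_out = rng.standard_normal((2, 4))
b_out = rng.standard_normal(2)
y = tlnn_forward(pre, tls, w_out, b_out, np.ones(4))  # [y_It(n), y_Qt(n)]
```

Only the transfer learning layers and the output layer would be updated during training; the pre-designed filter stays fixed.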
In the embodiment of the invention, k + l = N is set so that the modeling time and model performance of the constructed TLNN model can conveniently be compared with those of the DNN model.
The output layer has 2 neurons, corresponding respectively to the in-phase and quadrature components of the PA output signal, and its activation function is a linear activation function. The output of the TLNN model is expressed as:

[yIt(n), yQt(n)] = w_out a_l + b_out   (6)

wherein w_out and b_out are the weight and bias of the output layer, respectively, a_l is the output of the l-th transfer learning layer TL_l, yIt(n) represents the in-phase component of the TLNN model output signal, and yQt(n) represents its quadrature component.
Training the constructed TLNN model: the cost function of the network model is the mean square error (MSE) function, and the optimization algorithm of the network is the LM optimization algorithm. The training data are divided into a training set and a test set in a 3:2 ratio, used respectively for network training and for the final test of the network. After the model is trained, its performance is tested using the normalized mean square error (NMSE) function. In FIG. 4, y'It(n) and y'Qt(n) represent the outputs of the current TLNN model.
Step five: construct TLNN models using pre-designed filters with different numbers of layers, and select the network with the best overall performance as the final model. Train the different TLNN models; after training, record the training time of the TLNN models built from the different pre-designed filters, and test the performance of each network. Comparing the training time and NMSE performance of the different TLNN networks, the model with the shorter training time among those meeting the modeling-performance requirement is the model finally established by the invention.
After the final TLNN model is obtained, the number of pre-designed filter layers is fixed. When the PA needs to be modeled again, the TLNN model with this fixed number of pre-designed filter layers is retrained directly, which greatly reduces the PA modeling training time and suits the need to model the PA frequently.
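The selection rule of steps four and five (meet the modeling-performance requirement, then prefer the shortest training time) can be sketched as follows; the candidate results below are invented for illustration only:

```python
def select_filter_depth(results, nmse_requirement_db):
    """Among candidate pre-designed-filter depths k whose model meets the
    NMSE requirement, fix the one with the shortest training time.
    `results` maps k -> (training_time_s, nmse_db)."""
    eligible = {k: v for k, v in results.items() if v[1] <= nmse_requirement_db}
    return min(eligible, key=lambda k: eligible[k][0])

# Invented numbers for illustration only (not measured values from the patent)
results = {1: (120.0, -38.5), 2: (95.0, -40.2), 3: (70.0, -40.0), 4: (60.0, -36.0)}
best_k = select_filter_depth(results, nmse_requirement_db=-39.0)  # -> 3
```

Here k = 4 is the fastest but misses the NMSE requirement, so the rule falls back to k = 3, the fastest eligible depth.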
The effectiveness of the proposed method is demonstrated on two experimental platforms. The first experimental platform is a Solid-State Power Amplifier (SSPA): a GaN Doherty power amplifier with a small-signal gain of 13 dB, a center frequency of 2.14 GHz, and a saturation power of 43 dBm. The other test platform is a Traveling Wave Tube Amplifier (TWTA): a Ka-band traveling wave tube with an average output power of 47 dBm, an output attenuation of 3 dB, and a center frequency of 19.85 GHz. For the SSPA platform, 10000 groups each of 100-MHz, 200-MHz and 20-MHz OFDM signals were collected; for the TWTA platform, 10000 groups of 100-MHz OFDM signals were collected. Each of the above signal sets is divided in a 3:2 ratio into a training set and a test set, used respectively for training and testing the model.
In the experiment, a DNN model with 5 hidden layers (5-Hidden Layer) is first trained with the SSPA 100-MHz signal. After the model is trained, the first k layers of the DNN model are extracted as the pre-designed filter, where k is the number of pre-designed filter layers, k ∈ {1, 2, 3, 4}, and the number of transfer learning layers l satisfies k + l = N with N = 5. Next, the following networks are defined and trained:
Case 1: AkB, the source input is the SSPA 100-MHz OFDM signal and the target input is the SSPA 200-MHz OFDM signal.
Case 2: AkC, the source input is the SSPA 100-MHz OFDM signal and the target input is the SSPA 20-MHz OFDM signal.
Case 3: AkD, the source input is the SSPA 100-MHz OFDM signal and the target input is the TWTA 100-MHz OFDM signal.
These different TLNN models are trained; after training, the training time of the models built from the different pre-designed filters is recorded, and the performance of each network is tested. Comparing the training time and performance of the different networks, the model with the shortest training time that meets the modeling-performance requirement is the model finally established by the method.
As shown in FIG. 5, the training time and NMSE performance of the TLNN model are plotted for different numbers of pre-designed filter layers. In FIG. 5, the left ordinate is training time, the right ordinate is NMSE, and the abscissa is the number of pre-designed filter layers. As FIG. 5 shows, when the number of pre-designed filter layers saved from the five-layer DNN model is 3, the TLNN model has better performance and shorter modeling time; therefore, in the finally established TLNN model, the number of pre-designed filter layers is set to 3 and the number of transfer learning layers to 2. After the final network is built, the output spectrum of the TLNN model is tested. As shown in FIG. 6, the spectrum of the modeling error is below -40 dB, indicating that the model finally built by the method performs well. In FIG. 6, the abscissa is frequency offset and the ordinate is normalized power spectral density. The experimental results of FIG. 5 and FIG. 6 show that, while maintaining good modeling performance for the power amplifier, the method can greatly reduce modeling time and meet the practical need to model the power amplifier frequently.

Claims (4)

1. A fast low-complexity modeling method for transfer learning of a power amplifier is characterized by comprising the following steps:
(1) collecting the input and output signals of a power amplifier, and constructing and training a deep neural network (DNN) model of the power amplifier; the DNN model comprises an input layer, N fully connected layers and an output layer; N is a positive integer greater than 3; the number of neurons in the input layer is set to 5M+5 according to the memory depth M of the power amplifier;
(2) copying and fixing the weights, the biases and the network structure of the first k fully connected layers of the trained DNN model as a pre-designed filter; 1 ≤ k < N;
(3) constructing a transfer learning neural network TLNN model of the power amplifier and training;
the TLNN model of the power amplifier comprises an input layer, the pre-designed filter and an adaptation layer; the transfer learning layers and the output layer form the adaptation layer; there are l transfer learning layers, with k + l = N; the output of the pre-designed filter is the input of the first transfer learning layer, the output of the i-th transfer learning layer is the input of the next transfer learning layer, i = 1, 2, ..., l, and the output of the l-th transfer learning layer is the input of the output layer;
(4) constructing and training TLNN models using pre-designed filters with different numbers of layers, recording the training time of each model and testing its performance, selecting the model with short training time and good performance as the final TLNN model, and fixing the number of fully connected layers in the pre-designed filter; when a new power amplifier model needs to be constructed, the TLNN model is trained and obtained using the pre-designed filter with the fixed number of layers.
2. The method of claim 1, wherein different training data sets are used in training the DNN model and the TLNN model, the different training data sets being formed by collecting the input and output signals of different power amplifiers.
3. The method of claim 1, wherein in (1), in the constructed DNN model of the power amplifier, the activation function of the input layer is a Tanh activation function, and the input layer corresponds to components of the M+1 power amplifier input signals xns(n-m), m = 0, 1, ..., M;
a group of components of the signal xns(n-m) is Ins(n-m), Qns(n-m), |xns(n-m)|, |xns(n-m)|^2, |xns(n-m)|^3;
Ins(n-m) and Qns(n-m) respectively represent the in-phase and quadrature components of the signal xns(n-m), and |xns(n-m)| represents the amplitude of the signal xns(n-m);
the number of the neurons of the output layer is 2, the neurons respectively correspond to the in-phase component and the quadrature component of the output signal of the power amplifier, and the activation function is a linear activation function.
4. The method according to claim 1 or 3, wherein in (3), the constructed TLNN model of the power amplifier comprises:
an input layer with 5M+5 neurons, whose inputs are components of the M+1 signals xnt(n-m), m = 0, 1, ..., M; a group of components of the input signal xnt(n-m) is Int(n-m), Qnt(n-m), |xnt(n-m)|, |xnt(n-m)|^2, |xnt(n-m)|^3, wherein |xnt(n-m)| represents the amplitude of the signal xnt(n-m), and Int(n-m) and Qnt(n-m) respectively represent the in-phase and quadrature components of the signal xnt(n-m);
the pre-designed filter, whose weight and bias parameters are fixed and whose activation function is Tanh; the pre-designed filter performs feature extraction on the input data Xnt to obtain the output signal Sn = f_Pre-designed Filter(Xnt), wherein f_Pre-designed Filter(.) represents the input-output relationship of the pre-designed filter;
the transfer learning layers, in which the output of the i-th transfer learning layer is expressed as
a_i = Tanh(w_i a_(i-1) + b_i)
wherein w_i and b_i are respectively the weight and bias of the i-th transfer learning layer, and a_0 = Sn;
and the output layer, which has 2 neurons corresponding respectively to the in-phase and quadrature components of the power amplifier output signal; the activation function of the output layer is a linear activation function, and its output is expressed as
[yIt(n), yQt(n)] = w_out a_l + b_out
wherein w_out and b_out are respectively the weight and bias of the output layer, a_l is the output of the l-th transfer learning layer, and yIt(n) and yQt(n) respectively represent the in-phase and quadrature components of the power amplifier output signal.
CN202110659703.5A 2021-06-11 2021-06-11 Transfer learning rapid low-complexity modeling method facing power amplifier Pending CN113221308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110659703.5A CN113221308A (en) 2021-06-11 2021-06-11 Transfer learning rapid low-complexity modeling method facing power amplifier


Publications (1)

Publication Number Publication Date
CN113221308A true CN113221308A (en) 2021-08-06

Family

ID=77080399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110659703.5A Pending CN113221308A (en) 2021-06-11 2021-06-11 Transfer learning rapid low-complexity modeling method facing power amplifier

Country Status (1)

Country Link
CN (1) CN113221308A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108733852A (en) * 2017-04-17 2018-11-02 天津大学(青岛)海洋工程研究院有限公司 A kind of power amplifier behavior modeling method based on extreme learning machine
CN109948645A (en) * 2019-01-23 2019-06-28 西安交通大学 A kind of enterprise based on depth confrontation transfer learning evades the tax recognition methods
CN110009706A (en) * 2019-03-06 2019-07-12 上海电力学院 A kind of digital cores reconstructing method based on deep-neural-network and transfer learning
WO2021085780A1 (en) * 2019-10-29 2021-05-06 삼성전자 주식회사 Electronic device for processing input signal of power amplifier and operation method thereof


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN ZHANG et al.: "Deep Neural Network Behavioral Modeling Based on Transfer Learning for Broadband Wireless Power Amplifier", IEEE Microwave and Wireless Components Letters *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565077A (en) * 2022-02-10 2022-05-31 北京邮电大学 Deep neural network generalization modeling method for power amplifier
CN116596061A (en) * 2023-05-16 2023-08-15 北京航空航天大学 Power amplifier harmonic prediction method based on transfer learning
CN116596061B (en) * 2023-05-16 2023-12-12 北京航空航天大学 Power amplifier harmonic prediction method based on transfer learning

Similar Documents

Publication Publication Date Title
Hu et al. Convolutional neural network for behavioral modeling and predistortion of wideband power amplifiers
CN111245375B (en) Power amplifier digital predistortion method of complex-valued fully-connected recurrent neural network model
CN102487367B (en) Adaptive power amplifier digital baseband predistortion method
CN113221308A (en) Transfer learning rapid low-complexity modeling method facing power amplifier
CN110765720B (en) Power amplifier predistortion method of complex-valued pipeline recurrent neural network model
KR20070046779A (en) A pre-distorter for orthogonal frequency division multiplexing systems and method of operating the same
CN111490737A (en) Nonlinear compensation method and device for power amplifier
Li et al. Vector decomposed long short-term memory model for behavioral modeling and digital predistortion for wideband RF power amplifiers
Lee et al. Adaptive neuro-fuzzy inference system (ANFIS) digital predistorter for RF power amplifier linearization
Tanio et al. Efficient digital predistortion using sparse neural network
CN101350597A (en) Method for modeling wideband radio-frequency power amplifier
CN111988254B (en) Low-complexity peak-to-average ratio compression and predistortion joint optimization method
Zhang et al. Neural network assisted active constellation extension for PAPR reduction of OFDM system
Thompson et al. Integrating volterra series model and deep neural networks to equalize nonlinear power amplifiers
Xia et al. Signal-based digital predistortion for linearization of power amplifiers
CN114565077B (en) Deep neural network generalization modeling method for power amplifier
Langlet et al. Comparison of neural network adaptive predistortion techniques for satellite down links
CN114598274B (en) Low-complexity lookup table construction method oriented to broadband predistortion
CN113612455B (en) Working method of digital predistortion system based on iterative learning control and main curve analysis
CN115913844A (en) MIMO system digital predistortion compensation method, device, equipment and storage medium based on neural network
CN111884602A (en) Power amplifier predistortion method based on single-output-node neural network
Wu et al. Symbol-based over-the-air digital predistortion using reinforcement learning
Li et al. Complex-valued pipelined chebyshev functional link recurrent neural network for joint compensation of wideband transmitter distortions and impairments
TWI830276B (en) Method of compensating for power amplifier distortions and system
Cheaito et al. EVM derivation for multicarrier signals: Joint impact of non-linear amplification and predistortion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210806