CN110765720A - Power amplifier predistortion method based on a complex-valued pipelined recurrent neural network model - Google Patents


Info

Publication number
CN110765720A
CN110765720A (application CN201910865377.6A; granted as CN110765720B)
Authority
CN
China
Prior art keywords
power amplifier
signal
model
neural network
complex value
Prior art date
Legal status
Granted
Application number
CN201910865377.6A
Other languages
Chinese (zh)
Other versions
CN110765720B (en)
Inventor
李明玉 (Li Mingyu)
蔡振东 (Cai Zhendong)
靳一 (Jin Yi)
代志江 (Dai Zhijiang)
徐常志 (Xu Changzhi)
Current Assignee
Chongqing University
Original Assignee
Chongqing University
Priority date
Filing date
Publication date
Application filed by Chongqing University
Priority to CN201910865377.6A
Publication of CN110765720A
Application granted
Publication of CN110765720B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F1/00Details of amplifiers with only discharge tubes, only semiconductor devices or only unspecified devices as amplifying elements
    • H03F1/02Modifications of amplifiers to raise the efficiency, e.g. gliding Class A stages, use of an auxiliary oscillation
    • H03F1/0205Modifications of amplifiers to raise the efficiency, e.g. gliding Class A stages, use of an auxiliary oscillation in transistor amplifiers
    • H03F1/0288Modifications of amplifiers to raise the efficiency, e.g. gliding Class A stages, use of an auxiliary oscillation in transistor amplifiers using a main and one or several auxiliary peaking amplifiers whereby the load is connected to the main amplifier using an impedance inverter, e.g. Doherty amplifiers
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F1/00Details of amplifiers with only discharge tubes, only semiconductor devices or only unspecified devices as amplifying elements
    • H03F1/32Modifications of amplifiers to reduce non-linear distortion
    • H03F1/3241Modifications of amplifiers to reduce non-linear distortion using predistortion circuits
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F3/00Amplifiers with only discharge tubes or only semiconductor devices as amplifying elements
    • H03F3/189High-frequency amplifiers, e.g. radio frequency amplifiers
    • H03F3/19High-frequency amplifiers, e.g. radio frequency amplifiers with semiconductor devices only
    • H03F3/195High-frequency amplifiers, e.g. radio frequency amplifiers with semiconductor devices only in integrated circuits
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F3/00Amplifiers with only discharge tubes or only semiconductor devices as amplifying elements
    • H03F3/20Power amplifiers, e.g. Class B amplifiers, Class C amplifiers
    • H03F3/21Power amplifiers, e.g. Class B amplifiers, Class C amplifiers with semiconductor devices only
    • H03F3/213Power amplifiers, e.g. Class B amplifiers, Class C amplifiers with semiconductor devices only in integrated circuits

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Power Engineering (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Nonlinear Science (AREA)
  • Amplifiers (AREA)

Abstract

A power amplifier predistortion method based on a complex-valued pipelined recurrent neural network model: the power amplifier behavioral model is built with the complex-valued pipelined recurrent neural network, and a predistorter module is derived from it, so that a predistortion operation can be applied to the power amplifier input signal. First, part of the power amplifier's input and output signals are taken as test signals, the power amplifier is forward-modeled, the model weights are optimized with an augmented complex-valued real-time recurrent learning algorithm to obtain the optimal weights, and the model's representation of the amplifier's nonlinearity and memory is verified. Second, the model is inverted, which reverse-models the power amplifier and yields the predistorter structure. Finally, the power amplifier input signal is passed through the predistorter structure to obtain a predistortion-compensated signal, which is fed into the power amplifier; the adjacent channel power ratio of the resulting output signal is significantly improved.

Description

Power amplifier predistortion method based on a complex-valued pipelined recurrent neural network model
Technical Field
The invention relates to the field of digital predistortion in digital signal processing, and in particular to a power amplifier predistortion method based on a complex-valued pipelined recurrent neural network model.
Background
With the rapid development of wireless communication technology, especially the wide deployment of 4G and 5G mobile communication, modern communication systems place ever greater demands on spectrum resources. To meet this demand, high-order modulation techniques are widely used, but they make the design of radio frequency power amplifiers, which play a crucial role in the RF front end, more difficult and also cause spectral regrowth. To meet the requirements of modern wireless communication, a series of power amplifier linearization and efficiency enhancement techniques have been proposed, such as feedforward, negative feedback, linear amplification using nonlinear components (LINC), and predistortion. Predistortion divides into analog predistortion and digital predistortion. Compared with the other schemes, digital predistortion has become the mainstream power amplifier linearization technology because of its low cost, good linearization performance, and high flexibility. Its basic principle is to insert a predistorter before the power amplifier whose characteristic is the inverse of the amplifier's nonlinearity, so that a linearly amplified radio frequency output is obtained at the amplifier's output.
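The inverse-characteristic principle can be illustrated with a toy memoryless amplifier; the cubic compression model and the fixed-point inversion below are illustrative assumptions, not the patent's CPRNN method:

```python
import numpy as np

# Toy memoryless PA with cubic compression (an assumed stand-in, not the patent's model).
def pa(x):
    return x - 0.1 * x * np.abs(x) ** 2

# Ideal predistorter: find u with pa(u) ~ x by fixed-point iteration on the residual.
def predistort(x, iters=50):
    u = x.astype(complex).copy()
    for _ in range(iters):
        u += x - pa(u)          # add back the residual distortion
    return u

x = np.array([0.2 + 0.1j, 0.5 - 0.3j, 0.8 + 0.4j])
y = pa(predistort(x))           # cascade: predistorter followed by PA
print(np.max(np.abs(y - x)))    # near zero: the cascade is linear
```

The cascade of the predistorter and the amplifier behaves as a linear gain, which is exactly what digital predistortion aims for.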
An artificial neural network (ANN) is a nonlinear system with powerful intelligent information processing capability; it has simple learning rules, strong robustness, memory, and self-learning ability, and can map arbitrarily complex nonlinear relations. The combination of neural networks with power amplifier predistortion benefits from their strong approximation power in nonlinear system modeling; the neural network models mainly used for power amplifier behavioral modeling and digital predistortion include the feedforward neural network, the radial basis function (RBF) neural network, and the time delay neural network (TDNN). These communication-system applications adopt neural network learning methods based on real-valued processing. In modern engineering, however, much of the data to be modeled is complex-valued, which has driven the extension of many learning algorithms to the complex domain.
When a wideband signal drives a power amplifier, the signal bandwidth may be comparable to the inherent bandwidth of the amplifier, and the output then depends not only on the current input but also on previous inputs and outputs. An excellent power amplifier model must therefore both handle complex-valued input signals and faithfully represent the amplifier's memory effect. Traditional neural network models, however, either cannot handle complex-valued signals or cannot fully express the memory effect, and rarely achieve both.
Disclosure of Invention
Addressing the problems that existing neural network models cannot represent the memory effect of a power amplifier well and that, for complex-valued input signals, heavy training algorithms lead to long training times, the invention provides a power amplifier predistortion method based on a complex-valued pipelined recurrent neural network model that expresses the memory effect, processes complex-valued signals, and updates the model parameters quickly. The specific technical scheme is as follows:
a power amplifier predistortion method based on a complex-valued pipelined recurrent neural network model, characterized in that:
the method comprises the following steps:
s1: generating an original signal, wherein the original signal is used as an input signal of a target power amplifier;
s2: establishing a neural network model, training it to obtain a power amplifier forward model, and inverting the forward model to obtain a predistortion structure, i.e., the inverse model of the forward model;
s3: modulating an original signal to obtain a radio frequency signal;
s4: passing the radio frequency signal through a predistortion structure to obtain a predistortion signal;
s5: passing the predistorted signal through the target power amplifier to obtain a compensated power amplifier output signal.
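Steps S1 to S5 can be sketched end to end at baseband; here a toy memoryless amplifier stands in for the real PA and a least-squares polynomial fit stands in for the CPRNN forward model (every name and model choice is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def pa(x):                                  # toy PA: cubic compression (assumed)
    return x - 0.08 * x * np.abs(x) ** 2

# S1: original complex baseband signal
s = 0.2 * (rng.standard_normal(4000) + 1j * rng.standard_normal(4000))

# S2: forward-model the PA from input/output data, then invert the fitted
#     model numerically to obtain the predistorter
X = np.stack([s, s * np.abs(s) ** 2], axis=1)
c = np.linalg.lstsq(X, pa(s), rcond=None)[0]        # fitted forward model

def predistort(x, iters=40):
    u = x.astype(complex).copy()
    for _ in range(iters):                  # fixed-point inversion of the fit
        u += x - (c[0] * u + c[1] * u * np.abs(u) ** 2)
    return u

# S3-S5: modulation is omitted at baseband; predistort, then amplify
mse_raw = np.mean(np.abs(pa(s) - s) ** 2)
mse_dpd = np.mean(np.abs(pa(predistort(s)) - s) ** 2)
print(mse_raw, mse_dpd)                     # distortion drops by orders of magnitude
```

The forward-model-then-invert ordering mirrors S2: the predistorter is obtained from the fitted model, not from the amplifier directly.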
Further: the S2 includes the following steps:
s2-1: selecting a segment from the original signal as a test signal;
s2-2: modulating the test signal to obtain a radio frequency signal;
s2-3: the radio frequency signal is amplified to obtain an output signal of the power amplifier;
s2-4: reducing the power of the output signal of the power amplifier through an attenuator to obtain an attenuated signal;
s2-5: carrying out down-conversion and filtering on the attenuated signal to obtain a target output signal;
s2-6: establishing a complex-valued pipelined recurrent neural network model, and training it with the test signal as input and the target output signal as desired output to obtain the power amplifier forward model;
s2-7: optimizing the power amplifier forward model with the augmented complex-valued real-time recurrent learning algorithm to obtain the optimized forward model, and then inverting the optimized forward model to obtain the predistortion structure of the power amplifier.
Further: the S2-6 comprises the following steps:
s2-6-1: establishing the complex-valued pipelined recurrent neural network model structure, wherein the model comprises M identical modules;
each module contains N neurons; the first M-1 modules are non-fully-connected recurrent networks: N-1 of each module's neuron outputs are fed back to its own input, while the output of the remaining neuron, i.e., the first neuron, is passed directly to the next module; the last module is a fully connected recurrent network whose neuron outputs are all fed back to its input;
s2-6-2: introducing augmented complex statistics: to account for the cross statistics of the two variables, the conjugate x* of the input variable x is appended to the input, improving the completeness of the information, giving the augmented input vector $\Lambda = [x^T, x^H]^T$;
s2-6-3: the weight vector of the l-th neuron of the output layer of each module is
$w_l^a = [w_{l,1}, w_{l,2}, \dots, w_{l,2(p+N+1)}]^T$
and all M modules share the same augmented weight matrix
$W^a = [w_1^a, w_2^a, \dots, w_N^a]$
s2-6-4: the mathematical expression of the complex-valued pipelined recurrent neural network model structure is
$y_{t,l}(k) = \psi\big(v_{t,l}(k)\big), \qquad v_{t,l}(k) = (w_l^a)^T I_t^a(k)$
where $\psi(\cdot)$ is the activation function, and $y_{t,l}(k)$ and $v_{t,l}(k)$ denote, respectively, the output of the l-th neuron of the t-th module at time k and the net input of its activation function;
the augmented input of the t-th module (t = 1, ..., M-1) at time k and the input vector of the M-th module are
$I_t^a(k) = [I_t(k)^T, I_t(k)^H]^T$, with
$I_t(k) = [s(k-t), \dots, s(k-t-p+1),\, 1+j,\, y_{t+1,1}(k-1), y_{t,2}(k-1), \dots, y_{t,N}(k-1)]^T$
$I_M(k) = [s(k-M), \dots, s(k-M-p+1),\, 1+j,\, y_{M,1}(k-1), y_{M,2}(k-1), \dots, y_{M,N}(k-1)]^T$
where p is the number of external inputs, i.e., the input sample is delayed over p unit intervals, 1+j is the complex bias, and the superscript a marks an augmented expression;
s2-6-5: as seen in S2-6-4, the input of each of the first M-1 modules contains the first-neuron output of the following module, $y_{t+1,1}(k-1)$, which replaces the module's own first feedback delay; the last module M has no successor, so its input retains the feedback delays of all of its own outputs;
the mathematical expression of the model in S2-6-4 is simplified as:
Figure BDA00022011088200000412
s2-6-6: using the formula of S2-6-5, the first-neuron output of the first module is taken as the final output signal of the power amplifier forward model: $\hat{d}(k) = y_{1,1}(k)$.
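Steps S2-6-1 through S2-6-6 can be sketched as a single forward pass per time step; shapes, module ordering, and the choice of tanh for the activation are illustrative assumptions:

```python
import numpy as np

def cprnn_forward(s, W, M, N, p, y_prev):
    """One time step of a complex-valued pipelined RNN (shapes assumed).
    s      : complex input samples, long enough that s[k-M-p+1] exists
    W      : (N, 2*(p+1+N)) complex weight matrix shared by all M modules
    y_prev : (M, N) module outputs at time k-1
    returns: (M, N) module outputs at time k"""
    y = np.zeros((M, N), dtype=complex)
    k = len(s) - 1
    for t in range(M, 0, -1):                        # modules M down to 1
        ext = s[k - t - p + 1 : k - t + 1][::-1]     # s(k-t) ... s(k-t-p+1)
        fb = y_prev[t - 1].copy()                    # own outputs at k-1
        if t < M:                                    # modules 1..M-1: slot 1 is
            fb[0] = y_prev[t, 0]                     # the next module's neuron 1
        I = np.concatenate([ext, [1 + 1j], fb])      # external + bias + feedback
        Ia = np.concatenate([I, np.conj(I)])         # augmented input [I; I*]
        y[t - 1] = np.tanh(W @ Ia)                   # psi taken as tanh here
    return y

M, N, p = 3, 4, 2
s = 0.05 * np.arange(12) * (1 + 0.5j)
W = 0.01 * np.ones((N, 2 * (p + 1 + N)), dtype=complex)
y = cprnn_forward(s, W, M, N, p, np.zeros((M, N), complex))
print(y[0, 0])      # y_{1,1}(k): the model's output sample
```

The final model output is the first neuron of the first module, matching S2-6-6.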
further: the S2-7 comprises the following steps:
s2-7-1: subtracting the model output from the target output signal of S2-5 gives the error of the t-th module at time k:
$e_t(k) = d(k-t+1) - y_{t,1}(k) = e_t^r(k) + j\, e_t^i(k)$
where $e_t^r(k)$ is the real-part error, $e_t^i(k)$ is the imaginary-part error, and d(k-t+1) is the target output sample; since the input and output signals of the power amplifier are complex-valued, the cost function of the complex-valued pipelined recurrent neural network model is:
$E(k) = \frac{1}{2} \sum_{t=1}^{M} \lambda^{t-1} \big[ e_t^r(k)^2 + e_t^i(k)^2 \big] = \frac{1}{2} \sum_{t=1}^{M} \lambda^{t-1} |e_t(k)|^2$
where λ is an exponential weighting factor that controls the influence of each module's memory effect on the overall structure;
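The cost above is straightforward to compute once the M per-module errors are known; the function name and layout are assumptions:

```python
import numpy as np

def cprnn_cost(e, lam):
    """E(k) = (1/2) * sum_t lam**(t-1) * |e_t(k)|**2 over module errors e[0..M-1]."""
    t = np.arange(len(e))
    return 0.5 * float(np.sum(lam ** t * np.abs(e) ** 2))

e = np.array([1 + 0j, 1j])           # e_1(k), e_2(k)
print(cprnn_cost(e, 1.0))            # 0.5*(1 + 1)   = 1.0
print(cprnn_cost(e, 0.5))            # 0.5*(1 + 0.5) = 0.75
```

Smaller λ discounts the deeper (more delayed) modules, as the text describes.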
s2-7-2: a gradient descent learning method is used to adapt the weights along the whole complex-valued pipelined recurrent neural network so as to minimize the cost function:
$w_{l,n}^a(k+1) = w_{l,n}^a(k) + \Delta w_{l,n}^a(k)$
$\Delta w_{l,n}^a(k) = -\eta \left( \frac{\partial E(k)}{\partial w_{l,n}^{a,r}} + j\, \frac{\partial E(k)}{\partial w_{l,n}^{a,i}} \right)$
where $w_{l,n}^a$ is the complex-valued augmented weight (the n-th weight of the l-th neuron) with real part $w_{l,n}^{a,r}$ and imaginary part $w_{l,n}^{a,i}$, the first equation is the weight update at time k, the second is the gradient expression, and η is the learning rate, taken as a constant;
s2-7-3: the gradients of the cost function with respect to the real and imaginary parts of the complex augmented weight are calculated as:
$\frac{\partial E(k)}{\partial w_{l,n}^{a,r}} = -\sum_{t=1}^{M} \lambda^{t-1}\Big[ e_t^r(k)\, \frac{\partial y_{t,1}^r(k)}{\partial w_{l,n}^{a,r}} + e_t^i(k)\, \frac{\partial y_{t,1}^i(k)}{\partial w_{l,n}^{a,r}} \Big]$
$\frac{\partial E(k)}{\partial w_{l,n}^{a,i}} = -\sum_{t=1}^{M} \lambda^{t-1}\Big[ e_t^r(k)\, \frac{\partial y_{t,1}^r(k)}{\partial w_{l,n}^{a,i}} + e_t^i(k)\, \frac{\partial y_{t,1}^i(k)}{\partial w_{l,n}^{a,i}} \Big]$
where the partial derivatives of $y_{t,1}(k)$ are the sensitivities of the t-th module's first neuron at time k to the weight $w_{l,n}^a$;
s2-7-4: according to the Cauchy-Riemann equations, the real- and imaginary-part sensitivities satisfy
$\frac{\partial y_{t,1}^r(k)}{\partial w_{l,n}^{a,r}} = \frac{\partial y_{t,1}^i(k)}{\partial w_{l,n}^{a,i}}, \qquad \frac{\partial y_{t,1}^i(k)}{\partial w_{l,n}^{a,r}} = -\frac{\partial y_{t,1}^r(k)}{\partial w_{l,n}^{a,i}}$
so the four real sensitivities collapse into a single complex sensitivity
$\pi_{l,n}^{t}(k) = \frac{\partial y_{t,1}^r(k)}{\partial w_{l,n}^{a,r}} + j\, \frac{\partial y_{t,1}^i(k)}{\partial w_{l,n}^{a,r}};$
s2-7-5: substituting the formulas of S2-7-4 and S2-7-3 into the update of S2-7-2, the gradient step is obtained as
$\Delta w_{l,n}^a(k) = \eta \sum_{t=1}^{M} \lambda^{t-1}\, e_t(k)\, \big[\pi_{l,n}^{t}(k)\big]^*$
and the sensitivity of the j-th neuron of the t-th module at time k obeys the recursion
$\pi_{l,n}^{t,j}(k) = \psi'\big(v_{t,j}(k)\big)\Big[ \sum_{m=1}^{N} w_{j,m}^a\, \pi_{l,n}^{t,m}(k-1) + \delta_{lj}\, I_{t,n}^a(k) \Big]$
where $w_{j,m}^a$ denotes the weight of neuron j on the feedback of neuron m and $\delta_{lj}$ is the Kronecker delta function;
S2-7-6: the final weight updating equation of the optimized power amplifier positive model is as follows:
Figure BDA0002201108820000071
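For a single linear neuron the update above reduces to a complex LMS step, since the sensitivity of $y = w^T I$ with respect to w is simply the input I; a toy convergence check (all values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy check of the complex gradient step dw = eta * e * conj(pi), here with a
# linear "neuron" y = w^T I so the sensitivity pi = dy/dw is just the input I.
w_true = np.array([0.5 - 0.2j, -0.3 + 0.4j])
w = np.zeros(2, dtype=complex)
eta = 0.1
for _ in range(2000):
    I = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    d = w_true @ I                 # desired response
    e = d - w @ I                  # complex error
    w += eta * e * np.conj(I)      # dw = eta * e * pi*  (pi = I here)
print(np.abs(w - w_true).max())    # converges toward w_true
```

The conjugation of the sensitivity is what makes the step descend the real-valued cost over complex weights.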
s2-7-7: inverting the optimized power amplifier forward model to obtain the predistortion structure of the power amplifier.
The beneficial effects of the invention are as follows. First, the invention combines the strengths of traditional neural network models: the pipelined recurrent neural network, which expresses the memory effect effectively, is generalized to the complex domain to obtain a complex-valued pipelined recurrent neural network (CPRNN) model for power amplifier modeling, and the real-time recurrent learning algorithm used for parameter optimization is likewise generalized and further derived into an augmented complex-valued real-time recurrent learning (ACRTRL) algorithm for the CPRNN model. Practical results show that the CPRNN model trained with the ACRTRL algorithm models the power amplifier accurately and compensates the signal effectively.
Second, the CPRNN model enables more accurate modeling of the power amplifier.
Third, generalizing the RTRL algorithm to the complex domain and deriving the augmented ACRTRL algorithm updates the model parameters effectively while reducing complexity and shortening convergence time.
Fourth, the method applies to currently popular power amplifier types such as class-F and Doherty amplifiers, and thus has wide applicability.
Drawings
FIG. 1 is a block diagram of the present invention;
FIG. 2 is a structural diagram of the CPRNN of the present invention;
FIG. 3 is a graph of the modeling effect of the model of the present invention;
fig. 4 is a diagram of the effect of the present invention after predistortion.
Detailed Description
The following detailed description of preferred embodiments of the invention, taken in conjunction with the accompanying drawings, is intended to make the advantages and features of the invention easier for those skilled in the art to understand, and thereby to define the scope of protection of the invention clearly.
As shown in figs. 1 and 2: a power amplifier predistortion method based on a complex-valued pipelined recurrent neural network model, in which the predistortion system comprises an RF power amplifier, a computer, a vector signal generator (VSG), and a vector signal analyzer (VSA); the power amplifier in this implementation is a class-F amplifier. In this embodiment the complex-valued pipelined recurrent neural network model is abbreviated CPRNN, and the augmented complex-valued real-time recurrent learning algorithm is abbreviated ACRTRL;
the method comprises the following steps:
s1: using MATLAB to generate an LTE dual-carrier signal, which serves as the input signal of the target power amplifier;
s2: establishing a neural network model, training it to obtain a power amplifier forward model, and inverting the forward model to obtain a predistortion structure, which is used to compensate the nonlinearity of the power amplifier;
wherein, S2 specifically adopts the following steps:
s2-1: selecting one section of the LTE dual-carrier signal as a test signal;
s2-2: downloading the test signal into the vector signal generator and modulating it to obtain a radio frequency signal;
s2-3: the radio frequency signal is amplified to obtain an output signal of the power amplifier;
s2-4: reducing the power of the output signal of the power amplifier through an attenuator to obtain an attenuated signal;
s2-5: carrying out down-conversion and filtering on the attenuated signal to obtain a target output signal;
s2-6: establishing a complex-valued pipelined recurrent neural network model in MATLAB, and training it with the test signal as input and the target output signal as desired output to obtain the power amplifier forward model;
s2-7: optimizing the power amplifier forward model with the augmented complex-valued real-time recurrent learning algorithm to obtain the optimized forward model, and then inverting the optimized forward model to obtain the predistortion structure of the power amplifier.
The modeling effect after optimization is shown in fig. 3, which plots the target output signal obtained by down-conversion and filtering, the predicted output of the complex-valued pipelined recurrent neural network model, and the model error, i.e., the error between the prediction and the measured value.
As can be seen from fig. 3, the model characterizes the power amplifier accurately, with the modeling error kept below -50 dB;
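A modeling error quoted in dB is typically a normalized mean-square error of this form; the signal values below are placeholders, not the measured LTE data:

```python
import numpy as np

# Normalized mean-square modeling error in dB, the figure-of-merit behind
# statements like "error below -50 dB".
def nmse_db(d, y):
    return 10 * np.log10(np.sum(np.abs(d - y) ** 2) / np.sum(np.abs(d) ** 2))

d = np.exp(1j * np.linspace(0.0, 6.28, 256))     # stand-in measured output
y = d * (1 + 1e-3)                               # stand-in model prediction
print(nmse_db(d, y))                             # -> -60.0 dB
```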
s3: modulating the LTE dual-carrier signal to obtain a radio frequency signal;
s4: passing the radio frequency signal through a predistortion structure to obtain a predistortion signal;
s5: passing the predistorted signal through the target power amplifier to obtain a compensated power amplifier output signal.
As shown in fig. 4, the figure compares the predistorted signal processed by the predistortion structure, i.e., the compensated target power amplifier output, with the uncompensated output obtained by feeding the input signal directly into the target power amplifier, and with the original signal.
The implementation results show that the adjacent channel power ratio (ACPR) of the compensated target power amplifier output signal is improved by 19.95 dB; together with fig. 4, this shows that the CPRNN model trained with the ACRTRL algorithm effectively compensates the memory effect and nonlinear characteristics of the power amplifier.
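An ACPR figure of the kind quoted above can be estimated at baseband by integrating FFT power over the main and adjacent channels; the bandwidth and offset values below are illustrative, not those of the LTE test signal:

```python
import numpy as np

def acpr_db(x, fs, bw, offset):
    """Adjacent channel power ratio: adjacent-band power over main-band power, in dB."""
    spec = np.abs(np.fft.fftshift(np.fft.fft(x))) ** 2
    f = np.fft.fftshift(np.fft.fftfreq(len(x), 1 / fs))
    main = spec[np.abs(f) <= bw / 2].sum()
    adj = spec[np.abs(f - offset) <= bw / 2].sum()
    return 10 * np.log10(adj / main)

# sanity check: a -20 dB tone placed in the adjacent channel
t = np.arange(1000) / 100.0
x = np.ones(1000) + 0.1 * np.exp(2j * np.pi * 25 * t)
print(acpr_db(x, fs=100, bw=20, offset=25))   # -> approximately -20.0
```

Real measurements would use both adjacent channels and a windowed PSD estimate, but the band-power ratio is the same idea.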
Wherein, in the above steps, the S2-6 includes the following steps:
s2-6-1: establishing the complex-valued pipelined recurrent neural network model structure, wherein the model comprises M identical modules;
each module contains N neurons; the first M-1 modules are non-fully-connected recurrent networks: N-1 of each module's neuron outputs are fed back to its own input, while the output of the remaining neuron, i.e., the first neuron, is passed directly to the next module; the last module is a fully connected recurrent network whose neuron outputs are all fed back to its input;
s2-6-2: introducing augmented complex statistics: to account for the cross statistics of the two variables, the conjugate x* of the input variable x is appended to the input, improving the completeness of the information, giving the augmented input vector $\Lambda = [x^T, x^H]^T$;
s2-6-3: the weight vector of the l-th neuron of the output layer of each module is
$w_l^a = [w_{l,1}, w_{l,2}, \dots, w_{l,2(p+N+1)}]^T$
and all M modules share the same augmented weight matrix
$W^a = [w_1^a, w_2^a, \dots, w_N^a]$
s2-6-4: the mathematical expression of the complex-valued pipelined recurrent neural network model structure is
$y_{t,l}(k) = \psi\big(v_{t,l}(k)\big), \qquad v_{t,l}(k) = (w_l^a)^T I_t^a(k)$
where $\psi(\cdot)$ is the activation function, and $y_{t,l}(k)$ and $v_{t,l}(k)$ denote, respectively, the output of the l-th neuron of the t-th module at time k and the net input of its activation function;
the augmented input of the t-th module (t = 1, ..., M-1) at time k and the input vector of the M-th module are
$I_t^a(k) = [I_t(k)^T, I_t(k)^H]^T$, with
$I_t(k) = [s(k-t), \dots, s(k-t-p+1),\, 1+j,\, y_{t+1,1}(k-1), y_{t,2}(k-1), \dots, y_{t,N}(k-1)]^T$
$I_M(k) = [s(k-M), \dots, s(k-M-p+1),\, 1+j,\, y_{M,1}(k-1), y_{M,2}(k-1), \dots, y_{M,N}(k-1)]^T$
where p is the number of external inputs, i.e., the input sample is delayed over p unit intervals, 1+j is the complex bias, and the superscript a marks an augmented expression;
s2-6-5: the model expression of S2-6-4 simplifies to:
$y_{t,l}(k) = \psi\big((w_l^a)^T I_t^a(k)\big)$
s2-6-6: using the formula of S2-6-5, the first-neuron output of the first module is taken as the final output signal of the power amplifier forward model: $\hat{d}(k) = y_{1,1}(k)$.
wherein the S2-7 comprises the following steps:
s2-7-1: subtracting the model output from the target output signal of S2-5 gives the error of the t-th module at time k:
$e_t(k) = d(k-t+1) - y_{t,1}(k) = e_t^r(k) + j\, e_t^i(k)$
where $e_t^r(k)$ is the real-part error, $e_t^i(k)$ is the imaginary-part error, and d(k-t+1) is the target output sample; since the input and output signals of the power amplifier are complex-valued, the cost function of the complex-valued pipelined recurrent neural network model is:
$E(k) = \frac{1}{2} \sum_{t=1}^{M} \lambda^{t-1} \big[ e_t^r(k)^2 + e_t^i(k)^2 \big] = \frac{1}{2} \sum_{t=1}^{M} \lambda^{t-1} |e_t(k)|^2$
where λ is an exponential weighting factor that controls the influence of each module's memory effect on the overall structure;
s2-7-2: a gradient descent learning method is used to adapt the weights along the whole complex-valued pipelined recurrent neural network so as to minimize the cost function:
$w_{l,n}^a(k+1) = w_{l,n}^a(k) + \Delta w_{l,n}^a(k)$
$\Delta w_{l,n}^a(k) = -\eta \left( \frac{\partial E(k)}{\partial w_{l,n}^{a,r}} + j\, \frac{\partial E(k)}{\partial w_{l,n}^{a,i}} \right)$
where $w_{l,n}^a$ is the complex-valued augmented weight (the n-th weight of the l-th neuron) with real part $w_{l,n}^{a,r}$ and imaginary part $w_{l,n}^{a,i}$, the first equation is the weight update at time k, the second is the gradient expression, and η is the learning rate, taken as a constant;
s2-7-3: the gradients of the cost function with respect to the real and imaginary parts of the complex augmented weight are calculated as:
$\frac{\partial E(k)}{\partial w_{l,n}^{a,r}} = -\sum_{t=1}^{M} \lambda^{t-1}\Big[ e_t^r(k)\, \frac{\partial y_{t,1}^r(k)}{\partial w_{l,n}^{a,r}} + e_t^i(k)\, \frac{\partial y_{t,1}^i(k)}{\partial w_{l,n}^{a,r}} \Big]$
$\frac{\partial E(k)}{\partial w_{l,n}^{a,i}} = -\sum_{t=1}^{M} \lambda^{t-1}\Big[ e_t^r(k)\, \frac{\partial y_{t,1}^r(k)}{\partial w_{l,n}^{a,i}} + e_t^i(k)\, \frac{\partial y_{t,1}^i(k)}{\partial w_{l,n}^{a,i}} \Big]$
where the partial derivatives of $y_{t,1}(k)$ are the sensitivities of the t-th module's first neuron at time k to the weight $w_{l,n}^a$;
s2-7-4: according to the Cauchy-Riemann equations, the real- and imaginary-part sensitivities satisfy
$\frac{\partial y_{t,1}^r(k)}{\partial w_{l,n}^{a,r}} = \frac{\partial y_{t,1}^i(k)}{\partial w_{l,n}^{a,i}}, \qquad \frac{\partial y_{t,1}^i(k)}{\partial w_{l,n}^{a,r}} = -\frac{\partial y_{t,1}^r(k)}{\partial w_{l,n}^{a,i}}$
so the four real sensitivities collapse into a single complex sensitivity
$\pi_{l,n}^{t}(k) = \frac{\partial y_{t,1}^r(k)}{\partial w_{l,n}^{a,r}} + j\, \frac{\partial y_{t,1}^i(k)}{\partial w_{l,n}^{a,r}};$
s2-7-5: substituting the formulas of S2-7-4 and S2-7-3 into the update of S2-7-2, the gradient step is obtained as
$\Delta w_{l,n}^a(k) = \eta \sum_{t=1}^{M} \lambda^{t-1}\, e_t(k)\, \big[\pi_{l,n}^{t}(k)\big]^*$
and the sensitivity of the j-th neuron of the t-th module at time k obeys the recursion
$\pi_{l,n}^{t,j}(k) = \psi'\big(v_{t,j}(k)\big)\Big[ \sum_{m=1}^{N} w_{j,m}^a\, \pi_{l,n}^{t,m}(k-1) + \delta_{lj}\, I_{t,n}^a(k) \Big]$
where $w_{j,m}^a$ denotes the weight of neuron j on the feedback of neuron m and $\delta_{lj}$ is the Kronecker delta function;
S2-7-6: the final weight updating equation of the optimized power amplifier positive model is as follows:
s2-7-7: inverting the optimized power amplifier forward model to obtain the predistortion structure of the power amplifier.

Claims (4)

1. A power amplifier predistortion method based on a complex-valued pipelined recurrent neural network model, characterized in that:
the method comprises the following steps:
s1: generating an original signal, wherein the original signal is used as an input signal of a target power amplifier;
s2: establishing a neural network model, training it to obtain a power amplifier forward model, and inverting the forward model to obtain a predistortion structure;
s3: modulating an original signal to obtain a radio frequency signal;
s4: passing the radio frequency signal through a predistortion structure to obtain a predistortion signal;
s5: passing the predistorted signal through the target power amplifier to obtain a compensated power amplifier output signal.
2. The power amplifier predistortion method based on a complex-valued pipelined recurrent neural network model according to claim 1, characterized in that:
the S2 includes the following steps:
s2-1: selecting a segment from the original signal as a test signal;
s2-2: modulating the test signal to obtain a radio frequency signal;
s2-3: the radio frequency signal is amplified to obtain an output signal of the power amplifier;
s2-4: reducing the power of the output signal of the power amplifier through an attenuator to obtain an attenuated signal;
s2-5: carrying out down-conversion and filtering on the attenuated signal to obtain a target output signal;
s2-6: establishing a complex-valued pipelined recurrent neural network model, and training it with the test signal as input and the target output signal as desired output to obtain the power amplifier forward model;
s2-7: optimizing the power amplifier forward model with the augmented complex-valued real-time recurrent learning algorithm to obtain the optimized forward model, and then inverting the optimized forward model to obtain the predistortion structure of the power amplifier.
3. The power amplifier predistortion method based on a complex-valued pipelined recurrent neural network model according to claim 1, characterized in that:
the S2-6 comprises the following steps:
s2-6-1: establishing the complex-valued pipelined recurrent neural network model structure, wherein the model comprises M identical modules;
each module contains N neurons; the first M-1 modules are non-fully-connected recurrent networks: N-1 of each module's neuron outputs are fed back to its own input, while the output of the remaining neuron, i.e., the first neuron, is passed directly to the next module; the last module is a fully connected recurrent network whose neuron outputs are all fed back to its input;
s2-6-2: introducing augmented complex statistics: to account for the cross statistics of the two variables, the conjugate x* of the input variable x is appended to the input, improving the completeness of the information, giving the augmented input vector $\Lambda = [x^T, x^H]^T$;
s2-6-3: the weight vector of the l-th neuron of the output layer of each module is
$w_l^a = [w_{l,1}, w_{l,2}, \dots, w_{l,2(p+N+1)}]^T$
and all M modules share the same augmented weight matrix $W^a = [w_1^a, w_2^a, \dots, w_N^a]$;
s2-6-4: the mathematical expression of the complex-valued pipelined recurrent neural network model structure is
$y_{t,l}(k) = \psi\big(v_{t,l}(k)\big), \qquad v_{t,l}(k) = (w_l^a)^T I_t^a(k)$
where $\psi(\cdot)$ is the activation function, and $y_{t,l}(k)$ and $v_{t,l}(k)$ denote, respectively, the output of the l-th neuron of the t-th module at time k and the net input of its activation function;
the augmented input I_t^a(k) of the t-th module (t = 1, ..., M-1) at time k and the input vector of the M-th module are
I_t^a(k) = [I_t^T(k), I_t^H(k)]^T,
I_t(k) = [s(k-t), ..., s(k-t-p+1), 1+j, y_{t+1,1}(k-1), y_{t,2}(k-1), ..., y_{t,N}(k-1)]^T,
I_M^a(k) = [I_M^T(k), I_M^H(k)]^T,
I_M(k) = [s(k-M), ..., s(k-M-p+1), 1+j, y_{M,1}(k-1), y_{M,2}(k-1), ..., y_{M,N}(k-1)]^T,
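The construction of a non-final module's input vector I_t(k) can be sketched as follows (illustrative helper, assuming a zero-padded complex input sequence; the names are not from the patent):

```python
import numpy as np

def module_input(s, k, t, p, y_mod, y_next_first):
    """Build I_t(k) for a module t < M: p delayed external samples
    s(k-t)..s(k-t-p+1), the complex bias 1+j, the first output of the
    next module y_{t+1,1}(k-1), and the delayed feedback
    y_{t,2}(k-1)..y_{t,N}(k-1) of this module's own neurons.
    `s` is the complex input sequence; `y_mod` holds this module's
    N outputs at time k-1."""
    ext = [s[k - t - i] for i in range(p)]        # s(k-t) ... s(k-t-p+1)
    feedback = [y_next_first] + list(y_mod[1:])   # y_{t+1,1}, y_{t,2..N}
    return np.array(ext + [1 + 1j] + feedback, dtype=complex)
```

The resulting vector has length p + 1 + N, and its augmented counterpart I_t^a(k) (twice that length) is obtained by appending the conjugate.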
wherein p is the number of external inputs, i.e. the input sample is delayed over p unit intervals; the superscript a indicates that an expression is augmented, and the superscript * denotes the complex conjugate;
S2-6-5: the model expression in S2-6-4 is simplified as
y_{t,l}(k) = ψ(v_{t,l}(k)) = ψ((w_l^a(k))^T I_t^a(k));
S2-6-6: using the formula obtained in S2-6-5, the output of the first neuron of the first module is taken as the final output signal of the power amplifier forward model:
y_out(k) = y_{1,1}(k) = ψ((w_1^a(k))^T I_1^a(k)).
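Putting S2-6-1 through S2-6-6 together, one time step of the pipeline can be sketched as below (a simplified sketch: zero initial feedback, tanh standing in for ψ, and illustrative names):

```python
import numpy as np

def cprnn_step(s, k, W_a, y_prev, p):
    """One forward pass at time k.  W_a is the shared augmented weight
    matrix (N x 2(p+N+1)) of S2-6-3; y_prev[t-1] holds module t's outputs
    at time k-1.  Module M feeds its own delayed outputs back, modules
    t < M receive y_{t+1,1}(k-1) from the next module; the model output
    is y_{1,1}(k) = y[0, 0]."""
    M, N = y_prev.shape
    y = np.zeros_like(y_prev)
    for t in range(M, 0, -1):                          # module M first
        ext = [s[k - t - i] for i in range(p)]
        if t == M:
            fb = list(y_prev[t - 1])                   # self feedback
        else:
            fb = [y_prev[t][0]] + list(y_prev[t - 1][1:])  # pipeline feedback
        I = np.array(ext + [1 + 1j] + fb, dtype=complex)
        Ia = np.concatenate([I, np.conj(I)])           # augmented input
        y[t - 1] = np.tanh(W_a @ Ia)                   # psi(v_{t,l}(k))
    return y
```

A caller would iterate this over k, passing each step's `y` back in as `y_prev` for the next step.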
4. The power amplifier predistortion method of the complex-valued pipeline recurrent neural network model as claimed in claim 1, characterized in that:
S2-7 comprises the following steps:
s2-7-1: by using the measurement in S2-1Subtracting the target output signal in S2-5 from the test signal to obtain the t-th module error expression at the moment kWherein
Figure FDA0002201108810000034
Is the error of the real value of the signal,
Figure FDA0002201108810000035
is an imaginary error, d (k-t +1) is a target output signal, and since the input and output signals of the power amplifier are complex, the cost function of the complex value pipeline recurrent neural network model is as follows:
Figure FDA0002201108810000036
wherein λ is an exponential weighting factor used to control the influence of each module's memory effect on the overall structure;
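The module-weighted cost above can be sketched numerically (an illustrative helper reflecting the reconstructed form of the cost; the name is not from the patent):

```python
import numpy as np

def pipeline_cost(errors, lam):
    """Cost function of S2-7-1 (reconstructed form):
    E(k) = 1/2 * sum_{t=1}^{M} lam**(t-1) * |e_t(k)|^2,
    where lam is the exponential weighting factor that discounts the
    contribution of deeper (more delayed) modules."""
    e = np.asarray(errors, dtype=complex)
    t = np.arange(1, e.size + 1)
    return 0.5 * float(np.sum(lam ** (t - 1) * np.abs(e) ** 2))
```

With lam = 1 every module contributes equally; lam < 1 emphasizes the first module, whose first neuron is the model output.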
S2-7-2: the weight coefficients are adapted along the whole complex-valued pipeline recurrent neural network by a gradient-descent learning method so as to minimize the cost function:
w_{l,n}^a(k+1) = w_{l,n}^a(k) + Δw_{l,n}^a(k),
Δw_{l,n}^a(k) = -η ∇_{w_{l,n}^a} E(k),
∇_{w_{l,n}^a} E(k) = ∂E(k)/∂w_{r,l,n}(k) + j ∂E(k)/∂w_{i,l,n}(k),
wherein w_{l,n}^a(k) is the complex-valued augmented weight (the n-th weight of the l-th neuron), the first equation is its update formula at time k, ∇_{w_{l,n}^a} E(k) is the gradient expression, and η is the learning rate, usually taken as a small constant;
S2-7-3: the gradients of the cost function with respect to the real and imaginary parts of the complex-valued augmented weight are
∂E(k)/∂w_{r,l,n}(k) = -Σ_{t=1}^{M} λ^{t-1} [e_{r,t}(k) ∂y_{r,t,1}(k)/∂w_{r,l,n}(k) + e_{i,t}(k) ∂y_{i,t,1}(k)/∂w_{r,l,n}(k)],
∂E(k)/∂w_{i,l,n}(k) = -Σ_{t=1}^{M} λ^{t-1} [e_{r,t}(k) ∂y_{r,t,1}(k)/∂w_{i,l,n}(k) + e_{i,t}(k) ∂y_{i,t,1}(k)/∂w_{i,l,n}(k)],
wherein π_{t,l,n}(k) = ∂y_{t,1}(k)/∂w_{l,n}^a(k) denotes the sensitivity of the t-th module with respect to the l-th neuron at time k;
S2-7-4: according to the Cauchy-Riemann equations, the real-part and imaginary-part sensitivity functions that compose the sensitivity satisfy
π_{r,t,l,n}(k) = ∂y_{r,t,1}(k)/∂w_{r,l,n}(k) = ∂y_{i,t,1}(k)/∂w_{i,l,n}(k),
π_{i,t,l,n}(k) = ∂y_{i,t,1}(k)/∂w_{r,l,n}(k) = -∂y_{r,t,1}(k)/∂w_{i,l,n}(k),
so that π_{t,l,n}(k) = π_{r,t,l,n}(k) + j π_{i,t,l,n}(k);
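The Cauchy-Riemann relations invoked in S2-7-4 hold for any analytic activation and can be checked numerically (an illustrative sketch; tanh stands in for ψ):

```python
import numpy as np

def cr_residual(f, z, h=1e-6):
    """For an analytic f, the Cauchy-Riemann equations are equivalent to
    the derivative along the imaginary axis equalling j times the
    derivative along the real axis; return the magnitude of the mismatch
    estimated by central differences."""
    d_real = (f(z + h) - f(z - h)) / (2 * h)            # d/d(Re w)
    d_imag = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # d/d(Im w)
    return abs(d_imag - 1j * d_real)

# tanh is analytic, so the residual vanishes up to finite-difference error
res = cr_residual(np.tanh, 0.3 + 0.2j)
```

It is this analyticity that lets the four real partial derivatives of S2-7-3 collapse into the single complex sensitivity π.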
S2-7-5: substituting the formulas of S2-7-4 and S2-7-3 into the gradient ∇_{w_{l,n}^a} E(k) of S2-7-2 yields
∇_{w_{l,n}^a} E(k) = -Σ_{t=1}^{M} λ^{t-1} e_t(k) π_{t,l,n}^*(k);
the output of the sensitivity function of the j-th neuron of the t-th module at time k is obtained recursively as
π_{t,j,l,n}(k) = ψ'(v_{t,j}(k)) [ Σ_{m=1}^{N} w_{j,p+1+m}^a(k) π_{t,m,l,n}(k-1) + δ_{jl} I_{t,n}^a(k) ],
wherein π_{t,l,n}(k) = π_{t,1,l,n}(k) and δ_{jl} is the Kronecker delta function;
S2-7-6: the final weight update equation of the optimized power amplifier forward model is
w_{l,n}^a(k+1) = w_{l,n}^a(k) + η Σ_{t=1}^{M} λ^{t-1} e_t(k) π_{t,l,n}^*(k);
S2-7-7: inverting the optimized power amplifier forward model to obtain the predistortion structure of the power amplifier.
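Under the reconstruction above, the update of S2-7-6 reduces to a complex LMS-style step; a minimal sketch (assuming the sensitivities π have already been obtained from the recursion of S2-7-5, and with illustrative names):

```python
import numpy as np

def update_weights(W_a, errors, sens, lam, eta):
    """One step of the final update
    w(k+1) = w(k) + eta * sum_t lam**(t-1) * e_t(k) * conj(pi_t(k)),
    applied elementwise to the shared augmented weight matrix.
    sens[t-1] holds pi_{t,l,n}(k) shaped like W_a; errors[t-1] is e_t(k)."""
    delta = np.zeros_like(W_a)
    for t, (e, pi) in enumerate(zip(errors, sens), start=1):
        delta += eta * lam ** (t - 1) * e * np.conj(pi)
    return W_a + delta
```

Repeating this step over the training signal drives the cost of S2-7-1 down; the trained forward model is then inverted to obtain the predistorter.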
CN201910865377.6A 2019-09-12 2019-09-12 Power amplifier predistortion method of complex-valued pipeline recurrent neural network model Active CN110765720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910865377.6A CN110765720B (en) 2019-09-12 2019-09-12 Power amplifier predistortion method of complex-valued pipeline recurrent neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910865377.6A CN110765720B (en) 2019-09-12 2019-09-12 Power amplifier predistortion method of complex-valued pipeline recurrent neural network model

Publications (2)

Publication Number Publication Date
CN110765720A true CN110765720A (en) 2020-02-07
CN110765720B CN110765720B (en) 2024-05-24

Family

ID=69329601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910865377.6A Active CN110765720B (en) 2019-09-12 2019-09-12 Power amplifier predistortion method of complex-valued pipeline recurrent neural network model

Country Status (1)

Country Link
CN (1) CN110765720B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111599437A (en) * 2020-05-18 2020-08-28 山东农业大学 Dynamic beta power signal tracking method, system and terminal
CN112202695A (en) * 2020-08-05 2021-01-08 重庆大学 Under-sampling digital predistortion method and system based on Landweber iterative algorithm
CN112865721A (en) * 2021-01-05 2021-05-28 紫光展锐(重庆)科技有限公司 Signal processing method, device, equipment, storage medium, chip and module equipment
CN113612455A (en) * 2021-08-16 2021-11-05 重庆大学 Digital predistortion system working method based on iterative learning control and principal curve analysis
CN113676426A (en) * 2021-08-24 2021-11-19 东南大学 Intelligent digital predistortion system and method for dynamic transmission
JP2023509699A (en) * 2020-06-02 2023-03-09 ZTE Corporation Predistortion method, system, device and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6118335A (en) * 1999-05-06 2000-09-12 Nortel Networks Corporation Method and apparatus for providing adaptive predistortion in power amplifier and base station utilizing same
WO2003092154A1 (en) * 2002-04-23 2003-11-06 Huawei Technologies Co. Ltd The method of improving the radio frequency power amplifier efficiency based on the baseband digital pre-distortion technique
CN101997492A (en) * 2010-09-29 2011-03-30 东南大学 Simplified fuzzy neural network reinforced Wiener model based power amplifier predistortion method
CN102938638A (en) * 2012-10-26 2013-02-20 宁波大学 Cross coupling modeling method of concurrency multiband nonlinear system and linear device
US20130343483A1 (en) * 2012-06-25 2013-12-26 Telefonaktiebolaget L M Ericsson (Publ) Predistortion According to an Artificial Neural Network (ANN)-based Model
CN103715992A (en) * 2013-12-17 2014-04-09 东南大学 Power-amplifier pre-distortion device and method based on simplified Volterra series
CN103731105A (en) * 2014-01-03 2014-04-16 东南大学 Amplifier digital pre-distortion device and method based on dynamic fuzzy neural network
CN106453172A (en) * 2016-07-19 2017-02-22 天津大学 Memory polynomial digital pre-distortion method based on piecewise linear function

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6118335A (en) * 1999-05-06 2000-09-12 Nortel Networks Corporation Method and apparatus for providing adaptive predistortion in power amplifier and base station utilizing same
WO2003092154A1 (en) * 2002-04-23 2003-11-06 Huawei Technologies Co. Ltd The method of improving the radio frequency power amplifier efficiency based on the baseband digital pre-distortion technique
US20070075770A1 (en) * 2002-04-23 2007-04-05 Maolin Long Base-band digital pre-distortion-based method for improving efficiency of rf power amplifier
CN101997492A (en) * 2010-09-29 2011-03-30 东南大学 Simplified fuzzy neural network reinforced Wiener model based power amplifier predistortion method
US20130343483A1 (en) * 2012-06-25 2013-12-26 Telefonaktiebolaget L M Ericsson (Publ) Predistortion According to an Artificial Neural Network (ANN)-based Model
CN102938638A (en) * 2012-10-26 2013-02-20 宁波大学 Cross coupling modeling method of concurrency multiband nonlinear system and linear device
CN103715992A (en) * 2013-12-17 2014-04-09 东南大学 Power-amplifier pre-distortion device and method based on simplified Volterra series
CN103731105A (en) * 2014-01-03 2014-04-16 东南大学 Amplifier digital pre-distortion device and method based on dynamic fuzzy neural network
CN106453172A (en) * 2016-07-19 2017-02-22 天津大学 Memory polynomial digital pre-distortion method based on piecewise linear function

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SU LEE GOH ET AL.: "A Complex-Valued RTRL Algorithm for Recurrent Neural Networks", Neural Computation, vol. 16, no. 12, pages 2699 - 2713 *
LI Mingyu; HE Songbai; LI Xiaodong: "Measurement-based characteristic model of memory polynomial power amplifiers", Journal of Electronic Measurement and Instrumentation, no. 08, pages 49 - 55 *
TIAN Na; NAN Jingchang; GAO Mingming: "PSO-IOIF-Elman neural network modeling based on rough set theory", Computer Applications and Software, no. 05, pages 254 - 257 *
XUN Haien: "Research on neural-network-based digital predistortion technology for power amplifiers", Wanfang master's thesis, page 20 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111599437A (en) * 2020-05-18 2020-08-28 山东农业大学 Dynamic beta power signal tracking method, system and terminal
JP2023509699A (en) * 2020-06-02 2023-03-09 ZTE Corporation Predistortion method, system, device and storage medium
JP7451720B2 (en) 2024-03-18 ZTE Corporation Predistortion method, system, device and storage medium
CN112202695A (en) * 2020-08-05 2021-01-08 重庆大学 Under-sampling digital predistortion method and system based on Landweber iterative algorithm
CN112865721A (en) * 2021-01-05 2021-05-28 紫光展锐(重庆)科技有限公司 Signal processing method, device, equipment, storage medium, chip and module equipment
CN113612455A (en) * 2021-08-16 2021-11-05 重庆大学 Digital predistortion system working method based on iterative learning control and principal curve analysis
CN113612455B (en) * 2021-08-16 2024-01-26 重庆大学 Working method of digital predistortion system based on iterative learning control and main curve analysis
CN113676426A (en) * 2021-08-24 2021-11-19 东南大学 Intelligent digital predistortion system and method for dynamic transmission
CN113676426B (en) * 2021-08-24 2022-07-22 东南大学 Intelligent digital predistortion system and method for dynamic transmission

Also Published As

Publication number Publication date
CN110765720B (en) 2024-05-24

Similar Documents

Publication Publication Date Title
CN110765720B (en) Power amplifier predistortion method of complex-valued pipeline recurrent neural network model
CN111245375B (en) Power amplifier digital predistortion method of complex-valued fully-connected recurrent neural network model
CN110414565B (en) Group Lasso-based neural network cutting method for power amplifier
Naskas et al. Neural-network-based adaptive baseband predistortion method for RF power amplifiers
WO2003092154A1 (en) The method of improving the radio frequency power amplifier efficiency based on the baseband digital pre-distortion technique
Kobal et al. Digital predistortion of RF power amplifiers with phase-gated recurrent neural networks
US20230006611A1 (en) Ai-assisted power amplifier optimization
Wang et al. Augmented iterative learning control for neural-network-based joint crest factor reduction and digital predistortion of power amplifiers
CN112804171B (en) Multi-segment digital predistortion system and method based on support vector regression
JP5336134B2 (en) Predistorter
Gilabert Pinal Multi look-up table digital predistortion for RF power amplifier linearization
JP5299958B2 (en) Predistorter
CN110533169A (en) A kind of digital pre-distortion method and system based on complex value neural network model
US20240119264A1 (en) Digital Pre-Distortion Using Convolutional Neural Networks
Vaicaitis et al. Segmented Spline Curve Neural Network for Low Latency Digital Predistortion of RF Power Amplifiers
CN111884602B (en) Power amplifier predistortion method based on single-output-node neural network
CN115913844A (en) MIMO system digital predistortion compensation method, device, equipment and storage medium based on neural network
CN108063739A (en) Broadband digital communication system transmitting terminal power amplifier adaptive digital pre-distortion method
JP5238461B2 (en) Predistorter
Yin et al. Pattern recognition of RF power amplifier behaviors with multilayer perceptron
Yin et al. Iteration process analysis of real-valued time-delay neural network with different activation functions for power amplifier behavioral modeling
Wang et al. A New Functional Link Neural Network Model with Extended Legendre Polynomials for Digital Predistortion for LTE Applications
CN115913140B (en) Piecewise polynomial digital predistortion device and method for controlling operation precision
EP4318941A1 (en) Digital predistortion of rf power amplifiers with decomposed vector rotation-based recurrent neural networks
TWI830276B (en) Method of compensating for power amplifier distortions and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant