CN110598261B - Power amplifier frequency domain modeling method based on complex reverse neural network

Info

Publication number: CN110598261B (application CN201910757206.1A; earlier publication CN110598261A)
Inventors: 邵杰 (Shao Jie), 孔天姣 (Kong Tianjiao), 胡久元 (Hu Jiuyuan)
Assignee: Nanjing University of Aeronautics and Astronautics
Original language: Chinese (zh)
Legal status: Active (granted)

Classifications

    • G06N 3/044 — Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 — Combinations of networks
    (within G PHYSICS; G06 COMPUTING; G06N computing arrangements based on specific computational models; G06N 3/00 biological models; G06N 3/02 neural networks; G06N 3/04 architecture)

Abstract

The invention discloses a power amplifier frequency domain modeling method based on a complex reverse neural network. The collected time-domain signals are first converted into the frequency domain; the frequency-domain data are then modeled using the fitting capability of the complex reverse neural network to obtain the network's output data, which is finally converted back to the time domain by an inverse fast Fourier transform. Modeling the power amplifier in this way greatly reduces its frequency-domain error, and the method describes the nonlinear characteristics and the memory effect of the power amplifier well.

Description

Power amplifier frequency domain modeling method based on complex reverse neural network
Technical Field
The invention belongs to the field of nonlinear system analysis and its applications, and particularly relates to a frequency domain modeling method for power amplifiers based on a complex reverse (i.e., complex-valued backpropagation) neural network.
Background
As an important module in wireless communication systems, the power amplifier plays a crucial role in increasingly complex wireless communication systems. The output of a power amplifier is fairly linear when the input power is small, but in practice the amplifier is usually operated near its saturation point to improve efficiency, where its nonlinearity becomes severe. At the same time, reactive components in the amplifier's devices give rise to a memory effect. The nonlinearity and the memory effect of the power amplifier are therefore important considerations when modeling.
Power amplifier modeling can be divided into physical modeling and behavioral modeling. Physical modeling establishes an equivalent circuit model from the internal structure of the circuit; behavioral modeling is comparatively simpler. Behavioral modeling treats the power amplifier circuit as a black box and establishes the response relation of the amplifier from its input and output signals, without considering the internal circuit structure. Behavioral models can be further divided into memoryless and memory models, depending on whether the output signal depends on historical input signals. Memoryless models, such as the Saleh model and the Rapp model, are commonly used for narrow-band systems. However, as signal bandwidth increases, the memory effect of the power amplifier becomes more pronounced and the limitations of memoryless models stand out. Memory models fall mainly into two categories: the Volterra series and its simplified variants, and neural network models. The number of parameters of a Volterra series model grows rapidly with the memory length of the power amplifier, so it is only suitable for weakly nonlinear systems. A neural network model imitates characteristics of the human brain to establish a mathematical model; it can approximate any nonlinear function and has learning, memory, and computing capabilities. However, an ordinary real-valued neural network model cannot describe the frequency-domain characteristics of the system well and has certain limitations.
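For illustration, the Rapp model mentioned above can be stated compactly. The sketch below is illustrative only; the gain, saturation amplitude, and smoothness values are arbitrary example parameters, not taken from the patent.

```python
import numpy as np

def rapp_am_am(a_in, gain=1.0, a_sat=1.0, p=2.0):
    """Rapp AM/AM characteristic of a memoryless power amplifier.

    Nearly linear for small input amplitude, smoothly saturating at
    a_sat; the smoothness factor p controls how sharp the knee is.
    """
    a = gain * np.asarray(a_in, dtype=float)
    return a / (1.0 + (a / a_sat) ** (2.0 * p)) ** (1.0 / (2.0 * p))

# Small inputs pass through almost linearly; large inputs saturate near a_sat.
small = rapp_am_am(0.01)
large = rapp_am_am(100.0)
xs = np.linspace(0.0, 5.0, 50)
ys = rapp_am_am(xs)
```

Such memoryless AM/AM curves capture static compression but, as noted above, cannot represent the memory effect that motivates the frequency-domain approach of the invention.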
Disclosure of Invention
In order to solve the technical problems mentioned in the background art, the invention provides a power amplifier frequency domain modeling method based on a complex reverse neural network.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
a power amplifier frequency domain modeling method based on a complex reverse neural network comprises the following steps:
(1) Collect the time-domain input signal x(n) and output signal y(n) of the power amplifier and convert them into the frequency-domain signals X̃ = X̃Re + jX̃Im and Ỹ = ỸRe + jỸIm, where X̃Re and X̃Im are the real and imaginary parts of X̃, ỸRe and ỸIm are the real and imaginary parts of Ỹ, and j is the imaginary unit;
(2) Establish the complex reverse neural network model:
The complex reverse neural network comprises an input layer, a hidden layer, and an output layer, where the input layer contains M neurons, the hidden layer L neurons, and the output layer K neurons; all hidden-layer and output-layer neurons share the same excitation function. The input-layer neurons receive the frequency-domain signal of step (1) and pass it to the hidden-layer neurons; under the action of the excitation function, the hidden-layer output vector of the t-th iteration is obtained, U(t) = URe(t) + jUIm(t), where URe(t) and UIm(t) are the real and imaginary parts of U(t). The hidden-layer output vector U(t) is passed to the output-layer neurons, and after the action of the output-layer excitation function, the output-layer vector of the t-th iteration is obtained, O(t) = ORe(t) + jOIm(t), where ORe(t) and OIm(t) are the real and imaginary parts of O(t);
the complex reverse neural network comprises two weight coefficient matrixes:
the M × L weight coefficient matrix from the input layer to the hidden layer at the t-th iteration, V(t) = VRe(t) + jVIm(t), where VRe(t) and VIm(t) are the real and imaginary parts of V(t);
the L × K weight coefficient matrix from the hidden layer to the output layer at the t-th iteration, W(t) = WRe(t) + jWIm(t), where WRe(t) and WIm(t) are the real and imaginary parts of W(t);
(3) Compute the hidden-layer output vector U(t) and the output-layer vector O(t):
U(t) = f(V^T(t) X̃)
O(t) = f(W^T(t) U(t))
in the above formula, f represents an excitation function, and superscript T represents matrix transposition;
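The two forward-pass equations above can be sketched directly in NumPy. This is an illustrative sketch, not the patent's implementation: tanh is assumed as a stand-in for the split excitation function f (whose exact form is given later with gain parameters), and the gains are fixed at 1.

```python
import numpy as np

def split_activation(z, gr=1.0, gi=1.0):
    # Split-type excitation: a real nonlinearity acts on the real and
    # imaginary parts separately (tanh is an assumed stand-in here).
    return np.tanh(gr * z.real) + 1j * np.tanh(gi * z.imag)

def forward(X, V, W, gr=1.0, gi=1.0):
    # X: complex input vector (M,); V: complex (M, L); W: complex (L, K)
    U = split_activation(V.T @ X, gr, gi)   # hidden-layer output vector, (L,)
    O = split_activation(W.T @ U, gr, gi)   # output-layer vector, (K,)
    return U, O

# Smoke test with small random complex weights (toy sizes, not the embodiment's)
rng = np.random.default_rng(0)
M, L, K = 8, 4, 8
X = rng.standard_normal(M) + 1j * rng.standard_normal(M)
V = (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) * 0.1
W = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) * 0.1
U, O = forward(X, V, W)
```

Note that a plain transpose (not a conjugate transpose) is used, matching the superscript T in the formulas above.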
(4) Calculate the target error function E(t):
E(t) = ERe(t) + EIm(t) = (1/2) Σk [ (dkRe − OkRe(t))^2 + (dkIm − OkIm(t))^2 ],  k = 1, 2, …, K
In the above formula, dk = dkRe + j·dkIm is the desired output of output-layer neuron k, where dkRe and dkIm are its real and imaginary parts, and Ok(t) = OkRe(t) + j·OkIm(t) is the output of output-layer neuron k at the t-th iteration, where OkRe(t) and OkIm(t) are its real and imaginary parts;
let
ERe(t) = (1/2) Σk (dkRe − OkRe(t))^2,  EIm(t) = (1/2) Σk (dkIm − OkIm(t))^2
(5) Calculate the adjustments of the weight coefficient matrices, making the real-part adjustment proportional to the descending gradient of ERe(t) and the imaginary-part adjustment proportional to the descending gradient of EIm(t):
ΔV(t) = −η ∂ERe(t)/∂VRe(t) − jη ∂EIm(t)/∂VIm(t)
ΔW(t) = −η ∂ERe(t)/∂WRe(t) − jη ∂EIm(t)/∂WIm(t)
In the above formulas, ΔV(t) is the adjustment of V(t), ΔW(t) is the adjustment of W(t), and η is the learning rate of the neural network;
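The proportionality in step (5) can be checked numerically. The sketch below is not the patent's analytic backpropagation: it forms ΔW = −η·∂ERe/∂WRe − jη·∂EIm/∂WIm by central finite differences, with tanh assumed as the split excitation function.

```python
import numpy as np

def split_tanh(z):
    # assumed split-type excitation: tanh on real and imaginary parts separately
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def errors(W, U, d):
    # ERe(t) and EIm(t): halved squared errors of real and imaginary parts
    O = split_tanh(W.T @ U)
    e_re = 0.5 * np.sum((d.real - O.real) ** 2)
    e_im = 0.5 * np.sum((d.imag - O.imag) ** 2)
    return e_re, e_im

def delta_W(W, U, d, eta=0.01, h=1e-6):
    # Delta W = -eta * dERe/dWRe - j * eta * dEIm/dWIm (central differences)
    dW = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        e = np.zeros_like(W)
        e[idx] = h
        g_re = (errors(W + e, U, d)[0] - errors(W - e, U, d)[0]) / (2 * h)
        g_im = (errors(W + 1j * e, U, d)[1] - errors(W - 1j * e, U, d)[1]) / (2 * h)
        dW[idx] = -eta * g_re - 1j * eta * g_im
    return dW
```

For a 1×1 weight with unit real hidden output, the result agrees with the hand-derived gradient of the split tanh error, which makes the convention easy to verify.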
(6) Let t = t + 1. If the target error function E(t) is greater than the error threshold and the iteration number t is less than the maximum iteration number, go to step (7); if E(t) is less than or equal to the error threshold or t has reached the maximum iteration number, go to step (8);
(7) updating the weight coefficient matrix according to the adjustment quantity of the weight coefficient matrix, and then returning to the step (3);
(8) Obtain the output Õ of the complex reverse neural network;
(9) From the output Õ of the complex reverse neural network, restore the time-domain output signal ŷ(n) of the power amplifier.
Further, in step (2), the excitation function has the split form
f(z) = φ(gr·Re[z]) + j·φ(gi·Im[z])
where f(z) is the excitation function, z is a complex number, Re[·] takes the real part, Im[·] takes the imaginary part, φ(·) is a real-valued sigmoid-type nonlinearity, and gr, gi are gain parameters.
Further, the update rule for the gain parameters gr, gi is:
gr(t+1) = gr(t) − η ∂ERe(t)/∂gr(t)
gi(t+1) = gi(t) − η ∂EIm(t)/∂gi(t)
In the above formulas, gr(t+1) and gi(t+1) are the gain parameter values at the (t+1)-th iteration, and the output-layer net input is Sk(t) = Σl Wlk(t)·Ul(t), k = 1, 2, …, K, where Wlk(t) is the weight coefficient between hidden-layer neuron l and output-layer neuron k at the t-th iteration, and Ul(t) is the output of hidden-layer neuron l at the t-th iteration.
Further, a momentum term is added when adjusting the weight coefficient matrices computed in step (5): a fraction of the adjustment from the t-th iteration is carried over and superposed on the adjustment of the (t+1)-th iteration:
ΔV′(t+1)=ΔV(t+1)+αΔV(t)
ΔW′(t+1)=ΔW(t+1)+αΔW(t)
In the above formulas, ΔV(t+1) and ΔW(t+1) are the adjustments without the momentum term, and ΔV′(t+1) and ΔW′(t+1) are the adjustments with the momentum term; α is the momentum coefficient, with α ∈ (0, 1).
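The momentum superposition above is a one-line operation on the complex-valued adjustment matrices; a minimal sketch:

```python
import numpy as np

def with_momentum(delta_next, delta_prev, alpha=0.5):
    # Delta'(t+1) = Delta(t+1) + alpha * Delta(t), with alpha in (0, 1)
    return delta_next + alpha * delta_prev

# Toy complex-valued adjustments (illustrative values only)
dV_prev = np.array([[0.2 + 0.1j]])
dV_next = np.array([[-0.1 + 0.3j]])
dV_eff = with_momentum(dV_next, dV_prev, alpha=0.5)
```

The carried-over fraction smooths successive updates and damps oscillation of the weight trajectory, which is the usual purpose of a momentum term.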
Further, the specific process of step (1) is as follows:
(1a) Acquire the input signal vector x(n) = (x(1), x(2), …, x(N)) and the output signal vector y(n) = (y(1), y(2), …, y(N)) of the power amplifier, where N is the data length;
(1b) Perform a fast Fourier transform on the input and output signal vectors:
X(m) = Σn x(n) e^(−j2π(n−1)(m−1)/M),  n = 1, …, N;  m = 1, …, M
Y(m) = Σn y(n) e^(−j2π(n−1)(m−1)/M),  n = 1, …, N;  m = 1, …, M
Let X = XRe + jXIm = [X(1), X(2), …, X(M)] be the fast Fourier transform of x(n) and Y = YRe + jYIm = [Y(1), Y(2), …, Y(M)] be the fast Fourier transform of y(n), where XRe and XIm are the real and imaginary parts of X, and YRe and YIm are the real and imaginary parts of Y; M is the number of Fourier transform points;
(1c) Take the logarithm of X and Y:
X1 = 20 lg[X] = X1Re + jX1Im
Y1 = 20 lg[Y] = Y1Re + jY1Im
In the above formulas, X1 and Y1 are the results of taking the logarithm of X and Y, where X1Re and X1Im are the real and imaginary parts of X1, and Y1Re and Y1Im are the real and imaginary parts of Y1;
(1d) Determine the DC offsets of the real and imaginary parts of X1 and Y1:
X̄1Re = (max[X1Re] + min[X1Re]) / 2
X̄1Im = (max[X1Im] + min[X1Im]) / 2
Ȳ1Re = (max[Y1Re] + min[Y1Re]) / 2
Ȳ1Im = (max[Y1Im] + min[Y1Im]) / 2
In the above formulas, X̄1Re and X̄1Im are the DC offsets of the real and imaginary parts of X1, and Ȳ1Re and Ȳ1Im are the DC offsets of the real and imaginary parts of Y1; max[·] is the operation taking the maximum of a vector, and min[·] the operation taking its minimum;
(1e) Subtract the DC offsets from the real and imaginary parts of X1 and Y1:
X2Re = X1Re − X̄1Re,  X2Im = X1Im − X̄1Im
Y2Re = Y1Re − Ȳ1Re,  Y2Im = Y1Im − Ȳ1Im
Then normalize the resulting data to obtain the frequency-domain input and output signals:
X̃ = X2Re / max[|X2Re|] + j·X2Im / max[|X2Im|]
Ỹ = Y2Re / max[|Y2Re|] + j·Y2Im / max[|Y2Im|]
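Steps (1a) through (1e) can be sketched end to end. Assumptions in this sketch: the logarithm 20·lg[·] is taken as the complex logarithm (so the real part is the magnitude in dB and the imaginary part is scaled phase), the DC offset is the mid-range value implied by the max[·]/min[·] operations, and the final normalization is max-abs; the patent's original formula images give the exact definitions.

```python
import numpy as np

def midrange(v):
    # assumed DC offset: midpoint between max[.] and min[.] of the vector
    return 0.5 * (np.max(v) + np.min(v))

def preprocess(x):
    """Steps (1b)-(1e): FFT -> 20*lg -> remove DC offsets -> normalize."""
    X = np.fft.fft(x)                         # (1b) fast Fourier transform
    X1 = 20.0 * np.log10(X + 0j)              # (1c) complex 20*lg[X]
    re_off, im_off = midrange(X1.real), midrange(X1.imag)
    re2 = X1.real - re_off                    # (1e) subtract DC offsets
    im2 = X1.imag - im_off
    re_scale, im_scale = np.max(np.abs(re2)), np.max(np.abs(im2))
    Xn = re2 / re_scale + 1j * im2 / im_scale  # assumed max-abs normalization
    return Xn, (re_off, im_off, re_scale, im_scale)

rng = np.random.default_rng(1)
x = rng.standard_normal(64)   # stand-in time-domain record
Xn, params = preprocess(x)
```

The offsets and scales are returned alongside the normalized spectrum because step (9) needs exactly these quantities to restore the signal.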
Further, the specific process of step (9) is as follows:
(9a) Undo the normalization of the complex reverse neural network output Õ = ÕRe + jÕIm:
O1Re = ÕRe · max[|Y2Re|]
O1Im = ÕIm · max[|Y2Im|]
In the above formulas, ÕRe and ÕIm are the real and imaginary parts of Õ, and O1Re and O1Im are the restored real and imaginary parts;
(9b) Add the DC offsets back to the restored data:
O2Re = O1Re + Ȳ1Re
O2Im = O1Im + Ȳ1Im
Let O2 = O2Re + jO2Im.
(9c) Apply the inverse logarithmic operation to O2:
O3 = 10^(O2/20)
In the above formula, O3 is the result of the inverse logarithmic operation;
(9d) Apply the inverse Fourier transform to O3 to obtain the time-domain output signal ŷ(n):
ŷ(n) = (1/M) Σk O3(k) e^(j2π(n−1)(k−1)/M),  k = 1, …, K
In the above formula, O3(k) is the data after the above restoration processing of the output of output-layer neuron k, and K equals the number of Fourier transform points M.
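The restoration chain (9a)-(9d) inverts the preprocessing. The sketch below mirrors the assumptions made for step (1) (complex logarithm, mid-range offsets, max-abs scaling); with unit scales and zero offsets it reduces to 10^(Y1/20) followed by an inverse FFT, which the usage exploits as a round-trip check.

```python
import numpy as np

def postprocess(On, re_scale, im_scale, re_off, im_off):
    """Steps (9a)-(9d): denormalize, restore DC offsets, invert 20*lg, IFFT."""
    O1 = On.real * re_scale + 1j * On.imag * im_scale   # (9a) denormalize
    O2 = (O1.real + re_off) + 1j * (O1.imag + im_off)   # (9b) add DC offsets
    O3 = 10.0 ** (O2 / 20.0)                            # (9c) inverse logarithm
    return np.fft.ifft(O3)                              # (9d) inverse FFT

# Round trip: feed 20*lg of a spectrum back through with unit scales
# and zero offsets; the original time-domain record should reappear.
rng = np.random.default_rng(2)
y = rng.standard_normal(32)
Y1 = 20.0 * np.log10(np.fft.fft(y) + 0j)
y_hat = postprocess(Y1, 1.0, 1.0, 0.0, 0.0)
```

In the full method the scales and offsets are the ones recorded for Ỹ during step (1), so the network's frequency-domain output is mapped back onto the amplifier's physical output scale.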
The above technical scheme brings the following beneficial effects:
(1) the invention models the power amplifier in the frequency domain, which avoids the memory-effect problem of the system model;
(2) compared with a real-valued neural network, the complex-valued neural network adopted by the invention converges faster and has stronger classification and fitting capabilities;
(3) the power amplifier model established on the basis of the complex reverse neural network solves the problem of low frequency-domain accuracy of the power amplifier model.
Drawings
FIG. 1 is a diagram of a power amplifier "black box" model;
FIG. 2 is a diagram of a complex inverse neural network model architecture of the present invention;
FIG. 3 is a time domain waveform and error plot of the actual output and the model output in an embodiment;
FIG. 4 is a frequency domain waveform and error plot of the actual output and the model output in the example.
Detailed Description
The technical scheme of the invention is explained in detail below with reference to the accompanying drawings.
FIG. 1 shows the "black box" model of the power amplifier: the input signal x is a chirp signal with an amplitude of 8.5 V, and y is the distorted output signal of the power amplifier. A power amplifier circuit was simulated in Pspice, and 2000 input and output samples were collected as experimental modeling data at a sampling frequency of 100 kHz.
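The excitation of FIG. 1 can be reproduced approximately as follows. The amplitude (8.5 V), sample count (2000), and sampling frequency (100 kHz) come from the embodiment; the sweep range f0-f1 is hypothetical, since the patent does not state it.

```python
import numpy as np

fs = 100e3                      # sampling frequency (from the embodiment)
N = 2000                        # number of collected samples (embodiment)
A = 8.5                         # chirp amplitude in volts (embodiment)
f0, f1 = 1e3, 10e3              # sweep range: hypothetical example values

t = np.arange(N) / fs
T = t[-1]
# linear chirp: instantaneous phase 2*pi*(f0*t + (f1 - f0)*t**2 / (2*T))
x = A * np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * T)))
```

A chirp sweeps energy across the band of interest, which is convenient for a frequency-domain model because it exercises many FFT bins in a single record.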
The power amplifier frequency domain modeling method designed by the invention follows steps (1) to (9) set out above; the structure of the complex reverse neural network is shown in FIG. 2. In this embodiment, the preferred schemes described above are all adopted: the detailed process of step (1) (fast Fourier transform, logarithmic scaling, DC-offset removal, and normalization, sub-steps 1a to 1e), the excitation function with adaptive gain parameters, the momentum term in the weight adjustment, and the detailed restoration process of step (9) (denormalization, DC-offset restoration, inverse logarithm, and inverse Fourier transform, sub-steps 9a to 9d).
In this embodiment, the number of Fourier transform points and the number of input-layer neurons of the complex reverse neural network are both M = 2048, the number of hidden-layer neurons is L = 15, the number of output-layer neurons is K = 2048, the maximum number of iterations is 200, the error threshold is 0.0001, the learning rate is η = 0.05, and the momentum coefficient is α = 0.5. After 130 iterations, the time-domain waveforms and time-domain error of the actual output and the model output are as shown in FIG. 3, and the frequency-domain waveforms and frequency-domain error are as shown in FIG. 4. From the figures, the maximum instantaneous time-domain error of the complex reverse neural network is −0.02721 V, the average time-domain error is 8.502e-6 V, the maximum instantaneous frequency-domain error is 0.249 dB, and the average frequency-domain error is 0.001049 dB. The complex reverse neural network therefore describes the memory effect and nonlinear characteristics of the power amplifier well and greatly reduces the frequency-domain error.
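The reported figures are of this form; the sketch below computes the two metrics (the averaging is taken here as the mean absolute error, an assumption about how the embodiment's averages were formed).

```python
import numpy as np

def error_metrics(actual, model):
    # maximum instantaneous error (signed value at the largest-magnitude
    # point) and average error (mean absolute, an assumption)
    err = np.asarray(model, dtype=float) - np.asarray(actual, dtype=float)
    max_inst = err[np.argmax(np.abs(err))]
    return max_inst, np.mean(np.abs(err))

# Toy illustration on a three-sample record
max_e, avg_e = error_metrics([0.0, 0.0, 0.0], [1.0, -2.0, 0.5])
```

Applied to the actual and model outputs, the first value corresponds to the "maximum instantaneous error" and the second to the "average error" quoted for the time- and frequency-domain comparisons.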
The above embodiment merely illustrates the technical idea of the present invention and does not limit its scope of protection; any modification made on the basis of this technical scheme in accordance with the technical idea of the present invention falls within the scope of protection of the present invention.

Claims (6)

1. A power amplifier frequency domain modeling method based on a complex reverse neural network is characterized by comprising the following steps:
(1) collecting the time-domain input signal x(n) and output signal y(n) of the power amplifier and converting them into the frequency-domain signals X̃ = X̃Re + jX̃Im and Ỹ = ỸRe + jỸIm, wherein X̃Re and X̃Im are the real and imaginary parts of X̃, ỸRe and ỸIm are the real and imaginary parts of Ỹ, and j is the imaginary unit;
(2) establishing the complex reverse neural network model:
the complex reverse neural network comprises an input layer, a hidden layer, and an output layer, wherein the input layer contains M neurons, the hidden layer L neurons, and the output layer K neurons, and all hidden-layer and output-layer neurons share the same excitation function; the input-layer neurons receive the frequency-domain signal of step (1) and pass it to the hidden-layer neurons, and under the action of the excitation function the hidden-layer output vector of the t-th iteration is obtained, U(t) = URe(t) + jUIm(t), wherein URe(t) and UIm(t) are the real and imaginary parts of U(t); the hidden-layer output vector U(t) is passed to the output-layer neurons, and after the action of the output-layer excitation function the output-layer vector of the t-th iteration is obtained, O(t) = ORe(t) + jOIm(t), wherein ORe(t) and OIm(t) are the real and imaginary parts of O(t);
the complex reverse neural network comprises two weight coefficient matrixes:
the M × L weight coefficient matrix from the input layer to the hidden layer at the t-th iteration, V(t) = VRe(t) + jVIm(t), wherein VRe(t) and VIm(t) are the real and imaginary parts of V(t);
the L × K weight coefficient matrix from the hidden layer to the output layer at the t-th iteration, W(t) = WRe(t) + jWIm(t), wherein WRe(t) and WIm(t) are the real and imaginary parts of W(t);
(3) computing the hidden-layer output vector U(t) and the output-layer vector O(t):
U(t) = f(V^T(t) X̃)
O(t) = f(W^T(t) U(t))
in the above formula, f represents an excitation function, and superscript T represents matrix transposition;
(4) calculating the target error function E(t):
E(t) = ERe(t) + EIm(t) = (1/2) Σk [ (dkRe − OkRe(t))^2 + (dkIm − OkIm(t))^2 ],  k = 1, 2, …, K
in the above formula, dk = dkRe + j·dkIm is the desired output of output-layer neuron k, wherein dkRe and dkIm are its real and imaginary parts, and Ok(t) = OkRe(t) + j·OkIm(t) is the output of output-layer neuron k at the t-th iteration, wherein OkRe(t) and OkIm(t) are its real and imaginary parts;
letting
ERe(t) = (1/2) Σk (dkRe − OkRe(t))^2,  EIm(t) = (1/2) Σk (dkIm − OkIm(t))^2
(5) calculating the adjustments of the weight coefficient matrices, making the real-part adjustment proportional to the descending gradient of ERe(t) and the imaginary-part adjustment proportional to the descending gradient of EIm(t):
ΔV(t) = −η ∂ERe(t)/∂VRe(t) − jη ∂EIm(t)/∂VIm(t)
ΔW(t) = −η ∂ERe(t)/∂WRe(t) − jη ∂EIm(t)/∂WIm(t)
in the above formulas, ΔV(t) is the adjustment of V(t), ΔW(t) is the adjustment of W(t), and η is the learning rate of the neural network;
(6) letting t = t + 1; if the target error function E(t) is greater than the error threshold and the iteration number t is less than the maximum iteration number, proceeding to step (7); if E(t) is less than or equal to the error threshold or t has reached the maximum iteration number, proceeding to step (8);
(7) updating the weight coefficient matrix according to the adjustment quantity of the weight coefficient matrix, and then returning to the step (3);
(8) obtaining the output Ô of the complex inverse neural network;

(9) according to the output Ô of the complex inverse neural network, restoring the time domain output signal ŷ(n) of the power amplifier.
2. The frequency domain modeling method for power amplifier based on complex inverse neural network as claimed in claim 1, wherein in step (2), said excitation function is as follows:
[equation image not recovered: definition of the excitation function f(z)]

in the above formula, f(z) is the excitation function, z is a complex number, Re[·] represents taking the real part, Im[·] represents taking the imaginary part, and g_r, g_i are gain parameters.
3. The frequency domain modeling method for power amplifier based on complex inverse neural network as claimed in claim 2, wherein the update rule of said gain parameters g_r, g_i is as follows:

[equation images not recovered: update formulas for g_r(t+1) and g_i(t+1)]

in the above formulas, g_r(t+1) and g_i(t+1) are the gain parameter values for the (t+1)-th iteration; the intermediate quantity defined by the unrecovered image is formed from W_lk(t) and U_l(t) for k = 1, 2, …, K, where W_lk(t) is the weight coefficient between hidden layer neuron l and output layer neuron k of the t-th iteration, and U_l(t) is the output of hidden layer neuron l for the t-th iteration.
4. The frequency domain modeling method for power amplifier based on complex inverse neural network as claimed in claim 1, wherein in step (5), a momentum term is added during the adjustment of the weight coefficient matrix; that is, a part of the adjustment amount from the t-th iteration is added to the adjustment amount of the (t+1)-th iteration:
ΔV′(t+1)=ΔV(t+1)+αΔV(t)
ΔW′(t+1)=ΔW(t+1)+αΔW(t)
in the above formulas, ΔV(t+1) and ΔW(t+1) are the adjustment amounts without the momentum term, and ΔV′(t+1) and ΔW′(t+1) are the adjustment amounts with the momentum term added; α is the momentum coefficient, and α ∈ (0, 1).
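The effect of the claim-4 momentum term can be illustrated on a toy one-dimensional error surface; the quadratic error E(w) = 0.5·w², the learning rate, and the momentum coefficient below are illustrative choices, not values from the patent:

```python
eta, alpha = 0.1, 0.6  # illustrative learning rate and momentum coefficient

def momentum_step(w, d_prev):
    # claim 4: Delta'(t+1) = Delta(t+1) + alpha * Delta(t)
    d = -eta * w                 # plain adjustment Delta(t+1); grad E(w) = w
    d_new = d + alpha * d_prev   # momentum-augmented adjustment Delta'(t+1)
    return w + d_new, d_new

w_plain, w_mom, d_prev = 5.0, 5.0, 0.0
for _ in range(30):
    w_plain = w_plain - eta * w_plain             # descent without momentum
    w_mom, d_prev = momentum_step(w_mom, d_prev)  # descent with momentum
print(abs(w_plain), abs(w_mom))
```

On this surface the momentum iterate contracts roughly like sqrt(α)·… per step instead of (1 − η), so after 30 iterations it sits much closer to the minimum than the plain iterate.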
5. The frequency domain modeling method for the power amplifier based on the complex inverse neural network as claimed in claim 1, wherein the specific process of step (1) is as follows:
(1a) acquiring an input signal vector x(n) = [x(1), x(2), …, x(N)] and an output signal vector y(n) = [y(1), y(2), …, y(N)] of the power amplifier, wherein N is the data length;
(1b) performing fast Fourier transform on the input signal vector and the output signal vector data:

X(m) = Σ_{n=1}^{N} x(n)·e^{−j2π(m−1)(n−1)/M},  m = 1, 2, …, M

Y(m) = Σ_{n=1}^{N} y(n)·e^{−j2π(m−1)(n−1)/M},  m = 1, 2, …, M

let X = X_Re + jX_Im = [X(1), X(2), …, X(M)] be the result of the fast Fourier transform of x(n), and Y = Y_Re + jY_Im = [Y(1), Y(2), …, Y(M)] be the result of the fast Fourier transform of y(n), where X_Re and X_Im are respectively the real part and the imaginary part of X, and Y_Re and Y_Im are respectively the real part and the imaginary part of Y; M is the number of Fourier transform points;
(1c) carrying out logarithm processing on X and Y:

X₁ = ln X

Y₁ = ln Y

in the above formulas, X₁ and Y₁ are the results of taking the logarithm of X and Y, where X₁^Re and X₁^Im are respectively the real part and the imaginary part of X₁, and Y₁^Re and Y₁^Im are respectively the real part and the imaginary part of Y₁;
(1d) separately determining the DC offsets of the real parts and the imaginary parts of X₁ and Y₁:

X₂^Re = (max[X₁^Re] + min[X₁^Re]) / 2

X₂^Im = (max[X₁^Im] + min[X₁^Im]) / 2

Y₂^Re = (max[Y₁^Re] + min[Y₁^Re]) / 2

Y₂^Im = (max[Y₁^Im] + min[Y₁^Im]) / 2

in the above formulas, X₂^Re and X₂^Im are respectively the DC offsets of the real part and the imaginary part of X₁, and Y₂^Re and Y₂^Im are respectively the DC offsets of the real part and the imaginary part of Y₁; max[·] is the operation of finding the maximum value of a vector, and min[·] is the operation of finding the minimum value of a vector;
(1e) subtracting the corresponding DC offset from the real part and the imaginary part of X₁ and Y₁:

X₃^Re = X₁^Re − X₂^Re,  X₃^Im = X₁^Im − X₂^Im

Y₃^Re = Y₁^Re − Y₂^Re,  Y₃^Im = Y₁^Im − Y₂^Im

then carrying out normalization processing on the obtained data to obtain the frequency domain input signal X̂ and the frequency domain output signal Ŷ:

X̂ = X₃^Re / max[|X₃^Re|] + j·X₃^Im / max[|X₃^Im|]

Ŷ = Y₃^Re / max[|Y₃^Re|] + j·Y₃^Im / max[|Y₃^Im|]
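The claim-5 preprocessing chain (FFT, logarithm, DC-offset removal, normalization) can be sketched as follows. The midpoint form of the DC offset and the max-absolute-value normalization are assumptions consistent with the max/min operations named in step (1d); the patent images defining the exact formulas are not recoverable here:

```python
import numpy as np

def preprocess(x, M=None):
    # step (1b): fast Fourier transform (M points)
    X = np.fft.fft(x, n=M)
    # step (1c): complex logarithm of the spectrum (assumed form)
    X1 = np.log(X)
    out, params = [], []
    for part in (X1.real, X1.imag):
        dc = 0.5 * (part.max() + part.min())   # step (1d): midpoint DC offset
        centered = part - dc                   # step (1e): remove DC offset
        scale = np.abs(centered).max() or 1.0  # guard the all-zero case
        out.append(centered / scale)           # normalize to [-1, 1]
        params.append((dc, scale))
    return out[0] + 1j * out[1], params        # frequency domain signal, constants

rng = np.random.default_rng(2)
x = rng.standard_normal(64)
Xhat, params = preprocess(x)
print(np.abs(Xhat.real).max(), np.abs(Xhat.imag).max())  # 1.0 1.0
```

The `params` pairs (offset, scale) must be retained, since step (9) needs them to invert the chain.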
6. The frequency domain modeling method for the power amplifier based on the complex inverse neural network as claimed in claim 5, wherein the specific process of step (9) is as follows:

(9a) carrying out normalization restoration on the output Ô of the complex inverse neural network:

O₁^Re = Ô^Re · max[|Y₃^Re|]

O₁^Im = Ô^Im · max[|Y₃^Im|]

in the above formulas, Ô^Re and Ô^Im are respectively the real part and the imaginary part of Ô, O₁^Re and O₁^Im are the restored real part and imaginary part, and Y₃^Re = Y₁^Re − Y₂^Re and Y₃^Im = Y₁^Im − Y₂^Im are the DC-removed real part and imaginary part from step (1e);

(9b) adding the DC offset to the restored data:

O₂^Re = O₁^Re + Y₂^Re

O₂^Im = O₁^Im + Y₂^Im

let O₂ = O₂^Re + jO₂^Im;

(9c) performing an inverse logarithmic operation on O₂:

Ô_Y = e^{O₂}

in the above formula, Ô_Y is the result of the inverse logarithmic operation;

(9d) performing inverse Fourier transform on Ô_Y to obtain the time domain output signal ŷ(n):

ŷ(n) = (1/M) Σ_{k=1}^{M} Ô_Y(k)·e^{j2π(k−1)(n−1)/M},  n = 1, 2, …, N

in the above formula, Ô_Y(k) is the restored data corresponding to output layer neuron k.
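Since step (9) inverts the preprocessing of claim 5, a round trip through both chains should recover the original sequence. A sketch under the same assumed midpoint-offset and max-normalization forms as above:

```python
import numpy as np

def preprocess(y, M=64):
    Y1 = np.log(np.fft.fft(y, n=M))            # steps (1b)-(1c)
    out, params = [], []
    for part in (Y1.real, Y1.imag):
        dc = 0.5 * (part.max() + part.min())   # step (1d)
        centered = part - dc                   # step (1e)
        scale = np.abs(centered).max()
        out.append(centered / scale)
        params.append((dc, scale))
    return out[0] + 1j * out[1], params

def restore(O_hat, params):
    (dc_r, s_r), (dc_i, s_i) = params
    # (9a) undo the normalization, (9b) add the DC offsets back
    O2 = (O_hat.real * s_r + dc_r) + 1j * (O_hat.imag * s_i + dc_i)
    Oy = np.exp(O2)          # (9c) inverse logarithmic operation
    return np.fft.ifft(Oy)   # (9d) inverse Fourier transform

rng = np.random.default_rng(3)
y = rng.standard_normal(64)
Yhat, params = preprocess(y)
y_rec = restore(Yhat, params).real
print(np.allclose(y, y_rec))  # True: the chain is invertible
```

In the modeling method, `restore` would be applied to the trained network's output Ô rather than to the preprocessed target itself; the round trip simply checks that the two chains are mutual inverses.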
CN201910757206.1A 2019-08-16 2019-08-16 Power amplifier frequency domain modeling method based on complex reverse neural network Active CN110598261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910757206.1A CN110598261B (en) 2019-08-16 2019-08-16 Power amplifier frequency domain modeling method based on complex reverse neural network


Publications (2)

Publication Number Publication Date
CN110598261A CN110598261A (en) 2019-12-20
CN110598261B true CN110598261B (en) 2021-03-30

Family

ID=68854540


Country Status (1)

Country Link
CN (1) CN110598261B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111859795A (en) * 2020-07-14 2020-10-30 东南大学 Polynomial-assisted neural network behavior modeling system and method for power amplifier

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101350597A (en) * 2008-09-10 2009-01-21 北京北方烽火科技有限公司 Method for modeling wideband radio-frequency power amplifier
WO2014002017A1 (en) * 2012-06-25 2014-01-03 Telefonaktiebolaget L M Ericsson (Publ) Predistortion according to an artificial neural network (ann)-based model
CN105116329A (en) * 2015-09-06 2015-12-02 冯伟 Identification method and device for galvanometer scanning motor model parameters
CN108153943A (en) * 2017-12-08 2018-06-12 南京航空航天大学 The behavior modeling method of power amplifier based on dock cycles neural network
CN108256257A (en) * 2018-01-31 2018-07-06 南京航空航天大学 A kind of power amplifier behavior modeling method based on coding-decoding neural network model
CN109302156A (en) * 2018-09-28 2019-02-01 东南大学 Power amplifier dynamical linearization system and method based on pattern-recognition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8175541B2 (en) * 2009-02-06 2012-05-08 Rfaxis, Inc. Radio frequency transceiver front end circuit
CN105224985B (en) * 2015-09-28 2017-10-31 南京航空航天大学 A kind of power amplifier behavior modeling method based on depth reconstruction model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of an FIR filter with arbitrary amplitude-frequency response based on BP neural network; Tang Jinhua, Shao Jie; Proceedings of the 7th Academic Annual Conference of the China Institute of Communications; 20101014; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant