CN110598261A - Power amplifier frequency domain modeling method based on complex reverse neural network

Power amplifier frequency domain modeling method based on complex reverse neural network

Info

Publication number
CN110598261A
CN110598261A
Authority
CN
China
Prior art keywords
output
real
neural network
frequency domain
power amplifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910757206.1A
Other languages
Chinese (zh)
Other versions
CN110598261B (en)
Inventor
邵杰
孔天姣
胡久元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201910757206.1A priority Critical patent/CN110598261B/en
Publication of CN110598261A publication Critical patent/CN110598261A/en
Application granted granted Critical
Publication of CN110598261B publication Critical patent/CN110598261B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Amplifiers (AREA)

Abstract

The invention discloses a power amplifier frequency domain modeling method based on a complex reverse neural network. First, the collected time-domain signals are converted into the frequency domain; the frequency-domain data are then modeled by exploiting the fitting ability of the complex reverse neural network to obtain the output data of the neural network; finally, the frequency-domain output data are converted back to the time domain by an inverse fast Fourier transform. Modeling the power amplifier in this way greatly reduces its frequency-domain error, and the method describes the nonlinear characteristics and the memory effect of the power amplifier well.

Description

Power amplifier frequency domain modeling method based on complex reverse neural network
Technical Field
The invention belongs to the field of nonlinear system analysis application, and particularly relates to a frequency domain modeling method for a power amplifier.
Background
As an important module in wireless communication systems, the power amplifier plays a crucial role in increasingly complex wireless communication systems. The output of a power amplifier is fairly linear when the input power is small, but in practice the amplifier is usually operated near its saturation point to improve efficiency, where its nonlinearity becomes more severe. At the same time, reactive components in the circuit give the power amplifier a memory effect. Both the nonlinearity and the memory effect therefore become important when modeling the power amplifier.
Power amplifier modeling can be divided into physical modeling and behavioral modeling. Physical modeling establishes an equivalent circuit model from the specific internal structure of the circuit, while behavioral modeling is comparatively simpler: the power amplifier circuit is treated as a black box, and the response relation of the power amplifier is established from the input and output signals without considering the internal structure of the circuit. Behavioral models can be further divided into memoryless and memory models, according to whether the output signal depends on past input signals. Memoryless models, such as the Saleh model and the Rapp model, are commonly used for narrowband systems. However, as the signal bandwidth increases, the memory effect of the power amplifier becomes more pronounced and the limitations of memoryless models stand out. Memory models fall mainly into two categories: the Volterra series and its simplified forms, and neural network models. The number of parameters of a Volterra series model grows rapidly with the memory length of the power amplifier, so it is only suitable for weakly nonlinear systems. A neural network model imitates characteristics of the human brain to establish a mathematical model; it can approximate any nonlinear function and has learning, memory and computation capabilities. However, an ordinary real-valued neural network model cannot describe the frequency-domain characteristics of the system well, which is a definite limitation.
Disclosure of Invention
In order to solve the technical problems mentioned in the background art, the invention provides a power amplifier frequency domain modeling method based on a complex reverse neural network.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
a power amplifier frequency domain modeling method based on a complex reverse neural network comprises the following steps:
(1) collecting the time-domain input signal x(n) and output signal y(n) of the power amplifier and converting them into the frequency-domain input signal X' = X'_Re + jX'_Im and output signal Y' = Y'_Re + jY'_Im, where X'_Re and X'_Im are the real and imaginary parts of X', Y'_Re and Y'_Im are the real and imaginary parts of Y', and j is the imaginary unit;
(2) establishing a complex reverse neural network model:
The complex reverse neural network comprises an input layer, a hidden layer and an output layer; the input layer comprises M neurons, the hidden layer L neurons and the output layer K neurons, and all hidden-layer and output-layer neurons share the same excitation function. The input-layer neurons receive the frequency-domain signal of step (1) and pass it to the hidden-layer neurons; after the excitation function acts, the hidden-layer output vector of the t-th iteration U(t) = U_Re(t) + jU_Im(t) is obtained, where U_Re(t) and U_Im(t) are the real and imaginary parts of U(t). The hidden-layer output vector U(t) is passed to the output-layer neurons, and after the output-layer excitation function acts, the output-layer vector of the t-th iteration O(t) = O_Re(t) + jO_Im(t) is obtained, where O_Re(t) and O_Im(t) are the real and imaginary parts of O(t);
The complex reverse neural network comprises two weight coefficient matrices:
The M × L dimensional weight coefficient matrix from the input layer to the hidden layer at the t-th iteration, V(t) = V_Re(t) + jV_Im(t), where V_Re(t) and V_Im(t) are the real and imaginary parts of V(t);
The L × K dimensional weight coefficient matrix from the hidden layer to the output layer at the t-th iteration, W(t) = W_Re(t) + jW_Im(t), where W_Re(t) and W_Im(t) are the real and imaginary parts of W(t);
(3) computing the hidden-layer output vector U(t) and the output-layer vector O(t):
U(t) = f(V^T(t)·X'), O(t) = f(W^T(t)·U(t))
In the above formulas, f denotes the excitation function and the superscript T denotes matrix transposition;
(4) calculating the target error function E(t):
E(t) = (1/2)·Σ_{k=1…K} [(d_k_Re − O_k_Re(t))² + (d_k_Im − O_k_Im(t))²]
In the above formula, d_k is the desired output of output-layer neuron k, d_k_Re and d_k_Im are its real and imaginary parts, O_k(t) is the output of output-layer neuron k at the t-th iteration, and O_k_Re(t) and O_k_Im(t) are its real and imaginary parts;
Let E_Re(t) = (1/2)·Σ_{k=1…K} (d_k_Re − O_k_Re(t))² and E_Im(t) = (1/2)·Σ_{k=1…K} (d_k_Im − O_k_Im(t))², so that E(t) = E_Re(t) + E_Im(t).
(5) calculating the adjustments of the weight coefficient matrices, making the real-part adjustment proportional to the descending gradient of E_Re(t) and the imaginary-part adjustment proportional to the descending gradient of E_Im(t):
ΔV(t) = −η·(∂E_Re(t)/∂V_Re(t) + j·∂E_Im(t)/∂V_Im(t))
ΔW(t) = −η·(∂E_Re(t)/∂W_Re(t) + j·∂E_Im(t)/∂W_Im(t))
In the above formulas, ΔV(t) is the adjustment of V(t), ΔW(t) is the adjustment of W(t), and η is the learning rate of the neural network;
(6) letting the iteration number t = t + 1; when the target error function E(t) is greater than the error threshold and the iteration number t is less than the maximum iteration number, performing step (7); when E(t) is less than or equal to the error threshold or t has reached the maximum iteration number, performing step (8);
(7) updating the weight coefficient matrix according to the adjustment quantity of the weight coefficient matrix, and then returning to the step (3);
(8) obtaining the output O = O_Re + jO_Im of the complex reverse neural network;
(9) restoring the time-domain output signal ŷ(n) of the power amplifier from the output O of the complex reverse neural network.
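The iterative procedure of steps (2)-(8) can be sketched in NumPy. Everything below is a minimal illustration, not the patent's implementation: the excitation function and its gain-parameter update appear only in the patent's formula images, so a fixed-gain split tanh stands in for f, and the layer sizes are toy values rather than the embodiment's M = K = 2048, L = 15.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    # split excitation: real and imaginary parts squashed separately
    # (stand-in for the patent's f; gains g_r = g_i = 1 assumed fixed)
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

M, L, K = 8, 6, 8        # input / hidden / output neurons (toy sizes)
eta = 0.1                # learning rate
V = 0.1 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
W = 0.1 * (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K)))

X = rng.standard_normal(M) + 1j * rng.standard_normal(M)           # frequency-domain input
D = 0.8 * f(rng.standard_normal(K) + 1j * rng.standard_normal(K))  # desired output

E_history = []
for t in range(500):                                  # step (6): iteration cap
    U = f(V.T @ X)                                    # step (3): hidden layer
    O = f(W.T @ U)                                    # step (3): output layer
    err = D - O
    E = 0.5 * np.sum(err.real**2 + err.imag**2)       # step (4): E = E_Re + E_Im
    E_history.append(E)
    if E <= 1e-4:                                     # step (6): error threshold
        break
    # step (5): separate real/imaginary descent; tanh'(x) = 1 - tanh(x)^2
    d_out = err.real * (1 - O.real**2) + 1j * err.imag * (1 - O.imag**2)
    e_hid = np.conj(W) @ d_out                        # error backpropagated to hidden layer
    d_hid = e_hid.real * (1 - U.real**2) + 1j * e_hid.imag * (1 - U.imag**2)
    W += eta * np.outer(np.conj(U), d_out)            # step (7): update weights
    V += eta * np.outer(np.conj(X), d_hid)
```

With these toy sizes the loss decreases toward the reachable target; in the patent's setting the same loop would run over 2048 frequency points with an error threshold of 0.0001 and at most 200 iterations.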
Further, in step (2), the excitation function is as follows:
In the above formula, f(z) is the excitation function, z is a complex number, Re[·] denotes taking the real part, Im[·] denotes taking the imaginary part, and g_r, g_i are gain parameters.
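A split excitation of this type (a real nonlinearity applied to each part, each with its own gain) can be illustrated as follows; tanh is an assumed stand-in, since the exact squashing function is given only in the patent's formula image.

```python
import numpy as np

def f(z, gr=1.0, gi=1.0):
    # split activation: Re[z] and Im[z] squashed independently,
    # scaled by the gain parameters g_r and g_i (tanh assumed here)
    return np.tanh(gr * np.real(z)) + 1j * np.tanh(gi * np.imag(z))

z = np.array([3.0 + 4.0j, -0.5 + 0.25j])
out = f(z, gr=0.5, gi=2.0)
# both parts of every output lie in (-1, 1)
```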
Further, the update rule of the gain parameters g_r, g_i is as follows:
In the above formula, g_r(t+1) and g_i(t+1) are the gain parameter values at the (t+1)-th iteration, k = 1, 2, …, K, W_lk(t) is the weight coefficient between hidden-layer neuron l and output-layer neuron k at the t-th iteration, and U_l(t) is the output of hidden-layer neuron l at the t-th iteration.
Further, in step (4), a momentum term is added when adjusting the weight coefficient matrices; that is, a fraction of the weight-coefficient adjustment of the t-th iteration is superimposed on the adjustment of the (t+1)-th iteration:
ΔV′(t+1)=ΔV(t+1)+αΔV(t)
ΔW′(t+1)=ΔW(t+1)+αΔW(t)
In the above formulas, ΔV(t+1) and ΔW(t+1) are the adjustments without the momentum term, and ΔV′(t+1) and ΔW′(t+1) are the adjustments with the momentum term; α is the momentum coefficient, with α ∈ (0, 1).
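The two momentum equations amount to a one-line filter on the adjustments; the arrays below are illustrative values, not data from the patent.

```python
import numpy as np

def with_momentum(delta_new, delta_prev, alpha=0.5):
    # Delta'(t+1) = Delta(t+1) + alpha * Delta(t), with alpha in (0, 1)
    return delta_new + alpha * delta_prev

dW_t = np.array([[0.2 + 0.1j]])     # Delta W(t): previous adjustment
dW_t1 = np.array([[0.1 - 0.05j]])   # Delta W(t+1): current gradient step
dW_eff = with_momentum(dW_t1, dW_t, alpha=0.5)
# dW_eff == [[0.2 + 0.0j]]
```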
Further, the specific process of step (1) is as follows:
(1a) acquiring the input signal vector x(n) = (x(1), x(2), …, x(N)) and the output signal vector y(n) = (y(1), y(2), …, y(N)) of the power amplifier, where N is the data length;
(1b) performing the fast Fourier transform on the input signal vector and output signal vector data:
Let X = X_Re + jX_Im = [X(1), X(2), …, X(M)] be the result of the fast Fourier transform of x(n), and Y = Y_Re + jY_Im = [Y(1), Y(2), …, Y(M)] be the result of the fast Fourier transform of y(n), where X_Re and X_Im are the real and imaginary parts of X, and Y_Re and Y_Im are the real and imaginary parts of Y; M is the number of Fourier transform points;
(1c) taking the logarithm of X and Y:
X1 = 20lg[X] = X1_Re + jX1_Im
Y1 = 20lg[Y] = Y1_Re + jY1_Im
In the above formulas, X1 and Y1 are the results of taking the logarithm of X and Y, where X1_Re and X1_Im are the real and imaginary parts of X1, and Y1_Re and Y1_Im are the real and imaginary parts of Y1;
(1d) separately determining the DC offsets of the real and imaginary parts of X1 and Y1:
In the above formulas, the four quantities are the DC offsets of the real and imaginary parts of X1 and of Y1, respectively; max[·] denotes taking the maximum value of a vector and min[·] taking the minimum value;
(1e) subtracting the corresponding DC offset from the real and imaginary parts of X1 and Y1, and then normalizing the resulting data to obtain the frequency-domain input signal and the frequency-domain output signal.
further, the specific process of step (9) is as follows:
(9a) undoing the normalization of the output O of the complex reverse neural network:
In the above formulas, O_Re and O_Im are the real and imaginary parts of O, and O1_Re and O1_Im are the restored real and imaginary parts;
(9b) adding the DC offset back to the restored data, and letting O2 denote the result;
(9c) performing the inverse logarithmic operation on O2:
In the above formula, the result of the inverse logarithmic operation is the recovered frequency-domain output;
(9d) performing the inverse Fourier transform on the recovered frequency-domain data to obtain the time-domain output signal ŷ(n), where the restored data of each output-layer neuron k serve as the frequency-domain points.
The beneficial effects brought by the above technical scheme are:
(1) according to the invention, power amplifier modeling is carried out in a frequency domain, so that the memory effect problem of a system model is avoided;
(2) compared with a real number neural network, the complex number neural network adopted by the invention has higher convergence speed and stronger classification and fitting capabilities;
(3) the power amplifier model established based on the complex reverse neural network solves the problem of low frequency domain precision of the power amplifier.
Drawings
FIG. 1 is a diagram of a power amplifier "black box" model;
FIG. 2 is a diagram of a complex inverse neural network model architecture of the present invention;
FIG. 3 is a time domain waveform and error plot of the actual output and the model output in an embodiment;
FIG. 4 is a frequency domain waveform and error plot of the actual output and the model output in the example.
Detailed Description
The technical scheme of the invention is explained in detail below with reference to the accompanying drawings.
Fig. 1 is a diagram of the power amplifier "black box" model, in which the input signal x is a chirp signal with an amplitude of 8.5 V and y is the distorted output signal of the power amplifier. After the power amplifier circuit is simulated in PSpice, 2000 input and output samples are collected as experimental data for modeling, at a sampling frequency of 100 kHz.
The invention designs a power amplifier frequency domain modeling method based on a complex reverse neural network, which comprises the following steps:
Step 1: collecting the time-domain input signal x(n) and output signal y(n) of the power amplifier and converting them into the frequency-domain input signal X' = X'_Re + jX'_Im and output signal Y' = Y'_Re + jY'_Im, where X'_Re and X'_Im are the real and imaginary parts of X', Y'_Re and Y'_Im are the real and imaginary parts of Y', and j is the imaginary unit;
Step 2: establishing a complex reverse neural network model:
As shown in fig. 2, the complex reverse neural network includes an input layer, a hidden layer and an output layer; the input layer includes M neurons, the hidden layer L neurons and the output layer K neurons, and all hidden-layer and output-layer neurons share the same excitation function. The input-layer neurons receive the frequency-domain signal of step 1 and pass it to the hidden-layer neurons; after the excitation function acts, the hidden-layer output vector of the t-th iteration U(t) = U_Re(t) + jU_Im(t) is obtained, where U_Re(t) and U_Im(t) are the real and imaginary parts of U(t). The hidden-layer output vector U(t) is passed to the output-layer neurons, and after the output-layer excitation function acts, the output-layer vector of the t-th iteration O(t) = O_Re(t) + jO_Im(t) is obtained, where O_Re(t) and O_Im(t) are the real and imaginary parts of O(t);
The complex reverse neural network comprises two weight coefficient matrices:
The M × L dimensional weight coefficient matrix from the input layer to the hidden layer at the t-th iteration, V(t) = V_Re(t) + jV_Im(t), where V_Re(t) and V_Im(t) are the real and imaginary parts of V(t);
The L × K dimensional weight coefficient matrix from the hidden layer to the output layer at the t-th iteration, W(t) = W_Re(t) + jW_Im(t), where W_Re(t) and W_Im(t) are the real and imaginary parts of W(t);
and step 3: computing hidden layer output vectors U (t) and
in the above formula, f represents an excitation function, and superscript T represents matrix transposition;
and 4, step 4: calculating a target error function E (t):
in the above formula, the first and second carbon atoms are,to be the desired output for output layer neuron k,andare respectively asThe real and imaginary parts of (a) and (b),for the output of output layer neuron k for the t iteration,andare respectively asThe real and imaginary parts of (c);
order to
Step 5: calculating the adjustments of the weight coefficient matrices, making the real-part adjustment proportional to the descending gradient of E_Re(t) and the imaginary-part adjustment proportional to the descending gradient of E_Im(t):
ΔV(t) = −η·(∂E_Re(t)/∂V_Re(t) + j·∂E_Im(t)/∂V_Im(t))
ΔW(t) = −η·(∂E_Re(t)/∂W_Re(t) + j·∂E_Im(t)/∂W_Im(t))
In the above formulas, ΔV(t) is the adjustment of V(t), ΔW(t) is the adjustment of W(t), and η is the learning rate of the neural network;
Step 6: letting the iteration number t = t + 1; when the target error function E(t) is greater than the error threshold and the iteration number t is less than the maximum iteration number, performing step 7; when E(t) is less than or equal to the error threshold or t has reached the maximum iteration number, performing step 8;
and 7: updating the weight coefficient matrix according to the adjustment quantity of the weight coefficient matrix, and then returning to the step 3;
and 8: obtaining the output of a plurality of inverse neural networks
Step 9: restoring the time-domain output signal ŷ(n) of the power amplifier from the output O of the complex reverse neural network.
In this embodiment, the following preferred scheme is adopted in step 1:
1a, acquiring the input signal vector x(n) = (x(1), x(2), …, x(N)) and the output signal vector y(n) = (y(1), y(2), …, y(N)), where N is the data length;
1b, performing fast Fourier transform on input signal vector data and output signal vector data:
Let X = X_Re + jX_Im = [X(1), X(2), …, X(M)] be the result of the fast Fourier transform of x(n), and Y = Y_Re + jY_Im = [Y(1), Y(2), …, Y(M)] be the result of the fast Fourier transform of y(n), where X_Re and X_Im are the real and imaginary parts of X, and Y_Re and Y_Im are the real and imaginary parts of Y; M is the number of Fourier transform points;
1c, taking the logarithm of X and Y:
X1 = 20lg[X] = X1_Re + jX1_Im
Y1 = 20lg[Y] = Y1_Re + jY1_Im
In the above formulas, X1 and Y1 are the results of taking the logarithm of X and Y, where X1_Re and X1_Im are the real and imaginary parts of X1, and Y1_Re and Y1_Im are the real and imaginary parts of Y1;
1d, separately determining the DC offsets of the real and imaginary parts of X1 and Y1:
In the above formulas, the four quantities are the DC offsets of the real and imaginary parts of X1 and of Y1, respectively; max[·] denotes taking the maximum value of a vector and min[·] taking the minimum value;
1e, subtracting the corresponding DC offset from the real and imaginary parts of X1 and Y1, and then normalizing the resulting data to obtain the frequency-domain input signal and the frequency-domain output signal.
in this embodiment, the following preferred scheme is adopted in step 2:
the excitation function is as follows:
In the above formula, f(z) is the excitation function, z is a complex number, Re[·] denotes taking the real part, Im[·] denotes taking the imaginary part, and g_r, g_i are gain parameters.
The update rule of the gain parameters g_r, g_i is as follows:
In the above formula, g_r(t+1) and g_i(t+1) are the gain parameter values at the (t+1)-th iteration, k = 1, 2, …, K, W_lk(t) is the weight coefficient between hidden-layer neuron l and output-layer neuron k at the t-th iteration, and U_l(t) is the output of hidden-layer neuron l at the t-th iteration.
In this example, the following preferred scheme is adopted in step 4:
A momentum term is added when adjusting the weight coefficient matrices; that is, a fraction of the weight-coefficient adjustment of the t-th iteration is superimposed on the adjustment of the (t+1)-th iteration:
ΔV′(t+1)=ΔV(t+1)+αΔV(t)
ΔW′(t+1)=ΔW(t+1)+αΔW(t)
In the above formulas, ΔV(t+1) and ΔW(t+1) are the adjustments without the momentum term, and ΔV′(t+1) and ΔW′(t+1) are the adjustments with the momentum term; α is the momentum coefficient, with α ∈ (0, 1).
In this embodiment, the following preferred scheme is adopted in step 9:
9a, undoing the normalization of the output O of the complex reverse neural network:
In the above formulas, O_Re and O_Im are the real and imaginary parts of O, and O1_Re and O1_Im are the restored real and imaginary parts;
9b, adding the DC offset back to the restored data, and letting O2 denote the result;
9c, performing the inverse logarithmic operation on O2:
In the above formula, the result of the inverse logarithmic operation is the recovered frequency-domain output;
9d, performing the inverse Fourier transform on the recovered frequency-domain data to obtain the time-domain output signal ŷ(n), where the restored data of each output-layer neuron k serve as the frequency-domain points.
In this embodiment, the number of Fourier transform points and the number of input-layer neurons of the complex reverse neural network are both M = 2048, the number of hidden-layer neurons is L = 15, the number of output-layer neurons is K = 2048, the maximum number of iterations is 200, the error threshold is 0.0001, the learning rate is 0.05, and the momentum coefficient is α = 0.5. At 130 iterations, the time-domain waveforms and errors of the actual output and the model output are shown in fig. 3, and the frequency-domain waveforms and errors in fig. 4. From the figures, the maximum instantaneous time-domain error of the complex reverse neural network is −0.02721 V, the average time-domain error is 8.502e-6 V, the maximum instantaneous frequency-domain error is 0.249 dB, and the average frequency-domain error is 0.001049 dB. The complex reverse neural network therefore describes the memory effect and the nonlinear characteristics of the power amplifier well, and greatly reduces the frequency-domain error.
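The error figures quoted above are of the maximum-instantaneous / average kind; computing them from a pair of waveforms is straightforward. The signals below are synthetic placeholders, not the embodiment's data.

```python
import numpy as np

def error_metrics(y, y_model):
    inst = y_model - y                          # instantaneous error
    max_inst = inst[np.argmax(np.abs(inst))]    # signed maximum instantaneous error
    return max_inst, np.mean(inst)              # and the average error

n = np.arange(2000)                             # 2000 samples, as in the embodiment
y = np.sin(2 * np.pi * n / 200)                 # stand-in for the actual output
y_model = y + 1e-3 * np.cos(2 * np.pi * n / 50) # stand-in for the model output
max_e, avg_e = error_metrics(y, y_model)
```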
The embodiments are only for illustrating the technical idea of the present invention, and the technical idea of the present invention is not limited thereto, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the scope of the present invention.

Claims (6)

1. A power amplifier frequency domain modeling method based on a complex reverse neural network is characterized by comprising the following steps:
(1) collecting the time-domain input signal x(n) and output signal y(n) of the power amplifier and converting them into the frequency-domain input signal X' = X'_Re + jX'_Im and output signal Y' = Y'_Re + jY'_Im, where X'_Re and X'_Im are the real and imaginary parts of X', Y'_Re and Y'_Im are the real and imaginary parts of Y', and j is the imaginary unit;
(2) establishing a complex reverse neural network model:
The complex reverse neural network comprises an input layer, a hidden layer and an output layer; the input layer comprises M neurons, the hidden layer L neurons and the output layer K neurons, and all hidden-layer and output-layer neurons share the same excitation function. The input-layer neurons receive the frequency-domain signal of step (1) and pass it to the hidden-layer neurons; after the excitation function acts, the hidden-layer output vector of the t-th iteration U(t) = U_Re(t) + jU_Im(t) is obtained, where U_Re(t) and U_Im(t) are the real and imaginary parts of U(t). The hidden-layer output vector U(t) is passed to the output-layer neurons, and after the output-layer excitation function acts, the output-layer vector of the t-th iteration O(t) = O_Re(t) + jO_Im(t) is obtained, where O_Re(t) and O_Im(t) are the real and imaginary parts of O(t);
The complex reverse neural network comprises two weight coefficient matrices:
The M × L dimensional weight coefficient matrix from the input layer to the hidden layer at the t-th iteration, V(t) = V_Re(t) + jV_Im(t), where V_Re(t) and V_Im(t) are the real and imaginary parts of V(t);
The L × K dimensional weight coefficient matrix from the hidden layer to the output layer at the t-th iteration, W(t) = W_Re(t) + jW_Im(t), where W_Re(t) and W_Im(t) are the real and imaginary parts of W(t);
(3) computing the hidden-layer output vector U(t) and the output-layer vector O(t):
U(t) = f(V^T(t)·X'), O(t) = f(W^T(t)·U(t))
In the above formulas, f denotes the excitation function and the superscript T denotes matrix transposition;
(4) calculating the target error function E(t):
E(t) = (1/2)·Σ_{k=1…K} [(d_k_Re − O_k_Re(t))² + (d_k_Im − O_k_Im(t))²]
In the above formula, d_k is the desired output of output-layer neuron k, d_k_Re and d_k_Im are its real and imaginary parts, O_k(t) is the output of output-layer neuron k at the t-th iteration, and O_k_Re(t) and O_k_Im(t) are its real and imaginary parts;
Let E_Re(t) = (1/2)·Σ_{k=1…K} (d_k_Re − O_k_Re(t))² and E_Im(t) = (1/2)·Σ_{k=1…K} (d_k_Im − O_k_Im(t))², so that E(t) = E_Re(t) + E_Im(t).
(5) calculating the adjustments of the weight coefficient matrices, making the real-part adjustment proportional to the descending gradient of E_Re(t) and the imaginary-part adjustment proportional to the descending gradient of E_Im(t):
ΔV(t) = −η·(∂E_Re(t)/∂V_Re(t) + j·∂E_Im(t)/∂V_Im(t))
ΔW(t) = −η·(∂E_Re(t)/∂W_Re(t) + j·∂E_Im(t)/∂W_Im(t))
In the above formulas, ΔV(t) is the adjustment of V(t), ΔW(t) is the adjustment of W(t), and η is the learning rate of the neural network;
(6) letting the iteration number t = t + 1; when the target error function E(t) is greater than the error threshold and the iteration number t is less than the maximum iteration number, performing step (7); when E(t) is less than or equal to the error threshold or t has reached the maximum iteration number, performing step (8);
(7) updating the weight coefficient matrix according to the adjustment quantity of the weight coefficient matrix, and then returning to the step (3);
(8) obtaining the output O = O_Re + jO_Im of the complex reverse neural network;
(9) restoring the time-domain output signal ŷ(n) of the power amplifier from the output O of the complex reverse neural network.
2. The power amplifier frequency domain modeling method based on a complex reverse neural network as claimed in claim 1, wherein in step (2), the excitation function is as follows:
In the above formula, f(z) is the excitation function, z is a complex number, Re[·] denotes taking the real part, Im[·] denotes taking the imaginary part, and g_r, g_i are gain parameters.
3. The power amplifier frequency domain modeling method based on a complex reverse neural network as claimed in claim 2, wherein the update rule of the gain parameters g_r, g_i is as follows:
In the above formula, g_r(t+1) and g_i(t+1) are the gain parameter values at the (t+1)-th iteration, k = 1, 2, …, K, W_lk(t) is the weight coefficient between hidden-layer neuron l and output-layer neuron k at the t-th iteration, and U_l(t) is the output of hidden-layer neuron l at the t-th iteration.
4. The power amplifier frequency domain modeling method based on a complex reverse neural network as claimed in claim 1, wherein in step (4), a momentum term is added when adjusting the weight coefficient matrices; that is, a fraction of the weight-coefficient adjustment of the t-th iteration is superimposed on the adjustment of the (t+1)-th iteration:
ΔV′(t+1)=ΔV(t+1)+αΔV(t)
ΔW′(t+1)=ΔW(t+1)+αΔW(t)
In the above formulas, ΔV(t+1) and ΔW(t+1) are the adjustments without the momentum term, and ΔV′(t+1) and ΔW′(t+1) are the adjustments with the momentum term; α is the momentum coefficient, with α ∈ (0, 1).
5. The power amplifier frequency domain modeling method based on a complex reverse neural network as claimed in claim 1, wherein the specific process of step (1) is as follows:
(1a) acquiring the input signal vector x(n) = (x(1), x(2), …, x(N)) and the output signal vector y(n) = (y(1), y(2), …, y(N)) of the power amplifier, where N is the data length;
(1b) performing the fast Fourier transform on the input signal vector and output signal vector data:
Let X = X_Re + jX_Im = [X(1), X(2), …, X(M)] be the result of the fast Fourier transform of x(n), and Y = Y_Re + jY_Im = [Y(1), Y(2), …, Y(M)] be the result of the fast Fourier transform of y(n), where X_Re and X_Im are the real and imaginary parts of X, and Y_Re and Y_Im are the real and imaginary parts of Y; M is the number of Fourier transform points;
(1c) taking the logarithm of X and Y:
X1 = 20lg[X] = X1_Re + jX1_Im
Y1 = 20lg[Y] = Y1_Re + jY1_Im
In the above formulas, X1 and Y1 are the results of taking the logarithm of X and Y, where X1_Re and X1_Im are the real and imaginary parts of X1, and Y1_Re and Y1_Im are the real and imaginary parts of Y1;
(1d) separately determining the DC offsets of the real and imaginary parts of X1 and Y1:
In the above formulas, the four quantities are the DC offsets of the real and imaginary parts of X1 and of Y1, respectively; max[·] denotes taking the maximum value of a vector and min[·] taking the minimum value;
(1e) subtracting the corresponding DC offset from the real and imaginary parts of X1 and Y1, and then normalizing the resulting data to obtain the frequency-domain input signal and the frequency-domain output signal.
6. The power amplifier frequency domain modeling method based on a complex reverse neural network as claimed in claim 5, wherein the specific process of step (9) is as follows:
(9a) undoing the normalization of the output O of the complex reverse neural network:
In the above formulas, O_Re and O_Im are the real and imaginary parts of O, and O1_Re and O1_Im are the restored real and imaginary parts;
(9b) adding the DC offset back to the restored data, and letting O2 denote the result;
(9c) performing the inverse logarithmic operation on O2:
In the above formula, the result of the inverse logarithmic operation is the recovered frequency-domain output;
(9d) to pairPerforming inverse Fourier transform to obtain time domain output signal
In the above formula, the first and second carbon atoms are,the data after the above reduction processing is output to the output layer neuron k.
CN201910757206.1A 2019-08-16 2019-08-16 Power amplifier frequency domain modeling method based on complex reverse neural network Active CN110598261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910757206.1A CN110598261B (en) 2019-08-16 2019-08-16 Power amplifier frequency domain modeling method based on complex reverse neural network

Publications (2)

Publication Number Publication Date
CN110598261A true CN110598261A (en) 2019-12-20
CN110598261B CN110598261B (en) 2021-03-30

Family

ID=68854540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910757206.1A Active CN110598261B (en) 2019-08-16 2019-08-16 Power amplifier frequency domain modeling method based on complex reverse neural network

Country Status (1)

Country Link
CN (1) CN110598261B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111859795A (en) * 2020-07-14 2020-10-30 东南大学 Polynomial-assisted neural network behavior modeling system and method for power amplifier

Citations (8)

Publication number Priority date Publication date Assignee Title
CN101350597A (en) * 2008-09-10 2009-01-21 北京北方烽火科技有限公司 Method for modeling wideband radio-frequency power amplifier
US20100203847A1 (en) * 2009-02-06 2010-08-12 Oleksandr Gorbachov Single input/output port radio frequency transceiver front end circuit
WO2014002017A1 (en) * 2012-06-25 2014-01-03 Telefonaktiebolaget L M Ericsson (Publ) Predistortion according to an artificial neural network (ann)-based model
CN105116329A (en) * 2015-09-06 2015-12-02 冯伟 Identification method and device for galvanometer scanning motor model parameters
CN105224985A (en) * 2015-09-28 2016-01-06 南京航空航天大学 A kind of power amplifier behavior modeling method based on degree of depth reconstruction model
CN108153943A (en) * 2017-12-08 2018-06-12 南京航空航天大学 The behavior modeling method of power amplifier based on dock cycles neural network
CN108256257A (en) * 2018-01-31 2018-07-06 南京航空航天大学 A kind of power amplifier behavior modeling method based on coding-decoding neural network model
CN109302156A (en) * 2018-09-28 2019-02-01 东南大学 Power amplifier dynamical linearization system and method based on pattern-recognition

Non-Patent Citations (1)

Title
TANG Jinhua, SHAO Jie: "Design of an FIR Filter with Arbitrary Amplitude-Frequency Response Based on BP Neural Network", Proceedings of the 7th Annual Academic Conference of the China Institute of Communications *

Also Published As

Publication number Publication date
CN110598261B (en) 2021-03-30

Similar Documents

Publication Publication Date Title
Orcioni et al. Identification of Volterra models of tube audio devices using multiple-variance method
CN108153943B (en) Behavior modeling method of power amplifier based on clock cycle neural network
CN113553755B (en) Power system state estimation method, device and equipment
CN110472280A (en) A kind of power amplifier behavior modeling method based on generation confrontation neural network
Lu et al. Time delay Chebyshev functional link artificial neural network
CN111859795A (en) Polynomial-assisted neural network behavior modeling system and method for power amplifier
CN110598261B (en) Power amplifier frequency domain modeling method based on complex reverse neural network
Monfared et al. Prediction of state-of-charge effects on lead-acid battery characteristics using neural network parameter modifier
CN109379652A (en) A kind of the secondary channel offline identification method and system of earphone Active noise control
Rizvi Time series deep learning for robust steady-state load parameter estimation using 1D-CNN
Abbas et al. Independent component analysis based on quantum particle swarm optimization
O'Brien et al. RF power amplifier behavioral modeling using a globally recurrent neural network
Li et al. Distributed functional link adaptive filtering for nonlinear graph signal processing
Yazdanpanah et al. L 0-norm adaptive Volterra filters
CN110533169A (en) A kind of digital pre-distortion method and system based on complex value neural network model
CN110188382B (en) Power amplifier frequency domain behavior modeling method based on FFT and BP neural network model
RV et al. Optimization of digital predistortion models for RF power amplifiers using a modified differential evolution algorithm
Mišić et al. Volterra kernels extraction from neural networks for amplifier behavioral modeling
Liang et al. A novel RTRLNN model for passive intermodulation cancellation in satellite communications
Sahu et al. A simplified functional link net architecture for dynamic system identification with a UKF algorithm
CN109474258A (en) The Optimization Method of Kernel Parameter of random Fourier feature core LMS based on nuclear polarization strategy
Kulshrestha et al. Application of Artificial Neural Network for Planar Printed Antenna Optimization
Liu et al. Digital predistortion for wideband wireless power amplifiers using real-value neural network with attention mechanism
Wang et al. Dual-band transmitter linearization using spiking neural networks
Xu et al. Behavioral Modeling and Digital Predistortion for Power Amplifier Based on the Sparse Smooth Twin Support Vector Regression Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant