CN111245375B - Power amplifier digital predistortion method of complex-valued fully-connected recurrent neural network model - Google Patents


Info

Publication number
CN111245375B
CN111245375B (application CN202010059771.3A)
Authority
CN
China
Prior art keywords
complex
power amplifier
model
neural network
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010059771.3A
Other languages
Chinese (zh)
Other versions
CN111245375A (en)
Inventor
党妮
徐常志
汤昊
靳一
汪滴珠
左金钟
杨丽
李明玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Institute of Space Radio Technology
Original Assignee
Xian Institute of Space Radio Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Institute of Space Radio Technology filed Critical Xian Institute of Space Radio Technology
Priority to CN202010059771.3A priority Critical patent/CN111245375B/en
Publication of CN111245375A publication Critical patent/CN111245375A/en
Application granted granted Critical
Publication of CN111245375B publication Critical patent/CN111245375B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F1/00Details of amplifiers with only discharge tubes, only semiconductor devices or only unspecified devices as amplifying elements
    • H03F1/32Modifications of amplifiers to reduce non-linear distortion
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F3/00Amplifiers with only discharge tubes or only semiconductor devices as amplifying elements
    • H03F3/189High-frequency amplifiers, e.g. radio frequency amplifiers
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Power Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Nonlinear Science (AREA)
  • Amplifiers (AREA)

Abstract

A power amplifier digital predistortion method based on a complex-valued fully-connected recurrent neural network model uses that model to approximate the complex behavior of a power amplifier, thereby obtaining a power amplifier inverse model and realizing adaptive digital predistortion. Built on complex-valued neural network theory, the method adopts an improved complex-valued real-time recurrent learning algorithm, derived from the real-time recurrent learning algorithm, to achieve a more accurate approximation of the power amplifier at the transmitting end of a digital communication system. The invention combines real-time recurrent learning with a recurrent neural network, proposing a complex-valued fully-connected recurrent neural network model that improves on the original real-valued recurrent model, and further develops the complex-valued real-time recurrent learning algorithm. Simulation verifies that the proposed model structure and algorithm perform well in both training time and modeling accuracy, and ensure a high degree of fit to the power amplifier nonlinearity.

Description

Power amplifier digital predistortion method of complex-valued fully-connected recurrent neural network model
Technical Field
The invention belongs to the technical field of digital signal processing of wireless communication systems, and particularly relates to a digital predistortion method based on complex-valued neural network power amplifier modeling.
Background
In modern communication systems, under the dual pressures of high-rate data transmission requirements and limited spectrum resources, modulation schemes such as quadrature amplitude modulation (QAM), quadrature phase shift keying (QPSK) and orthogonal frequency division multiplexing (OFDM) are increasingly widely used to improve spectrum utilization. However, these modulation techniques increase the design difficulty of the radio-frequency power amplifier. Their signals are envelope-modulated and exhibit a high peak-to-average power ratio (PAPR), which inevitably introduces nonlinear distortion: at the same average power level, the higher the PAPR, the more sensitive the signal is to the power amplifier's nonlinearity. In addition, many devices in a communication system are inherently nonlinear; when an envelope-modulated signal passes through them, harmonic components and intermodulation distortion are generated, interfering with adjacent channels and degrading system performance. Linearizing the power amplifier is therefore a significant problem for modern communications. A common approach is power back-off, whose principle is to operate the amplifier far from its saturation region; back-off is simple to implement but inefficient. Among the various linearization technologies, digital predistortion is regarded in industry as the most promising, owing to its good linearity, wide bandwidth, high efficiency and full adaptability.
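As a concrete illustration of the PAPR figure discussed above, the ratio can be computed directly from a complex baseband sequence; the two signals below are toy examples, not data from the patent:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# A constant-envelope sequence (QPSK symbols) has 0 dB PAPR ...
qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))
# ... while an OFDM-like sum of 16 tones aligns at n = 0 and peaks
# 10*log10(16) ~ 12 dB above its average power.
n = np.arange(256)
ofdm_like = sum(np.exp(2j * np.pi * k * n / 256) for k in range(16)) / 16
```

This is why multicarrier waveforms such as OFDM drive the amplifier into its nonlinear region far more often than constant-envelope modulations.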
In recent years, as neural networks have found ever wider application, their good nonlinear-system approximation capability, strong learning ability, robustness and adaptability have attracted attention in the power amplifier modeling field. The artificial neural network (ANN) has been studied as a modeling and predistortion technique for power amplifiers and transmitters, and theory and practice on power amplifier modeling continue to grow. The integration of neural network techniques with power amplifier predistortion benefits from the strong approximation power of neural networks for nonlinear system modeling. Complex-valued neural network (CVNN) models are also increasingly applied, but their application to power amplifier behavioral modeling and digital predistortion remains limited.
Disclosure of Invention
The invention solves the following technical problem: in view of the insufficient prediction accuracy, high computational complexity and limited predistortion correction capability of conventional neural-network-based power amplifier modeling methods, a power amplifier digital predistortion method based on a complex-valued fully-connected recurrent neural network model is provided, building on conventional real-valued neural network models and training algorithms. The method establishes the power amplifier behavioral model more accurately and fits the amplifier's nonlinear curve well, i.e. the model accuracy is good; on the basis of this higher accuracy, a predistorter with good prediction performance is obtained by inverting the power amplifier model.
The technical scheme of the invention is as follows: a power amplifier digital predistortion method of a complex-valued fully-connected recurrent neural network model comprises the following steps:
(1) Transmitting signal data x (n) to a hardware communication system, and acquiring an output signal y (n) of the radio frequency power amplifier through a feedback channel of the hardware communication system;
(2) Performing autocorrelation synchronization on y (n) and x (n), and performing synchronization alignment processing on input and output signals;
(3) After carrying out normalization treatment on x (n) and y (n), carrying out preliminary power amplifier modeling by using a complex-valued fully-connected recurrent neural network model;
(4) Carrying out parameter updating on the established preliminary power amplifier model by using a recursion learning training algorithm to obtain a final power amplifier model;
(5) Using an inversion method, taking an output signal of the power amplifier as a model input, and taking an input signal of the power amplifier as a reference signal for modeling to obtain an inverse model of the power amplifier, namely the power amplifier digital predistorter;
(6) The input signal x (n) enters a power amplification digital predistorter to obtain an output sequence signal z (n), and the output sequence signal z (n) is processed by a power amplification model to obtain an output sampling signal v (n);
(7) An absolute error signal |e(n)| is obtained from e(n) = x(n) - v(n), and the predistorter effect is judged from the magnitude of |e(n)|.
In the complex-valued fully-connected recurrent neural network model, the inputs, weights and outputs are all complex. The model comprises N neurons, p external inputs and N feedback connections, arranged in a two-layer structure: an external input/feedback layer and an output processing layer. The complex-valued output of neuron l at time k is denoted y_l(k), l = 1, ..., N. The external input is represented by p delay terms of the input sample sequence s(k), forming a 1×p vector. The total input vector p(k) of the whole network is the series connection of the delayed inputs s(k), a bias input (1+j) and the feedback outputs y_l(k):

$$\mathbf{p}(k) = \left[s(k),\, s(k-1),\, \ldots,\, s(k-p+1),\ 1+j,\ y_1(k),\, \ldots,\, y_N(k)\right]^T$$

where $(\cdot)^T$ denotes the transpose of a vector, and $(\cdot)^r$ and $(\cdot)^i$ denote the real and imaginary parts of a complex value or complex vector. For the l-th neuron, its weight vector has the same (p+N+1)×1 dimension as the input vector:

$$\mathbf{w}_l = \left[w_{l,1},\, w_{l,2},\, \ldots,\, w_{l,p+N+1}\right]^T$$

The complex-valued weight matrix of the whole network is $\mathbf{W} = [\mathbf{w}_1, \ldots, \mathbf{w}_N]$.

The output of each neuron is defined as $y_l(k) = \Phi(net_l(k))$, where $\Phi$ denotes the complex-valued nonlinear activation function of the neurons and $net_l(k)$ is the input of the activation function at time k, i.e. the weighted linear sum of all inputs to the node:

$$net_l(k) = \mathbf{w}_l^T \mathbf{p}(k) = \sum_{n=1}^{p+N+1} w_{l,n}\, p_n(k)$$

$$y_l(k) = \Phi^r(net_l(k)) + j\,\Phi^i(net_l(k)) = u_l(k) + j\,v_l(k).$$
The activation function is the elementary transcendental function Φ(z) = tanh(z).
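The CRTRL derivation later in the document relies on the activation being analytic, i.e. on the Cauchy-Riemann equations holding; for Φ(z) = tanh(z) this can be checked numerically. A minimal sketch, with an arbitrary test point and finite-difference step:

```python
import numpy as np

def phi(z):
    """Fully complex activation Phi(z) = tanh(z), as selected above."""
    return np.tanh(z)

# Central finite differences at a test point z0 = x + jy.
# Writing Phi = u + jv, analyticity (Cauchy-Riemann) requires
# du/dx = dv/dy and du/dy = -dv/dx.
z0, h = 0.3 + 0.2j, 1e-6
du_dx = (phi(z0 + h) - phi(z0 - h)).real / (2 * h)
dv_dx = (phi(z0 + h) - phi(z0 - h)).imag / (2 * h)
du_dy = (phi(z0 + 1j * h) - phi(z0 - 1j * h)).real / (2 * h)
dv_dy = (phi(z0 + 1j * h) - phi(z0 - 1j * h)).imag / (2 * h)
```

Because tanh is entire away from its poles on the imaginary axis, both identities hold to finite-difference accuracy at any ordinary point.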
In the step (4), a recursive learning training algorithm is used for updating parameters, specifically:
Step 1: assume the output of each neuron in the output layer is y_t(k), t = 1, ..., N, with output error e_t(k) composed of a real part $e_t^r(k)$ and an imaginary part $e_t^i(k)$:

$$e_t(k) = d(k) - y_t(k) = e_t^r(k) + j\,e_t^i(k), \quad e_t^r(k) = d^r(k) - u_t(k), \quad e_t^i(k) = d^i(k) - v_t(k)$$

where $d(k) = d^r(k) + j\,d^i(k)$ represents the reference signal.

Step 2: the difference between the reference signal and the neuron output signals is taken as the cost function, expressed as

$$E(k) = \frac{1}{2}\sum_t \left[\left(e_t^r(k)\right)^2 + \left(e_t^i(k)\right)^2\right] = \frac{1}{2}\sum_t e_t(k)\,e_t^*(k)$$

Step 3: each weight $w_{l,n} \in \mathbf{W}$, l = 1, ..., N, n = 1, ..., p+N+1, is updated by the equation

$$w_{l,n}(k+1) = w_{l,n}(k) - \eta\,\nabla_{w_{l,n}} E(k)$$

where η is the learning rate and the gradient is

$$\nabla_{w_{l,n}} E(k) = \frac{\partial E(k)}{\partial w_{l,n}^r} + j\,\frac{\partial E(k)}{\partial w_{l,n}^i} = -\sum_t e_t(k)\left[\pi_{l,n}^t(k)\right]^*$$

with $(\cdot)^*$ the conjugate of a complex number and $\pi_{l,n}^t(k) = \partial u_t(k)/\partial w_{l,n}^r + j\,\partial v_t(k)/\partial w_{l,n}^r$ a simplified representation of the sensitivity of output t to weight $w_{l,n}$; the sensitivities are updated recursively by

$$\pi_{l,n}^t(k+1) = \Phi'(net_t(k))\left[\sum_{m=1}^{N} w_{t,m}(k)\,\pi_{l,n}^m(k) + \delta_{tl}\,p_n(k)\right]$$

where $\delta_{tl}$ is the Kronecker delta function and $w_{t,m}$ are the feedback weights.
Compared with the prior art, the invention has the following advantages:
(1) The method models the power amplifier with an FCRNN, which quickly and accurately expresses the nonlinear characteristic and memory effect of the amplifier and has strong approximation capability. The model is trained with the complex-valued real-time recurrent learning (CRTRL) algorithm, characterizing the power amplifier accurately so that it can be linearly corrected; at the same time, CRTRL makes the model converge faster, shortening training time and reducing the computational complexity of the model.
(2) Compared with conventional power amplifier digital predistortion techniques, the method achieves higher modeling accuracy because the influence of complex-valued signals is taken into account.
(3) The method models the power amplifier better and adopts a direct-learning structure, so linear correction and compensation of the amplifier are performed better.
(4) The method is suitable not only for establishing the predistortion model of a class-F power amplifier but also for other power amplifier types.
Drawings
FIG. 1 is a schematic diagram of the complex-valued fully-connected recurrent neural network architecture of the present invention;
FIG. 2 is a schematic diagram of the implementation of the method of the present invention;
FIG. 3 is a diagram showing the modeling effect of the FCRNN model according to an embodiment of the present invention;
FIG. 4 is a graph of a predistorter effect simulation of a D-FCRNN setup in an embodiment of the present invention.
Detailed Description
The invention adopts a complex-valued fully-connected recurrent neural network (Fully connected recurrent neural network, FCRNN) to establish an effective power amplification model, trains the model by using a complex-valued real-time recurrent learning (CRTRL) algorithm, and obtains a predistorter model by inverting the trained model, which specifically comprises the following steps:
step A, signal data x (n) is sent to a hardware communication system, an output signal y (n) of a radio frequency power amplifier is obtained through a hardware feedback channel, and then step B is carried out;
b, performing autocorrelation synchronization on the collected output signal y (n) and the input signal x (n), performing synchronous alignment processing on the input and output signals, and then entering the step C;
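Step B's correlation-based synchronization can be sketched with an FFT cross-correlation; `align_by_correlation` is a hypothetical helper, and the integer, circular delay is a simplifying assumption (a real feedback path also needs fractional-delay and phase alignment):

```python
import numpy as np

def align_by_correlation(x, y):
    """Estimate the integer delay of y relative to x via FFT cross-correlation
    and return the lag together with a re-aligned copy of y."""
    n = len(x)
    # Zero-padded circular cross-correlation: IDFT(Y * conj(X))
    spec = np.fft.fft(y, 2 * n) * np.conj(np.fft.fft(x, 2 * n))
    xcorr = np.fft.ifft(spec)
    lag = int(np.argmax(np.abs(xcorr[:n])))   # search non-negative lags only
    return lag, np.roll(y, -lag)

# Toy check: a complex signal delayed by 7 samples is detected and re-aligned.
rng = np.random.default_rng(0)
x = rng.standard_normal(512) + 1j * rng.standard_normal(512)
y = np.roll(x, 7)
lag, y_aligned = align_by_correlation(x, y)
```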
and C, performing normalization processing on the input signal x (n) and the sampled output signal y (n), and performing preliminary power amplifier modeling by using a complex-valued fully-connected recurrent neural network model.
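The patent does not specify which normalization step C uses; a minimal sketch, assuming peak-magnitude normalization of the complex sequences:

```python
import numpy as np

def normalize(x):
    """Scale a complex sequence to unit peak magnitude (one common choice;
    the patent does not state which normalization it applies)."""
    return x / np.max(np.abs(x))

x = np.array([0.5 + 0.5j, -2.0 + 0.0j, 1.0 - 1.0j])
x_n = normalize(x)   # peak magnitude becomes exactly 1
```

Normalizing both x(n) and y(n) keeps the tanh activations away from saturation and makes the learning rate easier to choose.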
In the fully-connected recurrent neural network model of the invention, shown in fig. 1, the inputs, weights and outputs of the network are all complex numbers; the network structure is extended directly from the real domain to the complex domain. The network of fig. 1 consists of N neurons (shown by circles in the figure), p external inputs and N feedback connection lines, in a two-layer structure: an external input/feedback layer and an output processing layer. The complex-valued output of each neuron at time k is denoted y_l(k), l = 1, ..., N. The external input is represented by p delay terms of s(k), forming a 1×p vector, where s(k) is the input sample sequence. The total input vector p(k) of the whole network is the series connection of the delayed inputs s(k), the feedback outputs y_l(k) and a bias input (1+j), the bias introducing an external complex-valued offset. The whole input can be described as:

$$\mathbf{p}(k) = \left[s(k),\, s(k-1),\, \ldots,\, s(k-p+1),\ 1+j,\ y_1(k),\, \ldots,\, y_N(k)\right]^T$$

where $(\cdot)^T$ denotes the transpose of a vector, and $(\cdot)^r$ and $(\cdot)^i$ denote the real and imaginary parts of a complex value or complex vector.

For the l-th neuron, the weight vector has the same (p+N+1)×1 dimension as the input vector (preferably initialized with random values of magnitude less than 1), expressed as:

$$\mathbf{w}_l = \left[w_{l,1},\, w_{l,2},\, \ldots,\, w_{l,p+N+1}\right]^T$$

The complex-valued weight matrix of the whole neural network is $\mathbf{W} = [\mathbf{w}_1, \ldots, \mathbf{w}_N]$.

The output of each neuron is defined as $y_l(k) = \Phi(net_l(k))$, where $\Phi$ represents the complex-valued nonlinear activation function of the neurons; to enable complex-valued gradient learning, the activation function is chosen as the elementary transcendental function $\Phi(z) = \tanh(z)$. Here $net_l(k)$ is the input of the activation function at time k, i.e. the weighted linear sum of all inputs to the node:

$$net_l(k) = \mathbf{w}_l^T \mathbf{p}(k) = \sum_{n=1}^{p+N+1} w_{l,n}\, p_n(k)$$

For brevity, $y_l(k)$ can be expressed as:

$$y_l(k) = \Phi^r(net_l(k)) + j\,\Phi^i(net_l(k)) = u_l(k) + j\,v_l(k)$$
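The forward pass just described (total input vector, weighted sums, complex tanh) can be sketched in a few lines of numpy. The dimensions N = 5 and p = 4 match the embodiment described later; the small random initialization follows the "magnitude less than 1" suggestion above and is otherwise an assumption:

```python
import numpy as np

def fcrnn_forward_step(W, s_taps, y_prev):
    """One forward step of the complex-valued FCRNN described above.

    W       (N, p+N+1) complex weight matrix, one row per neuron
    s_taps  (p,) external input s(k) and its delayed samples
    y_prev  (N,) neuron outputs from the previous time step
    """
    p_k = np.concatenate([s_taps, [1 + 1j], y_prev])  # total input vector p(k)
    net = W @ p_k                                     # net_l(k) = w_l^T p(k)
    return np.tanh(net)                               # y_l(k) = Phi(net_l(k))

# Toy dimensions: N = 5 feedback neurons, p = 4 input taps.
rng = np.random.default_rng(1)
N, p = 5, 4
W = 0.1 * (rng.standard_normal((N, p + N + 1))
           + 1j * rng.standard_normal((N, p + N + 1)))
y = np.zeros(N, dtype=complex)
s = rng.standard_normal(p) + 1j * rng.standard_normal(p)
y = fcrnn_forward_step(W, s, y)
```

Iterating this step over the sample sequence, feeding each output back into the next input vector, produces the network's response to the whole signal.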
Step D: for the established preliminary power amplifier model, an improved RTRL training algorithm (referred to in the invention as CRTRL) is used to update the model parameters, obtaining the final power amplifier model.
The CRTRL algorithm specifically comprises the following components:
Step 1: the output of each neuron in the output layer is y_t(k), t = 1, ..., N, and the output error e_t(k) is composed of a real part $e_t^r(k)$ and an imaginary part $e_t^i(k)$:

$$e_t(k) = d(k) - y_t(k) = e_t^r(k) + j\,e_t^i(k), \quad e_t^r(k) = d^r(k) - u_t(k), \quad e_t^i(k) = d^i(k) - v_t(k)$$

Step 2: in step 1, $u_t(k)$ and $v_t(k)$ are the real and imaginary parts of the output, and $d(k) = d^r(k) + j\,d^i(k)$ represents the learning signal, i.e. the reference signal. The cost function is the difference between the reference signal and the neuron output signals, expressed as

$$E(k) = \frac{1}{2}\sum_t \left[\left(e_t^r(k)\right)^2 + \left(e_t^i(k)\right)^2\right]$$

Step 3: each weight $w_{l,n} \in \mathbf{W}$, l = 1, ..., N, n = 1, ..., p+N+1, is updated by the equation

$$w_{l,n}(k+1) = w_{l,n}(k) - \eta\,\nabla_{w_{l,n}} E(k)$$

where η, the learning rate, is a very small constant and $\nabla_{w_{l,n}} E(k)$ is the partial derivative, i.e. the gradient.

Step 4: the gradient is obtained from the partial derivatives of E(k) with respect to the real and imaginary parts of the weight coefficient:

$$\nabla_{w_{l,n}} E(k) = \frac{\partial E(k)}{\partial w_{l,n}^r} + j\,\frac{\partial E(k)}{\partial w_{l,n}^i}$$

Step 5: carrying out the gradient calculation, the partial derivative with respect to the real part of the complex weight is

$$\frac{\partial E(k)}{\partial w_{l,n}^r} = -\sum_t\left[e_t^r(k)\,\frac{\partial u_t(k)}{\partial w_{l,n}^r} + e_t^i(k)\,\frac{\partial v_t(k)}{\partial w_{l,n}^r}\right]$$

and the partial derivative with respect to the imaginary part is

$$\frac{\partial E(k)}{\partial w_{l,n}^i} = -\sum_t\left[e_t^r(k)\,\frac{\partial u_t(k)}{\partial w_{l,n}^i} + e_t^i(k)\,\frac{\partial v_t(k)}{\partial w_{l,n}^i}\right]$$

Step 6: the quantities $\partial y_t(k)/\partial w_{l,n}^r$ and $\partial y_t(k)/\partial w_{l,n}^i$ are expressed as sensitivities, denoted respectively

$$\pi_{l,n}^{t,r}(k) = \frac{\partial u_t(k)}{\partial w_{l,n}^r} + j\,\frac{\partial v_t(k)}{\partial w_{l,n}^r}, \qquad \pi_{l,n}^{t,i}(k) = \frac{\partial u_t(k)}{\partial w_{l,n}^i} + j\,\frac{\partial v_t(k)}{\partial w_{l,n}^i}$$

Step 7: the sensitivities in step 6 should satisfy the relation of the Cauchy-Riemann equations:

$$\frac{\partial u_t(k)}{\partial w_{l,n}^i} = -\frac{\partial v_t(k)}{\partial w_{l,n}^r}, \qquad \frac{\partial v_t(k)}{\partial w_{l,n}^i} = \frac{\partial u_t(k)}{\partial w_{l,n}^r}$$

from which the relation $\pi_{l,n}^{t,i}(k) = j\,\pi_{l,n}^{t,r}(k)$ is obtained by calculation.

Step 8: substituting steps 5, 6 and 7 into step 4 gives, writing $\pi_{l,n}^t(k) \equiv \pi_{l,n}^{t,r}(k)$,

$$\nabla_{w_{l,n}} E(k) = -\sum_t e_t(k)\left[\pi_{l,n}^t(k)\right]^*$$

where $(\cdot)^*$ denotes the conjugate of the complex number.

Step 9: the update equation of the sensitivity function is obtained by calculation as

$$\pi_{l,n}^t(k+1) = \Phi'(net_t(k))\left[\sum_{m=1}^{N} w_{t,m}(k)\,\pi_{l,n}^m(k) + \delta_{tl}\,p_n(k)\right]$$

where $\delta_{tl}$ is the Kronecker delta function, $w_{t,m}$ are the feedback weights, and the sensitivities are initialized to zero.

Step 10: substituting steps 8 and 9 back into step 3 yields the weight update equation:

$$w_{l,n}(k+1) = w_{l,n}(k) + \eta \sum_t e_t(k)\left[\pi_{l,n}^t(k)\right]^*$$
After the algorithm iterations finish, the updated weights are obtained, yielding the optimized model prediction output; the difference between the model prediction and the reference signal serves as the modeling-effect evaluation.
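The steps above can be condensed into a single-update sketch following the standard complex RTRL form. The teacher signal, dimensions, learning rate and the choice of neuron 0 as the sole output are illustrative assumptions, not values from the patent:

```python
import numpy as np

def crtrl_step(W, s_taps, y_prev, pi_prev, d, eta=0.05):
    """One CRTRL update (a sketch of the standard complex RTRL form).

    W        (N, M) complex weights, M = p + N + 1 (inputs, bias, feedback)
    s_taps   (p,)   current and delayed external input samples
    y_prev   (N,)   neuron outputs from the previous step
    pi_prev  (N, N, M) sensitivities pi[t, l, n] = d y_t / d w_{l,n}
    d        complex reference (teacher) sample for output neuron 0
    """
    p_dim = len(s_taps)
    N, M = W.shape
    p_k = np.concatenate([s_taps, [1 + 1j], y_prev])   # total input vector
    net = W @ p_k
    y = np.tanh(net)                                   # Phi(z) = tanh(z)
    dphi = 1.0 - y ** 2                                # Phi'(z) = 1 - tanh(z)^2

    # Sensitivity recursion (step 9):
    # pi[t,l,n] <- Phi'(net_t) * (sum_m w_{t,m} pi_prev[m,l,n] + delta_{tl} p_n)
    Wfb = W[:, p_dim + 1:]                             # (N, N) feedback weights
    pi = np.einsum('tm,mln->tln', Wfb, pi_prev)
    for t in range(N):
        pi[t, t, :] += p_k
    pi *= dphi[:, None, None]

    # Error at the output neuron and gradient descent (steps 8 and 10):
    # w <- w + eta * e * conj(pi), i.e. w <- w - eta * grad E
    e = d - y[0]
    W = W + eta * e * np.conj(pi[0])
    return W, y, pi, e

# Toy run: teach the network to hold a constant complex target with no input.
rng = np.random.default_rng(2)
N, p = 2, 2
M = p + N + 1
W = 0.05 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))
pi = np.zeros((N, N, M), dtype=complex)
y = np.zeros(N, dtype=complex)
errs = []
for _ in range(300):
    W, y, pi, e = crtrl_step(W, np.zeros(p), y, pi, 0.5 + 0.5j)
    errs.append(abs(e))
```

On this toy fixed-point task the error magnitude shrinks steadily, which is the qualitative behavior the patent relies on for power amplifier model training.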
E, using an inversion method, taking an output signal of the power amplifier as a model input, taking the input signal of the power amplifier as a reference signal for modeling, obtaining an inverse model of the power amplifier, namely, a digital predistorter of the power amplifier, and then entering the step F;
step F, inputting a signal x (n), entering a digital predistorter to obtain an output sequence signal z (n), processing the output sequence signal z (n) by a power amplification model to obtain an output sampling signal v (n), and entering a step G;
step G, obtaining an absolute error signal |e (n) | according to e (n) =x (n) -v (n), and judging the predistorter effect according to the magnitude of |e (n) |, as shown in fig. 2.
When the predistorted signal is input into the dual-band power amplifier, the output signal exhibits a linear relation to the amplifier's input signal, greatly improving the nonlinear distortion characteristic of the power amplifier. After predistortion, the phase variation of the signal is opposite to the phase-variation law before predistortion, and after amplification by the dual-band power amplifier the phase difference between input and output signals is essentially zero, demonstrating that the memory effect of the power amplifier is also improved.
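The closed-loop check of steps F and G can be illustrated with a stand-in pair of models. The Rapp-like AM/AM curve and its analytic inverse below are hypothetical substitutes for the trained FCRNN model and its inversion, chosen so the residual |e(n)| is provably near zero when the cascade is exact:

```python
import numpy as np

# Hypothetical stand-ins (NOT the patent's FCRNN): a memoryless saturating
# PA model and its exact analytic inverse used as the predistorter.
def pa_model(z, a=1.0):
    return z / np.sqrt(1.0 + a * np.abs(z) ** 2)   # Rapp-like AM/AM curve

def predistorter(x, a=1.0):
    return x / np.sqrt(1.0 - a * np.abs(x) ** 2)   # analytic inverse, |x| < 1

rng = np.random.default_rng(3)
x = 0.7 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
x = np.clip(np.abs(x), None, 0.95) * np.exp(1j * np.angle(x))  # keep |x| < 1

z = predistorter(x)        # step F: predistorted drive signal z(n)
v = pa_model(z)            # cascade through the PA model -> v(n)
e = x - v                  # step G: judge quality from |e(n)|
```

With an imperfect inverse (as with any trained predistorter), |e(n)| would instead settle at a small residual floor that directly measures predistortion quality.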
As a preferred embodiment of the invention, referring to fig. 3 of the specification, the memory depth of the model is set to 4 and the number of feedback neurons to 5, with the CRTRL training algorithm; using 8000 training samples and 4000 prediction samples, the error between the predicted output of the FCRNN model and the actual output of the class-F power amplifier is small, with an NMSE of -46.51 dB, so the model fits the nonlinear characteristic of the power amplifier well.
As a preferred embodiment of the present invention, referring to fig. 4 of the specification, this embodiment shows the output power spectral density of the power amplifier before and after digital predistortion. Before predistortion the adjacent-channel power ratio of the power amplifier output is -33.87 dBc; after digital predistortion the ACPR of the output signal is -51.53 dBc, a correction of 17.66 dB. This demonstrates that, under excitation by a 10 MHz-bandwidth WCDMA signal, the digital predistortion technique effectively suppresses the spectral regrowth of the power amplifier output signal.
What is not described in detail in the present specification is a well known technology to those skilled in the art.

Claims (3)

1. The power amplification digital predistortion method of the complex-valued fully-connected recurrent neural network model is characterized by comprising the following steps of:
(1) Transmitting signal data x (n) to a hardware communication system, and acquiring an output signal y (n) of the radio frequency power amplifier through a feedback channel of the hardware communication system;
(2) Performing autocorrelation synchronization on y (n) and x (n), and performing synchronization alignment processing on input and output signals;
(3) After carrying out normalization treatment on x (n) and y (n), carrying out preliminary power amplifier modeling by using a complex-valued fully-connected recurrent neural network model;
(4) Carrying out parameter updating on the established preliminary power amplifier model by using a recursion learning training algorithm to obtain a final power amplifier model;
(5) Using an inversion method, taking an output signal of the power amplifier as a model input, and taking an input signal of the power amplifier as a reference signal for modeling to obtain an inverse model of the power amplifier, namely the power amplifier digital predistorter;
(6) The input signal x (n) enters a power amplification digital predistorter to obtain an output sequence signal z (n), and the output sequence signal z (n) is processed by a power amplification model to obtain an output sampling signal v (n);
(7) Obtaining an absolute error signal |e (n) | according to e (n) =x (n) -v (n), and judging a predistorter effect according to the magnitude of the |e (n) |;
in the complex-valued fully-connected recurrent neural network model, the inputs, weights and outputs are all complex; the model comprises N neurons, p external inputs and N feedback connections, in a two-layer structure: an external input/feedback layer and an output processing layer; the complex-valued output of each neuron at time k is denoted y_l(k), l = 1, ..., N; the external input is represented by p delay terms of the input sample sequence s(k), forming a 1×p vector; the total input vector p(k) of the whole network is the series connection of the delayed inputs s(k), a bias input (1+j) and the feedback outputs y_l(k), expressed as:

$$\mathbf{p}(k) = \left[s(k),\, s(k-1),\, \ldots,\, s(k-p+1),\ 1+j,\ y_1(k),\, \ldots,\, y_N(k)\right]^T$$

where $(\cdot)^T$ denotes the transpose of a vector, and $(\cdot)^r$ and $(\cdot)^i$ denote the real and imaginary parts of a complex value or complex vector; for the l-th neuron, its weight vector has the same (p+N+1)×1 dimension as the input vector, expressed as:

$$\mathbf{w}_l = \left[w_{l,1},\, w_{l,2},\, \ldots,\, w_{l,p+N+1}\right]^T$$

the complex-valued weight matrix of the whole network is $\mathbf{W} = [\mathbf{w}_1, \ldots, \mathbf{w}_N]$;

the output of each neuron is defined as $y_l(k) = \Phi(net_l(k))$, where $\Phi$ represents the complex-valued nonlinear activation function of the neurons and $net_l(k)$ is the input of the activation function at time k, i.e. the weighted linear sum of all inputs to the node, expressed as:

$$net_l(k) = \mathbf{w}_l^T \mathbf{p}(k) = \sum_{n=1}^{p+N+1} w_{l,n}\, p_n(k)$$

$$y_l(k) = \Phi^r(net_l(k)) + j\,\Phi^i(net_l(k)) = u_l(k) + j\,v_l(k).$$
2. the power amplifier digital predistortion method of a complex-valued fully connected recurrent neural network model according to claim 1, characterized in that: the activation function is an elementary transcendental function Φ (z) =tanh (z).
3. The power amplifier digital predistortion method of a complex-valued fully connected recurrent neural network model according to claim 1, characterized in that: in the step (4), a recursive learning training algorithm is used for updating parameters, specifically:
Step 1: assume the output of each neuron in the output layer is y_t(k), t = 1, ..., N, with output error e_t(k) composed of a real part $e_t^r(k)$ and an imaginary part $e_t^i(k)$:

$$e_t(k) = d(k) - y_t(k) = e_t^r(k) + j\,e_t^i(k), \quad e_t^r(k) = d^r(k) - u_t(k), \quad e_t^i(k) = d^i(k) - v_t(k)$$

where $d(k) = d^r(k) + j\,d^i(k)$ represents the reference signal;

Step 2: the difference between the reference signal and the neuron output signals is taken as the cost function, expressed as

$$E(k) = \frac{1}{2}\sum_t \left[\left(e_t^r(k)\right)^2 + \left(e_t^i(k)\right)^2\right]$$

Step 3: each weight $w_{l,n} \in \mathbf{W}$, l = 1, ..., N, n = 1, ..., p+N+1, is updated by the equation

$$w_{l,n}(k+1) = w_{l,n}(k) - \eta\,\nabla_{w_{l,n}} E(k)$$

where η is the learning rate and the gradient is

$$\nabla_{w_{l,n}} E(k) = \frac{\partial E(k)}{\partial w_{l,n}^r} + j\,\frac{\partial E(k)}{\partial w_{l,n}^i} = -\sum_t e_t(k)\left[\pi_{l,n}^t(k)\right]^*$$

with $(\cdot)^*$ the conjugate of a complex number and $\pi_{l,n}^t(k) = \partial u_t(k)/\partial w_{l,n}^r + j\,\partial v_t(k)/\partial w_{l,n}^r$ a simplified representation of the sensitivity of output t to weight $w_{l,n}$; the sensitivities are updated recursively by

$$\pi_{l,n}^t(k+1) = \Phi'(net_t(k))\left[\sum_{m=1}^{N} w_{t,m}(k)\,\pi_{l,n}^m(k) + \delta_{tl}\,p_n(k)\right]$$

where $\delta_{tl}$ is the Kronecker delta function and $w_{t,m}$ are the feedback weights.
CN202010059771.3A 2020-01-19 2020-01-19 Power amplifier digital predistortion method of complex-valued fully-connected recurrent neural network model Active CN111245375B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010059771.3A CN111245375B (en) 2020-01-19 2020-01-19 Power amplifier digital predistortion method of complex-valued fully-connected recurrent neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010059771.3A CN111245375B (en) 2020-01-19 2020-01-19 Power amplifier digital predistortion method of complex-valued fully-connected recurrent neural network model

Publications (2)

Publication Number Publication Date
CN111245375A CN111245375A (en) 2020-06-05
CN111245375B true CN111245375B (en) 2023-06-06

Family

ID=70876285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010059771.3A Active CN111245375B (en) 2020-01-19 2020-01-19 Power amplifier digital predistortion method of complex-valued fully-connected recurrent neural network model

Country Status (1)

Country Link
CN (1) CN111245375B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111988250B (en) * 2020-07-14 2023-03-10 清华大学 Simulation full-connection hybrid beam forming system and transmitter
CN112597705B (en) * 2020-12-28 2022-05-24 哈尔滨工业大学 Multi-feature health factor fusion method based on SCVNN
CN114911837A (en) * 2021-02-07 2022-08-16 大唐移动通信设备有限公司 Pre-distortion processing method and device
CN113612455B (en) * 2021-08-16 2024-01-26 重庆大学 Working method of digital predistortion system based on iterative learning control and main curve analysis
CN113468842A (en) * 2021-08-16 2021-10-01 重庆大学 Wideband digital predistortion algorithm based on vector quantization
CN115378446B (en) * 2022-10-25 2023-01-10 北京力通通信有限公司 Broadband digital predistortion system and method based on neural network

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101072220A (en) * 2006-05-08 2007-11-14 中兴通讯股份有限公司 Radical basis function neural network predistortion method for adaptive power amplifier
CN101320960A (en) * 2008-07-18 2008-12-10 东南大学 Power amplifier predistortion method of Hammerstein model based on fuzzy neural network
CN101997492A (en) * 2010-09-29 2011-03-30 东南大学 Simplified fuzzy neural network reinforced Wiener model based power amplifier predistortion method
CN102427438A (en) * 2011-11-29 2012-04-25 电子科技大学 Parameter training method for adaptive digital pre-distortion
EP2538553A1 (en) * 2011-06-21 2012-12-26 Alcatel Lucent Apparatus and method for mitigating impairments of a transmit signal
CN103731105A (en) * 2014-01-03 2014-04-16 东南大学 Amplifier digital pre-distortion device and method based on dynamic fuzzy neural network
WO2017118202A1 (en) * 2016-01-04 2017-07-13 中兴通讯股份有限公司 Software-and-hardware-coordinated digital pre-distortion method and device
CN110414565A (en) * 2019-05-06 2019-11-05 北京邮电大学 A kind of neural network method of cutting out based on Group Lasso for power amplifier
CN110601665A (en) * 2019-08-23 2019-12-20 海南电网有限责任公司 Digital predistorter design method and device based on power amplifier model clipping

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1177449C (en) * 2002-04-23 2004-11-24 华为技术有限公司 Method for improving the efficiency of an RF power amplifier based on baseband digital predistortion technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Adaptive predistortion of power amplifiers based on BP neural networks; Deng Hongmin et al.; Journal on Communications (Issue 11); full text *
Simplified joint predistortion method combining a filter lookup table and a neural network; Liu Yue et al.; Computer Engineering (Issue 01); full text *

Also Published As

Publication number Publication date
CN111245375A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN111245375B (en) Power amplifier digital predistortion method of complex-valued fully-connected recurrent neural network model
Hu et al. Convolutional neural network for behavioral modeling and predistortion of wideband power amplifiers
CN110765720B (en) Power amplifier predistortion method of complex-valued pipeline recurrent neural network model
WO2015096735A1 (en) Digital pre-distortion parameter obtaining method and pre-distortion system
CN111490737B (en) Nonlinear compensation method and equipment for power amplifier
CN106506430A (en) A new algorithm for compensating peak-to-average power ratio nonlinear distortion based on compressed sensing technology
CN105471784A (en) Digital predistortion method of jointly compensating for IQ imbalance and PA non-linearity
WO2012174842A1 (en) Distortion correction apparatus and method for non-linear system
CN102075469B (en) Estimation method for signal delay time of digital pre-distortion system
CN111859795A (en) Polynomial-assisted neural network behavior modeling system and method for power amplifier
Jiang et al. Block-oriented time-delay neural network behavioral model for digital predistortion of RF power amplifiers
CN113221308A (en) Transfer-learning-based rapid low-complexity modeling method for power amplifiers
CN107786174B (en) Circuit and method for predistortion
CN110086438B (en) Digital predistortion system and method for passive multi-beam transmitter
CN114598274B (en) Low-complexity lookup table construction method oriented to broadband predistortion
CN111884602B (en) Power amplifier predistortion method based on single-output-node neural network
Vaicaitis et al. Segmented Spline Curve Neural Network for Low Latency Digital Predistortion of RF Power Amplifiers
CN115913844A (en) MIMO system digital predistortion compensation method, device, equipment and storage medium based on neural network
Suo et al. A residual-fitting modeling method for digital predistortion of broadband power amplifiers
Falempin et al. Low-complexity adaptive digital pre-distortion with meta-learning based neural networks
Varahram et al. A digital pre-distortion based on nonlinear autoregressive with exogenous inputs
Elaskary et al. Closed-form analysis for indirect learning digital pre-distorter for wideband wireless systems
CN115459717A (en) Power self-adaptive neural network digital predistortion system and method
Bipin et al. A novel predistorter based on MPSO for power amplifier linearization
CN115378446B (en) Broadband digital predistortion system and method based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant