WO2022185505A1 - Learning device for distortion compensation circuit, and distortion compensation circuit - Google Patents


Info

Publication number
WO2022185505A1
Authority
WO
WIPO (PCT)
Prior art keywords
behavior model
distortion compensation
unit
teacher data
compensation circuit
Prior art date
Application number
PCT/JP2021/008580
Other languages
French (fr)
Japanese (ja)
Inventor
Nobuhiko Ando (安藤 暢彦)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation (三菱電機株式会社)
Priority to JP2022577079A priority Critical patent/JP7418619B2/en
Priority to PCT/JP2021/008580 priority patent/WO2022185505A1/en
Publication of WO2022185505A1 publication Critical patent/WO2022185505A1/en

Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03F: AMPLIFIERS
    • H03F1/00: Details of amplifiers with only discharge tubes, only semiconductor devices or only unspecified devices as amplifying elements
    • H03F1/32: Modifications of amplifiers to reduce non-linear distortion
    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03F: AMPLIFIERS
    • H03F3/00: Amplifiers with only discharge tubes or only semiconductor devices as amplifying elements
    • H03F3/20: Power amplifiers, e.g. Class B amplifiers, Class C amplifiers
    • H03F3/24: Power amplifiers, e.g. Class B amplifiers, Class C amplifiers of transmitter output stages

Definitions

  • The present disclosure relates to a learning device for a distortion compensation circuit, and to a distortion compensation circuit.
  • A high-frequency circuit, a component of a radio transmitter, is composed of devices such as power amplifiers and filters; it amplifies the power of a transmission signal and outputs the power-amplified transmission signal to an antenna.
  • When the power of the transmission signal is amplified, the frequency characteristics and nonlinear characteristics of the high-frequency circuit distort the transmission signal and degrade its quality. To prevent this degradation of signal quality, one method uses a distortion compensation circuit that, in the stage preceding the high-frequency circuit, adds distortion to the transmission signal so as to cancel out the distortion produced in the high-frequency circuit.
  • Conventionally, a neural network has been used as such a distortion compensation circuit.
  • In a distortion compensation circuit using a neural network, the network is trained to realize the inverse characteristic of the high-frequency circuit, that is, a characteristic whose input-output relationship is the reverse of that of the high-frequency circuit.
  • Training requires teacher data, which can be generated by, for example, the iterative learning control (ILC) processing described in Non-Patent Document 1.
  • However, with the ILC processing of Non-Patent Document 1, there is a problem that, in the repeated calculations used to generate the teacher data, the error does not decrease, the teacher data keeps growing, and the calculation diverges.
  • The present disclosure has been made to solve this problem, and one aspect of the present disclosure aims to provide a learning device for a distortion compensation circuit that can suppress divergence of the calculations in iterative learning control processing.
  • One aspect of the learning device for a distortion compensation circuit is a learning device for a distortion compensation circuit that includes a predistorter to which a transmission signal is input. The learning device comprises: a behavior model generation unit that generates a behavior model of a power amplifier; a teacher data generation unit that includes a behavior model unit holding the generated behavior model and an iterative learning control unit that generates teacher data, serving as an input signal to the power amplifier, by repeatedly performing learning control processing using the error between the transmission signal and the output signal of the behavior model unit; and a coefficient updating unit that includes a predistorter composed of a neural network and calculates the weighting coefficients of the neural network using the transmission signal and the generated teacher data. The iterative learning control unit includes a gain determination processing unit that obtains an updated current error based on the current gain in the generated behavior model and at least one previous gain.
  • According to the learning device for the distortion compensation circuit of the present disclosure, divergence of the repeated calculations in the iterative learning control processing can be suppressed.
  • FIG. 1 is a block diagram showing a configuration example of a distortion compensation circuit.
  • FIG. 2 is a block diagram showing a configuration example of an iterative learning control unit.
  • FIG. 3 is a flow chart showing the operation of the distortion compensation circuit.
  • FIG. 4A is a diagram illustrating a hardware configuration example of a distortion compensation circuit.
  • FIG. 4B is a diagram showing another configuration example of the hardware of the distortion compensation circuit.
  • FIG. 1 is a block diagram showing a configuration example of a distortion compensation circuit 1 according to Embodiment 1.
  • The distortion compensation circuit 1 is a circuit that compensates for distortion of the signal amplified by a power amplifier 3.
  • As shown in FIG. 1, the distortion compensation circuit 1 comprises a predistorter 11 and a learning unit 12.
  • The learning unit 12 is a learning device for the distortion compensation circuit 1.
  • The learning unit 12 includes a behavior model generation unit 121, a teacher data generation unit 122, and a coefficient updating unit 123.
  • The learning unit 12 may be incorporated in the distortion compensation circuit 1 at all times as part of the circuit, or only during learning. Incorporating the learning unit 12 at all times allows distortion to be compensated adaptively when the characteristics of the power amplifier 3 change.
  • The predistorter 11 digitally compensates for distortion of the transmission signal s.
  • The predistorter 11 is configured as a neural network; the weighting coefficients obtained by the coefficient updating unit 123 (described later) are set in this neural network.
  • The predistorter 11 performs distortion compensation processing on the transmission signal s according to the weighting coefficients, and outputs the compensated transmission signal to the digital-to-analog (D/A) converter 2 and to the behavior model generation unit 121.
  • The signal output to the D/A converter 2 is converted into an analog signal and input to the power amplifier 3, which amplifies its power. Because the input to the power amplifier 3 has undergone distortion compensation processing, the occurrence of distortion in the output signal of the power amplifier 3 is suppressed.
  • The power amplifier 3 amplifies the power of the analog signal output from the D/A converter 2.
  • The A/D converter 4 converts the analog signal output from the power amplifier 3 into a digital signal and outputs it to the behavior model generation unit 121.
  • The analog signal from the power amplifier 3 may be attenuated by an attenuator (not shown) before the A/D converter 4 converts it.
  • The behavior model generation unit 121 uses the input/output signals of the power amplifier 3 to generate a behavior model, that is, a model of the behavior of the power amplifier 3.
  • So that the power amplifier 3 can be used over a wide band where memory effects occur, the behavior model generation unit 121 expresses the behavior model BM of the power amplifier 3 by, for example, a memory polynomial, and calculates the coefficients of the polynomial by a technique such as the least squares method. The behavior model generation unit 121 outputs the generated behavior model BM to the teacher data generation unit 122.
  • A detailed description of the memory polynomial is omitted because it is given in, for example, Non-Patent Document 2. Likewise, generation of a behavior model using the least squares method is described in, for example, Non-Patent Document 3, so its details are omitted.
  • Non-Patent Document 2: L. Ding, G. T. Zhou, D. R. Morgan, Z. Ma, J. S. Kenney, J. Kim, C. R. Giardina, "A robust digital baseband predistorter constructed using memory polynomials," IEEE Trans. Commun., vol. 52, no. 1, pp. 159-165, Jan. 2004.
  • Non-Patent Document 3: Amandeep Singh Sappal, "Simplified Memory Polynomial Modelling of Power Amplifier," 2015 International Conference and Workshop on Computing and Communication (IEMCON), 2015.
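As an illustration of the behavior-model step above, the following sketch fits a memory polynomial by least squares. The function names and the nonlinearity order K and memory depth Q are assumptions for this example, not values taken from the patent:

```python
import numpy as np

def mp_features(u, K=3, Q=2):
    """Regressors u(n-q) * |u(n-q)|^(k-1) of the memory polynomial."""
    N = len(u)
    cols = []
    for q in range(Q + 1):                       # memory depth Q
        uq = np.concatenate([np.zeros(q, dtype=complex), u[:N - q]])
        for k in range(1, K + 1):                # nonlinearity order K
            cols.append(uq * np.abs(uq) ** (k - 1))
    return np.column_stack(cols)

def fit_behavior_model(u, z, K=3, Q=2):
    """Least-squares estimate of the memory-polynomial coefficients."""
    coeffs, *_ = np.linalg.lstsq(mp_features(u, K, Q), z, rcond=None)
    return coeffs

def apply_behavior_model(coeffs, u, K=3, Q=2):
    """Evaluate the fitted behavior model BM on an input signal."""
    return mp_features(u, K, Q) @ coeffs
```

When the true amplifier response lies in the span of these regressors, the least-squares fit recovers it essentially exactly, which is why the memory polynomial is a convenient choice for the behavior model BM.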
  • The teacher data generation unit 122 performs iterative learning control (ILC) processing using the behavior model BM generated by the behavior model generation unit 121, thereby generating, as teacher data, an input signal that is optimal as the input to the power amplifier 3.
  • As shown in FIG. 1, the teacher data generation unit 122 comprises a behavior model unit 1223, a synthesis unit 1222, and an iterative learning control (ILC) unit 1221.
  • The behavior model unit 1223 holds the behavior model BM generated by the behavior model generation unit 121.
  • Let s be the transmission signal, u_i the input signal to the behavior model BM in the i-th iteration, z_i the corresponding output signal of the behavior model BM, and N the number of data points. With the error e_i = s - z_i, the input signal u_{i+1} to the behavior model BM is calculated according to the following equation (1):

    u_{i+1} = u_i + G(u_i)^{-1} e_i    (1)

  • Here G(u_i) is the diagonal matrix represented by the following equation (2):

    G(u_i) = diag{G[u_i(0)], ..., G[u_i(N-1)]}    (2)

  • where n is an integer from 0 to N-1 and each diagonal component is given by equation (3):

    G[u_i(n)] = z_i(n) / u_i(n)    (3)
  • When the iterative learning control unit 1221 performs a sufficient number of iterations, the error between the output signal z_i of the behavior model BM and the transmission signal s becomes small.
  • The value finally obtained after repeating the operation a predetermined number of times becomes the teacher data.
  • The teacher data is output to the coefficient updating unit 123.
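The iteration of equations (1) to (3) can be sketched as follows. The memoryless compressive amplifier standing in for the behavior model BM, and all function names, are illustrative assumptions; the gain judgement and suppression processing described below are omitted here:

```python
import numpy as np

def ilc_teacher_data(s, behavior_model, iterations=30):
    """Generate teacher data by iterating u_{i+1} = u_i + G(u_i)^{-1} e_i (eq. 1)."""
    u = s.copy()                   # initial value u_1: any signal the model can accept
    for _ in range(iterations):
        z = behavior_model(u)      # output z_i of the behavior model unit
        e = s - z                  # error e_i from the synthesis unit
        g = z / u                  # diagonal gains G[u_i(n)] = z_i(n)/u_i(n) (eq. 3)
        u = u + e / g              # eq. (1); G(u_i) is diagonal, so divide elementwise
    return u

# A mildly compressive memoryless amplifier stands in for the behavior model BM
amp = lambda x: x * (1.0 - 0.05 * np.abs(x) ** 2)
```

Because G(u_i) is diagonal, the matrix inverse in equation (1) reduces to an elementwise division, and each iteration effectively rescales u_i by s/z_i until the model output matches the transmission signal.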
  • In the iterative learning control processing described above, the iterative learning control unit 1221 additionally performs the processing described below.
  • As shown in FIG. 2, the iterative learning control unit 1221 includes a gain determination processing unit 12211, a teacher data calculation unit 12212, and a teacher data suppression unit 12213. The processing performed by each functional unit of the iterative learning control unit 1221 is described below with reference to FIGS. 2 and 3.
  • FIG. 3 shows a flowchart of the processing of the teacher data generation unit 122 and the iterative learning control unit 1221.
  • In step ST31, the teacher data generation unit 122 calculates the output signal z_i of the behavior model BM, the diagonal matrix components G[u_i(n)], and the error e_i. More specifically, the behavior model unit 1223 receives the input signal u_i as the input of the behavior model BM and calculates the output signal z_i. The initial value u_1 of the input signal can be any value for which the behavior model BM is operable. The synthesis unit 1222 calculates the error e_i from the transmission signal s and the output signal z_i of the behavior model unit 1223. The behavior model unit 1223 or the iterative learning control unit 1221 calculates the components G[u_i(n)] from the input signal u_i and the output signal z_i according to equation (3).
  • The output signal z_i of the behavior model unit 1223 is output not only to the synthesis unit 1222 but also to the iterative learning control unit 1221. Since the component G[u_i(n)] can be regarded as the gain of the power amplifier 3, it is referred to as the gain G[u_i(n)] in the following description.
  • In step ST32, the gain determination processing unit 12211 determines whether the current gain G[u_i(n)] is smaller than the previous gain G[u_{i-1}(n)]. If YES in step ST32, the process proceeds to step ST33.
  • Here the current gain G[u_i(n)] is compared with the previous gain G[u_{i-1}(n)], but it may instead be compared with the average of the gains over a plurality of past iterations. In this way, the gain determination processing unit 12211 obtains the updated current error based on the current gain and at least one previous gain.
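The averaged comparison mentioned above can be sketched as follows. The scaling value alpha (a value greater than 0 and less than 1) and the window length are assumptions for this example:

```python
import numpy as np

def damp_error(e, gain_history, alpha=0.5, window=3):
    """Scale the current error by alpha wherever the newest gain falls below
    the mean of the preceding `window` gains (alpha and window illustrative)."""
    g_now = gain_history[-1]                              # current gains, per sample
    g_ref = np.mean(gain_history[-1 - window:-1], axis=0) # mean of past gains
    return np.where(np.abs(g_now) < np.abs(g_ref), alpha * e, e)
```

Comparing against an average rather than a single previous value makes the judgement less sensitive to a one-off fluctuation in the estimated gain.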
  • In step ST34, the teacher data calculation unit 12212 calculates the input signal u_{i+1} to the behavior model BM for the (i+1)-th iteration according to equation (1) above.
  • In step ST35, the teacher data suppression unit 12213 determines whether the absolute value of the calculated input signal u_{i+1} is greater than a preset threshold value; when it is, the threshold value is set as the absolute value of u_{i+1}.
  • In step ST37, if the number of iterations i is greater than the maximum number of iterations, the process is terminated and the teacher data is output to the coefficient updating unit 123.
  • As described above, when the current gain is smaller than the previous gain, the error is adjusted by updating the current error e_i(n) (steps ST32 and ST33); the updated error is obtained by multiplying e_i(n) by a value greater than 0 and less than 1.
  • When the gain G[u_i(n)] becomes smaller than the previous value G[u_{i-1}(n)], reducing the error in this way lessens its influence, and the teacher data can be prevented from becoming large.
  • When the gain G[u_i(n)] becomes larger than the previous value G[u_{i-1}(n)], the factor G(u_i)^{-1} included in equation (1) suppresses the size of the teacher data.
  • When the teacher data u_{i-1} is an input signal in the saturation region of the behavior model BM, it cannot suppress the occurrence of distortion, because signals in the saturation region cannot be compensated. The error therefore does not become small, and the teacher data u_i tends to become larger than u_{i-1}; if this is repeated, the teacher data diverges.
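Putting the gain judgement (steps ST32 and ST33) and the teacher data suppression (step ST35) together, one guarded iteration might look like the following sketch; ALPHA and U_MAX are illustrative assumptions, not values from the patent:

```python
import numpy as np

ALPHA = 0.5    # error-scaling value; the patent only requires 0 < ALPHA < 1
U_MAX = 1.5    # preset threshold on |u|; an illustrative assumption

def ilc_step(s, u, z, g_prev):
    """One guarded ILC iteration; returns (next input signal, current gain)."""
    e = s - z                                   # current error e_i
    g = z / u                                   # current gain G[u_i(n)] = z_i(n)/u_i(n)
    # Steps ST32/ST33: where the gain shrank, damp the error before updating
    e = np.where(np.abs(g) < np.abs(g_prev), ALPHA * e, e)
    # Step ST34: eq. (1) with diagonal G(u_i), i.e. elementwise division
    u_next = u + e / g
    # Step ST35: clip |u_next| to the threshold, preserving its phase
    over = np.abs(u_next) > U_MAX
    u_next = np.where(over, U_MAX * u_next / np.abs(u_next), u_next)
    return u_next, g
```

The damping keeps a shrinking gain (a sign of saturation) from inflating the update, and the clipping bounds the teacher data outright, which together suppress the divergence discussed above.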
  • The coefficient updating unit 123 includes a predistorter 1231 and a synthesis unit 1232.
  • Like the predistorter 11, the predistorter 1231 is composed of a neural network.
  • The coefficient updating unit 123 performs supervised learning, using the teacher data generated by the teacher data generation unit 122, to obtain the weighting coefficients of the neural network operating as the predistorter 11. More specifically, with the transmission signal s used by the teacher data generation unit 122 as the input signal of the neural network and the generated teacher data as its target output signal, the network is trained to obtain the weighting coefficients.
  • For training the neural network, a generally known method such as the Levenberg-Marquardt method (Non-Patent Document 4) or Adam (Non-Patent Document 5) may be used, so a detailed description of the training method is omitted.
  • The weighting coefficients obtained by the coefficient updating unit 123 are output to the predistorter 11.
  • Non-Patent Document 4 (Levenberg-Marquardt method): Hagan, M. T., and M. Menhaj, "Training feedforward networks with the Marquardt algorithm," IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 989-993, 1994.
  • Non-Patent Document 5 (Adam): Kingma, Diederik, and Jimmy Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980 (2014).
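As a hedged sketch of this supervised-learning step: the patent points to Levenberg-Marquardt or Adam, but for brevity this example trains a one-hidden-layer real-valued network with plain gradient descent; all sizes, rates, and names are assumptions:

```python
import numpy as np

def train_predistorter(s, teacher, hidden=8, lr=0.1, epochs=8000, seed=0):
    """Supervised learning of a one-hidden-layer network mapping s -> teacher.

    Full-batch gradient descent on mean squared error; the patent's suggested
    optimizers (Levenberg-Marquardt, Adam) would replace this update loop.
    """
    rng = np.random.default_rng(seed)
    x, y = s.reshape(-1, 1), teacher.reshape(-1, 1)
    W1 = rng.normal(0.0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(x @ W1 + b1)            # hidden activations
        out = h @ W2 + b2                   # network output
        d_out = 2.0 * (out - y) / len(x)    # gradient of MSE w.r.t. output
        d_h = (d_out @ W2.T) * (1.0 - h ** 2)
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (x.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
    return lambda xs: np.tanh(xs.reshape(-1, 1) @ W1 + b1) @ W2 + b2
```

After training, the learned weights would be copied into the operating predistorter 11, which then maps each transmission sample toward its teacher-data counterpart.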
  • All or part of the functional units of the distortion compensation circuit 1 are realized, for example, by a computer having a processor 41 and a memory 42 as shown in FIG. 4A. These functional units are realized by the processor 41 reading out and executing programs stored in the memory 42.
  • The programs may be implemented as software, firmware, or a combination of software and firmware.
  • Examples of the memory 42 include non-volatile or volatile semiconductor memories such as RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable Read Only Memory), and EEPROM (Electrically Erasable Programmable Read Only Memory), as well as magnetic disks, flexible disks, optical disks, compact disks, mini disks, and DVDs.
  • Alternatively, all or part of the functional units of the distortion compensation circuit 1 may be realized by a processing circuit 43 as shown in FIG. 4B instead of the processor 41 and the memory 42.
  • The processing circuit 43 is, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination thereof.
  • Because the distortion compensation circuit 1 is configured as described above, the teacher data can be prevented from diverging during the repeated calculations of the ILC processing, and highly accurate teacher data can be created.
  • The expression used for the behavior model need not be a memory polynomial; a Volterra series may be used instead.
  • The learning device (12) for a distortion compensation circuit of Appendix 1 is a learning device for a distortion compensation circuit comprising a predistorter (11) to which a transmission signal is input. It comprises: a behavior model generation unit (121) that generates a behavior model of a power amplifier; a teacher data generation unit (122) that includes a behavior model unit (1223) holding the generated behavior model and an iterative learning control unit (1221) that generates teacher data, serving as an input signal to the power amplifier, by repeatedly performing learning control processing using the error between the transmission signal and the output signal of the behavior model unit; and a coefficient updating unit (123) that includes a predistorter (1231) composed of a neural network and calculates the weighting coefficients of the neural network using the transmission signal and the generated teacher data. The iterative learning control unit includes a gain determination processing unit (12211) that obtains the updated current error based on the current gain and at least one gain up to the previous time.
  • The learning device for a distortion compensation circuit of Appendix 2 is the learning device of Appendix 1 in which, when the current gain in the generated behavior model is smaller than the previous gain, the gain determination processing unit obtains the updated current error by multiplying the current error by an arbitrary value greater than 0 and less than 1.
  • The learning device for a distortion compensation circuit of Appendix 3 is the learning device of Appendix 1 in which, when the current gain in the generated behavior model is smaller than the average of gains obtained a plurality of times in the past, the gain determination processing unit obtains the updated current error by multiplying the current error by an arbitrary value greater than 0 and less than 1.
  • The learning device for a distortion compensation circuit of Appendix 4 is the learning device of any one of Appendices 1 to 3 in which the iterative learning control unit further comprises a teacher data calculation unit (12212) that calculates the next teacher data according to equation (1) using the updated current error.
  • The learning device for a distortion compensation circuit of Appendix 5 is the learning device of any one of Appendices 1 to 4 in which the iterative learning control unit further comprises a teacher data suppression unit (12213) that, when the absolute value of the calculated next teacher data is greater than a preset threshold value, sets the threshold value as the absolute value of the calculated next teacher data.
  • The distortion compensation circuit (1) of Appendix 6 includes a predistorter (11) that is configured as a neural network and to which a transmission signal is input. The neural network of the predistorter has the weighting coefficients obtained by the learning device for a distortion compensation circuit of any one of Appendices 1 to 5; the predistorter (11) performs distortion compensation processing on the transmission signal according to the weighting coefficients and outputs the transmission signal after the distortion compensation processing.
  • Such a distortion compensation circuit can be used as a circuit that imparts distortion to a transmission signal input to the high-frequency circuit of a radio transmitter.
  • 1 distortion compensation circuit, 2 D/A converter, 3 power amplifier, 4 A/D converter, 11 predistorter, 12 learning unit (learning device), 121 behavior model generation unit, 122 teacher data generation unit, 123 coefficient updating unit, 1221 iterative learning control unit, 1222 synthesis unit, 1223 behavior model unit, 1231 predistorter, 1232 synthesis unit, 12211 gain determination processing unit, 12212 teacher data calculation unit, 12213 teacher data suppression unit, BM behavior model.

Landscapes

  • Engineering & Computer Science (AREA)
  • Power Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Nonlinear Science (AREA)
  • Amplifiers (AREA)

Abstract

Provided is a learning device for a distortion compensation circuit capable of suppressing divergence of the repeated calculations in iterative learning control processing. A learning device (12) for a distortion compensation circuit provided with a predistorter (11) to which a transmission signal is input comprises: a behavior model generation unit (121) for generating a behavior model of a power amplifier; a teacher data generation unit (122) including a behavior model unit (1223) for holding the generated behavior model and an iterative learning control unit (1221) for generating teacher data, serving as an input signal to the power amplifier, by performing iterative learning control processing using an error between the transmission signal and an output signal of the behavior model unit; and a coefficient updating unit (123) which is provided with a predistorter (1231) configured as a neural network and which uses the transmission signal and the generated teacher data to calculate weighting coefficients for the neural network. The iterative learning control unit includes a gain determination processing unit (12211) for obtaining an updated current error on the basis of a current gain in the generated behavior model and at least one previous gain.

Description

歪み補償回路のための学習装置、及び歪み補償回路Learning device for distortion compensation circuit, and distortion compensation circuit
 本開示は、歪み補償回路のための学習装置、及び歪み補償回路に関する。 The present disclosure relates to a learning device for a distortion compensation circuit and a distortion compensation circuit.
 無線送信機の構成要素である高周波回路は、電力増幅器及びフィルタ等のデバイスにより構成され、送信信号の電力を増幅して電力増幅された送信信号をアンテナへ出力する。送信信号の電力を増幅する際、高周波回路の周波数特性や非線形特性により送信信号に歪みが発生することにより送信信号の品質が劣化する。そこで、信号品質の劣化を防止するため、高周波回路で発生する歪みを打ち消すように、高周波回路の前段で送信信号に歪みを加えて高周波回路で発生する歪みを補償する歪み補償回路を用いる方法がある。 A high-frequency circuit, which is a component of a radio transmitter, is composed of devices such as power amplifiers and filters, amplifies the power of a transmission signal, and outputs the power-amplified transmission signal to an antenna. When amplifying the power of a transmission signal, the quality of the transmission signal deteriorates due to distortion in the transmission signal due to the frequency characteristics and nonlinear characteristics of the high frequency circuit. Therefore, in order to prevent deterioration of signal quality, there is a method of using a distortion compensation circuit that compensates for the distortion that occurs in the high-frequency circuit by adding distortion to the transmission signal in the preceding stage of the high-frequency circuit so as to cancel out the distortion that occurs in the high-frequency circuit. be.
 従来、そのような歪み補償回路として、ニューラルネットワークを用いる方法がある。ニューラルネットワークを使用する歪み補償回路では、高周波回路の入出力の特性と入出力の関係が逆である逆特性が得られるようにニューラルネットワークを学習させる。学習には教師データが必要であり、教師データは、例えば非特許文献1に記載の繰返し学習制御(ILC:Iterative Learning Control)処理により生成することができる。 Conventionally, there is a method using a neural network as such a distortion compensation circuit. In a distortion compensation circuit using a neural network, the neural network is trained so as to obtain an inverse characteristic in which the relationship between the input and output of the high frequency circuit is opposite to that of the input and output. Learning requires teacher data, and the teacher data can be generated by iterative learning control (ILC) processing described in Non-Patent Document 1, for example.
 しかしながら、非特許文献1に記載のILC処理によれば、教師データを生成する際の繰返し演算において、誤差が小さくならずに教師データが大きくなり続けて演算が発散するという課題がある。 However, according to the ILC processing described in Non-Patent Document 1, there is a problem that in repeated calculations when generating teacher data, errors do not decrease and the teacher data continues to increase and the calculation diverges.
 本開示は、上述の課題を解決するためになされたものであり、本開示の一側面は、繰返し学習制御処理における演算の発散を抑圧できる歪み補償回路のための学習装置を提供することを目的とする。 The present disclosure has been made to solve the above-described problems, and one aspect of the present disclosure aims to provide a learning device for a distortion compensation circuit that can suppress the divergence of calculations in iterative learning control processing. and
 本開示に係る歪み補償回路のための学習装置の一側面は、送信信号が入力されるプリディストータを備える歪み補償回路のための学習装置であって、電力増幅器のビヘイビアモデルを生成するビヘイビアモデル生成部と、生成されたビヘイビアモデルを保持するビヘイビアモデル部と、前記送信信号と前記ビヘイビアモデル部の出力信号との誤差を用いて繰返し学習制御処理を行うことにより、前記電力増幅器への入力信号としての教師データを生成する繰返し学習制御部と、を備える教師データ生成部と、ニューラルネットワークで構成されたプリディストータを備え、前記送信信号と生成された教師データとを用いて、前記ニューラルネットワークの重み係数を算出する係数更新部と、を備え、前記繰返し学習制御部は、前記生成されたビヘイビアモデルにおける今回の利得と、前回までの少なくとも1つの利得とに基づいて、更新された今回の誤差を求める利得判定処理部を備える。 One aspect of the distortion compensation circuit learning device according to the present disclosure is a distortion compensation circuit learning device including a predistorter to which a transmission signal is input, the behavior model generating a behavior model of a power amplifier. A generation unit, a behavior model unit that holds the generated behavior model, and an input signal to the power amplifier by repeatedly performing learning control processing using an error between the transmission signal and the output signal of the behavior model unit. and a teacher data generation unit comprising a repeated learning control unit that generates teacher data as and a predistorter composed of a neural network, using the transmission signal and the generated teacher data, the neural network and a coefficient updating unit that calculates the weighting coefficient of the iterative learning control unit, based on the current gain in the generated behavior model and at least one gain up to the previous time, the updated current A gain determination processing unit for obtaining an error is provided.
 本開示の歪み補償回路のための学習装置によれば、繰返し学習制御処理における繰返し演算による演算の発散を抑圧できる。 According to the learning device for the distortion compensation circuit of the present disclosure, it is possible to suppress the divergence of calculations due to repeated calculations in the repeated learning control process.
図1は、歪み補償回路の構成例を示すブロック図である。FIG. 1 is a block diagram showing a configuration example of a distortion compensation circuit. 図2は、繰返し学習制御部の構成例を示すブロック図である。FIG. 2 is a block diagram showing a configuration example of an iterative learning control unit. 図3は、歪み補償回路の動作を示すフローチャートである。FIG. 3 is a flow chart showing the operation of the distortion compensation circuit. 図4Aは、歪み補償回路のハードウェアの構成例を示す図である。FIG. 4A is a diagram illustrating a hardware configuration example of a distortion compensation circuit. 図4Bは、歪み補償回路のハードウェアの他の構成例を示す図である。FIG. 4B is a diagram showing another configuration example of the hardware of the distortion compensation circuit.
実施の形態1.
 以下、図面を参照しつつ、本開示に係る種々の実施形態について詳細に説明する。図1は、実施の形態1に係る歪み補償回路1の構成例を示すブロック図である。歪み補償回路1は、電力増幅器3により増幅された信号の歪みを補償する回路である。図1に示されているように、歪み補償回路1は、プリディストータ11と学習部12を備える。学習部12は、歪み補償回路1のための学習装置である。学習部12は、ビヘイビアモデル生成部121と、教師データ生成部122と、係数更新部123とを備える。学習部12は、歪み補償回路1の一部として歪み補償回路1に常時組み込まれてもよいし、学習時のみ組み込まれてもよい。学習部12が歪み補償回路1に常時組み込まれることにより、電力増幅器3の特性が変化した場合に適応的に歪みを補償することができる。
Embodiment 1.
Various embodiments according to the present disclosure will be described in detail below with reference to the drawings. FIG. 1 is a block diagram showing a configuration example of a distortion compensation circuit 1 according to Embodiment 1. As shown in FIG. A distortion compensation circuit 1 is a circuit that compensates for distortion of a signal amplified by a power amplifier 3 . As shown in FIG. 1, the distortion compensating circuit 1 comprises a predistorter 11 and a learning section 12 . A learning unit 12 is a learning device for the distortion compensation circuit 1 . The learning unit 12 includes a behavior model generation unit 121 , a teacher data generation unit 122 and a coefficient update unit 123 . The learning unit 12 may be always incorporated in the distortion compensation circuit 1 as part of the distortion compensation circuit 1, or may be incorporated only during learning. By constantly incorporating the learning unit 12 into the distortion compensation circuit 1, distortion can be adaptively compensated when the characteristics of the power amplifier 3 change.
(プリディストータ)
 プリディストータ11は、送信信号sに対して、デジタル方式で歪み補償を行うプリディストータである。プリディストータ11は、ニューラルネットワークにより構成され、後述の係数更新部123により求められた重み係数をプリディストータ11のニューラルネットワークに設定する。プリディストータ11は、送信信号sに対して重み係数に応じた歪み補償処理を施し、歪み補償処理後の送信信号をデジタルアナログ変換機(D/A変換器)2及びビヘイビアモデル生成部121へ出力する。D/A変換器2へ出力された信号はアナログ信号に変換されて電力増幅器3に入力され、電力増幅器3はその入力されたアナログ信号の電力を増幅する。電力増幅器3への入力信号に歪み補償処理を施すことにより、電力増幅器3の出力信号における歪みの発生が抑制される。
(pre-distorter)
The predistorter 11 is a predistorter that digitally compensates for distortion of the transmission signal s. The predistorter 11 is configured by a neural network, and sets a weighting factor obtained by a factor updater 123 (to be described later) in the neural network of the predistorter 11 . The predistorter 11 performs distortion compensation processing on the transmission signal s according to the weighting factor, and sends the transmission signal after the distortion compensation processing to the digital-to-analog converter (D/A converter) 2 and the behavior model generator 121. Output. The signal output to the D/A converter 2 is converted into an analog signal and input to the power amplifier 3, which amplifies the power of the input analog signal. By subjecting the input signal to the power amplifier 3 to distortion compensation processing, the occurrence of distortion in the output signal of the power amplifier 3 is suppressed.
 電力増幅器3は、D/A変換器2から出力されたアナログ信号の電力を増幅する。A/D変換器4は、電力増幅器3により出力されたアナログ信号をデジタル信号に変換して、ビヘイビアモデル生成部121に出力する。A/D変換器4が変換する電力増幅器3により出力されたアナログ信号は、不図示の減衰器により減衰されていてもよい。 The power amplifier 3 amplifies the power of the analog signal output from the D/A converter 2 . The A/D converter 4 converts the analog signal output from the power amplifier 3 into a digital signal and outputs the digital signal to the behavior model generator 121 . The analog signal output by the power amplifier 3 converted by the A/D converter 4 may be attenuated by an attenuator (not shown).
(ビヘイビアモデル生成部)
 ビヘイビアモデル生成部121は、電力増幅器3の入出力信号を用いて電力増幅器3の振舞いのモデルであるビヘイビアモデルを生成する。電力増幅器3をメモリ効果が生じる広帯域で使用できるようにするため、ビヘイビアモデル生成部121は、電力増幅器3のビヘイビアモデルBMを、例えば、メモリ多項式で表し、多項式の係数を最小二乗法などの手法により求める。ビヘイビアモデル生成部121は、生成したビヘイビアモデルBMを、教師データ生成部122へ出力する。メモリ多項式に関する説明は、例えば、非特許文献2に記載されているので詳細な説明は省略する。
(behavior model generator)
The behavior model generator 121 generates a behavior model, which is a model of the behavior of the power amplifier 3, using input/output signals of the power amplifier 3. FIG. In order to allow the power amplifier 3 to be used in a wide band where the memory effect occurs, the behavior model generation unit 121 expresses the behavior model BM of the power amplifier 3 by, for example, a memory polynomial, and calculates the coefficients of the polynomial using a technique such as the least squares method. Calculated by The behavior model generator 121 outputs the generated behavior model BM to the teacher data generator 122 . A detailed description of the memory polynomial is omitted because it is described in, for example, Non-Patent Document 2.
Non-Patent Document 2: L. Ding, G. T. Zhou, D. R. Morgan, Z. Ma, J. S. Kenney, J. Kim, C. R. Giardina, "A robust digital baseband predistorter constructed using memory polynomials," IEEE Trans. Commun., vol. 52, no. 1, pp. 159-165, Jan. 2004.
Generation of a behavior model using the least squares method is described, for example, in Non-Patent Document 3, so a detailed description is omitted.
Non-Patent Document 3: Amandeep Singh Sappal, "Simplified Memory Polynomial Modelling of Power Amplifier," 2015 International Conference and Workshop on Computing and Communication (IEMCON), 2015.
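The least-squares estimation of the polynomial coefficients can be sketched as follows, using the same illustrative coefficient ordering as above: a regression matrix is built from delayed, nonlinearly weighted copies of the measured amplifier input, and a standard least-squares solver recovers the coefficients. This is a sketch only; the disclosure does not prescribe a particular solver or basis ordering.

```python
import numpy as np

def fit_memory_polynomial(u, y, K=5, Q=3):
    """Least-squares fit of memory-polynomial coefficients a_{k,q} such that
    y(n) ~= sum_{q=0..Q} sum_{k=1..K} a_{k,q} * u(n-q) * |u(n-q)|**(k-1).
    u: measured amplifier input, y: measured amplifier output (complex, same length).
    Returns (coefficients, regression matrix)."""
    N = len(u)
    cols = []
    for q in range(Q + 1):
        uq = np.concatenate([np.zeros(q, dtype=complex), u[:N - q]])  # delayed input
        for k in range(1, K + 1):
            cols.append(uq * np.abs(uq) ** (k - 1))                   # nonlinear basis term
    Phi = np.column_stack(cols)                                        # N x K*(Q+1)
    coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coeffs, Phi
```

In the noiseless case the fit recovers the true coefficients exactly (up to numerical precision), which is a convenient self-test of the construction.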
(Teacher data generator)
The teacher data generator 122 performs iterative learning control (ILC) processing using the behavior model BM generated by the behavior model generator 121, thereby generating teacher data: the input signal that is optimal as an input to the power amplifier 3. As shown in FIG. 1, the teacher data generator 122 includes a behavior model unit 1223, a synthesis unit 1222, and an iterative learning control (ILC) unit 1221. The behavior model unit 1223 holds the behavior model BM generated by the behavior model generator 121.
First, an outline of the teacher data generator 122 will be described. Let s be the transmission signal, u_i be the input signal to the behavior model BM in the i-th iteration, z_i be the output signal of the behavior model BM, and N be the number of data points. The iterative learning control unit 1221 calculates the input signal u_{i+1} to the behavior model BM for the (i+1)-th iteration according to the following equation (1).

           u_{i+1} = u_i + G(u_i)^{-1} e_i         (1)
Here, G(u_i) is the diagonal matrix given by the following equation (2).

           G(u_i) = diag{G[u_i(0)], ..., G[u_i(N-1)]}         (2)
Each diagonal component of the diagonal matrix is given by the following equation (3), where n is an integer from 0 to N-1.

           G[u_i(n)] = z_i(n) / u_i(n)         (3)
Further, e_i is the error given by the following equation (4). This error e_i is calculated by the synthesis unit 1222.

           e_i = s - z_i         (4)
When the iterative learning control unit 1221 has performed a sufficient number of iterations, the error between the output signal z_i of the behavior model BM and the transmission signal s becomes small. The value obtained after a predetermined number of iterations becomes the teacher data, which is output to the coefficient updater 123.
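The basic iteration of equations (1) through (4) can be sketched as follows for a memoryless model, in which case the diagonal matrix G(u_i) reduces to an element-wise gain. Initializing u with the transmission signal s is one admissible choice of the arbitrary initial value u_1.

```python
import numpy as np

def ilc_teacher_data(s, model, n_iter=30):
    """Iterative learning control per Eqs. (1)-(4): repeatedly refine the model
    input u_i so that the model output z_i approaches the desired signal s.
    s: desired (transmission) signal, model: callable behavior model BM."""
    u = s.copy()               # initial value u_1 (any operable value is allowed)
    for _ in range(n_iter):
        z = model(u)           # model output z_i
        e = s - z              # Eq. (4): error, computed by synthesis unit 1222
        G = z / u              # Eq. (3): per-sample gain G[u_i(n)] = z_i(n)/u_i(n)
        u = u + e / G          # Eq. (1): u_{i+1} = u_i + G(u_i)^{-1} e_i
    return u                   # teacher data after the final iteration
```

For a mildly compressive nonlinearity the iteration converges quickly, driving the model output onto the desired signal.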
In the iterative learning control processing described above, the iterative learning control unit 1221 also performs the additional processing described below. To perform this additional processing, the iterative learning control unit 1221 includes a gain determination processing unit 12211, a teacher data calculation unit 12212, and a teacher data suppression unit 12213, as shown in FIG. 2. The processing performed by each functional unit of the iterative learning control unit 1221 will be described below with reference to FIGS. 2 and 3. FIG. 3 shows a flowchart of the teacher data generator 122 and the iterative learning control unit 1221.
In step ST31, the teacher data generator 122 calculates the output signal z_i of the behavior model BM, the diagonal matrix components G[u_i(n)], and the error e_i. More specifically, the behavior model unit 1223 receives the input signal u_i as the input of the behavior model BM and calculates the output signal z_i. The initial value u_1 of the input signal can take any value as long as the behavior model BM is operable. The synthesis unit 1222 calculates the error e_i from the transmission signal s and the output signal z_i of the behavior model unit 1223. The behavior model unit 1223 or the iterative learning control unit 1221 calculates the components G[u_i(n)] from the input signal u_i and the output signal z_i according to equation (3). When the iterative learning control unit 1221 calculates the components G[u_i(n)], the output signal z_i of the behavior model unit 1223 is output to the iterative learning control unit 1221 as well as to the synthesis unit 1222. Since the component G[u_i(n)] can be regarded as the gain of the power amplifier 3, it is referred to as the gain G[u_i(n)] in the following description.
In step ST32, the gain determination processing unit 12211 determines whether the current gain G[u_i(n)] is smaller than the previous gain G[u_{i-1}(n)]. If YES in step ST32, the process proceeds to step ST33. In step ST33, the gain determination processing unit 12211 updates the error in the current cycle according to the formula e_i(n) = e_i(n) × α, where α is an arbitrary value satisfying 0 < α < 1. Updating the error in this way adjusts the error. After the error has been updated, the process proceeds to step ST34. If NO in step ST32, the process proceeds directly to step ST34. Although the current gain G[u_i(n)] is compared here with the immediately preceding gain G[u_{i-1}(n)], it may instead be compared with the average of the gains of a plurality of past iterations. In this way, the gain determination processing unit 12211 obtains the updated current error based on the current gain and at least one previous gain.
In step ST34, the teacher data calculation unit 12212 calculates the input signal u_{i+1} to the behavior model BM for the (i+1)-th iteration according to equation (1) above.
In step ST35, the teacher data suppression unit 12213 determines whether the absolute value |u_{i+1}| of the input signal is greater than a preset threshold M_th. If YES in step ST35, the process proceeds to step ST36, where the teacher data suppression unit 12213 sets the threshold M_th as the absolute value |u_{i+1}| of the input signal; that is, the magnitude of u_{i+1} is limited to M_th. After this setting, the process proceeds to step ST37. If NO in step ST35, the process proceeds directly to step ST37.
In step ST37, if the iteration count i is greater than the maximum count, the process ends and the teacher data u_{i+1} is obtained. If the iteration count is not greater than the maximum count, the iterative learning control unit 1221 increments the iteration count i by 1 in step ST38, and the processing of steps ST31 to ST37 is repeated.
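The complete flow of steps ST31 through ST38 can be sketched as follows for a memoryless model. Comparing gain magnitudes in step ST32 and clamping the magnitude of u_{i+1} while preserving its phase in step ST36 are implementation assumptions made for this sketch; the disclosure leaves those details open.

```python
import numpy as np

def ilc_with_safeguards(s, model, n_iter=50, alpha=0.5, m_th=1.5):
    """ILC loop of FIG. 3 (steps ST31-ST38) with the two safeguards:
    ST32/ST33 - if the current gain is smaller than the previous one,
                damp the error by alpha (0 < alpha < 1);
    ST35/ST36 - limit the teacher-data magnitude to the threshold M_th."""
    u = s.copy()                                   # initial value u_1
    G_prev = None
    for _ in range(n_iter):
        z = model(u)                               # ST31: model output z_i
        e = s - z                                  #       error e_i = s - z_i
        G = z / u                                  #       gain G[u_i(n)] = z_i(n)/u_i(n)
        if G_prev is not None:
            shrank = np.abs(G) < np.abs(G_prev)    # ST32: gain decreased? (magnitude
            e = np.where(shrank, e * alpha, e)     # ST33: comparison is an assumption)
        G_prev = G
        u = u + e / G                              # ST34: Eq. (1)
        over = np.abs(u) > m_th                    # ST35: |u_{i+1}| > M_th ?
        u = np.where(over, m_th * u / np.abs(u), u)  # ST36: clamp magnitude to M_th
    return u
```

The clamp guarantees an upper bound on the teacher data while the damping slows the update whenever the model gain shrinks, which is what prevents divergence into the saturation region.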
As described above, when the current gain G[u_i(n)] is smaller than the previous gain G[u_{i-1}(n)], the gain determination processing unit 12211 adjusts the error by updating it according to the formula e_i(n) = e_i(n) × α (steps ST32 and ST33). As a result, even when G[u_i(n)] becomes smaller than the previous value G[u_{i-1}(n)], the influence of the error can be reduced and the teacher data can be prevented from growing. When G[u_i(n)] becomes larger than the previous value G[u_{i-1}(n)], growth of the teacher data is already suppressed, because the second term on the right-hand side of equation (1) contains G(u_i)^{-1}.
If the teacher data u_{i-1} were an input signal driving the behavior model BM into its saturation region, the teacher data u_{i-1} could not suppress the occurrence of distortion, because signals in the saturation region cannot be compensated. The error therefore would not become small, and since the error does not become small, the teacher data u_i would grow larger than u_{i-1}. Repeated, this would cause the teacher data to diverge.
As described above, adjusting the error according to the formula e_i(n) = e_i(n) × α when the gain G[u_i(n)] is smaller than the gain G[u_{i-1}(n)] prevents the teacher data from entering the saturation region of the behavior model BM.
Also, as described above, when the absolute value |u_{i+1}| of the input signal is greater than the preset threshold M_th, the teacher data suppression unit 12213 sets the threshold M_th as the absolute value |u_{i+1}| of the input signal. This places an upper limit on the teacher data, so that the teacher data can be prevented from diverging during the iterative calculation.
(Coefficient updater)
The coefficient updater 123 includes a predistorter 1231 and a synthesis unit 1232. The predistorter 1231 is configured as a neural network, like the predistorter 11. The coefficient updater 123 uses the teacher data generated by the teacher data generator 122 to perform supervised learning that obtains the weighting coefficients of the neural network operating as the predistorter 11. More specifically, using the transmission signal s used by the teacher data generator 122 as the input signal of the neural network, and the teacher data generated by the teacher data generator 122 as the output signal of the neural network, it performs learning that obtains the weighting coefficients of the neural network. For training the neural network, a generally known method such as the Levenberg-Marquardt method (Non-Patent Document 4) or Adam (Non-Patent Document 5) may be used, so a detailed description of the training method is omitted. The weighting coefficients obtained by the coefficient updater 123 are output to the predistorter 11.
Non-Patent Document 4 (Levenberg-Marquardt method): Hagan, M. T., and M. Menhaj, "Training feedforward networks with the Marquardt algorithm," IEEE Transactions on Neural Networks, Vol. 5, No. 6, pp. 989-993, 1994.
Non-Patent Document 5 (Adam): Kingma, Diederik P., and Jimmy Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980 (2014).
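The supervised learning performed by the coefficient updater 123 can be sketched as follows with a small real-valued network fitted so that network(s) approximates the teacher data u. The disclosure cites the Levenberg-Marquardt method or Adam; the plain gradient-descent optimizer, the network size, and the stacking of real/imaginary parts used here are illustrative stand-ins only.

```python
import numpy as np

def train_predistorter(s_sig, u_teacher, hidden=16, steps=2000, lr=0.01):
    """Fit a one-hidden-layer MLP mapping the transmission signal s to the
    teacher data u (both complex), minimizing mean squared error by full-batch
    gradient descent. Returns the learned weights."""
    rng = np.random.default_rng(0)
    X = np.column_stack([s_sig.real, s_sig.imag])        # (N, 2) input features
    Y = np.column_stack([u_teacher.real, u_teacher.imag])  # (N, 2) targets
    W1 = rng.standard_normal((2, hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 2)) * 0.5
    b2 = np.zeros(2)
    N = len(X)
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)        # hidden layer
        P = H @ W2 + b2                 # network output
        dP = (P - Y) / N                # gradient of 0.5 * MSE w.r.t. output
        dW2, db2 = H.T @ dP, dP.sum(0)
        dH = (dP @ W2.T) * (1 - H ** 2)  # backprop through tanh
        dW1, db1 = X.T @ dH, dH.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

def apply_predistorter(params, s_sig):
    """Run the trained network; reassemble the complex output."""
    W1, b1, W2, b2 = params
    X = np.column_stack([s_sig.real, s_sig.imag])
    P = np.tanh(X @ W1 + b1) @ W2 + b2
    return P[:, 0] + 1j * P[:, 1]
```

After training, the network's output for s should approximate the teacher data, which is exactly the weight set that is then copied into the predistorter 11.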
All or some of the functional units of the distortion compensation circuit 1 are realized, for example, by a computer having a processor 41 and a memory 42 as shown in FIG. 4A. These functional units are realized when the processor 41 reads and executes a program stored in the memory 42. The program is realized as software, firmware, or a combination of software and firmware. Examples of the memory 42 include nonvolatile or volatile semiconductor memories such as RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable Read Only Memory), and EEPROM (Electrically Erasable Programmable Read Only Memory), as well as magnetic disks, flexible disks, optical disks, compact discs, mini discs, and DVDs.
As another example, all or some of the functional units of the distortion compensation circuit 1 may be realized by a processing circuit 43 as shown in FIG. 4B, instead of the processor 41 and the memory 42. In this case, the processing circuit 43 is, for example, a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination thereof.
Because the distortion compensation circuit 1 is configured as described above, the teacher data can be prevented from diverging in the iterative calculation of the ILC processing, and highly accurate teacher data can be created.
Note that in the first embodiment there is no restriction on the configuration of the neural network. The expression used to represent the behavior model need not be a memory polynomial; a Volterra series may be used instead.
<Appendix>
Some of the various aspects of the embodiments described above are summarized below.
(Appendix 1)
The learning device (12) for a distortion compensation circuit of Appendix 1 is a learning device for a distortion compensation circuit comprising a predistorter (11) to which a transmission signal is input. The learning device comprises: a behavior model generator (121) that generates a behavior model of a power amplifier; a teacher data generator (122) comprising a behavior model unit (1223) that holds the generated behavior model, and an iterative learning control unit (1221) that generates teacher data as an input signal to the power amplifier by performing iterative learning control processing using an error between the transmission signal and the output signal of the behavior model unit; and a coefficient updater (123) that comprises a predistorter (1231) configured as a neural network and calculates weighting coefficients of the neural network using the transmission signal and the generated teacher data. The iterative learning control unit comprises a gain determination processing unit (12211) that obtains an updated current error based on the current gain in the generated behavior model and at least one previous gain.
(Appendix 2)
The learning device for a distortion compensation circuit of Appendix 2 is the learning device of Appendix 1, wherein the gain determination processing unit obtains the updated current error by multiplying the current error by an arbitrary value greater than 0 and less than 1 when the current gain in the generated behavior model is smaller than the previous gain.
(Appendix 3)
The learning device for a distortion compensation circuit of Appendix 3 is the learning device of Appendix 1, wherein the gain determination processing unit obtains the updated current error by multiplying the current error by an arbitrary value greater than 0 and less than 1 when the current gain in the generated behavior model is smaller than the average of the gains of a plurality of past iterations.
(Appendix 4)
The learning device for a distortion compensation circuit of Appendix 4 is the learning device of any one of Appendices 1 to 3, wherein the iterative learning control unit further comprises a teacher data calculation unit (12212) that calculates the next teacher data by equation (1) using the updated current error.
(Appendix 5)
The learning device for a distortion compensation circuit of Appendix 5 is the learning device of any one of Appendices 1 to 4, wherein the iterative learning control unit further comprises a teacher data suppression unit (12213) that, when the absolute value of the calculated next teacher data is greater than a preset threshold, sets the threshold as the absolute value of the calculated next teacher data.
(Appendix 6)
The distortion compensation circuit (1) of Appendix 6 comprises a predistorter (11) configured as a neural network and to which a transmission signal is input. The neural network of the predistorter has the weighting coefficients obtained by the learning device for a distortion compensation circuit of any one of Appendices 1 to 5, and the predistorter (11) performs distortion compensation processing on the transmission signal according to the weighting coefficients and outputs the transmission signal after the distortion compensation processing.
Note that the embodiments can be combined, and each embodiment can be modified or omitted as appropriate.
The distortion compensation circuit according to the present disclosure can be used as a distortion compensation circuit that imparts distortion to a transmission signal input to a radio-frequency circuit of a wireless transmitter.
1 distortion compensation circuit, 2 D/A converter, 3 power amplifier, 4 A/D converter, 11 predistorter, 12 learning unit (learning device), 121 behavior model generator, 122 teacher data generator, 123 coefficient updater, 1221 iterative learning control unit, 1222 synthesis unit, 1223 behavior model unit, 1231 predistorter, 1232 synthesis unit, 12211 gain determination processing unit, 12212 teacher data calculation unit, 12213 teacher data suppression unit, BM behavior model.

Claims (6)

  1.  A learning device for a distortion compensation circuit comprising a predistorter to which a transmission signal is input, the learning device comprising:
     a behavior model generator that generates a behavior model of a power amplifier;
     a teacher data generator comprising a behavior model unit that holds the generated behavior model, and an iterative learning control unit that generates teacher data as an input signal to the power amplifier by performing iterative learning control processing using an error between the transmission signal and an output signal of the behavior model unit; and
     a coefficient updater comprising a predistorter configured as a neural network, the coefficient updater calculating weighting coefficients of the neural network using the transmission signal and the generated teacher data,
     wherein the iterative learning control unit comprises a gain determination processing unit that obtains an updated current error based on a current gain in the generated behavior model and at least one previous gain.
  2.  The learning device for a distortion compensation circuit according to claim 1, wherein the gain determination processing unit obtains the updated current error by multiplying the current error by an arbitrary value greater than 0 and less than 1 when the current gain in the generated behavior model is smaller than the previous gain.
  3.  The learning device for a distortion compensation circuit according to claim 1, wherein the gain determination processing unit obtains the updated current error by multiplying the current error by an arbitrary value greater than 0 and less than 1 when the current gain in the generated behavior model is smaller than the average of the gains of a plurality of past iterations.
  4.  The learning device for a distortion compensation circuit according to any one of claims 1 to 3, wherein the iterative learning control unit further comprises a teacher data calculation unit that calculates the next teacher data using the updated current error by the following equation:

           u_{i+1} = u_i + G(u_i)^{-1} e_i

     where

           G(u_i) = diag{G[u_i(0)], ..., G[u_i(N-1)]}
           G[u_i(n)] = z_i(n) / u_i(n)
           e_i = s - z_i

     and u is the teacher data and the input signal to the behavior model, s is the transmission signal, z is the output signal of the behavior model, i is the iteration number of the iterative calculation, and N is the number of data points.
  5.  The learning device for a distortion compensation circuit according to claim 4, wherein the iterative learning control unit further comprises a teacher data suppression unit that, when the absolute value of the calculated next teacher data is greater than a preset threshold, sets the threshold as the absolute value of the calculated next teacher data.
  6.  A distortion compensation circuit comprising a predistorter that is configured as a neural network and to which a transmission signal is input,
     wherein the neural network of the predistorter has the weighting coefficients obtained by the learning device for a distortion compensation circuit according to claim 1, and
     the predistorter performs distortion compensation processing on the transmission signal according to the weighting coefficients and outputs the transmission signal after the distortion compensation processing.
PCT/JP2021/008580 2021-03-05 2021-03-05 Learning device for distortion compensation circuit, and distortion compensation circuit WO2022185505A1 (en)


Publications (1)

Publication Number Publication Date
WO2022185505A1 true WO2022185505A1 (en) 2022-09-09


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010518660A (en) * 2006-12-26 2010-05-27 ダリ システムズ カンパニー リミテッド Method and system for linearizing baseband predistortion in multi-channel wideband communication system
JP2014225832A (en) * 2013-05-17 2014-12-04 日本電気株式会社 Signal amplification device, distortion compensation method and radio transmission device


Also Published As

Publication number Publication date
JP7418619B2 (en) 2024-01-19
JPWO2022185505A1 (en) 2022-09-09

