WO2020157961A1 - Artificial neural network learning device, distortion compensation circuit, and signal processing device - Google Patents


Info

Publication number
WO2020157961A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
signals
neural network
artificial neural
control unit
Prior art date
Application number
PCT/JP2019/003645
Other languages
French (fr)
Japanese (ja)
Inventor
安藤 暢彦
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority to JP2020569314A (patent JP6877662B2)
Priority to PCT/JP2019/003645
Publication of WO2020157961A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/005Control of transmission; Equalising

Definitions

  • the present invention relates to a technique for learning an artificial neural network structure, and a technique for compensating for signal distortion generated in a signal processing circuit such as a high frequency circuit using the artificial neural network structure.
  • ANN Artificial Neural Networks
  • Non-Patent Document 1 discloses a pre-distortion type distortion compensation technique that compensates for the non-linear characteristic of a power amplifier using a multi-layered ANN structure called RVTDNN (Real-Valued Time-Delay Neural Networks).
  • the pre-distortion method is a distortion compensation technique of canceling the signal distortion generated in the high frequency circuit by previously applying distortion having an inverse characteristic to the signal to be input to the high frequency circuit.
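  • The pre-distortion principle can be illustrated with a small numerical sketch. The amplifier model and its inverse below are hypothetical toy stand-ins (a memoryless cubic compression), not the characteristics of any circuit in this document; the point is only that applying an approximate inverse characteristic before the nonlinearity reduces the distortion of the cascade.

```python
import numpy as np

def amplifier(x):
    # Toy memoryless power-amplifier model with compressive cubic distortion
    # (hypothetical stand-in for the high frequency circuit).
    return x - 0.1 * x**3

def predistorter(x):
    # Approximate inverse characteristic applied before the amplifier;
    # in the document this role is played by the trained ANN structure.
    return x + 0.1 * x**3

x = np.linspace(-1.0, 1.0, 21)
err_direct = np.max(np.abs(amplifier(x) - x))            # distortion without compensation
err_pd = np.max(np.abs(amplifier(predistorter(x)) - x))  # distortion with pre-distortion
print(err_pd, err_direct)
```

The first-order inverse does not cancel the cubic term exactly, but the residual distortion of the cascade is several times smaller than the uncompensated distortion.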
  • the predistorter of Non-Patent Document 1 operates based on an ANN structure configured with an input layer composed of a plurality of nodes, a hidden layer composed of a plurality of nodes, and an output layer composed of two nodes.
  • the signal components respectively input to the plurality of nodes of the input layer propagate from the input layer to the nodes of the hidden layer, and then from the hidden layer to the nodes of the output layer.
  • the weighting factors (weights) that determine the characteristics of the ANN structure are calculated by a machine learning algorithm called a back propagation algorithm (Back Propagation Learning Algorithm, BPLA).
  • since the weighting coefficients of the ANN structure are updated based on time-series data output from the ANN structure, there is a problem that the learning efficiency (learning rate) of the ANN structure is low. Specifically, if the output signal of the ANN structure at a discrete time t_n (n is an integer) is represented by O(n), the weighting coefficients of the ANN structure are updated based on the time-series data consisting of the output signals O(n), O(n-1), ..., O(n-Nb+1) (Nb is a positive integer). In general, output signals O(i) and O(j) (i ≠ j) that are close in time are often highly correlated with each other.
  • an object of the present invention is to provide an artificial neural network learning device, a distortion compensation circuit, and a signal processing device that can improve the learning efficiency of the ANN structure.
  • An artificial neural network learning device according to the present invention includes: a signal supply unit that supplies a plurality of temporally continuous training signals to a signal processing circuit and causes the signal processing circuit to output a plurality of response signals; a signal memory that stores the plurality of response signals; a memory control unit that reads, from the plurality of response signals stored in the signal memory, a plurality of selection signals selected randomly or pseudo-randomly, and supplies the plurality of selection signals to an artificial neural network structure, causing the artificial neural network structure to output a plurality of post-distortion signals; and a learning control unit that calculates a plurality of error signals indicating differences between the plurality of post-distortion signals and a plurality of teacher signals corresponding to the plurality of selection signals, and updates the parameter group that determines the input/output characteristics of the artificial neural network structure by executing adaptive processing based on a back propagation learning algorithm using the plurality of error signals.
  • Since adaptive processing based on a back propagation learning algorithm is performed using a plurality of error signals indicating the differences between a plurality of randomly or pseudo-randomly selected selection signals and a plurality of teacher signals, the learning efficiency of the ANN structure can be improved.
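  • The motivation for random selection can be checked numerically: adjacent samples of a band-limited time series are strongly correlated, while a randomly drawn batch from the same data is nearly uncorrelated. The signal below is a synthetic stand-in (smoothed noise), not data from the document.

```python
import numpy as np

rng = np.random.default_rng(0)

# Band-limited synthetic time series: adjacent samples are highly correlated,
# mimicking the consecutive outputs O(n), O(n-1), ... discussed in the text.
x = np.convolve(rng.standard_normal(10_000), np.ones(50) / 50, mode="same")

def lag1_corr(v):
    # Correlation between each sample and the next sample in the batch.
    return np.corrcoef(v[:-1], v[1:])[0, 1]

sequential = x[:256]                            # contiguous time-series batch
randomized = x[rng.permutation(x.size)][:256]   # randomly selected batch

print(lag1_corr(sequential))   # close to 1
print(lag1_corr(randomized))   # close to 0
```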
  • FIG. 1 is a block diagram showing a schematic configuration of the artificial neural network (ANN) learning system of Embodiment 1 according to the present invention. FIG. 2 is a diagram showing the correspondence between processing time, training signals, and response signals. FIG. 3 is a diagram showing an example of the correspondence among processing time, response signals, selection signals, training signals, and teacher signals (reference signals). FIG. 4 is a schematic diagram of a neural network model showing a specific example of an ANN structure. FIG. 5 is a diagram for explaining an example of forward propagation processing. FIG. 6 is a schematic diagram of a neural network model showing another specific example of the ANN structure. FIG. 7 is a flowchart schematically showing an example of the procedure of the learning processing according to the first embodiment.
  • FIG. 1 is a block diagram showing a schematic configuration of an artificial neural network (ANN) learning system 1 according to the first embodiment of the present invention.
  • the artificial neural network learning system 1 shown in FIG. 1 includes a signal processing circuit 11 that performs digital signal processing and analog signal processing on an input signal, and an ANN learning device 31 including an artificial neural network (ANN) structure 41.
  • the ANN learning device 31 has a function of causing the ANN structure 41 to learn an inverse characteristic corresponding to a distortion characteristic that causes signal distortion in the signal processing circuit 11.
  • the ANN structure 41 of the present embodiment is incorporated inside the ANN learning device 31, but instead of this, the ANN structure 41 may be arranged outside the ANN learning device 31.
  • the signal processing circuit 11 includes a quadrature modulator (QM) 21, a D/A converter (DAC) 22, an up converter (UPC) 23, a high frequency circuit 24, a directional coupler 25, a down converter (DNC) 26, an A/D converter (ADC) 27, and a quadrature demodulator (QD) 28.
  • the quadrature modulator 21 receives a complex digital signal composed of an in-phase component and a quadrature component as input, and performs digital quadrature modulation on the complex digital signal to generate a digital modulation signal.
  • the D/A converter 22 converts the digital modulation signal input from the quadrature modulator 21 into an analog signal
  • the up converter 23 converts the analog signal into a high frequency band (for example, microwave band) signal.
  • the high frequency circuit 24 performs analog signal processing on the output signal of the up converter 23 to generate a high frequency output signal TS. Examples of the high frequency circuit 24 include, but are not limited to, a high output power amplifier (High Power Amplifier, HPA).
  • a part of the high frequency output signal TS is fed back by the directional coupler 25.
  • the down converter 26 converts the high frequency signal fed back from the directional coupler 25 into an analog signal in a lower frequency band, and the A/D converter 27 converts the analog signal into a digital signal.
  • the quadrature demodulator 28 performs digital quadrature demodulation on the digital signal to generate a complex digital signal including an in-phase component and a quadrature component.
  • the generated complex digital signal is transferred to the ANN learning device 31 as a response signal.
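  • A minimal sketch of the digital quadrature modulation and demodulation performed by the quadrature modulator 21 and quadrature demodulator 28. The sample rate, carrier frequency, and baseband tones are hypothetical, and a crude moving-average filter stands in for a proper low-pass filter.

```python
import numpy as np

fs, fc, n = 1000.0, 100.0, 1000          # sample rate, carrier, length (hypothetical)
t = np.arange(n) / fs

i_comp = np.cos(2 * np.pi * 2.0 * t)     # in-phase baseband component
q_comp = np.sin(2 * np.pi * 1.0 * t)     # quadrature baseband component

# Digital quadrature modulation (role of the quadrature modulator 21):
mod = i_comp * np.cos(2 * np.pi * fc * t) - q_comp * np.sin(2 * np.pi * fc * t)

# Digital quadrature demodulation (role of the quadrature demodulator 28):
# mix down with the carrier and low-pass filter to recover I and Q.
lp = np.ones(51) / 51                    # crude moving-average low-pass filter
i_rec = 2 * np.convolve(mod * np.cos(2 * np.pi * fc * t), lp, mode="same")
q_rec = -2 * np.convolve(mod * np.sin(2 * np.pi * fc * t), lp, mode="same")

sl = slice(100, -100)                    # ignore filter edge effects
err_i = np.max(np.abs(i_rec[sl] - i_comp[sl]))
err_q = np.max(np.abs(q_rec[sl] - q_comp[sl]))
print(err_i, err_q)
```

Away from the filter edges the recovered I and Q components track the originals to within a few percent; the residual is the moving-average filter's passband droop plus carrier leakage.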
  • the configuration of the signal processing circuit 11 is not limited to the configuration shown in FIG. 1. In the example of FIG. 1, a quadrature modulator 21 that performs digital quadrature modulation and a quadrature demodulator 28 that performs digital quadrature demodulation are used, but there may be an embodiment in which an analog quadrature modulation circuit and an analog quadrature demodulation circuit are used instead of the quadrature modulator 21 and the quadrature demodulator 28.
  • the ANN learning device 31 includes a signal memory 55 having a storage area 55a in which a plurality of temporally continuous training signals (discrete signals) used for learning the ANN structure 41 are stored in advance, a memory control unit 51 that controls the signal read operation and the signal write operation of the signal memory 55, and a signal supply unit 40 that supplies the training signal T(k) read from the signal memory 55 to the signal processing circuit 11.
  • the variable k of the training signal T(k) is an integer value representing discrete time
  • the training signal T(k) is a complex digital signal composed of an in-phase component and a quadrature component.
  • as the signal memory 55, for example, a non-volatile memory such as a flash memory or a volatile memory such as an SDRAM (Synchronous Dynamic Random Access Memory) may be used.
  • a transmission line (not shown) for transferring the training signal T(k) is provided between the signal supply unit 40 and the signal processing circuit 11.
  • the signal processing circuit 11 performs digital signal processing and analog signal processing on the series of training signals T(0), T(1), T(2), ... transferred from the signal supply unit 40 of the ANN learning device 31 to generate a feedback signal, that is, the response signals Y(0), Y(1), Y(2), ..., and transfers the response signals Y(0), Y(1), Y(2), ... to the ANN learning device 31. A transmission line (not shown) such as a wire or a cable for transferring the response signal Y(k) from the quadrature demodulator 28 of the signal processing circuit 11 to the ANN learning device 31 is provided.
  • the ANN learning device 31 includes a signal receiving unit 60 that receives the response signal Y(k) transferred from the signal processing circuit 11.
  • the signal memory 55 has a storage area 55b that stores the response signal Y(k) received by the signal receiving unit 60.
  • the ANN structure 41 performs forward propagation processing on the selection signal y_n(c) input from the storage area 55b, and outputs the resulting complex digital signal as a post-distortion signal s_n(c).
  • a specific example of the ANN structure 41 and the contents of the forward propagation processing will be described later.
  • FIG. 2 is a diagram showing the correspondence between the processing time t, the training signal T(k), and the response signal Y(k).
  • T_0 is the reference time and ΔT is the processing time interval.
  • k is an integer in the range of 0 to K-1, and K is the number of response signals Y(0) to Y(K-1) stored in the storage area 55b of the signal memory 55.
  • the ANN learning device 31 further has a learning control unit 61.
  • the learning control unit 61 has a function of updating the parameter group that defines the input/output characteristic (including the frequency characteristic) of the ANN structure 41 by executing the adaptive processing based on the back propagation learning algorithm.
  • the learning control unit 61 includes a subtractor 63, an adaptive processing unit 64, and a parameter setting unit 65.
  • the memory control unit 51 randomly or pseudo-randomly selects C selection signals y_n(0) to y_n(C-1) from the group of response signals Y(0) to Y(K-1) stored in the storage area 55b of the signal memory 55, and the selected signals y_n(0) to y_n(C-1) are supplied from the signal memory 55 to the ANN structure 41. C is a positive integer smaller than the total number K of the response signals Y(0) to Y(K-1), and the subscript n is a positive integer indicating the number of iterations of the adaptive processing.
  • the ANN structure 41 performs forward propagation processing on each selection signal y_n(c) input from the signal memory 55, and outputs the resulting complex digital signal as a post-distortion signal s_n(c). Further, the memory control unit 51 selects, from the group of training signals T(0) to T(K-1) stored in the storage area 55a of the signal memory 55, the C training signals corresponding to the selection signals y_n(0) to y_n(C-1), and supplies the selected C training signals from the signal memory 55 to the learning control unit 61 as teacher signals (reference signals) z_n(0) to z_n(C-1).
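  • The random read-out of matching selection/teacher pairs by the memory control unit 51 can be sketched as follows; the array sizes and the noise model for the response signals are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
K, C = 1024, 64   # number of stored signals and batch size (hypothetical)

# Stand-ins for storage area 55a (training signals) and 55b (response signals):
T = rng.standard_normal(K) + 1j * rng.standard_normal(K)
Y = T + 0.05 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))

# Memory control unit 51: draw C random read addresses and use the SAME
# addresses for both memories, so each selection signal keeps its teacher.
addresses = rng.choice(K, size=C, replace=False)
y_sel = Y[addresses]     # selection signals y_n(0) .. y_n(C-1)
z_teach = T[addresses]   # teacher signals  z_n(0) .. z_n(C-1)
print(y_sel.shape, z_teach.shape)
```

Using one shared address stream for both storage areas is what preserves the correspondence between each selection signal and its teacher signal.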
  • FIG. 3 shows an example of the correspondence relationship among the processing time, the response signal Y(k), the selection signal y_n(c), the training signal T(k), and the teacher signal (reference signal) z_n(c).
  • the memory control unit 51 continuously generates read addresses using a physical random number sequence or a pseudo-random number sequence and supplies the read addresses to the signal memory 55, whereby the pair of the selection signal y_n(c) and the teacher signal z_n(c) can be selected randomly or pseudo-randomly.
  • when a physical random number sequence is used, the memory control unit 51 may read and use data of the physical random number sequence stored in a non-volatile memory.
  • when a pseudo-random number sequence is used, the memory control unit 51 only needs to have an arithmetic circuit or a function for calculating the pseudo-random number sequence by a known pseudo-random number generation algorithm.
  • Such an arithmetic circuit can be realized by, for example, a random number generation circuit using a known linear feedback shift register, but is not limited to this.
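  • As one concrete (and standard) instance of such a circuit, a 16-bit Fibonacci LFSR can generate the pseudo-random read addresses. The polynomial and seed below are a textbook example, not taken from the document, and the modulo mapping to addresses is a simplification.

```python
def lfsr16(seed, n):
    # 16-bit Fibonacci LFSR with taps for the maximal-length polynomial
    # x^16 + x^14 + x^13 + x^11 + 1 (period 65535 for any nonzero seed).
    state = seed & 0xFFFF
    out = []
    for _ in range(n):
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        out.append(state)
    return out

# Map LFSR states to read addresses for a memory holding K signals:
K = 1024
addresses = [s % K for s in lfsr16(0xACE1, 8)]
print(addresses)
```

Because the polynomial is maximal-length, the state sequence does not repeat within 65535 steps, which is what makes an LFSR a cheap hardware source of pseudo-random addresses.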
  • alternatively, the memory control unit 51 may use a physical random number sequence or a pseudo-random number sequence to rearrange, in random or pseudo-random order, the response signals Y(0) to Y(K-1) temporarily stored in the storage area 55b of the signal memory 55. In this case, the memory control unit 51 can obtain the selection signals y_n(0) to y_n(C-1) by reading C response signals from the signal memory 55 in address order.
  • the learning control unit 61 can update the parameter group of the ANN structure 41 by executing the adaptive processing based on the back propagation learning algorithm using the randomly or pseudo-randomly selected selection signals y_n(0) to y_n(C-1) and teacher signals z_n(0) to z_n(C-1). The learning control unit 61 can thereby avoid performing adaptive processing using time-series data made up of highly correlated discrete signals, so that the convergence of the parameter group of the ANN structure 41 can be improved. Therefore, the learning efficiency of the ANN structure 41 can be improved.
  • FIG. 4 is a schematic diagram showing a neural network model (NN model) 41A showing a specific example of the ANN structure 41.
  • the NN model 41A shown in FIG. 4 is a multi-layered neural network model composed of P layers L_1 to L_P, where P is an integer of 4 or more indicating the number of layers. The NN model 41A includes an input layer L_1 to which the in-phase component y_n^i(c) and the quadrature component y_n^q(c) of the input selection signal y_n(c) are input, intermediate layers (hidden layers) L_2 to L_{P-1}, and an output layer L_P that outputs the in-phase component s_n^i(c) and the quadrature component s_n^q(c) of the post-distortion signal s_n(c).
  • in this example, the number of intermediate layers is two or more, but a single intermediate layer may be used instead.
  • the p-th layer L_p has nodes N_{p,1} to N_{p,μ(p)} imitating neurons of the neural network of the brain, where the subscript p is a layer number and μ(p) is a positive integer indicating the number of nodes in the p-th layer L_p.
  • the node of one layer and the node of the other layer of the two adjacent layers are coupled to each other, and a weighting factor (coupling weight) is assigned as the coupling strength between the nodes.
  • a value called bias is assigned to each node.
  • the parameter group that defines the input/output characteristics of the NN model 41A includes a weighting coefficient and a bias.
  • the input layer L_1 includes two nodes to which the in-phase component y_n^i(c) and the quadrature component y_n^q(c) of the selection signal y_n(c) are respectively input.
  • FIG. 5 is a diagram for explaining an example of forward propagation processing.
  • in FIG. 5, the r-th node N_{p,r} of the p-th layer L_p (r is an integer in the range of 1 to μ(p)) is coupled to the nodes N_{p-1,1} to N_{p-1,μ(p-1)} of the (p-1)-th layer L_{p-1}, and the weighting factors w_{r,1}(p,p-1) to w_{r,μ(p-1)}(p,p-1) are assigned to these couplings. A bias b_r(p) is assigned to the node N_{p,r}. The node N_{p,r} multiplies the signals o_1(p-1) to o_{μ(p-1)}(p-1), propagated in the forward direction from the nodes N_{p-1,1} to N_{p-1,μ(p-1)}, by the weighting factors w_{r,1}(p,p-1) to w_{r,μ(p-1)}(p,p-1), respectively, adds the bias b_r(p) to the sum of the products, and applies the activation function f[·] to the addition result. The resulting signal

    o_r(p) = f[ Σ_{i=1}^{μ(p-1)} w_{r,i}(p,p-1) · o_i(p-1) + b_r(p) ]

    propagates from the node N_{p,r} to all the nodes N_{p+1,1} to N_{p+1,μ(p+1)} of the (p+1)-th layer L_{p+1} shown in FIG. 5. As the activation function f[·], for example, a known activation function such as a sigmoid function or a hyperbolic tangent function may be used. The variable i in the above formula is an integer in the range of 1 to μ(p-1).
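  • The forward propagation rule described above (multiply by the weighting factors, add the bias, apply the activation function, propagate to the next layer) can be sketched in a few lines of code; the layer sizes and random weights below are hypothetical, and tanh is used at every layer including the output for brevity.

```python
import numpy as np

def forward(x, weights, biases, f=np.tanh):
    # Forward propagation: each layer multiplies the incoming signals by its
    # weighting factors, adds the biases, and applies the activation function,
    # i.e. o(p) = f[W(p,p-1) @ o(p-1) + b(p)].
    o = x
    for W, b in zip(weights, biases):
        o = f(W @ o + b)
    return o

rng = np.random.default_rng(2)
sizes = [2, 8, 8, 2]   # input (I and Q), two hidden layers, output (hypothetical)
weights = [0.5 * rng.standard_normal((m, k)) for k, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

y = np.array([0.3, -0.7])           # in-phase and quadrature input components
s = forward(y, weights, biases)     # post-distortion I and Q components
print(s)
```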
  • FIG. 6 is a schematic diagram showing a neural network model (NN model) 41B showing another specific example of the ANN structure 41.
  • the NN model 41B shown in FIG. 6 is a multi-layered neural network model including delay elements D_1 to D_M connected in series and P layers L_1 to L_P, where M is a positive integer representing the number of delay elements D_1 to D_M. The group of delay elements D_1 to D_M in the NN model 41B generates M delayed signals y_n(c-1) to y_n(c-M) from the selection signal y_n(c) input to the NN model 41B. The layered structure of the NN model 41B includes an input layer L_1 to which the in-phase components y_n^i(c) to y_n^i(c-M) and the quadrature components y_n^q(c) to y_n^q(c-M) of the signals y_n(c) to y_n(c-M) are input, intermediate layers (hidden layers) L_2 to L_{P-1}, and an output layer L_P that outputs the in-phase component s_n^i(c) and the quadrature component s_n^q(c) of the post-distortion signal s_n(c).
  • in this example, the number of intermediate layers is two or more, but a single intermediate layer may be used instead.
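  • Building the tapped input of the NN model 41B (the current selection signal plus its M delayed versions, split into I and Q components) can be sketched as follows; the array contents are hypothetical.

```python
import numpy as np

def tapped_input(y, c, M):
    # Build the NN model 41B input: the current sample y(c) and its M delayed
    # versions y(c-1) .. y(c-M), with the I and Q parts of each tap feeding
    # separate input-layer nodes (zeros before the start of the record).
    taps = np.array([y[c - m] if c - m >= 0 else 0j for m in range(M + 1)])
    return np.concatenate([taps.real, taps.imag])  # 2*(M+1) input values

rng = np.random.default_rng(3)
y = rng.standard_normal(16) + 1j * rng.standard_normal(16)  # complex selection signals
x_in = tapped_input(y, c=5, M=3)
print(x_in.shape)
```

The delay taps give the network visibility into the recent past of the signal, which is what lets an RVTDNN-style model compensate memory effects and frequency-dependent distortion.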
  • the p-th layer L_p of the P layers L_1 to L_P in the NN model 41B has nodes N_{p,1} to N_{p,μ(p)} imitating neurons of the neural network of the brain, and a weighting factor (coupling weight) is assigned as the coupling strength between a node of one layer and a node of the other layer of two adjacent layers.
  • a value called bias is assigned to each node.
  • the parameter group that defines the input/output characteristics of the NN model 41B includes a weighting coefficient and a bias.
  • FIG. 7 is a flowchart schematically showing an example of the procedure of the learning process executed by the ANN learning device 31.
  • the parameter group of the ANN structure 41 is initialized to a random value.
  • next, a process for obtaining a response signal group corresponding to the training signal group is executed (steps ST11 to ST13); during these steps, the learning control unit 61 does not operate.
  • the memory control unit 51 reads the training signal T(k) from the storage area 55a of the signal memory 55, and the signal supply unit 40 supplies the read training signal to the signal processing circuit 11. The signal processing circuit 11 performs digital signal processing and analog signal processing on the training signal T(k) input from the ANN learning device 31 to generate the response signal Y(k), and transfers the response signal Y(k) to the ANN learning device 31.
  • the signal receiving unit 60 of the ANN learning device 31 receives the response signal Y(k) transferred from the signal processing circuit 11.
  • the memory control unit 51 stores the response signal Y(k) received by the signal receiving unit 60 in the storage area 55b of the signal memory 55 (step ST12).
  • in step ST13, it is determined whether to start the adaptive processing. For example, the memory control unit 51 counts the number of response signals Y(k) stored in the storage area 55b of the signal memory 55 and can determine to start the adaptive processing when the number reaches a predetermined number (YES in step ST13). When it is determined that the adaptive processing is not started (NO in step ST13), steps ST11 to ST12 are repeatedly executed. As a result, a series of training signals T(0), T(1), T(2), ... is supplied to the signal processing circuit 11, and a series of response signals Y(0), Y(1), Y(2), ... corresponding to the training signals T(0), T(1), T(2), ... is stored in the storage area 55b of the signal memory 55.
  • step ST13 When it is determined that the adaptive processing is started (YES in step ST13), the memory control unit 51 and the learning control unit 61 start the adaptive processing.
  • the value of the iteration count n is initialized to 1.
  • the memory control unit 51 reads the randomly or pseudo-randomly selected selection signals y_n(0) to y_n(C-1) from the storage area 55b of the signal memory 55 and supplies the selection signals y_n(0) to y_n(C-1) to the ANN structure 41 (step ST14). As a result, the post-distortion signals s_n(0) to s_n(C-1) are output from the ANN structure 41.
  • the memory control unit 51 reads, from the storage area 55a of the signal memory 55, the teacher signals z_n(0) to z_n(C-1) corresponding to the selection signals y_n(0) to y_n(C-1), and supplies the teacher signals z_n(0) to z_n(C-1) to the learning control unit 61 (step ST15).
  • the subtractor 63 in the learning control unit 61 calculates the error signals e_n(0) to e_n(C-1) indicating the differences between the post-distortion signals s_n(0) to s_n(C-1) and the teacher signals z_n(0) to z_n(C-1) (step ST16). Note that in the example of FIG. 7, steps ST14, ST15, and ST16 are executed in this order for convenience of description, but the present invention is not limited to this; steps ST14, ST15, and ST16 may be executed in parallel with each other.
  • next, the adaptive processing unit 64 calculates a new parameter group of the ANN structure 41 by executing the adaptive processing based on the back propagation learning algorithm using the error signals e_n(0) to e_n(C-1) (step ST17).
  • the parameter setting unit 65 updates the parameter group of the ANN structure 41 by setting the new parameter group as the parameter group of the ANN structure 41 (step ST18).
  • as the back propagation learning algorithm, a known algorithm such as gradient descent (Gradient Descent algorithm), the Gauss-Newton algorithm, or the Levenberg-Marquardt algorithm may be used, but the algorithm is not particularly limited.
  • in step ST17, the adaptive processing unit 64 can calculate, for example in accordance with the following equation (1), a new weighting coefficient group w_{n+1} from the n-th weighting coefficient group w_n whose elements are the current weighting factors:

    w_{n+1} = w_n + η · G[e_n]   (1)

    Here, the current weighting coefficient group w_n and the new weighting coefficient group w_{n+1} can each be expressed as a vector or a matrix, η is a learning coefficient, e_n is a vector whose elements are the error signals e_n(0) to e_n(C-1), and G[·] is a function that determines the update amount by the back propagation learning algorithm.
  • then, the adaptive processing unit 64 determines whether or not the iterative processing is to be ended (step ST19). When it is determined that the iterative processing is not ended (NO in step ST19), the memory control unit 51 and the learning control unit 61 repeatedly execute steps ST14 to ST18. For example, the adaptive processing unit 64 may determine to end the iterative processing when the number of iterations n reaches a predetermined number, or when a convergence condition indicating that the parameter group has sufficiently converged is satisfied (YES in step ST19). As the convergence condition, for example, a condition that an error evaluation value representing the magnitude of the error signals e_n(0) to e_n(C-1) remains equal to or less than a predetermined value for a predetermined number of consecutive iterations can be used.
  • if it is determined that the iterative processing is to be ended (YES in step ST19) and learning of the ANN structure 41 is to be continued (NO in step ST20), the learning control unit 61 repeatedly executes the learning processing from step ST11. When the learning of the ANN structure 41 is completed (YES in step ST20), the learning control unit 61 ends the learning processing.
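  • The iterative adaptation of steps ST14 to ST19 can be sketched end to end on a drastically simplified stand-in, where the whole ANN structure is reduced to a single complex weight and the update is a plain LMS-style gradient step (one possible choice of G[·]); all signals and constants below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in data: training signals T(k) and response signals Y(k) of a circuit
# modeled as a single complex gain; the "ANN structure" is one complex weight.
K, C, eta = 512, 32, 0.1
g = 0.8 + 0.3j
T = rng.standard_normal(K) + 1j * rng.standard_normal(K)
Y = g * T

w = 1.0 + 0.0j                                    # initial parameter group
for n in range(1, 201):                           # iterative adaptation
    addr = rng.choice(K, size=C, replace=False)   # ST14: random selection
    y_sel, z_teach = Y[addr], T[addr]             # ST15: matching teacher signals
    e = z_teach - w * y_sel                       # ST16: error signals e_n(c)
    w = w + eta * np.mean(e * np.conj(y_sel))     # ST17-ST18: LMS-style update
    if np.mean(np.abs(e) ** 2) < 1e-6:            # ST19: convergence condition
        break

print(abs(w * g - 1.0))   # near 0: w has learned the inverse characteristic
```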
  • FIG. 8 is a block diagram showing a schematic configuration of a signal processing device 2 having an ANN structure 41 used as a distortion compensation circuit.
  • the ANN structure 41 can compensate in advance for signal distortion caused by the frequency characteristic and the non-linear input/output characteristic of the high frequency circuit 24, and can also compensate for a quadrature error (quadrature modulation error) generated in the quadrature modulator 21.
  • the in-phase component and the quadrature component of the output signal of the quadrature modulator 21 should ideally be orthogonal to each other, but due to factors such as temperature conditions, the actual state of the in-phase component and the quadrature component of the output signal may deviate from the ideal state. Such an error is the quadrature error.
  • as described above, the ANN learning device 31 executes the adaptive processing based on the back propagation learning algorithm using the error signals e_n(0) to e_n(C-1) indicating the differences between the randomly or pseudo-randomly selected selection signals y_n(0) to y_n(C-1) and the teacher signals z_n(0) to z_n(C-1). It is therefore possible to avoid a decrease in the convergence of the parameter group of the ANN structure 41, or to prevent the parameter group from converging to a value that deviates from the solution to which it should originally converge. As a result, the learning efficiency of the ANN structure 41 can be improved.
  • all or some of the functions of the ANN learning device 31 described above can be realized by, for example, a single processor or a plurality of processors including a semiconductor integrated circuit such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array). Alternatively, all or some of the functions of the ANN learning device 31 may be realized by one or more processors including an arithmetic device such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit) that executes the program code of software or firmware. All or some of the functions of the ANN learning device 31 can also be realized by a single processor or a plurality of processors including a combination of a semiconductor integrated circuit such as a DSP, an ASIC, or an FPGA and an arithmetic device such as a CPU or a GPU.
  • FIG. 9 is a block diagram showing a schematic configuration of a data processing device 80 which is a hardware configuration example of the ANN learning device 31.
  • the data processing device 80 shown in FIG. 9 includes a processor 81, an input/output interface 84, a memory 82, a storage device 83, and a signal path 85.
  • the signal path 85 is a bus for connecting the processor 81, the input/output interface 84, the memory 82, and the storage device 83 to each other.
  • the input/output interface 84 can output the signal transferred from the processor 81 to the signal processing circuit 11 (FIG. 1) and has a function of transferring the signal input from the signal processing circuit 11 to the processor 81.
  • the memory 82 includes a work memory used when the processor 81 executes digital signal processing, and a temporary storage memory in which data used in the digital signal processing is expanded.
  • the memory 82 may be composed of a semiconductor memory such as a flash memory or an SDRAM.
  • the memory 82 can be used as the signal memory 55 of FIG.
  • the storage device 83 can be used as a storage area for storing a program code of software or firmware to be executed by the arithmetic device when the processor 81 includes an arithmetic device such as a CPU or a GPU.
  • the storage device 83 may be composed of a non-volatile semiconductor memory such as a flash memory or a ROM (Read Only Memory).
  • in FIG. 9, the number of processors 81 is one, but the number is not limited to this; the hardware configuration of the ANN learning device 31 may be realized by using a plurality of processors that operate in cooperation with each other.
  • FIG. 10 is a block diagram showing a schematic configuration of the signal processing device 3 according to the second embodiment of the present invention.
  • the signal processing device 3 shown in FIG. 10 is configured to include a signal processing circuit 11 that executes digital signal processing and analog signal processing on an input signal, and a distortion compensation circuit 13 that compensates in advance for signal distortion generated in the signal processing circuit 11.
  • the configuration of the signal processing circuit 11 shown in FIG. 10 is the same as the configuration of the signal processing circuit 11 shown in FIG. 1, so detailed description thereof will be omitted.
  • the distortion compensating circuit 13 of the present embodiment is a distortion compensating circuit including a so-called indirect learning mechanism (Indirect Learning Architecture).
  • the distortion compensation circuit 13 has a set of ANN structures, that is, a first ANN structure 42 and a second ANN structure 43.
  • the input/output characteristic of the first ANN structure 42 (including the frequency characteristic)
  • the input/output characteristic of the second ANN structure 43 (including the frequency characteristic) match each other.
  • the first ANN structure 42 can be made to learn the inverse characteristic corresponding to the distortion characteristic of the signal processing circuit 11.
  • the first ANN structure 42 and the second ANN structure 43 of this embodiment are configured by the same neural network model.
  • a specific example of the neural network model is the NN model 41A or 41B shown in FIG. 4 or FIG. 6, but the neural network model is not limited to these.
  • the distortion compensation circuit 13 includes a signal receiving unit 60, a signal memory 55, a memory control unit 52, and a learning control unit 62 in addition to the first ANN structure 42 and the second ANN structure 43.
  • the memory control unit 52 can control the signal reading operation and the signal writing operation of the signal memory 55.
  • the variable k of the input signal X(k) is an integer value representing discrete time.
  • the first ANN structure 42 performs forward propagation processing on a plurality of time-sequential input signals X(0), X(1), ..., and outputs the resulting complex digital signals as pre-distorted signals Z(0), Z(1), ....
  • the pre-distorted signals Z(0), Z(1), ... are transferred to the signal processing circuit 11 and the signal memory 55.
  • the memory control unit 52 stores the pre-distorted signal Z(k) transferred from the first ANN structure 42 in the storage area 55a of the signal memory 55.
  • the signal processing circuit 11 performs digital signal processing and analog signal processing on the series of pre-distorted signals Z(0), Z(1), Z(2), ... transferred from the first ANN structure 42 to generate feedback signals, that is, response signals Y(0), Y(1), Y(2), ..., and transfers the response signals Y(0), Y(1), Y(2), ... to the distortion compensation circuit 13.
  • a transmission line (not shown) such as a wire or a cable for transferring the pre-distorted signal Z(k) and the response signal Y(k) is provided between the signal processing circuit 11 and the distortion compensation circuit 13.
  • the memory control unit 52 stores the response signal Y(k) in the storage area 55b of the signal memory 55. As will be described later, the memory control unit 52 reads out selection signals y_n(c) randomly or pseudo-randomly selected from the response signals Y(0), Y(1), ... stored in the storage area 55b.
  • the second ANN structure 43 performs forward propagation processing on the selection signal y_n(c) input from the storage area 55b, and outputs the resulting complex digital signal as a post-distortion signal s_n(c).
  • FIG. 11 is a diagram showing the correspondence between the processing time t, the pre-distortion signal Z(k) and the response signal Y(k).
  • T_0 is the reference time and ΔT is the processing time interval.
  • k is an integer in the range of 0 to K-1, and K is the number of response signals Y(0) to Y(K-1) stored in the storage area 55b of the signal memory 55.
  • the predistortion signal Z(k) and the response signal Y(k) have a one-to-one correspondence.
  • the learning control unit 62 shown in FIG. 10 has a function of updating the parameter group of the first ANN structure 42 and the parameter group of the second ANN structure 43 by executing the adaptive processing based on the back propagation learning algorithm.
  • the learning control unit 62 includes a subtractor 63, an adaptive processing unit 64, and a parameter setting unit 66.
  • the configuration of the learning control unit 62 is the same as the configuration of the learning control unit 61 of the first embodiment, except that the parameter setting unit 65 of FIG. 1 is replaced by the parameter setting unit 66 of FIG. 10.
  • the memory control unit 52 randomly or pseudo-randomly selects C response signals from the group of response signals Y(0) to Y(K-1) stored in the storage area 55b of the signal memory 55, and the C selected response signals are supplied from the signal memory 55 to the second ANN structure 43 as selection signals y_n(0) to y_n(C-1).
  • C is a positive integer smaller than the total number K of the response signals Y(0) to Y(K-1), and the subscript n is a positive integer indicating the number of iterations of the adaptive processing.
  • the second ANN structure 43 performs forward propagation processing on the selection signals y_n(c) input from the signal memory 55, and outputs the resulting complex digital signals as post-distortion signals s_n(c).
  • the memory control unit 52 selects, from the group of pre-distortion signals stored in the storage area 55a of the signal memory 55, the C pre-distortion signals corresponding to the selection signals y_n(0) to y_n(C-1), and the selected C pre-distortion signals are output from the signal memory 55 to the learning control unit 62 as reference signals z_n(0) to z_n(C-1).
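The paired random read described above can be sketched as follows. This is an illustrative Python fragment with made-up buffer contents and an assumed batch size C; the point is that one set of pseudo-random read addresses is applied to both storage areas 55a and 55b, so each selection signal keeps its one-to-one reference signal partner:

```python
import numpy as np

rng = np.random.default_rng(0)  # pseudo-random source, standing in for the address generator

K, C = 1024, 64                 # number of stored signals and selection count (C < K); assumed values
Y = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # stand-in response signals Y(0)..Y(K-1)
Z = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # stand-in pre-distortion signals Z(0)..Z(K-1)

# A single set of pseudo-random read addresses is applied to both storage
# areas, so each selection signal y_n(c) keeps its matching reference z_n(c).
idx = rng.choice(K, size=C, replace=False)
y_sel = Y[idx]   # selection signals for the second ANN structure
z_ref = Z[idx]   # reference signals for the learning control unit
```

Drawing without replacement (`replace=False`) guarantees that no stored signal appears twice in one batch.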
  • FIG. 12 is a diagram showing an example of the correspondence among the processing time t, the response signal Y(k), the selection signal y_n(c), the pre-distortion signal Z(k), and the reference signal z_n(c).
  • the memory control unit 52 generates read addresses one after another using a physical random number sequence or a pseudo random number sequence, and supplies the read addresses to the signal memory 55.
  • the selection signal y n (c) and the reference signal z n (c) can be selected randomly or pseudo-randomly.
  • the memory control unit 52 may instead rearrange the array of response signals Y(0) to Y(K-1) temporarily stored in the storage area 55b of the signal memory 55 in random or pseudo-random order using a physical random number sequence or a pseudo random number sequence.
  • in this case, the memory control unit 52 can obtain the selection signals y_n(0) to y_n(C-1) by reading C response signals from the signal memory 55 in address order.
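The rearrange-then-read variant can be sketched in the same illustrative style (`rng.permutation` stands in for the physical or pseudo random number sequence; array contents and sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
K, C = 1024, 64
Y = np.arange(K)          # stand-in for the stored response signals Y(0)..Y(K-1)

# Rearrange the whole array once in pseudo-random order; the same
# permutation would have to be applied to the pre-distortion signals in
# storage area 55a to keep the Y(k)/Z(k) pairs aligned.
perm = rng.permutation(K)
Y_shuffled = Y[perm]

# Address-order (sequential) reads now return pseudo-randomly ordered signals.
y_sel = Y_shuffled[:C]
```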
  • the learning control unit 62 can update the parameter group of the first ANN structure 42 and the parameter group of the second ANN structure 43 by executing the adaptive processing based on the back propagation learning algorithm using the randomly or pseudo-randomly selected selection signals y_n(0) to y_n(C-1) and the reference signals z_n(0) to z_n(C-1).
  • the learning control unit 62 can thus avoid performing the adaptive processing on time-series data made up of highly correlated discrete signals, so the convergence of the parameter groups of the first ANN structure 42 and the second ANN structure 43 can be improved. Therefore, the learning efficiency of the first ANN structure 42 and the second ANN structure 43 can be improved.
  • FIG. 13 is a flowchart schematically showing an example of the procedure of the learning process executed by the distortion compensation circuit 13.
  • the pre-distortion signal Z(k) is transferred to the signal processing circuit 11 and the signal memory 55.
  • the memory control unit 52 of the distortion compensation circuit 13 stores the pre-distortion signal Z(k) generated by the first ANN structure 42 in the storage area 55a of the signal memory 55 (step ST21).
  • when the signal receiving unit 60 receives the response signal Y(k) output from the signal processing circuit 11, the memory control unit 52 stores the response signal Y(k) in the storage area 55b of the signal memory 55 (step ST22).
  • in step ST23, it is determined whether to start the adaptive processing. For example, the memory control unit 52 counts the number of response signals Y(k) stored in the storage area 55b of the signal memory 55, and can determine to start the adaptive processing when the number reaches a predetermined number (YES in step ST23). When it is determined that the adaptive processing is not to be started (NO in step ST23), steps ST21 to ST22 are repeatedly executed. As a result, a series of pre-distortion signals Z(0), Z(1), Z(2), ... are stored in the storage area 55a of the signal memory 55, and a series of response signals Y(0), Y(1), Y(2), ... corresponding to the pre-distortion signals Z(0), Z(1), Z(2), ... are stored in the storage area 55b of the signal memory 55.
  • when it is determined that the adaptive processing is to be started (YES in step ST23), the memory control unit 52 and the learning control unit 62 start the adaptive processing.
  • the value of the iteration count n is initialized to 1.
  • the memory control unit 52 reads the randomly or pseudo-randomly selected selection signals y_n(0) to y_n(C-1) from the storage area 55b of the signal memory 55 and supplies the selection signals y_n(0) to y_n(C-1) to the second ANN structure 43 (step ST24). As a result, the post-distortion signals s_n(0) to s_n(C-1) are output from the second ANN structure 43. Further, the memory control unit 52 reads the reference signals z_n(0) to z_n(C-1) corresponding to the selection signals y_n(0) to y_n(C-1) from the storage area 55a of the signal memory 55 and supplies the reference signals z_n(0) to z_n(C-1) to the learning control unit 62 (step ST25).
  • the subtractor 63 in the learning control unit 62 calculates error signals e_n(0) to e_n(C-1) indicating the differences between the post-distortion signals s_n(0) to s_n(C-1) and the reference signals z_n(0) to z_n(C-1) (step ST26). Note that in the example of FIG. 13, steps ST24, ST25, and ST26 are executed in this order for convenience of description, but the present invention is not limited to this; steps ST24, ST25, and ST26 may be executed in parallel with each other.
  • the adaptive processing unit 64 executes the adaptive processing based on the back propagation learning algorithm using the error signals e_n(0) to e_n(C-1), thereby calculating a new parameter group for the second ANN structure 43 (step ST27).
  • the parameter setting unit 66 updates the parameter groups of the first ANN structure 42 and the second ANN structure 43 by setting the new parameter group in both the first ANN structure 42 and the second ANN structure 43 (step ST28).
  • next, the adaptive processing unit 64 determines whether or not to end the iterative processing (step ST29). When it is determined that the iterative processing is not to be ended (NO in step ST29), the memory control unit 52 and the learning control unit 62 repeatedly execute steps ST24 to ST28. For example, the adaptive processing unit 64 may determine to end the iterative processing when the number of iterations n reaches a predetermined number, or when a convergence condition indicating that the parameter group has sufficiently converged is satisfied (YES in step ST29).
  • an example of the convergence condition is that an error evaluation value representing the magnitude of the error signals e_n(0) to e_n(C-1) remains equal to or less than a predetermined value for a predetermined number of consecutive iterations.
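A convergence condition of this consecutive-count form can be expressed as a small helper function (the threshold and required count are illustrative assumptions, not values from the specification):

```python
def converged(error_history, threshold, required_consecutive):
    """Return True when the error evaluation value has stayed at or below
    `threshold` for the last `required_consecutive` iterations."""
    if len(error_history) < required_consecutive:
        return False
    return all(e <= threshold for e in error_history[-required_consecutive:])
```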
  • when it is determined that the iterative processing is to be ended (YES in step ST29) and the learning of the first ANN structure 42 and the second ANN structure 43 is to be continued (NO in step ST30), the learning control unit 62 executes the learning processing from step ST21 again. When the learning of the first ANN structure 42 and the second ANN structure 43 is completed (YES in step ST30), the learning control unit 62 ends the learning processing.
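Taken together, steps ST21 to ST29 amount to minibatch training on randomly drawn samples. The following toy sketch replaces the signal processing circuit with an unknown complex gain and reduces both ANN structures to a single shared complex parameter; the LMS-style gradient update is an illustrative stand-in for the back propagation adaptive processing, not the circuit's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
K, C, N_MAX = 512, 32, 200   # stored signals, batch size, max iterations (assumed)

# Toy stand-in for the signal chain: the circuit applies an unknown complex
# gain g, and both ANN structures collapse to one shared complex parameter w,
# whose ideal value is the inverse characteristic 1/g.
g = 0.8 * np.exp(1j * 0.3)
Z = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # stored pre-distortion signals (ST21)
Y = g * Z                                                  # stored response signals (ST22)

w = 0.0 + 0.0j    # shared parameter of the first and second ANN structures
mu = 0.5          # adaptation step size (assumed)
for n in range(1, N_MAX + 1):
    idx = rng.choice(K, size=C, replace=False)  # ST24: pseudo-random selection signals
    y, z = Y[idx], Z[idx]                       # ST25: matching reference signals
    s = w * y                                   # forward propagation of the 2nd ANN structure
    e = z - s                                   # ST26: error signals e_n(c)
    w = w + mu * np.mean(e * np.conj(y))        # ST27/ST28: LMS-style parameter update
    if np.mean(np.abs(e) ** 2) < 1e-12:         # ST29: simple convergence condition
        break
```

With these stand-ins, w converges toward the inverse characteristic 1/g, mirroring how the first ANN structure acquires the inverse of the circuit's distortion characteristic.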
  • as described above, since the distortion compensation circuit 13 executes the adaptive processing based on the back propagation learning algorithm using the error signals e_n(0) to e_n(C-1) indicating the differences between the randomly or pseudo-randomly selected selection signals y_n(0) to y_n(C-1) and the reference signals z_n(0) to z_n(C-1), the learning efficiency of the first ANN structure 42 and the second ANN structure 43 can be improved.
  • all or some of the functions of the distortion compensation circuit 13 described above can be realized by one or more processors including a semiconductor integrated circuit such as a DSP, an ASIC, or an FPGA.
  • all or some of the functions of the distortion compensation circuit 13 may be implemented by one or more processors including a computing device such as a CPU or GPU that executes program code of software or firmware.
  • all or some of the functions of the distortion compensation circuit 13 can also be realized by one or more processors including a combination of a semiconductor integrated circuit such as a DSP, an ASIC, or an FPGA and an arithmetic unit such as a CPU or a GPU.
  • the hardware configuration of the distortion compensation circuit 13 may be realized by the data processing device 80 shown in FIG. 9.
  • Embodiments 1 and 2 are examples of the present invention, and various embodiments other than Embodiments 1 and 2 are possible. Within the scope of the present invention, it is possible to freely combine the first and second embodiments, to modify any constituent element of each embodiment, or to omit any constituent element of each embodiment.
  • the ANN structure 41, the first ANN structure 42, and the second ANN structure 43 are not limited to the NN models 41A and 41B shown in FIGS. 4 and 6.
  • instead of the NN models 41A and 41B, a known recurrent neural network model (Recurrent Neural Network Model, RNN model) having feedback coupling may be used.
  • the artificial neural network (ANN) learning device, the distortion compensation circuit, and the signal processing device can be used, for example, in a high frequency circuit (for example, a power amplifier) of a radio transmitter or receiver in wireless communication systems such as mobile communication systems, digital broadcasting systems, and satellite communication systems.
  • ANN artificial neural network
  • 2, 3 signal processing device, 11 signal processing circuit, 13 distortion compensation circuit, 21 quadrature modulator (QM), 22 D/A converter (DAC), 23 up converter (UPC), 24 high frequency circuit, 25 directional coupler, 26 down converter (DNC), 27 A/D converter (ADC), 28 quadrature demodulator (QD), 31 artificial neural network (ANN) learning device, 40 signal supply unit, 41 artificial neural network (ANN) structure, 41A, 41B neural network model (NN model), 42 first ANN structure, 43 second ANN structure, 51, 52 memory control unit, 55 signal memory, 55a, 55b storage area, 60 signal receiving unit, 61, 62 learning control unit, 63 subtractor, 64 adaptive processing unit, 65, 66 parameter setting unit, 80 data processing device, 81 processor, 82 memory, 83 storage device, 84 input/output interface, 85 signal path.
  • QM quadrature modulator
  • DAC D/A converter
  • UPC up converter
  • QD quadrature demodulator


Abstract

An artificial neural network learning device (31) is provided with: a signal supply unit (40) that supplies a plurality of training signals to a signal processing circuit (11); a signal memory (55) that stores response signals output from the signal processing circuit (11); a memory control unit (51) that reads, from the signal memory (55), a plurality of selection signals selected randomly or pseudo-randomly, and supplies the plurality of selection signals to an artificial neural network structure (41) to cause a plurality of post-distortion signals to be output from the artificial neural network structure (41); and a learning control unit (61). The learning control unit (61) updates a parameter group of the artificial neural network structure (41) by calculating a plurality of error signals indicating differences between the plurality of post-distortion signals and a plurality of teacher signals corresponding respectively to the plurality of selection signals, and executing adaptive processing based on a back propagation learning algorithm.

Description

Artificial neural network learning device, distortion compensation circuit, and signal processing device
The present invention relates to a technique for training an artificial neural network structure, and to a technique for compensating for signal distortion generated in a signal processing circuit such as a high frequency circuit by using the artificial neural network structure.
In recent years, artificial neural networks, which imitate the neural networks of the brain, have been used in various technical fields such as image recognition, speech recognition, natural language processing, and machine translation. In the field of wireless communication, a communication module such as a wireless communication device includes a high frequency circuit containing circuit elements such as a power amplifier and a filter, and distortion (signal distortion) may occur in the output signal of the high frequency circuit due to its frequency characteristics and nonlinear input/output characteristics. In order to suppress the deterioration of signal quality caused by such signal distortion, distortion compensation techniques that cancel (compensate for) the signal distortion generated in the high frequency circuit are generally used. In recent years, many distortion compensation techniques using artificial neural networks have been proposed. Hereinafter, an artificial neural network is simply referred to as an "ANN".
For example, Non-Patent Document 1 below discloses a pre-distortion type distortion compensation technique that compensates for the nonlinear characteristics of a power amplifier by using a multi-layer ANN structure called RVTDNN (Real-Valued Time-Delay Neural Networks). The pre-distortion method is a distortion compensation technique in which distortion having the inverse characteristic is applied in advance to the signal to be input to a high frequency circuit, thereby canceling the signal distortion generated in the high frequency circuit.
The distortion compensation technique disclosed in Non-Patent Document 1 uses a predistorter that operates based on an ANN structure composed of an input layer having a plurality of nodes, a hidden layer having a plurality of nodes, and an output layer having two nodes. The signal components input to the nodes of the input layer propagate from the input layer to the nodes of the hidden layer, and then from the hidden layer to the nodes of the output layer. The weights that determine the characteristics of this ANN structure are calculated by a machine learning algorithm called the back propagation learning algorithm (BPLA).
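As a rough illustration of such an input-hidden-output structure, the following sketch propagates signal components through a small real-valued network with a two-node output carrying the in-phase (I) and quadrature (Q) components. The layer sizes, weight values, and tanh activation are assumptions for illustration only, not the exact model of Non-Patent Document 1:

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed layer sizes; the two output nodes carry the I and Q components.
n_in, n_hidden, n_out = 6, 10, 2
W1 = 0.1 * rng.standard_normal((n_hidden, n_in))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_out, n_hidden))  # hidden -> output weights
b2 = np.zeros(n_out)

def forward(x):
    """Propagate signal components from the input layer through the hidden
    layer to the two-node output layer."""
    h = np.tanh(W1 @ x + b1)   # input layer -> hidden layer
    return W2 @ h + b2         # hidden layer -> output layer (I, Q)

out = forward(rng.standard_normal(n_in))
```

In a BPLA-trained predistorter, W1, b1, W2, and b2 are exactly the weights that the back propagation learning algorithm adjusts.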
In the conventional back propagation learning algorithm, the weights of the ANN structure are updated based on time-series data output from the ANN structure, so the learning rate of the ANN structure is low. Specifically, if the output signal of the ANN structure at a certain discrete time t_n (n is an integer) is denoted by O(n), the conventional back propagation learning algorithm updates the weights of the ANN structure based on time-series data consisting of the output signals O(n), O(n-1), ..., O(n-Nb+1) (Nb is a positive integer). In general, output signals O(i) and O(j) (i ≠ j) that are close in time are often highly correlated, so if time-series data is used when the conventional back propagation learning algorithm is executed, the weights are updated using a group of highly correlated output signals. As a result, the convergence of the weights of the ANN structure deteriorates, or the weights of the ANN structure converge to values that deviate from the solution to which they should originally converge.
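The correlation problem described above can be illustrated numerically: consecutive samples of a band-limited signal are strongly correlated, while randomly paired samples are nearly uncorrelated. This is a toy demonstration with an assumed moving-average filter, not data from the specification:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(10000)
# Band-limit the signal with a short moving average so that neighbouring
# samples become strongly correlated, as in a typical modulated waveform.
s = np.convolve(x, np.ones(8) / 8, mode="valid")

adjacent = np.corrcoef(s[:-1], s[1:])[0, 1]            # consecutive samples
perm = rng.permutation(len(s) - 1)
random_pairs = np.corrcoef(s[:-1][perm], s[1:])[0, 1]  # randomly paired samples
```

Here `adjacent` comes out close to the theoretical lag-1 autocorrelation of 7/8, while `random_pairs` is close to zero, which is why random or pseudo-random selection improves the conditioning of the weight updates.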
In view of the above, an object of the present invention is to provide an artificial neural network learning device, a distortion compensation circuit, and a signal processing device capable of improving the learning efficiency of an ANN structure.
An artificial neural network learning device according to one aspect of the present invention includes: a signal supply unit that supplies a plurality of temporally continuous training signals to a signal processing circuit and causes the signal processing circuit to output a plurality of response signals; a signal memory that stores the plurality of response signals; a memory control unit that reads a plurality of selection signals randomly or pseudo-randomly selected from the plurality of response signals stored in the signal memory and supplies the plurality of selection signals to an artificial neural network structure to cause the artificial neural network structure to output a plurality of post-distortion signals; and a learning control unit that calculates a plurality of error signals indicating differences between the plurality of post-distortion signals and a plurality of teacher signals corresponding to the plurality of selection signals, and updates a parameter group that determines the input/output characteristics of the artificial neural network structure by executing adaptive processing based on a back propagation learning algorithm using the plurality of error signals.
According to one aspect of the present invention, adaptive processing based on a back propagation learning algorithm is executed using a plurality of error signals indicating differences between a plurality of selection signals selected randomly or pseudo-randomly and a plurality of teacher signals, so the learning efficiency of the ANN structure can be improved.
FIG. 1 is a block diagram showing a schematic configuration of an artificial neural network (ANN) learning system according to Embodiment 1 of the present invention.
FIG. 2 is a diagram showing the correspondence among the processing time, the training signal, and the response signal.
FIG. 3 is a diagram showing an example of the correspondence among the processing time, the response signal, the selection signal, the training signal, and the teacher signal (reference signal).
FIG. 4 is a schematic diagram of a neural network model showing a specific example of the ANN structure.
FIG. 5 is a diagram for explaining an example of forward propagation processing.
FIG. 6 is a schematic diagram of a neural network model showing another specific example of the ANN structure.
FIG. 7 is a flowchart schematically showing an example of the procedure of the learning processing according to Embodiment 1.
FIG. 8 is a block diagram showing a schematic configuration of a signal processing device having a trained ANN structure used as a distortion compensation circuit.
FIG. 9 is a block diagram showing a schematic configuration of a data processing device that is an example of the hardware configuration of the ANN learning device.
FIG. 10 is a block diagram showing a schematic configuration of a signal processing device according to Embodiment 2 of the present invention.
FIG. 11 is a diagram showing the correspondence among the processing time, the pre-distortion signal, and the response signal.
FIG. 12 is a diagram showing an example of the correspondence among the processing time, the response signal, the selection signal, the pre-distortion signal, and the reference signal.
FIG. 13 is a flowchart schematically showing an example of the procedure of the learning processing according to Embodiment 2.
Hereinafter, various embodiments of the present invention will be described in detail with reference to the accompanying drawings. Components denoted by the same reference numerals throughout the drawings have the same configuration and the same function.
Embodiment 1.
FIG. 1 is a block diagram showing a schematic configuration of an artificial neural network (ANN) learning system 1 according to Embodiment 1 of the present invention. The artificial neural network learning system 1 shown in FIG. 1 includes a signal processing circuit 11 that performs digital signal processing and analog signal processing on an input signal, and an artificial neural network (ANN) learning device 31 that includes an artificial neural network (ANN) structure 41. The ANN learning device 31 has a function of causing the ANN structure 41 to learn the inverse characteristic corresponding to the distortion characteristic that causes signal distortion in the signal processing circuit 11. Although the ANN structure 41 of the present embodiment is incorporated inside the ANN learning device 31, the ANN structure 41 may instead be arranged outside the ANN learning device 31.
As shown in FIG. 1, the signal processing circuit 11 includes a quadrature modulator (QM) 21, a D/A converter (DAC) 22, an up converter (UPC) 23, a high frequency circuit 24, a directional coupler 25, a down converter (DNC) 26, an A/D converter (ADC) 27, and a quadrature demodulator (QD) 28. The quadrature modulator 21 receives a complex digital signal composed of an in-phase component and a quadrature component, and performs digital quadrature modulation on the complex digital signal to generate a digital modulation signal. The D/A converter 22 converts the digital modulation signal input from the quadrature modulator 21 into an analog signal, and the up converter 23 converts the analog signal into a signal in a high frequency band (for example, a microwave band). The high frequency circuit 24 performs analog signal processing on the output signal of the up converter 23 to generate a high frequency output signal TS. Examples of the high frequency circuit 24 include, but are not limited to, a high power amplifier (HPA).
A part of the high frequency output signal TS is fed back by the directional coupler 25. The down converter 26 converts the high frequency signal fed back from the directional coupler 25 into an analog signal in a lower frequency band, and the A/D converter 27 converts the analog signal into a digital signal. The quadrature demodulator 28 performs digital quadrature demodulation on the digital signal to generate a complex digital signal composed of an in-phase component and a quadrature component. The generated complex digital signal is transferred to the ANN learning device 31 as a response signal.
The configuration of the signal processing circuit 11 is not limited to the configuration shown in FIG. 1. In the example of FIG. 1, the quadrature modulator 21 that performs digital quadrature modulation and the quadrature demodulator 28 that performs digital quadrature demodulation are used. There may also be embodiments in which an analog quadrature modulation circuit and an analog quadrature demodulation circuit are used instead of the quadrature modulator 21 and the quadrature demodulator 28.
For use in training the ANN structure 41, the ANN learning device 31 includes a signal memory 55 having a storage area 55a in which a plurality of temporally continuous training signals (discrete signals) are stored in advance, a memory control unit 51 that controls the signal read operation and the signal write operation of the signal memory 55, and a signal supply unit 40 that supplies the training signal T(k) read from the signal memory 55 to the signal processing circuit 11. Here, the variable k of the training signal T(k) is an integer value representing discrete time, and the training signal T(k) is a complex digital signal composed of an in-phase component and a quadrature component. As the signal memory 55, for example, a non-volatile memory such as a flash memory and a volatile memory such as an SDRAM (Synchronous Dynamic Random Access Memory) may be used.
 Between the signal processing circuit 11 and the ANN learning device 31, a transmission line (not shown) such as a wire or a cable is provided for transferring the training signal T(k) from the signal supply unit 40 of the ANN learning device 31 to the quadrature modulator 21 of the signal processing circuit 11.
 The signal processing circuit 11 performs digital signal processing and analog signal processing on the series of training signals T(0), T(1), T(2), ... transferred from the signal supply unit 40 of the ANN learning device 31 to generate feedback signals, that is, response signals Y(0), Y(1), Y(2), ..., and transfers these response signals Y(0), Y(1), Y(2), ... to the ANN learning device 31. Between the ANN learning device 31 and the signal processing circuit 11, a transmission line (not shown) such as a wire or a cable is provided for transferring the response signal Y(k) from the quadrature demodulator 28 of the signal processing circuit 11 to the ANN learning device 31.
 The ANN learning device 31 includes a signal receiving unit 60 that receives the response signal Y(k) transferred from the signal processing circuit 11. The signal memory 55 has a storage area 55b that stores the response signal Y(k) received by the signal receiving unit 60. As described later, the memory control unit 51 can read a selection signal y_n(c) chosen randomly or pseudo-randomly from the response signals Y(0), Y(1), ... stored in the storage area 55b, and supply that selection signal y_n(c) to the ANN structure 41 (c = 0, 1, ...). The ANN structure 41 performs forward propagation processing on the selection signal y_n(c) input from the storage area 55b and outputs the resulting complex digital signal as a post-distortion signal s_n(c). A specific example of the ANN structure 41 and the contents of the forward propagation processing will be described later.
 FIG. 2 is a diagram showing the correspondence between the processing time t, the training signal T(k), and the response signal Y(k). Here, the processing time t is expressed by the equation t = T_0 + k × ΔT, where T_0 is the reference time and ΔT is the processing time interval. In the example of FIG. 2, k is an integer in the range of 0 to K−1, and K is the number of response signals Y(0) to Y(K−1) stored in the storage area 55b of the signal memory 55. As shown in the correspondence table of FIG. 2, the response signal Y(k) corresponding to the training signal T(k) is obtained at the processing time t = T_0 + k × ΔT.
 Referring to FIG. 1, the ANN learning device 31 further has a learning control unit 61. The learning control unit 61 has a function of updating the parameter group that defines the input/output characteristics (including the frequency characteristics) of the ANN structure 41 by executing adaptive processing based on a back-propagation learning algorithm. As shown in FIG. 1, the learning control unit 61 includes a subtractor 63, an adaptive processing unit 64, and a parameter setting unit 65.
 When the adaptive processing is executed, the memory control unit 51 randomly or pseudo-randomly selects C selection signals y_n(0) to y_n(C−1) from the group of response signals Y(0) to Y(K−1) stored in the storage area 55b of the signal memory 55, and supplies the selection signals y_n(0) to y_n(C−1) from the signal memory 55 to the ANN structure 41. Here, C is a positive integer smaller than the total number K of the response signals Y(0) to Y(K−1), and the subscript n is a positive integer indicating the iteration count of the adaptive processing. The ANN structure 41 performs forward propagation processing on each selection signal y_n(c) input from the signal memory 55 and outputs the resulting complex digital signal as a post-distortion signal s_n(c). Further, the memory control unit 51 selects, from the group of training signals T(0) to T(K−1) stored in the storage area 55a of the signal memory 55, the C training signals respectively corresponding to the selection signals y_n(0) to y_n(C−1), and causes the signal memory 55 to output the selected C training signals to the learning control unit 61 as teacher signals (reference signals) z_n(0) to z_n(C−1).
 FIG. 3 is a diagram showing an example of the correspondence among the processing time τ, the response signal Y(κ), the selection signal y_n(c), the training signal T(κ), and the teacher signal (reference signal) z_n(c). Here, the processing time τ is expressed by the equation τ = T_0 + κ × ΔT, where κ is a random or pseudo-random integer (random number). The selection signal y_n(c) coincides with the randomly or pseudo-randomly selected response signal Y(κ) (y_n(c) = Y(κ)), and the teacher signal z_n(c) coincides with the training signal T(κ) corresponding to the response signal Y(κ) (z_n(c) = T(κ)).
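As an illustration of the pairing in FIG. 3, the following Python sketch draws C random indices κ and forms the pairs y_n(c) = Y(κ), z_n(c) = T(κ); the function name and the use of Python lists for the two storage areas are hypothetical conveniences, not part of the patent.

```python
import random

def select_pairs(training, responses, C, seed=0):
    """Randomly pair C selection signals y_n(c) = Y(kappa) with the
    corresponding teacher signals z_n(c) = T(kappa), as in FIG. 3.
    `training` and `responses` are lists of equal length K."""
    rng = random.Random(seed)                      # pseudo-random source
    K = len(responses)
    kappas = [rng.randrange(K) for _ in range(C)]  # random indices kappa
    y = [responses[k] for k in kappas]             # selection signals y_n(0..C-1)
    z = [training[k] for k in kappas]              # teacher signals  z_n(0..C-1)
    return y, z

# Toy data: T(k) = k + 0j, Y(k) = 0 + kj, so a pair is consistent
# exactly when the teacher's real part equals the selection's imaginary part.
T = [complex(k, 0) for k in range(8)]
Y = [complex(0, k) for k in range(8)]
y, z = select_pairs(T, Y, C=4)
```

Because both lists are indexed by the same κ, the correspondence y_n(c) = Y(κ), z_n(c) = T(κ) holds for every selected pair regardless of the random draw.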
 The memory control unit 51 can select pairs of a selection signal y_n(c) and a teacher signal z_n(c) randomly or pseudo-randomly by continuously generating read addresses using a physical random number sequence or a pseudo-random number sequence and supplying those read addresses to the signal memory 55. When a physical random number sequence is used, for example, the memory control unit 51 may read the data of the physical random number sequence from a non-volatile memory in which it is stored. On the other hand, when a pseudo-random number sequence is used, the memory control unit 51 only needs to contain an arithmetic circuit or a function for computing the pseudo-random number sequence by a known pseudo-random number generation algorithm. Such an arithmetic circuit can be realized by, for example, a random number generation circuit using a known linear feedback shift register, but is not limited to this.
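A minimal sketch of pseudo-random read-address generation with a linear feedback shift register. The patent does not fix a register length or tap positions; the sketch assumes a 16-bit Galois LFSR with the common maximal-length tap mask 0xB400, and the modulo mapping of register states to addresses is likewise only one simple illustrative choice.

```python
def lfsr16(state):
    """One step of a 16-bit Galois LFSR (tap mask 0xB400): shift right
    and XOR in the mask when the bit shifted out was 1.  Any non-zero
    seed cycles through all 65535 non-zero states before repeating."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def random_addresses(num_addresses, memory_size, seed=0xACE1):
    """Generate pseudo-random read addresses in [0, memory_size), as the
    memory control unit 51 might supply them to the signal memory 55."""
    state = seed
    addresses = []
    for _ in range(num_addresses):
        state = lfsr16(state)
        addresses.append(state % memory_size)
    return addresses
```

A hardware memory control unit would implement the same shift register in logic; note that the modulo reduction slightly biases the address distribution unless memory_size divides the period 65535 evenly.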
 Alternatively, the memory control unit 51 may rearrange the array of response signals Y(0) to Y(K−1) once stored in the storage area 55b of the signal memory 55 into a random or pseudo-random order using a physical random number sequence or a pseudo-random number sequence. The memory control unit 51 can then obtain the selection signals y_n(0) to y_n(C−1) by reading C response signals from the signal memory 55 in address order.
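The rearrangement approach can be sketched as follows; the helper name is hypothetical, and returning the permutation alongside the selected signals is an illustrative way to let the corresponding teacher signals be read out in the same order.

```python
import random

def shuffled_readout(responses, C, seed=0):
    """Rearrange the stored response array into a pseudo-random order,
    then read the first C entries in address order.  Returns the
    selected signals and the permutation, so the matching teacher
    signals can be fetched with the same index order."""
    rng = random.Random(seed)              # pseudo-random sequence
    order = list(range(len(responses)))
    rng.shuffle(order)                     # one random permutation
    selected = [responses[i] for i in order[:C]]
    return selected, order
```

Compared with generating a fresh random address per read, a one-time shuffle guarantees that no response signal is selected twice within the same pass.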
 The learning control unit 61 can update the parameter group of the ANN structure 41 by executing adaptive processing based on the back-propagation learning algorithm using the randomly or pseudo-randomly selected selection signals y_n(0) to y_n(C−1) and the teacher signals z_n(0) to z_n(C−1). This allows the learning control unit 61 to avoid executing the adaptive processing on time-series data consisting of highly correlated discrete signals, so the convergence of the parameter group of the ANN structure 41 can be improved. The learning efficiency of the ANN structure 41 can therefore be improved.
 Next, before the back-propagation learning algorithm is described, a specific example of the ANN structure 41 and the forward propagation processing will be described. FIG. 4 is a schematic diagram showing a neural network model (NN model) 41A as a specific example of the ANN structure 41.
 The NN model 41A shown in FIG. 4 is a multi-layer neural network model consisting of P layers L_1 to L_P. Here, P is an integer of 4 or more indicating the number of layers. The NN model 41A has an input layer L_1 to which the in-phase component y_n^i(c) and the quadrature component y_n^q(c) of the input selection signal y_n(c) are supplied, intermediate layers (hidden layers) L_2 to L_{P−1}, and an output layer L_P that outputs the in-phase component s_n^i(c) and the quadrature component s_n^q(c) of the post-distortion signal s_n(c). In the example of FIG. 4, there are two or more intermediate layers, but there may instead be a single intermediate layer.
 Among the P layers L_1 to L_P of the NN model 41A, the p-th layer L_p has nodes N_{p,1} to N_{p,μ(p)} modeled on the neurons of the brain's neural network. Here, the subscript p is the layer number, and the subscript μ(p) is a positive integer indicating the number of nodes in the p-th layer L_p. In the NN model 41A, the nodes of one of two adjacent layers are coupled to the nodes of the other layer, and a weight coefficient (coupling weight) is assigned as the coupling strength between nodes. In addition, a value called a bias is assigned to each node. The parameter group that defines the input/output characteristics of the NN model 41A includes the weight coefficients and the biases.
 As shown in FIG. 4, the input layer L_1 has two nodes N_{1,1} and N_{1,μ(1)} to which the in-phase component y_n^i(c) and the quadrature component y_n^q(c) of the selection signal y_n(c) are respectively input (μ(1) = 2), and the output layer L_P has two nodes N_{P,1} and N_{P,μ(P)} that respectively output the in-phase component s_n^i(c) and the quadrature component s_n^q(c) of the post-distortion signal s_n(c) (μ(P) = 2).
 FIG. 5 is a diagram for explaining an example of the forward propagation processing. As shown in FIG. 5, the nodes N_{p−1,1} to N_{p−1,μ(p−1)} in the (p−1)-th layer L_{p−1} are coupled to the r-th node N_{p,r} in the p-th layer L_p. Here, r is an integer in the range of 1 to μ(p). As shown in FIG. 5, weight coefficients w_{r,1}^{(p,p−1)} to w_{r,μ(p−1)}^{(p,p−1)} are assigned as the coupling strengths between the nodes N_{p−1,1} to N_{p−1,μ(p−1)} and the node N_{p,r}. In addition, a bias b_r^{(p)} is assigned to the node N_{p,r}.
 At this time, the node N_{p,r} multiplies the signals o_1^{(p−1)} to o_{μ(p−1)}^{(p−1)} propagated in the forward direction from the nodes N_{p−1,1} to N_{p−1,μ(p−1)} of the (p−1)-th layer L_{p−1} by the weight coefficients w_{r,1}^{(p,p−1)} to w_{r,μ(p−1)}^{(p,p−1)}, respectively, and adds the bias b_r^{(p)} to the sum of the products. The node N_{p,r} further applies an activation function f[·] to the addition result. As a result, the node N_{p,r} propagates the signal expressed by the formula shown in FIG. 5, o_r^{(p)} = f[Σ_i w_{r,i}^{(p,p−1)} o_i^{(p−1)} + b_r^{(p)}], in the forward direction toward all the nodes N_{p+1,1} to N_{p+1,μ(p+1)} in the (p+1)-th layer L_{p+1}. As the activation function f[·], a known activation function such as a sigmoid function or a hyperbolic tangent function may be used. The variable i in the formula shown in FIG. 5 is an integer in the range of 1 to μ(p−1).
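The per-node computation above can be sketched in pure Python, here with the hyperbolic tangent as the activation function f[·] (one of the options the text mentions); the function names and list-of-lists parameter layout are illustrative.

```python
import math

def forward_layer(o_prev, weights, biases):
    """One layer of forward propagation: node r outputs
    o_r = f(sum_i w_{r,i} * o_i + b_r), with f = tanh."""
    return [math.tanh(sum(w * o for w, o in zip(row, o_prev)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Propagate an input vector through a list of (weights, biases)
    layer parameters and return the output-layer signals."""
    o = x
    for weights, biases in layers:
        o = forward_layer(o, weights, biases)
    return o
```

For the NN model 41A, forward([y_i, y_q], layers) with a two-node output layer would yield [s_i, s_q]; applying the activation in every layer, including the last, follows the node description above.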
 The nodes N_{P,1} and N_{P,μ(P)} (μ(P) = 2) in the output layer L_P shown in FIG. 4 output the signals o_1^{(P)} and o_2^{(P)} as the in-phase component s_n^i(c) and the quadrature component s_n^q(c), respectively.
 FIG. 6 is a schematic diagram showing a neural network model (NN model) 41B as another specific example of the ANN structure 41. The NN model 41B shown in FIG. 6 is a multi-layer neural network model consisting of delay elements D_1 to D_M connected in series and P layers L_1 to L_P. Here, M is a positive integer representing the number of delay elements D_1 to D_M.
 The group of delay elements D_1 to D_M in the NN model 41B generates M delayed signals y_n(c−1) to y_n(c−M) from the selection signal y_n(c) input to the NN model 41B. The layer structure of the NN model 41B has an input layer L_1 to which the in-phase components y_n^i(c) to y_n^i(c−M) and the quadrature components y_n^q(c) to y_n^q(c−M) of the signals y_n(c) to y_n(c−M) are input, intermediate layers (hidden layers) L_2 to L_{P−1}, and an output layer L_P that outputs the in-phase component s_n^i(c) and the quadrature component s_n^q(c) of the post-distortion signal s_n(c). In the example of FIG. 6, there are two or more intermediate layers, but there may instead be a single intermediate layer.
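A sketch of how the input vector of NN model 41B might be assembled from the current sample and its M delayed versions; treating samples before the start of the record as zero is an assumption made here for illustration, not something the patent specifies.

```python
def delayed_input(y, c, M):
    """Build the input vector at time c from the in-phase and quadrature
    components of y(c), y(c-1), ..., y(c-M).  `y` is a list of complex
    samples; indices below zero are treated as zero (cold start)."""
    taps = [y[c - m] if c - m >= 0 else 0j for m in range(M + 1)]
    vec = []
    for sample in taps:
        vec.extend([sample.real, sample.imag])  # I then Q for each tap
    return vec
```

The resulting vector has 2 × (M + 1) entries, matching an input layer with one node per in-phase and quadrature component of each tap.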
 As in the NN model 41A shown in FIG. 4, the p-th layer L_p among the P layers L_1 to L_P of the NN model 41B has nodes N_{p,1} to N_{p,μ(p)} modeled on the neurons of the brain's neural network. Also in the NN model 41B, a weight coefficient (coupling weight) is assigned as the coupling strength between a node of one of two adjacent layers and a node of the other layer, and a value called a bias is assigned to each node. The parameter group that defines the input/output characteristics of the NN model 41B includes the weight coefficients and the biases.
 Next, the operation of the ANN learning device 31 of Embodiment 1 will be described with reference to FIG. 7. FIG. 7 is a flowchart schematically showing an example of the procedure of the learning processing executed by the ANN learning device 31. When the ANN structure 41 is to learn for the first time, its parameter group is initialized to random values.
 First, processing for obtaining a group of response signals corresponding to the group of training signals is executed (steps ST11 to ST13). During this processing, the learning control unit 61 does not operate. Referring to FIG. 7, the memory control unit 51 reads a training signal T(k) from the storage area 55a of the signal memory 55, and the signal supply unit 40 supplies the read training signal to the signal processing circuit 11 (step ST11). The signal processing circuit 11 then performs digital signal processing and analog signal processing on the training signal T(k) input from the ANN learning device 31 to generate a response signal Y(k), and transfers the response signal Y(k) to the ANN learning device 31. The signal receiving unit 60 of the ANN learning device 31 receives the response signal Y(k) transferred from the signal processing circuit 11.
 The memory control unit 51 stores the response signal Y(k) received by the signal receiving unit 60 in the storage area 55b of the signal memory 55 (step ST12).
 Next, it is determined whether or not to start the adaptive processing (step ST13). For example, the memory control unit 51 may count the number of response signals Y(k) stored in the storage area 55b of the signal memory 55 and determine that the adaptive processing is to be started when that number reaches a predetermined number (YES in step ST13). When it is determined that the adaptive processing is not to be started (NO in step ST13), steps ST11 to ST12 are repeatedly executed. As a result, the series of training signals T(0), T(1), T(2), ... is supplied to the signal processing circuit 11, and the series of response signals Y(0), Y(1), Y(2), ... corresponding to the training signals T(0), T(1), T(2), ... is stored in the storage area 55b of the signal memory 55.
 When it is determined that the adaptive processing is to be started (YES in step ST13), the memory control unit 51 and the learning control unit 61 start the adaptive processing. When the adaptive processing is executed for the first time, the value of the iteration count n is initialized to 1.
 Specifically, the memory control unit 51 reads the randomly or pseudo-randomly chosen selection signals y_n(0) to y_n(C−1) from the storage area 55b of the signal memory 55 and supplies them to the ANN structure 41 (step ST14). As a result, the ANN structure 41 outputs the post-distortion signals s_n(0) to s_n(C−1). The memory control unit 51 also reads the teacher signals z_n(0) to z_n(C−1) corresponding to the selection signals y_n(0) to y_n(C−1) from the storage area 55a of the signal memory 55 and supplies them to the learning control unit 61 (step ST15). The subtractor 63 in the learning control unit 61 calculates error signals e_n(0) to e_n(C−1) indicating the differences between the post-distortion signals s_n(0) to s_n(C−1) and the teacher signals z_n(0) to z_n(C−1) (step ST16). In the example of FIG. 7, steps ST14, ST15, and ST16 are executed in this order for convenience of description, but the order is not limited to this; steps ST14, ST15, and ST16 may be executed in parallel with one another.
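The subtractor 63 of step ST16 amounts to an element-wise complex subtraction. The sketch below assumes the sign convention e_n(c) = z_n(c) − s_n(c); the patent only states that e_n(c) indicates the difference, so the opposite sign would work equally well as long as it is used consistently in the update rule.

```python
def error_signals(post_distortion, teachers):
    """Element-wise complex difference of step ST16, assuming the
    convention e_n(c) = z_n(c) - s_n(c) for c = 0..C-1."""
    return [z - s for s, z in zip(post_distortion, teachers)]
```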
 The adaptive processing unit 64 then calculates a new parameter group for the ANN structure 41 by executing adaptive processing based on the back-propagation learning algorithm using the error signals e_n(0) to e_n(C−1) (step ST17). Next, the parameter setting unit 65 updates the parameter group of the ANN structure 41 by setting the new parameter group as the parameter group of the ANN structure 41 (step ST18). As the back-propagation learning algorithm here, a known algorithm such as the gradient descent algorithm, the Gauss-Newton algorithm, or the Levenberg-Marquardt algorithm may be used; the algorithm is not particularly limited.
 In step ST17, the adaptive processing unit 64 can calculate, for example in accordance with the following equation (1), the (n+1)-th weight coefficient group w_{n+1} having the new weight coefficients as its elements, based on the n-th weight coefficient group w_n having the current weight coefficients as its elements:

    w_{n+1} = w_n − G[μ, e_n]     (1)

 Here, the current weight coefficient group w_n and the new weight coefficient group w_{n+1} can be expressed as vectors or matrices, μ is a learning coefficient, e_n is a vector whose elements are the error signals e_n(0) to e_n(C−1), and G[·] is a function that determines the update amount according to the back-propagation learning algorithm.
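As one concrete instance of equation (1), G[μ, e_n] can be taken to be the plain gradient-descent update μ·∂E/∂w (the simplest of the algorithms listed above), with the gradient of the error measure with respect to each weight supplied by back-propagation. The function name and flat-list parameter layout are illustrative.

```python
def update_weights(w_n, grad, mu=0.01):
    """Equation (1) with G[mu, e_n] taken as the gradient-descent update
    mu * dE/dw: returns w_{n+1} = w_n - mu * grad, where `grad` holds
    the back-propagated gradient for each weight in w_n."""
    return [w - mu * g for w, g in zip(w_n, grad)]
```

A Gauss-Newton or Levenberg-Marquardt variant would replace the scalar step μ·g with a step derived from the Jacobian of the error vector e_n, but the outer structure of equation (1) is unchanged.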
 Thereafter, the adaptive processing unit 64 determines whether or not to end the iterative processing (step ST19). When it is determined that the iterative processing is not to be ended (NO in step ST19), the memory control unit 51 and the learning control unit 61 repeatedly execute steps ST14 to ST18. For example, the adaptive processing unit 64 may determine that the iterative processing is to be ended when the iteration count n reaches a predetermined number, or when a convergence condition indicating that the parameter group has sufficiently converged is satisfied (YES in step ST19). An example of the convergence condition is that an error evaluation value representing the magnitude of the error signals e_n(0) to e_n(C−1) remains at or below a predetermined value for a predetermined number of consecutive iterations. As the error evaluation value, for example, the sum over c = 0 to C−1 of the absolute value of the error signal e_n(c), or of the square of that absolute value, may be used.
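The convergence condition above can be sketched as follows, using the sum of squared error magnitudes as the error evaluation value; the helper names and the history-list bookkeeping are illustrative.

```python
def error_metric(errors):
    """Error evaluation value: sum over c of |e_n(c)|^2."""
    return sum(abs(e) ** 2 for e in errors)

def converged(history, threshold, runs):
    """Convergence test: the error evaluation value stayed at or below
    `threshold` for the last `runs` consecutive iterations."""
    return len(history) >= runs and all(h <= threshold for h in history[-runs:])
```

In the flowchart of FIG. 7, the adaptive processing unit 64 would append error_metric(e_n) to the history after each iteration and exit the ST14-ST18 loop once converged(...) is true or n reaches its limit.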
 When it is determined that the iterative processing is to be ended (YES in step ST19) and the learning of the ANN structure 41 is to be continued (NO in step ST20), the learning control unit 61 executes the learning processing from step ST11 onward again. When the learning of the ANN structure 41 is to be completed (YES in step ST20), the learning control unit 61 ends the learning processing.
 When the learning of the ANN structure 41 is completed, the learned ANN structure 41 can be used as a distortion compensation circuit. FIG. 8 is a block diagram showing the schematic configuration of a signal processing device 2 having the ANN structure 41 used as a distortion compensation circuit. The ANN structure 41 can compensate in advance for the signal distortion caused by the frequency characteristics and the nonlinear input/output characteristics of the high-frequency circuit 24, and can also compensate for the quadrature error (quadrature modulation error) generated in the quadrature modulator 21. Ideally, the in-phase component and the quadrature component of the output signal of the quadrature modulator 21 are orthogonal to each other, but due to factors such as temperature conditions, an error may arise between the actual state of the in-phase and quadrature components of the output signal and this ideal state. Such an error is a quadrature error.
 As described above, in the ANN learning system 1 of Embodiment 1, the ANN learning device 31 executes the adaptive processing based on the back-propagation learning algorithm using the error signals e_n(0) to e_n(C−1) indicating the differences between the randomly or pseudo-randomly selected selection signals y_n(0) to y_n(C−1) and the teacher signals z_n(0) to z_n(C−1). This makes it possible to avoid a decrease in the convergence of the parameter group of the ANN structure 41, or to prevent the parameter group from converging to a value deviating from the solution to which it should converge. The learning efficiency of the ANN structure 41 can thereby be improved.
 All or some of the functions of the ANN learning device 31 described above can be realized by one or more processors having semiconductor integrated circuits such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array). Alternatively, all or some of the functions of the ANN learning device 31 may be realized by one or more processors including an arithmetic device such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit) that executes software or firmware program code. All or some of the functions of the ANN learning device 31 can also be realized by one or more processors including a combination of a semiconductor integrated circuit such as a DSP, ASIC, or FPGA and an arithmetic device such as a CPU or GPU.
 FIG. 9 is a block diagram showing the schematic configuration of a data processing device 80, which is a hardware configuration example of the ANN learning device 31. The data processing device 80 shown in FIG. 9 includes a processor 81, an input/output interface 84, a memory 82, a storage device 83, and a signal path 85. The signal path 85 is a bus for interconnecting the processor 81, the input/output interface 84, the memory 82, and the storage device 83. The input/output interface 84 can output a signal transferred from the processor 81 to the signal processing circuit 11 (FIG. 1), and has a function of transferring a signal input from the signal processing circuit 11 to the processor 81.
 The memory 82 includes a work memory used when the processor 81 executes digital signal processing, and a temporary storage memory into which the data used in that digital signal processing is expanded. For example, the memory 82 may be composed of semiconductor memories such as a flash memory and an SDRAM. The memory 82 can be used as the signal memory 55 of FIG. 1. The storage device 83 can be used as a storage area for storing the software or firmware program code to be executed by an arithmetic device such as a CPU or GPU when the processor 81 includes such an arithmetic device. For example, the storage device 83 may be composed of a non-volatile semiconductor memory such as a flash memory or a ROM (Read Only Memory).
 In the example of FIG. 9, there is a single processor 81, but the number of processors is not limited to one. The hardware configuration of the ANN learning device 31 may be realized using a plurality of processors operating in cooperation with one another.
実施の形態2.
 次に、本発明に係る実施の形態2について説明する。図10は、本発明に係る実施の形態2の信号処理装置3の概略構成を示すブロック図である。図10に示される信号処理装置3は、入力信号に対してディジタル信号処理及びアナログ信号処理を実行する信号処理回路11と、信号処理回路11で発生する信号歪みをあらかじめ補償する歪み補償回路13とを備えて構成されている。図10に示した信号処理回路11の構成は、図1に示した信号処理回路11の構成と同じであるので、その詳細な説明を省略する。
Embodiment 2.
Next, a second embodiment according to the present invention will be described. FIG. 10 is a block diagram showing a schematic configuration of the signal processing device 3 according to the second embodiment of the present invention. The signal processing device 3 shown in FIG. 10 includes a signal processing circuit 11 that executes digital signal processing and analog signal processing on an input signal, and a distortion compensation circuit 13 that compensates in advance for the signal distortion generated in the signal processing circuit 11. The configuration of the signal processing circuit 11 shown in FIG. 10 is the same as that of the signal processing circuit 11 shown in FIG. 1, so its detailed description is omitted.
 本実施の形態の歪み補償回路13は、いわゆる間接学習機構(Indirect Learning Architecture)を備えた歪み補償回路である。図3に示されるように歪み補償回路13は、第1ANN構造42及び第2ANN構造43という一組のANN構造を有している。歪み補償回路13は、第1ANN構造42の入出力特性(周波数特性を含む。)と第2ANN構造43の入出力特性(周波数特性を含む。)とが互いに一致するように、第1ANN構造42の入出力特性を定めるパラメータ群と第2ANN構造43の入出力特性を定めるパラメータ群とを更新することにより、信号処理回路11の歪み特性に対応する逆特性を第1ANN構造42に学習させることができる。本実施の形態の第1ANN構造42と第2ANN構造43とは、同一のニューラルネットワークモデルで構成されている。ニューラルネットワークモデルの具体例としては、図4または図5に示したNNモデル41Aまたは41Bが挙げられるが、これに限定されるものではない。 The distortion compensation circuit 13 of the present embodiment is a distortion compensation circuit having a so-called indirect learning architecture. As shown in FIG. 3, the distortion compensation circuit 13 has a pair of ANN structures, namely a first ANN structure 42 and a second ANN structure 43. By updating the parameter group that defines the input/output characteristics (including frequency characteristics) of the first ANN structure 42 and the parameter group that defines the input/output characteristics (including frequency characteristics) of the second ANN structure 43 so that these input/output characteristics match each other, the distortion compensation circuit 13 can make the first ANN structure 42 learn the inverse characteristic corresponding to the distortion characteristic of the signal processing circuit 11. The first ANN structure 42 and the second ANN structure 43 of this embodiment are configured with the same neural network model. Specific examples of the neural network model include, but are not limited to, the NN model 41A or 41B shown in FIG. 4 or FIG. 5.
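As a rough illustration of this indirect-learning dataflow, the sketch below reduces each ANN structure to a single shared complex gain and replaces the signal processing circuit 11 with a toy memoryless nonlinearity; both stand-ins, and all numeric values, are illustrative assumptions and not the embodiment's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(z):
    # Toy stand-in for the signal processing circuit 11: a mild
    # memoryless nonlinearity applied to a complex baseband signal.
    return z * (1.0 - 0.1 * np.abs(z) ** 2)

# One shared parameter "group" models the identical first and second
# ANN structures (reduced here to a single complex gain).
w = np.complex128(1.0)

def ann(signal, w):
    # Forward propagation of either ANN structure (same model, same w).
    return w * signal

X = rng.normal(size=8) + 1j * rng.normal(size=8)  # input signals X(k)
Z = ann(X, w)                                     # predistorted Z(k) from the first structure
Y = plant(Z)                                      # responses Y(k) from the circuit
S = ann(Y, w)                                     # post-distorted s(k) from the second structure
E = Z - S                                         # error between Z(k) and s(k)
```

Driving the error toward zero makes the second structure approximate the inverse of the plant; because both structures share one parameter group, the first structure then predistorts the input with that same inverse.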
 歪み補償回路13は、図10に示されるように、第1ANN構造42及び第2ANN構造43の他に、信号受信部60、信号メモリ55、メモリ制御部52及び学習制御部62を備えている。メモリ制御部52は、信号メモリ55の信号読み出し動作及び信号書き込み動作を制御することができる。 As shown in FIG. 10, the distortion compensation circuit 13 includes a signal receiving unit 60, a signal memory 55, a memory control unit 52, and a learning control unit 62 in addition to the first ANN structure 42 and the second ANN structure 43. The memory control unit 52 can control the signal reading operation and the signal writing operation of the signal memory 55.
 第1ANN構造42には、外部信号源(図示せず)から、同相成分及び直交成分からなる複素ディジタル信号である入力信号(離散信号)X(k)が供給される。ここで、その入力信号X(k)の変数kは、離散時間を表す整数値である。第1ANN構造42は、時間的に連続する複数の入力信号X(0),X(1),…に順伝播処理を施し、その結果得られた複素ディジタル信号を前歪み信号Z(0),Z(1),…として出力する。前歪み信号Z(0),Z(1),…は、信号処理回路11と信号メモリ55とに転送される。メモリ制御部52は、第1ANN構造42から転送された前歪み信号Z(k)を信号メモリ55の記憶領域55aに記憶させる。 An input signal (discrete signal) X(k), which is a complex digital signal composed of an in-phase component and a quadrature component, is supplied to the first ANN structure 42 from an external signal source (not shown). Here, the variable k of the input signal X(k) is an integer value representing discrete time. The first ANN structure 42 performs forward propagation processing on a plurality of temporally continuous input signals X(0), X(1), ..., and outputs the resulting complex digital signals as predistorted signals Z(0), Z(1), .... The predistorted signals Z(0), Z(1), ... are transferred to the signal processing circuit 11 and the signal memory 55. The memory control unit 52 stores the predistorted signals Z(k) transferred from the first ANN structure 42 in the storage area 55a of the signal memory 55.
 信号処理回路11は、第1ANN構造42から転送された一連の前歪み信号Z(0),Z(1),Z(2),…にディジタル信号処理及びアナログ信号処理を施してフィードバック信号すなわち応答信号Y(0),Y(1),Y(2),…を生成し、当該応答信号Y(0),Y(1),Y(2),…を歪み補償回路13に転送する。なお、信号処理回路11と歪み補償回路13との間には、前歪み信号Z(k)及び応答信号Y(k)を転送するための配線またはケーブルなどの伝送線路(図示せず)が設けられている。 The signal processing circuit 11 performs digital signal processing and analog signal processing on the series of predistorted signals Z(0), Z(1), Z(2), ... transferred from the first ANN structure 42 to generate feedback signals, that is, response signals Y(0), Y(1), Y(2), ..., and transfers the response signals Y(0), Y(1), Y(2), ... to the distortion compensation circuit 13. A transmission line (not shown), such as wiring or a cable, for transferring the predistorted signals Z(k) and the response signals Y(k) is provided between the signal processing circuit 11 and the distortion compensation circuit 13.
 信号処理回路11から転送された応答信号Y(k)を信号受信部60が受信すると、メモリ制御部52は、当該応答信号Y(k)を信号メモリ55の記憶領域55bに記憶させる。後述するように、メモリ制御部52は、記憶領域55bに記憶された応答信号Y(0),Y(1),…の中からランダムまたは擬似ランダムに選択された選択信号y_n(c)を読み出し、当該選択信号y_n(c)を第2ANN構造43に供給することができる(c=0,1,…)。第2ANN構造43は、記憶領域55bから入力された選択信号y_n(c)に対して順伝播処理を実行し、その結果得られた複素ディジタル信号を後歪み信号s_n(c)として出力する。 When the signal receiving unit 60 receives a response signal Y(k) transferred from the signal processing circuit 11, the memory control unit 52 stores the response signal Y(k) in the storage area 55b of the signal memory 55. As will be described later, the memory control unit 52 can read selection signals y_n(c) randomly or pseudo-randomly selected from the response signals Y(0), Y(1), ... stored in the storage area 55b, and supply the selection signals y_n(c) to the second ANN structure 43 (c = 0, 1, ...). The second ANN structure 43 performs forward propagation processing on the selection signals y_n(c) input from the storage area 55b, and outputs the resulting complex digital signals as post-distortion signals s_n(c).
 図11は、処理時間t,前歪み信号Z(k)及び応答信号Y(k)の間の対応関係を表す図である。ここで、処理時間tは、t=T_0+k×ΔT、との式で表される。T_0は、基準時間、ΔTは、処理時間間隔である。図11の例では、kは、0~K-1の範囲内の整数であり、Kは、信号メモリ55の記憶領域55bに記憶されている応答信号Y(0)~Y(K-1)の数である。図11の対応表に示されるように、前歪み信号Z(k)と応答信号Y(k)とは一対一で対応している。 FIG. 11 is a diagram showing the correspondence between the processing time t, the predistorted signals Z(k), and the response signals Y(k). Here, the processing time t is expressed by the equation t = T_0 + k × ΔT, where T_0 is a reference time and ΔT is a processing time interval. In the example of FIG. 11, k is an integer in the range of 0 to K−1, and K is the number of response signals Y(0) to Y(K−1) stored in the storage area 55b of the signal memory 55. As shown in the correspondence table of FIG. 11, the predistorted signals Z(k) and the response signals Y(k) correspond one-to-one.
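The index-to-time correspondence of FIG. 11 can be reproduced in a few lines; the reference time T0 = 0.0 and the interval ΔT = 0.5 are arbitrary placeholder values.

```python
T0 = 0.0   # reference time (placeholder value)
dT = 0.5   # processing time interval (placeholder value)
K = 4      # number of stored response signals Y(0)..Y(K-1)

# Each row pairs the processing time t = T0 + k*dT with the
# one-to-one correspondence Z(k) <-> Y(k) of FIG. 11.
rows = [(T0 + k * dT, f"Z({k})", f"Y({k})") for k in range(K)]
for t, z_label, y_label in rows:
    print(f"t={t}: {z_label} <-> {y_label}")
```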
 図10に示される学習制御部62は、逆伝播学習アルゴリズムに基づく適応処理を実行することにより、第1ANN構造42のパラメータ群と第2ANN構造43のパラメータ群とを更新する機能を有する。学習制御部62は、減算器63、適応処理部64及びパラメータ設定部66を含んで構成されている。学習制御部62の構成は、図1のパラメータ設定部65に代えて図10のパラメータ設定部66を有する点を除いて、実施の形態1の学習制御部61の構成と同じである。 The learning control unit 62 shown in FIG. 10 has a function of updating the parameter group of the first ANN structure 42 and the parameter group of the second ANN structure 43 by executing the adaptive processing based on the back propagation learning algorithm. The learning control unit 62 includes a subtractor 63, an adaptive processing unit 64, and a parameter setting unit 66. The configuration of the learning control unit 62 is the same as the configuration of the learning control unit 61 of the first embodiment, except that the parameter setting unit 65 of FIG. 1 is replaced by the parameter setting unit 66 of FIG. 10.
 適応処理が実行される際には、メモリ制御部52は、信号メモリ55の記憶領域55bに記憶された一群の応答信号Y(0)~Y(K-1)の中から、C個の応答信号をランダムまたは擬似ランダムに選択し、当該選択されたC個の応答信号を選択信号y_n(0)~y_n(C-1)として信号メモリ55から第2ANN構造43に供給する。ここで、Cは、応答信号Y(0)~Y(K-1)の総数Kよりも小さい正整数であり、下付き添え字nは、適応処理の反復回数を示す正整数である。第2ANN構造43は、信号メモリ55から入力された選択信号y_n(c)に対して順伝播処理を実行し、その結果得られた複素ディジタル信号を後歪み信号s_n(c)として出力する。また、メモリ制御部52は、信号メモリ55の記憶領域55aに記憶された一群の前歪み信号の中から、当該選択信号y_n(0)~y_n(C-1)にそれぞれ対応するC個の前歪み信号を選択し、当該選択されたC個の前歪み信号を参照信号z_n(0)~z_n(C-1)として信号メモリ55から学習制御部62に出力させる。 When the adaptive processing is executed, the memory control unit 52 randomly or pseudo-randomly selects C response signals from the group of response signals Y(0) to Y(K−1) stored in the storage area 55b of the signal memory 55, and supplies the selected C response signals from the signal memory 55 to the second ANN structure 43 as selection signals y_n(0) to y_n(C−1). Here, C is a positive integer smaller than the total number K of the response signals Y(0) to Y(K−1), and the subscript n is a positive integer indicating the iteration count of the adaptive processing. The second ANN structure 43 performs forward propagation processing on the selection signals y_n(c) input from the signal memory 55, and outputs the resulting complex digital signals as post-distortion signals s_n(c). In addition, the memory control unit 52 selects, from the group of predistorted signals stored in the storage area 55a of the signal memory 55, C predistorted signals respectively corresponding to the selection signals y_n(0) to y_n(C−1), and causes the signal memory 55 to output the selected C predistorted signals to the learning control unit 62 as reference signals z_n(0) to z_n(C−1).
 図12は、処理時間τ,応答信号Y(κ),選択信号y_n(c),前歪み信号Z(κ),及び参照信号z_n(c)の間の対応関係の一例を示す図である。ここで、処理時間τは、τ=T_0+κ×ΔT、との式で表され、κは、ランダムまたは擬似ランダムな整数(乱数)である。選択信号y_n(c)は、ランダムまたは擬似ランダムに選択された応答信号Y(κ)と一致し(y_n(c)=Y(κ))、参照信号z_n(c)は、応答信号Y(κ)に対応する前歪み信号Z(κ)と一致する(z_n(c)=Z(κ))。 FIG. 12 is a diagram showing an example of the correspondence among the processing time τ, the response signal Y(κ), the selection signal y_n(c), the predistorted signal Z(κ), and the reference signal z_n(c). Here, the processing time τ is expressed by the equation τ = T_0 + κ × ΔT, where κ is a random or pseudo-random integer (random number). The selection signal y_n(c) matches the randomly or pseudo-randomly selected response signal Y(κ) (y_n(c) = Y(κ)), and the reference signal z_n(c) matches the predistorted signal Z(κ) corresponding to Y(κ) (z_n(c) = Z(κ)).
 メモリ制御部52は、実施の形態1のメモリ制御部51と同様に、物理乱数列または擬似乱数列を用いて読み出しアドレスを連続的に生成し、当該読み出しアドレスを信号メモリ55に供給することで、選択信号y_n(c)と参照信号z_n(c)との対をランダムまたは擬似ランダムに選択することができる。あるいは、メモリ制御部52は、信号メモリ55の記憶領域55bに一旦格納された応答信号Y(0)~Y(K-1)の配列を、物理乱数列または擬似乱数列を用いてランダムまたは擬似ランダムな順番に並べ替えてもよい。これにより、メモリ制御部52は、信号メモリ55からアドレス順にC個の応答信号を読み出すことで選択信号y_n(0)~y_n(C-1)を得ることができる。 Like the memory control unit 51 of the first embodiment, the memory control unit 52 can continuously generate read addresses using a physical random number sequence or a pseudo-random number sequence and supply the read addresses to the signal memory 55, thereby randomly or pseudo-randomly selecting pairs of a selection signal y_n(c) and a reference signal z_n(c). Alternatively, the memory control unit 52 may rearrange the array of response signals Y(0) to Y(K−1) once stored in the storage area 55b of the signal memory 55 into a random or pseudo-random order using a physical random number sequence or a pseudo-random number sequence. The memory control unit 52 can then obtain the selection signals y_n(0) to y_n(C−1) by reading C response signals from the signal memory 55 in address order.
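Both address-selection methods described for the memory control unit 52 — drawing pseudo-random read addresses, and shuffling the stored array once and then reading in address order — can be sketched as follows. The NumPy generator and the stand-in signal values are assumptions for illustration; the important property is that one index stream addresses both storage areas, so each selection signal y_n(c) stays paired with its reference signal z_n(c).

```python
import numpy as np

K, C = 16, 4                      # K stored signals, minibatch size C < K
rng = np.random.default_rng(42)   # pseudo-random number source

# Stand-in memory contents: storage area 55a holds Z(k), 55b holds Y(k);
# here Z(k) = 2*Y(k) so the pairing is easy to verify.
Y = np.arange(K, dtype=np.complex128) + 1j
Z = 2.0 * Y

# Method 1: generate C pseudo-random read addresses (no repeats) and
# read the pair y_n(c) = Y(kappa), z_n(c) = Z(kappa) with one stream.
addr = rng.choice(K, size=C, replace=False)
y_sel, z_ref = Y[addr], Z[addr]

# Method 2: rearrange the stored order once with a pseudo-random
# permutation, then read the first C entries in address order.
perm = rng.permutation(K)
y_sel2, z_ref2 = Y[perm][:C], Z[perm][:C]
```

Either way, the selected response signals and their reference signals are read with the same indices, which is what keeps the pairs aligned in FIG. 12.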
 学習制御部62は、ランダムまたは擬似ランダムに選択された選択信号y_n(0)~y_n(C-1)と参照信号z_n(0)~z_n(C-1)とを用いて、逆伝播学習アルゴリズムに基づく適応処理を実行することにより、第1ANN構造42のパラメータ群及び第2ANN構造43のパラメータ群を更新することができる。これにより、学習制御部62は、相関性の高い離散信号からなる時系列データを用いた適応処理の実行を回避することができるので、第1ANN構造42及び第2ANN構造43のパラメータ群の収束性を向上させることができる。したがって、第1ANN構造42及び第2ANN構造43の学習効率の向上が可能となる。 The learning control unit 62 can update the parameter group of the first ANN structure 42 and the parameter group of the second ANN structure 43 by executing adaptive processing based on the back-propagation learning algorithm using the randomly or pseudo-randomly selected selection signals y_n(0) to y_n(C−1) and the reference signals z_n(0) to z_n(C−1). This allows the learning control unit 62 to avoid executing the adaptive processing on time-series data composed of highly correlated discrete signals, which improves the convergence of the parameter groups of the first ANN structure 42 and the second ANN structure 43. The learning efficiency of the first ANN structure 42 and the second ANN structure 43 can therefore be improved.
 次に、図13を参照しつつ、実施の形態2の歪み補償回路13における学習処理の動作について以下に説明する。図13は、歪み補償回路13により実行される学習処理の手順の例を概略的に示すフローチャートである。第1ANN構造42及び第2ANN構造43が初めて学習しようとする際には、第1ANN構造42及び第2ANN構造43のパラメータ群は、乱数値に初期化される。 Next, the operation of the learning process in the distortion compensation circuit 13 of the second embodiment will be described below with reference to FIG. FIG. 13 is a flowchart schematically showing an example of the procedure of the learning process executed by the distortion compensation circuit 13. When the first ANN structure 42 and the second ANN structure 43 try to learn for the first time, the parameter groups of the first ANN structure 42 and the second ANN structure 43 are initialized to random values.
 学習処理は、第1ANN構造42が入力信号X(k)に順伝播処理を施して前歪み信号Z(k)を生成するときに開始される(k=0,1,2,…)。当該前歪み信号Z(k)は、信号処理回路11と信号メモリ55とに転送される。歪み補償回路13のメモリ制御部52は、第1ANN構造42により生成された前歪み信号Z(k)を信号メモリ55の記憶領域55aに記憶させる(ステップST21)。信号処理回路11から出力された応答信号Y(k)を信号受信部60が受信すると、メモリ制御部52は、当該応答信号Y(k)を信号メモリ55の記憶領域55bに記憶させる(ステップST22)。 The learning process is started when the first ANN structure 42 performs forward propagation processing on the input signals X(k) to generate the predistorted signals Z(k) (k = 0, 1, 2, ...). The predistorted signals Z(k) are transferred to the signal processing circuit 11 and the signal memory 55. The memory control unit 52 of the distortion compensation circuit 13 stores the predistorted signals Z(k) generated by the first ANN structure 42 in the storage area 55a of the signal memory 55 (step ST21). When the signal receiving unit 60 receives the response signals Y(k) output from the signal processing circuit 11, the memory control unit 52 stores the response signals Y(k) in the storage area 55b of the signal memory 55 (step ST22).
 次いで、適応処理を開始するか否かが判定される(ステップST23)。たとえば、メモリ制御部52は、信号メモリ55の記憶領域55bに記憶された応答信号Y(k)の個数を計数し、当該個数が所定数に到達したときに、適応処理を開始すると判定することができる(ステップST23のYES)。適応処理を開始しないとの判定がなされたときは(ステップST23のNO)、ステップST21~ST22が繰り返し実行される。これにより、一連の前歪み信号Z(0),Z(1),Z(2),…が信号メモリ55の記憶領域55aに格納され、一連の前歪み信号Z(0),Z(1),Z(2),…に対応する一連の応答信号Y(0),Y(1),Y(2),…が信号メモリ55の記憶領域55bに格納される。 Next, it is determined whether to start the adaptive processing (step ST23). For example, the memory control unit 52 can count the number of response signals Y(k) stored in the storage area 55b of the signal memory 55 and determine to start the adaptive processing when that number reaches a predetermined number (YES in step ST23). When it is determined that the adaptive processing is not to be started (NO in step ST23), steps ST21 to ST22 are repeatedly executed. As a result, a series of predistorted signals Z(0), Z(1), Z(2), ... is stored in the storage area 55a of the signal memory 55, and a series of response signals Y(0), Y(1), Y(2), ... corresponding to the predistorted signals Z(0), Z(1), Z(2), ... is stored in the storage area 55b of the signal memory 55.
 適応処理を開始するとの判定がなされたとき(ステップST23のYES)、メモリ制御部52及び学習制御部62は、適応処理を開始する。初回の適応処理が実行される場合には、反復回数nの値は1に初期化される。 When it is determined that the adaptive processing is started (YES in step ST23), the memory control unit 52 and the learning control unit 62 start the adaptive processing. When the first adaptation process is executed, the value of the iteration count n is initialized to 1.
 具体的には、メモリ制御部52は、信号メモリ55の記憶領域55bから、ランダムまたは擬似ランダムに選択された選択信号y_n(0)~y_n(C-1)を読み出し、当該選択信号y_n(0)~y_n(C-1)を第2ANN構造43に供給する(ステップST24)。これにより、第2ANN構造43から後歪み信号s_n(0)~s_n(C-1)が出力される。また、メモリ制御部52は、信号メモリ55の記憶領域55aから、選択信号y_n(0)~y_n(C-1)に対応する参照信号z_n(0)~z_n(C-1)を読み出し、当該参照信号z_n(0)~z_n(C-1)を学習制御部62に供給する(ステップST25)。学習制御部62における減算器63は、当該後歪み信号s_n(0)~s_n(C-1)と当該参照信号z_n(0)~z_n(C-1)との間の差分を示す誤差信号e_n(0)~e_n(C-1)を算出する(ステップST26)。なお、図13の例では、説明の便宜上、ステップST24,ST25,ST26がこの順番で実行されているが、これに限定されるものではない。ステップST24,ST25,ST26が互いに並列に実行されてもよい。 Specifically, the memory control unit 52 reads the randomly or pseudo-randomly selected selection signals y_n(0) to y_n(C−1) from the storage area 55b of the signal memory 55 and supplies the selection signals y_n(0) to y_n(C−1) to the second ANN structure 43 (step ST24). As a result, the second ANN structure 43 outputs post-distortion signals s_n(0) to s_n(C−1). The memory control unit 52 also reads the reference signals z_n(0) to z_n(C−1) corresponding to the selection signals y_n(0) to y_n(C−1) from the storage area 55a of the signal memory 55 and supplies the reference signals z_n(0) to z_n(C−1) to the learning control unit 62 (step ST25). The subtractor 63 in the learning control unit 62 calculates error signals e_n(0) to e_n(C−1) indicating the differences between the post-distortion signals s_n(0) to s_n(C−1) and the reference signals z_n(0) to z_n(C−1) (step ST26). In the example of FIG. 13, steps ST24, ST25, and ST26 are executed in this order for convenience of description, but the order is not limited to this; steps ST24, ST25, and ST26 may be executed in parallel with one another.
 そして、適応処理部64は、実施の形態1のステップST17の場合と同様に、誤差信号e_n(0)~e_n(C-1)を用いて逆伝播学習アルゴリズムに基づく適応処理を実行することにより第2ANN構造43の新たなパラメータ群を算出する(ステップST27)。次に、パラメータ設定部66は、第1ANN構造42及び第2ANN構造43のパラメータ群として当該新たなパラメータ群を設定することにより、第1ANN構造42及び第2ANN構造43のパラメータ群を更新する(ステップST28)。 Then, as in step ST17 of the first embodiment, the adaptive processing unit 64 calculates a new parameter group for the second ANN structure 43 by executing adaptive processing based on the back-propagation learning algorithm using the error signals e_n(0) to e_n(C−1) (step ST27). Next, the parameter setting unit 66 updates the parameter groups of the first ANN structure 42 and the second ANN structure 43 by setting the new parameter group as the parameter group of each of the first ANN structure 42 and the second ANN structure 43 (step ST28).
 その後、適応処理部64は、反復処理を終了するか否かを判定する(ステップST29)。反復処理を終了しないと判定されたとき(ステップST29のNO)、メモリ制御部52及び学習制御部62は、ステップST24~ST28を繰り返し実行する。たとえば、反復回数nがあらかじめ定められた回数に到達した場合、あるいは、パラメータ群が十分に収束したことを示す収束条件が満たされる場合に、適応処理部64は、反復処理を終了すると判定すればよい(ステップST29のYES)。なお、収束条件としては、たとえば、誤差信号e_n(0)~e_n(C-1)の大きさを表す誤差評価値が所定回数だけ連続して所定値以下となるという条件が挙げられる。たとえば、誤差評価値としては、誤差信号e_n(c)の絶対値または当該絶対値の2乗をc=0~C-1について加算した値を使用すればよい。 After that, the adaptive processing unit 64 determines whether to end the iterative processing (step ST29). When it is determined that the iterative processing is not to be ended (NO in step ST29), the memory control unit 52 and the learning control unit 62 repeatedly execute steps ST24 to ST28. For example, the adaptive processing unit 64 may determine to end the iterative processing when the iteration count n reaches a predetermined number, or when a convergence condition indicating that the parameter groups have sufficiently converged is satisfied (YES in step ST29). An example of the convergence condition is that an error evaluation value representing the magnitudes of the error signals e_n(0) to e_n(C−1) remains at or below a predetermined value for a predetermined number of consecutive iterations. As the error evaluation value, for example, the sum over c = 0 to C−1 of the absolute value of the error signal e_n(c), or of the square of that absolute value, may be used.
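Steps ST24 to ST29 — random minibatch selection, forward propagation, error computation, parameter update, and the convergence test on the summed squared error — can be sketched end to end as follows. The single complex weight, the LMS-style update, the learning rate, and the thresholds are illustrative assumptions; the embodiment's ANN structures and back-propagation update are far richer.

```python
import numpy as np

rng = np.random.default_rng(1)
K, C = 256, 32          # stored signal pairs, minibatch size
mu = 0.5                # learning rate (assumed value)
eps, patience = 1e-6, 3 # convergence threshold and consecutive-hit count

# Stand-in memory contents: predistorted signals Z(k) and the responses
# Y(k) of a toy linear "signal processing circuit" with complex gain g.
Z = rng.normal(size=K) + 1j * rng.normal(size=K)
g = 0.8 * np.exp(0.3j)
Y = g * Z

w = np.complex128(1.0)  # parameter "group" of the second ANN (one weight)
hits = 0
for n in range(1, 1001):
    addr = rng.choice(K, size=C, replace=False)  # ST24/ST25: pick y_n, z_n
    y_sel, z_ref = Y[addr], Z[addr]
    s = w * y_sel                                # forward propagation
    e = z_ref - s                                # ST26: error signals e_n(c)
    w += mu * np.mean(e * np.conj(y_sel))        # ST27/ST28: gradient update
    # ST29: end once the error evaluation value sum_c |e_n(c)|^2 stays
    # at or below eps for `patience` consecutive iterations.
    hits = hits + 1 if np.sum(np.abs(e) ** 2) <= eps else 0
    if hits >= patience:
        break
```

At convergence the weight approximates 1/g, i.e., the inverse characteristic of the toy circuit; in the embodiment the same updated parameter group is then set into both the first and second ANN structures.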
 反復処理を終了するとの判定がなされた場合(ステップST29のYES)、第1ANN構造42及び第2ANN構造43の学習を続行する場合には(ステップST30のNO)、学習制御部62は、ステップST21以後の学習処理を再度実行する。第1ANN構造42及び第2ANN構造43の学習を完了させる場合には(ステップST30のYES)、学習制御部62は、学習処理を終了する。 When it is determined that the iterative processing is to be ended (YES in step ST29) and the learning of the first ANN structure 42 and the second ANN structure 43 is to be continued (NO in step ST30), the learning control unit 62 executes the learning process from step ST21 onward again. When the learning of the first ANN structure 42 and the second ANN structure 43 is to be completed (YES in step ST30), the learning control unit 62 ends the learning process.
 以上に説明したように実施の形態2の信号処理装置3では、歪み補償回路13は、ランダムまたは擬似ランダムに選択された選択信号y_n(0)~y_n(C-1)と参照信号z_n(0)~z_n(C-1)との間の差分を示す誤差信号e_n(0)~e_n(C-1)を用いて逆伝播学習アルゴリズムに基づく適応処理を実行するので、第1ANN構造42及び第2ANN構造43の学習効率の向上が可能となる。 As described above, in the signal processing device 3 of the second embodiment, the distortion compensation circuit 13 executes the adaptive processing based on the back-propagation learning algorithm using the error signals e_n(0) to e_n(C−1) indicating the differences between the randomly or pseudo-randomly selected selection signals y_n(0) to y_n(C−1) and the reference signals z_n(0) to z_n(C−1), so that the learning efficiency of the first ANN structure 42 and the second ANN structure 43 can be improved.
 なお、上記した歪み補償回路13の機能の全部または一部は、たとえば、DSP,ASICまたはFPGAなどの半導体集積回路を有する単数または複数のプロセッサにより実現可能である。あるいは、歪み補償回路13の機能の全部または一部は、ソフトウェアまたはファームウェアのプログラムコードを実行する、CPUまたはGPUなどの演算装置を含む単数または複数のプロセッサで実現されてもよい。あるいは、DSP,ASICまたはFPGAなどの半導体集積回路と、CPUまたはGPUなどの演算装置との組み合わせを含む単数または複数のプロセッサによって歪み補償回路13の機能の全部または一部を実現することも可能である。図9に示したデータ処理装置80によって歪み補償回路13のハードウェア構成が実現されてもよい。 All or some of the functions of the distortion compensation circuit 13 described above can be realized by one or more processors having a semiconductor integrated circuit such as a DSP, an ASIC, or an FPGA. Alternatively, all or some of the functions of the distortion compensation circuit 13 may be realized by one or more processors including an arithmetic device, such as a CPU or a GPU, that executes software or firmware program code. It is also possible to realize all or some of the functions of the distortion compensation circuit 13 by one or more processors including a combination of a semiconductor integrated circuit such as a DSP, an ASIC, or an FPGA and an arithmetic device such as a CPU or a GPU. The hardware configuration of the distortion compensation circuit 13 may be realized by the data processing device 80 shown in FIG. 9.
 以上、図面を参照して本発明に係る種々の実施の形態1,2について述べたが、実施の形態1,2は本発明の例示であり、実施の形態1,2以外の様々な実施の形態がありうる。本発明の範囲内において、上記実施の形態1,2の自由な組み合わせ、各実施の形態の任意の構成要素の変形、または各実施の形態の任意の構成要素の省略が可能である。 Although various Embodiments 1 and 2 according to the present invention have been described above with reference to the drawings, Embodiments 1 and 2 are examples of the present invention, and various embodiments other than Embodiments 1 and 2 are possible. Within the scope of the present invention, Embodiments 1 and 2 may be freely combined, any constituent element of each embodiment may be modified, and any constituent element of each embodiment may be omitted.
 たとえば、ANN構造41,第1ANN構造42及び第2ANN構造43の具体例は、図4及び図6に示したNNモデル41A,41Bに限定されるものではない。たとえば、NNモデル41A,41Bに代えて、フィードバック結合を有する公知の再帰型ニューラルネットワークモデル(Recurrent Neural Network Model,RNN Model)が使用されてもよい。 For example, specific examples of the ANN structure 41, the first ANN structure 42, and the second ANN structure 43 are not limited to the NN models 41A and 41B shown in FIGS. 4 and 6. For example, in place of the NN models 41A and 41B, a known recurrent neural network model having feedback coupling (Recurrent Neural Network Model, RNN Model) may be used.
 本発明に係る人工ニューラルネットワーク(ANN)学習装置、歪み補償回路及び信号処理装置は、たとえば、移動体通信システム、ディジタル放送システム及び衛星通信システムといった無線通信システムにおける無線送信機または無線受信機の高周波回路(たとえば、電力増幅器)に用いることが可能である。 The artificial neural network (ANN) learning device, distortion compensation circuit, and signal processing device according to the present invention can be used, for example, in a high-frequency circuit (for example, a power amplifier) of a wireless transmitter or a wireless receiver in wireless communication systems such as mobile communication systems, digital broadcasting systems, and satellite communication systems.
 1 人工ニューラルネットワーク(ANN)学習システム、2,3 信号処理装置、11 信号処理回路、13 歪み補償回路、21 直交変調器(QM)、22 D/A変換器(DAC)、23 アップコンバータ(UPC)、24 高周波回路、25 方向性結合器、26 ダウンコンバータ(DNC)、27 A/D変換器(ADC)、28 直交復調器(QD)、31 人工ニューラルネットワーク(ANN)学習装置、40 信号供給部、41 人工ニューラルネットワーク(ANN)構造、41A,41B ニューラルネットワークモデル(NNモデル)、42 第1ANN構造、43 第2ANN構造、51,52 メモリ制御部、55 信号メモリ、55a,55b 記憶領域、60 信号受信部、61,62 学習制御部、63 減算器、64 適応処理部、65,66 パラメータ設定部、80 データ処理装置、81 プロセッサ、82 メモリ、83 記憶装置、84 入出力インタフェース、85 信号路。 1 artificial neural network (ANN) learning system, 2, 3 signal processing device, 11 signal processing circuit, 13 distortion compensation circuit, 21 quadrature modulator (QM), 22 D/A converter (DAC), 23 up-converter (UPC), 24 high frequency circuit, 25 directional coupler, 26 down-converter (DNC), 27 A/D converter (ADC), 28 quadrature demodulator (QD), 31 artificial neural network (ANN) learning device, 40 signal supply unit, 41 artificial neural network (ANN) structure, 41A, 41B neural network model (NN model), 42 first ANN structure, 43 second ANN structure, 51, 52 memory control unit, 55 signal memory, 55a, 55b storage area, 60 signal receiving unit, 61, 62 learning control unit, 63 subtractor, 64 adaptive processing unit, 65, 66 parameter setting unit, 80 data processing device, 81 processor, 82 memory, 83 storage device, 84 input/output interface, 85 signal path.

Claims (12)

  1.  時間的に連続する複数のトレーニング信号を信号処理回路に供給して当該信号処理回路から複数の応答信号を出力させる信号供給部と、
     前記複数の応答信号を記憶する信号メモリと、
     前記信号メモリに記憶された当該複数の応答信号の中からランダムまたは擬似ランダムに選択された複数の選択信号を読み出し、当該複数の選択信号を人工ニューラルネットワーク構造に供給して当該人工ニューラルネットワーク構造から複数の後歪み信号を出力させるメモリ制御部と、
     当該複数の後歪み信号と当該複数の選択信号に対応する複数の教師信号との間の差分を示す複数の誤差信号を算出し、当該複数の誤差信号を用いて逆伝播学習アルゴリズムに基づく適応処理を実行することにより、前記人工ニューラルネットワーク構造の入出力特性を定めるパラメータ群を更新する学習制御部と
    を備えることを特徴とする人工ニューラルネットワーク学習装置。
    A signal supply unit that supplies a plurality of temporally continuous training signals to a signal processing circuit and outputs a plurality of response signals from the signal processing circuit,
    A signal memory for storing the plurality of response signals,
    a memory control unit that reads, from the signal memory, a plurality of selection signals randomly or pseudo-randomly selected from the plurality of response signals stored in the signal memory, and supplies the plurality of selection signals to an artificial neural network structure to cause the artificial neural network structure to output a plurality of post-distortion signals,
    and a learning control unit that calculates a plurality of error signals indicating differences between the plurality of post-distortion signals and a plurality of teacher signals corresponding to the plurality of selection signals, and updates, by executing adaptive processing based on a back-propagation learning algorithm using the plurality of error signals, the parameter group that determines the input/output characteristics of the artificial neural network structure.
  2.  請求項1に記載の人工ニューラルネットワーク学習装置であって、前記学習制御部は、前記複数のトレーニング信号の中から選択された複数の信号を前記複数の教師信号として使用することを特徴とする人工ニューラルネットワーク学習装置。 The artificial neural network learning device according to claim 1, wherein the learning control unit uses a plurality of signals selected from the plurality of training signals as the plurality of teacher signals. Neural network learning device.
  3.  請求項1または請求項2に記載の人工ニューラルネットワーク学習装置であって、前記学習制御部は、前記逆伝播学習アルゴリズムに基づいて前記複数の誤差信号の大きさが小さくなるように前記パラメータ群を更新することを特徴とする人工ニューラルネットワーク学習装置。 The artificial neural network learning device according to claim 1 or claim 2, wherein the learning control unit updates the parameter group based on the back-propagation learning algorithm so that the magnitudes of the plurality of error signals become small.
  4.  請求項3に記載の人工ニューラルネットワーク学習装置であって、前記メモリ制御部及び前記学習制御部は、前記複数の選択信号を前記信号メモリから読み出す処理と、当該複数の選択信号を前記人工ニューラルネットワーク構造に供給する処理と、前記複数の誤差信号を算出する処理と、前記適応処理とを反復して実行することにより前記パラメータ群を収束させることを特徴とする人工ニューラルネットワーク学習装置。 The artificial neural network learning device according to claim 3, wherein the memory control unit and the learning control unit converge the parameter group by repeatedly executing a process of reading the plurality of selection signals from the signal memory, a process of supplying the plurality of selection signals to the artificial neural network structure, a process of calculating the plurality of error signals, and the adaptive processing.
  5.  請求項1から請求項4のうちのいずれか1項に記載の人工ニューラルネットワーク学習装置であって、
     前記人工ニューラルネットワーク構造は、前記選択信号が入力される入力層と、少なくとも1層からなる中間層と、前記後歪み信号を出力する出力層とからなる複数の層を含み、
     前記学習制御部は、前記複数の層の間を伝播する信号に重み付けされる重み係数を前記パラメータ群として更新する、
    ことを特徴とする人工ニューラルネットワーク学習装置。
    The artificial neural network learning device according to any one of claims 1 to 4,
    The artificial neural network structure includes a plurality of layers including an input layer to which the selection signal is input, an intermediate layer including at least one layer, and an output layer that outputs the post-distortion signal,
    The learning control unit updates a weighting factor weighted for a signal propagating between the plurality of layers as the parameter group,
    An artificial neural network learning device characterized by the above.
  6.  時間的に連続する複数の離散信号を入力として複数の前歪み信号を生成し、当該生成された複数の前歪み信号を信号処理回路に出力する第1人工ニューラルネットワーク構造と、
     前記複数の前歪み信号を記憶するとともに、前記信号処理回路が前記複数の前歪み信号の入力に応じて出力した複数の応答信号を記憶する信号メモリと、
     第2人工ニューラルネットワーク構造と、
     前記複数の応答信号の中からランダムまたは擬似ランダムに選択された複数の選択信号を前記信号メモリから読み出し、当該複数の選択信号を前記第2人工ニューラルネットワーク構造に供給して当該第2人工ニューラルネットワーク構造から複数の後歪み信号を出力させるとともに、前記複数の前歪み信号の中から前記複数の選択信号に対応する信号として選択された複数の参照信号を前記信号メモリから読み出すメモリ制御部と、
     前記信号メモリから読み出された当該複数の選択信号と当該複数の参照信号との間の差分を示す複数の誤差信号を算出し、当該複数の誤差信号を用いて逆伝播学習アルゴリズムに基づく適応処理を実行することにより、前記第1人工ニューラルネットワーク構造及び第2人工ニューラルネットワーク構造のそれぞれの入出力特性を定めるパラメータ群を更新する学習制御部と
    を備えることを特徴とする歪み補償回路。
    A first artificial neural network structure for generating a plurality of pre-distortion signals by inputting a plurality of temporally continuous discrete signals and outputting the generated plurality of pre-distortion signals to a signal processing circuit;
    A signal memory that stores the plurality of pre-distortion signals and that stores the plurality of response signals output by the signal processing circuit in response to the input of the plurality of pre-distortion signals,
    A second artificial neural network structure,
    a memory control unit that reads, from the signal memory, a plurality of selection signals randomly or pseudo-randomly selected from the plurality of response signals, supplies the plurality of selection signals to the second artificial neural network structure to cause the second artificial neural network structure to output a plurality of post-distortion signals, and reads, from the signal memory, a plurality of reference signals selected from the plurality of pre-distortion signals as signals corresponding to the plurality of selection signals,
    and a learning control unit that calculates a plurality of error signals indicating differences between the plurality of selection signals and the plurality of reference signals read from the signal memory, and updates, by executing adaptive processing based on a back-propagation learning algorithm using the plurality of error signals, the parameter groups that define the respective input/output characteristics of the first artificial neural network structure and the second artificial neural network structure.
  7.  請求項6に記載の歪み補償回路であって、前記学習制御部は、前記第1人工ニューラルネットワーク構造及び前記第2人工ニューラルネットワーク構造のそれぞれの当該入出力特性が互いに一致するように前記パラメータ群を更新することを特徴とする歪み補償回路。 The distortion compensation circuit according to claim 6, wherein the learning control unit updates the parameter groups so that the respective input/output characteristics of the first artificial neural network structure and the second artificial neural network structure match each other.
  8.  請求項6または請求項7に記載の歪み補償回路であって、前記学習制御部は、前記逆伝播学習アルゴリズムに基づいて前記複数の誤差信号の大きさが小さくなるように前記パラメータ群を更新することを特徴とする歪み補償回路。 The distortion compensation circuit according to claim 6 or 7, wherein the learning control unit updates the parameter group based on the back propagation learning algorithm so that the magnitudes of the plurality of error signals become small. A distortion compensation circuit characterized by the above.
  9.  The distortion compensation circuit according to claim 8, wherein the memory control unit and the learning control unit cause the parameter group to converge by repeatedly executing a process of reading the plurality of selection signals from the signal memory, a process of supplying the plurality of selection signals to the second artificial neural network structure, a process of reading the plurality of reference signals from the signal memory, a process of calculating the plurality of error signals, and the adaptive process.
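As an illustration only (not the claimed implementation), the loop described in claims 6 to 9 — pseudo-random selection from the signal memory, a forward pass through the second network, error signals computed against the stored reference signals, and an adaptive update repeated until the parameter group converges — might be sketched as follows. The cubic toy nonlinearity, the three-coefficient stand-in for the second artificial neural network structure, the learning rate, and the batch size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Signal memory contents: pre-distortion signals and the response signals
# observed from the signal processing circuit, modeled here by a toy
# cubic nonlinearity.
pre_distortion = rng.uniform(-1.0, 1.0, size=1000)
response = 0.8 * pre_distortion + 0.1 * pre_distortion**3

# Parameter group of the "second artificial neural network structure",
# reduced to three polynomial coefficients so the back-propagation step
# collapses to a plain gradient.
w = np.zeros(3)

def post_distorter(x, w):
    return w[0] * x + w[1] * x**2 + w[2] * x**3

lr = 0.5
for step in range(3000):
    # Memory control unit: pseudo-randomly select response samples and
    # read the corresponding pre-distortion samples as reference signals.
    idx = rng.integers(0, len(response), size=64)
    selection = response[idx]
    reference = pre_distortion[idx]
    # Learning control unit: error signals and adaptive update of the
    # parameter group (gradient of the mean squared error).
    err = post_distorter(selection, w) - reference
    grad = np.array([np.mean(err * selection),
                     np.mean(err * selection**2),
                     np.mean(err * selection**3)])
    w -= lr * grad

# After convergence, the post-distorter approximately inverts the
# response nonlinearity over the training range.
final_err = np.mean((post_distorter(response, w) - pre_distortion) ** 2)
print(final_err)
```

The repeated read/supply/read/calculate/update cycle is what claim 9 refers to as converging the parameter group; in a real circuit the gradient would be produced by back-propagation through the network's layers rather than by this closed-form expression.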
  10.  The distortion compensation circuit according to any one of claims 6 to 9, wherein
     the first artificial neural network structure includes a plurality of layers consisting of a first input layer to which the discrete signals are input, a first intermediate layer comprising at least one layer, and a first output layer that outputs the pre-distortion signals,
     the second artificial neural network structure includes a plurality of layers consisting of a second input layer to which the selection signals are input, a second intermediate layer comprising at least one layer, and a second output layer that outputs the post-distortion signals, and
     the learning control unit updates, as the parameter group, weight coefficients applied to the signals propagating between the plurality of layers of the first artificial neural network structure and weight coefficients applied to the signals propagating between the plurality of layers of the second artificial neural network structure.
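The layered structure of claim 10 — an input layer, at least one intermediate layer, and an output layer, with weight coefficients applied to the signals propagating between adjacent layers — can be sketched as a minimal forward pass. The layer sizes, the tanh activation, and the bias terms below are illustrative assumptions, not part of the claim.

```python
import numpy as np

rng = np.random.default_rng(1)

# Layer sizes: input layer, one intermediate layer, output layer (the
# claim allows any number of intermediate layers; one is the minimum).
sizes = [2, 8, 2]

# The "parameter group": one weight matrix (and bias vector) per pair of
# adjacent layers, weighting the signals propagating between them.
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # The signal propagates from the input layer through the intermediate
    # layer(s) to the output layer, weighted at each transition.
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ w + b)
    return h @ weights[-1] + biases[-1]

y = forward(np.array([0.5, -0.25]))
print(y.shape)  # (2,)
```

The learning control unit of claim 10 would update every entry of `weights` (and, in this sketch, `biases`) as the parameter group.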
  11.  A signal processing device comprising:
     the distortion compensation circuit according to any one of claims 6 to 10; and
     the signal processing circuit.
  12.  The signal processing device according to claim 11, wherein the signal processing circuit includes a power amplifier that amplifies the power of the plurality of pre-distortion signals.
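For intuition about claim 12, a toy memoryless power amplifier with soft saturation can be linearized by cascading it with an exact pre-distorter. Here the arctanh inverse merely stands in for the trained first artificial neural network structure, and the gain and saturation values are assumptions.

```python
import numpy as np

def power_amplifier(x, gain=2.0, sat=1.0):
    # Toy memoryless amplifier: linear gain with soft saturation,
    # standing in for the power amplifier in the signal processing circuit.
    return sat * np.tanh(gain * x / sat)

def pre_distorter(x, gain=2.0, sat=1.0):
    # Exact inverse of the toy amplifier, playing the role of the trained
    # first artificial neural network structure. Clipping keeps the input
    # inside the invertible range of tanh.
    y = np.clip(x, -0.999 * sat, 0.999 * sat)
    return sat * np.arctanh(y / sat) / gain

# Cascading pre-distorter and amplifier yields a (nearly) linear response.
x = np.linspace(-0.9, 0.9, 7)
linearized = power_amplifier(pre_distorter(x))
residual = np.max(np.abs(linearized - x))
print(residual)
```

In the claimed device the pre-distorter is learned from data rather than derived analytically, so the cascade is only approximately linear over the signal range seen during training.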
PCT/JP2019/003645 2019-02-01 2019-02-01 Artificial neural network learning device, distortion compensation circuit, and signal processing device WO2020157961A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2020569314A JP6877662B2 (en) 2019-02-01 2019-02-01 Artificial neural network learning device, distortion compensation circuit and signal processing device
PCT/JP2019/003645 WO2020157961A1 (en) 2019-02-01 2019-02-01 Artificial neural network learning device, distortion compensation circuit, and signal processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/003645 WO2020157961A1 (en) 2019-02-01 2019-02-01 Artificial neural network learning device, distortion compensation circuit, and signal processing device

Publications (1)

Publication Number Publication Date
WO2020157961A1 true WO2020157961A1 (en) 2020-08-06

Family

ID=71841752

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/003645 WO2020157961A1 (en) 2019-02-01 2019-02-01 Artificial neural network learning device, distortion compensation circuit, and signal processing device

Country Status (2)

Country Link
JP (1) JP6877662B2 (en)
WO (1) WO2020157961A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05235792A (en) * 1992-02-18 1993-09-10 Fujitsu Ltd Adaptive equalizer
JPH07240901A (en) * 1994-02-28 1995-09-12 Victor Co Of Japan Ltd Line interpolating device using neural network
JPH10187649A (en) * 1996-12-27 1998-07-21 Toyo Electric Mfg Co Ltd Neural network
JPH117536A (en) * 1997-04-25 1999-01-12 Fuji Electric Co Ltd Device and method for recognizing picture by using neural network, and converter
JP2003078454A (en) * 2001-09-04 2003-03-14 Communication Research Laboratory Device for compensating for communication distortion and compensating method


Also Published As

Publication number Publication date
JPWO2020157961A1 (en) 2021-03-25
JP6877662B2 (en) 2021-05-26

Similar Documents

Publication Publication Date Title
Tarver et al. Neural network DPD via backpropagation through a neural network model of the PA
WO2003092154A1 (en) The method of improving the radio frequency power amplifier efficiency based on the baseband digital pre-distortion technique
CN110765720B (en) Power amplifier predistortion method of complex-valued pipeline recurrent neural network model
CN113812094B (en) MIMO DPD per branch, combined and packet combined
CN109075745A (en) pre-distortion device
Sarma et al. Modeling MIMO channels using a class of complex recurrent neural network architectures
CN111859795A (en) Polynomial-assisted neural network behavior modeling system and method for power amplifier
Le et al. Hierarchical partial update generalized functional link artificial neural network filter for nonlinear active noise control
WO2020157961A1 (en) Artificial neural network learning device, distortion compensation circuit, and signal processing device
JP5299958B2 (en) Predistorter
WO2021054118A1 (en) Parameter determining device, signal transmitting device, parameter determining method, signal transmitting method, and recording medium
US5319587A (en) Computing element for neural networks
JPH05145379A (en) Coefficient revision method in adaptive filter device
JP5226468B2 (en) Predistorter
Pang et al. A hierarchical alternative updated adaptive Volterra filter with pipelined architecture
CN106105032A (en) System and method for sef-adapting filter
US7693922B2 (en) Method for preprocessing a signal and method for signal processing
US20230275788A1 (en) Signal transmission apparatus, parameter determination apparatus, signal transmission method, parameter determination method and recording medium
CN115766355B (en) Low-complexity digital predistortion system
KUMAR Various nonlinear models and their identification, equalization and linearization
WO2022185505A1 (en) Learning device for distortion compensation circuit, and distortion compensation circuit
Lu et al. Generalized Combined Nonlinear Adaptive Filters for Nonlinear Acoustic Echo Cancellation
JP2022050316A (en) Parameter determination device, signal transmission device, parameter determination method, signal transmission method, and computer program
JP5260335B2 (en) Predistorter
Pochmara Modeling power amplifier nonlinearities with artificial neural network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19913513

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020569314

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19913513

Country of ref document: EP

Kind code of ref document: A1