WO2021244236A1 - 预失真方法、系统、设备及存储介质 - Google Patents

预失真方法、系统、设备及存储介质 Download PDF

Info

Publication number
WO2021244236A1
WO2021244236A1 PCT/CN2021/093060 CN2021093060W WO2021244236A1 WO 2021244236 A1 WO2021244236 A1 WO 2021244236A1 CN 2021093060 W CN2021093060 W CN 2021093060W WO 2021244236 A1 WO2021244236 A1 WO 2021244236A1
Authority
WO
WIPO (PCT)
Prior art keywords
complex
predistortion
vector
output
neural network
Prior art date
Application number
PCT/CN2021/093060
Other languages
English (en)
French (fr)
Inventor
刘磊
郁光辉
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Priority to JP2022541199A priority Critical patent/JP7451720B2/ja
Priority to US17/788,042 priority patent/US20230033203A1/en
Priority to KR1020227021174A priority patent/KR102707510B1/ko
Priority to EP21818081.8A priority patent/EP4160915A4/en
Publication of WO2021244236A1 publication Critical patent/WO2021244236A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F1/00Details of amplifiers with only discharge tubes, only semiconductor devices or only unspecified devices as amplifying elements
    • H03F1/32Modifications of amplifiers to reduce non-linear distortion
    • H03F1/3241Modifications of amplifiers to reduce non-linear distortion using predistortion circuits
    • H03F1/3247Modifications of amplifiers to reduce non-linear distortion using predistortion circuits using feedback acting on predistortion circuits
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F1/00Details of amplifiers with only discharge tubes, only semiconductor devices or only unspecified devices as amplifying elements
    • H03F1/30Modifications of amplifiers to reduce influence of variations of temperature or supply voltage or other physical parameters
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F1/00Details of amplifiers with only discharge tubes, only semiconductor devices or only unspecified devices as amplifying elements
    • H03F1/32Modifications of amplifiers to reduce non-linear distortion
    • H03F1/3241Modifications of amplifiers to reduce non-linear distortion using predistortion circuits
    • H03F1/3258Modifications of amplifiers to reduce non-linear distortion using predistortion circuits based on polynomial terms
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F3/00Amplifiers with only discharge tubes or only semiconductor devices as amplifying elements
    • H03F3/189High-frequency amplifiers, e.g. radio frequency amplifiers
    • H03F3/19High-frequency amplifiers, e.g. radio frequency amplifiers with semiconductor devices only
    • H03F3/195High-frequency amplifiers, e.g. radio frequency amplifiers with semiconductor devices only in integrated circuits
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F3/00Amplifiers with only discharge tubes or only semiconductor devices as amplifying elements
    • H03F3/20Power amplifiers, e.g. Class B amplifiers, Class C amplifiers
    • H03F3/24Power amplifiers, e.g. Class B amplifiers, Class C amplifiers of transmitter output stages
    • H03F3/245Power amplifiers, e.g. Class B amplifiers, Class C amplifiers of transmitter output stages with semiconductor devices only
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F2200/00Indexing scheme relating to amplifiers
    • H03F2200/451Indexing scheme relating to amplifiers the amplifier being a radio frequency amplifier

Definitions

  • This application relates to the field of communication technology, for example, to a predistortion method, system, device, and storage medium.
  • the power amplifier is a key component related to energy consumption and signal quality.
  • the power amplifier model and predistortion model have become key technologies for efficient communication and high data rate communication.
  • the nonlinear characteristics of the ideal predistortion model and the power amplifier model are strictly reciprocal in mathematics. Both the predistortion model and the power amplifier model are typical nonlinear fitting problems. However, traditional predistortion models and power amplifier models have inherently insufficient non-linear representation capabilities.
  • This application provides a predistortion method, system, equipment, and storage medium, which solves the congenital inadequacy of the traditional predistortion model and the power amplifier model's nonlinear representation ability, and at the same time solves the problem of using separate processing when applying neural networks The problem of lagging response to the dynamic changes of the power amplifier system, resulting in poor predistortion correction effect.
  • the embodiment of the present application provides a predistortion method, which is applied to a predistortion system, the predistortion system includes: a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop; the method includes:
  • the embodiment of the present application also provides a predistortion system that implements the predistortion method provided in the embodiment of the present application.
  • the predistortion system includes: a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop; the predistortion
  • the first input terminal of the multiplier is the input terminal of the predistortion system and is connected to the first input terminal and the second input terminal of the complex neural network.
  • the output terminal of the predistortion multiplier is connected to the output of the radio frequency power amplifier.
  • the input terminal of the feedback loop is connected, the output terminal of the RF power amplifier output feedback loop is the output terminal of the predistortion system, and the output terminal of the RF power amplifier output feedback loop is connected to the second input terminal of the complex neural network;
  • the output terminal of the complex neural network is connected to the second input terminal of the predistortion multiplier.
  • the embodiment of the present application also provides a predistortion system that implements the predistortion method provided in the embodiment of the present application.
  • the predistortion system includes: a predistortion multiplier, a complex neural network, a radio frequency power amplifier output feedback loop, and a first real-time power A normalization unit and a second real-time power normalization unit; the input terminal of the first real-time power normalization unit is the input terminal of the predistortion system, and the output terminal of the first real-time power normalization unit Connected to the first input terminal of the predistortion multiplier, the first input terminal of the complex neural network, and the second input terminal of the complex neural network, the output terminal of the predistortion multiplier is connected to the radio frequency power amplifier The input terminal of the output feedback loop is connected, the output terminal of the RF power amplifier output feedback loop is the output terminal of the predistortion system, and the output terminal of the RF power amplifier output feedback loop is connected to the second real-time power normal
  • An embodiment of the present application also provides a device, including: one or more processors; a storage device, configured to store one or more programs; when the one or more programs are executed by the one or more processors , Enabling the one or more processors to implement the aforementioned predistortion method.
  • An embodiment of the present application also provides a storage medium, the storage medium stores a computer program, and the computer program implements the aforementioned predistortion method when the computer program is executed by a processor.
  • FIG. 1 is a schematic flowchart of a predistortion method provided by an embodiment of this application;
  • Figure 2 is a schematic diagram of a traditional power amplifier model based on memory polynomials
  • FIG. 3a is a schematic structural diagram of a predistortion system provided by an embodiment of this application.
  • FIG. 3b is a schematic structural diagram of another predistortion system provided by an embodiment of this application.
  • FIG. 4 is a performance effect diagram of generalized adjacent channel leakage ratio obtained by an embodiment of the application.
  • FIG. 5 is an improvement effect diagram of generalized adjacent channel leakage ratio obtained by an embodiment of the application.
  • FIG. 6 is a schematic structural diagram of a device provided by an embodiment of this application.
  • FIG. 1 is a schematic flowchart of a predistortion method provided by an embodiment of the application.
  • the method may be applicable to the case of performing predistortion processing.
  • the method may be executed by a predistortion system. It can be implemented by software and/or hardware and integrated on the terminal device.
  • Power amplifier usually has three characteristics: 1) the static nonlinearity of the device itself (ie Static (Device) Nonlinearity); 2) the linearity derived from the matching network (ie Matching Network) and the delay of equipment components Memory effect (i.e. Linear Memory Effect); 3) Non-linear memory effect (i.e. Nonlinear Memory Effect), which mainly comes from the non-ideal characteristics of the transistor trap effect (i.e. Trapping Effect) and the bias network (i.e. Bias Network), and The dependence of the input level on temperature changes, etc.
  • the predistorter module (Predistorter, PD) is located in front of the radio frequency power amplifier, and is used to pre-process the nonlinear distortion that will be generated when the signal passes through the power amplifier; the predistortion implemented in the digital domain is called digital predistortion (Digital PD, DPD). ).
  • Figure 2 is a schematic diagram of a traditional power amplifier model based on memory polynomials.
  • the power amplifier model is equivalent to a zero hidden layer network with only an input layer and an output layer. It can't even count as a Multi-Layer Perceptron (MLP) network.
  • MLP Multi-Layer Perceptron
  • the expressive ability of a network is positively related to its structural complexity; this is also the fundamental reason for the inherent inadequacy of the nonlinear expressive ability of traditional power amplifier models.
  • the aforementioned predistortion and power amplifier nonlinearity are inverse functions to each other, so it can be expected that the same bottleneck will inevitably exist when traditional series or polynomial models are used to model DPD.
  • the method and technology of using neural network for pre-distortion adopts the method of separating the two links of the nonlinear characteristic learning of the power amplifier and the pre-distortion processing, that is: the first step, the neural network is not connected to the signal processing
  • the main link does not perform predistortion processing on the signal, but only accepts the feedback output of the power amplifier to train or learn the nonlinear characteristics of the power amplifier or its inverse characteristics; wait until the nonlinear characteristics are learned, then connect the neural network In the main link, or pass its training weights to a replica neural network on the main link, and then start the predistortion processing.
  • This application provides a predistortion method, which is applied to a predistortion system, the predistortion system includes: a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop, as shown in FIG. 1, the method includes the following step:
  • the pre-distortion system can be considered as a system capable of realizing pre-distortion correction.
  • the pre-distortion system includes a power amplifier.
  • the pre-distortion system can learn the nonlinear characteristics of the power amplifier and correct the pre-distortion. Fusion, that is, the predistortion system is directly connected to the power amplifier during training, and the nonlinear characteristics of the power amplifier are learned while predistortion correction is performed, which improves the efficiency and accuracy.
  • the predistortion system also includes a predistortion multiplier and a complex neural network.
  • the complex neural network can provide complex coefficients for the predistortion multiplier.
  • the predistortion multiplier can be considered as a multiplier that realizes the predistortion function.
  • the predistortion system may further include a first real-time power normalization unit and a second real-time power normalization unit.
  • first and second are only used to distinguish real-time power normalization units.
  • the real-time power normalization unit can perform normalization processing.
  • the first real-time power normalization unit may normalize the complex vector for training and input it to the predistortion multiplier and the complex neural network of the predistortion system.
  • the second power normalization unit may normalize the complex scalar corresponding to the training complex vector and return it to the complex neural network of the predistortion system.
  • the training complex vector can be regarded as the complex vector used for training the predistortion system.
  • the training complex vector can be input to the predistortion system to obtain the complex scalar output by the predistortion system, and the complex scalar can be output by the RF power amplifier output feedback loop in the predistortion system.
  • the complex scalar and the training complex vector can be used as samples for training the predistortion system.
  • the predistortion system can be trained, and the nonlinear characteristics of the power amplifier can be learned during the training process.
  • This application can input the training complex vector and complex scalar to the predistortion system to train the predistortion system.
  • the training can be supervised training.
  • the weight parameters and bias parameters of each layer of the complex neural network can be updated through the loss function of the complex neural network.
  • the condition for the end of the predistortion system training can be determined based on the generalized error vector magnitude and the generalized adjacent channel leakage ratio.
  • the setting requirements can be set according to actual requirements, and there is no limitation here, such as determining the setting requirements based on predistortion indicators.
  • the training of the predistortion system is completed, and the generalization If the corresponding values of the error vector magnitude and the generalized adjacent channel leakage ratio are less than the respective set thresholds, continue training the predistortion system.
  • the set threshold can be set according to actual needs, and is not limited here. In the case that the generalized error vector magnitude and the generalized adjacent channel leakage ratio do not meet the set requirements, the predistortion system can continue to be trained using the training complex vector.
  • This application uses both the generalized error vector magnitude and the generalized adjacent channel leakage ratio to meet the set requirements as the pre-distortion system training end condition, which can measure and reflect the universality and generalization ability of the pre-distortion system when performing pre-distortion correction.
  • the generalization ability refers to the repeated input of the complex vector in the training set to the predistortion system, so that training is essentially the system's learning of known data. After this process, the parameters of the predistortion system are solidified, and then a brand new, The data in the test set unknown to the system is used to count and investigate the system's ability to adapt or predistortion to a wider range of new inputs or new data.
  • the service complex vector can be input to the trained predistortion system to obtain a complex scalar for predistortion correction.
  • the service complex vector may be a service vector when the predistortion system process is applied, and the vector is a complex vector.
  • This application provides a predistortion method, which is applied to a predistortion system, the predistortion system includes: a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop.
  • the method first inputs a complex vector for training into the The predistortion system obtains the complex scalar corresponding to the training complex vector output by the predistortion system; then based on the training complex vector and the complex scalar, the predistortion system is trained until the predistortion The generalized error vector magnitude and generalized adjacent channel leakage ratio corresponding to the system meet the set requirements; finally, the service complex vector is input into the trained predistortion system to obtain the complex scalar for predistortion correction.
  • This application replaces the pre-distortion model in related technologies with a pre-distortion system.
  • the complex neural network in the pre-distortion system has a rich structure and strong nonlinear expression or fitting ability, which can effectively solve the traditional pre-distortion model and power amplifier.
  • Non-linearity represents the problem of congenital insufficiency.
  • the predistortion system uses a complex error back propagation algorithm for training.
  • the complex error backpropagation algorithm can be considered as a complex error backpropagation algorithm.
  • the learning process of the error back propagation algorithm is composed of two processes: the forward propagation of the signal and the back propagation of the error.
  • training the predistortion system includes:
  • the system parameters can be considered as parameters required for the initialization of the predistortion system.
  • the content of the system parameters is not limited here, and can be set according to actual conditions.
  • the system parameters include but are not limited to: nonlinear order parameters, power amplifier memory effect parameters, and the initial output of the complex neural network included in the predistortion system.
  • Initializing system parameters can be considered as setting system parameters, and the set values can be empirical values or determined based on historical system parameters.
  • the training set can be considered as a set of complex vectors for training the predistortion system.
  • the test set can be considered as a set of complex vectors for testing the generalization performance of the predistortion system.
  • the complex number vectors included in the training set and the test set are different, and the combination of the two is the complex number vector for training.
  • the complex vector of the predistortion system is partially intercepted from the training set, and the corresponding output complex scalar of the predistortion system is in one-to-one correspondence; when the predistortion system is trained, it is based on the current in the complex vector.
  • the element and the complex scalar are used to calculate the loss function of the complex neural network in the predistortion system, and then the weight parameters and bias parameters of each layer of the complex neural network are updated based on the loss function.
  • the generalization performance of the predistortion system is checked based on the entire test set
  • the output set composed of all the complex scalars is used to calculate the generalization error vector magnitude (GEVM) and generalization adjacent channel leakage ratio (Generalization Adjacent Channel Power Ratio, GACLR) performance, and use it to determine whether it is over Training of the predistortion system.
  • GEVM generalization error vector magnitude
  • GACLR Generalization Adjacent Channel Power Ratio
  • the elements in the normalized complex number vector are determined by the following calculation expression:
  • x 1 (n) represents the nth element in the normalized complex number vector
  • x 1,raw (n) represents the nth element in the training complex number vector
  • x 1,raw (d) represents the training The dth element in the complex vector.
  • the initialization of the system parameters of the predistortion system includes:
  • the initialization of the corresponding layer is completed according to the layer type corresponding to each layer in the complex neural network.
  • Initializing the system parameters of the predistortion system also includes: setting the nonlinear order parameter, the power amplifier memory effect parameter, and the initial output of the complex neural network included in the predistortion system.
  • the set memory effect parameters of the power amplifier are not limited to the power amplifier.
  • the set nonlinear order parameters and power amplifier memory effect parameters are only used for the training of the predistortion system.
  • the non-linear order parameter, power amplifier memory effect parameter, and the initial output value of the complex neural network included in the predistortion system are not limited. There is no limitation on the execution order of the operation of setting the nonlinear order parameter, the power amplifier memory effect parameter and the initial output of the complex neural network included in the predistortion system and the operation of initializing each layer of the complex neural network.
  • the initialization of each layer of the complex neural network can be completed based on the corresponding initialization setting parameters, and the initialization setting parameters are not limited.
  • the initialization setting parameters can be distribution types and distribution parameters.
  • the completing the initialization of the corresponding layer according to the layer type corresponding to each layer in the complex neural network includes:
  • the weight parameter and the bias parameter of the corresponding layer are initialized.
  • the current (to be initialized) layer is a fully connected layer, according to the randomly initialized distribution type (for example: Gaussian distribution, uniform distribution, etc.) and distribution parameters (mean, variance, standard deviation, etc.) set by the layer, Initialize the weight parameters and bias parameters of this layer.
  • randomly initialized distribution type for example: Gaussian distribution, uniform distribution, etc.
  • distribution parameters mean, variance, standard deviation, etc.
  • the training of the predistortion system based on the training set and the complex scalar includes:
  • the complex vector in the training set is input into the predistortion system according to the element index.
  • the length of the complex vector input each time is determined based on the memory effect parameters of the power amplifier.
  • the complex vector input to the predistortion system each time is determined by the historical element and the current element in the training set.
  • the output of the RF power amplifier output feedback loop in the predistortion system is passed through the second real-time power normalization unit included in the predistortion system, and then input into the complex neural network; or
  • the output of the RF power amplifier output feedback loop in the predistortion system is directly input to the complex neural network; according to the loss function of the complex neural network in the predistortion system, the partial derivative of each element of the correction vector output by the complex neural network or Sensitivity, determining the partial derivative or sensitivity of the loss function to the weight parameter and bias parameter of each layer in the complex neural network; updating the weight parameter and bias of the corresponding layer according to the determined partial derivative or sensitivity of each layer Parameters; where, the expression of the complex vector input to the predistortion system each time is as follows:
  • M 1 is the power amplifier memory effect parameter
  • 0 ⁇ M 1 ⁇ TrainSize is the length of the training set
  • Is the nth element in the training set Is the current element
  • m is an integer greater than 1 and less than M 1
  • Training set are the first n-1 elements, the first element and the first nm nM 1 elements, and All are historical elements.
  • the output of the RF power amplifier output feedback loop in the predistortion system is a complex scalar.
  • the complex vector in the training set is input to the predistortion system according to the element index, that is, to the predistortion multiplier and the complex neural network in the predistortion system.
  • the length of the complex vector input to the predistortion system each time can be determined by the power amplifier memory effect parameter, such as inputting a current element in the training set each time, and then the power amplifier memory effect parameter determines the number of historical elements input to the predistortion system.
  • the method for determining the number of historical elements is not limited, for example, the number of historical elements is not greater than the memory effect parameter of the power amplifier.
  • the loss function of the complex neural network in the predistortion system can be determined, and then the complex neural network output based on the loss function
  • the partial derivative or sensitivity of each element of the correction vector is determined to determine the partial derivative or sensitivity of the loss function to the weight parameters and bias parameters of each layer in the complex neural network, so as to update the weight of each layer of the complex neural network. Value parameters and bias parameters to achieve pre-distortion system training.
  • the relationship between the complex vector input to the complex neural network in the predistortion system and the correction vector output by the complex neural network is as follows:
  • ComplexNN composite function represents a plurality of internal-layer neural network computing function.
  • the output of the multiplier is the complex pre-distortion vector of n elements, the n-1 elements, the first element nM + 1 nM 1 and the second element, Is the complex vector input to the predistortion multiplier, Represents dot multiplication.
  • I the output of the output feedback loop of the radio frequency power amplifier
  • PA represents the processing function of the input signal of the output feedback loop of the radio frequency power amplifier
  • the relationship between the input and output of the second real-time power normalization unit is as follows:
  • Is the nth output of the second real-time power normalization unit Is the nth input of the second real-time power normalization unit
  • n is a positive integer
  • d is a positive integer greater than or equal to 1 and less than or equal to n
  • the partial derivative or sensitivity of the loss function of the complex neural network in the predistortion system with respect to each element of the correction vector output by the complex neural network it is determined that the loss function affects the complex neural network.
  • the partial derivatives or sensitivity of the weight parameters and bias parameters of each layer include:
  • the loss function is calculated and determined by the following calculation expression:
  • the feedback amount of the output feedback loop of the radio frequency power amplifier is the first value obtained after inputting the output of the output feedback loop of the radio frequency power amplifier into the second real-time power normalization unit 2.
  • the output of the real-time power normalization unit; in the case that the predistortion system does not include the second real-time power normalization unit, the feedback value of the output feedback loop of the RF power amplifier is the output of the output feedback loop of the RF power amplifier.
  • the partial derivative or sensitivity of the loss function to each element of the correction vector output by the complex neural network can be based on the loss function to the output of the predistortion multiplier, the RF power amplifier output feedback loop and/or the second real-time power normalization unit in the predistortion system.
  • the partial derivative or sensitivity is determined.
  • this application can update the weight parameters and bias parameters of each layer of the complex neural network based on the partial derivative or sensitivity of the correction vector.
  • the calculation expression of the partial derivative or sensitivity of each element of the correction vector is as follows:
  • ⁇ 1, m is determined by the following calculation expression:
  • ⁇ l,m is determined by the following calculation expression:
  • c l, m are complex coefficients obtained by using the least squares algorithm based on the complex number vector in the training set and the output obtained by inputting it into the output feedback loop of the RF power amplifier according to the memory polynomial model; Is the partial derivative or sensitivity of the loss function to the complex scalar output of the RF power amplifier output feedback loop.
  • the partial derivative or sensitivity of the weight parameter and the bias parameter of the fully connected layer inside the complex neural network are respectively:
  • f'( ⁇ ) represents the derivative of the neuron activation function to the input signal, Is the partial derivative or sensitivity of the weight parameter of the fully connected layer, Is the partial derivative or sensitivity of the bias parameter of the fully connected layer, Is the loss function pair The partial derivative or sensitivity.
  • the loss function is The partial derivative or sensitivity of is equal to the partial derivative or sensitivity of the correction vector.
  • the weight parameter and the partial derivative or sensitivity of the bias parameter of the qth convolution kernel of the convolution layer inside the complex neural network are respectively:
  • Each feature map can be in the form of a complex vector.
  • the weight parameters and bias parameters of the fully connected layer inside the complex neural network are updated by the following expressions:
  • testing the trained predistortion system based on the test set to obtain the error vector magnitude and adjacent channel leakage ratio corresponding to the predistortion system includes:
  • the length of the complex vector input each time is determined based on the memory effect parameters of the power amplifier.
  • the complex vector input to the predistortion system each time is obtained from the combination of the historical element and the current element in the test set.
  • the output of the RF power amplifier output feedback loop in the predistortion system is input to the predistortion system, and the generalized error vector magnitude and generalized adjacent channel leakage ratio corresponding to the predistortion system are determined.
  • the method used by the predistortion system after testing and training based on the test set is similar to the method used for training the predistortion system based on the training set. The difference is that there is no need to update the weight parameters and bias parameters of each layer of the complex neural network based on the test set. Instead, directly calculate and determine the generalized error vector magnitude and generalized adjacent channel leakage ratio to determine whether the generalized error vector magnitude and generalized adjacent channel leakage ratio meet the set requirements. When calculating the generalized error vector magnitude and generalized adjacent channel leakage ratio, it can be determined based on the feedback amount of the RF power amplifier output feedback loop and the corresponding complex vector in the test set.
  • Inputting the output of the RF power amplifier output feedback loop in the predistortion system to the predistortion information system includes passing the output of the RF power amplifier output feedback loop through the second real-time power normalization unit included in the predistortion system, and then Input the complex neural network; or directly input the output of the RF power amplifier output feedback loop into the complex neural network.
  • the generalized error vector magnitude and the generalized adjacent channel leakage ratio are respectively determined by the following calculation expressions:
  • GEVM 1 represents the magnitude of the generalization error vector, Express The generalized adjacent channel leakage ratio, Indicates the amount of feedback of the output feedback loop of the RF power amplifier, Represents the nth element feedback quantity of the output feedback loop of the radio frequency power amplifier, Represents the nth element of the corresponding complex number vector in the test set, TestSize represents the length of the test set, the sum of TestSize and TrainSize is N 1 , N 1 represents the length of the complex vector for training, HBW represents half of the effective signal bandwidth, GBW Represents half of the protection bandwidth, NFFT represents the number of discrete Fourier transform points; Win NFFT is the coefficient vector of the window function with a window length of NFFT, Means random from Intercept the NFFT-long signal in the middle; It is a uniformly distributed random positive integer in the value range [1, TestSize]; K is the number of random intercepts.
  • the feedback amount of the RF power amplifier output feedback loop can be considered as the output of the second real-time power normalization unit; the predistortion system does not include the second real-time power normalization unit.
  • the feedback amount of the output feedback loop of the radio frequency power amplifier may be the output of the output feedback loop of the radio frequency power amplifier.
  • the predistortion method provided by the present application solves the problems of the traditional power amplifier model and the predistortion model with inadequate non-linear representation capabilities and lack of generalization capabilities, and obtains a better error vector magnitude (Error Vector). Magnitude, EVM) performance and Adjacent Channel Power Ratio (ACLR) performance.
  • Error Vector Error Vector
  • Magnitude, EVM EVM
  • ACLR Adjacent Channel Power Ratio
  • This application adopts a neural network with a richer structure and form to carry out system modeling, which solves the inherent shortcomings of the traditional power amplifier model and DPD model's inadequate non-linear expression ability; at the same time, completely abandon the application of neural network for predistortion methods and technology separation processing In this way, the characteristic learning and predistortion correction of the power amplifier are integrated and processed into a whole from beginning to end, and the AI-DPD integrated solution is proposed for the first time, that is, the integrated solution of artificial intelligence instead of DPD.
  • the known training complex vector is sent, and the feedback loop is output through the predistortion multiplier, the complex neural network and the radio frequency power amplifier. Together with the training complex vector, it is used for AI -DPD integrated solution system (i.e. predistortion system) training; when the training reaches the required GEVM and GACLR, the training stops; at the same time, the service complex vector is sent through the predistortion system to output the complex scalar for predistortion correction .
  • AI -DPD integrated solution system i.e. predistortion system
  • Fig. 3a is a schematic structural diagram of a predistortion system provided by an embodiment of the application.
  • the predistortion system that is, the AI-DPD integrated solution system, mainly includes: a first real-time power normalization (ie, PNorm) unit , The second real-time power normalization unit, the predistortion multiplier (ie DPD Multiplier) unit, the complex neural network (ie Complex Neural Network) unit, and the RF power amplifier output feedback loop.
  • the predistortion multiplier unit namely the predistortion multiplier.
  • the complex neural network unit may include a complex neural network, a selector (ie, MUX), and an adder.
  • the output feedback loop of the radio frequency power amplifier includes a digital-to-analog converter unit (namely D/A), a radio frequency modulation unit, a power amplifier unit, an analog-to-digital converter unit (namely A/D), and a radio frequency demodulation unit.
  • D/A digital-to-analog converter unit
  • A/D analog-to-digital converter unit
  • the complex neural network is a kind of neural network: complex numbers include real part (Real Part, hereinafter abbreviated as I path) and imaginary part (Imaginary Part, hereinafter abbreviated as Q path), and usually means that the imaginary part is not The case of 0 is distinguished from real numbers.
  • the input and output of the neural network can be directly complex variables, vectors, matrices or tensors, or in the form of a combination of complex numbers I and Q; neurons in each layer (ie Layer) of the neural network
  • the input and output of the activation function and all other processing functions can be directly complex variables, vectors, matrices or tensors, or in the form of a combination of complex I and Q paths;
  • the neural network can use complex error inversion To train the propagation algorithm (Complex Error Back Propagation Algorithm, Complex BP), you can also use the real error back propagation algorithm (BP) for training.
  • the complex error back propagation algorithm includes but is not limited to the following steps:
  • Step 1 Initialize the system parameters of the AI-DPD integrated solution.
  • L 1 5
  • Each element of is 1
  • the index of each layer is run layer by layer in the forward direction, or all layers can be parallelized at the same time, and the corresponding initialization steps are completed according to the layer type.
  • the initialization of the different types of layers mainly includes but is not limited to the following steps:
  • Step 1-1 If the current (to be initialized) layer is a fully connected layer, according to the randomly initialized distribution type (such as Gaussian distribution, uniform distribution, etc.) and distribution parameters (mean, variance, standard deviation, etc.) set for this layer , Initialize the weight parameters and bias parameters of this layer; Step 1-2: If the current (to be initialized) layer is a convolutional layer, initialize each convolution of this layer according to the randomly initialized distribution type and distribution parameters set for this layer The weight parameters and bias parameters of the core; Steps 1-3: If the current (to be initialized) layer is another type of layer, such as a micro-step convolution (ie Fractionally Strided Convolutions) layer, etc., set the parameters according to the initialization of this type of layer (Such as randomly initialized distribution types and distribution parameters), complete the initialization of the weights and bias parameters of this layer.
  • the randomly initialized distribution type such as Gaussian distribution, uniform distribution, etc.
  • distribution parameters mean, variance, standard deviation, etc.
  • Step 2 AI-DPD integrated solution system is based on training set training.
  • N 1 perform real-time power normalization (PNorm), because there are many variations of real-time power normalization, the following is just an example:
  • the sequence after the real-time power is normalized Divided into training set And test set
  • the lengths of the two are respectively recorded as TrainSize and TestSize; among them, the training set is used for the overall learning or training of the AI-DPD integrated solution system; the test set is used to test the generalized adaptation of the solution system to new data in the second half of training Performance; this performance includes the generalized EVM performance and generalized ACPR performance, which are defined as follows:
  • HBW refers to half of the effective signal bandwidth
  • GBW refers to half of the protection bandwidth
  • NFFT is the number of discrete Fourier transform (FFT) points
  • Win NFFT is the coefficient vector of a window function with a window length of NFFT, for example: the window function can use a Blackman window; Means random from Intercept the NFFT-long signal in the middle; Is a uniformly distributed random positive integer in the value range [1, TestSize]; K is the number of random intercepts; Represents dot multiplication.
  • the training of the AI-DPD integrated solution system includes but is not limited to the following sub-steps:
  • M 1 is the power amplifier memory effect parameter, 0 ⁇ M 1 ⁇ TrainSize.
  • Step 2-2 Convert the complex number vector Through the complex neural network (Complex Neural Network, hereinafter referred to as ComplexNN) unit, the predistortion complex correction vector is obtained The relationship between the input and output of this unit is shown in the following formula:
  • Step 2-3 Convert the complex number vector Obtain the pre-distortion corrected complex vector through a pre-distortion multiplier (DPD Multiplier) unit The relationship between the input and output of this unit is shown in the following formula:
  • Step 2-4 The complex vector after the predistortion correction Input the output feedback loop of the radio frequency power amplifier, and the output obtained is The relationship between the output of the predistortion multiplier and the output of the RF power amplifier output feedback loop is shown in the following equation:
  • the power amplifier (PA) unit in the output feedback loop of the radio frequency power amplifier can be any actual power amplifier product; in other words, this embodiment does not have any restriction on the nonlinear characteristics of the power amplifier model.
  • Steps 2-5 will: the final original output of the AI-DPD integrated solution After radio frequency demodulation and analog-to-digital conversion (A/D), input the real-time power normalization (PNorm) unit to obtain the output feedback of the power amplifier
  • PNorm real-time power normalization
  • Step 2-6 According to the output feedback of the power amplifier And the complex vector Calculate the loss function of the complex neural network (Loss Function). Because there are many variations of additional regularization (Regularization) terms, the following loss function expression is just a simple example:
  • Step 2-7 Calculate the loss function according to formula (2-16) For the power amplifier output feedback Partial derivative or sensitivity
  • Step 2-8 According to the above sensitivity Calculate the loss function according to the following formula (2-17) For the final original output of the AI-DPD integrated solution Partial derivative or sensitivity
  • Step 2-9 According to the above sensitivity Use the following formula (2-18) to calculate the following intermediate partial derivative or sensitivity ⁇ l,m :
  • c l, m are the complex coefficients obtained by the least square algorithm based on the complex number vector in the training set and the output obtained by inputting it into the output feedback loop of the RF power amplifier according to the memory polynomial model; conj( ⁇ ) Take the conjugate of complex numbers.
  • Step 2-10 Calculate the loss function according to the above-mentioned sensitivity ⁇ l,m according to the following formula (2-19) For complex vector after predistortion correction Partial derivative or sensitivity of each element
  • Step 2-11 According to the above sensitivity Calculate the loss function according to the following formula (2-21) Complex correction vector for the predistortion Partial derivative or sensitivity of each element
  • Step 2-12 According to the above sensitivity Calculate the loss function in turn according to the reverse order of the forward operation of the complex neural network (Complex Neural Network) unit
  • Step 2-12-1 If the current layer to be calculated is a Full-Connected Layer, calculate the loss function according to formula (2-22) and formula (2-23) respectively For the complex weight parameter of this layer And complex offset parameters
  • Kernel convolution kernel
  • Represents the p (p 1,2,...,P) complex number vector output by the previous layer (jth layer) of the convolutional layer; Represents the qth complex number vector output by the current convolutional layer (the kth layer); Represents the loss function For the above The partial derivative or sensitivity of; Conv( ⁇ ) represents the convolution operation; Fliplr( ⁇ ) represents the position inversion of the input vector.
  • Step 2-13 The loss function calculated according to each layer For the sensitivity of the weight parameters and bias parameters, update the parameters, because there are many ways to update the parameters of different training algorithms. The following is just a simple example:
  • Step 3 The AI-DPD integrated solution system is based on the performance statistics of the test set.
  • Step 3-2 Convert the complex number vector Obtain the complex correction vector of the predistortion through the complex neural network (Complex Neural Network) unit
  • Step 3-3 Convert the complex number vector Obtain the pre-distortion corrected complex vector through a pre-distortion multiplier (DPD Multiplier) unit
  • Step 3-4 The complex vector after the predistortion correction Input the output feedback loop of the RF power amplifier to get the output
  • Steps 3-5 Put the After radio frequency demodulation and analog-to-digital conversion (A/D), input the real-time power normalization (PNorm) unit to obtain the output feedback of the power amplifier
  • A/D analog-to-digital conversion
  • PNorm real-time power normalization
  • Step 3-6 According to the above And the test set According to the above formula (2-3) and formula (2-4), the statistical calculation system is based on the generalized EVM performance (GEVM) and generalized ACLR performance (GACLR) of the test set.
  • GEVM generalized EVM performance
  • GACLR generalized ACLR performance
  • Steps 3-7 If the GEVM and GACLR meet the set index requirements, stop the training of the AI-DPD integrated solution system; otherwise, return to all steps in step 2 to start a new round of training.
  • the known training complex vector is sent, and the complex scalar obtained by the predistortion correction is output through the predistortion multiplier, the complex neural network, and the RF power amplifier output feedback loop. It is fed back to the sending end and used for training of the AI-DPD integrated scheme system together with the training complex vector; when the training reaches the required generalized EVM performance and generalized ACLR performance, the training stops; at the same time, the sending station
  • the complex vector of the business passes through the predistortion system to output the complex scalar for obtaining the predistortion correction.
  • Fig. 3b is a schematic structural diagram of another predistortion system provided by an embodiment of the application. See Fig. 3b.
  • the predistortion system is also called the AI-DPD integrated solution system and mainly includes: a predistortion multiplier unit and a complex neural network unit And RF power amplifier output feedback loop.
  • a predistortion multiplier unit and a complex neural network unit And RF power amplifier output feedback loop.
  • Is the input vector Complex correction vector for predistortion It is also the output of a complex neural network
  • y 2 is the final output of the AI-DPD integrated solution
  • y 2 is a complex scalar.
  • the complex number neural network is a kind of neural network: a complex number includes two parts, a real part and an imaginary part, and usually refers to the situation where the imaginary part is not 0, to distinguish it from the real number; the input and output of the neural network can either be directly Complex variables, vectors, matrices, or tensors can also be in the form of a combination of I-way and Q-way of complex numbers; the neuron activation function of each layer of the neural network and the input and output of all other processing functions can be directly Complex variables, vectors, matrices or tensors can also be in the form of a combination of I and Q of complex numbers; the neural network can be trained using complex error backpropagation algorithms, or real error backpropagation algorithms train.
  • the complex error back propagation algorithm includes but is not limited to the following steps:
  • Step1 Initialization of the system parameters of the AI-DPD integrated solution.
  • L 2 5
  • Each element of is 1
  • the index of each layer is run layer by layer in the forward direction, or all layers can be parallelized at the same time, and the corresponding initialization steps are completed according to the layer type.
  • the initialization of the different types of layers mainly includes but is not limited to the following steps:
  • Step1-1 If the current (to be initialized) layer is a fully connected layer, according to the randomly initialized distribution type (for example: Gaussian distribution, uniform distribution, etc.) and distribution parameters (mean, variance, standard deviation, etc.) set by this layer, Initialize the weight parameters and bias parameters of this layer; Step1-2: If the current (to be initialized) layer is a convolutional layer, initialize the layers of each convolution kernel according to the randomly initialized distribution type and distribution parameters set for this layer Weight parameters and bias parameters; Step1-3: If the current (to be initialized) layer is another type of layer, set the parameters according to the initialization of this type of layer to complete the initialization of the weight and bias parameters of this layer.
  • the randomly initialized distribution type for example: Gaussian distribution, uniform distribution, etc.
  • distribution parameters mean, variance, standard deviation, etc.
  • Step2 The AI-DPD integrated solution system is based on the training of the training set. Take a known training complex vector sequence of length N 2 According to a certain ratio (for example: 0.5:0.5), divided into training set And test set The lengths of the two are respectively recorded as TrainSize and TestSize; among them, the training set is used for the overall learning or training of the AI-DPD integrated solution system; the test set is used to test the generalized adaptation of the solution system to new data in the second half of training Performance; the performance includes the generalized EVM performance (GEVM) and generalized ACPR performance (GACLR), which are defined as follows:
  • GEVM generalized EVM performance
  • GACLR generalized ACPR performance
  • HBW refers to half of the effective signal bandwidth
  • GBW refers to half of the protection bandwidth
  • NFFT is the number of discrete Fourier transform (FFT) points
  • Win NFFT is the coefficient vector of a window function with a window length of NFFT.
  • the window function can use a Blackman window; Means random from Intercept the NFFT long signal in the middle; Is a uniformly distributed random positive integer in the value range [1, TestSize]; K is the number of random intercepts; Represents dot multiplication.
  • the training of the AI-DPD integrated solution system includes but is not limited to the following sub-steps:
  • M 2 is the memory effect parameter of the power amplifier, 0 ⁇ M 2 ⁇ TrainSize.
  • Step2-2 Convert the complex number vector Obtain the complex correction vector of the predistortion through the complex neural network unit (ComplexNN) The relationship between the input and output of this unit is shown in the following formula:
  • Step2-3 Convert the complex number vector Obtain the pre-distortion corrected complex vector through a pre-distortion multiplier (DPD Multiplier) unit The relationship between the input and output of this unit is shown in the following formula:
  • Step2-4 The complex vector after the predistortion correction Input the output feedback loop of the radio frequency power amplifier to get the output As shown in the following formula:
  • the power amplifier (PA) unit in the output feedback loop of the radio frequency power amplifier can be any actual power amplifier product; in other words, this embodiment does not have any restriction on the nonlinear characteristics of the power amplifier model.
  • Step2-5 According to the above And the complex vector Calculate the loss function of the complex neural network. Because there are many variations of additional regularization (Regularization) terms, the following loss function expression is just a simple example:
  • Step2-6 Calculate the loss function according to formula (2-47) For the power amplifier output feedback Partial derivative or sensitivity
  • Step2-7 According to the above sensitivity Use the following formula (2-48) to calculate the following intermediate partial derivative or sensitivity ⁇ l,m :
  • c l, m are the complex coefficients obtained by the least square algorithm based on the complex number vector in the training set and the output obtained by inputting it into the output feedback loop of the RF power amplifier according to the memory polynomial model; conj( ⁇ ) Take the conjugate of complex numbers.
  • Step2-8 Calculate the loss function according to the above-mentioned sensitivity ⁇ l, m according to the following formula (2-49) For complex vector after predistortion correction Partial derivative or sensitivity of each element
  • Step2-9 According to the above sensitivity Calculate the loss function according to the following formula (2-51) Complex correction vector for the predistortion Partial derivative or sensitivity of each element
  • Step2-10 According to the above sensitivity Calculate the loss function in turn according to the reverse order of the forward running of the complex neural network unit For the partial derivatives or sensitivity of the weights and bias parameters of the internal layers, it includes but not limited to the following sub-steps:
  • Step2-10-1 If the current layer to be calculated is a fully connected layer, calculate the loss function according to formula (2-52) and formula (2-53) respectively For the complex weight parameter of this layer And complex offset parameters
  • Represents the p (p 1,2,...,P) complex number vector output by the previous layer (jth layer) of the convolutional layer; Represents the qth complex number vector output by the current convolutional layer (the kth layer); Represents the loss function For the above The partial derivative or sensitivity of; Conv( ⁇ ) represents the convolution operation; Fliplr( ⁇ ) represents the position inversion of the input vector.
  • Step2-11 The loss function calculated according to each layer For the sensitivity of the weight parameters and bias parameters, update the parameters, because there are many ways to update the parameters of different training algorithms. The following is just a simple example:
  • Step3 The AI-DPD integrated solution system is based on the performance statistics of the test set.
  • Step3-2 Convert the complex number vector Obtain the complex correction vector of the predistortion through the complex neural network (ComplexNN) unit
  • Step3-3 Convert the complex number vector Obtain the pre-distortion corrected complex vector through a pre-distortion multiplier (DPD Multiplier) unit
  • Step3-4 The complex vector after the predistortion correction Input the output feedback loop of the RF power amplifier to get the output
  • Step3-5 According to the above And the test set According to the above formula (2-36) and formula (2-37), the statistical calculation system is based on the generalized EVM performance (GEVM) and generalized ACLR performance (GACLR) of the test set.
  • GEVM generalized EVM performance
  • GACLR generalized ACLR performance
  • Step3-6 If the GEVM and GACLR meet the set index requirements, stop the training of the AI-DPD integrated solution system; otherwise, return to all the steps of Step2 and start a new round of training.
  • Table 1 is a generalization performance effect table provided by an embodiment of this application. See Table 1.
  • the predistortion system provided by this application guarantees generalized EVM performance.
  • the training set includes 39,320 complex number vectors; the validation set includes 13,107 complex number vectors; and the test set includes 13,105 complex number vectors.
  • Table 1 A generalization performance effect table provided by the embodiment of the application.
  • FIG. 4 is a performance effect diagram of the generalized adjacent channel leakage ratio obtained by an embodiment of the application.
  • an integrated method that is, the generalized adjacent channel leakage ratio and expected output of the actual output of the predistortion system provided by this application
  • the generalized adjacent channel leakage ratio deviation is smaller.
  • FIG. 5 is a diagram showing the improvement effect of the generalized adjacent channel leakage ratio obtained by an embodiment of the application. Referring to FIG. Has been improved.
  • Some implementations also include a machine-readable or computer-readable program storage device (for example, a digital data storage medium) and coded machine-executable or computer-executable program instructions, where the instructions execute some of the above methods or All steps.
  • the program storage device may be a digital memory, a magnetic storage medium (for example, a magnetic disk and a magnetic tape), a hardware, or an optically readable digital data storage medium.
  • Embodiments also include a programmed computer that executes the steps of the above-described method.
  • the present application provides a predistortion system.
  • the predistortion system can execute the predistortion method provided in the embodiments of the present application, and the predistortion system includes : Predistortion multiplier, complex neural network and RF power amplifier output feedback loop; the first input terminal of the predistortion multiplier is the input terminal of the predistortion system, and the first input terminal and the first input terminal of the complex neural network Two input ends are connected, the output end of the predistortion multiplier is connected to the input end of the output feedback loop of the radio frequency power amplifier, the output end of the output feedback loop of the radio frequency power amplifier is the output end of the predistortion system, and the radio frequency
  • the output terminal of the power amplifier output feedback loop is connected with the second input terminal of the complex neural network; the output terminal of the complex neural network is connected with the second input terminal of the predistortion multiplier.
  • the radio frequency power amplifier output feedback loop includes: a digital-to-analog converter unit, a radio frequency modulation unit, a power amplifier unit, an analog-to-digital converter unit, and a radio frequency demodulation unit.
  • the output end of the predistortion multiplier can be connected to the input end of the power amplifier unit of the RF power amplifier output feedback loop through the digital-to-analog converter unit (ie D/A) and the RF modulation unit of the RF power amplifier output feedback loop.
  • the output terminal of the amplifier unit can be connected to the second input terminal of the complex neural network through a radio frequency demodulation unit and an analog-to-digital converter unit (ie, A/D).
  • the predistortion system provided in this embodiment is used to implement the predistortion method provided in the embodiment of the present application.
  • the implementation principle and technical effect of the predistortion system provided in this embodiment are similar to the predistortion method provided in the embodiment of the present application, and will not be omitted here. Go into details.
  • the present application provides a predistortion system.
  • the predistortion system can execute the predistortion method provided in the embodiments of the present application, and the predistortion system includes : Predistortion multiplier, complex neural network, RF power amplifier output feedback loop, first real-time power normalization unit and second real-time power normalization unit; the input of the first real-time power normalization unit is said The input terminal of the predistortion system, the output terminal of the first real-time power normalization unit and the first input terminal of the predistortion multiplier, the first input terminal of the complex neural network and the input terminal of the complex neural network The second input terminal is connected.
  • the output terminal of the predistortion multiplier is connected to the input terminal of the output feedback loop of the radio frequency power amplifier.
  • the output terminal of the output feedback loop of the radio frequency power amplifier is the output terminal of the predistortion system.
  • the output terminal of the output feedback loop of the radio frequency power amplifier is connected with the input terminal of the second real-time power normalization unit, and the output terminal of the second real-time power normalization unit is connected with the second input terminal of the complex neural network,
  • the output terminal of the complex neural network is connected to the second input terminal of the predistortion multiplier.
  • the radio frequency power amplifier output feedback loop includes: a digital-to-analog converter unit, a radio frequency modulation unit, a power amplifier unit, an analog-to-digital converter unit, and a radio frequency demodulation unit.
  • the input end of the digital-to-analog converter unit is the input end of the output feedback loop of the radio frequency power amplifier, the output end of the digital-to-analog conversion unit is connected to the input end of the radio frequency modulation unit, and the output end of the radio frequency modulation unit is connected to the power
  • the input end of the amplifier unit is connected, the output end of the power amplifier unit is connected to the input end of the radio frequency demodulation unit, and the output end of the radio frequency demodulation unit is connected to the input end of the analog-to-digital converter unit, so
  • the output terminal of the analog-to-digital converter unit is the output terminal of the output feedback loop of the radio frequency power amplifier.
  • the predistortion system provided in this embodiment is used to implement the predistortion method provided in the embodiment of the present application.
  • the implementation principle and technical effect of the predistortion system provided in this embodiment are similar to the predistortion method provided in the embodiment of the present application, and will not be omitted here. Go into details.
  • FIG. 6 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • the device provided by the present application includes one or A plurality of processors 21 and a storage device 22; there may be one or more processors 21 in the device, and one processor 21 is taken as an example in FIG. 6; the storage device 22 is used to store one or more programs; the one The one or more programs are executed by the one or more processors 21, so that the one or more processors 21 implement the method as described in the embodiments of the present application.
  • the equipment also includes: a communication device 23, an input device 24, and an output device 25.
  • the processor 21, the storage device 22, the communication device 23, the input device 24, and the output device 25 in the device may be connected by a bus or other methods.
  • the connection by a bus is taken as an example.
  • the input device 24 can be used to receive input digital or character information, and generate key signal input related to user settings and function control of the device.
  • the output device 25 may include a display device such as a display screen.
  • the communication device 23 may include a receiver and a transmitter.
  • the communication device 23 is configured to perform information transceiving and communication under the control of the processor 21.
  • the storage device 22 can be configured to store software programs, computer-executable programs, and modules, such as program instructions/predistortion systems corresponding to the methods described in the embodiments of the present application.
  • the storage device 22 may include a storage program area and a storage data area.
  • the storage program area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of the device, and the like.
  • the storage device 22 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the storage device 22 may include memories remotely provided with respect to the processor 21, and these remote memories may be connected to the device through a network.
  • Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the embodiment of the present application further provides a storage medium, the storage medium stores a computer program, and the computer program implements any of the methods described in the present application when the computer program is executed by a processor, the storage medium stores a computer program, and the computer
  • the predistortion method provided in the embodiment of the present application is implemented and applied to a predistortion system.
  • the predistortion system includes: a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop.
  • the method includes: Input the training complex vector into the predistortion system to obtain the complex scalar corresponding to the training complex vector output by the predistortion system; train the predistortion system based on the training complex vector and the complex scalar , Until the generalized error vector magnitude and generalized adjacent channel leakage ratio corresponding to the predistortion system meet the set requirements; input the service complex vector into the trained predistortion system to obtain a complex scalar for predistortion correction.
  • the computer storage medium of the embodiment of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above.
  • Examples of computer-readable storage media include: electrical connections with one or more wires, portable computer disks, hard disks, random access memory (Random Access Memory, RAM), read-only memory (Read Only) Memory, ROM), Erasable Programmable Read Only Memory (EPROM), flash memory, optical fiber, compact Disc Read-Only Memory (CD-ROM), optical storage devices, magnetic Storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to: electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable medium may send, propagate or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
  • suitable medium including but not limited to: wireless, wire, optical cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
  • the computer program code used to perform the operations of this application can be written in one or more programming languages or a combination thereof.
  • the programming languages include object-oriented programming languages—such as Java, Smalltalk, C++, and also conventional Procedural programming language-such as "C" language or similar programming language.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network-including Local Area Network (LAN) or Wide Area Network (WAN)-or it can be connected to an external computer (For example, use an Internet service provider to connect via the Internet).
  • LAN Local Area Network
  • WAN Wide Area Network
  • equipment such as terminal equipment
  • wireless user equipment such as mobile phones, portable data processing devices, portable web browsers, or vehicle-mounted mobile stations.
  • the various embodiments of the present application can be implemented in hardware or dedicated circuits, software, logic or any combination thereof.
  • some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software that may be executed by a controller, microprocessor, or other computing device, although the application is not limited thereto.
  • the embodiments of the present application may be implemented by executing computer program instructions by a data processor of a mobile device, for example, in a processor entity, or by hardware, or by a combination of software and hardware.
  • Computer program instructions can be assembly instructions, instruction set architecture (Instruction Set Architecture, ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or written in any combination of one or more programming languages Source code or object code.
  • the block diagram of any logic flow in the drawings of the present application may represent program steps, or may represent interconnected logic circuits, modules, and functions, or may represent a combination of program steps and logic circuits, modules, and functions.
  • the computer program can be stored on the memory.
  • the memory can be of any type suitable for the local technical environment and can be implemented using any suitable data storage technology, such as but not limited to read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), optical Memory devices and systems (Digital Video Disc (DVD) or Compact Disk (CD)), etc.
  • Computer-readable media may include non-transitory storage media.
  • the data processor can be any type suitable for the local technical environment, such as but not limited to general-purpose computers, special-purpose computers, microprocessors, digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (ASICs) ), programmable logic devices (Field-Programmable Gate Array, FPGA), and processors based on multi-core processor architecture.
  • DSP Digital Signal Processing
  • ASICs application specific integrated circuits
  • FPGA Field-Programmable Gate Array
  • FPGA Field-Programmable Gate Array

Landscapes

  • Engineering & Computer Science (AREA)
  • Power Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Nonlinear Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Amplifiers (AREA)

Abstract

本文公开了一种预失真方法、系统、设备及存储介质,该方法应用于预失真系统,所述预失真系统包括:预失真乘法器、复数神经网络和射频功放输出反馈回路,所述方法包括:将训练用复数向量输入所述预失真系统,得到所述预失真系统输出所述训练用复数向量对应的复数标量;基于所述训练用复数向量和所述复数标量,训练所述预失真系统,直至所述预失真系统对应的泛化误差向量幅度和泛化邻道泄漏比达到设定要求;将业务复数向量输入训练后的预失真系统,得到预失真校正的复数标量。

Description

预失真方法、系统、设备及存储介质 技术领域
本申请涉及通信技术领域,例如涉及一种预失真方法、系统、设备及存储介质。
背景技术
功率放大器是关系到能量消耗和信号质量的关键部件。功率放大器模型和预失真模型已经成为高效通信和高数据率通信的关键技术。
理想的预失真模型与功率放大器模型的非线性特性在数学上是严格互逆的,预失真模型与功率放大器模型均属于典型的非线性拟合问题。然而,传统的预失真模型和功率放大器模型的非线性表示能力先天不足。
发明内容
本申请提供了一种预失真方法、系统、设备及存储介质,解决了传统的预失真模型和功率放大器模型的非线性表示能力先天不足的问题,同时解决了应用神经网络时采用分离式处理所造成的对于功放系统的动态变化反应滞后从而预失真校正效果较差的问题。
本申请实施例提供了一种预失真方法,应用于预失真系统,所述预失真系统包括:预失真乘法器、复数神经网络和射频功放输出反馈回路;所述方法包括:
将训练用复数向量输入所述预失真系统,得到所述预失真系统输出的所述训练用复数向量对应的复数标量;基于所述训练用复数向量和所述复数标量,训练所述预失真系统,直至所述预失真系统对应的泛化误差向量幅度和泛化邻道泄漏比达到设定要求;将业务复数向量输入训练后的预失真系统,得到预失真校正的复数标量。
本申请实施例还提供了一种预失真系统,执行本申请实施例提供的预失真方法,所述预失真系统包括:预失真乘法器、复数神经网络和射频功放输出反馈回路;所述预失真乘法器的第一输入端为所述预失真系统的输入端,与所述复数神经网络的第一输入端和第二输入端连接,所述预失真乘法器的输出端与所述射频功放输出反馈回路的输入端连接,所述射频功放输出反馈回路的输出端为所述预失真系统的输出端,所述射频功放输出反馈回路的输出端与所述复 数神经网络的第二输入端连接;所述复数神经网络的输出端与所述预失真乘法器的第二输入端连接。
本申请实施例还提供了一种预失真系统,执行本申请实施例提供的预失真方法,所述预失真系统包括:预失真乘法器、复数神经网络、射频功放输出反馈回路、第一实时功率归一化单元和第二实时功率归一化单元;所述第一实时功率归一化单元的输入端为所述预失真系统的输入端,所述第一实时功率归一化单元的输出端与所述预失真乘法器的第一输入端、所述复数神经网络的第一输入端和所述复数神经网络的第二输入端连接,所述预失真乘法器的输出端与所述射频功放输出反馈回路的输入端连接,所述射频功放输出反馈回路的输出端为所述预失真系统的输出端,所述射频功放输出反馈回路的输出端与所述第二实时功率归一化单元的输入端连接,所述第二实时功率归一化单元的输出端与所述复数神经网络的第二输入端连接,所述复数神经网络的输出端与所述预失真乘法器的第二输入端连接。
本申请实施例还提供了一种设备,包括:一个或多个处理器;存储装置,用于存储一个或多个程序;当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现上述的预失真方法。
本申请实施例还提供了一种存储介质,所述存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述的预失真方法。
附图说明
图1为本申请实施例提供的一种预失真方法的流程示意图;
图2为传统的基于记忆多项式的功放模型示意图;
图3a为本申请实施例提供的一种预失真系统的结构示意图;
图3b为本申请实施例提供的又一种预失真系统的结构示意图;
图4为本申请实施例所获得的泛化邻道泄漏比的性能效果图;
图5为本申请实施例所获得的泛化邻道泄漏比的改善效果图;
图6为本申请实施例提供的一种设备的结构示意图。
具体实施方式
下文中将结合附图对本申请的实施例进行说明。
在附图的流程图示出的步骤可以在诸如一组计算机可执行指令的计算机系统中执行。并且,虽然在流程图中示出了逻辑顺序,但是在一些情况下,可以 以不同于此处的顺序执行所示出或描述的步骤。
在一个示例性实施方式中,图1为本申请实施例提供的一种预失真方法的流程示意图,该方法可以适用于进行预失真处理的情况,该方法可以由预失真系统执行,预失真系统可以由软件和/或硬件实现,并集成在终端设备上。
功率放大器(Power Amplifier,PA)通常具有三种特性:1)器件本身的静态非线性(即Static(Device)Nonlinearity);2)来源于匹配网络(即Matching Network)和设备元器件延时的线性记忆效应(即Linear Memory Effect);3)非线性记忆效应(即Nonlinear Memory Effect),该效应主要来源于晶体管陷阱效应(即Trapping Effect)和偏置网络(即Bias Network)的非理想特性,以及输入电平对温度变化的依赖性等。
预失真模块(Predistorter,PD)位于射频功放之前,用以对信号通过功放时将会产生的非线性失真进行预先的逆处理;在数字域实施的预失真称为数字预失真(Digital PD,DPD)。
因此,理想的预失真与功放的非线性特性在数学上是严格互逆的,二者均属于典型的非线性拟合问题。而传统的用于功放非线性特性拟合的级数或多项式模型,对较复杂非线性的表示或拟合能力存在先天不足的问题。下面以记忆多项式的功放模型为例,该模型的数学表达式如下:
Figure PCTCN2021093060-appb-000001
图2为传统的基于记忆多项式的功放模型示意图,该功放模型相当于一个仅有输入层和输出层的零隐层网络,连多层感知器网络(Multi-Layer Perceptron,MLP)都算不上。然而,领域内众所周知地,网络的表达能力与其结构复杂度是正向相关的;这也是传统功放模型的非线性表达能力先天不足的根本原因。然而,前述预失真与功放非线性互为反函数,故可以预计采用传统级数或多项式模型来建模DPD时,也势必存在相同瓶颈。
传统功放模型通常还会进行各种截短或简化,从而使得对功放真实的非线性特性刻画更为不足;也导致由该模型所求解的参数,在遇到全新数据或输入时,得到的输出与预期输出存在较大偏差,即模型不具有对新数据的普适和泛化能力,而泛化能力正是神经网络性能测试的最主要指标。同时神经网络具有丰富的结构形态,并且其非线性表达或拟合能力也早已经过学界和业界的广泛验证。
不过,采用神经网络进行预失真的方法和技术,都采用的是对功放非线性特性学习和进行预失真处理两个环节相互分离的方式,即:第一步,神经网络 并不接入信号处理的主链路,即并不对信号进行预失真处理,只是接受功放的反馈输出,用以训练或学习功放的非线性特性或其逆特性;等到非线性特性学习完毕后,再将神经网络接入主链路中,或者将其训练所得的权值传给一个主链路上的复本神经网络,再开始进行预失真处理。
这种分离式处理势必造成神经网络对于功放系统的动态变化反应滞后或学习不匹配的问题,即接入系统时刻以前训练的神经网络的权值,所反映的仅是系统之前的动态特性,对于当前时刻系统动态的适应能力则是不确定或无保证的;从而预失真校正效果较差。
本申请提供了一种预失真方法,该方法应用于预失真系统,所述预失真系统包括:预失真乘法器、复数神经网络和射频功放输出反馈回路,如图1所示,该方法包括如下步骤:
S110、将训练用复数向量输入所述预失真系统,得到所述预失真系统输出的所述训练用复数向量对应的复数标量。
本申请基于预失真系统进行预失真校正,预失真系统可以认为是能够实现预失真校正的系统,该预失真系统包括功率放大器,该预失真系统可以将功率放大器的非线性特性学习和预失真校正进行融合,即预失真系统在训练时直接连接至功率放大器,一边进行功率放大器的非线性特征学习,一边进行预失真校正,提高了效率和准确率。预失真系统还包括预失真乘法器和复数神经网络。复数神经网络可以为预失真乘法器提供所乘的复系数。预失真乘法器可以认为是实现预失真功能的乘法器。
在一个实施例中,预失真系统还可以包括第一实时功率归一化单元和第二实时功率归一化单元。其中,“第一”和“第二”仅用于区分实时功率归一化单元。实时功率归一化单元可以进行归一化处理。如,第一实时功率归一化单元可以将训练用复数向量归一化后输入至预失真系统的预失真乘法器和复数神经网络。第二功率归一化单元可以将训练用复数向量对应的复数标量归一化后返回至预失真系统的复数神经网络。
训练用复数向量可以认为是训练预失真系统使用的复数向量。本步骤可以将训练用复数向量输入预失真系统以得到预失真系统输出的复数标量,该复数标量可以由预失真系统中的射频功放输出反馈回路输出。复数标量和训练用复数向量可以作为训练预失真系统的样本,基于复数标量可以训练预失真系统,并在训练的过程中能够学习功率放大器的非线性特征。
S120、基于所述训练用复数向量和所述复数标量,训练所述预失真系统,直至所述预失真系统对应的泛化误差向量幅度和泛化邻道泄漏比达到设定要求。
本申请可以将训练用复数向量和复数标量输入至预失真系统,以对预失真系统进行训练。该训练可以为有监督的训练。可以通过复数神经网络的损失函数更新复数神经网络各层的权值参数和偏置参数。
预失真系统训练结束的条件可以基于泛化误差向量幅度和泛化邻道泄漏比确定。在泛化误差向量幅度和泛化邻道泄漏比达到设定要求的情况下,可以认为预失真系统训练完成。设定要求可以根据实际需求设定,此处不作限定,如基于预失真的指标确定设定要求。
在一个实施例中,在所述泛化误差向量幅度和泛化邻道泄漏比对应的数值大于或等于各自的设定阈值的情况下,完成所述预失真系统的训练,在所述泛化误差向量幅度和泛化邻道泄漏比对应的数值小于各自的设定阈值的情况下,继续训练所述预失真系统。
设定阈值可以为根据实际需求设定的,此处不作限定。在泛化误差向量幅度和泛化邻道泄漏比未达到设定要求的情况下,可以继续使用该训练用复数向量训练预失真系统。
本申请通过将泛化误差向量幅度和泛化邻道泄漏比均达到设定要求作为预失真系统训练结束的条件,可以衡量和反映预失真系统进行预失真校正时的普适和泛化能力。所述泛化能力是指对所述预失真系统重复输入训练集中复数向量、从而训练在本质上是系统对已知数据的学习,在这个过程之后固化预失真系统的参数,然后输入全新的、系统未知的测试集中的数据以统计和考察系统对更广泛的新输入或新数据的适应或预失真的能力。
S130、将业务复数向量输入训练后的预失真系统,得到预失真校正的复数标量。
在预失真系统训练完成后,可以将业务复数向量输入至训练后的预失真系统,得到预失真校正的复数标量。其中,业务复数向量可以为在应用预失真系统过程时的业务向量,该向量为复数向量。
本申请提供了一种预失真方法,该方法应用于预失真系统,所述预失真系统包括:预失真乘法器、复数神经网络和射频功放输出反馈回路,该方法首先将训练用复数向量输入所述预失真系统,得到所述预失真系统输出的所述训练用复数向量对应的复数标量;然后基于所述训练用复数向量和所述复数标量,训练所述预失真系统,直至所述预失真系统对应的泛化误差向量幅度和泛化邻道泄漏比达到设定要求;最后将业务复数向量输入训练后的预失真系统,得到预失真校正的复数标量。本申请通过预失真系统替代相关技术中的预失真模型,预失真系统中的复数神经网络具有丰富的结构形态,非线性表达或拟合能力强, 能够有效解决传统的预失真模型和功率放大器的非线性表示能力先天不足的问题。
在上述实施例的基础上,提出了上述实施例的变型实施例,为了使描述简要,在变型实施例中仅描述与上述实施例的不同之处。
在一个实施例中,所述预失真系统使用复数误差反向传播算法进行训练。
复数误差反向传播算法可以认为是复数的误差反向传播算法。误差反向传播算法学习过程由信号的正向传播与误差的反向传播两个过程组成。
在一个实施例中,训练所述预失真系统,包括:
初始化预失真系统的系统参数;基于训练集和所述复数标量训练所述预失真系统;基于测试集测试训练后的预失真系统,得到所述预失真系统对应的泛化误差向量幅度和泛化邻道泄漏比;在所述泛化误差向量幅度和泛化邻道泄漏比对应的数值大于或等于各自的设定阈值的情况下,完成所述预失真系统的训练,在所述泛化误差向量幅度和泛化邻道泄漏比对应的数值小于各自的设定阈值的情况下,继续基于所述训练集训练所述预失真系统;其中,所述训练集和所述测试集基于归一化后的复数向量拆分获得,所述归一化后的复数向量为将所述训练用复数向量输入所述预失真系统所包括的第一实时功率归一化单元后得到的第一实时功率归一化单元的输出;或者,所述训练集和所述测试集基于所述训练用复数向量拆分获得。
系统参数可以认为是预失真系统初始化时所需的参数,此处不对系统参数的内容进行限定,可以根据实际情况设定。示例性的,系统参数包括但不限于:非线性阶数参数、功放记忆效应参数和预失真系统所包括复数神经网络的初始输出。
初始化系统参数可以认为是设置系统参数,设置的数值可以为经验值或基于历史的系统参数确定。
训练集可以认为是训练预失真系统的复数向量的集合。测试集可以认为是检验预失真系统泛化性能的复数向量的集合。训练集和测试集中所包括的复数向量不同,两者的合集为训练用复数向量。
每次从训练集中部分截取地输入所述预失真系统的复数向量,和预失真系统相应输出的复数标量,是一一对应的;训练预失真系统时,正是基于所述复数向量中的当前元素和所述复数标量来计算预失真系统中复数神经网络的损失函数,进而基于损失函数更新复数神经网络各层的权值参数和偏置参数。
每次从测试集中部分截取地输入所述预失真系统的复数向量,和预失真系统相应输出的复数标量,也是一一对应的;检验预失真系统的泛化性能时,正 是基于整个测试集和所有所述复数标量构成的输出集,来统计获得泛化误差向量幅度(Generalization Error Vector Magnitude,GEVM)和泛化邻道泄漏比(Generalization Adjacent Channel Power Ratio,GACLR)性能,并用以判断是否结束对预失真系统的训练。
在一个实施例中,归一化后的复数向量中的元素通过如下计算表达式确定:
Figure PCTCN2021093060-appb-000002
Figure PCTCN2021093060-appb-000003
其中,x 1(n)表示归一化后的复数向量中的第n个元素,x 1,raw(n)表示训练用复数向量中第n个元素,x 1,raw(d)表示训练用复数向量中第d个元素。
在一个实施例中,所述初始化预失真系统的系统参数,包括:
根据所述复数神经网络中每层对应的层类型完成相应层的初始化。
初始化预失真系统的系统参数还包括:设置非线性阶数参数、功放记忆效应参数和预失真系统所包括复数神经网络的初始输出。设置的功放记忆效应参数,并非对功率放大器的限定。设置的非线性阶数参数和功放记忆效应参数仅用于预失真系统的训练。
对非线性阶数参数、功放记忆效应参数和预失真系统所包括复数神经网络的初始输出的数值不作限定。对设置非线性阶数参数、功放记忆效应参数和预失真系统所包括复数神经网络的初始输出的操作和初始化复数神经网络每层的操作的执行顺序不作限定。
复数神经网络各层的初始化可以基于对应的初始化设置参数完成,对初始化设置参数不作限定,如初始化设置参数可以为分布类型和分布参数。
在一个实施例中,所述根据所述复数神经网络中每层对应的层类型完成相应层的初始化,包括:
根据所述复数神经网络中每层的分布类型和分布参数初始化对应层的权值参数和偏置参数。
示例性的,若当前(待初始化)层为全连接层,根据该层所设置的随机初始化的分布类型(例如:高斯分布、均匀分布等)和分布参数(均值、方差、标准差等),初始化该层的权值参数和偏置参数。
在一个实施例中,所述基于训练集和所述复数标量训练所述预失真系统, 包括:
将训练集中的复数向量按照元素索引输入所述预失真系统,每次输入的复数向量的长度基于功放记忆效应参数确定,每次输入至预失真系统的复数向量由训练集中的历史元素和当前元素组合得到;将所述预失真系统中的射频功放输出反馈回路的输出,通过所述预失真系统所包括的第二实时功率归一化单元后,再输入所述复数神经网络;或者,将所述预失真系统中的射频功放输出反馈回路的输出直接输入所述复数神经网络;根据所述预失真系统中复数神经网络的损失函数对于所述复数神经网络输出的校正向量各元素的偏导数或灵敏度,确定所述损失函数对所述复数神经网络内部各层的权值参数和偏置参数的偏导数或灵敏度;根据确定的各层的偏导数或灵敏度更新对应层的权值参数和偏置参数;其中,每次输入至预失真系统的复数向量的表达式如下所示:
Figure PCTCN2021093060-appb-000004
其中,
Figure PCTCN2021093060-appb-000005
为每次输入至预失真系统的复数向量,M 1为功放记忆效应参数,0≤M 1≤TrainSize,TrainSize为所述训练集的长度,
Figure PCTCN2021093060-appb-000006
为训练集中第n个元素,
Figure PCTCN2021093060-appb-000007
为当前元素,m为大于1且小于M 1的整数,
Figure PCTCN2021093060-appb-000008
Figure PCTCN2021093060-appb-000009
分别为训练集中第n-1个元素、第n-m个元素和第n-M 1个元素,
Figure PCTCN2021093060-appb-000010
Figure PCTCN2021093060-appb-000011
均为历史元素。
预失真系统中的射频功放输出反馈回路的输出是复数标量。训练集中的复数向量按照元素索引输入至预失真系统,即输入至预失真系统中的预失真乘法器和复数神经网络。每次输入至预失真系统的复数向量的长度可以由功放记忆效应参数确定,如每次输入训练集中的一个当前元素,然后由功放记忆效应参数确定输入至预失真系统的历史元素的个数。对确定历史元素的个数的手段不作限定,如历史元素的个数不大于功放记忆效应参数。
在将训练集中的复数向量输入预失真系统,并将射频功放输出反馈回路的输出反馈会预失真系统后,可以确定预失真系统中复数神经网络的损失函数,然后基于损失函数对复数神经网络输出的校正向量各元素的偏导数或灵敏度,确定所述损失函数对所述复数神经网络内部各层的权值参数和偏置参数的偏导数或灵敏度,以用于更新复数神经网络各层的权值参数和偏置参数,以实现预失真系统的训练。
在一个实施例中,输入至所述预失真系统中的复数神经网络的复数向量和所述复数神经网络输出的校正向量的关系如下所示:
Figure PCTCN2021093060-appb-000012
其中,
Figure PCTCN2021093060-appb-000013
为校正向量,
Figure PCTCN2021093060-appb-000014
Figure PCTCN2021093060-appb-000015
分别为校正向量中的第n个元素、第n-1个元素、第n-m个元素和第n-M 1个元素,ComplexNN表示复数神经网络内部逐层运算函数的复合函数。
输入至所述预失真系统中的预失真乘法器的复数向量、校正向量和所述预失真乘法器的输出的关系如下所示:
Figure PCTCN2021093060-appb-000016
其中,
Figure PCTCN2021093060-appb-000017
为预失真乘法器输出的复数向量,
Figure PCTCN2021093060-appb-000018
Figure PCTCN2021093060-appb-000019
为预失真乘法器输出的复数向量中的第n个元素、第n-1个元素、第n-M+1个元素和第n-M 1个元素,
Figure PCTCN2021093060-appb-000020
为输入至预失真乘法器的复数向量,
Figure PCTCN2021093060-appb-000021
表示点乘。
所述预失真乘法器的输出和所述射频功放输出反馈回路的输出的关系如下所示:
Figure PCTCN2021093060-appb-000022
其中,
Figure PCTCN2021093060-appb-000023
为射频功放输出反馈回路的输出,PA表示所述射频功放输出反馈回路对输入信号的处理函数。
在所述预失真系统包括第二实时功率归一化单元的情况下,所述第二实时功率归一化单元的输入和输出的关系如下所示:
Figure PCTCN2021093060-appb-000024
Figure PCTCN2021093060-appb-000025
其中,
Figure PCTCN2021093060-appb-000026
为所述第二实时功率归一化单元的第n个输出,
Figure PCTCN2021093060-appb-000027
为第二实时功率归一化单元的第n个输入,n为正整数,d为大于或等于1且小于或等于n的正整数,
Figure PCTCN2021093060-appb-000028
为射频功放输出反馈回路的第d个输出。
在一个实施例中,根据所述预失真系统中的复数神经网络的损失函数对于所述复数神经网络输出的校正向量各元素的偏导数或灵敏度,确定所述损失函数对所述复数神经网络内部各层的权值参数和偏置参数的偏导数或灵敏度,包括:
基于射频功放输出反馈回路的反馈量和输入至所述复数神经网络的复数向量,确定所述复数神经网络的损失函数;确定所述损失函数对所述复数神经网络输出的校正向量各元的偏导数或灵敏度;根据所述校正向量的偏导数或灵敏 度,确定所述损失函数对所述复数神经网络内部各层的权值参数和偏置参数的偏导数或灵敏度;基于各层的权值参数和偏置参数的偏导数或灵敏度更新对应层的权值参数和偏置参数。
在一个实施例中,所述损失函数通过如下计算表达式计算确定:
Figure PCTCN2021093060-appb-000029
Figure PCTCN2021093060-appb-000030
为损失函数。
在预失真系统包括第二实时功率归一化单元的情况下,射频功放输出反馈回路的反馈量是将所述射频功放输出反馈回路的的输出输入第二实时功率归一化单元后得到的第二实时功率归一化单元的输出;在预失真系统不包括第二实时功率归一化单元的情况下,射频功放输出反馈回路的反馈量就是射频功放输出反馈回路的输出。
损失函数对复数神经网络输出的校正向量各元素的偏导数或灵敏度可以基于损失函数对预失真系统中预失真乘法器、射频功放输出反馈回路和/或第二实时功率归一化单元的输出的偏导数或灵敏度确定。
在确定出校正向量的偏导数或灵敏度后,本申请可以基于校正向量的偏导数或灵敏度更新复数神经网络各层的权值参数和偏置参数。
在一个实施例中,所述校正向量各元素的偏导数或灵敏度的计算表达式如下所示:
Figure PCTCN2021093060-appb-000031
其中,
Figure PCTCN2021093060-appb-000032
为所述校正向量各元的偏导数或灵敏度,
Figure PCTCN2021093060-appb-000033
为所述损失函数对于所述预失真乘法器输出的复数向量各元素的偏导数或灵敏度,conj(·)表示对复数取共轭;
Figure PCTCN2021093060-appb-000034
通过如下计算表达式计算确定:
Figure PCTCN2021093060-appb-000035
其中,
Figure PCTCN2021093060-appb-000036
“·”表示标量相乘,L 1为非线性阶数参数,
Figure PCTCN2021093060-appb-000037
为所述预失真乘法器输出的复数向量中的元素,real(·) 表示求实部,imag(·)表示求虚部,δ l,m表示中间偏导数或灵敏度。
在一个实施例中,在所述预失真系统不包括第二实时功率归一化单元的情况下,δ l,m通过如下计算表达式确定:
Figure PCTCN2021093060-appb-000038
在所述预失真系统包括第二实时功率归一化单元的情况下,δ l,m通过如下计算表达式确定:
Figure PCTCN2021093060-appb-000039
其中,c l,m为根据记忆多项式模型,基于训练集中的复数向量和将其输入所述射频功放输出反馈回路所得的输出,采用最小二乘算法所求得的复系数;
Figure PCTCN2021093060-appb-000040
为所述损失函数对于射频功放输出反馈回路输出的复数标量的偏导数或灵敏度。
Figure PCTCN2021093060-appb-000041
通过如下计算表达式计算确定:
Figure PCTCN2021093060-appb-000042
其中,
Figure PCTCN2021093060-appb-000043
为所述损失函数对于射频功放输出反馈回路反馈量的偏导数或灵敏度。
Figure PCTCN2021093060-appb-000044
通过如下计算表达式计算确定:
Figure PCTCN2021093060-appb-000045
在一个实施例中,所述复数神经网络内部的全连接层的权值参数和偏置参数的偏导数或灵敏度分别为:
Figure PCTCN2021093060-appb-000046
Figure PCTCN2021093060-appb-000047
其中,
Figure PCTCN2021093060-appb-000048
表示从复数神经网络的第j层的第u个神经元到第k层的第v个神经元连接的复数权值参数,
Figure PCTCN2021093060-appb-000049
表示复数神经网络的第k层的第v个神经元的偏置参数,
Figure PCTCN2021093060-appb-000050
Figure PCTCN2021093060-appb-000051
分别表示复数神经网络的第j层的第u个神经元和第k层的第v个神经元的输出复数向量;f'(·)表示神经元激活函数对输入信号的导数,
Figure PCTCN2021093060-appb-000052
为全连接层的权值参数的偏导数或灵敏度,
Figure PCTCN2021093060-appb-000053
为全连接层的 偏置参数的偏导数或灵敏度,
Figure PCTCN2021093060-appb-000054
为损失函数对
Figure PCTCN2021093060-appb-000055
的偏导数或灵敏度。
在当前层为复数神经网络的最后一层时,损失函数对
Figure PCTCN2021093060-appb-000056
的偏导数或灵敏度等于校正向量的偏导数或灵敏度。
在一个实施例中,所述复数神经网络内部的卷积层的第q个卷积核的权值参数和偏置参数的偏导数或灵敏度分别为:
Figure PCTCN2021093060-appb-000057
Figure PCTCN2021093060-appb-000058
其中,
Figure PCTCN2021093060-appb-000059
为第q个卷积核的权值参数的偏导数或灵敏度,
Figure PCTCN2021093060-appb-000060
为第q个卷积核的偏置参数的偏导数或灵敏度,
Figure PCTCN2021093060-appb-000061
表示该卷积层的前一层,第j层输出的第p个复数向量,
Figure PCTCN2021093060-appb-000062
表示第k层输出的第q个复数向量,q=1,2,…,Q,p=1,2,…,P,其中,Q和P分别为第k层和第j层的输出特征向量数,
Figure PCTCN2021093060-appb-000063
表示所述损失函数对于
Figure PCTCN2021093060-appb-000064
的偏导数或灵敏度,Fliplr(·)表示对输入向量做位置反置。每个特征图可以是一个复数向量形式。
在一个实施例中,所述复数神经网络内部的全连接层的权值参数和偏置参数通过如下表达式更新:
Figure PCTCN2021093060-appb-000065
Figure PCTCN2021093060-appb-000066
其中,
Figure PCTCN2021093060-appb-000067
Figure PCTCN2021093060-appb-000068
分别表示全连接层的权值参数
Figure PCTCN2021093060-appb-000069
在当前时刻和上一时刻时的值;
Figure PCTCN2021093060-appb-000070
Figure PCTCN2021093060-appb-000071
分别表示全连接层的偏置参数
Figure PCTCN2021093060-appb-000072
在当前时刻和上一时刻时的值;
Figure PCTCN2021093060-appb-000073
表示当前时刻对于权值参数
Figure PCTCN2021093060-appb-000074
的更新步长;
Figure PCTCN2021093060-appb-000075
表示当前时刻对于偏置参数
Figure PCTCN2021093060-appb-000076
的更新步长。
在一个实施例中,基于测试集测试训练后的预失真系统,得到所述预失真系统对应的误差向量幅度和邻道泄漏比,包括:
将测试集中的复数向量按照元素索引输入预失真系统,每次输入的复数向量的长度基于功放记忆效应参数确定,每次输入至预失真系统的复数向量由测试集中的历史元素和当前元素组合得到;将所述预失真系统中的射频功放输出反馈回路的输出输入至所述预失真系统,确定所述预失真系统对应的泛化误差向量幅度和泛化邻道泄漏比。
基于测试集测试训练后的预失真系统采用的手段与基于训练集训练预失真系统所采用的手段相似,不同之处在于基于测试集无需更新复数神经网络各层的权值参数和偏置参数,而是,直接计算确定泛化误差向量幅度和泛化邻道泄漏比,以确定泛化误差向量幅度和泛化邻道泄漏比是否满足设定要求。计算泛化误差向量幅度和泛化邻道泄漏比时,可以基于射频功放输出反馈回路的反馈量和对应的测试集中的复数向量等确定。
将所述预失真系统中的射频功放输出反馈回路的输出输入至预失真信息系统包括将射频功放输出反馈回路的输出通过所述预失真系统所包括的第二实时功率归一化单元后,再输入所述复数神经网络;或者,将射频功放输出反馈回路的输出直接输入所述复数神经网络。
在一个实施例中,泛化误差向量幅度和泛化邻道泄漏比分别通过如下计算表达式确定:
Figure PCTCN2021093060-appb-000077
Figure PCTCN2021093060-appb-000078
Figure PCTCN2021093060-appb-000079
Figure PCTCN2021093060-appb-000080
其中,GEVM 1表示泛化误差向量幅度,
Figure PCTCN2021093060-appb-000081
表示
Figure PCTCN2021093060-appb-000082
的泛化邻道泄漏比,
Figure PCTCN2021093060-appb-000083
表示所述射频功放输出反馈回路的反馈量,
Figure PCTCN2021093060-appb-000084
表示所述射频功放输出反馈回路的第n个元素反馈量,
Figure PCTCN2021093060-appb-000085
表示所述测试集中对应的复数向量的第n个元素,TestSize表示测试集的长度,TestSize与TrainSize的和为N 1,N 1表示训练用复数向量的长度,HBW表示有效信号带宽的一半,GBW表示一半的保护带宽,NFFT表示离散傅里叶变换的点数;Win NFFT是窗长为NFFT的窗函数的系数向量,
Figure PCTCN2021093060-appb-000086
表示随机从
Figure PCTCN2021093060-appb-000087
中截取NFFT长的信号;
Figure PCTCN2021093060-appb-000088
是取值范围[1,TestSize]里均匀分布的随机正整数;K是随机截取的次数。
在所述预失真系统包括第二实时功率归一化单元的情况下,射频功放输出反馈回路的反馈量可以认为是第二实时功率归一化单元的输出;在所述预失真系统不包括第二实时功率归一化单元的情况下,射频功放输出反馈回路的反馈量可以为射频功放输出反馈回路的输出。
以下对本申请进行示例性的描述,本申请提供的预失真方法解决了传统功放模型和预失真模型的非线性表示能力先天不足和泛化能力缺失的问题,获得了更好的误差向量幅度(Error Vector Magnitude,EVM)性能和邻道泄漏比(Adjacent Channel Power Ratio,ACLR)性能。
本申请采用结构形态更为丰富的神经网络进行系统的建模,解决了传统的功放模型和DPD模型非线性表达能力先天不足的缺陷;同时彻底摒弃应用神经网络进行预失真的方法和技术的分离处理方式,从始至终将功放的特性学习和预失真校正融合和处理为一个整体,首次提出了AI-DPD一体化方案,即人工智能替代DPD的一体化方案。
在一个实施例中,在业务复数向量发送之前先发送已知的训练用复数向量,通过预失真乘法器、复数神经网络和射频功放输出反馈回路,与所述训练用复数向量一起,用于AI-DPD一体化方案系统(即预失真系统)的训练;当训练达到了所要求的GEVM和GACLR后停止训练;同时,发送所述业务复数向量通过预失真系统,输出得到预失真校正的复数标量。
图3a为本申请实施例提供的一种预失真系统的结构示意图,参见图3a,该预失真系统,即AI-DPD一体化方案系统主要包括:第一实时功率归一化(即PNorm)单元、第二实时功率归一化单元、预失真乘法器(即DPD Multiplier)单元、复数神经网络(即Complex Neural Network)单元、和射频功放输出反馈回路。其中,预失真乘法器单元,即预失真乘法器。复数神经网络单元可以包括复数神经网络、选择器(即MUX)和加法器。射频功放输出反馈回路包括数模转换器单元(即D/A)、射频调制单元、功率放大器单元、模数转换器单元(即A/D)和射频解调单元。
其中,
Figure PCTCN2021093060-appb-000089
是由历史元素(历史元素的长度由功放记忆效应参数M 1设定)和当前元素组成的原始输入复数向量;
Figure PCTCN2021093060-appb-000090
是对
Figure PCTCN2021093060-appb-000091
进行实时功率归一化后的复数向量,
Figure PCTCN2021093060-appb-000092
也是复数神经网络的输入;
Figure PCTCN2021093060-appb-000093
是对输入向量
Figure PCTCN2021093060-appb-000094
进行预失真的复数校正向量,
Figure PCTCN2021093060-appb-000095
也是复数神经网络的输出;
Figure PCTCN2021093060-appb-000096
是预失真校正后的复数向量;y 1,raw是AI-DPD一体化方案最后的原始输出,y 1,raw是一个复数标量;y 1是对y 1,raw进行实时功率归一后的复数标量。
所述复数神经网络是一种神经网络:复数包括实部(Real Part,以下简记为I路)和虚部(Imaginary Part,以下简记为Q路)两部分,且通常指虚部不为0的情形,以与实数加以区分。该神经网络的输入和输出,既可以直接是复数的变量、向量、矩阵或张量,也可以是复数的I路和Q路相组合的形式;该神经网络各层(即Layer)的神经元激活函数和所有其他处理函数的输入和输出,既 可以直接是复数的变量、向量、矩阵或张量,也可以是复数的I路和Q路组合的形式;该神经网络既可以使用复数误差反向传播算法(Complex Error Back Propagation Algorithm,Complex BP)进行训练,也可以使用实数误差反向传播算法(BP)进行训练。
所述复数误差反向传播算法包括但不限于如下步骤:
步骤1:AI-DPD一体化方案系统参数的初始化。
设置非线性阶数参数L 1和功放记忆效应参数M 1,例如:L 1=5,M 1=4;设置复数神经网络的初始输出,也即初始的预失真校正向量
Figure PCTCN2021093060-appb-000097
的各元素为1;并按照前向运行各层的索引逐层地,也可所有层同时并行地,根据层类型完成相应的初始化步骤。所述不同类型层的初始化主要包括但不限于以下步骤:
步骤1-1:若当前(待初始化)层为全连接层,根据该层所设置的随机初始化的分布类型(例如:高斯分布、均匀分布等)和分布参数(均值、方差、标准差等),初始化该层的权值参数和偏置参数;步骤1-2:若当前(待初始化)层为卷积层,根据该层设置的随机初始化的分布类型和分布参数,初始化该层各卷积核的权值参数和偏置参数;步骤1-3:若当前(待初始化)层为其他类型层,例如:微步卷积(即Fractionally Strided Convolutions)层等,按照该类型层的初始化设置参数(如随机初始化的分布类型和分布参数),完成该层权值和偏置参数的初始化。
步骤2:AI-DPD一体化方案系统基于训练集的训练。首先,将长度为N 1的已知的训练用复数向量序列
Figure PCTCN2021093060-appb-000098
进行实时功率归一化(PNorm),因实时功率归一化存在多种变化形式,以下仅是一个示例:
Figure PCTCN2021093060-appb-000099
Figure PCTCN2021093060-appb-000100
然后,再按一定比例(例如:0.5:0.5),将所述实时功率归一化后的序列
Figure PCTCN2021093060-appb-000101
分为训练集
Figure PCTCN2021093060-appb-000102
和测试集
Figure PCTCN2021093060-appb-000103
二者的长度分别记为TrainSize和TestSize;其中,训练集用于AI-DPD一体化方案系统的整体学习或训练;测试集用于在训练的后半阶段测试方案系统对新数据的泛化适应性能;该性能包括所述泛化EVM性能和泛化ACPR性能,二者分别定义如下:
Figure PCTCN2021093060-appb-000104
Figure PCTCN2021093060-appb-000105
其中,HBW指有效信号带宽的一半;GBW指一半的保护带宽;
Figure PCTCN2021093060-appb-000106
的计算如下:
Figure PCTCN2021093060-appb-000107
Figure PCTCN2021093060-appb-000108
其中,NFFT是离散傅里叶变换(FFT)的点数;Win NFFT是窗长为NFFT的窗函数的系数向量,例如:窗函数可采用Blackman窗;
Figure PCTCN2021093060-appb-000109
表示随机从
Figure PCTCN2021093060-appb-000110
中截取NFFT长的信号;
Figure PCTCN2021093060-appb-000111
是取值范围[1,TestSize]里均匀分布的随机正整数;K是随机截取的次数;
Figure PCTCN2021093060-appb-000112
表示点乘。
所述AI-DPD一体化方案系统的训练包括但不限于以下子步骤:
步骤2-1:将所述训练集
Figure PCTCN2021093060-appb-000113
按元素索引n(n=1,2,…,TrainSize)每次组织为如下历史元素和当前元素组合的复数向量
Figure PCTCN2021093060-appb-000114
的形式不断输入系统:
Figure PCTCN2021093060-appb-000115
其中,M 1为功放记忆效应参数,0≤M 1≤TrainSize。
步骤2-2:将所述复数向量
Figure PCTCN2021093060-appb-000116
通过所述复数神经网络(Complex Neural Network,以下简记为ComplexNN)单元,得到所述预失真的复数校正向量
Figure PCTCN2021093060-appb-000117
该单元的输入和输出的关系如下式所示:
Figure PCTCN2021093060-appb-000118
Figure PCTCN2021093060-appb-000119
步骤2-3:将所述复数向量
Figure PCTCN2021093060-appb-000120
通过预失真乘法器(DPD Multiplier)单元,得到所述预失真校正后复数向量
Figure PCTCN2021093060-appb-000121
该单元的输入和输出的关系如下式所示:
Figure PCTCN2021093060-appb-000122
Figure PCTCN2021093060-appb-000123
以上,
Figure PCTCN2021093060-appb-000124
表示点乘。
步骤2-4:将所述预失真校正后复数向量
Figure PCTCN2021093060-appb-000125
输入所述射频功放输出反馈回路,得到的输出为
Figure PCTCN2021093060-appb-000126
预失真乘法器的输出和所述射频功放输出反馈回路的输出的关系如下式所示:
Figure PCTCN2021093060-appb-000127
所述射频功放输出反馈回路中的功率放大器(PA)单元可以为实际的任意功放产品;换言之,本实施例对功放模型的非线性特性没有任何限制条件。
步骤2-5将:所述AI-DPD一体化方案最后的原始输出
Figure PCTCN2021093060-appb-000128
经过射频解调和模数转换(A/D)后,再输入实时功率归一(PNorm)单元,得到功放输出反馈量
Figure PCTCN2021093060-appb-000129
该单元的输入和输出关系如下两式所示:
Figure PCTCN2021093060-appb-000130
Figure PCTCN2021093060-appb-000131
步骤2-6:根据所述功放输出反馈量
Figure PCTCN2021093060-appb-000132
和所述复数向量
Figure PCTCN2021093060-appb-000133
计算复数神经网络的损失函数(Loss Function)。因存在多种附加正则化(Regularization)项的变化形式,故以下损失函数表达式只是一个简单示例:
Figure PCTCN2021093060-appb-000134
步骤2-7:按下式(2-16)计算所述损失函数
Figure PCTCN2021093060-appb-000135
对于所述功放输出反馈量
Figure PCTCN2021093060-appb-000136
的偏导数或灵敏度
Figure PCTCN2021093060-appb-000137
Figure PCTCN2021093060-appb-000138
步骤2-8:根据上述灵敏度
Figure PCTCN2021093060-appb-000139
按下式(2-17)计算所述损失函数
Figure PCTCN2021093060-appb-000140
对于所述AI-DPD一体化方案最后的原始输出
Figure PCTCN2021093060-appb-000141
的偏导数或灵敏度
Figure PCTCN2021093060-appb-000142
Figure PCTCN2021093060-appb-000143
以上,
Figure PCTCN2021093060-appb-000144
由上式(2-13)计算得到。
步骤2-9:根据上述灵敏度
Figure PCTCN2021093060-appb-000145
按下式(2-18)计算如下中间偏导数或灵敏度δ l,m
Figure PCTCN2021093060-appb-000146
以上,c l,m为根据记忆多项式模型,基于训练集中的复数向量和将其输入所述射频功放输出反馈回路所得的输出,采用最小二乘算法所求得的复系数;conj(·)表示对复数取共轭。
步骤2-10:根据上述灵敏度δ l,m按下式(2-19)计算所述损失函数
Figure PCTCN2021093060-appb-000147
对于预失真校正后复数向量
Figure PCTCN2021093060-appb-000148
各元素的偏导数或灵敏度
Figure PCTCN2021093060-appb-000149
Figure PCTCN2021093060-appb-000150
Figure PCTCN2021093060-appb-000151
为避免输入出现0+0j的极端数值情形,若所述预失真校正后复数向量
Figure PCTCN2021093060-appb-000152
中存在零元素(即0+0j),则将上式(2-20)定义的B l,m置为0。
步骤2-11:根据上述灵敏度
Figure PCTCN2021093060-appb-000153
按下式(2-21)计算所述损失函数
Figure PCTCN2021093060-appb-000154
对于所述预失真的复数校正向量
Figure PCTCN2021093060-appb-000155
各元素的偏导数或灵敏度
Figure PCTCN2021093060-appb-000156
Figure PCTCN2021093060-appb-000157
步骤2-12:根据上述灵敏度
Figure PCTCN2021093060-appb-000158
按复数神经网络(Complex Neural Network)单元前向运行的相反顺序,依次计算所述损失函数
Figure PCTCN2021093060-appb-000159
对于其内部各层(Layer)的权值和偏置参数的偏导数或灵敏度,包括但不限于以下子步骤:
步骤2-12-1:若当前待计算层为全连接层(Full-Connected Layer),分别按下式(2-22)和式(2-23)计算所述损失函数
Figure PCTCN2021093060-appb-000160
对于该层的复数权值参数
Figure PCTCN2021093060-appb-000161
和复数偏置参数
Figure PCTCN2021093060-appb-000162
的偏导数或灵敏度:
Figure PCTCN2021093060-appb-000163
Figure PCTCN2021093060-appb-000164
以上,
Figure PCTCN2021093060-appb-000165
表示从网络的第j层的第u个神经元到第k层的第v个神经元连接的复数权值;
Figure PCTCN2021093060-appb-000166
Figure PCTCN2021093060-appb-000167
分别表示网络第j层的第u个神经元和第k层的第v个神经元的输出复数向量;f'(·)表示神经元激活函数对输入信号的导数。
步骤2-12-2:若当前待计算层(设为第k层)为卷积层(Convolutional Layer),分别按下式(2-24)和式(2-25)计算所述损失函数
Figure PCTCN2021093060-appb-000168
对于该层第q(q=1,2,…,Q)个卷积核(Kernel)的复数权值
Figure PCTCN2021093060-appb-000169
和复数偏置
Figure PCTCN2021093060-appb-000170
的偏导数或灵敏度:
Figure PCTCN2021093060-appb-000171
Figure PCTCN2021093060-appb-000172
以上,
Figure PCTCN2021093060-appb-000173
表示该卷积层的前一层(第j层)输出的第p(p=1,2,…,P)个复数向量;
Figure PCTCN2021093060-appb-000174
表示当前卷积层(第k层)输出的第q个复数向量;
Figure PCTCN2021093060-appb-000175
表示所述损失函数
Figure PCTCN2021093060-appb-000176
对于上述
Figure PCTCN2021093060-appb-000177
的偏导数或灵敏度;Conv(·)表示卷积运算;Fliplr(·)表示对输入向量做位置反置。
步骤2-13:根据各层计算的所述损失函数
Figure PCTCN2021093060-appb-000178
对于权值参数和偏置参数的灵敏度,进行参数更新,因为不同训练算法的参数更新方式多种多样,以下只是一个简单的示例:
Figure PCTCN2021093060-appb-000179
Figure PCTCN2021093060-appb-000180
步骤3:AI-DPD一体化方案系统基于测试集的性能统计。
步骤3-1:将所述测试集
Figure PCTCN2021093060-appb-000181
按元素索引n(n=1,2,…,TestSize)每次组织为如下历史元素和当前元素组合的复数向量
Figure PCTCN2021093060-appb-000182
的形式不断输入系统:
Figure PCTCN2021093060-appb-000183
步骤3-2:将所述复数向量
Figure PCTCN2021093060-appb-000184
通过所述复数神经网络(Complex Neural Network)单元,得到所述预失真的复数校正向量
Figure PCTCN2021093060-appb-000185
Figure PCTCN2021093060-appb-000186
Figure PCTCN2021093060-appb-000187
步骤3-3:将所述复数向量
Figure PCTCN2021093060-appb-000188
通过预失真乘法器(DPD Multiplier)单元,得到所述预失真校正后复数向量
Figure PCTCN2021093060-appb-000189
Figure PCTCN2021093060-appb-000190
Figure PCTCN2021093060-appb-000191
步骤3-4:将所述预失真校正后复数向量
Figure PCTCN2021093060-appb-000192
输入所述射频功放输出反馈回路,得到输出
Figure PCTCN2021093060-appb-000193
Figure PCTCN2021093060-appb-000194
步骤3-5将:将所述
Figure PCTCN2021093060-appb-000195
经过射频解调和模数转换(A/D)后,再输入实时功率归一(PNorm)单元,得到功放输出反馈量
Figure PCTCN2021093060-appb-000196
Figure PCTCN2021093060-appb-000197
Figure PCTCN2021093060-appb-000198
步骤3-6:根据上述
Figure PCTCN2021093060-appb-000199
和所述测试集的
Figure PCTCN2021093060-appb-000200
按上式(2-3)和式(2-4)统计计算系统基于测试集的泛化EVM性能(GEVM)和泛化ACLR性能(GACLR)。
步骤3-7:若所述GEVM和GACLR满足设定的指标要求,则停止AI-DPD一体化方案系统的训练;否则,返回步骤2的所有步骤,开始新一轮训练。
在一个实施例中,在业务复数向量发送之前先发送已知的训练用复数向量,通过预失真乘法器、复数神经网络和射频功放输出反馈回路,输出得到预失真校正的复数标量,并将其反馈回发送端,与所述训练用复数向量一起,用于AI-DPD一体化方案系统的训练;当训练达到了所要求的泛化EVM性能和泛化ACLR性能后停止训练;同时,发送所述业务复数向量通过预失真系统,输出得到预失真校正的复数标量。
图3b为本申请实施例提供的又一种预失真系统的结构示意图,参见图3b,该预失真系统又称AI-DPD一体化方案系统,主要包括:预失真乘法器单元、复数神经网络单元和射频功放输出反馈回路。其中,
Figure PCTCN2021093060-appb-000201
是由历史元素(历史元素的长度由功放记忆效应参数M 2设定)和当前元素组成的原始输入复数向量;
Figure PCTCN2021093060-appb-000202
是对输入向量
Figure PCTCN2021093060-appb-000203
进行预失真的复数校正向量,
Figure PCTCN2021093060-appb-000204
也是复数神经网络的输出;
Figure PCTCN2021093060-appb-000205
是预失真校正后的复数向量;y 2是AI-DPD一体化方案最后的输出,y 2是一个复数标量。本申请中各公式中的下角标“1”和“2”无实际意义,分别针对图3a和图3b对应的两个预失真系统。如M 1和M 2仅表示功放记忆效应参数,即功放记忆效应参数,下角标“1”针对的是图3a所示的预失真系统,下角标“2”针对的是图3b所示的预失真系统。
所述复数神经网络是一种神经网络:复数包括实部和虚部两部分,且通常指虚部不为0的情形,以与实数加以区分;该神经网络的输入和输出,既可以直接是复数的变量、向量、矩阵或张量,也可以是复数的I路和Q路相组合的形式;该神经网络各层的神经元激活函数和所有其他处理函数的输入和输出,既可以直接是复数的变量、向量、矩阵或张量,也可以是复数的I路和Q路组合的形式;该神经网络既可以使用复数误差反向传播算法进行训练,也可以使用实数误差反向传播算法进行训练。
所述复数误差反向传播算法包括但不限于如下步骤:
Step1:AI-DPD一体化方案系统参数的初始化。
设置非线性阶数参数L 2和功放记忆效应参数M 2,例如:L 2=5,M 2=4;设置复数神经网络的初始输出,也即初始的预失真校正向量
Figure PCTCN2021093060-appb-000206
的各素元为1;并按照前向运行各层的索引逐层地,也可所有层同时并行地,根据层类型完成相应的初始化步骤。所述不同类型层的初始化主要包括但不限于以下步骤:
Step1-1:若当前(待初始化)层为全连接层,根据该层所设置的随机初始化的分布类型(例如:高斯分布、均匀分布等)和分布参数(均值、方差、标准差等),初始化该层的权值参数和偏置参数;Step1-2:若当前(待初始化)层为卷积层,根据该层设置的随机初始化的分布类型和分布参数,初始化该层各卷积核的权值参数和偏置参数;Step1-3:若当前(待初始化)层为其他类型层,按照该类型层的初始化设置参数,完成该层权值和偏置参数的初始化。
Step2:AI-DPD一体化方案系统基于训练集的训练。将长度为N 2的已知的训练用复数向量序列
Figure PCTCN2021093060-appb-000207
按一定比例(例如:0.5:0.5),分为训练集
Figure PCTCN2021093060-appb-000208
和测试集
Figure PCTCN2021093060-appb-000209
二者的长度分别记为TrainSize和TestSize;其中,训练集用于AI-DPD一体化方案系统的整体学习或训练;测试集用于在训练的后半阶段测试方案系统对新数据的泛化适应性能;该性能包括所述泛化EVM性能(GEVM)和泛化ACPR性能(GACLR),二者分别定义如下:
Figure PCTCN2021093060-appb-000210
Figure PCTCN2021093060-appb-000211
其中,HBW指有效信号带宽的一半;GBW指一半的保护带宽;
Figure PCTCN2021093060-appb-000212
的 计算如下:
Figure PCTCN2021093060-appb-000213
Figure PCTCN2021093060-appb-000214
以上,NFFT是离散傅里叶变换(FFT)的点数;Win NFFT是窗长为NFFT的窗函数的系数向量,例如:窗函数可采用Blackman窗;
Figure PCTCN2021093060-appb-000215
表示随机从
Figure PCTCN2021093060-appb-000216
中截取NFFT长的信号;
Figure PCTCN2021093060-appb-000217
是取值范围[1,TestSize]里均匀分布的随机正整数;K是随机截取的次数;
Figure PCTCN2021093060-appb-000218
表示点乘。
所述AI-DPD一体化方案系统的训练包括但不限于以下子步骤:
Step2-1:将所述训练集
Figure PCTCN2021093060-appb-000219
按元素索引n(n=1,2,…,TrainSize)每次组织为如下历史元素和当前元素组合的复数向量
Figure PCTCN2021093060-appb-000220
的形式不断输入系统:
Figure PCTCN2021093060-appb-000221
其中,M 2为功放记忆效应参数,0≤M 2≤TrainSize。
Step2-2:将所述复数向量
Figure PCTCN2021093060-appb-000222
通过所述复数神经网络单元(ComplexNN),得到所述预失真的复数校正向量
Figure PCTCN2021093060-appb-000223
该单元的输入和输出的关系如下式所示:
Figure PCTCN2021093060-appb-000224
Figure PCTCN2021093060-appb-000225
Step2-3:将所述复数向量
Figure PCTCN2021093060-appb-000226
通过预失真乘法器(DPD Multiplier)单元,得到所述预失真校正后复数向量
Figure PCTCN2021093060-appb-000227
该单元的输入和输出的关系如下式所示:
Figure PCTCN2021093060-appb-000228
Figure PCTCN2021093060-appb-000229
以上,
Figure PCTCN2021093060-appb-000230
表示点乘。
Step2-4:将所述预失真校正后复数向量
Figure PCTCN2021093060-appb-000231
输入所述射频功放输出反馈回路,得到输出的
Figure PCTCN2021093060-appb-000232
如下式所示:
Figure PCTCN2021093060-appb-000233
所述射频功放输出反馈回路中的功率放大器(PA)单元可以为实际的任意功放产品;换言之,本实施例对功放模型的非线性特性没有任何限制条件。
Step2-5:根据所述
Figure PCTCN2021093060-appb-000234
和所述复数向量
Figure PCTCN2021093060-appb-000235
计算复数神经网络的损失函 数。因存在多种附加正则化(Regularization)项的变化形式,故以下损失函数表达式只是一个简单示例:
Figure PCTCN2021093060-appb-000236
Step2-6:按下式(2-47)计算所述损失函数
Figure PCTCN2021093060-appb-000237
对于所述功放输出反馈量
Figure PCTCN2021093060-appb-000238
的偏导数或灵敏度
Figure PCTCN2021093060-appb-000239
Figure PCTCN2021093060-appb-000240
Step2-7:根据上述灵敏度
Figure PCTCN2021093060-appb-000241
按下式(2-48)计算如下中间偏导数或灵敏度δ l,m
Figure PCTCN2021093060-appb-000242
以上,c l,m为根据记忆多项式模型,基于训练集中的复数向量和将其输入所述射频功放输出反馈回路所得的输出,采用最小二乘算法所求得的复系数;conj(·)表示对复数取共轭。
Step2-8:根据上述灵敏度δ l,m按下式(2-49)计算所述损失函数
Figure PCTCN2021093060-appb-000243
对于预失真校正后复数向量
Figure PCTCN2021093060-appb-000244
各元素的偏导数或灵敏度
Figure PCTCN2021093060-appb-000245
Figure PCTCN2021093060-appb-000246
Figure PCTCN2021093060-appb-000247
为避免输入出现0+0j的极端数值情形,若所述预失真校正后复数向量
Figure PCTCN2021093060-appb-000248
中存在零元素(即0+0j),则将上式(2-50)定义的B l,m置为0。
Step2-9:根据上述灵敏度
Figure PCTCN2021093060-appb-000249
按下式(2-51)计算所述损失函数
Figure PCTCN2021093060-appb-000250
对于所述预失真的复数校正向量
Figure PCTCN2021093060-appb-000251
各元素的偏导数或灵敏度
Figure PCTCN2021093060-appb-000252
Figure PCTCN2021093060-appb-000253
Step2-10:根据上述灵敏度
Figure PCTCN2021093060-appb-000254
按复数神经网络单元前向运行的相反顺序,依次计算所述损失函数
Figure PCTCN2021093060-appb-000255
对于其内部各层的权值和偏置参数的偏导数或灵敏度,包括但不限于以下子步骤:
Step2-10-1:若当前待计算层为全连接层,分别按下式(2-52)和式(2-53)计算所述损失函数
Figure PCTCN2021093060-appb-000256
对于该层的复数权值参数
Figure PCTCN2021093060-appb-000257
和复数偏置参数
Figure PCTCN2021093060-appb-000258
的偏导数或灵敏度:
Figure PCTCN2021093060-appb-000259
Figure PCTCN2021093060-appb-000260
以上,
Figure PCTCN2021093060-appb-000261
表示从网络的第j层的第u个神经元到第k层的第v个神经元连接的复数权值;
Figure PCTCN2021093060-appb-000262
Figure PCTCN2021093060-appb-000263
分别表示网络第j层的第u个神经元和第k层的第v个神经元的输出复数向量;f'(·)表示神经元激活函数对输入信号的导数。
Step2-10-2:若当前待计算层(设为第k层)为卷积层,分别按下式(2-54)和式(2-55)计算所述损失函数
Figure PCTCN2021093060-appb-000264
对于该层第q(q=1,2,…,Q)个卷积核(Kernel)的复数权值
Figure PCTCN2021093060-appb-000265
和复数偏置
Figure PCTCN2021093060-appb-000266
的偏导数或灵敏度:
Figure PCTCN2021093060-appb-000267
Figure PCTCN2021093060-appb-000268
以上,
Figure PCTCN2021093060-appb-000269
表示该卷积层的前一层(第j层)输出的第p(p=1,2,…,P)个复数向量;
Figure PCTCN2021093060-appb-000270
表示当前卷积层(第k层)输出的第q个复数向量;
Figure PCTCN2021093060-appb-000271
表示所述损失函数
Figure PCTCN2021093060-appb-000272
对于上述
Figure PCTCN2021093060-appb-000273
的偏导数或灵敏度;Conv(·)表示卷积运算;Fliplr(·)表示对输入向量做位置反置。
Step2-11:根据各层计算的所述损失函数
Figure PCTCN2021093060-appb-000274
对于权值参数和偏置参数的灵敏度,进行参数更新,因为不同训练算法的参数更新方式多种多样,以下只是一个简单的示例:
Figure PCTCN2021093060-appb-000275
Figure PCTCN2021093060-appb-000276
Step3:AI-DPD一体化方案系统基于测试集的性能统计。
Step3-1:将所述测试集
Figure PCTCN2021093060-appb-000277
按元素索引n(n=1,2,…,TestSize)每次组 织为如下历史元素和当前元素组合的复数向量
Figure PCTCN2021093060-appb-000278
的形式不断输入系统:
Figure PCTCN2021093060-appb-000279
Step3-2:将所述复数向量
Figure PCTCN2021093060-appb-000280
通过所述复数神经网络(ComplexNN)单元,得到所述预失真的复数校正向量
Figure PCTCN2021093060-appb-000281
Figure PCTCN2021093060-appb-000282
Figure PCTCN2021093060-appb-000283
Step3-3:将所述复数向量
Figure PCTCN2021093060-appb-000284
通过预失真乘法器(DPD Multiplier)单元,得到所述预失真校正后复数向量
Figure PCTCN2021093060-appb-000285
Figure PCTCN2021093060-appb-000286
Figure PCTCN2021093060-appb-000287
Step3-4:将所述预失真校正后复数向量
Figure PCTCN2021093060-appb-000288
输入所述射频功放输出反馈回路,得到输出
Figure PCTCN2021093060-appb-000289
Figure PCTCN2021093060-appb-000290
Step3-5:根据上述
Figure PCTCN2021093060-appb-000291
和所述测试集的
Figure PCTCN2021093060-appb-000292
按上式(2-36)和式(2-37)统计计算系统基于测试集的泛化EVM性能(GEVM)和泛化ACLR性能(GACLR)。
Step3-6:若所述GEVM和GACLR满足设定的指标要求,则停止AI-DPD一体化方案系统的训练;否则,返回Step2的所有步骤,开始新一轮训练。
表1为本申请实施例提供的一种泛化性能效果表,参见表1,通过本申请提供的预失真系统保证了泛化EVM性能。复数MLP网络包括3个隐层,每个隐层包括5个神经元;非线性阶数P=5;记忆效应长度M=6;每次输入所述预失真系统的复数向量的长度=P×(M+1)=35。训练集包括39320个复数向量;验证集包括13107个复数向量;测试集包括13105个复数向量。
表1本申请实施例提供的一种泛化性能效果表
Figure PCTCN2021093060-appb-000293
Figure PCTCN2021093060-appb-000294
图4为本申请实施例所获得的泛化邻道泄漏比的性能效果图,参见图4,一体化方法,即本申请提供的预失真系统的实际输出的泛化邻道泄漏比和预期输出的泛化邻道泄漏比偏差较小。图5为本申请实施例所获得的泛化邻道泄漏比的改善效果图,参见图5,本申请提供的预失真系统相比于信号只通过功放的系统而言,泛化邻道泄漏比得到了改善。
可以通过编程计算机实现上述方法的不同步骤。在此,一些实施方式同样包括机器可读或计算机可读的程序存储设备(例如:数字数据存储介质)以及编码机器可执行或计算机可执行的程序指令,其中,该指令执行上述方法的一些或全部步骤。例如,程序存储设备可以是数字存储器、磁存储介质(例如:磁盘和磁带)、硬件或光可读数字数据存储介质。实施方式同样包括执行上述方法的所述步骤的编程计算机。
在一个示例性实施方式中,本申请提供了一种预失真系统,该预失真系统的结构示意图参见图3b,该预失真系统可以执行本申请实施例提供的预失真方法,该预失真系统包括:预失真乘法器、复数神经网络和射频功放输出反馈回路;所述预失真乘法器的第一输入端为所述预失真系统的输入端,与所述复数神经网络的第一输入端和第二输入端连接,所述预失真乘法器的输出端与所述射频功放输出反馈回路的输入端连接,所述射频功放输出反馈回路的输出端为所述预失真系统的输出端,所述射频功放输出反馈回路的输出端与所述复数神经网络的第二输入端连接;所述复数神经网络的输出端与所述预失真乘法器的第二输入端连接。
在一个实施例中,所述射频功放输出反馈回路包括:数模转换器单元、射频调制单元、功率放大器单元、模数转换器单元和射频解调单元。
预失真乘法器的输出端可以通过射频功放输出反馈回路的数模转换器单元(即D/A)和射频调制单元与所述射频功放输出反馈回路的功率放大器单元的输入端相连,所述功率放大器单元的输出端可以通过射频解调单元和模数转换器单元(即A/D)与所述复数神经网络的第二输入端相连。
本实施例提供的预失真系统用于实现本申请实施例提供的预失真方法,本实施例提供的预失真系统实现原理和技术效果与本申请实施例提供的预失真方法类似,此处不再赘述。
在一个示例性实施方式中,本申请提供了一种预失真系统,该预失真系统的结构示意图参见图3a,该预失真系统可以执行本申请实施例提供的预失真方法,该预失真系统包括:预失真乘法器、复数神经网络、射频功放输出反馈回路、第一实时功率归一化单元和第二实时功率归一化单元;所述第一实时功率归一化单元的输入端为所述预失真系统的输入端,所述第一实时功率归一化单元的输出端与所述预失真乘法器的第一输入端、所述复数神经网络的第一输入端和所述复数神经网络的第二输入端连接,所述预失真乘法器的输出端与所述射频功放输出反馈回路的输入端连接,所述射频功放输出反馈回路的输出端为所述预失真系统的输出端,所述射频功放输出反馈回路的输出端与所述第二实时功率归一化单元的输入端连接,所述第二实时功率归一化单元的输出端与所述复数神经网络的第二输入端连接,所述复数神经网络的输出端与所述预失真乘法器的第二输入端连接。
在一个实施例中,所述射频功放输出反馈回路包括:数模转换器单元、射频调制单元、功率放大器单元、模数转换器单元和射频解调单元。
数模转换器单元的输入端为所述射频功放输出反馈回路的输入端,数模转换单元的输出端与所述射频调制单元的输入端相连,所述射频调制单元的输出端与所述功率放大器单元的输入端相连,所述功率放大器单元的输出端与所述射频解调单元的输入端相连,所述射频解调单元的输出端与所述模数转换器单元的输入端相连,所述模数转换器单元的输出端为所述射频功放输出反馈回路的输出端。
本实施例提供的预失真系统用于实现本申请实施例提供的预失真方法,本实施例提供的预失真系统实现原理和技术效果与本申请实施例提供的预失真方法类似,此处不再赘述。
在一个示例性实施方式中,本申请实施例还提供了一种设备,图6为本申请实施例提供的一种设备的结构示意图,如图6所示,本申请提供的设备,包括一个或多个处理器21和存储装置22;该设备中的处理器21可以是一个或多个,图6中以一个处理器21为例;存储装置22用于存储一个或多个程序;所述一个或多个程序被所述一个或多个处理器21执行,使得所述一个或多个处理器21实现如本申请实施例中所述的方法。
设备还包括:通信装置23、输入装置24和输出装置25。
设备中的处理器21、存储装置22、通信装置23、输入装置24和输出装置25可以通过总线或其他方式连接,图6中以通过总线连接为例。
输入装置24可用于接收输入的数字或字符信息,以及产生与设备的用户设置以及功能控制有关的按键信号输入。输出装置25可包括显示屏等显示设备。
通信装置23可以包括接收器和发送器。通信装置23设置为根据处理器21的控制进行信息收发通信。
存储装置22作为一种计算机可读存储介质,可设置为存储软件程序、计算机可执行程序以及模块,如本申请实施例所述方法对应的程序指令/预失真系统。存储装置22可包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据设备的使用所创建的数据等。此外,存储装置22可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中,存储装置22可包括相对于处理器21远程设置的存储器,这些远程存储器可以通过网络连接至设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
本申请实施例还提供一种存储介质,所述存储介质存储有计算机程序,所述计算机程序被处理器执行时实现本申请任一所述方法,所述存储介质存储有计算机程序,所述计算机程序被处理器执行时实现本申请实施例提供的预失真方法,应用于预失真系统,所述预失真系统包括:预失真乘法器、复数神经网络和射频功放输出反馈回路,所述方法包括:将训练用复数向量输入所述预失真系统,得到所述预失真系统输出的所述训练用复数向量对应的复数标量;基于所述训练用复数向量和所述复数标量,训练所述预失真系统,直至所述预失真系统对应的泛化误差向量幅度和泛化邻道泄漏比达到设定要求;将业务复数向量输入训练后的预失真系统,得到预失真校正的复数标量。
本申请实施例的计算机存储介质,可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(Random Access Memory,RAM)、只读存储器(Read Only Memory,ROM)、可擦式可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、闪存、光纤、便携式光盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。计算机可读存储介质可以是任何包含或存储 程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于:电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无线、电线、光缆、无线电频率(Radio Frequency,RF)等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码,所述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(Local Area Network,LAN)或广域网(Wide Area Network,WAN)——连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
术语设备(如终端设备)涵盖任何适合类型的无线用户设备,例如移动电话、便携数据处理装置、便携网络浏览器或车载移动台。
一般来说,本申请的多种实施例可以在硬件或专用电路、软件、逻辑或其任何组合中实现。例如,一些方面可以被实现在硬件中,而其它方面可以被实现在可以被控制器、微处理器或其它计算装置执行的固件或软件中,尽管本申请不限于此。
本申请的实施例可以通过移动装置的数据处理器执行计算机程序指令来实现,例如在处理器实体中,或者通过硬件,或者通过软件和硬件的组合。计算机程序指令可以是汇编指令、指令集架构(Instruction Set Architecture,ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码。
本申请附图中的任何逻辑流程的框图可以表示程序步骤,或者可以表示相 互连接的逻辑电路、模块和功能,或者可以表示程序步骤与逻辑电路、模块和功能的组合。计算机程序可以存储在存储器上。存储器可以具有任何适合于本地技术环境的类型并且可以使用任何适合的数据存储技术实现,例如但不限于只读存储器(Read-Only Memory,ROM)、随机访问存储器(Random Access Memory,RAM)、光存储器装置和系统(数码多功能光碟(Digital Video Disc,DVD)或光盘(Compact Disk,CD))等。计算机可读介质可以包括非瞬时性存储介质。数据处理器可以是任何适合于本地技术环境的类型,例如但不限于通用计算机、专用计算机、微处理器、数字信号处理器(Digital Signal Processing,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑器件(Field-Programmable Gate Array,FPGA)以及基于多核处理器架构的处理器。

Claims (22)

  1. 一种预失真方法,应用于预失真系统,所述预失真系统包括预失真乘法器、复数神经网络和射频功放输出反馈回路;所述方法包括:
    将训练用复数向量输入所述预失真系统,得到所述预失真系统输出的所述训练用复数向量对应的复数标量;
    基于所述训练用复数向量和所述复数标量,训练所述预失真系统,直至所述预失真系统对应的泛化误差向量幅度和泛化邻道泄漏比达到设定要求;
    将业务复数向量输入训练后的预失真系统,得到预失真校正的复数标量。
  2. 根据权利要求1所述的方法,其中,所述预失真系统使用复数误差反向传播算法进行训练。
  3. 根据权利要求1所述的方法,其中,所述基于所述训练用复数向量和所述复数标量,训练所述预失真系统,直至所述预失真系统对应的泛化误差向量幅度和泛化邻道泄漏比达到设定要求,包括:
    初始化所述预失真系统的系统参数;
    基于训练集和所述复数标量训练参数初始化后的预失真系统;
    基于测试集测试训练后的预失真系统,得到所述训练后的预失真系统对应的泛化误差向量幅度和泛化邻道泄漏比;
    在所述泛化误差向量幅度和泛化邻道泄漏比对应的数值大于或等于各自的设定阈值的情况下,完成所述预失真系统的训练,在所述泛化误差向量幅度和泛化邻道泄漏比对应的数值小于各自的设定阈值的情况下,继续基于所述训练集训练所述训练后的预失真系统;
    其中,所述训练集和所述测试集基于归一化后的复数向量拆分获得,所述归一化后的复数向量为将所述训练用复数向量输入所述预失真系统还包括的第一实时功率归一化单元后得到的所述第一实时功率归一化单元的输出;或者,所述训练集和所述测试集基于所述训练用复数向量拆分获得。
  4. 根据权利要求3所述的方法,其中,所述归一化后的复数向量中的元素通过如下计算表达式确定:
    Figure PCTCN2021093060-appb-100001
    Figure PCTCN2021093060-appb-100002
    其中,x 1(n)表示所述归一化后的复数向量中的第n个元素,x 1,raw(n)表示 所述训练用复数向量中第n个元素,x 1,raw(d)表示所述训练用复数向量中第d个元素。
  5. 根据权利要求3所述的方法,其中,所述初始化所述预失真系统的系统参数,包括:
    根据所述复数神经网络中每层对应的层类型完成所述每层的初始化。
  6. 根据权利要求5所述的方法,其中,所述根据所述复数神经网络中每层对应的层类型完成所述每层的初始化,包括:
    根据所述复数神经网络中每层的分布类型和分布参数初始化所述每层的权值参数和偏置参数。
  7. 根据权利要求3所述的方法,其中,所述基于训练集和所述复数标量训练所述预失真系统,包括:
    将所述训练集中的复数向量按照元素索引输入所述预失真系统,每次输入的复数向量的长度基于所述预失真系统的功放记忆效应参数确定,每次输入至所述预失真系统的复数向量由所述训练集中的历史元素值和当前元素组合得到;
    将所述预失真系统中的所述射频功放输出反馈回路的输出,通过所述预失真系统还包括的第二实时功率归一化单元后,再输入所述复数神经网络;或者,将所述预失真系统中的所述射频功放输出反馈回路的输出直接输入所述复数神经网络;
    根据所述预失真系统中的所述复数神经网络的损失函数对于所述复数神经网络输出的校正向量中的每个元素的偏导数或灵敏度,确定所述损失函数对所述复数神经网络内部每层的权值参数和偏置参数的偏导数或灵敏度;
    根据确定的所述复数神经网络内部每层的偏导数或灵敏度更新所述每层的权值参数和偏置参数;
    其中,每次输入至所述预失真系统的复数向量的表达式如下所示:
    Figure PCTCN2021093060-appb-100003
    其中,
    Figure PCTCN2021093060-appb-100004
    为每次输入至所述预失真系统的复数向量,M 1为所述功放记忆效应参数,0≤M 1≤TrainSize,TrainSize为所述训练集的长度,
    Figure PCTCN2021093060-appb-100005
    为所述训练集中第n个元素,
    Figure PCTCN2021093060-appb-100006
    为所述当前元素,m为大于1且小于M 1的整数,
    Figure PCTCN2021093060-appb-100007
    Figure PCTCN2021093060-appb-100008
    分别为所述训练集中第n-1个元素、第n-m个元素和第n-M 1个元素,
    Figure PCTCN2021093060-appb-100009
    Figure PCTCN2021093060-appb-100010
    均为历史元素。
  8. 根据权利要求7所述的方法,其中,
    输入至所述预失真系统中的所述复数神经网络的复数向量和所述复数神经网络输出的校正向量的关系如下所示:
    Figure PCTCN2021093060-appb-100011
    其中,
    Figure PCTCN2021093060-appb-100012
    为所述校正向量,
    Figure PCTCN2021093060-appb-100013
    Figure PCTCN2021093060-appb-100014
    分别为所述校正向量中的第n个元素、第n-1个元素、第n-m个元素和第n-M 1个元素,ComplexNN表示所述复数神经网络内部逐层运算函数的复合函数;
    输入至所述预失真系统中的所述预失真乘法器的复数向量、所述校正向量和所述预失真乘法器的输出的关系如下所示:
    Figure PCTCN2021093060-appb-100015
    其中,
    Figure PCTCN2021093060-appb-100016
    为所述预失真乘法器输出的复数向量,
    Figure PCTCN2021093060-appb-100017
    Figure PCTCN2021093060-appb-100018
    为所述预失真乘法器输出的复数向量中的第n个元素、第n-1个元素、第n-M+1个元素和第n-M 1个元素,
    Figure PCTCN2021093060-appb-100019
    为输入至所述预失真乘法器的复数向量,“ο”表示点乘;
    所述预失真乘法器的输出和所述射频功放输出反馈回路的输出的关系如下所示:
    Figure PCTCN2021093060-appb-100020
    其中,
    Figure PCTCN2021093060-appb-100021
    为所述射频功放输出反馈回路的输出,PA表示所述射频功放输出反馈回路对输入信号的处理函数;
    在所述预失真系统包括所述第二实时功率归一化单元的情况下,所述第二实时功率归一化单元的输入和输出的关系如下所示:
    Figure PCTCN2021093060-appb-100022
    Figure PCTCN2021093060-appb-100023
    其中,
    Figure PCTCN2021093060-appb-100024
    为所述第二实时功率归一化单元的第n个输出,
    Figure PCTCN2021093060-appb-100025
    为所述第二实时功率归一化单元的第n个输入,n为正整数,d为大于或等于1且小于或等于n的正整数,
    Figure PCTCN2021093060-appb-100026
    为所述射频功放输出反馈回路的第d个输出。
  9. 根据权利要求7所述的方法,其中,所述根据所述预失真系统中的所述复数神经网络的损失函数对于所述复数神经网络输出的校正向量中的每个元素的偏导数或灵敏度,确定所述损失函数对所述复数神经网络内部每层的权值参数 和偏置参数的偏导数或灵敏度,包括:
    基于所述射频功放输出反馈回路的反馈量和输入至所述复数神经网络的复数向量,确定所述复数神经网络的损失函数;
    确定所述损失函数对所述复数神经网络输出的校正向量中的每个元素的偏导数或灵敏度;
    根据所述校正向量中的每个元素的偏导数或灵敏度,确定所述损失函数对所述复数神经网络内部每层的权值参数和偏置参数的偏导数或灵敏度;
    基于每层的权值参数和偏置参数的偏导数或灵敏度更新所述每层的权值参数和偏置参数。
  10. 根据权利要求9所述的方法,所述损失函数通过如下计算表达式计算确定:
    Figure PCTCN2021093060-appb-100027
    Figure PCTCN2021093060-appb-100028
    为损失函数。
  11. 根据权利要求9所述的方法,其中,所述校正向量中的每个元素的偏导数或灵敏度的计算表达式如下所示:
    Figure PCTCN2021093060-appb-100029
    其中,
    Figure PCTCN2021093060-appb-100030
    为所述校正向量中的每个元素的偏导数或灵敏度,
    Figure PCTCN2021093060-appb-100031
    为所述损失函数对于所述预失真乘法器输出的复数向量中的每个元素的偏导数或灵敏度,conj(·)表示对复数取共轭;
    Figure PCTCN2021093060-appb-100032
    通过如下计算表达式计算确定:
    Figure PCTCN2021093060-appb-100033
    其中,
    Figure PCTCN2021093060-appb-100034
    “·”表示标量相乘;L 1为所述预失真系统的非线性阶数参数,
    Figure PCTCN2021093060-appb-100035
    为所述预失真乘法器输出的复数向量中的元素,real(·)表示求实部,imag(·)表示求虚部,δ l,m表示中间偏导数或灵敏度。
  12. 根据权利要求11所述的方法,其中,
    在所述预失真系统不包括所述第二实时功率归一化单元的情况下,δ l,m通过如下计算表达式确定:
    Figure PCTCN2021093060-appb-100036
    在所述预失真系统包括所述第二实时功率归一化单元的情况下,δ l,m通过如下计算表达式确定:
    Figure PCTCN2021093060-appb-100037
    其中,c l,m为根据记忆多项式模型,基于所述训练集中的复数向量和将所述训练集中的复数向量输入所述射频功放输出反馈回路所得的输出,采用最小二乘算法所求得的复系数;
    Figure PCTCN2021093060-appb-100038
    为所述损失函数对于所述射频功放输出反馈回路输出的复数标量的偏导数或灵敏度;
    Figure PCTCN2021093060-appb-100039
    通过如下计算表达式计算确定:
    Figure PCTCN2021093060-appb-100040
    其中,
    Figure PCTCN2021093060-appb-100041
    为所述损失函数对于所述射频功放输出反馈回路反馈量的偏导数或灵敏度;
    Figure PCTCN2021093060-appb-100042
    通过如下计算表达式计算确定:
    Figure PCTCN2021093060-appb-100043
  13. 根据权利要求9所述的方法,其中,所述复数神经网络内部的全连接层的权值参数和偏置参数的偏导数或灵敏度分别为:
    Figure PCTCN2021093060-appb-100044
    Figure PCTCN2021093060-appb-100045
    其中,
    Figure PCTCN2021093060-appb-100046
    表示从所述复数神经网络的第j层的第u个神经元到第k层的第v个神经元连接的复数权值参数,
    Figure PCTCN2021093060-appb-100047
    表示所述复数神经网络的第k层的第v个神经元的偏置参数,
    Figure PCTCN2021093060-appb-100048
    Figure PCTCN2021093060-appb-100049
    分别表示所述复数神经网络的第j层的第u个神经元和第k层的第v个神经元的输出复数向量;f'(·)表示所述复数神经网络 的神经元激活函数对输入信号的导数,
    Figure PCTCN2021093060-appb-100050
    为所述全连接层的权值参数的偏导数或灵敏度,
    Figure PCTCN2021093060-appb-100051
    为所述全连接层的偏置参数的偏导数或灵敏度,
    Figure PCTCN2021093060-appb-100052
    为所述损失函数对
    Figure PCTCN2021093060-appb-100053
    的偏导数或灵敏度,conj(·)表示对复数取共轭。
  14. 根据权利要求9所述的方法,其中,所述复数神经网络内部的卷积层的第q个卷积核的权值参数和偏置参数的偏导数或灵敏度分别为:
    Figure PCTCN2021093060-appb-100054
    Figure PCTCN2021093060-appb-100055
    其中,
    Figure PCTCN2021093060-appb-100056
    为第q个卷积核的权值参数的偏导数或灵敏度,
    Figure PCTCN2021093060-appb-100057
    为第q个卷积核的偏置参数的偏导数或灵敏度,
    Figure PCTCN2021093060-appb-100058
    表示所述卷积层的前一层,所述复数神经网络的第j层输出的第p个复数向量,
    Figure PCTCN2021093060-appb-100059
    表示所述复数神经网络的第k层输出的第q个复数向量,q=1,2,…,Q,p=1,2,…,P,其中,Q和P分别为所述复数神经网络的第k层和第j层的输出特征向量数,
    Figure PCTCN2021093060-appb-100060
    表示所述损失函数对于
    Figure PCTCN2021093060-appb-100061
    的偏导数或灵敏度,Fliplr(·)表示对输入向量做位置反置,Conv(·)表示卷积运算。
  15. 根据权利要求9所述的方法,其中,所述复数神经网络内部的全连接层的权值参数和偏置参数通过如下表达式更新:
    Figure PCTCN2021093060-appb-100062
    Figure PCTCN2021093060-appb-100063
    其中,
    Figure PCTCN2021093060-appb-100064
    Figure PCTCN2021093060-appb-100065
    分别表示所述全连接层的权值参数
    Figure PCTCN2021093060-appb-100066
    在当前时刻和上一时刻时的值;
    Figure PCTCN2021093060-appb-100067
    Figure PCTCN2021093060-appb-100068
    分别表示所述全连接层的偏置参数
    Figure PCTCN2021093060-appb-100069
    在当前时刻和上一时刻时的值;
    Figure PCTCN2021093060-appb-100070
    表示当前时刻对于所述权值参数
    Figure PCTCN2021093060-appb-100071
    的更新步长;
    Figure PCTCN2021093060-appb-100072
    表示当前时刻对于所述偏置参数
    Figure PCTCN2021093060-appb-100073
    的更新步长。
  16. 根据权利要求3所述的方法,其中,所述基于测试集测试训练后的预失真系统,得到所述训练后的预失真系统对应的误差向量幅度和邻道泄漏比,包括:
    将所述测试集中的复数向量按照元素索引输入预失真系统,每次输入的复数向量的长度基于所述预失真系统的功放记忆效应参数确定,每次输入至所述 预失真系统的复数向量由所述测试集中的历史元素和当前元素组合得到;
    将所述预失真系统中的所述射频功放输出反馈回路的输出输入至所述预失真系统,确定所述预失真系统对应的泛化误差向量幅度和泛化邻道泄漏比。
  17. 根据权利要求16所述的方法,其中,所述泛化误差向量幅度和所述泛化邻道泄漏比分别通过如下计算表达式确定:
    Figure PCTCN2021093060-appb-100074
    Figure PCTCN2021093060-appb-100075
    Figure PCTCN2021093060-appb-100076
    Figure PCTCN2021093060-appb-100077
    其中,GEVM 1表示所述泛化误差向量幅度,
    Figure PCTCN2021093060-appb-100078
    表示
    Figure PCTCN2021093060-appb-100079
    的所述泛化邻道泄漏比,
    Figure PCTCN2021093060-appb-100080
    表示所述射频功放输出反馈回路的反馈量,
    Figure PCTCN2021093060-appb-100081
    表示所述射频功放输出反馈回路的第n个元素反馈量,
    Figure PCTCN2021093060-appb-100082
    表示所述测试集中对应的复数向量的第n个元素,TestSize表示所述测试集的长度,TestSize与TrainSize的和为N 1,N 1表示所述训练用复数向量的长度,HBW表示有效信号带宽的一半,GBW表示一半的保护带宽,NFFT表示离散傅里叶变换的点数;Win NFFT是窗长为NFFT的窗函数的系数向量,
    Figure PCTCN2021093060-appb-100083
    表示随机从
    Figure PCTCN2021093060-appb-100084
    中截取NFFT长的信号;
    Figure PCTCN2021093060-appb-100085
    是取值范围[1,TestSize]里均匀分布的随机正整数;K是随机截取的次数。
  18. 一种预失真系统,执行如权利要求1-17任一项所述的预失真方法,所述预失真系统包括:预失真乘法器、复数神经网络和射频功放输出反馈回路;
    所述预失真乘法器的第一输入端为所述预失真系统的输入端,与所述复数神经网络的第一输入端和第二输入端连接,所述预失真乘法器的输出端与所述射频功放输出反馈回路的输入端连接,所述射频功放输出反馈回路的输出端为所述预失真系统的输出端,所述射频功放输出反馈回路的输出端与所述复数神经网络的第二输入端连接;所述复数神经网络的输出端与所述预失真乘法器的第二输入端连接。
  19. 根据权利要求18所述的预失真系统,其中,所述射频功放输出反馈回路 包括:数模转换器单元、射频调制单元、功率放大器单元、模数转换器单元和射频解调单元;
    所示预失真乘法器的输出端通过所述数模转换器单元和所述射频调制单元与所述功率放大器单元的输入端相连,所述功率放大器单元的输出端通过所述射频解调单元和所述模数转换器单元与所述复数神经网络的第二输入端相连。
  20. 一种预失真系统,执行如权利要求1-17任一项所述的预失真方法,所述预失真系统包括:预失真乘法器、复数神经网络、射频功放输出反馈回路、第一实时功率归一化单元和第二实时功率归一化单元;
    所述第一实时功率归一化单元的输入端为所述预失真系统的输入端,所述第一实时功率归一化单元的输出端与所述预失真乘法器的第一输入端、所述复数神经网络的第一输入端和所述复数神经网络的第二输入端连接,所述预失真乘法器的输出端与所述射频功放输出反馈回路的输入端连接,所述射频功放输出反馈回路的输出端为所述预失真系统的输出端,所述射频功放输出反馈回路的输出端与所述第二实时功率归一化单元的输入端连接,所述第二实时功率归一化单元的输出端与所述复数神经网络的第二输入端连接,所述复数神经网络的输出端与所述预失真乘法器的第二输入端连接。
  21. 一种设备,包括:
    至少一个处理器;
    存储装置,设置为存储至少一多个程序;
    当所述至少一个程序被所述至少一个处理器执行,使得所述至少一个处理器实现如权利要求1-17任一项所述的预失真方法。
  22. 一种存储介质,存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1-17任一项所述的预失真方法。
PCT/CN2021/093060 2020-06-02 2021-05-11 预失真方法、系统、设备及存储介质 WO2021244236A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2022541199A JP7451720B2 (ja) 2020-06-02 2021-05-11 プリディストーション方法、システム、装置及び記憶媒体
US17/788,042 US20230033203A1 (en) 2020-06-02 2021-05-11 Predistortion method and system, device, and storage medium
KR1020227021174A KR102707510B1 (ko) 2020-06-02 2021-05-11 전치 왜곡 방법, 시스템, 디바이스 및 저장 매체
EP21818081.8A EP4160915A4 (en) 2020-06-02 2021-05-11 PREDISTORTION METHOD AND SYSTEM, DEVICE, AND STORAGE MEDIUM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010491525.5 2020-06-02
CN202010491525.5A CN111900937A (zh) 2020-06-02 2020-06-02 一种预失真方法、系统、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021244236A1 true WO2021244236A1 (zh) 2021-12-09

Family

ID=73206773

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/093060 WO2021244236A1 (zh) 2020-06-02 2021-05-11 预失真方法、系统、设备及存储介质

Country Status (6)

Country Link
US (1) US20230033203A1 (zh)
EP (1) EP4160915A4 (zh)
JP (1) JP7451720B2 (zh)
KR (1) KR102707510B1 (zh)
CN (1) CN111900937A (zh)
WO (1) WO2021244236A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115037580A (zh) * 2022-07-12 2022-09-09 西安电子科技大学 基于自学习的射频预失真系统及方法
CN115099389A (zh) * 2022-06-02 2022-09-23 北京理工大学 基于复数神经网络的非训练相位重建方法及装置
CN117110700A (zh) * 2023-08-23 2023-11-24 易集康健康科技(杭州)有限公司 一种射频电源脉冲功率检测方法及系统

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111900937A (zh) * 2020-06-02 2020-11-06 中兴通讯股份有限公司 一种预失真方法、系统、设备及存储介质
US11496341B2 (en) * 2020-08-13 2022-11-08 Micron Technology, Inc. Wireless devices and systems including examples of compensating I/Q imbalance with neural networks or recurrent neural networks
CN112865721B (zh) * 2021-01-05 2023-05-16 紫光展锐(重庆)科技有限公司 信号处理方法、装置、设备及存储介质、芯片、模组设备
CN113517865B (zh) * 2021-04-20 2022-11-22 重庆邮电大学 一种基于记忆多项式的功放模型及其硬件实现方法
US12003261B2 (en) 2021-05-12 2024-06-04 Analog Devices, Inc. Model architecture search and optimization for hardware
US12028188B2 (en) * 2021-05-12 2024-07-02 Analog Devices, Inc. Digital predistortion with hybrid basis-function-based actuator and neural network
CN113468814B (zh) * 2021-07-09 2024-02-27 成都德芯数字科技股份有限公司 一种基于神经网络的数字预失真训练数据筛选方法及装置
CN113411056B (zh) * 2021-07-12 2022-11-29 电子科技大学 一种基于广义多项式和神经网络的非线性预失真方法
CN115086121A (zh) * 2022-06-15 2022-09-20 Oppo广东移动通信有限公司 预失真参数值的确定方法、装置、终端及存储介质
CN117394802A (zh) * 2022-07-01 2024-01-12 中兴通讯股份有限公司 数字预失真方案或硬件结构的实现方法、设备和介质
CN118345334B (zh) * 2024-06-17 2024-08-23 华兴源创(成都)科技有限公司 膜层厚度的校正方法、装置、计算机设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1335699A (zh) * 2000-07-20 2002-02-13 华为技术有限公司 一种宽带发射机的自适应数字预失真方法和装置
KR100695632B1 (ko) * 2006-02-15 2007-03-16 한국과학기술원 증폭기 비선형성 및 직교 복조 오류를 위한 동시 적응보상기법
CN105207716A (zh) * 2015-08-20 2015-12-30 上海交通大学 室内可见光通信发光二极管传输预失真系统及方法
US20170338841A1 (en) * 2016-05-19 2017-11-23 Analog Devices Global Wideband digital predistortion
CN109302156A (zh) * 2018-09-28 2019-02-01 东南大学 基于模式识别的功率放大器动态线性化系统及其方法
CN111900937A (zh) * 2020-06-02 2020-11-06 中兴通讯股份有限公司 一种预失真方法、系统、设备及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3684377B2 (ja) * 2001-09-04 2005-08-17 独立行政法人情報通信研究機構 通信歪み補償装置および補償方法
TWI425772B (zh) * 2011-06-08 2014-02-01 Mstar Semiconductor Inc 包絡偵測器與相關方法
US9906428B2 (en) * 2016-04-28 2018-02-27 Samsung Electronics Co., Ltd. System and method for frequency-domain weighted least squares
CN110765720B (zh) * 2019-09-12 2024-05-24 重庆大学 一种复值流水线递归神经网络模型的功放预失真方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1335699A (zh) * 2000-07-20 2002-02-13 华为技术有限公司 一种宽带发射机的自适应数字预失真方法和装置
KR100695632B1 (ko) * 2006-02-15 2007-03-16 한국과학기술원 증폭기 비선형성 및 직교 복조 오류를 위한 동시 적응보상기법
CN105207716A (zh) * 2015-08-20 2015-12-30 上海交通大学 室内可见光通信发光二极管传输预失真系统及方法
US20170338841A1 (en) * 2016-05-19 2017-11-23 Analog Devices Global Wideband digital predistortion
CN109302156A (zh) * 2018-09-28 2019-02-01 东南大学 基于模式识别的功率放大器动态线性化系统及其方法
CN111900937A (zh) * 2020-06-02 2020-11-06 中兴通讯股份有限公司 一种预失真方法、系统、设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4160915A4 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115099389A (zh) * 2022-06-02 2022-09-23 北京理工大学 基于复数神经网络的非训练相位重建方法及装置
CN115037580A (zh) * 2022-07-12 2022-09-09 西安电子科技大学 基于自学习的射频预失真系统及方法
CN115037580B (zh) * 2022-07-12 2023-09-08 西安电子科技大学 基于自学习的射频预失真系统及方法
CN117110700A (zh) * 2023-08-23 2023-11-24 易集康健康科技(杭州)有限公司 一种射频电源脉冲功率检测方法及系统
CN117110700B (zh) * 2023-08-23 2024-06-04 易集康健康科技(杭州)有限公司 一种射频电源脉冲功率检测方法及系统

Also Published As

Publication number Publication date
JP2023509699A (ja) 2023-03-09
CN111900937A (zh) 2020-11-06
KR20220104019A (ko) 2022-07-25
JP7451720B2 (ja) 2024-03-18
KR102707510B1 (ko) 2024-09-19
EP4160915A4 (en) 2024-07-17
EP4160915A1 (en) 2023-04-05
US20230033203A1 (en) 2023-02-02

Similar Documents

Publication Publication Date Title
WO2021244236A1 (zh) 预失真方法、系统、设备及存储介质
Paryanti et al. A direct learning approach for neural network based pre-distortion for coherent nonlinear optical transmitter
Uncini et al. Complex-valued neural networks with adaptive spline activation function for digital-radio-links nonlinear equalization
WO2021223504A1 (zh) 上下行信道互易的实现方法、通信节点和存储介质
CN110765720B (zh) 一种复值流水线递归神经网络模型的功放预失真方法
Zhao et al. Functional link neural network cascaded with Chebyshev orthogonal polynomial for nonlinear channel equalization
US20230299872A1 (en) Neural Network-Based Communication Method and Related Apparatus
Phartiyal et al. LSTM-deep neural networks based predistortion linearizer for high power amplifiers
CN111490737A (zh) 一种用于功率放大器的非线性补偿方法和设备
KR102550079B1 (ko) 딥러닝 기반의 전력 증폭기의 비선형 왜곡에 대한 보상 방법 및 장치
CN111200470A (zh) 一种适用于受非线性干扰的高阶调制信号传输控制方法
Pan et al. A predistortion algorithm based on accurately solving the reverse function of memory polynomial model
CN112865721B (zh) 信号处理方法、装置、设备及存储介质、芯片、模组设备
Logins et al. Block-structured deep learning-based OFDM channel equalization
CN112104580B (zh) 基于广义近似消息传递-稀疏贝叶斯学习的稀疏水声信道估计方法
Ney et al. Unsupervised ANN-based equalizer and its trainable FPGA implementation
CN115913844A (zh) 一种基于神经网络的mimo系统数字预失真补偿方法、装置、设备和存储介质
CN115913140B (zh) 运算精度控制的分段多项式数字预失真装置和方法
Shahkarami et al. Efficient deep learning of nonlinear fiber-optic communications using a convolutional recurrent neural network
Suresh et al. A fast learning fully complex-valued relaxation network (FCRN)
Imtiaz et al. Performance vs. complexity in NN pre-distortion for a nonlinear channel
Paryanti et al. Recurrent neural network for pre-distortion of combined nonlinear optical transmitter impairments with memory
Qiu et al. A novel neural network based equalizer for nonlinear power amplifiers
US10097280B2 (en) Systems and methods for communication using sparsity based pre-compensation
WO2020019240A1 (en) Method, apparatus and computer readable media for data processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21818081

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20227021174

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2022541199

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021818081

Country of ref document: EP

Effective date: 20230102