WO2021244236A1 - Predistortion method, system, device, and storage medium - Google Patents
Predistortion method, system, device, and storage medium
- Publication number
- WO2021244236A1 (PCT/CN2021/093060)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- complex
- predistortion
- vector
- output
- neural network
- Prior art date
Classifications
- H03F1/3241 — Modifications of amplifiers to reduce non-linear distortion using predistortion circuits
- H03F1/3247 — Predistortion circuits using feedback acting on predistortion circuits
- H03F1/3258 — Predistortion circuits based on polynomial terms
- H03F1/30 — Modifications of amplifiers to reduce influence of variations of temperature or supply voltage or other physical parameters
- H03F3/195 — High-frequency amplifiers, e.g. radio frequency amplifiers, with semiconductor devices only, in integrated circuits
- H03F3/245 — Power amplifiers of transmitter output stages with semiconductor devices only
- H03F2200/451 — Indexing scheme: the amplifier being a radio frequency amplifier
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/048 — Activation functions
- G06N3/084 — Backpropagation, e.g. using gradient descent
Definitions
- This application relates to the field of communication technology, for example, to a predistortion method, system, device, and storage medium.
- the power amplifier is a key component related to energy consumption and signal quality.
- the power amplifier model and predistortion model have become key technologies for efficient communication and high data rate communication.
- the nonlinear characteristics of the ideal predistortion model and the power amplifier model are strict mathematical inverses of each other. Both the predistortion model and the power amplifier model are typical nonlinear fitting problems. However, traditional predistortion models and power amplifier models have inherently insufficient nonlinear representation capabilities.
- This application provides a predistortion method, system, device, and storage medium, which solve the inherently insufficient nonlinear representation ability of traditional predistortion and power amplifier models, and at the same time solve the problem that, when a neural network is applied with separate processing, its response to dynamic changes of the power amplifier system lags, resulting in a poor predistortion correction effect.
- the embodiment of the present application provides a predistortion method, which is applied to a predistortion system, the predistortion system includes: a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop; the method includes:
- the embodiment of the present application also provides a predistortion system that implements the predistortion method provided in the embodiment of the present application.
- the predistortion system includes: a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop;
- the first input terminal of the predistortion multiplier is the input terminal of the predistortion system and is connected to the first input terminal and the second input terminal of the complex neural network;
- the output terminal of the predistortion multiplier is connected to the input terminal of the radio frequency power amplifier output feedback loop;
- the output terminal of the RF power amplifier output feedback loop is the output terminal of the predistortion system and is also connected to the second input terminal of the complex neural network;
- the output terminal of the complex neural network is connected to the second input terminal of the predistortion multiplier.
- the embodiment of the present application also provides a predistortion system that implements the predistortion method provided in the embodiment of the present application.
- the predistortion system includes: a predistortion multiplier, a complex neural network, a radio frequency power amplifier output feedback loop, a first real-time power normalization unit, and a second real-time power normalization unit; the input terminal of the first real-time power normalization unit is the input terminal of the predistortion system, and the output terminal of the first real-time power normalization unit is connected to the first input terminal of the predistortion multiplier, the first input terminal of the complex neural network, and the second input terminal of the complex neural network; the output terminal of the predistortion multiplier is connected to the input terminal of the radio frequency power amplifier output feedback loop; the output terminal of the RF power amplifier output feedback loop is the output terminal of the predistortion system and is connected to the input terminal of the second real-time power normalization unit, whose output terminal is connected to the second input terminal of the complex neural network.
- An embodiment of the present application also provides a device, including: one or more processors; and a storage device configured to store one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are enabled to implement the aforementioned predistortion method.
- An embodiment of the present application also provides a storage medium, the storage medium stores a computer program, and the computer program implements the aforementioned predistortion method when the computer program is executed by a processor.
- FIG. 1 is a schematic flowchart of a predistortion method provided by an embodiment of this application;
- Figure 2 is a schematic diagram of a traditional power amplifier model based on memory polynomials
- FIG. 3a is a schematic structural diagram of a predistortion system provided by an embodiment of this application.
- FIG. 3b is a schematic structural diagram of another predistortion system provided by an embodiment of this application.
- FIG. 4 is a performance effect diagram of generalized adjacent channel leakage ratio obtained by an embodiment of the application.
- FIG. 5 is an improvement effect diagram of generalized adjacent channel leakage ratio obtained by an embodiment of the application.
- FIG. 6 is a schematic structural diagram of a device provided by an embodiment of this application.
- FIG. 1 is a schematic flowchart of a predistortion method provided by an embodiment of the application.
- the method may be applicable to the case of performing predistortion processing.
- the method may be executed by a predistortion system. It can be implemented by software and/or hardware and integrated on the terminal device.
- A power amplifier usually has three characteristics: 1) the static nonlinearity of the device itself (Static Device Nonlinearity); 2) the linear memory effect (Linear Memory Effect), derived from the matching network (Matching Network) and the delays of device components; 3) the nonlinear memory effect (Nonlinear Memory Effect), which mainly comes from the non-ideal characteristics of the transistor trapping effect (Trapping Effect) and the bias network (Bias Network), and the dependence of the input level on temperature changes, etc.
- the predistorter module (Predistorter, PD) is located in front of the radio frequency power amplifier and is used to pre-process the nonlinear distortion that the signal will incur when passing through the power amplifier; predistortion implemented in the digital domain is called digital predistortion (Digital PD, DPD).
- Figure 2 is a schematic diagram of a traditional power amplifier model based on memory polynomials.
- the power amplifier model is equivalent to a zero-hidden-layer network with only an input layer and an output layer; it does not even qualify as a Multi-Layer Perceptron (MLP) network.
- the expressive ability of a network is positively related to its structural complexity; this is also the fundamental reason for the inherent inadequacy of the nonlinear expressive ability of traditional power amplifier models.
- the aforementioned predistortion and power amplifier nonlinearity are inverse functions to each other, so it can be expected that the same bottleneck will inevitably exist when traditional series or polynomial models are used to model DPD.
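To make the bottleneck concrete, the following is a minimal NumPy sketch of a conventional memory-polynomial power amplifier model of the kind Figure 2 depicts. The symbols K (nonlinear order), M (memory depth), and the coefficient layout are illustrative assumptions, not taken from the patent; the point is that the model is a single linear combination of fixed nonlinear features, i.e. a zero-hidden-layer network.

```python
import numpy as np

def memory_polynomial_pa(x, coeffs, K, M):
    """Conventional memory-polynomial PA model (illustrative):
    y(n) = sum_{m=0..M} sum_{k=1..K} coeffs[m, k-1] * x(n-m) * |x(n-m)|^(k-1).
    This is one linear layer over fixed basis functions, which is why its
    nonlinear expressive ability is inherently limited."""
    N = len(x)
    y = np.zeros(N, dtype=complex)
    for n in range(N):
        for m in range(M + 1):
            if n - m < 0:
                continue  # no samples before the start of the sequence
            xm = x[n - m]
            for k in range(1, K + 1):
                y[n] += coeffs[m, k - 1] * xm * abs(xm) ** (k - 1)
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=8) + 1j * rng.normal(size=8)   # toy complex baseband signal
coeffs = 0.1 * rng.normal(size=(3, 4))             # (M+1, K) = (3, 4)
y = memory_polynomial_pa(x, coeffs, K=4, M=2)
```

Fitting such a model reduces to linear least squares in the coefficients, which is convenient but caps the achievable nonlinearity, exactly the limitation the text attributes to traditional models.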
- the method and technology of using a neural network for predistortion adopt a separation of two links, the learning of the power amplifier's nonlinear characteristics and the predistortion processing, that is: in the first step, the neural network is not connected to the signal-processing main link and does not perform predistortion processing on the signal, but only accepts the feedback output of the power amplifier to train or learn the nonlinear characteristics of the power amplifier or their inverse; once the nonlinear characteristics are learned, the neural network is connected into the main link, or its trained weights are passed to a replica neural network on the main link, and only then does the predistortion processing start.
- This application provides a predistortion method, which is applied to a predistortion system. The predistortion system includes: a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop. As shown in FIG. 1, the method includes the following steps:
- the pre-distortion system can be considered as a system capable of realizing pre-distortion correction.
- the pre-distortion system includes a power amplifier.
- the pre-distortion system fuses the learning of the nonlinear characteristics of the power amplifier with pre-distortion correction; that is, the predistortion system is directly connected to the power amplifier during training, and the nonlinear characteristics of the power amplifier are learned while predistortion correction is performed, which improves efficiency and accuracy.
- the predistortion system also includes a predistortion multiplier and a complex neural network.
- the complex neural network can provide complex coefficients for the predistortion multiplier.
- the predistortion multiplier can be considered as a multiplier that realizes the predistortion function.
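The roles of the multiplier and the complex neural network can be sketched as follows. This is an assumed reading of the structure described above (function and variable names are illustrative): the complex neural network maps the input vector to a complex correction coefficient vector, and the predistortion multiplier applies it element-wise before the signal reaches the power amplifier.

```python
import numpy as np

def predistort(x_vec, correction_vec):
    """Predistortion multiplier: element-wise (dot) product of the input
    complex vector and the correction vector produced by the complex
    neural network (illustrative sketch of the forward path)."""
    return correction_vec * x_vec  # element-wise complex multiplication

x = np.array([1 + 1j, 0.5 - 0.2j])       # normalized input samples
c = np.array([0.9 + 0.1j, 1.1 + 0.0j])   # correction coefficients from the network
y = predistort(x, c)                      # predistorted vector sent to the PA
```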
- the predistortion system may further include a first real-time power normalization unit and a second real-time power normalization unit.
- first and second are only used to distinguish real-time power normalization units.
- the real-time power normalization unit can perform normalization processing.
- the first real-time power normalization unit may normalize the complex vector for training and input it to the predistortion multiplier and the complex neural network of the predistortion system.
- the second real-time power normalization unit may normalize the complex scalar corresponding to the training complex vector and feed it back to the complex neural network of the predistortion system.
- the training complex vector can be regarded as the complex vector used for training the predistortion system.
- the training complex vector can be input to the predistortion system to obtain the complex scalar output by the predistortion system, and the complex scalar can be output by the RF power amplifier output feedback loop in the predistortion system.
- the complex scalar and the training complex vector can be used as samples for training the predistortion system.
- the predistortion system can be trained, and the nonlinear characteristics of the power amplifier can be learned during the training process.
- This application can input the training complex vector and complex scalar to the predistortion system to train the predistortion system.
- the training can be supervised training.
- the weight parameters and bias parameters of each layer of the complex neural network can be updated through the loss function of the complex neural network.
- the condition for the end of the predistortion system training can be determined based on the generalized error vector magnitude and the generalized adjacent channel leakage ratio.
- the setting requirements can be set according to actual requirements, and there is no limitation here, such as determining the setting requirements based on predistortion indicators.
- if the values corresponding to the generalized error vector magnitude and the generalized adjacent channel leakage ratio are less than the respective set thresholds, the training of the predistortion system is completed; otherwise, the predistortion system continues to be trained.
- the set threshold can be set according to actual needs, and is not limited here. In the case that the generalized error vector magnitude and the generalized adjacent channel leakage ratio do not meet the set requirements, the predistortion system can continue to be trained using the training complex vector.
- This application uses both the generalized error vector magnitude and the generalized adjacent channel leakage ratio to meet the set requirements as the pre-distortion system training end condition, which can measure and reflect the universality and generalization ability of the pre-distortion system when performing pre-distortion correction.
- the generalization ability refers to the following: the complex vectors in the training set are repeatedly input to the predistortion system, so training is essentially the system's learning of known data; after this process, the parameters of the predistortion system are fixed, and then brand-new data from a test set unknown to the system are used to statistically examine the system's ability to adapt, or apply predistortion, to a wider range of new inputs or new data.
- the service complex vector can be input to the trained predistortion system to obtain a complex scalar for predistortion correction.
- the service complex vector may be a service vector when the predistortion system process is applied, and the vector is a complex vector.
- This application provides a predistortion method, which is applied to a predistortion system, the predistortion system includes: a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop.
- the method first inputs a complex vector for training into the predistortion system to obtain the complex scalar, output by the predistortion system, corresponding to the training complex vector; then, based on the training complex vector and the complex scalar, the predistortion system is trained until the generalized error vector magnitude and generalized adjacent channel leakage ratio corresponding to the predistortion system meet the set requirements; finally, the service complex vector is input into the trained predistortion system to obtain the complex scalar for predistortion correction.
- This application replaces the pre-distortion model in related technologies with a pre-distortion system.
- the complex neural network in the pre-distortion system has a rich structure and a strong nonlinear expression or fitting ability, which can effectively solve the problem of the inherently insufficient nonlinear representation ability of traditional pre-distortion and power amplifier models.
- the predistortion system uses a complex error back propagation algorithm for training.
- the complex error backpropagation algorithm can be considered as the complex-valued counterpart of the error backpropagation algorithm.
- the learning process of the error back propagation algorithm is composed of two processes: the forward propagation of the signal and the back propagation of the error.
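The two processes can be illustrated for a single complex fully connected layer. This is a generic complex-linear sketch, not the patent's exact update rules: the forward pass computes the layer output, and the backward pass turns the sensitivity of the loss with respect to that output into gradients for the layer's parameters and a sensitivity for the preceding layer (the conjugations follow the usual complex-gradient convention, assumed here).

```python
import numpy as np

def fc_forward(W, b, x):
    """Forward propagation through one complex fully connected layer
    with identity activation (illustrative)."""
    return W @ x + b

def fc_backward(W, x, delta_out):
    """Backward propagation: given the sensitivity delta_out of the loss
    with respect to the layer output, return the gradients for W and b
    and the sensitivity passed back to the previous layer."""
    dW = np.outer(delta_out, np.conj(x))   # gradient w.r.t. weights
    db = delta_out                         # gradient w.r.t. bias
    delta_in = np.conj(W).T @ delta_out    # sensitivity for previous layer
    return dW, db, delta_in

W = np.array([[1 + 0j, 2 + 0j]])
b = np.array([0j])
x = np.array([1 + 0j, 1j])
y = fc_forward(W, b, x)                    # forward: signal propagation
dW, db, delta_in = fc_backward(W, x, np.array([1 + 0j]))  # backward: error propagation
```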
- training the predistortion system includes:
- the system parameters can be considered as parameters required for the initialization of the predistortion system.
- the content of the system parameters is not limited here, and can be set according to actual conditions.
- the system parameters include but are not limited to: nonlinear order parameters, power amplifier memory effect parameters, and the initial output of the complex neural network included in the predistortion system.
- Initializing system parameters can be considered as setting system parameters, and the set values can be empirical values or determined based on historical system parameters.
- the training set can be considered as a set of complex vectors for training the predistortion system.
- the test set can be considered as a set of complex vectors for testing the generalization performance of the predistortion system.
- the complex vectors included in the training set and the test set are different, and the combination of the two constitutes the complex vectors for training.
- the complex vectors input to the predistortion system are successively intercepted from the training set and correspond one-to-one with the complex scalars output by the predistortion system; when the predistortion system is trained, the loss function of the complex neural network in the predistortion system is calculated based on the current element in the complex vector and the complex scalar, and the weight parameters and bias parameters of each layer of the complex neural network are then updated based on the loss function.
- the generalization performance of the predistortion system is checked based on the entire test set:
- the output set composed of all the complex scalars is used to calculate the generalized error vector magnitude (Generalization Error Vector Magnitude, GEVM) and generalized adjacent channel leakage ratio (Generalization Adjacent Channel Leakage Ratio, GACLR) performance, which is used to determine whether to end the training of the predistortion system.
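As an illustration, the generalized error vector magnitude over a test set can be computed as below. The patent does not reproduce its exact expression here, so this sketch uses the standard EVM definition (ratio of mean error power to mean reference power, in dB) as an assumption.

```python
import numpy as np

def gevm_db(reference, measured):
    """Generalized error vector magnitude over a test set, in dB:
    sqrt(mean error power / mean reference power), then 20*log10.
    Standard EVM definition, assumed here."""
    reference = np.asarray(reference)
    err = np.asarray(measured) - reference
    evm = np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))
    return 20 * np.log10(evm)

ref = np.array([1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j])  # ideal test-set outputs
meas = ref * 1.01                                   # system output with 1% gain error
```

For this toy 1% error, `gevm_db(ref, meas)` evaluates to -40 dB; in the scheme above, training ends once this metric (together with GACLR) falls below its set threshold.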
- the elements in the normalized complex vector are determined by the following calculation expression:
- x1(n) = x1,raw(n) / sqrt( (1/n) · Σ_{d=1..n} |x1,raw(d)|² )
- where x1(n) represents the nth element in the normalized complex vector, x1,raw(n) represents the nth element in the training complex vector, and x1,raw(d) represents the dth element in the training complex vector.
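The real-time power normalization can be sketched as follows, assuming (per the symbol definitions given, with the exact expression not fully preserved in the text) that each element is divided by the root-mean-square magnitude of all elements seen up to and including it:

```python
import numpy as np

def realtime_power_normalize(x_raw):
    """x1(n) = x1_raw(n) / sqrt(mean_{d<=n} |x1_raw(d)|^2):
    each element divided by the running RMS of the sequence so far
    (an assumed reading of the patent's normalization expression)."""
    x_raw = np.asarray(x_raw, dtype=complex)
    cum_power = np.cumsum(np.abs(x_raw) ** 2)      # sum of |x|^2 up to n
    n = np.arange(1, len(x_raw) + 1)
    rms = np.sqrt(cum_power / n)                   # running RMS
    return x_raw / rms

x = np.array([1 + 0j, 2 + 0j, 2 + 0j])
y = realtime_power_normalize(x)
```

Normalizing both the system input and the feedback path this way keeps the complex neural network's inputs at a consistent power level regardless of signal drive.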
- the initialization of the system parameters of the predistortion system includes:
- the initialization of the corresponding layer is completed according to the layer type corresponding to each layer in the complex neural network.
- Initializing the system parameters of the predistortion system also includes: setting the nonlinear order parameter, the power amplifier memory effect parameter, and the initial output of the complex neural network included in the predistortion system.
- the set memory effect parameters of the power amplifier are not limited to the power amplifier.
- the set nonlinear order parameters and power amplifier memory effect parameters are only used for the training of the predistortion system.
- the non-linear order parameter, power amplifier memory effect parameter, and the initial output value of the complex neural network included in the predistortion system are not limited. There is no limitation on the execution order of the operation of setting the nonlinear order parameter, the power amplifier memory effect parameter and the initial output of the complex neural network included in the predistortion system and the operation of initializing each layer of the complex neural network.
- the initialization of each layer of the complex neural network can be completed based on the corresponding initialization setting parameters, and the initialization setting parameters are not limited.
- the initialization setting parameters can be distribution types and distribution parameters.
- the completing the initialization of the corresponding layer according to the layer type corresponding to each layer in the complex neural network includes:
- the weight parameter and the bias parameter of the corresponding layer are initialized.
- if the current (to-be-initialized) layer is a fully connected layer, the weight parameters and bias parameters of that layer are initialized according to the random-initialization distribution type (for example: Gaussian distribution, uniform distribution, etc.) and distribution parameters (mean, variance, standard deviation, etc.) set for the layer.
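A sketch of such an initialization for a complex fully connected layer follows. The function name, the independent initialization of real and imaginary parts, and the variance-matching rule for the uniform case are all illustrative assumptions, not specified by the patent.

```python
import numpy as np

def init_complex_fc(n_in, n_out, dist="gaussian", mean=0.0, std=0.1, seed=None):
    """Initialize the complex weight matrix and bias vector of a fully
    connected layer from the configured distribution type and parameters.
    Real and imaginary parts are drawn independently (illustrative)."""
    rng = np.random.default_rng(seed)
    if dist == "gaussian":
        draw = lambda shape: rng.normal(mean, std, shape)
    elif dist == "uniform":
        half = std * np.sqrt(3.0)  # half-width chosen so variance matches std**2
        draw = lambda shape: rng.uniform(mean - half, mean + half, shape)
    else:
        raise ValueError(f"unknown distribution type: {dist}")
    W = draw((n_out, n_in)) + 1j * draw((n_out, n_in))
    b = draw((n_out,)) + 1j * draw((n_out,))
    return W, b

W, b = init_complex_fc(4, 2, dist="uniform", seed=0)
```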
- the training of the predistortion system based on the training set and the complex scalar includes:
- the complex vector in the training set is input into the predistortion system according to the element index.
- the length of the complex vector input each time is determined based on the memory effect parameters of the power amplifier.
- the complex vector input to the predistortion system each time is determined by the historical element and the current element in the training set.
- the output of the RF power amplifier output feedback loop in the predistortion system is passed through the second real-time power normalization unit included in the predistortion system and then input into the complex neural network; or
- the output of the RF power amplifier output feedback loop in the predistortion system is directly input into the complex neural network;
- according to the partial derivative or sensitivity of the loss function of the complex neural network in the predistortion system with respect to each element of the correction vector output by the complex neural network, the partial derivatives or sensitivities of the loss function with respect to the weight parameters and bias parameters of each layer in the complex neural network are determined; the weight parameters and bias parameters of the corresponding layer are updated according to the determined partial derivatives or sensitivities of each layer;
- the expression of the complex vector input to the predistortion system each time is as follows:
- [x1(n), x1(n-1), ..., x1(n-m), ..., x1(n-M1)], where M1 is the power amplifier memory effect parameter;
- 0 ≤ M1 ≪ TrainSize, where TrainSize is the length of the training set;
- x1(n) is the nth element in the training set and is the current element;
- m is an integer greater than 1 and less than M1;
- x1(n-1), x1(n-m), and x1(n-M1) are the (n-1)th, (n-m)th, and (n-M1)th elements of the training set, and all are historical elements.
- the output of the RF power amplifier output feedback loop in the predistortion system is a complex scalar.
- the complex vector in the training set is input to the predistortion system according to the element index, that is, to the predistortion multiplier and the complex neural network in the predistortion system.
- the length of the complex vector input to the predistortion system each time can be determined by the power amplifier memory effect parameter, such as inputting a current element in the training set each time, and then the power amplifier memory effect parameter determines the number of historical elements input to the predistortion system.
- the method for determining the number of historical elements is not limited, for example, the number of historical elements is not greater than the memory effect parameter of the power amplifier.
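Building each input vector from the current element plus M1 historical elements can be sketched as below, assuming (as one possible convention, not stated by the patent) that positions before the start of the training set are padded with zeros:

```python
import numpy as np

def input_window(train, n, M1):
    """Complex vector fed to the predistortion system at element index n:
    the current element train[n] plus M1 historical elements
    train[n-1] ... train[n-M1], with zeros before the start of the set."""
    train = np.asarray(train, dtype=complex)
    idx = n - np.arange(M1 + 1)                          # n, n-1, ..., n-M1
    vec = np.where(idx >= 0, train[np.clip(idx, 0, None)], 0)
    return vec

train = np.array([1, 2, 3, 4, 5], dtype=complex)
v = input_window(train, n=3, M1=2)    # current element 4, history 3 and 2
```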
- the loss function of the complex neural network in the predistortion system can be determined; then, based on the partial derivative or sensitivity of the loss function with respect to each element of the correction vector output by the complex neural network, the partial derivatives or sensitivities of the loss function with respect to the weight parameters and bias parameters of each layer in the complex neural network are determined, so as to update the weight parameters and bias parameters of each layer of the complex neural network and thereby train the predistortion system.
- the relationship between the complex vector input to the complex neural network in the predistortion system and the correction vector output by the complex neural network is as follows:
- ComplexNN represents the composite computing function of the internal layers of the complex neural network.
- the output of the predistortion multiplier is the complex predistortion vector formed from the nth, (n-1)th, ..., (n-M1+1)th, and (n-M1)th elements: the element-wise (dot) product of the correction vector output by the complex neural network and the complex vector input to the predistortion multiplier.
- PA represents the processing function that the output feedback loop of the radio frequency power amplifier applies to its input signal to produce the loop's output.
- the relationship between the input and output of the second real-time power normalization unit is as follows:
- x2(n) = x2,raw(n) / sqrt( (1/n) · Σ_{d=1..n} |x2,raw(d)|² )
- where x2(n) is the nth output of the second real-time power normalization unit, x2,raw(n) is the nth input of the second real-time power normalization unit, n is a positive integer, and d is a positive integer greater than or equal to 1 and less than or equal to n.
- determining, according to the partial derivative or sensitivity of the loss function of the complex neural network in the predistortion system with respect to each element of the correction vector output by the complex neural network, the partial derivatives or sensitivities of the loss function with respect to the weight parameters and bias parameters of each layer includes:
- the loss function is calculated and determined by the following calculation expression:
- in the case that the predistortion system includes the second real-time power normalization unit, the feedback amount of the output feedback loop of the radio frequency power amplifier is the output obtained after inputting the output of the feedback loop into the second real-time power normalization unit; in the case that the predistortion system does not include the second real-time power normalization unit, the feedback amount is the output of the output feedback loop of the RF power amplifier itself.
- the partial derivative or sensitivity of the loss function with respect to each element of the correction vector output by the complex neural network can be determined based on the partial derivatives or sensitivities of the loss function with respect to the outputs of the predistortion multiplier, the RF power amplifier output feedback loop, and/or the second real-time power normalization unit in the predistortion system.
- this application can update the weight parameters and bias parameters of each layer of the complex neural network based on the partial derivative or sensitivity with respect to the correction vector.
- the calculation expression of the partial derivative or sensitivity of each element of the correction vector is as follows:
- δ1,m is determined by the following calculation expression:
- δl,m is determined by the following calculation expression:
- c_{l,m} are the complex coefficients obtained by the least squares algorithm, according to the memory polynomial model, from the complex vectors in the training set and the outputs obtained by inputting them into the output feedback loop of the RF power amplifier; the remaining factor is the partial derivative or sensitivity of the loss function with respect to the complex scalar output of the RF power amplifier output feedback loop.
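To make the least-squares step concrete, the sketch below fits memory-polynomial coefficients c_{l,m} by complex least squares, assuming the common basis x(n-m)·|x(n-m)|^(l-1); the basis ordering and the function name are illustrative assumptions, not the patent's exact formulation:

```python
import numpy as np

def fit_memory_polynomial(x, y, L=5, M=3):
    """Least-squares fit of memory-polynomial coefficients c_{l,m} for
    y(n) ~ sum_{m=0}^{M-1} sum_{l=1}^{L} c_{l,m} x(n-m)|x(n-m)|^{l-1}.
    Returns an (M, L) complex coefficient array. Sketch only."""
    x = np.asarray(x, dtype=complex)
    y = np.asarray(y, dtype=complex)
    N = len(x)
    cols = []
    for m in range(M):
        # delayed copy x(n-m), zero-padded at the sequence start
        xm = np.concatenate([np.zeros(m, dtype=complex), x[:N - m]])
        for l in range(1, L + 1):
            cols.append(xm * np.abs(xm) ** (l - 1))
    Phi = np.column_stack(cols)                  # N x (M*L) basis matrix
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # complex least squares
    return c.reshape(M, L)
```

For data generated exactly by such a model, the fit recovers the generating coefficients.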
- the partial derivative or sensitivity of the weight parameter and the bias parameter of the fully connected layer inside the complex neural network are respectively:
- f′(·) represents the derivative of the neuron activation function with respect to its input signal; the remaining quantities are, respectively, the partial derivative or sensitivity of the loss function with respect to the weight parameter of the fully connected layer, the partial derivative or sensitivity with respect to the bias parameter of the fully connected layer, and the partial derivative or sensitivity of the loss function with respect to the layer output.
- the partial derivative or sensitivity of the loss function with respect to this quantity is equal to its partial derivative or sensitivity with respect to the correction vector.
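The fully connected layer sensitivities described above can be sketched as follows. The conjugation conventions (which factor is conjugated so that gradient descent reduces a real-valued loss) are assumptions of this sketch, as is the simplification that the activation derivative f′(·) has already been folded into the upstream sensitivity:

```python
import numpy as np

def fc_backward(x_in, w, delta_out):
    """Complex fully connected layer back-propagation sketch: given the
    sensitivity delta_out of the loss w.r.t. the layer output, return the
    weight gradient, bias gradient and the sensitivity passed backward."""
    grad_w = np.outer(delta_out, np.conj(x_in))  # dLoss/dW
    grad_b = np.asarray(delta_out)               # dLoss/db
    delta_in = np.conj(w).T @ delta_out          # sensitivity for previous layer
    return grad_w, grad_b, delta_in
```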
- the weight parameter and the partial derivative or sensitivity of the bias parameter of the qth convolution kernel of the convolution layer inside the complex neural network are respectively:
- Each feature map can be in the form of a complex vector.
- the weight parameters and bias parameters of the fully connected layer inside the complex neural network are updated by the following expressions:
- testing the trained predistortion system based on the test set to obtain the error vector magnitude and adjacent channel leakage ratio corresponding to the predistortion system includes:
- the length of the complex vector input each time is determined based on the memory effect parameters of the power amplifier.
- the complex vector input to the predistortion system each time is obtained from the combination of the historical element and the current element in the test set.
- the output of the RF power amplifier output feedback loop in the predistortion system is input to the predistortion system, and the generalized error vector magnitude and generalized adjacent channel leakage ratio corresponding to the predistortion system are determined.
- the method used to test the trained predistortion system based on the test set is similar to the method used to train the predistortion system based on the training set. The difference is that, for the test set, there is no need to update the weight parameters and bias parameters of each layer of the complex neural network; instead, the generalized error vector magnitude and generalized adjacent channel leakage ratio are calculated directly to determine whether they meet the set requirements. When calculating the generalized error vector magnitude and generalized adjacent channel leakage ratio, they can be determined based on the feedback amount of the RF power amplifier output feedback loop and the corresponding complex vectors in the test set.
- inputting the output of the RF power amplifier output feedback loop in the predistortion system to the predistortion system includes passing the output of the RF power amplifier output feedback loop through the second real-time power normalization unit included in the predistortion system and then inputting it to the complex neural network, or directly inputting the output of the RF power amplifier output feedback loop into the complex neural network.
- the generalized error vector magnitude and the generalized adjacent channel leakage ratio are respectively determined by the following calculation expressions:
- GEVM 1 represents the generalized error vector magnitude, and the second expression gives the generalized adjacent channel leakage ratio; the feedback amount of the output feedback loop of the RF power amplifier has as its nth element the nth feedback sample of the loop, and the corresponding complex vector in the test set has as its nth element the nth test sample; TestSize represents the length of the test set; the sum of TestSize and TrainSize is N 1 , where N 1 represents the length of the complex vector used for training; HBW represents half of the effective signal bandwidth; GBW represents half of the guard bandwidth; NFFT represents the number of discrete Fourier transform points; Win NFFT is the coefficient vector of a window function with a window length of NFFT; an NFFT-long signal segment is randomly intercepted from the feedback sequence, with the intercept start a uniformly distributed random positive integer in the value range [1, TestSize]; K is the number of random intercepts.
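As a hedged illustration of the GEVM statistic described above, the sketch below uses the common definition of EVM as the ratio of error power to reference power in dB, computed between the feedback amount and the corresponding test-set vector. The function name and the exact scaling are assumptions of this sketch:

```python
import numpy as np

def generalized_evm_db(feedback, reference):
    """Generalized EVM sketch: 10*log10 of the ratio between the total
    error power (feedback minus reference) and the reference power."""
    feedback = np.asarray(feedback, dtype=complex)
    reference = np.asarray(reference, dtype=complex)
    err = np.sum(np.abs(feedback - reference) ** 2)
    ref = np.sum(np.abs(reference) ** 2)
    return 10.0 * np.log10(err / ref)
```

For example, a feedback that is 10% larger than the reference everywhere yields -20 dB.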
- in the case that the predistortion system includes the second real-time power normalization unit, the feedback amount of the RF power amplifier output feedback loop can be taken as the output of the second real-time power normalization unit; in the case that the predistortion system does not include the second real-time power normalization unit, the feedback amount of the output feedback loop of the radio frequency power amplifier may be the output of the output feedback loop of the radio frequency power amplifier.
- the predistortion method provided by the present application solves the problems of inadequate non-linear representation capability and lacking generalization capability in the traditional power amplifier model and the predistortion model, and obtains better Error Vector Magnitude (EVM) performance and Adjacent Channel Leakage Ratio (ACLR) performance.
- this application adopts a neural network with a richer structure and form to carry out system modeling, which solves the inherent shortcoming of traditional power amplifier models and DPD models, namely inadequate non-linear expression ability; at the same time, it completely abandons the separated processing adopted by existing methods and technologies that apply neural networks to predistortion. In this way, the characteristic learning of the power amplifier and the predistortion correction are integrated and processed as a whole from beginning to end, and the AI-DPD integrated solution is proposed for the first time, that is, an integrated artificial intelligence solution that replaces DPD.
- the known training complex vector is sent and passes through the predistortion multiplier, the complex neural network and the radio frequency power amplifier output feedback loop; together with the training complex vector, the result is used for training the AI-DPD integrated solution system (i.e. the predistortion system); when the training reaches the required GEVM and GACLR, the training stops; at the same time, the service complex vector is sent through the predistortion system to output the complex scalar obtained by predistortion correction.
- Fig. 3a is a schematic structural diagram of a predistortion system provided by an embodiment of the application.
- the predistortion system, that is, the AI-DPD integrated solution system, mainly includes: a first real-time power normalization (i.e. PNorm) unit, a second real-time power normalization unit, a predistortion multiplier (i.e. DPD Multiplier) unit, a complex neural network (i.e. Complex Neural Network) unit, and an RF power amplifier output feedback loop.
- the complex neural network unit may include a complex neural network, a selector (ie, MUX), and an adder.
- the output feedback loop of the radio frequency power amplifier includes a digital-to-analog converter unit (namely D/A), a radio frequency modulation unit, a power amplifier unit, an analog-to-digital converter unit (namely A/D), and a radio frequency demodulation unit.
- the complex neural network is a kind of neural network: a complex number includes a real part (Real Part, hereinafter abbreviated as the I path) and an imaginary part (Imaginary Part, hereinafter abbreviated as the Q path), and usually refers to the case where the imaginary part is not 0, to distinguish complex numbers from real numbers.
- the input and output of the neural network can be directly complex variables, vectors, matrices or tensors, or take the form of a combination of the complex I and Q paths; the neuron activation function of each layer (i.e. Layer) of the neural network and the input and output of all other processing functions can be directly complex variables, vectors, matrices or tensors, or take the form of a combination of the complex I and Q paths; the neural network can be trained with the complex error back propagation algorithm (Complex Error Back Propagation Algorithm, Complex BP), or with the real error back propagation algorithm (BP).
- the complex error back propagation algorithm includes but is not limited to the following steps:
- Step 1 Initialize the system parameters of the AI-DPD integrated solution.
- for example, the number of layers L 1 is 5, and each element of the initial predistortion complex correction vector is 1.
- following the index of each layer, the layers can be initialized one by one in the forward direction, or all layers can be initialized in parallel at the same time, with the corresponding initialization step completed according to the layer type.
- the initialization of the different types of layers mainly includes but is not limited to the following steps:
- Step 1-1: If the current (to-be-initialized) layer is a fully connected layer, initialize the weight parameters and bias parameters of this layer according to the randomly initialized distribution type (such as Gaussian distribution, uniform distribution, etc.) and distribution parameters (mean, variance, standard deviation, etc.) set for this layer. Step 1-2: If the current (to-be-initialized) layer is a convolutional layer, initialize the weight parameters and bias parameters of each convolution kernel of this layer according to the randomly initialized distribution type and distribution parameters set for this layer. Step 1-3: If the current (to-be-initialized) layer is another type of layer, such as a fractionally strided convolution (i.e. Fractionally Strided Convolutions) layer, complete the initialization of the weight and bias parameters of this layer according to the initialization parameters set for this type of layer (such as the randomly initialized distribution type and distribution parameters).
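Steps 1-1 and 1-2 can be sketched as below. The distribution names, default parameters and shapes here are illustrative assumptions; the patent only requires that each layer's complex weights and biases be drawn from its configured distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(layer_type, shape, dist="gaussian", mean=0.0, std=0.1):
    """Draw complex weight and bias parameters for one layer according to
    its configured random distribution (Step 1-1 / Step 1-2 sketch)."""
    def draw(sz):
        if dist == "gaussian":
            return rng.normal(mean, std, sz) + 1j * rng.normal(mean, std, sz)
        if dist == "uniform":
            half = np.sqrt(3) * std  # uniform interval with matching std
            return rng.uniform(-half, half, sz) + 1j * rng.uniform(-half, half, sz)
        raise ValueError(f"unknown distribution: {dist}")
    if layer_type == "fully_connected":   # shape = (out_dim, in_dim)
        return draw(shape), draw(shape[0])
    if layer_type == "convolutional":     # shape = (num_kernels, kernel_len)
        return draw(shape), draw(shape[0])
    raise ValueError(f"unsupported layer type: {layer_type}")
```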
- Step 2: The AI-DPD integrated solution system is trained based on the training set.
- take a known training complex vector sequence of length N 1 and perform real-time power normalization (PNorm) on it; because there are many variations of real-time power normalization, the following is just one example:
- the sequence after real-time power normalization is divided into a training set and a test set, whose lengths are respectively recorded as TrainSize and TestSize; the training set is used for the overall learning or training of the AI-DPD integrated solution system; the test set is used in the second half of training to test the generalization performance of the solution system on new data; this performance includes the generalized EVM performance and generalized ACLR performance, which are defined as follows:
- HBW refers to half of the effective signal bandwidth
- GBW refers to half of the protection bandwidth
- NFFT is the number of discrete Fourier transform (DFT) points
- Win NFFT is the coefficient vector of a window function with a window length of NFFT; for example, the window function can use a Blackman window; an NFFT-long signal segment is randomly intercepted from the sequence, with the intercept start a uniformly distributed random positive integer in the value range [1, TestSize]; K is the number of random intercepts; the dot denotes element-wise multiplication.
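The windowed-spectrum ACLR statistic just described can be sketched as follows: K windowed NFFT-point spectra are taken at random offsets, and the mean adjacent-channel power is compared with the mean in-band power. Treating HBW and GBW as FFT-bin counts, and the function name itself, are assumptions of this sketch:

```python
import numpy as np

def generalized_aclr_db(y, nfft=256, hbw=50, gbw=10, k=4, seed=0):
    """Hedged GACLR sketch: average adjacent-channel vs in-band power over
    K randomly intercepted, Blackman-windowed NFFT-point spectra, in dB."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=complex)
    win = np.blackman(nfft)                 # e.g. a Blackman window
    in_band = adj = 0.0
    for _ in range(k):
        start = rng.integers(0, len(y) - nfft + 1)  # random NFFT-long intercept
        spec = np.fft.fftshift(np.fft.fft(win * y[start:start + nfft]))
        p = np.abs(spec) ** 2
        c = nfft // 2
        in_band += p[c - hbw:c + hbw].sum()
        adj += p[:c - hbw - gbw].sum() + p[c + hbw + gbw:].sum()
    return 10.0 * np.log10(adj / in_band)
```

A clean in-band tone should give a strongly negative value, since almost all of its power falls inside the HBW region.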
- the training of the AI-DPD integrated solution system includes but is not limited to the following sub-steps:
- M 1 is the power amplifier memory effect parameter, 0 ≤ M 1 ≤ TrainSize.
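Step 2-1 assembles each input from the current element plus historical elements according to the memory effect parameter. A sketch, where the zero-padding at the sequence start and the exact indexing are assumptions:

```python
import numpy as np

def build_memory_input(train_seq, n, M1):
    """Assemble the complex vector fed to the system at index n from the
    current element and M1 historical elements (memory effect sketch)."""
    train_seq = np.asarray(train_seq, dtype=complex)
    start = max(0, n - M1)
    window = train_seq[start:n + 1]
    if len(window) < M1 + 1:  # zero-pad before the sequence start
        window = np.concatenate(
            [np.zeros(M1 + 1 - len(window), dtype=complex), window])
    return window
```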
- Step 2-2: Pass the complex vector through the complex neural network (Complex Neural Network, hereinafter referred to as ComplexNN) unit to obtain the predistortion complex correction vector; the relationship between the input and output of this unit is shown in the following formula:
- Step 2-3: Pass the complex vector through the predistortion multiplier (DPD Multiplier) unit to obtain the predistortion-corrected complex vector; the relationship between the input and output of this unit is shown in the following formula:
- Step 2-4: Input the predistortion-corrected complex vector into the output feedback loop of the radio frequency power amplifier to obtain its output; the relationship between the output of the predistortion multiplier and the output of the RF power amplifier output feedback loop is shown in the following equation:
- the power amplifier (PA) unit in the output feedback loop of the radio frequency power amplifier can be any actual power amplifier product; in other words, this embodiment does not have any restriction on the nonlinear characteristics of the power amplifier model.
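Steps 2-3 and 2-4 can be shown in miniature. Since the embodiment places no restriction on the PA's nonlinear characteristics, the toy PA below is a stand-in with mild third-order distortion; it and the function names are assumptions of this sketch:

```python
import numpy as np

def toy_pa(z):
    """Stand-in PA nonlinearity: any nonlinear function works here, as the
    patent imposes no restriction on the PA model."""
    return z + 0.1 * z * np.abs(z) ** 2

def forward_pass(x, correction):
    """Apply the predistortion correction by element-wise (dot)
    multiplication, then pass the result through the PA (Steps 2-3, 2-4)."""
    z = np.asarray(x, dtype=complex) * np.asarray(correction, dtype=complex)
    return toy_pa(z)
```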
- Step 2-5: Pass the final original output of the AI-DPD integrated solution through radio frequency demodulation and analog-to-digital conversion (A/D), then input it to the real-time power normalization (PNorm) unit to obtain the power amplifier output feedback.
- Step 2-6: According to the power amplifier output feedback and the complex vector, calculate the loss function (Loss Function) of the complex neural network. Because there are many variations of additional regularization (Regularization) terms, the following loss function expression is just a simple example:
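A simple example loss in the spirit of Step 2-6 is the mean squared error between the power amplifier output feedback and the target complex vector; regularization terms are omitted here, as the text notes, and the 1/2 scaling is an assumption of this sketch:

```python
import numpy as np

def dpd_loss(feedback, target):
    """Half mean-squared complex error between PA feedback and target."""
    e = np.asarray(feedback, dtype=complex) - np.asarray(target, dtype=complex)
    return 0.5 * np.mean(np.abs(e) ** 2)
```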
- Step 2-7: According to formula (2-16), calculate the partial derivative or sensitivity of the loss function with respect to the power amplifier output feedback.
- Step 2-8: According to the above sensitivity, calculate, according to the following formula (2-17), the partial derivative or sensitivity of the loss function with respect to the final original output of the AI-DPD integrated solution.
- Step 2-9: According to the above sensitivity, use the following formula (2-18) to calculate the intermediate partial derivative or sensitivity δl,m:
- c_{l,m} are the complex coefficients obtained by the least squares algorithm, according to the memory polynomial model, from the complex vectors in the training set and the outputs obtained by inputting them into the output feedback loop of the RF power amplifier; conj(·) takes the conjugate of a complex number.
- Step 2-10: According to the above sensitivity δl,m, calculate, according to the following formula (2-19), the partial derivative or sensitivity of the loss function with respect to each element of the predistortion-corrected complex vector.
- Step 2-11: According to the above sensitivity, calculate, according to the following formula (2-21), the partial derivative or sensitivity of the loss function with respect to each element of the predistortion complex correction vector.
- Step 2-12: According to the above sensitivity, calculate, layer by layer in the reverse order of the forward operation of the complex neural network (Complex Neural Network) unit, the partial derivatives or sensitivities of the loss function with respect to the weight and bias parameters of the internal layers, including but not limited to the following sub-steps:
- Step 2-12-1: If the current layer to be calculated is a fully connected layer (Full-Connected Layer), calculate, according to formula (2-22) and formula (2-23) respectively, the sensitivities of the loss function with respect to the complex weight parameters and complex bias parameters of this layer.
- the p-th (p = 1, 2, ..., P) complex vector output by the previous layer (the j-th layer) of the convolutional layer and the q-th complex vector output by the current convolutional layer (the k-th layer) are involved, together with the partial derivative or sensitivity of the loss function with respect to the latter; Conv(·) represents the convolution operation; Fliplr(·) represents the position reversal of the input vector.
- Step 2-13: Update the parameters according to the sensitivities of the loss function, calculated for each layer, with respect to the weight parameters and bias parameters; because different training algorithms update parameters in many different ways, the following is just a simple example:
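One simple parameter-update rule for Step 2-13 is plain gradient descent; the text notes many other training algorithms could be used, so the learning rate and update form below are assumptions of this sketch:

```python
import numpy as np

def sgd_update(weights, bias, grad_w, grad_b, lr=0.01):
    """Plain gradient-descent update of a layer's complex weight and bias
    parameters from their loss sensitivities (Step 2-13 sketch)."""
    return weights - lr * grad_w, bias - lr * grad_b
```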
- Step 3: The AI-DPD integrated solution system computes performance statistics based on the test set.
- Step 3-2: Pass the complex vector through the complex neural network (Complex Neural Network) unit to obtain the predistortion complex correction vector.
- Step 3-3: Pass the complex vector through the predistortion multiplier (DPD Multiplier) unit to obtain the predistortion-corrected complex vector.
- Step 3-4: Input the predistortion-corrected complex vector into the output feedback loop of the RF power amplifier to obtain the output.
- Step 3-5: Pass the output through radio frequency demodulation and analog-to-digital conversion (A/D), then input it to the real-time power normalization (PNorm) unit to obtain the power amplifier output feedback.
- Step 3-6: According to the above output feedback and the test set, statistically calculate, according to the above formula (2-3) and formula (2-4), the generalized EVM performance (GEVM) and generalized ACLR performance (GACLR) of the system based on the test set.
- Step 3-7: If the GEVM and GACLR meet the set index requirements, stop the training of the AI-DPD integrated solution system; otherwise, return to all steps in Step 2 and start a new round of training.
- the known training complex vector is sent, and the complex scalar obtained by the predistortion correction is output through the predistortion multiplier, the complex neural network, and the RF power amplifier output feedback loop. It is fed back to the sending end and used, together with the training complex vector, for training the AI-DPD integrated solution system; when the training reaches the required generalized EVM performance and generalized ACLR performance, the training stops; at the same time, the service complex vector is sent through the predistortion system to output the complex scalar obtained by predistortion correction.
- Fig. 3b is a schematic structural diagram of another predistortion system provided by an embodiment of the application. See Fig. 3b.
- the predistortion system, also called the AI-DPD integrated solution system, mainly includes: a predistortion multiplier unit, a complex neural network unit, and an RF power amplifier output feedback loop.
- the predistortion complex correction vector for the input vector is also the output of the complex neural network.
- y 2 is the final output of the AI-DPD integrated solution; y 2 is a complex scalar.
- the complex neural network is a kind of neural network: a complex number includes two parts, a real part and an imaginary part, and usually refers to the case where the imaginary part is not 0, to distinguish complex numbers from real numbers; the input and output of the neural network can be directly complex variables, vectors, matrices or tensors, or take the form of a combination of the I and Q paths of complex numbers; the neuron activation function of each layer of the neural network and the input and output of all other processing functions can be directly complex variables, vectors, matrices or tensors, or take the form of a combination of the I and Q paths of complex numbers; the neural network can be trained using the complex error back propagation algorithm or the real error back propagation algorithm.
- the complex error back propagation algorithm includes but is not limited to the following steps:
- Step1 Initialization of the system parameters of the AI-DPD integrated solution.
- for example, the number of layers L 2 is 5, and each element of the initial predistortion complex correction vector is 1.
- following the index of each layer, the layers can be initialized one by one in the forward direction, or all layers can be initialized in parallel at the same time, with the corresponding initialization step completed according to the layer type.
- the initialization of the different types of layers mainly includes but is not limited to the following steps:
- Step1-1: If the current (to-be-initialized) layer is a fully connected layer, initialize the weight parameters and bias parameters of this layer according to the randomly initialized distribution type (for example: Gaussian distribution, uniform distribution, etc.) and distribution parameters (mean, variance, standard deviation, etc.) set for this layer. Step1-2: If the current (to-be-initialized) layer is a convolutional layer, initialize the weight parameters and bias parameters of each convolution kernel according to the randomly initialized distribution type and distribution parameters set for this layer. Step1-3: If the current (to-be-initialized) layer is another type of layer, complete the initialization of the weight and bias parameters of this layer according to the initialization parameters set for this type of layer.
- Step2: The AI-DPD integrated solution system is trained based on the training set. Take a known training complex vector sequence of length N 2 and divide it, according to a certain ratio (for example: 0.5:0.5), into a training set and a test set, whose lengths are respectively recorded as TrainSize and TestSize; the training set is used for the overall learning or training of the AI-DPD integrated solution system; the test set is used in the second half of training to test the generalization performance of the solution system on new data; this performance includes the generalized EVM performance (GEVM) and generalized ACLR performance (GACLR), which are defined as follows:
- HBW refers to half of the effective signal bandwidth
- GBW refers to half of the protection bandwidth
- NFFT is the number of discrete Fourier transform (DFT) points
- Win NFFT is the coefficient vector of a window function with a window length of NFFT; for example, the window function can use a Blackman window; an NFFT-long signal segment is randomly intercepted from the sequence, with the intercept start a uniformly distributed random positive integer in the value range [1, TestSize]; K is the number of random intercepts; the dot denotes element-wise multiplication.
- the training of the AI-DPD integrated solution system includes but is not limited to the following sub-steps:
- M 2 is the memory effect parameter of the power amplifier, 0 ≤ M 2 ≤ TrainSize.
- Step2-2: Pass the complex vector through the complex neural network unit (ComplexNN) to obtain the predistortion complex correction vector; the relationship between the input and output of this unit is shown in the following formula:
- Step2-3: Pass the complex vector through the predistortion multiplier (DPD Multiplier) unit to obtain the predistortion-corrected complex vector; the relationship between the input and output of this unit is shown in the following formula:
- Step2-4: Input the predistortion-corrected complex vector into the output feedback loop of the radio frequency power amplifier to obtain the output, as shown in the following formula:
- the power amplifier (PA) unit in the output feedback loop of the radio frequency power amplifier can be any actual power amplifier product; in other words, this embodiment does not have any restriction on the nonlinear characteristics of the power amplifier model.
- Step2-5: According to the above output and the complex vector, calculate the loss function of the complex neural network. Because there are many variations of additional regularization (Regularization) terms, the following loss function expression is just a simple example:
- Step2-6: According to formula (2-47), calculate the partial derivative or sensitivity of the loss function with respect to the power amplifier output feedback.
- Step2-7: According to the above sensitivity, use the following formula (2-48) to calculate the intermediate partial derivative or sensitivity δl,m:
- c_{l,m} are the complex coefficients obtained by the least squares algorithm, according to the memory polynomial model, from the complex vectors in the training set and the outputs obtained by inputting them into the output feedback loop of the RF power amplifier; conj(·) takes the conjugate of a complex number.
- Step2-8: According to the above sensitivity δl,m, calculate, according to the following formula (2-49), the partial derivative or sensitivity of the loss function with respect to each element of the predistortion-corrected complex vector.
- Step2-9: According to the above sensitivity, calculate, according to the following formula (2-51), the partial derivative or sensitivity of the loss function with respect to each element of the predistortion complex correction vector.
- Step2-10: According to the above sensitivity, calculate, layer by layer in the reverse order of the forward operation of the complex neural network unit, the partial derivatives or sensitivities of the loss function with respect to the weight and bias parameters of the internal layers, including but not limited to the following sub-steps:
- Step2-10-1: If the current layer to be calculated is a fully connected layer, calculate, according to formula (2-52) and formula (2-53) respectively, the sensitivities of the loss function with respect to the complex weight parameters and complex bias parameters of this layer.
- the p-th (p = 1, 2, ..., P) complex vector output by the previous layer (the j-th layer) of the convolutional layer and the q-th complex vector output by the current convolutional layer (the k-th layer) are involved, together with the partial derivative or sensitivity of the loss function with respect to the latter; Conv(·) represents the convolution operation; Fliplr(·) represents the position reversal of the input vector.
- Step2-11: Update the parameters according to the sensitivities of the loss function, calculated for each layer, with respect to the weight parameters and bias parameters; because different training algorithms update parameters in many different ways, the following is just a simple example:
- Step3: The AI-DPD integrated solution system computes performance statistics based on the test set.
- Step3-2: Pass the complex vector through the complex neural network (ComplexNN) unit to obtain the predistortion complex correction vector.
- Step3-3: Pass the complex vector through the predistortion multiplier (DPD Multiplier) unit to obtain the predistortion-corrected complex vector.
- Step3-4: Input the predistortion-corrected complex vector into the output feedback loop of the RF power amplifier to obtain the output.
- Step3-5: According to the above output and the test set, statistically calculate, according to the above formula (2-36) and formula (2-37), the generalized EVM performance (GEVM) and generalized ACLR performance (GACLR) of the system based on the test set.
- Step3-6 If the GEVM and GACLR meet the set index requirements, stop the training of the AI-DPD integrated solution system; otherwise, return to all the steps of Step2 and start a new round of training.
- Table 1 is a generalization performance effect table provided by an embodiment of this application. See Table 1.
- the predistortion system provided by this application guarantees generalized EVM performance.
- the training set includes 39,320 complex number vectors; the validation set includes 13,107 complex number vectors; and the test set includes 13,105 complex number vectors.
- Table 1 A generalization performance effect table provided by the embodiment of the application.
- FIG. 4 is a performance effect diagram of the generalized adjacent channel leakage ratio obtained by an embodiment of the application.
- with the integrated method, that is, the predistortion system provided by this application, the deviation between the generalized adjacent channel leakage ratio of the actual output and that of the expected output is smaller.
- FIG. 5 is a diagram showing the improvement effect of the generalized adjacent channel leakage ratio obtained by an embodiment of the application. Referring to FIG. 5, the generalized adjacent channel leakage ratio has been improved.
- some implementations also include a machine-readable or computer-readable program storage device (for example, a digital data storage medium) encoding machine-executable or computer-executable program instructions, where the instructions execute some or all of the steps of the above methods.
- the program storage device may be a digital memory, a magnetic storage medium (for example, a magnetic disk or magnetic tape), a hard disk, or an optically readable digital data storage medium.
- Embodiments also include a programmed computer that executes the steps of the above-described method.
- the present application provides a predistortion system.
- the predistortion system can execute the predistortion method provided in the embodiments of the present application, and the predistortion system includes: a predistortion multiplier, a complex neural network and an RF power amplifier output feedback loop; the first input terminal of the predistortion multiplier is the input terminal of the predistortion system and is connected with the first input terminal of the complex neural network; the output terminal of the predistortion multiplier is connected to the input terminal of the output feedback loop of the radio frequency power amplifier; the output terminal of the output feedback loop of the radio frequency power amplifier is the output terminal of the predistortion system and is connected with the second input terminal of the complex neural network; the output terminal of the complex neural network is connected with the second input terminal of the predistortion multiplier.
- the radio frequency power amplifier output feedback loop includes: a digital-to-analog converter unit, a radio frequency modulation unit, a power amplifier unit, an analog-to-digital converter unit, and a radio frequency demodulation unit.
- the output end of the predistortion multiplier can be connected to the input end of the power amplifier unit of the RF power amplifier output feedback loop through the digital-to-analog converter unit (i.e. D/A) and the RF modulation unit of the RF power amplifier output feedback loop; the output terminal of the power amplifier unit can be connected to the second input terminal of the complex neural network through a radio frequency demodulation unit and an analog-to-digital converter unit (i.e. A/D).
- the predistortion system provided in this embodiment is used to implement the predistortion method provided in the embodiment of the present application.
- the implementation principle and technical effect of the predistortion system provided in this embodiment are similar to those of the predistortion method provided in the embodiment of the present application, and will not be described in detail here.
- the present application provides a predistortion system.
- the predistortion system can execute the predistortion method provided in the embodiments of the present application, and includes: a predistortion multiplier, a complex neural network, a radio frequency power amplifier output feedback loop, a first real-time power normalization unit, and a second real-time power normalization unit; the input terminal of the first real-time power normalization unit is the input terminal of the predistortion system, and the output terminal of the first real-time power normalization unit is connected to the first input terminal of the predistortion multiplier, the first input terminal of the complex neural network, and the second input terminal of the complex neural network.
- the output terminal of the predistortion multiplier is connected to the input terminal of the output feedback loop of the radio frequency power amplifier.
- the output terminal of the output feedback loop of the radio frequency power amplifier is the output terminal of the predistortion system.
- the output terminal of the output feedback loop of the radio frequency power amplifier is connected with the input terminal of the second real-time power normalization unit, and the output terminal of the second real-time power normalization unit is connected with the second input terminal of the complex neural network,
- the output terminal of the complex neural network is connected to the second input terminal of the predistortion multiplier.
- the radio frequency power amplifier output feedback loop includes: a digital-to-analog converter unit, a radio frequency modulation unit, a power amplifier unit, an analog-to-digital converter unit, and a radio frequency demodulation unit.
- the input terminal of the digital-to-analog converter unit is the input terminal of the radio frequency power amplifier output feedback loop; the output terminal of the digital-to-analog converter unit is connected to the input terminal of the radio frequency modulation unit; the output terminal of the radio frequency modulation unit is connected to the input terminal of the power amplifier unit; the output terminal of the power amplifier unit is connected to the input terminal of the radio frequency demodulation unit; the output terminal of the radio frequency demodulation unit is connected to the input terminal of the analog-to-digital converter unit; and the output terminal of the analog-to-digital converter unit is the output terminal of the radio frequency power amplifier output feedback loop.
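The real-time power normalization units in this embodiment can be read as scaling the complex vector to a reference power before it enters the multiplier or the network. The exact normalization rule is not given in this text; the sketch below assumes unit-average-power normalization as one plausible reading.

```python
import numpy as np

def realtime_power_normalize(x: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    # Scale the complex vector to unit average power.
    # This is an assumed normalization rule, not the patent's definition.
    power = np.mean(np.abs(x) ** 2)
    return x / np.sqrt(power + eps)

x = 3.0 * np.exp(1j * np.linspace(0.0, 1.0, 16))  # average power 9
xn = realtime_power_normalize(x)
print(round(float(np.mean(np.abs(xn) ** 2)), 6))  # 1.0
```

Normalizing both the system input (first unit) and the feedback-loop output (second unit) keeps the complex neural network's two inputs on a comparable power scale.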
- the predistortion system provided in this embodiment is used to implement the predistortion method provided in the embodiment of the present application.
- the implementation principle and technical effects of the predistortion system provided in this embodiment are similar to those of the predistortion method provided in the embodiments of the present application, and will not be repeated here.
- FIG. 6 is a schematic structural diagram of a device provided by an embodiment of the present application.
- the device provided by the present application includes one or more processors 21 and a storage device 22; there may be one or more processors 21 in the device, and one processor 21 is taken as an example in FIG. 6; the storage device 22 is configured to store one or more programs; the one or more programs are executed by the one or more processors 21, so that the one or more processors 21 implement the method described in the embodiments of the present application.
- the device further includes: a communication device 23, an input device 24, and an output device 25.
- the processor 21, the storage device 22, the communication device 23, the input device 24, and the output device 25 in the device may be connected by a bus or in other ways; connection by a bus is taken as an example.
- the input device 24 can be configured to receive input numeric or character information, and to generate key signal inputs related to user settings and function control of the device.
- the output device 25 may include a display device such as a display screen.
- the communication device 23 may include a receiver and a transmitter.
- the communication device 23 is configured to perform information transceiving and communication under the control of the processor 21.
- the storage device 22 can be configured to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the predistortion method described in the embodiments of the present application.
- the storage device 22 may include a storage program area and a storage data area.
- the storage program area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of the device, and the like.
- the storage device 22 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
- the storage device 22 may include memories located remotely from the processor 21, and these remote memories may be connected to the device through a network.
- Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
- the embodiment of the present application further provides a storage medium storing a computer program; when the computer program is executed by a processor, any of the methods described in the present application is implemented.
- the predistortion method provided in the embodiments of the present application is applied to a predistortion system.
- the predistortion system includes: a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop.
- the method includes: inputting the training complex vector into the predistortion system to obtain the complex scalar corresponding to the training complex vector output by the predistortion system; training the predistortion system based on the training complex vector and the complex scalar until the generalized error vector magnitude and generalized adjacent channel leakage ratio corresponding to the predistortion system meet the set requirements; and inputting the service complex vector into the trained predistortion system to obtain a complex scalar for predistortion correction.
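The train-then-test loop summarized above can be sketched as follows. This is a hedged stand-in: `ToySystem` replaces the actual predistortion system, the metric is a standard EVM rather than the patent's generalized EVM/ACLR expressions, and the threshold is illustrative.

```python
import numpy as np

def evm_percent(ref: np.ndarray, out: np.ndarray) -> float:
    # Standard error vector magnitude, as a percentage (illustrative
    # stand-in for the patent's generalized EVM).
    return 100.0 * np.sqrt(np.mean(np.abs(out - ref) ** 2) /
                           np.mean(np.abs(ref) ** 2))

class ToySystem:
    """Stand-in for the predistortion system; a single scalar 'weight'."""
    def __init__(self):
        self.gain = 0.5
    def train_step(self, x: np.ndarray) -> None:
        # Move the gain halfway toward the ideal value of 1.0 each step,
        # mimicking iterative training toward a linearized response.
        self.gain += 0.5 * (1.0 - self.gain)
    def run(self, x: np.ndarray) -> np.ndarray:
        return self.gain * x

def train_until_converged(system, train_vec, test_vec,
                          evm_limit=1.0, max_iters=100):
    # Train until the test-set metric meets the set requirement.
    for _ in range(max_iters):
        system.train_step(train_vec)
        if evm_percent(test_vec, system.run(test_vec)) <= evm_limit:
            break
    return system

x = np.exp(1j * np.linspace(0.0, np.pi, 32))
sys_ = train_until_converged(ToySystem(), x, x)
print(evm_percent(x, sys_.run(x)) <= 1.0)  # True
```

After convergence, service complex vectors would be run through the trained system to obtain predistortion-corrected output.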
- the computer storage medium of the embodiment of the present application may adopt any combination of one or more computer-readable media.
- the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
- the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
- Examples of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, optical fiber, compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal can take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
- the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
- the computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
- the computer program code used to perform the operations of this application can be written in one or more programming languages or a combination thereof.
- the programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- the device may be equipment such as terminal equipment, for example, wireless user equipment such as mobile phones, portable data processing devices, portable web browsers, or vehicle-mounted mobile stations.
- the various embodiments of the present application can be implemented in hardware or dedicated circuits, software, logic or any combination thereof.
- some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software that may be executed by a controller, microprocessor, or other computing device, although the application is not limited thereto.
- the embodiments of the present application may be implemented by executing computer program instructions by a data processor of a mobile device, for example, in a processor entity, or by hardware, or by a combination of software and hardware.
- Computer program instructions can be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages.
- the block diagram of any logic flow in the drawings of the present application may represent program steps, or may represent interconnected logic circuits, modules, and functions, or may represent a combination of program steps and logic circuits, modules, and functions.
- the computer program can be stored on the memory.
- the memory can be of any type suitable for the local technical environment and can be implemented using any suitable data storage technology, such as but not limited to read-only memory (ROM), random access memory (RAM), and optical memory devices and systems (Digital Video Disc (DVD) or Compact Disk (CD)), etc.
- Computer-readable media may include non-transitory storage media.
- the data processor can be of any type suitable for the local technical environment, such as but not limited to general-purpose computers, special-purpose computers, microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and processors based on a multi-core processor architecture.
Landscapes
- Engineering & Computer Science (AREA)
- Power Engineering (AREA)
- Physics & Mathematics (AREA)
- Nonlinear Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Algebra (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Amplifiers (AREA)
Abstract
Description
Claims (22)
- A predistortion method, applied to a predistortion system, the predistortion system comprising a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop; the method comprising: inputting a training complex vector into the predistortion system to obtain a complex scalar corresponding to the training complex vector output by the predistortion system; training the predistortion system based on the training complex vector and the complex scalar until the generalized error vector magnitude and the generalized adjacent channel leakage ratio corresponding to the predistortion system meet set requirements; and inputting a service complex vector into the trained predistortion system to obtain a predistortion-corrected complex scalar.
- The method according to claim 1, wherein the predistortion system is trained using a complex error back-propagation algorithm.
- The method according to claim 1, wherein training the predistortion system based on the training complex vector and the complex scalar until the generalized error vector magnitude and the generalized adjacent channel leakage ratio corresponding to the predistortion system meet the set requirements comprises: initializing system parameters of the predistortion system; training the parameter-initialized predistortion system based on a training set and the complex scalar; testing the trained predistortion system based on a test set to obtain the generalized error vector magnitude and the generalized adjacent channel leakage ratio corresponding to the trained predistortion system; completing the training of the predistortion system when the values of the generalized error vector magnitude and the generalized adjacent channel leakage ratio are greater than or equal to their respective set thresholds, and continuing to train the trained predistortion system based on the training set when those values are less than their respective set thresholds; wherein the training set and the test set are obtained by splitting a normalized complex vector, the normalized complex vector being the output of a first real-time power normalization unit further included in the predistortion system, obtained after the training complex vector is input to the first real-time power normalization unit; or the training set and the test set are obtained by splitting the training complex vector.
- The method according to claim 3, wherein initializing the system parameters of the predistortion system comprises: completing the initialization of each layer of the complex neural network according to the layer type corresponding to each layer.
- The method according to claim 5, wherein completing the initialization of each layer according to the layer type corresponding to each layer in the complex neural network comprises: initializing the weight parameters and bias parameters of each layer according to the distribution type and distribution parameters of each layer.
- The method according to claim 3, wherein training the predistortion system based on the training set and the complex scalar comprises: inputting the complex vectors in the training set into the predistortion system by element index, where the length of the complex vector input each time is determined based on a power amplifier memory effect parameter of the predistortion system, and the complex vector input to the predistortion system each time is obtained by combining historical element values in the training set with the current element; passing the output of the radio frequency power amplifier output feedback loop in the predistortion system through a second real-time power normalization unit further included in the predistortion system before inputting it into the complex neural network, or inputting the output of the radio frequency power amplifier output feedback loop in the predistortion system directly into the complex neural network; determining, according to the partial derivatives or sensitivities of the loss function of the complex neural network in the predistortion system with respect to each element of the correction vector output by the complex neural network, the partial derivatives or sensitivities of the loss function with respect to the weight parameters and bias parameters of each layer inside the complex neural network; and updating the weight parameters and bias parameters of each layer according to the determined partial derivatives or sensitivities of each layer inside the complex neural network; wherein the expression of the complex vector input to the predistortion system each time is as follows:
- The method according to claim 7, wherein the relationship between the complex vector input to the complex neural network in the predistortion system and the correction vector output by the complex neural network is as follows: the relationship among the complex vector input to the predistortion multiplier in the predistortion system, the correction vector, and the output of the predistortion multiplier is as follows: where  is the complex vector output by the predistortion multiplier,  and  are the n-th element, the (n-1)-th element, …, and the (n-M+1)-th element of the complex vector output by the predistortion multiplier,  is the complex vector input to the predistortion multiplier, and "ο" denotes element-wise multiplication; the relationship between the output of the predistortion multiplier and the output of the radio frequency power amplifier output feedback loop is as follows: when the predistortion system includes the second real-time power normalization unit, the relationship between the input and the output of the second real-time power normalization unit is as follows:
- The method according to claim 7, wherein determining the partial derivatives or sensitivities of the loss function with respect to the weight parameters and bias parameters of each layer inside the complex neural network, according to the partial derivatives or sensitivities of the loss function of the complex neural network in the predistortion system with respect to each element of the correction vector output by the complex neural network, comprises: determining the loss function of the complex neural network based on the feedback quantity of the radio frequency power amplifier output feedback loop and the complex vector input to the complex neural network; determining the partial derivative or sensitivity of the loss function with respect to each element of the correction vector output by the complex neural network; determining, according to the partial derivative or sensitivity of each element of the correction vector, the partial derivatives or sensitivities of the loss function with respect to the weight parameters and bias parameters of each layer inside the complex neural network; and updating the weight parameters and bias parameters of each layer based on the partial derivatives or sensitivities of the weight parameters and bias parameters of each layer.
- The method according to claim 3, wherein testing the trained predistortion system based on the test set to obtain the error vector magnitude and adjacent channel leakage ratio corresponding to the trained predistortion system comprises: inputting the complex vectors in the test set into the predistortion system by element index, where the length of the complex vector input each time is determined based on the power amplifier memory effect parameter of the predistortion system, and the complex vector input to the predistortion system each time is obtained by combining historical elements in the test set with the current element; and inputting the output of the radio frequency power amplifier output feedback loop in the predistortion system into the predistortion system to determine the generalized error vector magnitude and the generalized adjacent channel leakage ratio corresponding to the predistortion system.
- The method according to claim 16, wherein the generalized error vector magnitude and the generalized adjacent channel leakage ratio are determined by the following calculation expressions, respectively:
- A predistortion system that executes the predistortion method according to any one of claims 1-17, the predistortion system comprising: a predistortion multiplier, a complex neural network, and a radio frequency power amplifier output feedback loop; a first input terminal of the predistortion multiplier is the input terminal of the predistortion system and is connected to a first input terminal and a second input terminal of the complex neural network; an output terminal of the predistortion multiplier is connected to an input terminal of the radio frequency power amplifier output feedback loop; an output terminal of the radio frequency power amplifier output feedback loop is the output terminal of the predistortion system and is connected to the second input terminal of the complex neural network; and an output terminal of the complex neural network is connected to a second input terminal of the predistortion multiplier.
- The predistortion system according to claim 18, wherein the radio frequency power amplifier output feedback loop comprises: a digital-to-analog converter unit, a radio frequency modulation unit, a power amplifier unit, an analog-to-digital converter unit, and a radio frequency demodulation unit; the output terminal of the predistortion multiplier is connected to the input terminal of the power amplifier unit through the digital-to-analog converter unit and the radio frequency modulation unit, and the output terminal of the power amplifier unit is connected to the second input terminal of the complex neural network through the radio frequency demodulation unit and the analog-to-digital converter unit.
- A predistortion system that executes the predistortion method according to any one of claims 1-17, the predistortion system comprising: a predistortion multiplier, a complex neural network, a radio frequency power amplifier output feedback loop, a first real-time power normalization unit, and a second real-time power normalization unit; the input terminal of the first real-time power normalization unit is the input terminal of the predistortion system; the output terminal of the first real-time power normalization unit is connected to the first input terminal of the predistortion multiplier, the first input terminal of the complex neural network, and the second input terminal of the complex neural network; the output terminal of the predistortion multiplier is connected to the input terminal of the radio frequency power amplifier output feedback loop; the output terminal of the radio frequency power amplifier output feedback loop is the output terminal of the predistortion system and is connected to the input terminal of the second real-time power normalization unit; the output terminal of the second real-time power normalization unit is connected to the second input terminal of the complex neural network; and the output terminal of the complex neural network is connected to the second input terminal of the predistortion multiplier.
- A device, comprising: at least one processor; and a storage apparatus configured to store at least one program; when the at least one program is executed by the at least one processor, the at least one processor implements the predistortion method according to any one of claims 1-17.
- A storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the predistortion method according to any one of claims 1-17.
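The claims above reference calculation expressions for the generalized error vector magnitude and generalized adjacent channel leakage ratio that are not reproduced in this text. As a hedged illustration only, the standard EVM and a simple FFT-based adjacent-channel leakage ratio can be sketched as follows; these are common textbook forms, not the patent's generalized definitions.

```python
import numpy as np

def evm_percent(ref: np.ndarray, out: np.ndarray) -> float:
    # Standard error vector magnitude, as a percentage.
    return 100.0 * np.sqrt(np.mean(np.abs(out - ref) ** 2) /
                           np.mean(np.abs(ref) ** 2))

def aclr_db(signal: np.ndarray, band: slice, adjacent: slice) -> float:
    # Ratio of adjacent-channel power to in-band power, in dB,
    # computed from the FFT power spectrum. The bin slices are
    # illustrative choices, not standardized channel definitions.
    spectrum = np.abs(np.fft.fft(signal)) ** 2
    return 10.0 * np.log10(np.sum(spectrum[adjacent]) /
                           np.sum(spectrum[band]))

# A non-integer-frequency tone leaks energy into adjacent bins.
x = np.exp(1j * 2 * np.pi * 2.5 * np.arange(64) / 64)
print(evm_percent(x, x))  # 0.0
```

A perfectly linearized system would drive the EVM toward zero and the adjacent-channel ratio toward a strongly negative dB value.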
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022541199A JP7451720B2 (ja) | 2020-06-02 | 2021-05-11 | プリディストーション方法、システム、装置及び記憶媒体 |
US17/788,042 US20230033203A1 (en) | 2020-06-02 | 2021-05-11 | Predistortion method and system, device, and storage medium |
KR1020227021174A KR102707510B1 (ko) | 2020-06-02 | 2021-05-11 | 전치 왜곡 방법, 시스템, 디바이스 및 저장 매체 |
EP21818081.8A EP4160915A4 (en) | 2020-06-02 | 2021-05-11 | PREDISTORTION METHOD AND SYSTEM, DEVICE, AND STORAGE MEDIUM |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010491525.5 | 2020-06-02 | ||
CN202010491525.5A CN111900937A (zh) | 2020-06-02 | 2020-06-02 | 一种预失真方法、系统、设备及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021244236A1 true WO2021244236A1 (zh) | 2021-12-09 |
Family
ID=73206773
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/093060 WO2021244236A1 (zh) | 2020-06-02 | 2021-05-11 | 预失真方法、系统、设备及存储介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20230033203A1 (zh) |
EP (1) | EP4160915A4 (zh) |
JP (1) | JP7451720B2 (zh) |
KR (1) | KR102707510B1 (zh) |
CN (1) | CN111900937A (zh) |
WO (1) | WO2021244236A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115037580A (zh) * | 2022-07-12 | 2022-09-09 | 西安电子科技大学 | 基于自学习的射频预失真系统及方法 |
CN115099389A (zh) * | 2022-06-02 | 2022-09-23 | 北京理工大学 | 基于复数神经网络的非训练相位重建方法及装置 |
CN117110700A (zh) * | 2023-08-23 | 2023-11-24 | 易集康健康科技(杭州)有限公司 | 一种射频电源脉冲功率检测方法及系统 |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111900937A (zh) * | 2020-06-02 | 2020-11-06 | 中兴通讯股份有限公司 | 一种预失真方法、系统、设备及存储介质 |
US11496341B2 (en) * | 2020-08-13 | 2022-11-08 | Micron Technology, Inc. | Wireless devices and systems including examples of compensating I/Q imbalance with neural networks or recurrent neural networks |
CN112865721B (zh) * | 2021-01-05 | 2023-05-16 | 紫光展锐(重庆)科技有限公司 | 信号处理方法、装置、设备及存储介质、芯片、模组设备 |
CN113517865B (zh) * | 2021-04-20 | 2022-11-22 | 重庆邮电大学 | 一种基于记忆多项式的功放模型及其硬件实现方法 |
US12003261B2 (en) | 2021-05-12 | 2024-06-04 | Analog Devices, Inc. | Model architecture search and optimization for hardware |
US12028188B2 (en) * | 2021-05-12 | 2024-07-02 | Analog Devices, Inc. | Digital predistortion with hybrid basis-function-based actuator and neural network |
CN113468814B (zh) * | 2021-07-09 | 2024-02-27 | 成都德芯数字科技股份有限公司 | 一种基于神经网络的数字预失真训练数据筛选方法及装置 |
CN113411056B (zh) * | 2021-07-12 | 2022-11-29 | 电子科技大学 | 一种基于广义多项式和神经网络的非线性预失真方法 |
CN115086121A (zh) * | 2022-06-15 | 2022-09-20 | Oppo广东移动通信有限公司 | 预失真参数值的确定方法、装置、终端及存储介质 |
CN117394802A (zh) * | 2022-07-01 | 2024-01-12 | 中兴通讯股份有限公司 | 数字预失真方案或硬件结构的实现方法、设备和介质 |
CN118345334B (zh) * | 2024-06-17 | 2024-08-23 | 华兴源创(成都)科技有限公司 | 膜层厚度的校正方法、装置、计算机设备 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1335699A (zh) * | 2000-07-20 | 2002-02-13 | 华为技术有限公司 | 一种宽带发射机的自适应数字预失真方法和装置 |
KR100695632B1 (ko) * | 2006-02-15 | 2007-03-16 | 한국과학기술원 | 증폭기 비선형성 및 직교 복조 오류를 위한 동시 적응보상기법 |
CN105207716A (zh) * | 2015-08-20 | 2015-12-30 | 上海交通大学 | 室内可见光通信发光二极管传输预失真系统及方法 |
US20170338841A1 (en) * | 2016-05-19 | 2017-11-23 | Analog Devices Global | Wideband digital predistortion |
CN109302156A (zh) * | 2018-09-28 | 2019-02-01 | 东南大学 | 基于模式识别的功率放大器动态线性化系统及其方法 |
CN111900937A (zh) * | 2020-06-02 | 2020-11-06 | 中兴通讯股份有限公司 | 一种预失真方法、系统、设备及存储介质 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3684377B2 (ja) * | 2001-09-04 | 2005-08-17 | 独立行政法人情報通信研究機構 | 通信歪み補償装置および補償方法 |
TWI425772B (zh) * | 2011-06-08 | 2014-02-01 | Mstar Semiconductor Inc | 包絡偵測器與相關方法 |
US9906428B2 (en) * | 2016-04-28 | 2018-02-27 | Samsung Electronics Co., Ltd. | System and method for frequency-domain weighted least squares |
CN110765720B (zh) * | 2019-09-12 | 2024-05-24 | 重庆大学 | 一种复值流水线递归神经网络模型的功放预失真方法 |
-
2020
- 2020-06-02 CN CN202010491525.5A patent/CN111900937A/zh active Pending
-
2021
- 2021-05-11 WO PCT/CN2021/093060 patent/WO2021244236A1/zh unknown
- 2021-05-11 KR KR1020227021174A patent/KR102707510B1/ko active IP Right Grant
- 2021-05-11 US US17/788,042 patent/US20230033203A1/en active Pending
- 2021-05-11 EP EP21818081.8A patent/EP4160915A4/en active Pending
- 2021-05-11 JP JP2022541199A patent/JP7451720B2/ja active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1335699A (zh) * | 2000-07-20 | 2002-02-13 | 华为技术有限公司 | 一种宽带发射机的自适应数字预失真方法和装置 |
KR100695632B1 (ko) * | 2006-02-15 | 2007-03-16 | 한국과학기술원 | 증폭기 비선형성 및 직교 복조 오류를 위한 동시 적응보상기법 |
CN105207716A (zh) * | 2015-08-20 | 2015-12-30 | 上海交通大学 | 室内可见光通信发光二极管传输预失真系统及方法 |
US20170338841A1 (en) * | 2016-05-19 | 2017-11-23 | Analog Devices Global | Wideband digital predistortion |
CN109302156A (zh) * | 2018-09-28 | 2019-02-01 | 东南大学 | 基于模式识别的功率放大器动态线性化系统及其方法 |
CN111900937A (zh) * | 2020-06-02 | 2020-11-06 | 中兴通讯股份有限公司 | 一种预失真方法、系统、设备及存储介质 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4160915A4 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115099389A (zh) * | 2022-06-02 | 2022-09-23 | 北京理工大学 | 基于复数神经网络的非训练相位重建方法及装置 |
CN115037580A (zh) * | 2022-07-12 | 2022-09-09 | 西安电子科技大学 | 基于自学习的射频预失真系统及方法 |
CN115037580B (zh) * | 2022-07-12 | 2023-09-08 | 西安电子科技大学 | 基于自学习的射频预失真系统及方法 |
CN117110700A (zh) * | 2023-08-23 | 2023-11-24 | 易集康健康科技(杭州)有限公司 | 一种射频电源脉冲功率检测方法及系统 |
CN117110700B (zh) * | 2023-08-23 | 2024-06-04 | 易集康健康科技(杭州)有限公司 | 一种射频电源脉冲功率检测方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
JP2023509699A (ja) | 2023-03-09 |
CN111900937A (zh) | 2020-11-06 |
KR20220104019A (ko) | 2022-07-25 |
JP7451720B2 (ja) | 2024-03-18 |
KR102707510B1 (ko) | 2024-09-19 |
EP4160915A4 (en) | 2024-07-17 |
EP4160915A1 (en) | 2023-04-05 |
US20230033203A1 (en) | 2023-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021244236A1 (zh) | 预失真方法、系统、设备及存储介质 | |
Paryanti et al. | A direct learning approach for neural network based pre-distortion for coherent nonlinear optical transmitter | |
Uncini et al. | Complex-valued neural networks with adaptive spline activation function for digital-radio-links nonlinear equalization | |
WO2021223504A1 (zh) | 上下行信道互易的实现方法、通信节点和存储介质 | |
CN110765720B (zh) | 一种复值流水线递归神经网络模型的功放预失真方法 | |
Zhao et al. | Functional link neural network cascaded with Chebyshev orthogonal polynomial for nonlinear channel equalization | |
US20230299872A1 (en) | Neural Network-Based Communication Method and Related Apparatus | |
Phartiyal et al. | LSTM-deep neural networks based predistortion linearizer for high power amplifiers | |
CN111490737A (zh) | 一种用于功率放大器的非线性补偿方法和设备 | |
KR102550079B1 (ko) | 딥러닝 기반의 전력 증폭기의 비선형 왜곡에 대한 보상 방법 및 장치 | |
CN111200470A (zh) | 一种适用于受非线性干扰的高阶调制信号传输控制方法 | |
Pan et al. | A predistortion algorithm based on accurately solving the reverse function of memory polynomial model | |
CN112865721B (zh) | 信号处理方法、装置、设备及存储介质、芯片、模组设备 | |
Logins et al. | Block-structured deep learning-based OFDM channel equalization | |
CN112104580B (zh) | 基于广义近似消息传递-稀疏贝叶斯学习的稀疏水声信道估计方法 | |
Ney et al. | Unsupervised ANN-based equalizer and its trainable FPGA implementation | |
CN115913844A (zh) | 一种基于神经网络的mimo系统数字预失真补偿方法、装置、设备和存储介质 | |
CN115913140B (zh) | 运算精度控制的分段多项式数字预失真装置和方法 | |
Shahkarami et al. | Efficient deep learning of nonlinear fiber-optic communications using a convolutional recurrent neural network | |
Suresh et al. | A fast learning fully complex-valued relaxation network (FCRN) | |
Imtiaz et al. | Performance vs. complexity in NN pre-distortion for a nonlinear channel | |
Paryanti et al. | Recurrent neural network for pre-distortion of combined nonlinear optical transmitter impairments with memory | |
Qiu et al. | A novel neural network based equalizer for nonlinear power amplifiers | |
US10097280B2 (en) | Systems and methods for communication using sparsity based pre-compensation | |
WO2020019240A1 (en) | Method, apparatus and computer readable media for data processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21818081 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20227021174 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2022541199 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2021818081 Country of ref document: EP Effective date: 20230102 |