WO2022166534A1 - Pre-distortion processing method and apparatus - Google Patents

Pre-distortion processing method and apparatus Download PDF

Info

Publication number
WO2022166534A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
timing signal
time series
neural network
network model
Prior art date
Application number
PCT/CN2022/071120
Other languages
French (fr)
Chinese (zh)
Inventor
关鹏飞
Original Assignee
大唐移动通信设备有限公司
Priority date
Filing date
Publication date
Application filed by 大唐移动通信设备有限公司 filed Critical 大唐移动通信设备有限公司
Publication of WO2022166534A1 publication Critical patent/WO2022166534A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2474Sequence data queries, e.g. querying versioned data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2465Query processing support for facilitating data mining operations in structured databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F1/00Details of amplifiers with only discharge tubes, only semiconductor devices or only unspecified devices as amplifying elements
    • H03F1/32Modifications of amplifiers to reduce non-linear distortion
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03FAMPLIFIERS
    • H03F3/00Amplifiers with only discharge tubes or only semiconductor devices as amplifying elements
    • H03F3/189High frequency amplifiers, e.g. radio frequency amplifiers
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular, the present application relates to a predistortion processing method and apparatus.
  • a power amplifier is a device that amplifies the modulated frequency band signal to the required power value.
  • Digital Pre-distortion establishes an inverse behavior model of the PA, and then inserts the inverse behavior model into the PA in the link to preprocess the signal before it enters the PA.
  • the power amplifier remains linear in the high power region, improving the quality of the transmitted signal.
  • the traditional inverse behavior model in the field of digital predistortion is obtained by using the General Memory Polynomial (GMP), and the traditional GMP model is a simplification of the Volterra series model.
  • the present application provides a predistortion processing method, including:
  • an inverse behavior model of the power amplifier PA.
  • the first continuous timing signal includes a first preset number of timing signals before the to-be-processed timing signal, and a second preset number of timing signals after the to-be-processed timing signal.
  • performing normalization processing on the first continuous time series signal to obtain a corresponding second continuous time series signal including:
  • the training data set includes a third preset number of PA input time series signal samples and corresponding PA output time series signal samples of the target PA;
  • the second continuous time series signal is received, and the second continuous time series signal is converted into an initial feature vector
  • the predistorted time series signal is output.
  • the training process of the BP neural network model is as follows:
  • the BP neural network model is trained by using the training data set, and the pre-trained BP neural network model is obtained, including:
  • the network parameters of the BP neural network model are updated by the back-propagation algorithm, and then the pre-trained BP neural network model is obtained.
  • the method further includes:
  • the network parameters of the BP neural network model are updated again.
  • the network parameters of the BP neural network are updated again, including:
  • the network parameters of the BP neural network model are updated based on each mean square error.
  • the present application provides a predistortion processing device, including:
  • timing signal acquisition module used for acquiring the first continuous timing signal including the timing signal to be processed
  • the normalization module is used for normalizing the first continuous time series signal to obtain the corresponding second continuous time series signal
  • the pre-distortion processing module is used to input the second continuous time series signal into the pre-trained back-propagation BP neural network model, and output the pre-distorted time series signal corresponding to the time series signal to be processed, wherein the BP neural network model is used to perform predistortion processing on the time series signal to be processed.
  • the first continuous timing signal includes a first preset number of timing signals before the to-be-processed timing signal, and a second preset number of timing signals after the to-be-processed timing signal.
  • the normalization module is specifically used for:
  • the training data set includes a third preset number of PA input time series signal samples and corresponding PA output time series signal samples of the target PA;
  • the predistortion processing module is specifically configured to:
  • the second continuous time series signal is received, and the second continuous time series signal is converted into an initial feature vector
  • the predistorted time series signal is output.
  • the device further includes a first training module for:
  • the first training module is specifically used for:
  • the network parameters of the BP neural network model are updated by the back-propagation algorithm, and then the pre-trained BP neural network model is obtained.
  • the device further includes a second training module for:
  • the network parameters of the BP neural network model are updated again.
  • the second training module is specifically used for:
  • the network parameters of the BP neural network model are updated based on each mean square error.
  • the present application provides an electronic device, including a memory and a processor
  • a computer program is stored in the memory
  • a processor configured to execute a computer program to implement the method provided in the embodiment of the first aspect or any optional embodiment of the first aspect.
  • the present application provides a computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method provided in the embodiment of the first aspect or any optional embodiment of the first aspect is implemented.
  • the time series signal to be processed is pre-distorted by the inverse behavior model based on the BP neural network, and the time series signal input into the model is normalized before the pre-distortion processing is performed.
  • by using the inverse behavior model based on the BP neural network and normalizing the time series signal input to the model, the nonlinear factors of the overall system can be better compensated, improving the accuracy of the pre-distortion processing.
  • FIG. 1 is a schematic diagram of a LUT table lookup process in the prior art
  • FIG. 2 is a schematic flowchart of a predistortion processing method provided by an embodiment of the present application
  • FIG. 3 is a schematic structural diagram of a BP neural network model in an example of an embodiment of the present application.
  • 4a is a schematic diagram of a comparison between a normalized IQ time series signal and an unnormalized time series signal in an example of an embodiment of the present application;
  • FIG. 4b is a schematic diagram of a comparison of NMSE curves in a process of training a model by a normalized IQ time series signal and an unnormalized time series signal, respectively, in an example of an embodiment of the present application;
  • Fig. 5 is the result schematic diagram of the BP neural network model in another example of the embodiment of the application.
  • FIG. 6 is a feature combination mode of the input layer in an example of the embodiment of the present application.
  • FIG. 7 is a schematic diagram of a training process of a BP neural network model in an example of an embodiment of the present application.
  • FIG. 8 is a structural block diagram of a predistortion processing apparatus provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the digital predistortion modeling scheme in the existing base station can be expressed with the GMP model as the following formula:

    x(n) = Σ_i Σ_j Σ_k b_ijk · y(n−i) · |y(n−j)|^k    (1)
  • x is the output signal of the power amplifier and y is the input signal of the power amplifier when establishing the reverse mode of the power amplifier (i.e., the inverse behavior model); the three parameters i, j, and k represent the signal memory depth, the memory depth of the signal modulus value, and the nonlinear order of the signal modulus value, respectively; b is the coefficient of the model.
  • the trained GMP model is usually implemented by a look-up table (Look-Up-Table, LUT).
  • the LUT is indexed by the signal amplitude, and each stored value is the accumulated sum of products of the distortion coefficients and the nonlinear terms of the signal modulus value.
  • the number of LUT tables is determined by the model memory depths i and j; each (i, j) combination corresponds to one LUT table, and the value stored in the table is calculated according to formula (2), where AMP is the quantized signal amplitude:

    LUT_ij(AMP) = Σ_k b_ijk · AMP^k    (2)
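As an illustrative sketch (not the patent's implementation), the LUT realization described above — one table per (i, j) pair, indexed by quantized amplitude, values accumulated over the nonlinear order k — might look as follows; the coefficient layout `b[i, j, k]`, the amplitude grid, and the tap loop are assumptions:

```python
import numpy as np

def build_luts(b, n_amp=1024):
    """Precompute one LUT per (i, j) memory-depth pair.
    b has shape (I, J, K): coefficients over memory depth i,
    modulus-memory depth j, and nonlinear order k (assumed layout)."""
    amp = np.linspace(0.0, 1.0, n_amp)                        # quantized signal amplitude
    orders = amp[:, None] ** np.arange(b.shape[2])[None, :]   # AMP^k terms
    # LUT[i, j, q] = sum_k b[i, j, k] * AMP_q^k
    return np.einsum('ijk,qk->ijq', b, orders), amp

def gmp_predistort(y, luts, amp_grid):
    """Apply the LUT-based GMP inverse model to complex samples y."""
    I, J, _ = luts.shape
    x = np.zeros_like(y, dtype=complex)
    for n in range(max(I, J), len(y)):
        for i in range(I):
            for j in range(J):
                # look up the table for (i, j) at the quantized amplitude |y(n-j)|
                q = max(np.searchsorted(amp_grid, abs(y[n - j])) - 1, 0)
                x[n] += y[n - i] * luts[i, j, q]
    return x
```

This makes concrete why the table count grows with the memory depths i and j, which is part of the complexity the neural-network approach below avoids.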
  • the traditional GMP model is simplified from the Volterra series model. As bandwidth and frequency increase, it often requires a larger memory depth and nonlinear order and has potential numerical instability, and its modeling accuracy for power amplifiers with strong memory effects and strong nonlinearity is not high. In other words, in this context, the pre-distortion processing accuracy of the traditional GMP model is low.
  • the embodiments of the present application provide a predistortion processing method, an apparatus, an electronic device, and a computer-readable storage medium, and the solutions provided by the present application will be described in detail below.
  • FIG. 2 is a schematic flowchart of a predistortion processing method provided by an embodiment of the present application. As shown in FIG. 2 , the method may include the following steps:
  • Step S201 acquiring a first continuous timing signal including the timing signal to be processed.
  • the timing signal to be processed is the timing signal that is predistorted before entering the power amplifier.
  • the continuous timing signal can be understood as a plurality of timing signals sampled at a certain time interval. If the timing signal to be processed corresponds to the current sampling time, the time series signals collected at sampling times before the current sampling time can be called the timing signals before the to-be-processed timing signal, and the time series signals collected at sampling times after the current sampling time can be called the timing signals after the to-be-processed timing signal; the first continuous timing signal then includes the timing signal to be processed, the timing signals before it, and the timing signals after it.
  • the use of the BP neural network model in the subsequent steps needs to obtain the predistorted time series signal of the time series signal to be processed according to the characteristics of the time series signal before and after the time series signal to be processed, so the first continuous time series signal needs to be obtained here.
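Step S201 can be sketched minimally as follows; the window sizes (the first and second preset numbers) and the list representation are illustrative assumptions:

```python
def extract_window(signal, n, before, after):
    """Return the continuous slice of `signal` centered on index n:
    `before` samples preceding it, the sample itself, and `after`
    samples following it (the first/second preset numbers)."""
    if n - before < 0 or n + after >= len(signal):
        raise ValueError("window exceeds the sampled sequence")
    return signal[n - before : n + after + 1]
```

For example, with `before=2` and `after=2` the window holds five consecutive samples, with the sample to be processed in the middle.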
  • Step S202 performing normalization processing on the first continuous time series signal to obtain a corresponding second continuous time series signal.
  • the system involves a large amount of data in both training and online predistortion processing, and order-of-magnitude information may be introduced, so that different features in the data differ by orders of magnitude.
  • features with large magnitudes will have larger gradient weights, which slows down training when the loss function is used to find the optimal value.
  • some activation functions commonly used in neural networks such as sigmoid, tanh, etc.
  • when the data is too large, derivatives are taken in the saturation region of the activation function, resulting in extremely small parameter gradients, which is not conducive to error back-propagation. Therefore, before the time series data is input into the input layer of the BP network model, the data needs to be normalized. Specifically, each timing signal included in the first continuous timing signal needs to be normalized to obtain the corresponding second continuous timing signal.
  • Step S203 input the second continuous time sequence signal into the pre-trained back-propagation (BP, Back Propagation) neural network model, and output the pre-distorted time sequence signal corresponding to the time sequence signal to be processed, wherein the BP neural network model is used to perform predistortion processing on the time sequence signal to be processed.
  • the structure of the BP neural network model is shown in Figure 3. It can be composed of three parts: the input layer, the hidden layer and the output layer.
  • the input to the input layer is the structured data required to build the inverse behavior model, and the number of hidden layers can be arbitrary.
  • the hidden layer generates nonlinear features through nonlinear activation functions and linear weighting, and propagates useful feature information to the next layer in the form of dense feature vectors.
  • the initial coefficients of the neural network are generally initialized with random values, and the training method is gradient descent. This training method requires multiple iterations on the data set for each model training.
  • the normalized second continuous data is input into the pre-trained BP neural network model, the corresponding pre-distorted time-series signal is output, and the pre-distorted time-series signal is input into the corresponding target PA to realize amplification of the time-series signal to be processed.
  • the inverse behavior model based on the BP neural network is used to pre-distort the time series signal to be processed, and before the pre-distortion processing is performed, the time series signal input into the inverse behavior model is normalized. Using the BP-neural-network-based inverse behavior model and normalizing the time series signal input to the model can better compensate the nonlinear factors of the overall system and improve the accuracy of the predistortion processing.
  • the first continuous timing signal includes a first preset number of timing signals before the to-be-processed timing signal, and a second preset number of timing signals after the to-be-processed timing signal.
  • the first preset number and the second preset number may be set according to actual requirements, for example, the first preset number and the second preset number may be set to be the same number.
  • performing normalization processing on the first continuous time series signal to obtain a corresponding second continuous time series signal including:
  • the training data set includes a third preset number of PA input time series signal samples and corresponding PA output time series signal samples of the target PA;
  • the training data set is also used to train the pre-trained BP neural network model.
  • the mean and standard deviation required for subsequent normalization of the time series signals are determined.
  • the number of time series signal samples included in the training data set may be set according to actual requirements; that is, the third preset number may be set according to actual requirements.
  • the normalization can be expressed as:

    x′_k = (x_k − μ) / σ    (3)

  • x_k is a timing signal included in the first continuous timing signal, and x′_k is the corresponding normalized timing signal included in the second continuous timing signal
  • μ is the mean of the sample data
  • σ is the standard deviation of the sample data.
  • the normalized data conforms to a standard normal distribution, that is, the mean is 0 and the standard deviation is 1.
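A minimal sketch of this z-score normalization — the mean and standard deviation are fit once on the training data set and then frozen for online use, as described later for the normalizer; the class name is an assumption:

```python
import numpy as np

class Normalizer:
    """z-score normalizer: x' = (x - mu) / sigma.
    mu and sigma are fit on the training data set and then
    kept unchanged during online use and online updates."""

    def fit(self, samples):
        self.mu = np.mean(samples)
        self.sigma = np.std(samples)
        return self

    def __call__(self, x):
        return (x - self.mu) / self.sigma
```

Applied to its own fitting data, the output has mean 0 and standard deviation 1, matching the stated property of the normalized data.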
  • each time series signal can be expressed as a real (In-phase, abbreviated I) component and an imaginary (Quadrature, abbreviated Q) component, so the time series signal may also be called an IQ timing signal, IQ data, or data in IQ format.
  • the abscissas in the left and right graphs both represent the real component (i.e., I) of the time series signal, and the ordinates both represent the imaginary component (i.e., Q); the left image is the data set before normalization, and the right image is the data set after normalization. Comparing the two, the order of magnitude of the normalized data is significantly reduced (that is, the order of magnitude of the corresponding horizontal and vertical coordinates is significantly reduced), which is convenient for model processing.
  • the unnormalized dataset is used to train the BP neural network model (left image), and the normalized dataset is used to train the BP neural network model (right image).
  • the data is divided into a training set (corresponding to curve 401 in the two figures), a validation set (corresponding to curve 402), and a test set (corresponding to curve 403); the abscissas in the two figures represent the number of training iterations, and the ordinates represent the corresponding normalized mean square error (NMSE), so the corresponding curves can be called NMSE curves.
  • the time series signal may be normalized first, and then the normalized time series signal is used for training or online predistortion processing.
  • the BP neural network includes an input layer, a third preset number of hidden layers, and an output layer
  • inputting the second continuous time series signal into the pre-trained back-propagation BP neural network model and outputting the predistorted timing signal corresponding to the timing signal to be processed includes:
  • the second continuous time series signal is received, and the second continuous time series signal is converted into an initial feature vector
  • the predistorted time series signal is output.
  • the BP neural network model is shown in Figure 5, including an input layer, hidden layers, and an output layer.
  • a normalizer is also set before the input layer for normalizing the first continuous time series data.
  • the input layer mainly involves how to combine the input time series signals into the input format required by the BP neural network model (i.e., conversion into the initial feature vector).
  • the main function of the hidden layer is to abstract the input information and discover hidden features, so that the input information can be represented as a feature vector (that is, to obtain an abstract feature vector).
  • the number of hidden layers in this embodiment of the present application is 2, which are hidden layer 1 and hidden layer 2 in the figure respectively.
  • the main function of the output layer is to convert the abstract feature information into the predistorted target IQ format, that is, to output the predistorted time sequence signal corresponding to the time sequence signal to be processed.
  • the input layer converts the input time series signal into the input of the BP neural network.
  • the information of the input layer can be listed in the form of the matrix shown in FIG. 6, covering the memory depth, the value of the signal at the current time, and the delay information; I and Q represent the real-part information and imaginary-part information corresponding to the time series signal, A^n represents the order information of the signal amplitude, and "other" is other scalable information to facilitate extensions to the model input.
  • the matrix representation in Figure 6 needs to be converted into a preset vector form (ie, the initial feature vector), and the vector dimension is 1*n.
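Flattening the FIG. 6 matrix into the 1*n initial feature vector might be sketched as follows; the exact feature layout (I, Q, and amplitude orders per tap) is an illustrative assumption, since the patent leaves "other" extensible:

```python
import numpy as np

def to_feature_vector(window, n_orders=3):
    """Flatten a complex IQ window into a 1*n initial feature vector:
    for each tap, its I and Q components plus amplitude powers
    A^1 .. A^n_orders (the layout is an assumption for illustration)."""
    feats = []
    for s in window:
        a = abs(s)
        feats.extend([s.real, s.imag])                       # I and Q
        feats.extend(a ** k for k in range(1, n_orders + 1))  # A^1..A^n
    return np.asarray(feats)[None, :]   # shape (1, n)
```

With a 5-tap window and 3 amplitude orders, n = 5 × (2 + 3) = 25, so the initial feature vector has dimension 1*25.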
  • the depth and size of the hidden layers determine the learning upper limit of the BP network model; however, simply increasing the depth and size does not directly make the model better, and instead makes model training difficult to converge.
  • the present application uses a two-layer hidden structure, and the calculation expression of each hidden layer is shown in (4):

    X_out = f(X_input · W_h + b)    (4)
  • X input is the input of the layer
  • W h is a matrix of n*m
  • n represents the output dimension of the upper layer
  • m represents the number of neurons in this layer
  • f(x) is the activation function
  • b is the bias.
  • the activation function is introduced to give the network model the ability to learn nonlinear functions.
  • the ReLU function can be used as the activation function, and the formula is shown in (5):

    a = f(z) = max(0, z)    (5)
  • a is the output of the activation function
  • z is the calculation result of the linear change of the neuron.
  • the output layer uses a fully connected linear unit for output, and the formula is shown in (6):

    Y_out = X · W_o + b_o    (6)
  • W_o is a matrix of m*2
  • m represents the output dimension of the hidden layer
  • 2 represents the 2-dimensional vector output of the real part and the imaginary part.
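The forward pass described by formulas (4) through (6) — two ReLU hidden layers followed by a fully connected linear output of dimension 2 (real and imaginary parts) — can be sketched as follows; the weight shapes and NumPy usage are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def relu(z):
    """ReLU activation, formula (5): a = max(0, z)."""
    return np.maximum(0.0, z)

def forward(x, params):
    """Two ReLU hidden layers (formula (4)) and a linear 2-dim output
    (formula (6)) producing the predistorted I and Q components."""
    W1, b1, W2, b2, Wo, bo = params
    h1 = relu(x @ W1 + b1)   # hidden layer 1: f(X_input * W_h + b)
    h2 = relu(h1 @ W2 + b2)  # hidden layer 2
    return h2 @ Wo + bo      # linear output: [I, Q]
```

Here `W1` is n*m as in the text (n: output dimension of the previous layer, m: neurons in this layer), and `Wo` is m*2.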
  • the training process of the BP neural network model is as follows:
  • FIG. 7 is a schematic diagram of the online real-time system corresponding to the embodiment of the application
  • the training of the BP neural network model can be divided into two stages: the first is the offline training stage, in which the pre-trained BP neural network model used in the application is obtained;
  • the second is the step-by-step tuning stage during online use.
  • Each training process requires a large amount of iterative calculations.
  • the embodiment of the application adopts the above two-stage training: a large batch of data with different powers is selected as the pre-training data set, a model is trained offline, and the online real-time system then uses the pre-trained model as the training starting point for gradual optimization.
  • the structure of the pre-trained BP neural network model can be set according to different power amplifier models.
  • the parameters of the normalizer are determined at the same time; in other words, the mean and standard deviation used in normalization are determined, and these normalizer parameters then remain unchanged during subsequent online use and online update stages.
  • the BP neural network model is trained by using the pre-acquired training data set, wherein the training data used are the PA input time series signal samples of the target PA and the corresponding PA output time series signal samples.
  • the training data used are the PA input time series signal samples of the target PA and the corresponding PA output time series signal samples.
  • each PA output timing signal sample is input into the BP neural network model, and the corresponding predicted PA input timing signal is output; based on each predicted PA input timing signal and the corresponding PA input timing signal sample, the corresponding mean square error is obtained;
  • the network parameters of the BP neural network model are updated by the back propagation algorithm, and then the pre-trained BP neural network model is obtained.
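The offline update above (mean square error between predicted and recorded PA input signals, parameters updated by back-propagation with gradient descent) can be sketched for a one-hidden-layer ReLU network; the shapes, learning rate, and real-valued features are illustrative assumptions:

```python
import numpy as np

def train_step(x, target, W1, b1, W2, b2, lr=1e-2):
    """One back-propagation update: forward pass, MSE against the PA
    input samples, gradients propagated backward, gradient-descent step.
    Returns the mean square error before the update."""
    # forward pass
    z1 = x @ W1 + b1
    h1 = np.maximum(0.0, z1)            # ReLU hidden layer
    pred = h1 @ W2 + b2                 # predicted PA input signal
    err = pred - target
    n = x.shape[0]
    # backward pass (gradients of the mean square error)
    dW2 = h1.T @ err * (2.0 / n)
    db2 = err.mean(axis=0) * 2.0
    dh1 = err @ W2.T
    dz1 = dh1 * (z1 > 0)                # ReLU gradient mask
    dW1 = x.T @ dz1 * (2.0 / n)
    db1 = dz1.mean(axis=0) * 2.0
    # gradient-descent update of the network parameters
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return float(np.mean(err ** 2))
```

Iterating this step over the pre-training data set drives the MSE down, yielding the pre-trained model used as the online starting point.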
  • the method further includes:
  • the network parameters of the BP neural network model are updated again.
  • each PA output signal is input into the BP neural network model, and the corresponding predicted PA input timing signal is output; based on each predicted PA input timing signal and the corresponding predistorted timing signal, the corresponding mean square error is obtained; the network parameters of the BP neural network model are updated based on each mean square error.
  • the obtained PA output signals may also be referred to as the feedback data in the figure; that is, multiple pieces of feedback data are used to update the network parameters of the BP neural network model, where the update frequency of the network parameters can be set according to actual needs.
  • the network parameters can be updated every preset time interval.
  • the fourth preset number can be set according to actual requirements.
  • y represents the IQ data before passing through the power amplifier (that is, the PA input timing signal sample or the predistorted timing signal), i.e., the expected output of the inverse behavior model of the power amplifier, and y_output is the output of the BP neural network model (the predicted PA input timing signal). Since the gradient descent algorithm is used to solve the BP network, the cut-off condition for the training iterations is: the number of iterations exceeds the preset maximum number of iterations, or the objective function does not decrease within a specified number of iterations.
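The stated cut-off condition (iteration count exceeds a preset maximum, or the objective function stops decreasing within a specified number of iterations) can be sketched as follows; the function name and loss-history representation are assumptions:

```python
def should_stop(history, max_iters, patience):
    """Decide whether model training should stop: either the number of
    iterations has reached `max_iters`, or the objective has not
    decreased within the last `patience` iterations."""
    if len(history) >= max_iters:
        return True
    if len(history) > patience:
        # no new minimum inside the patience window -> stop
        return min(history[-patience:]) >= min(history[:-patience])
    return False
```

Calling this after each iteration with the running list of loss values implements the two-part cut-off described above.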
  • FIG. 8 is a structural block diagram of a predistortion processing apparatus provided by an embodiment of the present application.
  • the apparatus 800 may include: a timing signal acquisition module 801, a normalization module 802, and a predistortion processing module 803, wherein :
  • the timing signal acquisition module 801 is configured to acquire the first continuous timing signal including the timing signal to be processed
  • the normalization module 802 is configured to perform normalization processing on the first continuous time series signal to obtain a corresponding second continuous time series signal;
  • the predistortion processing module 803 is used to input the second continuous time series signal into the pre-trained back-propagation BP neural network model, and output the predistorted time series signal corresponding to the time series signal to be processed, wherein the BP neural network model is used to perform predistortion processing on the time series signal to be processed.
  • the inverse behavior model based on the BP neural network is used to pre-distort the time series signal to be processed, and before the pre-distortion processing is performed, the time series signal input into the inverse behavior model is normalized. Using the BP-neural-network-based inverse behavior model and normalizing the time series signal input to the model can better compensate the nonlinear factors of the overall system and improve the accuracy of the predistortion processing.
  • the first continuous timing signal includes a first preset number of timing signals before the to-be-processed timing signal, and a second preset number of timing signals after the to-be-processed timing signal.
  • the normalization module is specifically used for:
  • the training data set includes a third preset number of PA input time series signal samples and corresponding PA output time series signal samples of the target PA;
  • the predistortion processing module is specifically configured to:
  • the second continuous time series signal is received, and the second continuous time series signal is converted into an initial feature vector
  • the predistorted time series signal is output.
  • the device further includes a first training module for:
  • the first training module is specifically used for:
  • the network parameters of the BP neural network model are updated by the back-propagation algorithm, and then the pre-trained BP neural network model is obtained.
  • the device further includes a second training module for:
  • the network parameters of the BP neural network model are updated again.
  • the second training module is specifically used for:
  • the network parameters of the BP neural network model are updated based on each mean square error.
  • FIG. 9 shows a schematic structural diagram of an electronic device 900 (e.g., a terminal device or a server that executes the method shown in FIG. 2) suitable for implementing an embodiment of the present application.
  • the electronic devices in the embodiments of the present application may include, but are not limited to, such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), in-vehicle terminals (such as mobile terminals such as in-vehicle navigation terminals), wearable devices, etc., and stationary terminals such as digital TVs, desktop computers, and the like.
  • the electronic device shown in FIG. 9 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present application.
  • the electronic device includes: a memory and a processor, where the memory is used to store a program for executing the methods described in the above method embodiments; the processor is configured to execute the program stored in the memory.
  • the processor here may be referred to as the processing device 901 described below, and the memory may include at least one of a read-only memory (ROM) 902, a random access memory (RAM) 903, and a storage device 908, as described below:
  • the electronic device 900 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 901, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data necessary for the operation of the electronic device 900.
  • the processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904.
  • An input/output (I/O) interface 905 is also connected to the bus 904.
  • the following devices can be connected to the I/O interface 905: an input device 906 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 907 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 908 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 909.
  • the communication device 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 9 illustrates an electronic device 900 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present application include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from a network via the communication device 909, or installed from the storage device 908, or installed from the ROM 902.
  • When the computer program is executed by the processing device 901, the above-mentioned functions defined in the methods of the embodiments of the present application are executed.
  • the computer-readable storage medium mentioned above in the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of computer-readable storage media may include, but are not limited to, an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to:
  • obtain a first continuous time series signal including the to-be-processed time series signal; perform normalization processing on the first continuous time series signal to obtain the corresponding second continuous time series signal; and input the second continuous time series signal into a pre-trained back-propagation (BP) neural network model, which outputs the predistorted time series signal corresponding to the time series signal to be processed, wherein the BP neural network model is an inverse behavior model of the target power amplifier (PA) used for power amplification of the time series signal to be processed.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the modules or units involved in the embodiments of the present application may be implemented in a software manner, and may also be implemented in a hardware manner.
  • the name of the module or unit does not constitute a limitation of the unit itself under certain circumstances, for example, the timing signal acquisition module can also be described as a "module for acquiring timing signals".
  • exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store the program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • the apparatuses provided in the embodiments of the present application may implement at least one module among the multiple modules through the AI model.
  • AI-related functions may be performed by non-volatile memory, volatile memory, and a processor.
  • the processor may include one or more processors.
  • the one or more processors may be general-purpose processors, such as central processing units (CPUs) and application processors (APs), graphics-dedicated processors, such as graphics processing units (GPUs) and vision processing units (VPUs), and/or AI-dedicated processors, such as neural processing units (NPUs).
  • the one or more processors control the processing of input data according to predefined operating rules or artificial intelligence (AI) models stored in the non-volatile memory and the volatile memory. The predefined operating rules or artificial intelligence models are provided through training or learning.
  • providing by learning refers to obtaining a predefined operating rule or an AI model with desired characteristics by applying a learning algorithm to a plurality of pieces of learning data.
  • This learning may be performed in the apparatus itself in which the AI according to an embodiment is performed, and/or may be implemented by a separate server/system.
  • the AI model may contain multiple neural network layers. Each layer has multiple weight values, and the calculation of one layer is performed based on the calculation result of the previous layer and the multiple weights of the current layer.
  • Examples of neural networks include, but are not limited to, Convolutional Neural Networks (CNN), Deep Neural Networks (DNN), Recurrent Neural Networks (RNN), Restricted Boltzmann Machines (RBM), Deep Belief Networks (DBN), Bidirectional Recurrent Deep Neural Networks (BRDNN), Generative Adversarial Networks (GAN), and Deep Q-Networks.
  • a learning algorithm is a method of training a predetermined target device (eg, a robot) using a plurality of learning data to cause, allow or control the target device to make a determination or prediction.
  • Examples of such learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
  • the applicable system may be a global system for mobile communications (GSM) system, a code division multiple access (CDMA) system, a wideband code division multiple access (WCDMA) system, a general packet radio service (GPRS) system, a long term evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a long term evolution advanced (LTE-A) system, a universal mobile telecommunications system (UMTS), a worldwide interoperability for microwave access (WiMAX) system, a 5G New Radio (NR) system, etc.
  • the terminal device involved in the embodiments of the present application may be a device that provides voice and/or data connectivity to a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem.
  • the name of the terminal device may be different.
  • the terminal device may be called user equipment (User Equipment, UE).
  • Wireless terminal equipment can communicate with one or more core networks (Core Network, CN) via a radio access network (Radio Access Network, RAN).
  • Wireless terminal devices may be mobile terminal devices, such as mobile phones (also called "cellular" phones) and computers with mobile terminal equipment, for example, portable, pocket-sized, hand-held, computer-built-in, or vehicle-mounted mobile devices, which exchange voice and/or data with the radio access network.
  • A wireless terminal device may also be referred to as a system, a subscriber unit, a subscriber station, a mobile station, a remote station, an access point, a remote terminal device (remote terminal), an access terminal device (access terminal), a user terminal device (user terminal), a user agent (user agent), or a user device (user device), which is not limited in the embodiments of the present application.

Abstract

A pre-distortion processing method and apparatus. The method comprises: obtaining a first continuous timing signal comprising a timing signal to be processed (S201); performing normalization processing on the first continuous timing signal to obtain a corresponding second continuous timing signal (S202); and inputting the second continuous timing signal into a pre-trained back-propagation (BP) neural network model and outputting a pre-distorted timing signal corresponding to the timing signal to be processed (S203), wherein the BP neural network model is an inverse behavior model of a corresponding target power amplifier (PA). Using the inverse behavior model based on the BP neural network and performing normalization processing on the timing signal input into the model can better compensate for the non-linear factors of the overall system and improve the precision of pre-distortion processing.
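Steps S201–S203 can be sketched end to end as follows. This is an illustrative Python sketch, not the patented implementation; the window length, network shape, and the "trained" weights are hypothetical placeholders.

```python
import numpy as np

def predistort_sample(window, mean, std, W1, b1, W2, b2):
    """S201: 'window' is the first continuous timing signal around the
    sample to be processed. S202: normalize it with statistics taken from
    the training data set. S203: run the pre-trained BP inverse model."""
    x = (np.asarray(window, dtype=float) - mean) / std   # S202
    h = np.tanh(W1 @ x + b1)                             # hidden layer
    return W2 @ h + b2                                   # pre-distorted signal

# Hypothetical 5-sample window and randomly initialized stand-in weights
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 5)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)
out = predistort_sample([0.1, 0.2, 0.3, 0.2, 0.1], 0.2, 0.1, W1, b1, W2, b2)
print(out.shape)
```

In a real deployment the weights would come from the pre-training described in the claims, and the pre-distorted output would be fed to the target PA.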

Description

Predistortion Processing Method and Apparatus
Technical Field
The present application relates to the technical field of artificial intelligence, and in particular to a predistortion processing method and apparatus.
Background Art
A power amplifier (PA) is a device that amplifies a modulated frequency band signal to a required power value. As the most energy-consuming device in a communication base station, improving its efficiency is particularly important. Due to the nonlinear characteristics and memory effect of the PA, in-band distortion and out-of-band spectrum spreading occur, which directly affect the efficiency of the base station and the quality of the transmitted signal.
Digital pre-distortion (DPD) technology establishes an inverse behavior model of the PA and inserts that model into the link before the PA, so that the signal is preprocessed before entering the PA; this keeps the power amplifier linear even in the high-power region and improves the quality of the transmitted signal. The traditional inverse behavior model in the field of digital predistortion is obtained using a generalized memory polynomial (GMP); the traditional GMP model is a simplification of the Volterra series model.
However, with the surge in the number of wireless communication users in recent years, and especially as 5G (fifth-generation mobile communication technology) gradually enters commercial use, bandwidth and frequency have increased accordingly. This requires a larger memory depth and nonlinear order and introduces potential numerical instability; against this background, the predistortion processing accuracy of the traditional GMP model is low.
Summary of the Invention
The purpose of the present application is to solve at least one of the above technical defects. The technical solutions provided by the embodiments of the present application are as follows:
In a first aspect, the present application provides a predistortion processing method, including:
obtaining a first continuous time series signal containing a time series signal to be processed;
normalizing the first continuous time series signal to obtain a corresponding second continuous time series signal;
inputting the second continuous time series signal into a pre-trained back-propagation (BP) neural network model, and outputting a predistorted time series signal corresponding to the time series signal to be processed, wherein the BP neural network model is an inverse behavior model of the target power amplifier (PA) used for power amplification of the time series signal to be processed.
In an optional embodiment of the present application, the first continuous time series signal includes a first preset number of time series signals before the time series signal to be processed and a second preset number of time series signals after the time series signal to be processed.
In an optional embodiment of the present application, normalizing the first continuous time series signal to obtain the corresponding second continuous time series signal includes:
obtaining a training data set, and obtaining the mean and standard deviation of the time series data samples in the training data set, wherein the training data set contains a third preset number of PA input time series signal samples of the target PA and the corresponding PA output time series signal samples;
obtaining, based on the mean and standard deviation, a normalized time series signal corresponding to each time series signal in the first continuous time series signal;
obtaining the second continuous time series signal based on the normalized time series signals.
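The normalization steps above can be sketched as follows: the mean and standard deviation are computed once from the training data set and then applied to every signal in the first continuous time series signal. The application does not give an explicit formula, so a standard z-score normalization is assumed here.

```python
import numpy as np

def fit_normalizer(training_samples):
    # Statistics come from the time series data samples in the training
    # data set (PA input/output signal samples of the target PA)
    train = np.asarray(training_samples, dtype=float)
    return float(train.mean()), float(train.std())

def normalize(first_continuous_signal, mean, std):
    # Apply the same statistics to each signal in the first continuous
    # time series signal, yielding the second continuous time series signal
    return (np.asarray(first_continuous_signal, dtype=float) - mean) / std

mean, std = fit_normalizer([1.0, 2.0, 3.0, 4.0])
second = normalize([2.5, 2.5 + std], mean, std)
print(np.round(second, 3))  # prints [0. 1.]
```

Reusing the training-set statistics at inference time keeps the model's input distribution consistent with what it saw during pre-training.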
In an optional embodiment of the present application, inputting the second continuous time series signal into the pre-trained BP neural network model and outputting the predistorted time series signal includes:
receiving, through the input layer, the second continuous time series signal, and converting the second continuous time series signal into an initial feature vector;
obtaining, through the hidden layer, a corresponding abstract feature vector based on the initial feature vector;
outputting, through the output layer, the predistorted time series signal based on the abstract feature vector.
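The three layers described above can be sketched as a minimal forward pass. The layer sizes and the tanh activation are illustrative assumptions, not values fixed by the application.

```python
import numpy as np

class BPInverseModel:
    """Minimal input -> hidden -> output BP network sketch."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def forward(self, second_continuous_signal):
        # Input layer: convert the second continuous time series signal
        # into the initial feature vector
        x = np.asarray(second_continuous_signal, dtype=float).ravel()
        # Hidden layer: map the initial feature vector to an abstract
        # feature vector
        h = np.tanh(self.W1 @ x + self.b1)
        # Output layer: produce the predistorted time series signal
        return self.W2 @ h + self.b2

model = BPInverseModel(n_in=5, n_hidden=16, n_out=1)
out = model.forward([0.1, -0.2, 0.3, -0.1, 0.0])
print(out.shape)
```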
In an optional embodiment of the present application, the training process of the BP neural network model is as follows:
training the BP neural network model using the training data set to obtain the pre-trained BP neural network model.
In an optional embodiment of the present application, training the BP neural network model using the training data set to obtain the pre-trained BP neural network model includes:
inputting each PA output time series signal sample into the BP neural network model, and outputting a corresponding predicted PA input time series signal;
obtaining a corresponding mean square error based on each predicted PA input time series signal and the corresponding PA input time series signal sample;
updating the network parameters of the BP neural network model based on each mean square error using the back-propagation algorithm, thereby obtaining the pre-trained BP neural network model.
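The pre-training steps above can be sketched with a toy PA model: PA *output* samples are fed to the network and the corresponding PA *input* samples are the targets, so the network learns the inverse behavior. The hyperparameters, layer sizes, and the cubic PA characteristic are illustrative assumptions, not values from the application.

```python
import numpy as np

def pretrain_inverse_model(pa_out_samples, pa_in_samples, n_hidden=16,
                           lr=0.05, epochs=500, seed=0):
    """Full-batch gradient descent on the mean square error between the
    predicted PA input signals and the PA input samples; gradients are
    obtained by back-propagation through the two layers."""
    X = np.asarray(pa_out_samples, dtype=float).reshape(-1, 1)
    Y = np.asarray(pa_in_samples, dtype=float).reshape(-1, 1)
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)
    mses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)               # hidden-layer activations
        P = H @ W2 + b2                        # predicted PA input signals
        err = P - Y
        mses.append(float((err ** 2).mean()))  # mean square error
        dP = 2.0 * err / len(X)                # back-propagate the MSE
        dH = (dP @ W2.T) * (1.0 - H ** 2)
        W2 -= lr * (H.T @ dP); b2 -= lr * dP.sum(axis=0)
        W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)
    return (W1, b1, W2, b2), mses

# Toy PA with mild cubic compression y = x - 0.1 x^3; learn its inverse
x = np.linspace(-1.0, 1.0, 64)
params, mses = pretrain_inverse_model(x - 0.1 * x ** 3, x)
print(mses[0] > mses[-1])
```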
In an optional embodiment of the present application, the method further includes:
after obtaining the predistorted time series signals corresponding to a fourth preset number of time series signals to be processed, inputting each predistorted time series signal into the target PA to obtain at least one corresponding PA output time series signal;
updating the network parameters of the BP neural network model again based on each predistorted time series signal and the corresponding PA output time series signal.
In an optional embodiment of the present application, updating the network parameters of the BP neural network again based on each predistorted time series signal and the corresponding PA output time series signal includes:
inputting each PA output signal into the BP neural network model, and outputting a corresponding predicted PA input time series signal;
obtaining a corresponding mean square error based on each predicted PA input time series signal and the corresponding predistorted time series signal;
updating the network parameters of the BP neural network model based on each mean square error.
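The closed-loop update steps above can be sketched as a single gradient step on the mean square error between each predicted PA input signal and the predistorted signal that was actually sent to the PA. The single-input model shape, learning rate, and sample values below are hypothetical.

```python
import numpy as np

def closed_loop_update(W1, b1, W2, b2, pa_outputs, predistorted, lr=0.01):
    """Feed the observed PA output signals into the inverse model, compare
    the predicted PA input signals with the predistorted signals, and take
    one back-propagation step on the resulting mean square error."""
    X = np.asarray(pa_outputs, dtype=float).reshape(-1, 1)
    Y = np.asarray(predistorted, dtype=float).reshape(-1, 1)
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2                       # predicted PA input signals
    err = P - Y
    dP = 2.0 * err / len(X)               # gradient of the MSE
    dH = (dP @ W2.T) * (1.0 - H ** 2)
    return (W1 - lr * (X.T @ dH), b1 - lr * dH.sum(axis=0),
            W2 - lr * (H.T @ dP), b2 - lr * dP.sum(axis=0),
            float((err ** 2).mean()))

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
predistorted = [0.2, -0.4, 0.5]           # signals previously sent to the PA
pa_out = [0.19, -0.38, 0.47]              # hypothetical observed PA outputs
W1, b1, W2, b2, loss1 = closed_loop_update(W1, b1, W2, b2, pa_out, predistorted)
_, _, _, _, loss2 = closed_loop_update(W1, b1, W2, b2, pa_out, predistorted)
print(loss2 <= loss1)
```

Repeating this step as new PA outputs are observed lets the inverse model track slow drift in the PA's characteristics.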
In a second aspect, the present application provides a predistortion processing apparatus, including:
a time series signal acquisition module, configured to obtain a first continuous time series signal containing a time series signal to be processed;
a normalization module, configured to normalize the first continuous time series signal to obtain a corresponding second continuous time series signal;
a predistortion processing module, configured to input the second continuous time series signal into a pre-trained back-propagation (BP) neural network model and output a predistorted time series signal corresponding to the time series signal to be processed, wherein the BP neural network model is an inverse behavior model of the target power amplifier (PA) used for power amplification of the time series signal to be processed.
In an optional embodiment of the present application, the first continuous time series signal includes a first preset number of time series signals before the time series signal to be processed and a second preset number of time series signals after the time series signal to be processed.
In an optional embodiment of the present application, the normalization module is specifically configured to:
obtain a training data set, and obtain the mean and standard deviation of the time series data samples in the training data set, wherein the training data set contains a third preset number of PA input time series signal samples of the target PA and the corresponding PA output time series signal samples;
obtain, based on the mean and standard deviation, a normalized time series signal corresponding to each time series signal in the first continuous time series signal;
obtain the second continuous time series signal based on the normalized time series signals.
In an optional embodiment of the present application, the predistortion processing module is specifically configured to:
receive, through the input layer, the second continuous time series signal, and convert the second continuous time series signal into an initial feature vector;
obtain, through the hidden layer, a corresponding abstract feature vector based on the initial feature vector;
output, through the output layer, the predistorted time series signal based on the abstract feature vector.
In an optional embodiment of the present application, the apparatus further includes a first training module configured to:
train the BP neural network model using the training data set to obtain the pre-trained BP neural network model.
In an optional embodiment of the present application, the first training module is specifically configured to:
input each PA output time series signal sample into the BP neural network model, and output a corresponding predicted PA input time series signal;
obtain a corresponding mean square error based on each predicted PA input time series signal and the corresponding PA input time series signal sample;
update the network parameters of the BP neural network model based on each mean square error using the back-propagation algorithm, thereby obtaining the pre-trained BP neural network model.
In an optional embodiment of the present application, the apparatus further includes a second training module configured to:
after obtaining the predistorted time series signals corresponding to a fourth preset number of time series signals to be processed, input each predistorted time series signal into the target PA to obtain at least one corresponding PA output time series signal;
update the network parameters of the BP neural network model again based on each predistorted time series signal and the corresponding PA output time series signal.
In an optional embodiment of the present application, the second training module is specifically configured to:
input each PA output signal into the BP neural network model, and output a corresponding predicted PA input time series signal;
obtain a corresponding mean square error based on each predicted PA input time series signal and the corresponding predistorted time series signal;
update the network parameters of the BP neural network model based on each mean square error.
In a third aspect, the present application provides an electronic device, including a memory and a processor;
a computer program is stored in the memory;
the processor is configured to execute the computer program to implement the method provided in the embodiment of the first aspect or any optional embodiment of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the method provided in the embodiment of the first aspect or any optional embodiment of the first aspect is implemented.
The beneficial effects of the technical solutions provided by the present application are as follows:
Before the time series signal to be processed is amplified by the target PA, it is predistorted by an inverse behavior model based on a BP neural network, and the time series signal input to that model is normalized beforehand. Using an inverse behavior model based on a BP neural network and normalizing the time series signal input to the model can better compensate for the nonlinear factors of the overall system and improve the accuracy of predistortion processing.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings used in the description of the embodiments of the present application are briefly introduced below.
FIG. 1 is a schematic diagram of an LUT table lookup process in the prior art;
FIG. 2 is a schematic flowchart of a predistortion processing method provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a BP neural network model in an example of an embodiment of the present application;
FIG. 4a is a schematic comparison of a normalized IQ time series signal and an unnormalized time series signal in an example of an embodiment of the present application;
FIG. 4b is a schematic comparison of the NMSE curves of the model training process using a normalized IQ time series signal and an unnormalized time series signal, respectively, in an example of an embodiment of the present application;
FIG. 5 is a schematic diagram of a BP neural network model in another example of an embodiment of the present application;
FIG. 6 shows the feature combination mode of the input layer in an example of an embodiment of the present application;
FIG. 7 is a schematic diagram of the training process of a BP neural network model in an example of an embodiment of the present application;
FIG. 8 is a structural block diagram of a predistortion processing apparatus provided by an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
具体实施方式Detailed Description of Embodiments
下面详细描述本申请的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,仅用于解释本申请,而不能解释为对本申请的限制。The following describes in detail the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are exemplary and are only used to explain the present application, but not to be construed as a limitation on the present application.
本技术领域技术人员可以理解,除非特意声明,这里使用的单数形式“一”、“一个”、“所述”和“该”也可包括复数形式。应该进一步理解的是,本申请的说明书中使用的措辞“包括”是指存在所述特征、整数、步骤、操作、元件和/或组件,但是并不排除存在或添加一个或多个其他特征、整数、步骤、操作、元件、组件和/或它们的组。应该理解,当我们称元件被“连接”或“耦接”到另一元件时,它可以直接连接或耦接到其他元件,或者也可以存在中间元件。此外,这里使用的“连接”或“耦接”可以包括无线连接或无线耦接。这里使用的措辞“和/或”包括一个或更多个相关联的列出项的全部或任一单元和全部组合。It will be understood by those skilled in the art that the singular forms "a", "an", "the" and "the" as used herein can include the plural forms as well, unless expressly stated otherwise. It should be further understood that the word "comprising" used in the specification of this application refers to the presence of stated features, integers, steps, operations, elements and/or components, but does not preclude the presence or addition of one or more other features, Integers, steps, operations, elements, components and/or groups thereof. It will be understood that when we refer to an element as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Furthermore, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combination of one or more of the associated listed items.
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。In order to make the objectives, technical solutions and advantages of the present application clearer, the embodiments of the present application will be further described in detail below with reference to the accompanying drawings.
The digital predistortion modeling scheme used in existing base stations employs the generalized memory polynomial (GMP) model, which can be expressed by the following formula:
$$y(n)=\sum_{i}\sum_{j}\sum_{k} b_{ijk}\, x(n-i)\, \lvert x(n-j)\rvert^{k} \tag{1}$$
In formula (1), when building the inverse model of the power amplifier (i.e., the inverse behavioral model), x is the power amplifier output signal and y is the power amplifier input signal; the three parameters i, j and k denote the signal memory depth, the signal-modulus memory depth and the nonlinear order of the signal modulus, respectively, and b denotes the model coefficients. In a complete real-time system, for reasons of speed and resource usage, the trained GMP model is usually implemented with look-up tables (LUT). The LUT is indexed by signal amplitude, and each entry stores the accumulated sum of the products of the distortion coefficients and the nonlinear terms of the signal modulus. The number of LUTs is determined by the memory depths i and j: each (i, j) combination corresponds to one LUT, whose stored values are computed according to formula (2), where AMP is the quantized signal amplitude.
$$\mathrm{LUT}_{ij}\big(\lvert \mathrm{AMP}\rvert\big)=\sum_{k} b_{ijk}\,\lvert \mathrm{AMP}\rvert^{k} \tag{2}$$
Fig. 1 shows the LUT look-up process for a signal memory depth i and an amplitude memory depth j. Z in the figure denotes a delay: for example, given x(n) in a group of timing signals, applying Z^{-j} yields x(n-j); abs denotes taking the modulus, so abs(x(n-j)) is |AMP| in formula (2); combining this with the delay Z^{-i} gives x(n-i), and then:
$$y_{ij}(n)=x(n-i)\cdot \mathrm{LUT}_{ij}\big(\lvert x(n-j)\rvert\big)=x(n-i)\sum_{k} b_{ijk}\,\lvert x(n-j)\rvert^{k} \tag{3}$$
Then y(n) is obtained by summing all the y_{ij}(n) over i and j.
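As a concrete illustration, the LUT computation of formulas (2)-(3) and the summation above can be sketched as follows. This is a minimal sketch: the function name, array shapes and coefficient values are illustrative assumptions, not part of the patent.

```python
import numpy as np

def gmp_output(x, b, n):
    # y(n) = sum_i sum_j x(n-i) * LUT_ij(|x(n-j)|)
    #      = sum_i sum_j sum_k b_ijk * x(n-i) * |x(n-j)|^k   (formulas (1)-(3))
    I, J, K = b.shape
    y = 0j
    for i in range(I):
        for j in range(J):
            amp = abs(x[n - j])                              # |AMP| in formula (2)
            lut = sum(b[i, j, k] * amp ** k for k in range(K))
            y += x[n - i] * lut
    return y

# Toy example: a single (i, j) combination with nonlinear order K = 2.
b = np.ones((1, 1, 2))
x = np.array([1 + 1j, 2 + 0j])
print(gmp_output(x, b, 1))   # x(1) * (1 + |x(1)|) = 2 * 3
```

In a real implementation the inner sum over k is precomputed per quantized amplitude and stored in the LUT, so the online path reduces to one table look-up and one complex multiply per (i, j) pair.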
The traditional GMP model is a simplification of the Volterra series model. As bandwidth and frequency increase, it tends to require a large memory depth and nonlinear order and suffers from potential numerical instability, so its accuracy when modeling power amplifiers with strong memory effects and strong nonlinearity is low; in other words, in this setting the predistortion accuracy of the traditional GMP model is limited.
To address the above problems, the embodiments of the present application provide a predistortion processing method, apparatus, electronic device and computer-readable storage medium; the solutions provided by the present application are described in detail below.
图2为本申请实施例提供的一种预失真处理方法的流程示意图,如图2所示,该方法可以包括以下步骤:FIG. 2 is a schematic flowchart of a predistortion processing method provided by an embodiment of the present application. As shown in FIG. 2 , the method may include the following steps:
步骤S201,获取包含待处理时序信号的第一连续时序信号。Step S201, acquiring a first continuous timing signal including the timing signal to be processed.
The timing signal to be processed is the timing signal that undergoes predistortion before entering the power amplifier. A continuous timing signal can be understood as a plurality of timing signals sampled at a certain time interval. If the timing signal to be processed corresponds to the current sampling instant, then timing signals collected at sampling instants before the current instant may be called timing signals before the signal to be processed, and those collected at later instants may be called timing signals after it. The first continuous timing signal therefore comprises the timing signal to be processed, the timing signals before it, and the timing signals after it. In the subsequent steps, the BP neural network model needs the features of the timing signals before and after the signal to be processed in order to obtain its predistorted timing signal, which is why the first continuous timing signal is acquired here.
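The window described above — the sample to be processed together with a fixed number of samples before and after it — can be sketched as follows (the function and parameter names are illustrative assumptions):

```python
def first_continuous_sequence(x, n, before, after):
    # Collects the timing signal at sampling instant n together with
    # `before` samples preceding it and `after` samples following it,
    # i.e. the "first continuous timing signal" fed to the model.
    assert before <= n < len(x) - after, "window must fit inside the sequence"
    return x[n - before : n + after + 1]

window = first_continuous_sequence(list(range(10)), n=5, before=2, after=2)
print(window)   # [3, 4, 5, 6, 7]
```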
步骤S202,对第一连续时序信号进行归一化处理,得到对应的第二连续时序信号。Step S202, performing normalization processing on the first continuous time series signal to obtain a corresponding second continuous time series signal.
Specifically, since a large amount of data is involved both in training and in online predistortion processing, and amplitude-order information may additionally be introduced, different features in the data can differ by orders of magnitude. When solving for the coefficients, large-valued features then receive large gradient weights, which slows down training when a loss function is used to search for the optimum. Moreover, for activation functions commonly used in neural networks, such as sigmoid and tanh, overly large inputs push the derivative into the saturation region of the activation function, making the parameter gradients extremely small and hindering error back-propagation. Therefore, before the time-series data enters the input layer of the BP network model, it needs to be normalized. Specifically, each timing signal contained in the first continuous timing signal is normalized to obtain the corresponding second continuous timing signal.
Step S203, input the second continuous timing signal into a pre-trained back-propagation (BP) neural network model and output the predistorted timing signal corresponding to the timing signal to be processed, where the BP neural network model is an inverse behavioral model of the target power amplifier (PA) used to amplify the timing signal to be processed.
其中,BP神经网络模型的结构如图3所示,它可以由输入层、隐层、输出层三部分组成,输入层为构建逆行为模型所需的结构化的数据,隐层层数可以任意多,隐层通过非线性激活函数和线性加权的方式产生非线性特征,将有用的特征信息以稠密特征向量的方式向下一层传播。神经网络的初始系数一般使用随机值进行初始化,训练方式是使用梯度下降,这种训练方法在每次模型训练都需要在数据集上多次迭代完成。Among them, the structure of the BP neural network model is shown in Figure 3. It can be composed of three parts: the input layer, the hidden layer and the output layer. The input layer is the structured data required to build the inverse behavior model, and the number of hidden layers can be arbitrary. The hidden layer generates nonlinear features through nonlinear activation functions and linear weighting, and propagates useful feature information to the next layer in the form of dense feature vectors. The initial coefficients of the neural network are generally initialized with random values, and the training method is gradient descent. This training method requires multiple iterations on the data set for each model training.
Specifically, the normalized second continuous timing signal is input into the pre-trained BP neural network model, which outputs the corresponding predistorted timing signal; this predistorted timing signal is then fed into the corresponding target PA to amplify the timing signal to be processed.
In the solution provided by the present application, before the timing signal to be processed is amplified by the target PA, it is predistorted by an inverse behavioral model based on a BP neural network, and the timing signal input to this model is normalized beforehand. Using a BP-neural-network-based inverse behavioral model and normalizing the model's input timing signal better compensates the nonlinearity of the overall system and improves the accuracy of predistortion processing.
在本申请的一种可选实施例中,第一连续时序信号包括待处理时序信号之前的第一预设数量的时序信号、以及在待处理时序信号之后的第二预设数量的时序信号。In an optional embodiment of the present application, the first continuous timing signal includes a first preset number of timing signals before the to-be-processed timing signal, and a second preset number of timing signals after the to-be-processed timing signal.
其中,第一预设数量和第二预设数量可以根据实际需求进行设定,例如,可以将第一预设数量和第二预设数量设定为相同数量。The first preset number and the second preset number may be set according to actual requirements, for example, the first preset number and the second preset number may be set to be the same number.
在本申请的一种可选实施例中,对第一连续时序信号进行归一化处理,得到对应的第二连续时序信号,包括:In an optional embodiment of the present application, performing normalization processing on the first continuous time series signal to obtain a corresponding second continuous time series signal, including:
获取训练数据集,并获取训练数据集中时序数据样本的均值和标准差,其中,训练数据集包含所述目标PA的第三预设数量的PA输入时序信号样本和对应的PA输出时序信号样本;Obtain a training data set, and obtain the mean and standard deviation of the time series data samples in the training data set, wherein the training data set includes a third preset number of PA input time series signal samples and corresponding PA output time series signal samples of the target PA;
基于均值和标准差,获取第一连续时序信号中各时序信号对应的归一化后的时序信号;Based on the mean value and the standard deviation, obtain the normalized time series signal corresponding to each time series signal in the first continuous time series signal;
基于各归一化后的时序信号,获取第二连续时序信号。Based on each normalized timing signal, a second continuous timing signal is obtained.
The training data set is also used to train the aforementioned pre-trained BP neural network model; in other words, while the pre-trained BP neural network model is being trained, the mean and standard deviation subsequently required for normalizing timing signals are determined.
The number of timing signal samples contained in the training data set can be set according to actual requirements, i.e., the third preset number can be set according to actual requirements.
具体地,对第一连续时序信号中包含的各时序信号进行归一化的过程可以表示为如下公式:Specifically, the process of normalizing each timing signal included in the first continuous timing signal can be expressed as the following formula:
$$x'_{k}=\frac{x_{k}-\mu}{\sigma}$$
In this formula, x_k is a timing signal contained in the first continuous timing signal, and x'_k is the corresponding normalized timing signal, i.e., a timing signal contained in the second continuous timing signal. μ is the mean of the sample data and σ is its standard deviation; the normalized data have mean 0 and standard deviation 1.
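A minimal sketch of this standardization, with μ and σ estimated once from the training data set and then reused unchanged online, as the text describes (the function and variable names are illustrative):

```python
import numpy as np

def fit_normalizer(train):
    # mu and sigma are computed from the training data set only and
    # kept fixed during online predistortion processing.
    return train.mean(), train.std()

def normalize(x, mu, sigma):
    # x'_k = (x_k - mu) / sigma
    return (x - mu) / sigma

train = np.array([2.0, 4.0, 6.0])
mu, sigma = fit_normalizer(train)
z = normalize(train, mu, sigma)
# After normalization the training data have mean 0 and std 1.
```

For IQ data the same transform would typically be applied to the real and imaginary components; this sketch uses a real-valued array for brevity.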
Further, in a timing signal data set, each timing signal can be represented by an in-phase (I) real component and a quadrature (Q) imaginary component, so a timing signal may also be called an IQ timing signal, IQ data, or data in IQ format. As shown in Fig. 4a, in both the left and right plots the horizontal axis is the real component (I) of the timing signal and the vertical axis is the imaginary component (Q); the left plot shows the data set before normalization and the right plot the data set after normalization. Comparing the two shows that the magnitude of the normalized data is significantly smaller (i.e., the scale of the corresponding axes shrinks markedly), which makes the data easier for the model to process.
As shown in Fig. 4b, the BP neural network model is trained with the unnormalized data set (left plot) and with the normalized data set (right plot). Specifically, the data are divided into a training set (curve 401 in both plots), a validation set (curve 402) and a test set (curve 403); in both plots the horizontal axis is the number of training iterations and the vertical axis is the corresponding normalized mean square error (NMSE), so the curves may be called NMSE curves. Comparing the two plots shows that training with the normalized data set makes the model converge faster and yields a better predistortion effect. In summary, in both training and online predistortion processing, the timing signals may first be normalized and the normalized timing signals then used for training or online predistortion.
In an optional embodiment of the present application, the BP neural network comprises an input layer, a third preset number of hidden layers and an output layer, and inputting the second continuous timing signal into the pre-trained back-propagation BP neural network model and outputting the predistorted timing signal corresponding to the timing signal to be processed includes:
通过输入层,接收第二连续时序信号,并将第二连续时序信号转换为初始特征向量;Through the input layer, the second continuous time series signal is received, and the second continuous time series signal is converted into an initial feature vector;
通过隐藏层,基于初始特征向量,获取对应的抽象特征向量;Through the hidden layer, based on the initial feature vector, the corresponding abstract feature vector is obtained;
通过输出层,基于抽象特征向量,输出预失真的时序信号。Through the output layer, based on the abstract feature vector, the predistorted time series signal is output.
其中,BP神经网络模型如图5所示,包括输入层、隐层(即隐藏层) 以及输出层。具体来说,在输入层之前还设置有归一化器,用于对第一连续时序数据进行归一化处理。输入层主要涉及如何将输入的时序信号组合成BP神经网络模型要求的输入格式(即转换从初始特征向量)。隐层的主要功能为将输入信息进行抽象、发现隐含特征,以便将输入信息表示成特征向量(即获取抽象特征向量),其中,隐层的数量(即深度)可以根据需求进行设定,本申请实施例中隐层的数量为2,分别为图中的隐层1和隐层2。输出层的主要功能是将抽象的特征信息转化为预失真后的目标IQ格式,即输出待处理时序信号对应的预失真的时序信号。The BP neural network model is shown in Figure 5, including an input layer, a hidden layer (ie, a hidden layer) and an output layer. Specifically, a normalizer is also set before the input layer for normalizing the first continuous time series data. The input layer mainly involves how to combine the input time series signals into the input format required by the BP neural network model (ie, conversion from the initial feature vector). The main function of the hidden layer is to abstract the input information and discover hidden features, so that the input information can be represented as a feature vector (that is, to obtain an abstract feature vector). The number of hidden layers in this embodiment of the present application is 2, which are hidden layer 1 and hidden layer 2 in the figure respectively. The main function of the output layer is to convert the abstract feature information into the predistorted target IQ format, that is, to output the predistorted time sequence signal corresponding to the time sequence signal to be processed.
Specifically, the input layer converts the input timing signals into the input of the BP neural network. In this embodiment of the present application, the input-layer information can be laid out as the matrix shown in Fig. 6, whose rows from top to bottom correspond to the signal memory depth, the value of the signal at the current instant, and the delay information; I and Q denote the real-part and imaginary-part information of a timing signal, A^n denotes the order information of the signal amplitude, and "other" denotes additional extensible information so that the model input can be extended. When using the BP neural network model, before the feature matrix enters the hidden layers, the matrix representation of Fig. 6 must be converted into a preset vector form (i.e., the initial feature vector) of dimension 1*n.
隐层深度和大小决定了BP网络模型的学习上限,但是单纯增加深度和大小并不能直接使模型效果更好,反而会使模型训练难以收敛。通过实验测试,本发明使用两层隐层结构,每个隐层的计算表达式为(4)所示:The depth and size of the hidden layer determine the learning upper limit of the BP network model, but simply increasing the depth and size cannot directly make the model better, but will make the model training difficult to converge. Through experimental tests, the present invention uses a two-layer hidden layer structure, and the calculation expression of each hidden layer is shown in (4):
h=f(X input*W h+b)  (4) h=f(X input *W h +b) (4)
其中,X input为本层的输入,W h为n*m的矩阵,n表示上层的输出维度,m表示该层神经元的个数,f(x)为激活函数,b为偏置。 Among them, X input is the input of the layer, W h is a matrix of n*m, n represents the output dimension of the upper layer, m represents the number of neurons in this layer, f(x) is the activation function, and b is the bias.
激活函数:激活函数的引入,是为了赋予网络模型对非线性函数的学习能力,本申请实施例可以使用ReLU函数作为激活函数,公式如(5)所示:Activation function: The activation function is introduced to give the network model the ability to learn nonlinear functions. In this embodiment of the present application, the ReLU function can be used as the activation function, and the formula is shown in (5):
a=max(0,z)  (5)a=max(0,z) (5)
其中,a为激活函数输出,z为神经元线性变化计算结果。Among them, a is the output of the activation function, and z is the calculation result of the linear change of the neuron.
输出层使用全连接线性单元进行输出,公式如(6)所示:The output layer uses a fully connected linear unit for output, and the formula is shown in (6):
y output=h*W o+b   (6) y output = h*W o +b (6)
Wo为m*2的矩阵,m表示隐层的输出维度,2表示实部和虚部的2 维向量输出。Wo is a matrix of m*2, m represents the output dimension of the hidden layer, and 2 represents the 2-dimensional vector output of the real part and the imaginary part.
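Formulas (4)-(6) together define the forward pass of the model. A self-contained sketch follows; the layer sizes, weight names and random initialization are illustrative assumptions:

```python
import numpy as np

def relu(z):
    # a = max(0, z)   (formula (5))
    return np.maximum(0.0, z)

def forward(x, W1, b1, W2, b2, Wo, bo):
    # Two hidden layers h = f(X_input @ W_h + b)   (formula (4)),
    # then a fully connected linear output y = h @ Wo + b   (formula (6))
    # producing the 2-dimensional (real part, imaginary part) prediction.
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return h2 @ Wo + bo

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 6))                  # 1*n initial feature vector
W1, b1 = rng.normal(size=(6, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
Wo, bo = rng.normal(size=(8, 2)), np.zeros(2)
y = forward(x, W1, b1, W2, b2, Wo, bo)       # shape (1, 2): I and Q
```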
在本申请的一种可选实施例中,BP神经网络模型的训练过程如下:In an optional embodiment of the present application, the training process of the BP neural network model is as follows:
利用训练数据集对BP神经网络模型进行训练,获取预训练的BP神经网络模型。Use the training data set to train the BP neural network model, and obtain the pre-trained BP neural network model.
Specifically, Fig. 7 is a schematic diagram of the online real-time system corresponding to the embodiment of the present application. The training of the BP neural network model can be divided into two stages: an offline training stage, in which the pre-trained BP neural network model of the present application is obtained, and a gradual fine-tuning stage during online use. Each training run requires a large number of iterative computations, and the first training after the base station powers on would otherwise take too long; therefore the embodiments of the present application adopt the two-stage training approach described above: a large batch of data at different power levels is selected as a pre-training data set, a model is trained offline, and the online real-time system then uses the pre-trained model as the training starting point for gradual fine-tuning. The structure of the pre-trained BP neural network model can be configured for different power amplifier models.
需要说明的是,在离线训练阶段,同时确定归一化器的参数,换言之,可以理解为确定归一化处理中的均值和标准差,那么在后续在线使用过程以及在线更新阶段归一化器的参数保持不变。It should be noted that in the offline training stage, the parameters of the normalizer are determined at the same time, in other words, it can be understood as determining the mean and standard deviation in the normalization process, then in the subsequent online use process and online update stage normalizer parameters remain unchanged.
进一步地,离线训练阶段,采用预先获取的训练数据集对BP神经网络模型进行训练,其中所采用的训练数据为目标PA的PA输入时序信号样本和对应的PA输出时序信号样本。具体来说,将各PA输出时序信号样本输入BP神经网络模型,输出对应的预测的PA输入时序信号;基于各预测的PA输入时序信号和对应的PA输入时序信号样本,获取对应的均方误差;基于各均方误差,利用反向传播算法对BP神经网络模型的网络参数进行更新,进而得到预训练的BP神经网络模型。Further, in the offline training stage, the BP neural network model is trained by using the pre-acquired training data set, wherein the training data used are the PA input time series signal samples of the target PA and the corresponding PA output time series signal samples. Specifically, each PA output timing signal sample is input into the BP neural network model, and the corresponding predicted PA input timing signal is output; based on each predicted PA input timing signal and the corresponding PA input timing signal sample, the corresponding mean square error is obtained. ; Based on each mean square error, the network parameters of the BP neural network model are updated by the back propagation algorithm, and then the pre-trained BP neural network model is obtained.
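The offline training step above — minimize the mean square error between the predicted and actual PA input by gradient descent with back-propagation — can be illustrated on a linear stand-in model (the BP network itself is trained the same way; the data, learning rate and iteration count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))        # stand-in for PA output timing signal samples
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w                       # stand-in for PA input timing signal samples

w = np.zeros(3)
for _ in range(500):
    err = X @ w - y                  # y_output - y
    grad = 2.0 * X.T @ err / len(X)  # gradient of the mean square error
    w -= 0.1 * grad                  # gradient-descent parameter update
mse = float(np.mean((X @ w - y) ** 2))
```

In the real model, `w` corresponds to all weights and biases of the BP network and the gradient is obtained by back-propagation rather than in closed form.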
在本申请的一种可选实施例中,该方法还包括:In an optional embodiment of the present application, the method further includes:
在获取到第四预设数量的待处理时序信号对应的预失真的时序信号后,将各预失真的时序信号分别输入目标PA,得到对应的至少一个PA输出时序信号;After acquiring the pre-distorted timing signals corresponding to the fourth preset number of timing signals to be processed, input each pre-distorted timing signal into the target PA respectively to obtain at least one corresponding PA output timing signal;
基于各预失真的时序信号和对应的PA输出时序信号,再次对BP神经网络模型的网络参数进行更新。Based on each predistorted timing signal and the corresponding PA output timing signal, the network parameters of the BP neural network model are updated again.
Specifically, the above process is the second-stage training process. Each PA output signal is input into the BP neural network model, which outputs the corresponding predicted PA input timing signal; the corresponding mean square error is obtained from each predicted PA input timing signal and the corresponding predistorted timing signal; the network parameters of the BP neural network model are then updated based on these mean square errors. It will be understood that the acquired PA output signals may also be called the feedback data in the figure, i.e., a plurality of feedback data are used to update the network parameters of the BP neural network model, where the update frequency can be set according to actual requirements, e.g., the network parameters may be updated at preset time intervals. It will be understood that the fourth preset number can also be set according to actual requirements.
It should be noted that the objective function used in both training stages can be the minimization of the mean square error, as shown in formula (7):
$$\min \frac{1}{N}\sum_{n=1}^{N}\big(y(n)-y_{\mathrm{output}}(n)\big)^{2} \tag{7}$$
Here y denotes the IQ data before passing through the power amplifier (i.e., the PA input timing signal sample, or the predistorted timing signal), which is the expected value sought by the inverse behavioral model of the power amplifier, and y_output is the output of the BP neural network model (the predicted PA input timing signal). Since a gradient-descent algorithm is used to solve the BP network, the cut-off condition for one training run of a usable model is: the number of iterations exceeds the preset maximum, or the objective function no longer decreases within a specified number of iterations.
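The two cut-off conditions — a maximum iteration count, or no decrease of the objective within a specified number of iterations — can be sketched as follows (the function and parameter names are illustrative assumptions):

```python
def should_stop(losses, max_iters, patience):
    # Stop when the iteration count reaches max_iters, or when the
    # objective has not improved on its earlier best within the last
    # `patience` iterations.
    if len(losses) >= max_iters:
        return True
    if len(losses) > patience:
        return min(losses[-patience:]) >= min(losses[:-patience])
    return False

print(should_stop([5.0, 4.0, 3.0, 2.0], max_iters=100, patience=3))            # False
print(should_stop([5.0, 4.0, 3.0, 3.0, 3.0, 3.0], max_iters=100, patience=3))  # True
```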
图8为本申请实施例提供的一种预失真处理装置的结构框图,如图8所示,该装置800可以包括:时序信号获取模块801、归一化模块802和预失真处理模块803,其中:FIG. 8 is a structural block diagram of a predistortion processing apparatus provided by an embodiment of the present application. As shown in FIG. 8 , the apparatus 800 may include: a timing signal acquisition module 801, a normalization module 802, and a predistortion processing module 803, wherein :
时序信号获取模块801用于获取包含待处理时序信号的第一连续时序信号;The timing signal acquisition module 801 is configured to acquire the first continuous timing signal including the timing signal to be processed;
归一化模块802用于对第一连续时序信号进行归一化处理,得到对应的第二连续时序信号;The normalization module 802 is configured to perform normalization processing on the first continuous time series signal to obtain a corresponding second continuous time series signal;
预失真处理模块803用于将第二连续时序信号输入预训练的反向传播BP神经网络模型,输出待处理时序信号对应的预失真的时序信号,其中,BP神经网络模型为对待处理时序信号进行功率放大所采用的目标功率放大器件PA的逆行为模型。The predistortion processing module 803 is used to input the second continuous time series signal into the pretrained back-propagation BP neural network model, and output the predistorted time series signal corresponding to the time series signal to be processed, wherein the BP neural network model is to perform the processing of the time series signal to be processed. Inverse behavioral model of the target power amplifier device PA used for power amplification.
In the solution provided by the present application, before the timing signal to be processed is amplified by the target PA, it is predistorted by an inverse behavioral model based on a BP neural network, and the timing signal input to this model is normalized beforehand. Using a BP-neural-network-based inverse behavioral model and normalizing the model's input timing signal better compensates the nonlinearity of the overall system and improves the accuracy of predistortion processing.
在本申请的一种可选实施例中,第一连续时序信号包括待处理时序信号之前的第一预设数量的时序信号、以及在待处理时序信号之后的第二预设数量的时序信号。In an optional embodiment of the present application, the first continuous timing signal includes a first preset number of timing signals before the to-be-processed timing signal, and a second preset number of timing signals after the to-be-processed timing signal.
在本申请的一种可选实施例中,归一化模块具体用于:In an optional embodiment of the present application, the normalization module is specifically used for:
获取训练数据集,并获取训练数据集中时序数据样本的均值和标准差,其中,训练数据集包含所述目标PA的第三预设数量的PA输入时序信号样本和对应的PA输出时序信号样本;Obtain a training data set, and obtain the mean and standard deviation of the time series data samples in the training data set, wherein the training data set includes a third preset number of PA input time series signal samples and corresponding PA output time series signal samples of the target PA;
基于均值和标准差,获取第一连续时序信号中各时序信号对应的归一化后的时序信号;Based on the mean value and the standard deviation, obtain the normalized time series signal corresponding to each time series signal in the first continuous time series signal;
基于各归一化后的时序信号,获取第二连续时序信号。Based on each normalized timing signal, a second continuous timing signal is obtained.
In an optional embodiment of the present application, the predistortion processing module is specifically configured to:
receive, through the input layer, the second continuous timing signal and convert it into an initial feature vector;
obtain, through the hidden layers, a corresponding abstract feature vector based on the initial feature vector;
output, through the output layer, the predistorted timing signal based on the abstract feature vector.
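A forward pass of such a network can be sketched as a plain multilayer perceptron. The layer sizes, activation function, and two-dimensional (I/Q) output below are illustrative assumptions, not values taken from the application.

```python
import numpy as np

rng = np.random.default_rng(1)

class BPForward:
    """Minimal MLP forward pass: input layer -> hidden layers -> output layer.
    Layer sizes here are placeholders."""
    def __init__(self, sizes=(14, 32, 32, 2)):
        self.weights = [rng.normal(scale=0.1, size=(a, b))
                        for a, b in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(b) for b in sizes[1:]]

    def __call__(self, x):
        # Input layer: the normalized window is taken as the initial feature vector.
        h = np.asarray(x, dtype=float)
        # Hidden layers extract the abstract feature vector.
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            h = np.tanh(h @ W + b)
        # Output layer: the predistorted sample, e.g. as an (I, Q) pair.
        W, b = self.weights[-1], self.biases[-1]
        return h @ W + b

model = BPForward()
out = model(rng.normal(size=14))
```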
In an optional embodiment of the present application, the apparatus further includes a first training module configured to:
train the BP neural network model with the training data set to obtain the pre-trained BP neural network model.
In an optional embodiment of the present application, the first training module is specifically configured to:
input each PA output timing signal sample into the BP neural network model and output a corresponding predicted PA input timing signal;
obtain a corresponding mean square error based on each predicted PA input timing signal and the corresponding PA input timing signal sample;
update the network parameters of the BP neural network model with a back-propagation algorithm based on the mean square errors, thereby obtaining the pre-trained BP neural network model.
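The pre-training loop above (predict PA inputs from PA outputs, take the mean square error, update by back-propagation) can be sketched with a single hidden layer and hand-written gradients. The data, dimensions, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training pairs: PA output samples -> corresponding PA input samples
# (the inverse mapping the model must learn).
X = rng.normal(size=(64, 4))   # PA output timing signal samples
Y = rng.normal(size=(64, 2))   # corresponding PA input timing signal samples

# One hidden layer for brevity.
W1 = rng.normal(scale=0.1, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 2)); b2 = np.zeros(2)
lr = 0.05

def mse(pred, target):
    return np.mean((pred - target) ** 2)

losses = []
for _ in range(200):
    # Forward: predicted PA input timing signal from the PA output sample.
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    losses.append(mse(P, Y))
    # Backward: propagate the MSE gradient and update the network parameters.
    dP = 2 * (P - Y) / len(X)
    dW2 = H.T @ dP; db2 = dP.sum(0)
    dH = dP @ W2.T * (1 - H ** 2)   # tanh derivative
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```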
In an optional embodiment of the present application, the apparatus further includes a second training module configured to:
after obtaining the predistorted timing signals corresponding to a fourth preset number of timing signals to be processed, input each predistorted timing signal into the target PA to obtain at least one corresponding PA output timing signal;
update the network parameters of the BP neural network model again based on each predistorted timing signal and the corresponding PA output timing signal.
In an optional embodiment of the present application, the second training module is specifically configured to:
input each PA output signal into the BP neural network model and output a corresponding predicted PA input timing signal;
obtain a corresponding mean square error based on each predicted PA input timing signal and the corresponding predistorted timing signal;
update the network parameters of the BP neural network model based on the mean square errors.
Referring now to FIG. 9, which shows a schematic structural diagram of an electronic device 900 (for example, a terminal device or a server that executes the method shown in FIG. 2) suitable for implementing an embodiment of the present application. Electronic devices in the embodiments of the present application may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (for example, vehicle navigation terminals), and wearable devices, as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 9 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
The electronic device includes a memory and a processor, where the memory stores a program for executing the methods described in the above method embodiments, and the processor is configured to execute the program stored in the memory. The processor here may be referred to as the processing device 901 described below, and the memory may include at least one of a read-only memory (ROM) 902, a random access memory (RAM) 903, and a storage device 908, as follows:
As shown in FIG. 9, the electronic device 900 may include a processing device (for example, a central processing unit, a graphics processor, etc.) 901, which can perform various appropriate actions and processes according to a program stored in the read-only memory (ROM) 902 or a program loaded from the storage device 908 into the random access memory (RAM) 903. The RAM 903 also stores various programs and data required for the operation of the electronic device 900. The processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Typically, the following devices can be connected to the I/O interface 905: an input device 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; an output device 907 including, for example, a liquid crystal display (LCD), speaker, or vibrator; a storage device 908 including, for example, a magnetic tape or hard disk; and a communication device 909. The communication device 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 9 shows an electronic device with various devices, it should be understood that not all of the illustrated devices need to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 909, installed from the storage device 908, or installed from the ROM 902. When the computer program is executed by the processing device 901, the above-described functions defined in the methods of the embodiments of the present application are performed.
It should be noted that the computer-readable storage medium described above in the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to, an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to a wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or it may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
obtain a first continuous timing signal that contains a timing signal to be processed; normalize the first continuous timing signal to obtain a corresponding second continuous timing signal; and input the second continuous timing signal into a pre-trained back-propagation (BP) neural network model and output a predistorted timing signal corresponding to the timing signal to be processed, where the BP neural network model is an inverse behavior model of the target power amplifier (PA) used to power-amplify the timing signal to be processed.
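The three steps above (window, normalize, feed the inverse model) can be strung together as a small pipeline sketch. The window lengths are assumptions, and the trained BP network is replaced here by an identity-style placeholder.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative pipeline: window -> normalize -> inverse model -> predistorted sample.
signal = rng.normal(size=100)            # timing signals to be processed
mean, std = signal.mean(), signal.std()  # in practice: training-set statistics

def predistort(x, k, before=3, after=3):
    """Predistort sample k of x using its first continuous window.
    The 'model' below is a placeholder for the trained BP network."""
    window = x[k - before:k + after + 1]     # first continuous timing signal
    normalized = (window - mean) / std       # second continuous timing signal
    model = lambda v: v[before]              # placeholder inverse model
    return model(normalized)

y = predistort(signal, 10)
```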
Computer program code for performing the operations of the present application may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules or units involved in the embodiments of the present application may be implemented in software or in hardware. The name of a module or unit does not, in some cases, constitute a limitation on the unit itself; for example, the timing signal obtaining module may also be described as "a module for obtaining a timing signal".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present application, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
In the apparatus provided by the embodiments of the present application, at least one of the modules may be implemented through an AI model. The functions associated with AI may be performed by a non-volatile memory, a volatile memory, and a processor.
The processor may include one or more processors. The one or more processors may be general-purpose processors such as a central processing unit (CPU) or an application processor (AP), graphics-only processing units such as a graphics processing unit (GPU) or a vision processing unit (VPU), and/or AI-dedicated processors such as a neural processing unit (NPU).
The one or more processors control the processing of input data according to predefined operating rules or an artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory. The predefined operating rules or the AI model are provided through training or learning.
Here, "provided through learning" means that a predefined operating rule or an AI model with desired characteristics is obtained by applying a learning algorithm to a plurality of pieces of learning data. The learning may be performed in the apparatus itself in which the AI according to an embodiment is executed, and/or may be implemented through a separate server/system.
The AI model may contain a plurality of neural network layers. Each layer has a plurality of weight values, and the computation of one layer is performed using the computation result of the previous layer and the plurality of weights of the current layer. Examples of neural networks include, but are not limited to, convolutional neural networks (CNN), deep neural networks (DNN), recurrent neural networks (RNN), restricted Boltzmann machines (RBM), deep belief networks (DBN), bidirectional recurrent deep neural networks (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
A learning algorithm is a method of training a predetermined target device (for example, a robot) with a plurality of pieces of learning data so as to cause, allow, or control the target device to make a determination or prediction. Examples of such learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific method implemented when the computer-readable medium described above is executed by the electronic device, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
The technical solutions provided in the embodiments of the present application are applicable to various systems, especially 5G systems. For example, applicable systems include the Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), General Packet Radio Service (GPRS), Long Term Evolution (LTE), LTE Frequency Division Duplex (FDD), LTE Time Division Duplex (TDD), Long Term Evolution Advanced (LTE-A), Universal Mobile Telecommunications System (UMTS), Worldwide Interoperability for Microwave Access (WiMAX), and 5G New Radio (NR) systems. These systems all include terminal devices and network devices. The system may also include a core network part, such as an Evolved Packet System (EPS) or a 5G System (5GS).
The terminal device involved in the embodiments of the present application may be a device that provides voice and/or data connectivity to a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem. In different systems, the name of the terminal device may differ; for example, in a 5G system, the terminal device may be called user equipment (UE). A wireless terminal device may communicate with one or more core networks (CN) via a radio access network (RAN). The wireless terminal device may be a mobile terminal device, such as a mobile phone (also called a "cellular" phone) or a computer with a mobile terminal device, for example, a portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile apparatus that exchanges voice and/or data with the radio access network, such as a Personal Communication Service (PCS) phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, or a Personal Digital Assistant (PDA). A wireless terminal device may also be referred to as a system, a subscriber unit, a subscriber station, a mobile station, a mobile, a remote station, an access point, a remote terminal, an access terminal, a user terminal, a user agent, or a user device, which is not limited in the embodiments of the present application.
It should be understood that although the steps in the flowcharts of the figures are shown sequentially in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts of the figures may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The above are only some of the embodiments of the present application. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present application, and these improvements and refinements should also be regarded as falling within the protection scope of the present application.

Claims (11)

  1. A predistortion processing method, the method comprising:
    obtaining a first continuous timing signal that contains a timing signal to be processed;
    normalizing the first continuous timing signal to obtain a corresponding second continuous timing signal;
    inputting the second continuous timing signal into a pre-trained back-propagation (BP) neural network model and outputting a predistorted timing signal corresponding to the timing signal to be processed, wherein the BP neural network model is an inverse behavior model of a target power amplifier (PA) used to power-amplify the timing signal to be processed.
  2. The method according to claim 1, wherein the first continuous timing signal comprises a first preset number of timing signals before the timing signal to be processed and a second preset number of timing signals after it.
  3. The method according to claim 1, wherein normalizing the first continuous timing signal to obtain the corresponding second continuous timing signal comprises:
    obtaining a training data set, and obtaining the mean and standard deviation of the time-series data samples in the training data set, wherein the training data set contains a third preset number of PA input timing signal samples of the target PA and the corresponding PA output timing signal samples;
    obtaining, based on the mean and the standard deviation, a normalized timing signal corresponding to each timing signal in the first continuous timing signal;
    obtaining the second continuous timing signal based on the normalized timing signals.
  4. The method according to claim 1, wherein the BP neural network comprises an input layer, a third preset number of hidden layers, and an output layer, and inputting the second continuous timing signal into the pre-trained back-propagation BP neural network model and outputting the predistorted timing signal corresponding to the timing signal to be processed comprises:
    receiving, through the input layer, the second continuous timing signal and converting it into an initial feature vector;
    obtaining, through the hidden layers, a corresponding abstract feature vector based on the initial feature vector;
    outputting, through the output layer, the predistorted timing signal based on the abstract feature vector.
  5. The method according to claim 3, wherein the BP neural network model is trained as follows:
    training the BP neural network model with the training data set to obtain the pre-trained BP neural network model.
  6. The method according to claim 5, wherein training the BP neural network model with the training data set to obtain the pre-trained BP neural network model comprises:
    inputting each PA output timing signal sample into the BP neural network model and outputting a corresponding predicted PA input timing signal;
    obtaining a corresponding mean square error based on each predicted PA input timing signal and the corresponding PA input timing signal sample;
    updating the network parameters of the BP neural network model with a back-propagation algorithm based on the mean square errors, thereby obtaining the pre-trained BP neural network model.
  7. The method according to claim 1, further comprising:
    after obtaining the predistorted timing signals corresponding to a fourth preset number of timing signals to be processed, inputting each predistorted timing signal into the target PA to obtain at least one corresponding PA output timing signal;
    updating the network parameters of the BP neural network model again based on each predistorted timing signal and the corresponding PA output timing signal.
  8. The method according to claim 7, wherein updating the network parameters of the BP neural network again based on each predistorted timing signal and the corresponding PA output timing signal comprises:
    inputting each PA output signal into the BP neural network model and outputting a corresponding predicted PA input timing signal;
    obtaining a corresponding mean square error based on each predicted PA input timing signal and the corresponding predistorted timing signal;
    updating the network parameters of the BP neural network model based on the mean square errors.
  9. A predistortion processing apparatus, the apparatus comprising:
    a timing signal obtaining module configured to obtain a first continuous timing signal that contains a timing signal to be processed;
    a normalization module configured to normalize the first continuous timing signal to obtain a corresponding second continuous timing signal;
    a predistortion processing module configured to input the second continuous timing signal into a pre-trained back-propagation (BP) neural network model and output a predistorted timing signal corresponding to the timing signal to be processed, wherein the BP neural network model is an inverse behavior model of a target power amplifier (PA) used to power-amplify the timing signal to be processed.
  10. An electronic device, comprising a memory and a processor;
    wherein the memory stores a computer program; and
    the processor is configured to execute the computer program to implement the method according to any one of claims 1 to 8.
  11. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 8.
PCT/CN2022/071120 2021-02-07 2022-01-10 Pre-distortion processing method and apparatus WO2022166534A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110169623.1A CN114911837A (en) 2021-02-07 2021-02-07 Pre-distortion processing method and device
CN202110169623.1 2021-02-07

Publications (1)

Publication Number Publication Date
WO2022166534A1 true WO2022166534A1 (en) 2022-08-11

Family

ID=82740852

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/071120 WO2022166534A1 (en) 2021-02-07 2022-01-10 Pre-distortion processing method and apparatus

Country Status (2)

Country Link
CN (1) CN114911837A (en)
WO (1) WO2022166534A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117354058A (en) * 2023-12-04 2024-01-05 武汉安域信息安全技术有限公司 Industrial control network APT attack detection system and method based on time sequence prediction

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115856873B (en) * 2022-11-15 2023-11-07 大连海事大学 Bank-based AIS signal credibility judging model, method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080130788A1 (en) * 2006-12-01 2008-06-05 Texas Instruments Incorporated System and method for computing parameters for a digital predistorter
CN101320960A (en) * 2008-07-18 2008-12-10 东南大学 Power amplifier predistortion method of Hammerstein model based on fuzzy neural network
CN102082751A (en) * 2009-11-27 2011-06-01 电子科技大学 Neural network pre-distortion method based on improved MLBP (Levenberg-Marquardt back propagation) algorithm
CN111245375A (en) * 2020-01-19 2020-06-05 西安空间无线电技术研究所 Power amplifier digital predistortion method of complex value full-connection recurrent neural network model
CN111490737A (en) * 2019-01-28 2020-08-04 中国移动通信有限公司研究院 Nonlinear compensation method and device for power amplifier


Also Published As

Publication number Publication date
CN114911837A (en) 2022-08-16

Similar Documents

Publication Publication Date Title
WO2022166534A1 (en) Pre-distortion processing method and apparatus
CN103685111B (en) Calculating method of digital pre-distortion parameters and pre-distortion system
US9112649B2 (en) Method and apparatus for predicting signal characteristics for a nonlinear power amplifier
WO2023273985A1 (en) Method and apparatus for training speech recognition model and device
US20220198315A1 (en) Method for denoising quantum device, electronic device, and computer-readable medium
JP2019145010A (en) Computer, calculation program, recording medium, and calculation method
Prakash et al. IoT device friendly and communication-efficient federated learning via joint model pruning and quantization
WO2022121180A1 (en) Model training method and apparatus, voice conversion method, device, and storage medium
Hegade et al. Digitized adiabatic quantum factorization
WO2019220755A1 (en) Information processing device and information processing method
EP4213074A1 (en) Analog hardware implementation of activation functions
CN114020950B (en) Training method, device, equipment and storage medium for image retrieval model
WO2022033462A9 (en) Method and apparatus for generating prediction information, and electronic device and medium
Jung et al. A two-step approach for DLA-based digital predistortion using an integrated neural network
CN110046670B (en) Feature vector dimension reduction method and device
CN111767993A (en) INT8 quantization method, system, device and storage medium for convolutional neural network
WO2023011397A1 (en) Method for generating acoustic features, training speech models and speech recognition, and device
CN113780534A (en) Network model compression method, image generation method, device, equipment and medium
TWI763975B (en) System and method for reducing computational complexity of artificial neural network
Yang et al. An am-lstm based behavioral model of nonlinear power amplifiers
CN111709366A (en) Method, apparatus, electronic device, and medium for generating classification information
WO2019085378A1 (en) Hardware implementation device and method for high-speed full-connection calculation
CN116894163B (en) Charging and discharging facility load prediction information generation method and device based on information security
CN113052323B (en) Model training method and device based on federal learning and electronic equipment
WO2024060727A1 (en) Method and apparatus for training neural network model, and device and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22748813

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22748813

Country of ref document: EP

Kind code of ref document: A1