WO2023284731A1 - Parameter setting method and apparatus, and electronic device and storage medium - Google Patents

Parameter setting method and apparatus, and electronic device and storage medium

Info

Publication number
WO2023284731A1
Authority
WO
WIPO (PCT)
Prior art keywords
performance
learning rate
communication processing
processing model
metric information
Prior art date
Application number
PCT/CN2022/105171
Other languages
French (fr)
Chinese (zh)
Inventor
杨振
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2023284731A1 publication Critical patent/WO2023284731A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 17/00 Monitoring; Testing
    • H04B 17/30 Monitoring; Testing of propagation channels
    • H04B 17/373 Predicting channel quality or other radio frequency [RF] parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 17/00 Monitoring; Testing
    • H04B 17/30 Monitoring; Testing of propagation channels
    • H04B 17/391 Modelling the propagation channel
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/06 Testing, supervising or monitoring using simulated traffic

Definitions

  • the present application relates to the technical field of wireless communication, for example, to a parameter setting method, device, electronic equipment and storage medium.
  • In the AI field, the training of neural networks often depends on how the learning rate is set. For example, if the learning rate is set too high, the performance metrics of the neural network oscillate back and forth and fail to converge, and the neural network generated by training leads to a decline in communication quality; if the learning rate is set too low, the performance metrics of the neural network converge slowly, and the longer training process leads to increased communication delay. How to set the learning rate therefore becomes an important factor in improving communication efficiency and communication quality.
  • an adaptive learning rate setting method is urgently needed to reduce the training time of the neural network in the communication network, reduce the communication delay, and improve the communication quality.
  • Embodiments of the present application provide a parameter setting method, device, electronic equipment, and storage medium, so as to realize fast training of a communication processing model, reduce communication delay, and improve system communication quality.
  • the embodiment of the present application provides a parameter setting method, the method includes the following steps:
  • the embodiment of the present application also provides a parameter setting device, which includes:
  • the current parameter module is configured to determine the current performance measurement information of the communication processing model
  • a training trend module configured to determine a performance change trend according to the performance measurement information and historical performance measurement information
  • a parameter adjustment module configured to adjust the learning rate of the communication processing model based on the performance change trend, and retrain the communication processing model according to the learning rate.
  • the embodiment of the present application also provides an electronic device, the electronic device includes:
  • one or more processors;
  • memory configured to store one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors are made to implement the parameter setting method as described in any one of the embodiments of the present application.
  • the embodiment of the present application also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the parameter setting method as described in any one of the embodiments of the present application is implemented.
  • In the embodiment of the present application, the performance metric information of the communication processing model is obtained and compared with the historical performance metric information to determine a performance change trend, the learning rate of the communication processing model is adjusted according to the performance change trend, and the communication processing model is retrained according to the learning rate. This realizes dynamic training of the communication processing model, improves the accuracy of the learning rate of the communication processing model, and can improve the communication quality of the communication system; adjusting the learning rate according to the performance change trend can reduce the training time of the model, thereby reducing the waiting delay of the communication system and enhancing communication efficiency.
  • FIG. 1 is a flow chart of a parameter setting method provided in an embodiment of the present application
  • Fig. 2 is a flow chart of another parameter setting method provided by the embodiment of the present application.
  • Fig. 3 is a flowchart of another parameter setting method provided by the embodiment of the present application.
  • FIG. 4 is an example diagram of a parameter setting method provided by an embodiment of the present application.
  • Fig. 5 is a verification example diagram of a parameter setting method provided by the embodiment of the present application.
  • FIG. 6 is an example diagram of a learning rate change trend provided by an embodiment of the present application.
  • Fig. 7 is a comparison diagram of a neural network convergence speed provided by the embodiment of the present application.
  • Fig. 8 is a training example diagram of an MP model under a fixed learning rate provided by an embodiment of the present application.
  • Fig. 9 is a training example diagram of an MP model under a parameter setting method provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a communication processing model provided by an embodiment of the present application.
  • Fig. 11 is an example diagram of the training effect of a communication processing model provided by the embodiment of the present application.
  • FIG. 12 is a comparison diagram of training effects of a communication processing model provided in an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of another communication processing model provided by an embodiment of the present application.
  • Fig. 14 is an example diagram of a training effect of an LMS model provided by an embodiment of the present application.
  • Fig. 15 is an example diagram of the training effect of another LMS model provided by the embodiment of the present application.
  • Fig. 16 is a schematic structural diagram of a parameter setting device provided by an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Fig. 1 is a flow chart of a parameter setting method provided by the embodiment of the present application.
  • the embodiment of the present application can be applied to the situation of deep learning model training in the communication system.
  • The method can be executed by a parameter setting device, which can be implemented in software and/or hardware and is generally integrated in a centralized device or a communication terminal. Referring to FIG. 1, the method provided by the embodiment of the present application includes the following steps:
  • Step 110 determine the current performance measurement information of the communication processing model.
  • the communication processing model can be a deep learning model used to process communication parameters in the wireless communication system.
  • the communication processing model needs to be pre-trained and generated using massive data in the wireless communication system.
  • the accuracy of the communication processing model directly affects the communication quality of the wireless communication system.
  • the communication processing model may include a power amplifier behavior model for signal processing, a digital predistortion model for improving the linearity of the power amplifier, and an adaptive equalization model for estimating the transmitted signal, etc.
  • the performance measurement information may reflect the accuracy and processing efficiency of the communication processing model, for example, it may include the error between the output result of the communication processing model and the verification set data, and the processing speed of the communication processing model.
  • In the embodiment of the present application, during the training of the communication processing model, the performance metric information of the current state of the model may be obtained. The acquisition may include: inputting verification data into the communication processing model to obtain an output result, comparing the output result with the verification data, and using the comparison result as the current performance metric information; or starting a timer when the verification data is input into the communication processing model, stopping the timer when the model generates the output result, and using the elapsed time as the performance metric information of the communication processing model.
  • Step 120 determine the performance change trend according to the performance measurement information and the historical performance measurement information.
  • the historical performance measurement information may be the performance measurement information determined at each verification during the training of the communication processing model; it may be all the performance measurement information determined in the past, or the best performance value among all the performance measurement information determined in the past.
  • the performance change trend may be the comparison result between the performance measurement information and the historical performance measurement information, and may include performance improvement, performance degradation, or unchanged performance.
  • It can be understood that performance improvement means the performance measurement information is better than the historical performance measurement information, performance degradation means the performance measurement information is inferior to the historical performance measurement information, and unchanged performance means the performance measurement information is equal to the historical performance measurement information.
  • the performance measurement information may be compared with historical performance measurement information, and the comparison result may be used as a performance change trend.
  • a value difference between performance metric information and historical performance metric information may be determined, and a performance change trend may be determined according to the magnitude of the difference.
  • Step 130 adjust the learning rate of the communication processing model based on the performance change trend, and retrain the communication processing model according to the learning rate.
  • the learning rate may be the rate at which data is learned in deep learning, and the learning rate may be a hyperparameter, which is used to determine whether the objective function of the communication processing model can converge to a local minimum and the convergence speed of the objective function.
  • In the embodiment of the present application, the learning rate of the communication processing model can be adjusted according to the performance change trend. For example, when the performance improves, the value of the learning rate can be increased to improve the accuracy of the communication processing model; when the performance degrades, the value of the learning rate can be reduced to improve the training speed of the communication processing model. After adjusting the value of the learning rate, the training data can be used to retrain the communication processing model to obtain new weights for the communication processing model.
  • In the embodiment of the present application, the performance change trend is determined according to the comparison result between the performance metric information and the historical performance metric information, the learning rate of the communication processing model is adjusted according to the performance change trend, and the communication processing model is retrained according to the learning rate, realizing rapid training of the communication processing model. Dynamically adjusting the learning rate through the performance change trend can improve the accuracy of the communication processing model and reduce its training time, thereby reducing the waiting delay of the communication system and enhancing communication efficiency.
  • the performance measurement information includes at least one of mean square error and normalized mean square error.
  • The mean square error (Mean Squared Error, MSE) can refer to the mean value of the square of the difference between the output result of the communication processing model and the verification data.
  • The calculation formula of the mean square error can be as follows: $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(x_i - y_i)^2$, where n is the number of verification data used to verify the performance of the communication processing model, x_i is the output result, and y_i is the verification data.
  • The normalized mean square error (Normalized Mean Squared Error, NMSE) can be the mean square error transformed into a dimensionless scalar. The calculation formula of the normalized mean square error can be as follows: $\mathrm{NMSE} = \sum_{i=1}^{n}(x_i - y_i)^2 \big/ \sum_{i=1}^{n} y_i^2$, where n is the number of verification data used to verify the performance of the communication processing model, x_i is the output result, and y_i is the verification data.
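  • As an illustration of these two metrics, a minimal Python/NumPy sketch is given below; the function names and the use of squared magnitudes (so that the same code also covers complex-valued verification data) are choices made here, not part of the embodiment.

```python
import numpy as np

def mse(outputs, targets):
    """Mean square error between the model outputs x_i and the verification data y_i."""
    outputs, targets = np.asarray(outputs), np.asarray(targets)
    return np.mean(np.abs(outputs - targets) ** 2)

def nmse(outputs, targets):
    """Normalized mean square error: error power divided by the power of the verification data."""
    outputs, targets = np.asarray(outputs), np.asarray(targets)
    return np.sum(np.abs(outputs - targets) ** 2) / np.sum(np.abs(targets) ** 2)

# Example: evaluate a slightly noisy prediction of a sine wave.
y_true = np.sin(np.arange(1024) * 0.01)
y_pred = y_true + 0.01 * np.random.randn(y_true.size)
print(mse(y_pred, y_true), nmse(y_pred, y_true))
```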
  • the historical performance metric information includes historical optimal performance metric information and previous training performance metric information.
  • the historical optimal performance metric information may be the optimal information among the previously determined performance metric information, and the optimum may refer to, for example, the highest accuracy rate of the communication processing model or the fastest processing speed of the communication processing model.
  • the performance metric information of the previous training may be the performance metric information determined in the previous one or several training rounds. For example, each time the performance metric information is determined, it can be compared with the historical optimal performance metric information; if the currently determined performance metric information is better than the historical optimal performance metric information, the historical optimal performance metric information is replaced with the currently determined performance metric information. In addition, the previous training performance metric information is replaced with the currently determined performance metric information.
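  • A minimal sketch of this bookkeeping in Python is shown below, assuming the performance metric is an error value such as MSE/NMSE (a smaller value means better performance); the function name is chosen here for illustration only.

```python
def update_history(metric, best_metric, prev_metric):
    """Update the historical optimum and the previous-training metric after each evaluation."""
    if metric < best_metric:      # currently determined metric is better than the historical optimum
        best_metric = metric
    prev_metric = metric          # the previous-training metric always becomes the current one
    return best_metric, prev_metric
```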
  • Fig. 2 is a flow chart of another parameter setting method provided by the embodiment of the present application.
  • the embodiment of the present application is a refinement based on the above-mentioned embodiment of the application. Referring to Fig. 2, the method provided by the embodiment of the present application includes the following steps:
  • Step 210 determine the current performance measurement information of the communication processing model.
  • Step 220 Determine the first comparison result and the second comparison result between the performance measurement information and the historical optimal performance measurement information and the previous training performance measurement information respectively; use the first comparison result and the second comparison result as the performance change trend.
  • the performance metric information can be compared with the historical optimal performance metric information and the previous training performance metric information respectively, and the comparison result between the performance metric information and the historical optimal performance metric information can be recorded as the first comparison result , and the comparison result between the performance measurement information and the previous training performance measurement information is recorded as the second comparison result, and the obtained first comparison result and the second comparison result can be used as a performance change trend reflecting the training state of the communication processing model.
  • Step 230 Decrease the value of the learning rate when the first comparison result is that the performance metric information is worse than the historical optimal performance metric information, and the second comparison result is that the performance metric information is worse than the previous training performance metric information.
  • the first comparison result is that the performance metric information is worse than the historical optimal performance metric information
  • the second comparison result is that the performance metric information is worse than the previous training performance metric information
  • Step 240 if the first comparison result is that the performance metric information is better than the historical optimal performance metric information, increase the value of the learning rate.
  • In this case, the performance of the current communication processing model is the best value in history, and the value of the learning rate can be continuously increased to further improve the performance of the communication processing model.
  • In other cases, the learning rate may not be adjusted.
  • Step 250 retrain the communication processing model according to the learning rate.
  • the training data may be used to retrain the communication processing model to obtain the weight of the new communication processing model.
  • In the embodiment of the present application, the performance metric information is compared with the historical optimal performance metric information and the previous training performance metric information to obtain the first comparison result and the second comparison result, which serve as the performance change trend. When the first comparison result is that the performance metric information is worse than the historical optimal performance metric information and the second comparison result is that the performance metric information is worse than the previous training performance metric information, the learning rate is reduced; when the first comparison result is that the performance metric information is better than the historical optimal performance metric information, the learning rate is increased.
  • The communication processing model is then retrained according to the adjusted learning rate, realizing rapid training of the model. Dynamically adjusting the learning rate through the performance change trend can improve the accuracy of the communication processing model and reduce its training time, thereby reducing the waiting delay of the communication system and enhancing communication efficiency.
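  • A minimal sketch of the comparison rule of steps 230 and 240 is shown below, assuming the performance metric is an error value such as MSE/NMSE (a larger value means worse performance); the multiplicative factors 1.2 and 0.5 are placeholder values chosen here, not values taken from the embodiment.

```python
def adjust_learning_rate(metric, best_metric, prev_metric, lr,
                         increase_factor=1.2, decrease_factor=0.5):
    """Return the new learning rate based on the first and second comparison results."""
    if metric < best_metric:
        # First comparison result: better than the historical optimum -> increase the learning rate.
        return lr * increase_factor
    if metric > best_metric and metric > prev_metric:
        # Worse than both the historical optimum and the previous training round -> decrease it.
        return lr * decrease_factor
    # Remaining cases: leave the learning rate unchanged.
    return lr
```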
  • Fig. 3 is a flowchart of another parameter setting method provided by the embodiment of the present application.
  • the embodiment of the present application is a refinement based on the above-mentioned embodiment of the application. Referring to Fig. 3, the method provided by the embodiment of the present application includes the following steps:
  • Step 310 initialize the historical performance measurement information and the learning rate of the communication processing model.
  • In the embodiment of the present application, the historical performance metric information and the learning rate can be initialized by setting initial values for them. With the method of this application, the accuracy requirement on these initial values is not high and the allowed setting range is large, which can reduce the influence of the learning rate setting range on the communication processing model.
  • Step 320 determine the current performance measurement information of the communication processing model.
  • Step 330 determine the performance change trend according to the performance measurement information and the historical performance measurement information.
  • Step 340 Determine the adjustment factor corresponding to the performance change trend, and use the adjustment factor to update the value of the learning rate.
  • the adjustment factor may be a weighting coefficient for adjusting the value of the learning rate.
  • different adjustment factors can be set in advance for different performance change trends.
  • In the embodiment of the present application, different adjustment factors can be selected for different performance change trends, and the selected adjustment factor can be used to adjust the value of the learning rate.
  • the product of the adjustment factor and the learning rate may be used as the learning rate used for training the communication processing model.
  • Step 350 retrain the communication processing model according to the learning rate.
  • Step 360 Determine the fitting relationship between the input voltage and the output voltage of the communication device according to the trained communication processing model.
  • the communication processing model can be a communication power amplifier behavior model that processes the input voltage and output voltage of the device.
  • This model can be used to fit the relationship between the input voltage and the output voltage of the communication device, so as to realize accurate control of the voltage of the communication device and reduce the power consumption of the device.
  • In the embodiment of the present application, the current performance metric information of the communication processing model is determined during the training process, the performance change trend is determined according to the performance metric information and the historical performance metric information, the adjustment factor corresponding to the performance change trend is obtained, and the learning rate is updated according to the adjustment factor.
  • The communication processing model is retrained using the updated learning rate, and after training is completed the model is used to fit the relationship between the input voltage and the output voltage of the communication device, realizing accurate control of the device voltage, reducing voltage fluctuations, improving communication stability, and enhancing communication quality.
  • the communication processing model includes: at least one of a communication power amplifier behavior model, a digital predistortion model, and an adaptive equalization model.
  • FIG. 4 is an example diagram of a parameter setting method provided in the embodiment of the present application.
  • the neural network parameter is c
  • the performance measurement is M
  • the learning rate is η, where η is a real number greater than 0.
  • the parameters when the nth training round is completed are recorded as c_n, the performance metric is recorded as M_n, and the learning rate is recorded as η_n.
  • α_1, α_2 and α_3 are learning rate adjustment factors, where α_1 > 1 and α_3 < 1;
  • α_1, α_2 and α_3 can be fixed values, or values that vary with the training process
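  • Collecting the notation above, the per-round adjustment can be written compactly as follows; this is a reconstruction from the surrounding description, in which the bounds η_min and η_max (used later in the examples) and the assignment of α_2 to the remaining cases are assumptions made here for clarity.

$$
\eta_{n+1}=\min\bigl(\eta_{\max},\,\max(\eta_{\min},\,\alpha\,\eta_{n})\bigr),\qquad
\alpha=\begin{cases}
\alpha_{1}, & M_{n}\ \text{is better than the historical optimum},\\
\alpha_{3}, & M_{n}\ \text{is worse than both the historical optimum and}\ M_{n-1},\\
\alpha_{2}, & \text{otherwise}.
\end{cases}
$$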
  • The embodiment of the present application uses the parameter setting method described above, with a neural network that fits a sinusoidal function to improve the fitting accuracy, as an example to test the effectiveness of the method of the present application.
  • the input of the sinusoidal signal is x
  • x is 1024 numbers starting from 0 with an interval of 0.01 between two adjacent numbers.
  • a neural network model is established, the number of inputs of the model is 1, the number of outputs is 1, it contains 1 hidden layer, and the number of neurons in the hidden layer is 4.
  • the hidden layer activation function is a tanh function, and the output layer uses a linear activation function.
  • the parameters of the neural network are initialized, and the initialization parameters are denoted as c 0 .
  • the initial value of the learning rate is set to 0.001, the maximum value of the learning rate is set to 0.1, and the minimum value is set to 0.0001.
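  • A sketch of this sine-fitting experiment in PyTorch is given below. Only the network shape, the input data, and the learning rate bounds follow the description above; the optimizer, the loss function, the number of epochs, and the adjustment factors (1.2 and 0.5) are assumptions made here for illustration.

```python
import torch
import torch.nn as nn

# Input: 1024 samples starting at 0 with a step of 0.01; target: sin(x).
x = torch.arange(1024, dtype=torch.float32).unsqueeze(1) * 0.01
y = torch.sin(x)

# 1 input, one hidden layer of 4 tanh neurons, 1 linear output.
model = nn.Sequential(nn.Linear(1, 4), nn.Tanh(), nn.Linear(4, 1))
loss_fn = nn.MSELoss()

lr, lr_max, lr_min = 1e-3, 0.1, 1e-4
best_metric = prev_metric = float("inf")
optimizer = torch.optim.SGD(model.parameters(), lr=lr)

for epoch in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

    metric = loss.item()                              # current performance metric (MSE)
    if metric < best_metric:                          # better than the historical optimum: increase
        lr = min(lr * 1.2, lr_max)
        best_metric = metric
    elif metric > best_metric and metric > prev_metric:
        lr = max(lr * 0.5, lr_min)                    # worse than optimum and previous round: decrease
    prev_metric = metric
    for group in optimizer.param_groups:              # apply the updated learning rate
        group["lr"] = lr
```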
  • Figure 6 is an example diagram of a learning rate change trend provided by the embodiment of the present application; see Figure 6, which shows the change trend of the learning rate during the entire training process: it increases rapidly at the beginning and then slowly decays until the set minimum learning rate is reached.
  • Fig. 7 is a comparison diagram of neural network convergence speed provided by the embodiment of the present application. Referring to Fig. 7, the learning rate adjustment method of the embodiment of the present application is compared with fixed learning rates on the same neural network, showing the role played by the proposed adaptive learning rate mechanism: convergence is faster than with a fixed learning rate, the final convergence performance is equivalent to that of the optimal fixed learning rate, and the fluctuation is small.
  • In another example, the relationship between the input x and the output y of a power amplifier is fitted, where x and y are both complex numbers.
  • The communication processing model is a memory polynomial (Memory Polynomial, MP) model, which is often used for power amplifier behavior modeling. Its expression is as follows: $z(n) = \sum_{k=1}^{K}\sum_{q=0}^{Q} w_{k,q}\, x(n-q)\,\lvert x(n-q)\rvert^{k-1}$, where z is the output of the model, x is the input, w_{k,q} are the coefficients of the model, Q represents the memory depth, and K represents the order of nonlinearity.
  • the process of using the MP model to model the power amplifier is the process of solving the model coefficients w_{k,q}.
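  • A NumPy sketch of evaluating the memory polynomial written above is given below; the indexing convention (q running from 0 to Q, k from 1 to K) follows the reconstructed expression, and the example coefficients are placeholders chosen here.

```python
import numpy as np

def mp_output(x, w, K, Q):
    """Memory polynomial: z(n) = sum_k sum_q w[k, q] * x(n - q) * |x(n - q)|^(k - 1).

    x: complex input samples, shape (N,); w: complex coefficients, shape (K, Q + 1).
    """
    x = np.asarray(x, dtype=complex)
    z = np.zeros_like(x)
    for q in range(Q + 1):
        xq = np.roll(x, q)       # x(n - q)
        xq[:q] = 0               # clear samples that wrapped around
        for k in range(1, K + 1):
            z += w[k - 1, q] * xq * np.abs(xq) ** (k - 1)
    return z

# Example: a memoryless cubic nonlinearity (K = 3, Q = 0) with a small compression term.
x = np.exp(1j * np.linspace(0, 2 * np.pi, 256))
w = np.zeros((3, 1), dtype=complex)
w[0, 0], w[2, 0] = 1.0, -0.05
z = mp_output(x, w, K=3, Q=0)
```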
  • The least mean square (Least Mean Square, LMS) algorithm is a common method to solve the model coefficients w_{k,q}. This algorithm is similar to the training algorithm of a neural network; it is an iterative solution algorithm based on the gradient descent method. The principle of the general complex-valued LMS algorithm is as follows:
  • Set n ← n+1; if n ≤ N, return to step (4);
  • the parameter setting method in the embodiment of the present application can be combined with the common LMS algorithm, and the steps are as follows:
  • Set n ← n+1; if n ≤ N, return to step (5);
  • Figure 8 is a training example diagram of an MP model under a fixed learning rate provided by the embodiment of the present application.
  • Referring to Figure 8, the common LMS algorithm is used with different fixed learning rates.
  • The figure shows how the NMSE changes as training proceeds. It can be seen that the final effects of the different learning rates differ greatly.
  • When the LMS algorithm combined with the parameter setting method of the embodiment of the present application is used and different initial learning rates are set, the NMSE changes with the increasing number of training iterations as shown in Fig. 9.
  • It can be seen that in this application the setting of the initial value of the learning rate has no effect on the convergence speed, which can reduce the influence of an inaccurately set initial learning rate on the training of the communication processing model.
  • a dynamic adjustment factor may be set to adjust the learning rate, which may include the following steps:
  • Set n ← n+1; if n ≤ N, return to step (5);
  • the adjustment factors α_1 and α_3 may change dynamically along with the training process of the MP model.
  • FIG. 10 is a schematic structural diagram of a communication processing model provided by the embodiment of the present application.
  • The communication processing model shown in FIG. 10 is a real-valued time-delay neural network (Real-Valued Time-Delay Neural Network, RVTDNN) model, which can be used for radio frequency power amplifier modeling.
  • RVTDNN is a real-valued neural network that splits complex signals into their real and imaginary parts, avoiding the problem that a conventional real-valued neural network cannot handle complex-valued data.
  • RVTDNN consists of two parts: the first part is a time-delay structure, used to model the memory characteristics of the power amplifier, and the second part is a conventional multilayer perceptron (Multilayer Perceptron, MLP) network, used to model the nonlinear characteristics of the power amplifier.
  • The input x in the figure is divided into three items: the real part (real), the imaginary part (imag), and the magnitude (abs), and each item includes 4 time-delay units in addition to itself.
  • These data are fed into the subsequent MLP network, which contains a hidden layer consisting of 16 nodes, and the activation function is the tanh function.
  • the parameters of the neural network are initialized, and the initialization parameters are denoted as c 0 .
  • The calculation formula of the NMSE is $\mathrm{NMSE} = \sum_{i=1}^{n}\lvert \hat{y}_i - y_i\rvert^2 \big/ \sum_{i=1}^{n}\lvert y_i\rvert^2$, where $\hat{y}_i$ is the output of the neural network for x_i and y_i is the corresponding verification data.
  • the initial value of the learning rate is set to 0.001, the maximum value of the learning rate is set to 0.1, and the minimum value is set to 0.0001.
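  • A sketch of assembling the RVTDNN input described above is given below, assuming the three items (real, imag, abs), each with the current sample plus 4 delayed samples, are concatenated into a 15-dimensional real input vector; the concatenation order, the two-output layer (real and imaginary parts of the modelled signal), and the use of PyTorch are choices made here, not details taken from the embodiment.

```python
import numpy as np
import torch
import torch.nn as nn

def rvtdnn_features(x, delays=4):
    """Build the real-valued RVTDNN input from a complex signal x.

    Each of real(x), imag(x), |x| contributes the current sample plus `delays`
    delayed samples, giving 3 * (delays + 1) features per time step.
    """
    x = np.asarray(x, dtype=complex)
    cols = []
    for part in (x.real, x.imag, np.abs(x)):
        for d in range(delays + 1):
            shifted = np.roll(part, d)
            shifted[:d] = 0.0            # clear samples that wrapped around
            cols.append(shifted)
    return np.stack(cols, axis=1).astype(np.float32)   # shape (N, 15) for delays=4

# MLP part: one hidden layer of 16 tanh nodes.
mlp = nn.Sequential(nn.Linear(15, 16), nn.Tanh(), nn.Linear(16, 2))

x = np.exp(1j * np.linspace(0, 2 * np.pi, 128))
out = mlp(torch.from_numpy(rvtdnn_features(x)))  # predicted real/imaginary output parts
```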
  • Figure 11 is an example diagram of the training effect of a communication processing model provided by the embodiment of the present application. Referring to Figure 11, the learning rate during the entire training process increases rapidly at the beginning and then slowly decays until the set minimum value is reached.
  • FIG. 12 is a comparison diagram of the training effect of a communication processing model provided by the embodiment of the present application.
  • Fig. 12 shows a comparison between training the RVTDNN model with the adaptive learning rate set by the parameter setting method of the embodiment of the present application and training with a fixed learning rate.
  • The comparison results show that, compared with the fixed learning rate, the performance metric information of the adaptive learning rate set by the parameter setting method provided by the embodiment of the present application converges faster, and the final convergence performance is also better.
  • FIG. 13 is a schematic structural diagram of another communication processing model provided by the embodiment of the present application.
  • The communication processing model can be a digital pre-distortion (Digital Pre-Distortion, DPD) model, which allows the transmitted signal to be processed by the DPD module first and then fed into the power amplifier, so that the output signal of the power amplifier is a linear amplification of the original signal.
  • The memory polynomial model is a commonly used DPD module model, and its expression takes the same memory polynomial form given above.
  • FIG. 13 shows an LMS algorithm for solving DPD coefficients with an indirect architecture.
  • the training process of the LMS algorithm is similar to the algorithm of the neural network.
  • the parameter setting method in the embodiment of the present application can be used.
  • The processing procedure of the common LMS algorithm can include the following steps:
  • Set n ← n+1; if n ≤ N, return to step (4);
  • Set n ← n+1; if n ≤ N, return to step (5);
  • FIG. 14 is an example diagram of the training effect of an LMS model provided by the embodiment of the present application. Referring to FIG. 14, in the digital predistortion model the common LMS algorithm is used with different fixed learning rates, and the figure shows how the NMSE changes as the number of training iterations increases. It can be seen that the final effects of different learning rates differ greatly; training with the fixed learning rates of the related technology is extremely unstable, which can easily distort the signal sent by the communication system and reduce the communication quality.
  • Fig. 15 is an example diagram of the training effect of another LMS model provided by the embodiment of the present application. Referring to Fig. 15, the LMS algorithm combined with the parameter setting method of the embodiment of the present application is used in the same digital predistortion model.
  • The figure shows how the NMSE changes as the number of training iterations increases. It can be seen that even if different initial learning rates are set, the training performance quickly converges to the same level.
  • The method is insensitive to the setting of the initial learning rate, which makes it convenient for the user to train the digital predistortion model.
  • the adjustment factors α_1 and α_3 change dynamically along with the training process.
  • a signal sent by a transmitter will be affected by multipath propagation when passing through a wireless channel, and an equalization module needs to be used on the receiver side to restore the original signal.
  • the transmitted signal is x
  • the received signal is y
  • the channel impulse response is h
  • equalization is the process of recovering x from y. The equalization can adopt a linear model, for example $\hat{x}(n) = \sum_{i=0}^{L-1} w_i\, y(n-i)$, where L is the number of equalizer taps.
  • the coefficient w of the model can be solved using the LMS algorithm.
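  • A sketch of solving the coefficients w of such a linear equalizer with the LMS algorithm, in its common complex-valued form, is given below; the tap count, the step size, and the use of known transmitted samples as the reference are assumptions made here for illustration.

```python
import numpy as np

def lms_equalizer(y, x_ref, num_taps=8, mu=0.01):
    """Estimate linear equalizer taps w from received samples y and known transmitted samples x_ref."""
    w = np.zeros(num_taps, dtype=complex)
    for n in range(num_taps, len(y)):
        y_vec = y[n - num_taps + 1:n + 1][::-1]   # most recent num_taps received samples
        x_hat = np.dot(w, y_vec)                  # equalizer output (estimate of x[n])
        e = x_ref[n] - x_hat                      # error against the known transmitted sample
        w = w + mu * e * np.conj(y_vec)           # complex LMS coefficient update
    return w

# Example: recover a QPSK-like signal passed through a simple two-tap channel.
rng = np.random.default_rng(0)
x = (rng.choice([1, -1], 1000) + 1j * rng.choice([1, -1], 1000)) / np.sqrt(2)
y = np.convolve(x, np.array([1.0, 0.3 + 0.2j]))[: len(x)]
w = lms_equalizer(y, x)
```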
  • the equalization can also be the RVTDNN model in the above embodiment, and the coefficients of the model can be solved using a back propagation (Back Propagation, BP) algorithm.
  • Fig. 16 is a schematic structural diagram of a parameter setting device provided by an embodiment of the present application, which can execute the parameter setting method provided by any embodiment of the present application, and has corresponding functional modules and beneficial effects for executing the method.
  • The device can be implemented in software and/or hardware, and includes: a current parameter module 401, a training trend module 402 and a parameter adjustment module 403.
  • the current parameter module 401 is configured to determine the current performance measurement information of the communication processing model.
  • the training trend module 402 is configured to determine a performance change trend according to the performance metric information and historical performance metric information.
  • the parameter adjustment module 403 is configured to adjust the learning rate of the communication processing model based on the performance change trend, and retrain the communication processing model according to the learning rate.
  • In the embodiment of the present application, the current parameter module determines the performance metric information of the communication processing model, the training trend module determines the performance change trend according to the comparison result between the performance metric information and the historical performance metric information, and the parameter adjustment module adjusts the learning rate of the communication processing model according to the performance change trend and retrains the communication processing model according to the learning rate. This realizes rapid training of the communication processing model; dynamically adjusting the learning rate through the performance change trend can improve the accuracy of the communication processing model and reduce its training time, thereby reducing the waiting delay of the communication system and enhancing communication efficiency.
  • the performance measurement information in the current parameter module 401 includes at least one of mean square error and normalized mean square error.
  • the historical performance metric information in the training trend module 402 includes historical optimal performance metric information and previous training performance metric information.
  • the training trend module 402 includes:
  • the comparison execution unit is configured to determine a first comparison result and a second comparison result between the performance metric information and the historical optimal performance metric information and the previous training performance metric information respectively.
  • a trend determination unit is configured to use the first comparison result and the second comparison result as the performance change trend.
  • the parameter adjustment module 403 includes:
  • the first processing unit is configured to reduce the value of the learning rate when the first comparison result is that the performance metric information is worse than the historical optimal performance metric information and the second comparison result is that the performance metric information is worse than the previous training performance metric information.
  • the second processing unit is configured to increase the value of the learning rate when the first comparison result is that the performance metric information is better than the historical optimal performance metric information.
  • the parameter adjustment module 403 in the device further includes:
  • the factor adjustment unit is configured to determine an adjustment factor corresponding to the performance change trend, and use the adjustment factor to update the value of the learning rate.
  • the device further includes:
  • a parameter initialization module configured to initialize the historical performance measurement information and the learning rate of the communication processing model.
  • the device further includes:
  • the model use module is configured to determine the fitting relationship between the input voltage and the output voltage of the communication device according to the trained communication processing model.
  • the communication processing model in the device includes: at least one of a communication power amplifier behavior model, a predistortion model, and an adaptive equalization model.
  • Figure 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application. The electronic device includes a processor 50, a memory 51, an input device 52 and an output device 53; the number of processors 50 in the electronic device may be one or more.
  • In FIG. 17, one processor 50 is taken as an example. The processor 50, the memory 51, the input device 52 and the output device 53 in the electronic device may be connected through a bus or in other ways; in FIG. 17, connection through a bus is taken as an example.
  • The memory 51, as a computer-readable storage medium, can be configured to store software programs, computer-executable programs and modules, such as the modules corresponding to the parameter setting device in the embodiment of the present application (the current parameter module 401, the training trend module 402 and the parameter adjustment module 403).
  • the processor 50 executes various functional applications and data processing of the electronic device by running the software programs, instructions and modules stored in the memory 51 , that is, realizes the above parameter setting method.
  • the computer readable storage medium may be a non-transitory computer readable storage medium.
  • the memory 51 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application required by a function; the data storage area may store data created according to the use of the electronic device, and the like.
  • the memory 51 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage devices.
  • memory 51 may include memory located remotely relative to processor 50, and these remote memories may be connected to the electronic device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 52 can be configured to receive input numbers or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the output device 53 may include a display device such as a display screen.
  • The embodiment of the present application also provides a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are configured to execute the parameter setting method described in the embodiments above.
  • The multiple units and modules included in the above device are only divided according to functional logic, and the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other, and are not used to limit the protection scope of the present application.
  • The division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed by several physical components in cooperation.
  • Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit .
  • Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • the embodiment of the present application realizes the dynamic update of the learning rate, improves the accuracy of the communication processing model, can reduce the training time of the communication processing model, thereby reducing the waiting time of the communication system, enhancing communication efficiency, and improving communication quality.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Filters That Use Time-Delay Elements (AREA)

Abstract

Provided in the embodiments of the present application are a parameter setting method and apparatus, and an electronic device and a storage medium. The method comprises: determining current performance measurement information of a communication processing model; determining a performance change trend according to the performance measurement information and historical performance measurement information; and adjusting a learning rate of the communication processing model on the basis of the performance change trend, and retraining the communication processing model according to the learning rate.

Description

Parameter setting method, device, electronic device and storage medium
Technical Field
The present application relates to the technical field of wireless communication, for example, to a parameter setting method, device, electronic device and storage medium.
Background Art
With the development of wireless communication technology, network latency is gradually decreasing, and communication technology is becoming more intelligent and efficient. Traditional methods can no longer keep up with this trend, and with the increase of hardware computing power and the maturity of artificial intelligence (AI) technology, more and more vendors choose to use AI technology to solve problems in wireless communication, for example channel estimation and power amplifier behavior modeling. However, in the AI field the training of neural networks often depends on how the learning rate is set. If the learning rate is set too high, the performance metrics of the neural network oscillate back and forth and fail to converge, and the neural network generated by training degrades communication quality; if the learning rate is set too low, the performance metrics of the neural network converge slowly, and the longer training process increases communication delay. How to set the learning rate therefore becomes an important factor in improving communication efficiency and communication quality.
At present, there are two traditional methods for solving the learning rate setting problem:
1) Setting a fixed learning rate. This approach is often used to solve problems in fixed scenarios, but because application scenarios in wireless communication are complex, a fixed learning rate often cannot meet the needs of multiple scenarios, so the neural network generated by training cannot adapt to these complex scenarios, resulting in reduced communication quality.
2) Setting learning rate decay, that is, initializing a relatively large learning rate and decaying it during training, where common decay schemes include piecewise decay and exponential decay. However, learning rate decay introduces new parameters, which increases the complexity of the communication system and makes the communication quality uncontrollable. In addition, the decay process is one-directional; when the actual demand of the communication scenario exceeds the initially set learning rate, the neural network obtained after a large amount of actual training cannot improve the performance of the communication system.
In view of the above problems, an adaptive learning rate setting method is urgently needed to reduce the training time of neural networks in the communication network, reduce communication delay, and improve communication quality.
Summary of the Invention
Embodiments of the present application provide a parameter setting method, device, electronic device and storage medium, so as to realize fast training of a communication processing model, reduce communication delay, and improve the communication quality of the system.
The embodiment of the present application provides a parameter setting method, and the method includes the following steps:
determining current performance metric information of a communication processing model;
determining a performance change trend according to the performance metric information and historical performance metric information;
adjusting a learning rate of the communication processing model based on the performance change trend, and retraining the communication processing model according to the learning rate.
The embodiment of the present application also provides a parameter setting device, and the device includes:
a current parameter module, configured to determine current performance metric information of a communication processing model;
a training trend module, configured to determine a performance change trend according to the performance metric information and historical performance metric information;
a parameter adjustment module, configured to adjust the learning rate of the communication processing model based on the performance change trend, and retrain the communication processing model according to the learning rate.
The embodiment of the present application also provides an electronic device, and the electronic device includes:
one or more processors;
a memory, configured to store one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the parameter setting method described in any one of the embodiments of the present application.
The embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the parameter setting method described in any one of the embodiments of the present application is implemented.
In the embodiment of the present application, the performance metric information of the communication processing model is obtained and compared with historical performance metric information to determine a performance change trend, the learning rate of the communication processing model is adjusted according to the performance change trend, and the communication processing model is retrained according to the learning rate. This realizes dynamic training of the communication processing model, improves the accuracy of the learning rate of the communication processing model, and can improve the communication quality of the communication system; adjusting the learning rate according to the performance change trend can reduce the training time of the model, thereby reducing the waiting delay of the communication system and enhancing communication efficiency.
Description of the Drawings
FIG. 1 is a flow chart of a parameter setting method provided by an embodiment of the present application;
FIG. 2 is a flow chart of another parameter setting method provided by an embodiment of the present application;
FIG. 3 is a flow chart of another parameter setting method provided by an embodiment of the present application;
FIG. 4 is an example diagram of a parameter setting method provided by an embodiment of the present application;
FIG. 5 is a verification example diagram of a parameter setting method provided by an embodiment of the present application;
FIG. 6 is an example diagram of a learning rate change trend provided by an embodiment of the present application;
FIG. 7 is a comparison diagram of neural network convergence speed provided by an embodiment of the present application;
FIG. 8 is a training example diagram of an MP model under a fixed learning rate provided by an embodiment of the present application;
FIG. 9 is a training example diagram of an MP model under a parameter setting method provided by an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a communication processing model provided by an embodiment of the present application;
FIG. 11 is an example diagram of the training effect of a communication processing model provided by an embodiment of the present application;
FIG. 12 is a comparison diagram of training effects of a communication processing model provided by an embodiment of the present application;
FIG. 13 is a schematic structural diagram of another communication processing model provided by an embodiment of the present application;
FIG. 14 is an example diagram of the training effect of an LMS model provided by an embodiment of the present application;
FIG. 15 is an example diagram of the training effect of another LMS model provided by an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a parameter setting device provided by an embodiment of the present application;
FIG. 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
具体实施方式detailed description
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。It should be understood that the specific embodiments described here are only used to explain the present application, and are not intended to limit the present application.
在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或“单元”的后缀仅为了有利于本申请的说明,其本身没有特有的意义。因此,“模块”、“部件”或“单元”可以混合地使用。In the subsequent description, use of suffixes such as 'module', 'part' or 'unit' for denoting elements is only for facilitating the description of the present application and has no specific meaning by itself. Therefore, 'module', 'part' or 'unit' may be used in combination.
图1是本申请实施例提供的一种参数设置方法的流程图,本申请实施例可以适用于通信系统中深度学习模型训练的情况,该方法可以由参数设置装置来执行,该装置可以通过软件和/或硬件方式实现,一般集成在集中或通信终端中,参见图1,本申请实施例提供的方法包括如下步骤:Fig. 1 is a flow chart of a parameter setting method provided by the embodiment of the present application. The embodiment of the present application can be applied to the situation of deep learning model training in the communication system. The method can be executed by a parameter setting device, which can be implemented by software And/or hardware implementation, generally integrated in a centralized or communication terminal, referring to Figure 1, the method provided by the embodiment of the present application includes the following steps:
Step 110: determine current performance metric information of a communication processing model.
The communication processing model may be a deep learning model used for processing communication parameters in a wireless communication system. Such a model needs to be pre-trained with massive amounts of data, and its accuracy directly affects the communication quality of the wireless communication system. The communication processing model may include a power amplifier behavior model for signal processing, a digital predistortion model for improving power amplifier linearity, an adaptive equalization model for estimating the transmitted signal, and the like. The performance metric information may be information reflecting the accuracy and processing efficiency of the communication processing model; for example, it may include the error between the output of the communication processing model and the validation set data, and the processing speed of the communication processing model.
In an embodiment of the present application, during training of the communication processing model, performance metric information describing the current state of the model may be obtained. The acquisition may include: feeding validation data into the communication processing model to obtain an output, comparing the output with the validation data, and using the comparison result as the current performance metric information; alternatively, timing may be started when the validation data is fed into the communication processing model and stopped when the model produces its output, and the measured duration may be used as the performance metric information of the communication processing model.
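As an illustration of the two acquisition approaches described above, the following Python sketch measures both the validation error and the processing time; the `model` object, its `predict` method and the validation arrays are hypothetical placeholders introduced for the example and are not part of the original disclosure.

```python
import time
import numpy as np

def measure_performance(model, x_val, y_val):
    """Return (error, elapsed_seconds) for one validation pass.

    The error is the mean squared difference between the model output and
    the validation targets; the elapsed time reflects the processing speed.
    """
    start = time.perf_counter()            # start timing when validation data is fed in
    y_hat = model.predict(x_val)           # model output for the validation inputs
    elapsed = time.perf_counter() - start  # stop timing when the output is produced
    error = float(np.mean(np.abs(y_hat - y_val) ** 2))  # comparison with validation data
    return error, elapsed
```

Either of the two returned quantities, or both, may serve as the performance metric information in the steps that follow.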
Step 120: determine a performance change trend according to the performance metric information and historical performance metric information.
The historical performance metric information may be the performance metric information determined at each validation during training of the communication processing model; it may be all performance metric information determined so far, or the best value among all performance metric information determined so far. The performance change trend may be the result of comparing the current performance metric information with the historical performance metric information and may include performance improvement, performance degradation, or unchanged performance. It can be understood that performance improvement means the current performance metric information is better than the historical performance metric information, performance degradation means it is worse, and unchanged performance may mean the two are equal.
For example, the performance metric information may be compared with the historical performance metric information, and the comparison result may be used as the performance change trend. Exemplarily, the difference between the values of the performance metric information and the historical performance metric information may be computed, and the performance change trend may be determined according to the magnitude of the difference.
Step 130: adjust the learning rate of the communication processing model based on the performance change trend, and retrain the communication processing model according to the adjusted learning rate.
The learning rate may be the rate at which data is learned in deep learning. It may be a hyperparameter that determines whether the objective function of the communication processing model can converge to a local minimum and how fast it converges.
In an embodiment of the present application, the learning rate of the communication processing model may be adjusted according to the performance change trend. For example, when performance improves, the learning rate may be increased to improve the accuracy of the communication processing model; when performance degrades, the learning rate may be decreased to improve the training speed of the communication processing model. After the learning rate has been adjusted, the communication processing model may be retrained with the training data to obtain new weights for the communication processing model.
In the embodiments of the present application, the performance metric information of the communication processing model is determined, a performance change trend is determined according to the comparison between the performance metric information and historical performance metric information, the learning rate of the communication processing model is adjusted according to the performance change trend, and the model is retrained with the adjusted learning rate. This enables fast training of the communication processing model. Dynamically adjusting the learning rate according to the performance change trend can improve the accuracy of the communication processing model and reduce its training time, thereby reducing the waiting delay of the communication system and enhancing communication efficiency.
For example, on the basis of the above embodiments, the performance metric information includes at least one of a mean squared error and a normalized mean squared error.
For example, the mean squared error (MSE) may refer to the mean of the squared differences between the outputs of the communication processing model and the validation data. The MSE may be computed as:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(x_i - y_i)^2$$

where n is the number of validation samples used to verify the performance of the communication processing model, x_i is the model output, and y_i is the validation data.
In an embodiment of the present application, the normalized mean squared error (NMSE) may be obtained by transforming the MSE expression into a dimensionless scalar. The NMSE may be computed as:

$$\mathrm{NMSE} = \frac{\sum_{i=1}^{n}(x_i - y_i)^2}{\sum_{i=1}^{n} y_i^2}$$

where n is the number of validation samples used to verify the performance of the communication processing model, x_i is the model output, and y_i is the validation data.
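The two metrics can be computed directly from the formulas above. The following NumPy helpers are an illustrative sketch, not part of the original disclosure; `np.abs` is used so the same functions also apply to the complex-valued signals considered in the later embodiments.

```python
import numpy as np

def mse(x, y):
    """Mean squared error between model outputs x and validation data y."""
    x, y = np.asarray(x), np.asarray(y)
    return float(np.mean(np.abs(x - y) ** 2))

def nmse(x, y):
    """Normalized mean squared error: the squared error normalized by the
    energy of the validation data, giving a dimensionless scalar."""
    x, y = np.asarray(x), np.asarray(y)
    return float(np.sum(np.abs(x - y) ** 2) / np.sum(np.abs(y) ** 2))
```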
For example, on the basis of the above embodiments, the historical performance metric information includes historical optimal performance metric information and previous-training performance metric information.
In an embodiment of the present application, the historical optimal performance metric information may be the best value among all previously determined performance metric information, where "best" may mean, for example, that the accuracy of the communication processing model is highest or that its processing speed is fastest. The previous-training performance metric information may be the performance metric information determined in the previous training round or in the previous several rounds. For example, each time new performance metric information is determined, it may be compared with the historical optimal performance metric information; if the newly determined performance metric information is better, the historical optimal performance metric information is replaced with it. Likewise, the previous-training performance metric information is replaced with the currently determined performance metric information.
FIG. 2 is a flowchart of another parameter setting method provided by an embodiment of the present application. This embodiment is a refinement of the above embodiments. Referring to FIG. 2, the method includes the following steps:
Step 210: determine current performance metric information of the communication processing model.
Step 220: determine a first comparison result between the performance metric information and the historical optimal performance metric information, and a second comparison result between the performance metric information and the previous-training performance metric information; use the first comparison result and the second comparison result as the performance change trend.
In an embodiment of the present application, the performance metric information may be compared with the historical optimal performance metric information and with the previous-training performance metric information, respectively. The comparison with the historical optimal performance metric information is recorded as the first comparison result, and the comparison with the previous-training performance metric information is recorded as the second comparison result. The obtained first and second comparison results may be used as the performance change trend reflecting the training state of the communication processing model.
Step 230: when the first comparison result is that the performance metric information is worse than the historical optimal performance metric information, and the second comparison result is that the performance metric information is worse than the previous-training performance metric information, decrease the learning rate.
For example, if the first comparison result is that the performance metric information is worse than the historical optimal performance metric information, and the second comparison result is that the performance metric information is worse than the previous-training performance metric information, it is determined that the performance of the communication processing model has degraded after the current training round, and the learning rate is decreased so that the performance of the communication processing model can rise in subsequent training.
Step 240: when the first comparison result is that the performance metric information is better than the historical optimal performance metric information, increase the learning rate.
In an embodiment of the present application, if the first comparison result is that the performance metric information is better than the historical optimal performance metric information, the current performance of the communication processing model is the best so far, and the learning rate may be further increased to improve the performance of the communication processing model.
For example, if the first comparison result is that the performance metric information is worse than the historical optimal performance metric information, but the second comparison result is that the performance metric information is better than the previous-training performance metric information, the current learning rate is helping to improve the performance of the communication processing model; therefore, the learning rate may be left unchanged.
Step 250: retrain the communication processing model according to the learning rate.
In an embodiment of the present application, after the learning rate has been adjusted, the communication processing model may be retrained with the training data to obtain new weights for the communication processing model.
In the embodiments of the present application, the performance metric information of the current communication processing model is obtained and compared with the historical optimal performance metric information and the previous-training performance metric information to obtain the first and second comparison results as the performance change trend. When the first comparison result is that the performance metric information is worse than the historical optimal performance metric information and the second comparison result is that it is worse than the previous-training performance metric information, the learning rate is decreased; when the first comparison result is that the performance metric information is better than the historical optimal performance metric information, the learning rate is increased. The communication processing model is then retrained with the adjusted learning rate. This enables fast training of the communication processing model; dynamically adjusting the learning rate according to the performance change trend can improve the accuracy of the communication processing model and reduce its training time, thereby reducing the waiting delay of the communication system and enhancing communication efficiency.
FIG. 3 is a flowchart of another parameter setting method provided by an embodiment of the present application. This embodiment is a refinement of the above embodiments. Referring to FIG. 3, the method includes the following steps:
Step 310: initialize the historical performance metric information and the learning rate of the communication processing model.
In an embodiment of the present application, the historical performance metric information and the learning rate may be initialized at the beginning of training of the communication processing model, that is, an initial value is set for each of them. With the method of the present application, the accuracy requirement on these initial values is low and the allowable setting range is wide, which reduces the influence of the learning rate setting range on the communication processing model.
Step 320: determine current performance metric information of the communication processing model.
Step 330: determine a performance change trend according to the performance metric information and the historical performance metric information.
Step 340: determine an adjustment factor corresponding to the performance change trend, and update the learning rate using the adjustment factor.
The adjustment factor may be a weighting coefficient used to adjust the value of the learning rate.
In an embodiment of the present application, different adjustment factors may be preset for different performance change trends. When a performance change trend is obtained, the corresponding adjustment factor may be selected and used to adjust the learning rate. For example, the product of the adjustment factor and the current learning rate may be used as the learning rate for the next round of training of the communication processing model.
Step 350: retrain the communication processing model according to the learning rate.
Step 360: determine a fitting relationship between the input voltage and the output voltage of a communication device according to the trained communication processing model.
For example, the communication processing model may be a communication power amplifier behavior model that relates the input voltage and the output voltage of a device. This model may be used to fit the relationship between the input voltage and the output voltage of the communication device, so as to accurately control the voltage of the communication device and reduce its energy consumption.
In the embodiments of the present application, the historical performance metric information and the learning rate of the communication processing model are initialized; during training, the current performance metric information of the communication processing model is determined, the performance change trend is determined according to the performance metric information and the historical performance metric information, the adjustment factor corresponding to the performance change trend is obtained, the learning rate is updated according to this adjustment factor, and the communication processing model is retrained with the updated learning rate. After training is completed, the input voltage and output voltage of the communication device are fitted, which enables accurate control of the communication device voltage and reduces voltage fluctuations, thereby improving communication stability and communication quality.
For example, on the basis of the above embodiments, the communication processing model includes at least one of a communication power amplifier behavior model, a digital predistortion model, and an adaptive equalization model.
In an exemplary embodiment, FIG. 4 is an example diagram of a parameter setting method provided by an embodiment of the present application. Referring to FIG. 4, assume that the neural network parameters are c, the performance measure is M, and the measurement function is M = f(c). The learning rate is μ, where μ is a real number greater than 0.
The parameters at the completion of the n-th training round are denoted c_n, the performance measure is denoted M_n, and the learning rate is denoted μ_n.
Let M_best be the historical optimal value of the neural network performance measure at the end of the (n-1)-th training round.
The content to be protected in the present application is listed as follows:
(1) Learning rate adaptation mechanism: compute the performance measure of the network when the n-th training round is completed, M_n = f(c_n);
① if M_n is better than M_best, let M_best = M_n and μ_{n+1} = α_1 μ_n;
② if M_n is worse than M_best and M_n is better than M_{n-1}, then μ_{n+1} = α_2 μ_n;
③ if M_n is worse than M_best and M_n is worse than M_{n-1}, then μ_{n+1} = α_3 μ_n;
(2) As described in (1), if the performance measure is the MSE or the NMSE, "better than" is equivalent to "less than" and "worse than" is equivalent to "greater than";
(3) As described in (1), α_1, α_2, α_3 are learning rate adjustment factors, where α_1 ≥ 1 and α_3 ≤ 1;
(4) As described in (1), α_1, α_2, α_3 may be fixed values or values that vary with the training process;
(5) As described in (1), if a maximum learning rate μ_max is configured, then μ_{n+1} = min(μ_{n+1}, μ_max);
(6) As described in (1), if a minimum learning rate μ_min is configured, then μ_{n+1} = max(μ_{n+1}, μ_min).
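For illustration only, a minimal Python sketch of rules (1) to (6) is given below. The function name, the convention that a smaller metric value is better (as with MSE or NMSE), and the default factor values are assumptions made for the example; they do not limit the claims.

```python
def adapt_learning_rate(m_n, m_best, m_prev, mu_n,
                        alpha1=1.02, alpha2=1.0, alpha3=0.99,
                        mu_max=None, mu_min=None):
    """One application of rules (1)-(6); a smaller metric value means better.

    Returns the updated learning rate and the updated historical best metric.
    """
    if m_n < m_best:            # case 1: new historical best
        m_best = m_n
        mu_next = alpha1 * mu_n  # alpha1 >= 1, so the rate grows
    elif m_n < m_prev:          # case 2: worse than the best, better than last round
        mu_next = alpha2 * mu_n
    else:                       # case 3: worse than the best and than last round
        mu_next = alpha3 * mu_n  # alpha3 <= 1, so the rate shrinks
    if mu_max is not None:      # rule (5): clip to the configured maximum
        mu_next = min(mu_next, mu_max)
    if mu_min is not None:      # rule (6): clip to the configured minimum
        mu_next = max(mu_next, mu_min)
    return mu_next, m_best
```

In a training loop this would typically be called once per training round, with m_prev then set to m_n for the next call.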
The above description uses a neural network as the background, but this does not mean that the embodiments of the present application are applicable only to neural networks.
The effectiveness of the method of the embodiments of the present application is verified by using it, in the neural network case, to set parameters so as to improve the accuracy of fitting a sine function. Referring to FIG. 5, assume the input of the sinusoidal signal is x, where x consists of 1024 numbers starting from 0 with an interval of 0.01 between adjacent numbers. The output of the sinusoidal signal is y, with y = sin(x).
A neural network model is built with 1 input, 1 output, and 1 hidden layer containing 4 neurons. The hidden layer uses the tanh activation function and the output layer uses a linear activation function. The parameters of the neural network are initialized, and the initialization parameters are denoted c_0. They comprise two parameter sets, w and b, where b_11 = b_12 = b_13 = b_14 = b_21 = 0, w_21 = -0.4228, w_22 = -0.2863, w_23 = -0.2793, w_24 = 0.0892, w_11 = -0.6490, w_12 = 1.1812, w_13 = -0.7585, w_14 = -1.1096.
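The following NumPy sketch illustrates a forward pass of this 1-4-1 network. The mapping of w_11 to w_14 and b_11 to b_14 to the hidden layer and of w_21 to w_24 and b_21 to the output layer is an assumption made for the example, since the original text only lists the values.

```python
import numpy as np

# Hidden layer: 4 tanh neurons; output layer: 1 linear neuron.
w1 = np.array([-0.6490, 1.1812, -0.7585, -1.1096])  # assumed hidden-layer weights (w_11..w_14)
b1 = np.zeros(4)                                     # b_11..b_14
w2 = np.array([-0.4228, -0.2863, -0.2793, 0.0892])   # assumed output-layer weights (w_21..w_24)
b2 = 0.0                                             # b_21

def forward(x):
    """Forward pass of the 1-4-1 tanh network for a scalar or 1-D input x."""
    x = np.atleast_1d(x)
    h = np.tanh(np.outer(x, w1) + b1)  # hidden-layer activations, shape (len(x), 4)
    return h @ w2 + b2                 # linear output layer

x = np.arange(0, 10.24, 0.01)          # 1024 points spaced 0.01 apart
y_hat = forward(x)                     # network output to be fitted to sin(x)
```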
The NMSE is used as the measure of the neural network fitting performance, that is, M_n = NMSE(c_n). NMSE denotes the normalized mean squared error; a smaller value is better. The NMSE is expressed as:

$$\mathrm{NMSE} = \frac{\sum_{i=1}^{1024} (\hat{y}_i - y_i)^2}{\sum_{i=1}^{1024} y_i^2}$$

where ŷ_i is the output of the neural network for the i-th sample point x_i.
The initial value of the learning rate is set to 0.001, the maximum learning rate is set to 0.1, and the minimum is set to 0.0001.
Fixed learning rate adjustment factors are set: α_1 = 1.02, α_2 = 1, α_3 = 0.99.
The fitting performance of the neural network is computed with the initialization parameters c_0, giving M_best = M_0 = NMSE(c_0) = 1.0691.
Training starts after the above steps are completed. The Adam optimization algorithm is used. An excerpt of the training records is shown in the following table:
Table 1 Training records 1 to 10

n=1  Mn=1.0456e+00  Mbest=1.0691e+00  μ=0.00102 (↑)
n=2  Mn=1.0278e+00  Mbest=1.0456e+00  μ=0.00104 (↑)
n=3  Mn=1.0155e+00  Mbest=1.0278e+00  μ=0.00106 (↑)
n=4  Mn=1.0070e+00  Mbest=1.0155e+00  μ=0.00108 (↑)
n=5  Mn=1.0005e+00  Mbest=1.0070e+00  μ=0.00110 (↑)
n=6  Mn=9.9527e-01  Mbest=1.0005e+00  μ=0.00113 (↑)
n=7  Mn=9.9039e-01  Mbest=9.9527e-01  μ=0.00115 (↑)
n=8  Mn=9.8562e-01  Mbest=9.9039e-01  μ=0.00117 (↑)
n=9  Mn=9.8100e-01  Mbest=9.8562e-01  μ=0.00120 (↑)
Table 1 shows the training records from n=1 to n=10. It can be seen that at this stage the neural network performance measure decreases monotonically, so the learning rate keeps increasing.
Table 2 Training records 271 to 279

n=271  Mn=3.7540e-03  Mbest=3.6499e-03  μ=0.04725 (↓)
n=272  Mn=1.5344e-02  Mbest=3.6499e-03  μ=0.04678 (↓)
n=273  Mn=1.4601e-02  Mbest=3.6499e-03  μ=0.04678
n=274  Mn=6.4012e-03  Mbest=3.6499e-03  μ=0.04678
n=275  Mn=3.3701e-03  Mbest=3.6499e-03  μ=0.04772 (↑)
n=276  Mn=3.0402e-03  Mbest=3.3701e-03  μ=0.04867 (↑)
n=277  Mn=3.0309e-03  Mbest=3.0402e-03  μ=0.04965 (↑)
n=278  Mn=2.7975e-03  Mbest=3.0309e-03  μ=0.05064 (↑)
n=279  Mn=2.9533e-03  Mbest=2.7975e-03  μ=0.05013 (↓)
FIG. 6 is an example diagram of the learning rate change trend provided by an embodiment of the present application. Referring to FIG. 6, it shows how the learning rate evolves over the whole training process: it first increases quickly and then decays slowly until the configured minimum learning rate is reached. FIG. 7 is a comparison diagram of neural network convergence speed provided by an embodiment of the present application. Referring to FIG. 7, the learning rate adjustment of the embodiment of the present application is compared with fixed learning rates on the same neural network, showing the effect of the proposed adaptive learning rate mechanism: it converges faster than a fixed learning rate, its final convergence performance is comparable to that of the optimal fixed learning rate, and its fluctuation is smaller.
Exemplarily, take behavior modeling of a power amplifier in a wireless communication system as an example, in which the relationship between the power amplifier input x and output y is fitted, with x and y both complex-valued. In this embodiment, the communication processing model is a memory polynomial (MP) model, which is commonly used for power amplifier behavior modeling and is expressed as:

$$z(n) = \sum_{k=1}^{K}\sum_{q=0}^{Q} w_{k,q}\, x(n-q)\,|x(n-q)|^{k-1}$$

where z is the model output, w_{k,q} are the model coefficients, Q denotes the memory depth, and K denotes the nonlinearity order. Modeling the power amplifier with the MP model amounts to solving for the model coefficients w_{k,q}.
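As an illustration, the MP model output can be evaluated as below. This NumPy sketch assumes zero input before the start of the sequence and a (K, Q+1) coefficient layout; both are assumptions made for the example.

```python
import numpy as np

def mp_output(x, w, K, Q):
    """Memory polynomial output z(n) = sum_k sum_q w[k,q] * x(n-q) * |x(n-q)|^(k-1).

    x is a complex input sequence, w is a (K, Q+1) complex coefficient array.
    """
    x = np.asarray(x, dtype=complex)
    z = np.zeros_like(x)
    for k in range(1, K + 1):
        for q in range(Q + 1):
            xq = np.roll(x, q)   # x(n-q)
            xq[:q] = 0           # assume zero input before the first sample
            z += w[k - 1, q] * xq * np.abs(xq) ** (k - 1)
    return z
```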
The least mean square (LMS) algorithm is a common method for solving the model coefficients w_{k,q}. Like neural network training algorithms, it is an iterative solution algorithm based on gradient descent. The complex-valued LMS algorithm in the general sense works as follows:
(1) Initialize w_{k,q} and set the learning rate μ;
(2) Iteration count cnt = 1;
(3) n = 1;
(4) Compute the partial derivative of the objective function J(n) = |e(n)|^2 with respect to w_{k,q}, where e(n) = z(n) - y(n) and z(n) is the MP model output for input x(n);
(5) Update the parameters: w_{k,q} = w_{k,q} - μ (x(n-q)|x(n-q)|^{k-1})^* e(n);
(6) n = n + 1; if n ≤ N, return to (4);
(7) cnt = cnt + 1, return to (3).
The parameter setting method of the embodiments of the present application can be combined with the ordinary LMS algorithm, with the following steps:
(1) Initialize w_{k,q}, set the learning rate μ_1, and set fixed learning rate adjustment factors α_1 = 1.3, α_2 = 1, α_3 = 0.9;
(2) Compute the performance measure of the initial model, M_best = M_0, using the NMSE of the initial model;
(3) Iteration count cnt = 1;
(4) n = 1;
(5) Compute the partial derivative of the objective function J(n) = |e(n)|^2 with respect to w_{k,q}, where e(n) = z(n) - y(n) and z(n) is the MP model output;
(6) Update the parameters: w_{k,q} = w_{k,q} - μ (x(n-q)|x(n-q)|^{k-1})^* e(n);
(7) n = n + 1; if n ≤ N, return to (5);
(8) Compute the performance measure M_cnt of the cnt-th iteration;
① if M_cnt is smaller than M_best, let M_best = M_cnt and μ_{cnt+1} = α_1 μ_cnt;
② if M_cnt is greater than M_best and M_cnt is smaller than M_{cnt-1}, then μ_{cnt+1} = α_2 μ_cnt;
③ if M_cnt is greater than M_best and M_cnt is greater than M_{cnt-1}, then μ_{cnt+1} = α_3 μ_cnt;
(9) cnt = cnt + 1, return to (4).
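A condensed Python sketch of steps (1) to (9) is given below; it is an illustration of the procedure rather than a reference implementation. It reuses the hypothetical mp_output and nmse helpers defined in the earlier examples, and the basis terms x(n-q)|x(n-q)|^{k-1} are assembled explicitly for the update.

```python
import numpy as np

def fit_mp_adaptive(x, y, K, Q, mu=1e-3,
                    alpha1=1.3, alpha2=1.0, alpha3=0.9, n_iter=50):
    """LMS fitting of MP coefficients with the adaptive learning rate of steps (1)-(9)."""
    x = np.asarray(x, dtype=complex)
    y = np.asarray(y, dtype=complex)
    w = np.zeros((K, Q + 1), dtype=complex)            # (1) initialize the coefficients
    m_best = m_prev = nmse(mp_output(x, w, K, Q), y)   # (2) initial performance measure
    for cnt in range(n_iter):                          # (3)/(9) outer iterations
        for n in range(len(x)):                        # (4)/(7) sweep over the samples
            # basis terms phi[k-1, q] = x(n-q) * |x(n-q)|^(k-1), zero before the start
            phi = np.zeros((K, Q + 1), dtype=complex)
            for q in range(Q + 1):
                if n - q >= 0:
                    xq = x[n - q]
                    phi[:, q] = [xq * abs(xq) ** (k - 1) for k in range(1, K + 1)]
            e = np.sum(w * phi) - y[n]                 # (5) error of the MP output
            w = w - mu * np.conj(phi) * e              # (6) LMS parameter update
        m_cnt = nmse(mp_output(x, w, K, Q), y)         # (8) performance of this iteration
        if m_cnt < m_best:                             # case 1: new best, grow the rate
            m_best, mu = m_cnt, alpha1 * mu
        elif m_cnt < m_prev:                           # case 2: improving, keep the rate
            mu = alpha2 * mu
        else:                                          # case 3: degrading, shrink the rate
            mu = alpha3 * mu
        m_prev = m_cnt
    return w
```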
FIG. 8 is an example diagram of training an MP model under fixed learning rates provided by an embodiment of the present application. In power amplifier modeling, the ordinary LMS algorithm is run with several different fixed learning rates, and the NMSE is plotted as the number of training iterations increases. It can be seen that different learning rates lead to very different final results. Referring to FIG. 9, the LMS algorithm combined with the parameter setting of the embodiments of the present application is applied to the same power amplifier modeling task with different initial learning rates, and the NMSE is again plotted against the number of training iterations. FIG. 9 shows that even with different initial learning rates, the training performance quickly converges to the same level, and far fewer iterations are needed than for MP model training with a fixed learning rate. Moreover, the choice of the initial learning rate has no influence on the convergence speed of the learning rate in the present application, which reduces the impact of an inaccurate initial learning rate setting on the training of the communication processing model.
In another exemplary embodiment, when modeling the behavior of a radio frequency power amplifier in a wireless communication system, dynamic adjustment factors may be used to adjust the learning rate. The steps may be as follows:
(1) Initialize w_{k,q}, set the learning rate μ_1, and set initial learning rate adjustment factors α_1 = 1.3, α_2 = 1, α_3 = 0.9;
(2) Compute the performance measure of the initial model, M_best = M_0, using the NMSE of the initial model;
(3) Iteration count cnt = 1;
(4) n = 1;
(5) Compute the partial derivative of the objective function J(n) = |e(n)|^2 with respect to w_{k,q}, where e(n) = z(n) - y(n) and z(n) is the MP model output;
(6) Update the parameters: w_{k,q} = w_{k,q} - μ (x(n-q)|x(n-q)|^{k-1})^* e(n);
(7) n = n + 1; if n ≤ N, return to (5);
(8) Compute the performance measure M_cnt of the cnt-th iteration;
① if M_cnt is smaller than M_best, let M_best = M_cnt, μ_{cnt+1} = α_1 μ_cnt, α_1 = 1.1 α_1, α_3 = 1.1;
② if M_cnt is greater than M_best and M_cnt is smaller than M_{cnt-1}, then μ_{cnt+1} = α_2 μ_cnt;
③ if M_cnt is greater than M_best and M_cnt is greater than M_{cnt-1}, then μ_{cnt+1} = α_3 μ_cnt, α_3 = 0.9 α_3, α_1 = 1.3;
(9) cnt = cnt + 1, return to (4).
In this embodiment of the present application, the adjustment factors α_1 and α_3 change dynamically with the training process of the MP model.
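The only difference from the previous sketch is that the adjustment factors themselves are updated in branches ① and ③ of step (8). The following Python fragment illustrates that step as a standalone function; the factor update values are taken from the listing above, and the variable names are assumptions made for the example.

```python
def update_rate_dynamic(m_cnt, m_best, m_prev, mu, alpha1, alpha2, alpha3):
    """Step (8) with adjustment factors that vary with the training process."""
    if m_cnt < m_best:                        # branch 1: new historical best
        m_best, mu = m_cnt, alpha1 * mu
        alpha1, alpha3 = 1.1 * alpha1, 1.1    # grow faster next time, relax alpha3
    elif m_cnt < m_prev:                      # branch 2: better than the last iteration
        mu = alpha2 * mu
    else:                                     # branch 3: worse than the best and than the last
        mu = alpha3 * mu
        alpha3, alpha1 = 0.9 * alpha3, 1.3    # shrink faster next time, reset alpha1
    return m_best, mu, alpha1, alpha3
```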
On the basis of the above embodiments, FIG. 10 is a schematic structural diagram of a communication processing model provided by an embodiment of the present application. The communication processing model shown in FIG. 10 may be a Real-Valued Time-Delay Neural Network (RVTDNN) model used for radio frequency power amplifier modeling. The RVTDNN is a real-valued neural network that splits a complex signal into a real part and an imaginary part, avoiding the limitation that a conventional real-valued neural network cannot process complex data. The RVTDNN consists of two parts: the first part is a time-delay structure used to capture the memory characteristics of the power amplifier, and the second part is a conventional multilayer perceptron (MLP) network used to model the nonlinear characteristics of the power amplifier. In the figure, the input x is split into three components, the real part (real), the imaginary part (imag) and the magnitude (abs), and each component additionally includes 4 time-delay taps. These data are fed into the subsequent MLP network, which contains one hidden layer of 16 nodes with the tanh activation function. The parameters of this neural network are initialized, and the initialization parameters are denoted c_0. The NMSE is used as the measure of the fitting performance of the neural network, that is, M_n = NMSE(c_n). NMSE denotes the normalized mean squared error; a smaller value is better. The NMSE is computed as

$$\mathrm{NMSE} = \frac{\sum_i (\hat{y}_i - y_i)^2}{\sum_i y_i^2}$$

where ŷ_i is the output of the neural network for x_i.
The initial value of the learning rate is set to 0.001, the maximum learning rate is set to 0.1, and the minimum is set to 0.0001.
Fixed learning rate adjustment factors are set: α_1 = 1.03, α_2 = 1, α_3 = 0.99.
Training starts after the above steps are completed. The Adam optimization algorithm is used. An excerpt of the training records is shown in the following table:
Table 3 Training records 1 to 9

n=1  Mn=4.3162e-02  Mbest=1.2055e+00
n=2  Mn=1.8839e-02  Mbest=4.3162e-02
n=3  Mn=1.1451e-02  Mbest=1.8839e-02
n=4  Mn=8.4722e-03  Mbest=1.1451e-02
n=5  Mn=6.2968e-03  Mbest=8.4722e-03
n=6  Mn=4.7780e-03  Mbest=6.2968e-03
n=7  Mn=3.7248e-03  Mbest=4.7780e-03
n=8  Mn=2.7483e-03  Mbest=3.7248e-03
n=9  Mn=2.3303e-03  Mbest=2.7483e-03
Table 3 shows the training records from n=1 to n=9. It can be seen that at this stage the neural network performance measure decreases monotonically, so the learning rate keeps increasing.
Table 4 Training records 57 to 65

n=57  Mn=5.2272e-04  Mbest=3.9255e-04
n=58  Mn=4.2117e-04  Mbest=3.9255e-04
n=59  Mn=3.8949e-04  Mbest=3.9255e-04
n=60  Mn=5.3353e-04  Mbest=3.8949e-04
n=61  Mn=3.9073e-04  Mbest=3.8949e-04
n=62  Mn=3.8795e-04  Mbest=3.8949e-04
n=63  Mn=3.5915e-04  Mbest=3.8795e-04
n=64  Mn=4.2375e-04  Mbest=3.5915e-04
n=65  Mn=4.2229e-04  Mbest=3.5915e-04
Table 4 shows the training records of the RVTDNN model from n=57 to n=65. It can be seen that at this stage the neural network performance measure fluctuates up and down, and the learning rate sometimes increases, sometimes decreases, and sometimes stays unchanged: it stays unchanged at n = 58, 61 and 65, increases at n = 59, 62 and 63, and decreases at the other iterations. FIG. 11 is an example diagram of the training effect of a communication processing model provided by an embodiment of the present application. Referring to FIG. 11, over the whole training process the learning rate first increases quickly and then decays slowly until the configured minimum value. FIG. 12 is a comparison diagram of training effects of a communication processing model provided by an embodiment of the present application; it compares training of the RVTDNN model with the adaptive learning rate set by the parameter setting method of the embodiments of the present application against training with a fixed learning rate. Compared with the fixed learning rate, the performance metric under the adaptive learning rate converges faster and reaches better final convergence performance.
In an exemplary embodiment, FIG. 13 is a schematic structural diagram of another communication processing model provided by an embodiment of the present application. In a wireless communication system, the nonlinearity of the transmitter power amplifier distorts the transmitted signal and degrades communication performance. To avoid this, a communication processing model is used so that the original signal is amplified linearly. The communication processing model may be a digital pre-distortion (DPD) model: the transmit signal is first processed by the DPD module and then fed into the power amplifier, so that the power amplifier output is a linear amplification of the original signal. The memory polynomial model is a commonly used model for the DPD module, expressed as:

$$z(n) = \sum_{k=1}^{K}\sum_{q=0}^{Q} w_{k,q}\, x(n-q)\,|x(n-q)|^{k-1}$$

where z is the model output, w_{k,q} are the model coefficients, Q denotes the memory depth, and K denotes the nonlinearity order. The core of the DPD problem is to solve for the coefficients w_{k,q} of the DPD model. FIG. 13 shows an LMS algorithm that solves the DPD coefficients with an indirect learning architecture. Its training procedure is similar to that of a neural network, so the parameter setting method of the embodiments of the present application can be used. The ordinary LMS algorithm includes the following steps:
(1) Initialize w_{k,q} and set the learning rate μ;
(2) Iteration count cnt = 1;
(3) n = 1;
(4) Compute the error, e(n) = y(n)/G - x(n), where G is the amplification gain of the power amplifier;
(5) Update the parameters: w_{k,q} = w_{k,q} - μ (x(n-q)|x(n-q)|^{k-1})^* e(n);
(6) n = n + 1; if n ≤ N, return to (4);
(7) cnt = cnt + 1, return to (3).
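A compact Python sketch of this ordinary LMS loop for the DPD coefficients follows; the signal arrays, the gain G and the (K, Q+1) coefficient layout are placeholders chosen for illustration.

```python
import numpy as np

def dpd_lms(x, y, K, Q, G, mu=1e-3, n_iter=20):
    """Ordinary LMS estimation of DPD coefficients with e(n) = y(n)/G - x(n)."""
    x = np.asarray(x, dtype=complex)
    y = np.asarray(y, dtype=complex)
    w = np.zeros((K, Q + 1), dtype=complex)       # (1) initialize w_{k,q}
    for cnt in range(n_iter):                     # (2)/(7) outer iterations
        for n in range(len(x)):                   # (3)/(6) sweep over the samples
            e = y[n] / G - x[n]                   # (4) error against the scaled PA output
            for q in range(Q + 1):                # (5) update every coefficient
                if n - q >= 0:
                    xq = x[n - q]
                    for k in range(1, K + 1):
                        w[k - 1, q] -= mu * np.conj(xq * abs(xq) ** (k - 1)) * e
    return w
```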
The LMS algorithm combined with the parameter setting method of the embodiments of the present application may include the following steps:
(1) Initialize w_{k,q}, set the initial learning rate μ_1 and the fixed learning rate adjustment factors α_1, α_2, α_3, and set acceleration factors ν_1 = 1.01 and ν_3 = 0.999;
(2) Compute the performance measure of the initial model, M_best = M_0;
(3) Iteration count cnt = 1;
(4) n = 1;
(5) Compute the error, e(n) = y(n)/G - x(n);
(6) Update the parameters: w_{k,q} = w_{k,q} - μ (x(n-q)|x(n-q)|^{k-1})^* e(n);
(7) n = n + 1; if n ≤ N, return to (5);
(8) Compute the performance measure M_cnt of the cnt-th iteration;
a) if M_cnt is smaller than M_best, let M_best = M_cnt, μ_{cnt+1} = α_1 μ_cnt and α_1 = ν_1 α_1;
b) if M_cnt is greater than M_best and M_cnt is smaller than M_{cnt-1}, then μ_{cnt+1} = μ_cnt;
c) if M_cnt is greater than M_best and M_cnt is greater than M_{cnt-1}, then μ_{cnt+1} = α_3 μ_cnt and α_3 = ν_3 α_3;
(9) cnt = cnt + 1, return to (4).
FIG. 14 is an example diagram of the training effect of an LMS model provided by an embodiment of the present application. Referring to FIG. 14, the ordinary LMS algorithm is run on the digital predistortion model with several different fixed learning rates, and the NMSE is plotted as the number of training iterations increases. It can be seen that different learning rates lead to very different final results; the learning rate obtained by training in the related art is very unstable, which easily distorts the transmitted signal of the communication system and degrades communication quality. FIG. 15 is an example diagram of the training effect of another LMS model provided by an embodiment of the present application. Referring to FIG. 15, the LMS algorithm combined with the parameter setting method of the embodiments of the present application is applied to the same digital predistortion model with different initial learning rates; as the number of training iterations increases, the training performance quickly converges to the same level even with different initial learning rates. In the embodiments of the present application, the result is insensitive to the initial learning rate setting, which makes it easy for users to train the digital predistortion model. In this embodiment, the adjustment factors α_1 and α_3 change dynamically with the training process.
In another exemplary embodiment, in a wireless communication system, the signal sent by the transmitter is affected by multipath propagation when it passes through the wireless channel, and an equalization module is required at the receiver side to recover the original signal. Assume the transmitted signal is x, the received signal is y, and the channel impulse response is h; the process can be expressed as:

$$y(n) = \sum_{k} h(k)\, x(n-k)$$

Equalization is the process of recovering x from y. Equalization may use a linear model, for example:

$$\hat{x}(n) = \sum_{q=0}^{Q} w_q\, y(n-q)$$

where the model coefficients w can be solved with the LMS algorithm. Equalization may also use the RVTDNN model of the above embodiment, whose coefficients can be solved with the back propagation (BP) algorithm. Whichever model is used, the learning rate can be set with the parameter setting method proposed in the embodiments of the present application.
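For illustration, a linear equalizer of this form can be trained with LMS in the same way, with the learning rate adapted by the same rule. The sketch below assumes a known training sequence x (for example a pilot sequence) and reuses the hypothetical nmse helper from the earlier example; it is only an example of how the pieces fit together.

```python
import numpy as np

def train_equalizer(y, x, Q, mu=1e-3, alpha1=1.3, alpha2=1.0, alpha3=0.9, n_iter=30):
    """LMS training of equalizer taps w_q so that sum_q w_q * y(n-q) approximates x(n)."""
    y = np.asarray(y, dtype=complex)
    x = np.asarray(x, dtype=complex)
    w = np.zeros(Q + 1, dtype=complex)

    def equalize(w):
        out = np.zeros(len(y), dtype=complex)
        for q in range(Q + 1):
            yq = np.roll(y, q)      # y(n-q)
            yq[:q] = 0              # zero input before the start of the sequence
            out += w[q] * yq
        return out

    m_best = m_prev = nmse(equalize(w), x)        # initial performance measure
    for cnt in range(n_iter):
        for n in range(len(y)):
            taps = np.array([y[n - q] if n - q >= 0 else 0j for q in range(Q + 1)])
            e = np.dot(w, taps) - x[n]            # error against the known transmitted symbol
            w = w - mu * np.conj(taps) * e        # LMS tap update
        m_cnt = nmse(equalize(w), x)              # performance after this iteration
        if m_cnt < m_best:
            m_best, mu = m_cnt, alpha1 * mu       # new historical best: grow the rate
        elif m_cnt < m_prev:
            mu = alpha2 * mu                      # better than last round: keep (alpha2 = 1)
        else:
            mu = alpha3 * mu                      # degrading: shrink the rate
        m_prev = m_cnt
    return w
```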
FIG. 16 is a schematic structural diagram of a parameter setting apparatus provided by an embodiment of the present application. The apparatus can execute the parameter setting method provided by any embodiment of the present application and has the functional modules and beneficial effects corresponding to the method. The apparatus may be implemented in software and/or hardware and includes: a current parameter module 401, a training trend module 402 and a parameter adjustment module 403.
The current parameter module 401 is configured to determine current performance metric information of a communication processing model.
The training trend module 402 is configured to determine a performance change trend according to the performance metric information and historical performance metric information.
The parameter adjustment module 403 is configured to adjust the learning rate of the communication processing model based on the performance change trend and to retrain the communication processing model according to the learning rate.
In the embodiments of the present application, the current parameter module determines the performance metric information of the communication processing model, the training trend module determines the performance change trend according to the comparison between the performance metric information and the historical performance metric information, and the parameter adjustment module adjusts the learning rate of the communication processing model according to the performance change trend and retrains the model with the adjusted learning rate. This enables fast training of the communication processing model; dynamically adjusting the learning rate according to the performance change trend can improve the accuracy of the communication processing model and reduce its training time, thereby reducing the waiting delay of the communication system and enhancing communication efficiency.
For example, on the basis of the above embodiments, the performance metric information in the current parameter module 401 includes at least one of a mean squared error and a normalized mean squared error.
For example, on the basis of the above embodiments, the historical performance metric information in the training trend module 402 includes historical optimal performance metric information and previous-training performance metric information.
For example, on the basis of the above embodiments, the training trend module 402 includes:
a comparison execution unit, configured to determine a first comparison result between the performance metric information and the historical optimal performance metric information and a second comparison result between the performance metric information and the previous-training performance metric information;
a trend determination unit, configured to use the first comparison result and the second comparison result as the performance change trend.
For example, on the basis of the above embodiments, the parameter adjustment module 403 includes:
a first processing unit, configured to decrease the learning rate when the first comparison result is that the performance metric information is worse than the historical optimal performance metric information and the second comparison result is that the performance metric information is worse than the previous-training performance metric information;
a second processing unit, configured to increase the learning rate when the first comparison result is that the performance metric information is better than the historical optimal performance metric information.
For example, on the basis of the above embodiments, the parameter adjustment module 403 of the apparatus further includes:
a factor adjustment unit, configured to determine an adjustment factor corresponding to the performance change trend and update the learning rate using the adjustment factor.
For example, on the basis of the above embodiments, the apparatus further includes:
a parameter initialization module, configured to initialize the historical performance metric information and the learning rate of the communication processing model.
For example, on the basis of the above embodiments, the apparatus further includes:
a model use module, configured to determine a fitting relationship between the input voltage and the output voltage of a communication device according to the trained communication processing model.
For example, on the basis of the above embodiments, the communication processing model in the apparatus includes at least one of a communication power amplifier behavior model, a predistortion model, and an adaptive equalization model.
Figure 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application. The electronic device includes a processor 50, a memory 51, an input device 52, and an output device 53. The number of processors 50 in the electronic device may be one or more; in Figure 17, one processor 50 is taken as an example. The processor 50, memory 51, input device 52, and output device 53 in the electronic device may be connected through a bus or in other ways; in Figure 17, connection through a bus is taken as an example.
As a computer-readable storage medium, the memory 51 may be configured to store software programs, computer-executable programs, and modules, such as the modules corresponding to the parameter setting apparatus in the embodiments of the present application (the current parameter module 401, the training trend module 402, and the parameter adjustment module 403). The processor 50 runs the software programs, instructions, and modules stored in the memory 51 to execute the various functional applications and data processing of the electronic device, that is, to implement the parameter setting method described above. The computer-readable storage medium may be a non-transitory computer-readable storage medium.
The memory 51 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the electronic device, and the like. In addition, the memory 51 may include a high-speed random access memory and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some examples, the memory 51 may include memories located remotely relative to the processor 50, and these remote memories may be connected to the electronic device through a network. Examples of such a network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The input device 52 may be configured to receive input digital or character information and to generate key signal inputs related to user settings and function control of the electronic device. The output device 53 may include a display device such as a display screen.
An embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are configured to perform a parameter setting method, the method including:
determining current performance metric information of a communication processing model;
determining a performance change trend according to the performance metric information and historical performance metric information;
adjusting a learning rate of the communication processing model based on the performance change trend, and retraining the communication processing model according to the learning rate.
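Combining the three steps above, one possible end-to-end training loop is sketched below; train_one_round and evaluate stand for whatever training and metric routines the communication processing model uses, and the initial learning rate and factor values are arbitrary placeholders rather than values prescribed by the application.

```python
def train_with_adaptive_lr(train_one_round, evaluate, rounds=100,
                           lr=0.01, decrease=0.5, increase=1.2):
    # train_one_round(lr) performs one training pass at the given learning rate;
    # evaluate() returns the current performance metric (lower is better).
    best_metric = float("inf")   # initialized historical optimal performance metric
    prev_metric = float("inf")   # initialized previous-round performance metric
    for _ in range(rounds):
        train_one_round(lr)
        metric = evaluate()                       # step 1: current performance metric
        worse_than_best = metric > best_metric    # step 2: performance change trend ...
        worse_than_prev = metric > prev_metric    # ... expressed as two comparison results
        if worse_than_best and worse_than_prev:
            lr *= decrease                        # step 3: reduce the learning rate
        elif not worse_than_best:
            lr *= increase                        # step 3: increase the learning rate
            best_metric = metric
        prev_metric = metric
    return lr
```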
Through the above description of the implementations, those skilled in the art can clearly understand that the present application can be implemented by means of software and the necessary general-purpose hardware, and of course can also be implemented by hardware. Based on this understanding, the technical solution of the present application in essence, or the part that contributes to the related art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disc, and includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present application.
It is worth noting that, in the above apparatus embodiments, the units and modules included are only divided according to functional logic, but the division is not limited to the above as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for the convenience of distinguishing them from one another and are not used to limit the protection scope of the present application.
Those of ordinary skill in the art can understand that all or some of the steps in the methods disclosed above and the functional modules/units in the systems and devices may be implemented as software, firmware, hardware, and appropriate combinations thereof.
In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
The embodiments of the present application implement dynamic updating of the learning rate and improve the accuracy of the communication processing model, which can reduce the training time of the communication processing model and thus the waiting time of the communication system, enhancing communication efficiency and improving communication quality.

Claims (13)

  1. A parameter setting method, comprising:
    determining current performance metric information of a communication processing model;
    determining a performance change trend according to the performance metric information and historical performance metric information;
    adjusting a learning rate of the communication processing model based on the performance change trend, and retraining the communication processing model according to the learning rate.
  2. The method according to claim 1, wherein the performance metric information comprises at least one of mean square error and normalized mean square error.
  3. The method according to claim 1, wherein the historical performance metric information comprises historical optimal performance metric information and previous training performance metric information.
  4. The method according to claim 3, wherein determining the performance change trend according to the performance metric information and the historical performance metric information comprises:
    determining a first comparison result and a second comparison result of the performance metric information against the historical optimal performance metric information and the previous training performance metric information, respectively;
    using the first comparison result and the second comparison result as the performance change trend.
  5. The method according to claim 4, wherein adjusting the learning rate of the communication processing model based on the performance change trend comprises at least one of the following:
    in response to determining that the first comparison result is that the performance metric information is worse than the historical optimal performance metric information and that the second comparison result is that the performance metric information is worse than the previous training performance metric information, reducing the value of the learning rate;
    in response to determining that the first comparison result is that the performance metric information is better than the historical optimal performance metric information, increasing the value of the learning rate.
  6. The method according to claim 1, wherein adjusting the learning rate of the communication processing model based on the performance change trend comprises:
    determining an adjustment factor corresponding to the performance change trend, and updating the value of the learning rate using the adjustment factor.
  7. The method according to claim 6, wherein different performance change trends correspond to different adjustment factors.
  8. The method according to claim 1, further comprising:
    initializing the historical performance metric information and the learning rate of the communication processing model.
  9. The method according to claim 1, further comprising:
    determining a fitting relationship between an input voltage and an output voltage of a communication device according to the trained communication processing model.
  10. The method according to claim 1, wherein the communication processing model comprises at least one of a communication power amplifier behavior model, a digital predistortion model, and an adaptive equalization model.
  11. A parameter setting apparatus, comprising:
    a current parameter module, configured to determine current performance metric information of a communication processing model;
    a training trend module, configured to determine a performance change trend according to the performance metric information and historical performance metric information;
    a parameter adjustment module, configured to adjust a learning rate of the communication processing model based on the performance change trend, and to retrain the communication processing model according to the learning rate.
  12. An electronic device, comprising:
    one or more processors; and
    a memory configured to store one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the parameter setting method according to any one of claims 1-10.
  13. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the parameter setting method according to any one of claims 1-10.
PCT/CN2022/105171 2021-07-13 2022-07-12 Parameter setting method and apparatus, and electronic device and storage medium WO2023284731A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110791372.0 2021-07-13
CN202110791372.0A CN115696361A (en) 2021-07-13 2021-07-13 Parameter setting method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023284731A1 (en)

Family

ID=84919903

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/105171 WO2023284731A1 (en) 2021-07-13 2022-07-12 Parameter setting method and apparatus, and electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN115696361A (en)
WO (1) WO2023284731A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416690A (en) * 2018-01-19 2018-08-17 中国矿业大学 Load Forecasting based on depth LSTM neural networks
CN108734116A (en) * 2018-05-04 2018-11-02 江南大学 A kind of face identification method learning depth autoencoder network based on speed change
JP2020198135A (en) * 2018-10-09 2020-12-10 株式会社Preferred Networks Hyper parameter tuning method, device and program
CN112232386A (en) * 2020-09-27 2021-01-15 国网福建省电力有限公司莆田供电公司 Voltage sag severity prediction method based on support vector machine

Also Published As

Publication number Publication date
CN115696361A (en) 2023-02-03

Similar Documents

Publication Publication Date Title
Wu et al. Kernel recursive maximum correntropy
US20190095794A1 (en) Methods and apparatus for training a neural network
JP7451720B2 (en) Predistortion methods, systems, devices and storage media
CN107425929B (en) Non-auxiliary data equalization method for fading channel under Alpha stable distributed noise
US11450335B2 (en) Method and device for updating coefficient vector of finite impulse response filter
US9607627B2 (en) Sound enhancement through deverberation
WO2020166322A1 (en) Learning-data acquisition device, model learning device, methods for same, and program
US20190109581A1 (en) Adaptive filter method, system and apparatus
WO2023020289A1 (en) Processing method and apparatus for network model, and device and storage medium
CN110998723B (en) Signal processing device using neural network, signal processing method, and recording medium
US20240005166A1 (en) Minimum Deep Learning with Gating Multiplier
CN116560475A (en) Server fan control method and computer equipment
CN115410103A (en) Dam defect identification model rapid convergence method based on federal learning
WO2023284731A1 (en) Parameter setting method and apparatus, and electronic device and storage medium
Jia et al. Federated domain adaptation for asr with full self-supervision
Yang et al. Interval variable step-size spline adaptive filter for the identification of nonlinear block-oriented system
CN107018103B (en) Wavelet constant modulus blind equalization method based on adaptive step size monkey swarm optimization
JPWO2019044401A1 (en) Computer system realizing unsupervised speaker adaptation of DNN speech synthesis, method and program executed in the computer system
EP4138076A2 (en) Method and apparatus for determining echo, device and storage medium
WO2020015546A1 (en) Far-field speech recognition method, speech recognition model training method, and server
CN114614797B (en) Adaptive filtering method and system based on generalized maximum asymmetric correlation entropy criterion
Rupp et al. Supervised learning of perceptron and output feedback dynamic networks: A feedback analysis via the small gain theorem
JP5172536B2 (en) Reverberation removal apparatus, dereverberation method, computer program, and recording medium
CN106105032B (en) System and method for sef-adapting filter
Takizawa et al. Steepening squared error function facilitates online adaptation of Gaussian scales

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22841366

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE