Machine room temperature and humidity prediction method and system based on multiple model comparisons
Technical Field
The invention belongs to the field of machine room environment prediction, and particularly relates to a machine room temperature and humidity prediction method and system based on multiple model comparison.
Background
With the continuous expansion of computer applications in China, the computer rooms that support them have developed rapidly. The stability of the electronic equipment in a machine room during operation bears on the communication performance of the whole network. The performance of the electronic equipment is influenced by many factors, among which changes in temperature and humidity are a key factor. Proper temperature and humidity conditions must be maintained during operation and maintenance: temperature or humidity that is too high, too low, or changing too rapidly alters the electrical parameters of the electronic equipment, and thus affects the reliability and corrosion rate of the equipment in the machine room.
Therefore, a stable, accurate, simple and convenient temperature and humidity detection and early-warning method is urgently needed, so that the temperature and humidity of the machine room can be detected and predicted in real time during the operation and maintenance of an electronic information system machine room, the air-conditioning settings can be adjusted according to the temperature and humidity conditions in different time periods, and abnormality or even damage of the electronic equipment caused by abnormal temperature and humidity changes can be accurately prevented.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a machine room temperature and humidity prediction method and system based on multiple model comparisons, which can predict the change trend of the temperature and humidity of a target machine room in real time, so that the machine room environment can be maintained proactively.
In a first aspect, the invention provides a machine room temperature and humidity prediction method based on multiple model comparisons, which comprises the following steps:
S101, collecting temperature and humidity data of a machine room;
S103, modeling the temperature and humidity data by using a plurality of models;
S105, evaluating and comparing the plurality of models, including: determining a performance index system, based on the root mean square error and the mean absolute error, for evaluating model performance, and comprehensively comparing the models;
S107, determining an optimal model based on the comparison result;
S109, adjusting parameters of the optimal model and deploying it online.
Wherein the step S101 includes:
Temperature and humidity sensors are placed at each key location of the machine room, and the temperature and humidity data in the machine room are collected through these sensors.
Wherein, after the step S101, the method further comprises: and preprocessing the acquired temperature and humidity data.
Wherein, in step S103, the plurality of models at least include: a time sequence ARIMA model, a Support Vector Regression (SVR) model and a BP neural network model.
Step S103 specifically includes building the plurality of models for temperature and for humidity respectively.
Wherein the preprocessing comprises the following steps:
carrying out normalized conversion processing on the temperature and humidity data according to the formula:

$$X' = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$$

where $X$ is the original data, $X'$ is the normalized data, $X_{\max}$ is the maximum value in the sample data, and $X_{\min}$ is the minimum value.
The performance evaluation indexes are specifically as follows:

The root mean square error RMSE is the square root of the mean of the squared differences between the predicted numerical results and the actual numerical results; the smaller the value of RMSE, the better. The formula is:

$$RMSE = \sqrt{\frac{1}{n} \sum_{t=1}^{n} (\hat{y}_t - y_t)^2}$$

The mean absolute error MAE is the average of the absolute errors between the numerical results predicted by the model and the actual results, and is formulated as:

$$MAE = \frac{1}{n} \sum_{t=1}^{n} |\hat{y}_t - y_t|$$

where $n$ is the number of prediction time points, $y_t$ is the actual value of the index at time point $t$, and $\hat{y}_t$ is the predicted value of the index at time point $t$.
Wherein, the optimal model is a BP neural network model.
Parameters of the optimal model are adjusted, and the adjusted model parameters include the number of hidden nodes, the learning rate and the number of iterations.
In a second aspect, the present invention provides a machine room temperature and humidity prediction system based on multiple model comparisons, which includes:
the data acquisition module is used for acquiring temperature and humidity data of the machine room;
the model establishing module is used for establishing a model for the temperature and humidity data by using a plurality of models;
the evaluation comparison module is used for evaluating and comparing the plurality of models;
a model selection module that determines an optimal model based on the comparison result;
a parameter adjustment module for performing parameter adjustment on the optimal model.
Compared with the prior art, the invention models the temperature and humidity data and can predict the change trend of the temperature and humidity in the machine room in real time. The optimal temperature and humidity prediction model is obtained through comparison of multiple models, which improves the reliability of the prediction results. In addition, countermeasures can be taken in advance according to the prediction results, reducing the burden on management personnel and enhancing the safety of the machine room.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
fig. 1 is a flowchart illustrating a method for predicting temperature and humidity of a machine room based on comparison of multiple models according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a method for predicting temperature and humidity of a machine room based on comparison of multiple models according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating support vector regression according to an embodiment of the invention;
fig. 4 is a flowchart illustrating a method for predicting temperature and humidity of a machine room based on comparison of multiple models according to an embodiment of the present invention;
FIG. 5 is a flow diagram illustrating ARIMA model construction according to an embodiment of the invention;
FIG. 6 is a flow diagram illustrating SVR model construction according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a BP neural network model according to an embodiment of the present invention;
FIG. 8 is a flow diagram illustrating the construction of a BP neural network according to an embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating a machine room temperature and humidity prediction system based on multiple model comparisons according to an embodiment of the present invention; and
fig. 10 is a schematic diagram showing an electronic apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present invention to describe various elements, these elements should not be limited by these terms. These terms are used only to distinguish one element from another. For example, a first element could also be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of embodiments of the present invention.
Alternative embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Example one
Referring to fig. 1, the invention provides a machine room temperature and humidity prediction method based on multiple model comparisons, which includes the following steps:
s101, collecting temperature and humidity data of a machine room;
s103, modeling the temperature and humidity data by using a plurality of models;
s105, evaluating and comparing the multiple models;
s107, determining an optimal model based on the comparison result;
and S109, adjusting parameters of the optimal model, and deploying on line to complete prediction of the temperature and humidity of the machine room.
Example two
On the basis of the first embodiment, the present embodiment may further include the following:
in order to facilitate a clear understanding of the technical solutions of the embodiments of the present invention, those skilled in the art will now explain them in detail. In this embodiment, the step 101 may specifically include:
Temperature and humidity sensors are placed at each key location of the machine room, and the temperature and humidity data in the machine room are collected through these sensors. The number of temperature and humidity sensors can be selected as needed: in one application scenario, one sensor is arranged at each key location of the machine room; in another application scenario, a plurality of sensors are arranged at each key location, which improves the accuracy of the detected data. The mounting of the sensors can likewise be selected as needed; for example, a sensor can be fixed by a bracket or mounted directly on the electronic equipment.
In order to facilitate predicting the collected data and obtaining a result, the embodiment of the present invention may further include, after step S101: and preprocessing the acquired temperature and humidity data.
Further, the preprocessing may include the following:
The temperature and humidity data detected by the temperature and humidity sensors are subjected to normalized conversion processing according to the formula:

$$X' = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$$

where $X$ is the original data, $X'$ is the normalized data, $X_{\max}$ is the maximum value in the sample data, and $X_{\min}$ is the minimum value. The original data are the real sample data collected by the temperature and humidity sensors.
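As a minimal sketch of this normalization step (the function and variable names are illustrative, not from the specification):

```python
def min_max_normalize(samples):
    """Map raw sensor readings into [0, 1] via X' = (X - Xmin) / (Xmax - Xmin)."""
    x_min, x_max = min(samples), max(samples)
    span = x_max - x_min
    if span == 0:  # constant series: avoid division by zero
        return [0.0 for _ in samples]
    return [(x - x_min) / span for x in samples]

# Hypothetical machine room temperature readings (deg C)
temps = [21.5, 23.0, 22.1, 24.8, 20.9]
normalized = min_max_normalize(temps)
```

The minimum reading maps to 0 and the maximum to 1, so all values fall inside the unit interval as described above.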
In some application scenarios, the multiple models in the embodiment of the present invention may be common models used for prediction, and the multiple models in step S103 may at least include: a time-series ARIMA model, a Support Vector Regression (SVR) model and a BP neural network model. In addition, each of the models in step S103 is built separately for temperature and for humidity. In practical application, the optimal model for predicting temperature and humidity obtained according to the embodiment of the present invention is the BP neural network model (described in detail later).
In an application scenario, step S105 of the present invention may specifically include:
and determining a performance index system for evaluating the performance of the model, and comprehensively comparing a plurality of models.
Further, the embodiment of the invention adopts the root mean square error and the average absolute error as performance evaluation indexes of the prediction result. Further, the performance evaluation index in the embodiment of the present invention may specifically be:
The root mean square error RMSE (root mean squared error) is the square root of the mean of the squared differences between the predicted numerical results and the actual numerical results; the smaller the value of RMSE, the better. The formula is:

$$RMSE = \sqrt{\frac{1}{n} \sum_{t=1}^{n} (\hat{y}_t - y_t)^2}$$

The mean absolute error MAE (mean absolute error) is the average of the absolute errors between the numerical results predicted by the model and the actual results, and is formulated as:

$$MAE = \frac{1}{n} \sum_{t=1}^{n} |\hat{y}_t - y_t|$$

where $n$ is the number of prediction time points, $y_t$ is the actual value of the index at time point $t$, and $\hat{y}_t$ is the predicted value of the index at time point $t$.
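These two indexes can be sketched directly from their definitions (the helper names and sample values below are illustrative):

```python
import math

def rmse(actual, predicted):
    """Square root of the mean of squared prediction errors."""
    n = len(actual)
    return math.sqrt(sum((p - a) ** 2 for a, p in zip(actual, predicted)) / n)

def mae(actual, predicted):
    """Mean of the absolute prediction errors."""
    n = len(actual)
    return sum(abs(p - a) for a, p in zip(actual, predicted)) / n

y_true = [22.0, 22.5, 23.0, 23.4]  # hypothetical actual temperatures
y_pred = [21.8, 22.7, 22.9, 23.9]  # hypothetical model predictions
```

Because RMSE squares the errors before averaging, it penalizes large deviations more heavily than MAE, which is why the two indexes are considered together when comparing models.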
EXAMPLE III
On the basis of the above embodiment, the present embodiment may further include the following:
referring to fig. 2, an embodiment of the present invention provides a machine room temperature and humidity prediction method based on multiple model comparisons, where the method may include the following steps:
step (1), temperature and humidity sensors are placed at each key place of a machine room, and temperature and humidity data in a target machine room are collected through the temperature and humidity sensors;
step (2), preprocessing the acquired temperature and humidity data;
step (3), respectively constructing, on the acquired temperature and humidity data, a time-series ARIMA model, a Support Vector Regression (SVR) model, a BP neural network model and other models for temperature and for humidity;
step (4), evaluating and selecting the models, determining an index system for evaluating the performance of the models, comprehensively comparing a plurality of models, and determining the model with the highest accuracy in the models;
step (5), parameter adjustment is carried out on the model with the highest accuracy, and the optimal model is determined; wherein, the parameters of the model are adjusted according to the experimental result;
and step (6), outputting the final model and deploying it online to predict on the temperature and humidity data of the machine room collected in real time. The predicted result is the temperature and humidity of the machine room over a future period.
According to the machine room temperature and humidity prediction method based on multiple model comparisons, in step (2), the temperature and humidity data are subjected to normalized conversion processing; normalization simply maps the collected sample data into the range between 0 and 1, which facilitates processing.
Example four
On the basis of the third embodiment, the present embodiment may further include the following:
in the step (3), in order to compare the prediction effects of the different models, the different models are constructed according to the collected temperature and humidity data, wherein the constructed algorithm models are respectively as follows:
1) Autoregressive Integrated Moving Average model (ARIMA). ARIMA is a generic term for a class of models; the parameters p, d and q are commonly used to identify a specific model, denoted ARIMA(p, d, q). The model is established on a stationary time series, and regresses the current value on lagged values of the dependent variable together with the current and lagged values of a random error term. The ARIMA(p, d, q) model has the expression:

$$y_t = \mu + \sum_{i=1}^{p} \gamma_i y_{t-i} + \epsilon_t + \sum_{i=1}^{q} \theta_i \epsilon_{t-i}$$

where $y_t$ denotes the variable after d-order differencing. The first part is the constant $\mu$; the second part, $\sum_{i=1}^{p} \gamma_i y_{t-i}$, is the autoregressive model; the third part, $\epsilon_t + \sum_{i=1}^{q} \theta_i \epsilon_{t-i}$, is the moving average. The meaning of the second part is that the value at the current time point is regressed on the values at past time points; the autoregressive model first determines an order p, meaning that the current value is predicted using the historical values of the past p periods, where $\gamma_i$ (i = 1, ..., p) are the autocorrelation coefficients. The meaning of the third part is that the value at the current time point is regressed on the prediction errors at several past time points; the moving average model is concerned with the accumulation of error terms in the autoregressive model (prediction error = predicted value − true value) and can effectively eliminate random fluctuations in the prediction, where $\theta_i$ (i = 1, ..., q) are the coefficients of the moving average equation, q is its order, and $\epsilon_t$ is the prediction error term.
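To make the expression concrete, the following is a minimal sketch of a one-step forecast on an already-differenced series; the coefficients are arbitrary illustrative values rather than fitted ones, and the current error term is taken as zero in expectation when forecasting:

```python
def arma_one_step(history, errors, mu, gamma, theta):
    """One-step ARMA forecast: mu + sum(gamma_i * y_{t-i}) + sum(theta_i * e_{t-i}).

    history: past values y_{t-1}, y_{t-2}, ... (most recent first)
    errors:  past prediction errors e_{t-1}, e_{t-2}, ... (most recent first)
    The current-period error e_t is unknown at forecast time, so its
    expectation (zero) is used.
    """
    ar = sum(g * y for g, y in zip(gamma, history))
    ma = sum(t * e for t, e in zip(theta, errors))
    return mu + ar + ma

# p = 2, q = 1, with illustrative coefficients
pred = arma_one_step(history=[0.5, 0.3], errors=[0.1],
                     mu=0.2, gamma=[0.6, 0.2], theta=[0.4])
```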
2) Support Vector Regression (SVR) seeks a regression equation y = g(x) to fit all sample points within a tolerable range; the optimal hyperplane sought is not the one that most clearly divides the sample boundaries, but one that minimizes the total deviation between the sample points and the hyperplane.
Specifically, taking the basic principle of the SVR learning model as the theoretical basis, a training sample set is given:

$$D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}$$

where $x_i$ denotes a sample. It is desirable to learn a regression model such that $f(x)$ is as close as possible to $y$; the regression model formula is:

$$f(x) = \omega^T x + b$$

where $\omega^T = (\omega_1, \omega_2, \ldots, \omega_d)$ is the normal vector, which determines the direction of the hyperplane, and $b$ is a displacement term, which determines the distance between the hyperplane and the origin.

For a sample (x, y), it is assumed that a deviation of at most $\epsilon$ between the model output $f(x)$ and the true output $y$ can be tolerated; the loss is calculated only when the absolute value of the difference between $f(x)$ and $y$ is greater than $\epsilon$. Referring to fig. 3, this corresponds to constructing an interval zone of width $2\epsilon$ centered on $f(x)$; if a training sample falls within this interval zone (A in fig. 3), the prediction for it is considered correct.
After the model is built, it needs to be solved; the solving process is the SVR problem, which can be formalized as:

$$\min_{\omega, b} \; \frac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{m} \ell_\epsilon\big(f(x_i) - y_i\big)$$

where $\ell_\epsilon$ is the $\epsilon$-insensitive loss function shown in fig. 3. Further, introducing the slack variables $\xi_i$ and $\hat{\xi}_i$, the SVR problem expression can be rewritten as:

$$\min_{\omega, b, \xi_i, \hat{\xi}_i} \; \frac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{m} (\xi_i + \hat{\xi}_i)$$

subject to $f(x_i) - y_i \le \epsilon + \xi_i$, $y_i - f(x_i) \le \epsilon + \hat{\xi}_i$, and $\xi_i \ge 0$, $\hat{\xi}_i \ge 0$, where $C$ is the penalty factor, representing the degree of importance attached to outliers.
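The ε-insensitive loss at the heart of this formulation can be sketched as follows (the helper name and sample residuals are illustrative):

```python
def eps_insensitive_loss(residual, eps):
    """epsilon-insensitive loss: zero inside the 2*eps tube, linear outside it."""
    return max(0.0, abs(residual) - eps)

# Residuals f(x) - y for a few hypothetical samples, with eps = 0.5
losses = [eps_insensitive_loss(r, eps=0.5) for r in (-1.2, -0.3, 0.0, 0.4, 0.9)]
```

Only the samples falling outside the tube of half-width ε contribute a loss, which is what the slack variables measure in the rewritten problem.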
3) The BP neural network, i.e. the error back-propagation (Error Back Propagation) algorithm, is a back-propagation learning neural network comprising an input layer, an output layer and a hidden layer. Owing to its structural characteristics, the learning process of the BP neural network comprises several hierarchical structures, and it belongs to the class of neural networks whose error propagates from back to front. The BP neural network algorithm maps the nonlinear relation between input data and output data by learning from the data. It is a supervised training model, and its learning process on the input data comprises two parts: forward processing of the input data and backward error correction.
In order to further analyze the prediction effect of the machine learning and neural network prediction models on the sample data, the root mean square error (RMSE) and the mean absolute error (MAE) are used as performance evaluation indexes of the prediction results, and the values of the two indexes are considered together to determine the finally selected model. Under these evaluation indexes, the MAE and RMSE should be as small as possible, that is, the errors between the predicted values and the actual values should be as small as possible.
In an application scenario, in step (4), for several well-constructed prediction models, the constructed evaluation index system is as follows:
The root mean square error RMSE between the predicted numerical results and the actual values; the smaller its value, the better:

$$RMSE = \sqrt{\frac{1}{n} \sum_{t=1}^{n} (\hat{y}_t - y_t)^2}$$

The mean absolute error MAE between the numerical results predicted by the model and the actual results:

$$MAE = \frac{1}{n} \sum_{t=1}^{n} |\hat{y}_t - y_t|$$

where $n$ is the number of prediction time points, $y_t$ is the actual value of the index at time point $t$, and $\hat{y}_t$ is the predicted value of the index at time point $t$.
EXAMPLE five
On the basis of the above embodiment, the present embodiment may further include the following:
referring to fig. 4, an embodiment of the present invention provides a machine room temperature and humidity prediction method based on multiple model comparisons, where the method specifically includes the following steps:
and acquiring data, and collecting temperature and humidity environmental data of the machine room for each acquisition point in the machine room.
And data preprocessing, namely preprocessing the acquired data, filtering dirty data, filling missing values, removing noise in the data and performing normalization processing.
Prediction model construction: the machine room temperature and humidity data in a determined time period (for example, the past 2 years) are taken as the training data set, and the machine room temperature and humidity data in another time period with a known temperature and humidity change result (for example, the 2 months after that period) are taken as the test data set. In order to compare the prediction effects of different models, ARIMA, SVR and BP neural networks are adopted for modeling respectively. Wherein:
1) Modeling of the ARIMA model. As shown in fig. 5, the ARIMA model is not one specific model but the general name of a series of models, selected according to the experimental environment. Specifically, a stationarity test is performed on the time series to determine whether it is stationary; if it is non-stationary, further data processing (taking logarithms, differencing, and the like) is required to obtain a stationary series. Non-white-noise detection is then carried out on the stationary series; if the series is not a white noise series and exhibits correlation, a suitable ARIMA model is further selected for fitting. Since ARIMA is a general term for a series of models determined by three parameters, the values of p, d and q must be selected for each time series to fit the model. The q and p values of the model are usually identified by the ACF or PACF.
Among them, ACF (autocorrelation function) reflects a linear correlation between a time-series observation and its past observations.
PACF (partial autocorrelation function), describes the linear correlation between a time series of observations and their past observations given an intermediate observation.
The ACF and PACF are used to determine the model orders q and p, model fitting is performed on this basis, and the estimation result obtained by the model is checked and diagnosed to verify whether the selected model is appropriate. If so, the model is considered reasonable; otherwise, an effective model is selected again. The model is then used for prediction.
Further, the values of p, d and q adopted in the embodiment of the invention are 6, 1 and 2 respectively for model fitting.
2) Construction of the SVR prediction model (for the formulas, refer to the fourth embodiment). The choice of kernel function greatly affects the prediction accuracy; the kernel function selected in this embodiment is the RBF (Radial Basis Function) kernel, which reduces the complexity of the function, and the penalty coefficient for error tolerance is set to 1e3 (i.e., 1 × 10³).
Referring to fig. 6, the construction process of the SVR prediction model is as follows: the data are read and divided into a training data set and a test data set; an objective function is established using the training samples in the training data set; parameters are then selected, the kernel function in the embodiment of the invention being RBF, the penalty factor 1e3 and gamma 0.01. The objective function is solved, the parameters w and b obtained from the solution are substituted into the regression function f(x), and predictions are made on the test samples of the test data set. The deviation between the true value and the expected value is judged according to the objective function, and the process is repeated until an ideal target error value is obtained.
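Assuming scikit-learn, the SVR configuration described above (RBF kernel, C = 1e3, gamma = 0.01) might be sketched as follows; the synthetic data stand in for the real machine room samples:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Synthetic stand-in: a noisy linear relation between feature and target
X = rng.uniform(0.0, 10.0, size=(200, 1))
y = 0.5 * X.ravel() + rng.normal(0.0, 0.05, 200)

# RBF kernel with the penalty factor and gamma used in this embodiment
svr = SVR(kernel="rbf", C=1e3, gamma=0.01)
svr.fit(X[:150], y[:150])        # train on the training data set
pred = svr.predict(X[150:])      # predict on the test data set
test_mae = float(np.mean(np.abs(pred - y[150:])))
```

The solver internally handles the dual formulation of the objective function; only the kernel and penalty parameters from this embodiment need to be supplied.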
3) The BP neural network is a simple three-layer BP neural network with a single hidden layer in the middle, as shown in fig. 7. In this embodiment, the parameters applied are as follows: the number of input layer nodes is set to 3, the number of hidden layer nodes is set to 7, and the number of output layer nodes is set to 1. The logsig function is adopted for data transfer conversion between the input layer and the hidden layer, the purelin function is adopted for the data output of the output layer, the momentum coefficient is 0.8, and the learning rate is 0.01.
Specifically, referring to fig. 8, the specific steps of constructing the prediction model by using the BP algorithm are as follows:
and reading data, dividing the data sample into a training sample set and a testing sample set.
Selecting the transfer functions of the hidden layer and the output layer. According to the principle of the BP algorithm, as input data passes from the network input layer to the network output layer, transfer functions are needed to convert the form of the data at each layer, and different transfer functions set between the layers give different network performance. According to practical analysis, the logsig function is adopted between the input layer and the hidden layer for data transfer conversion, and the purelin function is adopted by the output layer for data output.
Planning the important parameters that influence model performance. The BP algorithm is prone to a local optimum problem during iterative learning on the sample data of the training sample set: the output error of the network stays at a constant value and the correction capability of the parameters in the network is weak, while the prediction error still cannot meet the error target set for the training data. Therefore a momentum coefficient mc is added to the error adjustment of the data signal; generally mc ∈ [0, 1]. The learning rate lr reasonably adjusts the gradient descent process in the BP algorithm; if lr is set too large or too small, the convergence of the network is poor. lr should therefore be set so that, during the training and learning of the data samples, the error presents a decreasing trend after a period of iteration, allowing the network to end the convergence process effectively; generally lr ∈ [0.01, 0.8], and in this embodiment lr = 0.01.
After the relevant parameters are set based on the above steps, the BP neural network model is constructed. From the theoretical knowledge of the BP neural network, a BP network is generally built with three kinds of layers: an input layer, one or more hidden layers and an output layer. Since setting multiple hidden layers increases the complexity of the network and slows down the computational efficiency of the prediction model, the BP neural network in this embodiment is set to a single-hidden-layer structure. The input layer nodes of the BP neural network are determined by the dimensions of the data samples; in this embodiment the number of input layer nodes is set to 3, and the number of output layer nodes of the BP network is set to 1. For the hidden layer of the BP network, the quality of the network prediction performance must be considered when setting the number of nodes: the number of hidden layer nodes has an important influence on network prediction, and if it is set unreasonably, the deviation between the predicted value and the true value becomes large. The following formula is usually adopted as the criterion for selecting the number of nodes:

$$h = \sqrt{m + n} + a$$

where $h$ is the number of hidden layer nodes, $m$ is the number of input nodes, $n$ is the number of output nodes, and $a$ is a random value in the range [1, 10]. With m = 3, n = 1 and a taken here as 5, h = 7, i.e. the number of hidden layer nodes is set to 7. The present embodiment therefore uses the input-hidden-output three-layer network structure shown in fig. 7, i.e. a network model of the form 3-7-1.
Carrying out the training and learning process on the training data set and the test data set using the 3-7-1 network structure: the deviation between the true value and the expected value is judged according to the objective function, and the process is repeated until an ideal target error value is obtained. This process is also the core link of the BP algorithm.
After the learning process on the data samples is completed, the final prediction result on the test sample set is obtained.
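The 3-7-1 training loop described above can be sketched from scratch with NumPy; the logsig/purelin transfer functions, momentum coefficient 0.8 and learning rate 0.01 follow this embodiment, while the synthetic data and initialization are illustrative assumptions:

```python
import numpy as np

def logsig(z):
    """Log-sigmoid transfer function used between input and hidden layer."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)

# 3-7-1 architecture, learning rate 0.01 and momentum 0.8 as in this embodiment
n_in, n_hidden, n_out = 3, 7, 1
W1 = rng.normal(0.0, 0.5, (n_hidden, n_in)); b1 = np.zeros((n_hidden, 1))
W2 = rng.normal(0.0, 0.5, (n_out, n_hidden)); b2 = np.zeros((n_out, 1))
lr, mc = 0.01, 0.8
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

# Synthetic stand-in: predict a value from the previous three normalized readings
X = rng.uniform(0.0, 1.0, (n_in, 64))
T = X.mean(axis=0, keepdims=True)  # toy target, shape (1, 64)

def forward(X):
    H = logsig(W1 @ X + b1)  # input -> hidden: logsig
    Y = W2 @ H + b2          # hidden -> output: purelin (identity)
    return H, Y

def mse(Y):
    return float(np.mean((Y - T) ** 2))

loss_before = mse(forward(X)[1])

for _ in range(2000):
    H, Y = forward(X)
    E = Y - T                          # output-layer error
    dW2 = E @ H.T / X.shape[1]
    db2 = E.mean(axis=1, keepdims=True)
    dH = (W2.T @ E) * H * (1.0 - H)    # backpropagate through logsig
    dW1 = dH @ X.T / X.shape[1]
    db1 = dH.mean(axis=1, keepdims=True)
    # gradient descent with momentum, as described for mc and lr above
    vW2 = mc * vW2 - lr * dW2; vb2 = mc * vb2 - lr * db2
    vW1 = mc * vW1 - lr * dW1; vb1 = mc * vb1 - lr * db1
    W2 += vW2; b2 += vb2; W1 += vW1; b1 += vb1

loss_after = mse(forward(X)[1])
```

The forward pass and error back-propagation mirror the two-part learning process (forward input processing, backward error correction) described for the BP network.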
Model evaluation and selection: the same temperature and humidity data from a time period with a known temperature and humidity change result are taken as the input stream of each model, and several learning prediction experiments are carried out accordingly to predict the temperature and humidity environment of the machine room.
Table 1. Comparison of the prediction performance of the models
As can be seen from Table 1, both the RMSE and the MAE of the BP neural network model are smaller than those of the other two models; its experimental prediction error is smaller and its prediction performance is higher, so the BP neural network model is selected for prediction.
Model optimization: the parameters of the selected BP neural network model are adjusted to determine the optimal BP neural network model.
Model deployment: the final prediction model is deployed online, and the variation trend of the temperature and humidity in the machine room is predicted in real time.
Example six
On the basis of the fifth embodiment, the present embodiment may further include the following:
in the above embodiment, the determined model with the highest accuracy is the BP neural network model.
In step (5), the parameters of the selected BP neural network model are adjusted to improve its prediction performance; the parameters to be adjusted may include the number of hidden nodes, the learning rate, and the number of iterations. Parameter-adjustment training aims to make the difference between the predicted result and the true value as small as possible; the optimal model is obtained by training while adjusting the number of hidden nodes, the learning rate, and the number of iterations. The training rules are as follows:
1. Learning rate: in the embodiment of the invention, a value at which the cost on the training data immediately decreases, rather than oscillating or increasing, is sought and used as an estimate of the threshold for η; this magnitude need not be determined with great accuracy. If the cost decreases during the first rounds of training, η can be increased gradually until a value is found at which the cost begins to oscillate or increase within the first rounds; conversely, if the cost curve begins to oscillate or increase, η is repeatedly halved from that threshold until a setting is found at which the cost drops in the initial rounds, and that value is taken as the learning rate. The training data is used here because the main purpose of the learning rate is to control the step size of gradient descent, and monitoring the training cost is the best way to detect when the step size is too large;
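The halving rule above can be sketched as a small search loop. `toy_curve` is a hypothetical stand-in for running a few epochs of real training at a trial learning rate; it mimics gradient descent on f(x) = x², whose cost contracts by |1 − 2η| per step.

```python
def cost_decreases(costs):
    # "Cost drops in the initial rounds" = strictly lower at the end
    # of the first few epochs than at the start.
    return costs[-1] < costs[0]

def pick_eta(train_cost_curve, eta=1.0, min_eta=1e-6):
    """Halve eta from the estimated threshold until the cost drops."""
    while eta > min_eta and not cost_decreases(train_cost_curve(eta)):
        eta /= 2.0
    return eta

def toy_curve(eta, rounds=5):
    # Gradient descent on f(x) = x^2: the cost shrinks only when
    # the per-step factor |1 - 2*eta| is below 1.
    c, costs = 1.0, []
    for _ in range(rounds):
        costs.append(c)
        c = c * abs(1 - 2 * eta)
    return costs
```

Starting from eta = 1.0 the toy cost never falls, so one halving lands on a rate at which the cost drops in the initial rounds, matching the rule of "taking half the threshold".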
2. Number of iterations: it must first be made clear what "classification accuracy no longer increasing" means, so that early stopping can be applied correctly. The classification accuracy may still jitter or oscillate even while the overall trend improves; if training stops the moment accuracy first declines, better solutions will be missed. A better rule is to terminate only when the classification accuracy has not increased for some period of time. It is suggested, as one comes to understand the training behaviour of the network more deeply, to use a "no improvement in 10 rounds" rule at first and then gradually lengthen the window, e.g. terminating after 20 rounds without improvement, then 30 rounds, and so on;
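The "no improvement for N rounds" rule can be sketched as follows: training halts only after the accuracy has failed to reach a new best for `patience` consecutive epochs, so short-lived dips or oscillations do not trigger a premature stop. The accuracy series below is illustrative.

```python
def early_stop_epoch(accuracies, patience=10):
    """Return the epoch at which training would stop under the
    'no new best accuracy for `patience` epochs' rule."""
    best, best_epoch = float("-inf"), 0
    for epoch, acc in enumerate(accuracies):
        if acc > best:
            best, best_epoch = acc, epoch    # new best: reset the window
        elif epoch - best_epoch >= patience:
            return epoch                     # window exhausted: stop
    return len(accuracies) - 1               # trained to the end

# Accuracy rises, dips once, peaks at epoch 4, then plateaus; with
# patience=3 training stops three epochs after the peak, not at the
# first dip at epoch 3.
accs = [0.60, 0.70, 0.75, 0.74, 0.76, 0.75, 0.75, 0.75, 0.75]
```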
3. Selection of the hidden layer: the number of hidden nodes is given by h = √(m + n) + a, where h is the number of hidden layer nodes, m is the number of input layer nodes, n is the number of output layer nodes, and a is an adjusting constant between 1 and 10.
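As a quick check of this empirical rule, for the 3-input, 1-output network of the earlier embodiment, choosing a = 5 reproduces the 7 hidden nodes used there:

```python
import math

def hidden_nodes(m, n, a):
    """Empirical hidden-layer size: h = sqrt(m + n) + a."""
    assert 1 <= a <= 10, "a is an adjusting constant between 1 and 10"
    return round(math.sqrt(m + n) + a)
```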
Example seven
Referring to fig. 9, the embodiment provides a machine room temperature and humidity prediction system based on comparison of multiple models, which may include:
the data acquisition module is used for acquiring temperature and humidity data of the machine room;
the model establishing module is used for establishing a model for the temperature and humidity data by using a plurality of models;
the evaluation comparison module is used for evaluating and comparing the plurality of models;
a model selection module for determining an optimal model based on the comparison result;
a parameter adjustment module for performing parameter adjustment on the optimal model.
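The five modules above can be sketched as one pipeline class; all names here are illustrative, not from the patent, and the acquisition step is left abstract since it depends on the sensors deployed in the machine room.

```python
class RoomClimatePredictor:
    """Skeleton mapping the system's modules onto methods."""

    def __init__(self, models):
        self.models = models                 # candidate models, name -> object

    def acquire(self):
        # Data acquisition module: read temperature/humidity sensors.
        raise NotImplementedError("hardware-specific sensor access")

    def evaluate(self, data, metric):
        # Model building + evaluation/comparison modules: score each
        # candidate on the same data with the given error metric.
        return {name: metric(model, data) for name, model in self.models.items()}

    def select(self, scores):
        # Model selection module: keep the lowest-error candidate.
        return min(scores, key=scores.get)
```

The parameter adjustment module would then operate on the selected model before deployment, as described in the optimization step above.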
Example eight
Referring to fig. 10, the present embodiment further provides an electronic device 900, where the electronic device 900 includes: at least one processor 901; and a memory 902 communicatively coupled to the at least one processor 901, wherein the memory 902 stores instructions executable by the at least one processor 901 to cause the at least one processor 901 to perform the method steps described in the above embodiments.
Example nine
The disclosed embodiments provide a non-volatile computer storage medium having stored thereon computer-executable instructions that may perform the method steps as described in the embodiments above.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The foregoing describes preferred embodiments of the present invention; it is intended to illustrate, not to limit, the spirit and scope of the invention, which is defined by the appended claims and includes all modifications, substitutions, and alterations falling within that spirit and scope.