CN114692888A - System parameter processing method, device, equipment and storage medium - Google Patents


Info

Publication number: CN114692888A
Application number: CN202011625722.8A
Authority: CN (China)
Prior art keywords: model, parameter, parameters, performance information, participants
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 林江淼, 黄启军, 黄铭毅, 陈瑞钦, 刘玉德
Current assignee: WeBank Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: WeBank Co Ltd
Priority date: the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed
Application filed by WeBank Co Ltd
Priority to CN202011625722.8A
Publication of CN114692888A

Classifications

    • G06N 20/00: Machine learning (G: Physics; G06: Computing; calculating or counting; G06N: Computing arrangements based on specific computational models)
    • G06F 17/18: Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis (G06F: Electric digital data processing; G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions; G06F 17/10: Complex mathematical operations)
    • G06N 3/08: Learning methods (G06N 3/00: Computing arrangements based on biological models; G06N 3/02: Neural networks)

Abstract

The invention discloses a system parameter processing method, device, equipment, and storage medium. The method is applied to a first participant among a plurality of participants in multi-party secure computation and comprises the following steps: determining at least one training sample, where each training sample comprises system parameters and the system performance information observed when computation is performed under those system parameters; constructing a model for predicting system performance information from system parameters; and federally training the model, together with at least some of the other participants, according to the at least one training sample, where the trained model is used by any of the participants to adjust its system parameters. By constructing and training a model that predicts system performance information from system parameters and using it to adjust the system parameters of any participant, the invention automates the tuning of system parameters and improves tuning efficiency.

Description

System parameter processing method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a system parameter processing method, device, equipment, and storage medium.
Background
With the continuous development of computer technology and big-data processing, federated learning is applied ever more widely. Federated learning presents not only many difficulties in algorithm design but also many problems to be solved from a computer-engineering perspective.
Because federated learning involves distributed computing, distributed storage, and cross-site transport at the system-architecture level, its system complexity is considerable. The distributed framework underlying federated learning (hereinafter, the framework) is an offline distributed training system; to adapt to the differing hardware environments of different users, it is designed as a parameterized system exposing a large number of configurable parameters. A user may adjust the framework parameters during use to achieve optimal performance or maximum stability for the configuration of the user's machine.
Currently, the parameters of the distributed system underlying federated learning are typically adjusted manually by the user based on experience. However, because system complexity is high and the configurable parameters are numerous, manual tuning is time-consuming, its effect is hard to guarantee, and its efficiency is low.
Disclosure of Invention
The invention mainly aims to provide a system parameter processing method, device, equipment, and storage medium that construct a system-parameter tuning model and automate system parameter tuning so as to improve tuning efficiency.
In order to achieve the above object, the present invention provides a system parameter processing method applied to a first participant in a plurality of participants participating in multi-party secure computing, the method including:
determining at least one training sample, wherein each training sample comprises system parameters and corresponding system performance information when calculation is carried out under the system parameters;
constructing a model for predicting system performance information according to system parameters;
and federally training the model, together with at least some of the other participants, according to the at least one training sample, where the trained model is used by any of the participants to adjust its system parameters.
In one possible implementation, federally training the model together with at least some of the other participants according to the at least one training sample includes:
federally training the model together with at least some of the other participants according to the at least one training sample;
repeatedly executing the following operations until a training-end condition is met:
adjusting the system parameters of the first participant according to the trained model;
judging whether the system performance information corresponding to the adjusted system parameters is better than the system performance before adjustment;
if so, continuing the federated training of the model together with at least some of the other participants;
if not, determining that the training-end condition is met.
In one possible implementation, adjusting the system parameters of the first participant according to the trained model includes:
randomly generating a plurality of parameter adjusting schemes, each parameter adjusting scheme comprising system parameters to be input to the model;
determining system performance information corresponding to each parameter adjusting scheme according to the model;
selecting a parameter adjusting scheme from the plurality of parameter adjusting schemes according to system performance information corresponding to each parameter adjusting scheme;
adjusting system parameters of the first participant according to the selected parameter adjustment scheme.
In a possible implementation manner, selecting a parameter adjusting scheme from the multiple parameter adjusting schemes according to system performance information corresponding to each parameter adjusting scheme includes:
selecting a parameter adjusting scheme with system performance information meeting preset conditions from the plurality of parameter adjusting schemes as an alternative scheme according to the system performance information corresponding to each parameter adjusting scheme;
if there are multiple alternative schemes, for each alternative scheme, computing a preset data set based on the system parameters of that scheme and detecting the corresponding system performance information during the computation;
and selecting a parameter adjusting scheme from the multiple alternative schemes according to the system performance information corresponding to each alternative scheme obtained through detection.
In one possible implementation, continuing the federated training of the model together with at least some of the other participants includes:
constructing a new training sample according to the system performance information obtained by detection;
and federally training the model together with at least some of the other participants according to the new training sample.
In one possible implementation, the model is a linear regression model; the system parameters corresponding to the model comprise environment parameters and distributed parameters;
wherein the environmental parameter comprises at least one of: CPU information, memory information and hard disk information;
the distributed parameters include at least one of: thread pool information, network packet information, and retry latency.
In one possible implementation, the method further includes:
obtaining a model which is constructed and trained by at least part of other participants and used for predicting system performance information according to system parameters;
aggregating the obtained model and the model constructed and trained by the first participant to obtain an aggregated model;
wherein the model for any of the plurality of parties to adjust system parameters is the aggregated model.
The invention also provides a system parameter processing method, which comprises the following steps:
obtaining a model for predicting system performance information according to system parameters, wherein the model is obtained based on any one of the methods;
generating at least one parameter adjusting scheme, each parameter adjusting scheme comprising system parameters to be input to the model;
and obtaining system performance information corresponding to each parameter adjusting scheme according to the model, and adjusting system parameters according to the system performance information of each parameter adjusting scheme.
The invention also provides a system parameter processing device, comprising:
the training sample determining module is used for determining at least one training sample, wherein each training sample comprises system parameters and corresponding system performance information when calculation is carried out under the system parameters;
the model construction module is used for constructing a model for predicting system performance information according to system parameters;
and the model training module is used for federally training the model together with at least some of the other participants according to the at least one training sample, where the trained model is used by any of the participants to adjust system parameters.
The invention also provides a system parameter processing device, comprising:
an obtaining module, configured to obtain a model for predicting system performance information according to system parameters, where the model is obtained based on the apparatus according to any one of the preceding claims;
a parameter adjusting scheme generation module for generating at least one parameter adjusting scheme, each parameter adjusting scheme comprising system parameters to be input to the model;
and the parameter adjusting module is used for obtaining the system performance information corresponding to each parameter adjusting scheme according to the model and adjusting the system parameters according to the system performance information of each parameter adjusting scheme.
The present invention also provides a system parameter processing apparatus, including: a memory, a processor and a system parameter processing program stored on the memory and executable on the processor, the system parameter processing program when executed by the processor implementing the steps of the system parameter processing method according to any one of the preceding claims.
The present invention also provides a computer readable storage medium having stored thereon a system parameter processing program which, when executed by a processor, implements the steps of the system parameter processing method according to any one of the preceding claims.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the system parameter processing method of any one of the preceding claims.
The invention provides a system parameter processing method, device, equipment, and storage medium. The method is applied to a first participant among a plurality of participants in multi-party secure computation and comprises the following steps: determining at least one training sample, where each training sample comprises system parameters and the system performance information observed when computation is performed under those system parameters; constructing a model for predicting system performance information from system parameters; and federally training the model, together with at least some of the other participants, according to the at least one training sample, where the trained model is used by any of the participants to adjust its system parameters. The invention constructs a model for predicting system performance information from system parameters and trains it through multi-party secure computation performed jointly by the participants; the trained model can then be used to adjust the system parameters of any participant. Tuning of system parameters is thereby automated, and tuning efficiency is improved.
Drawings
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating a system parameter processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the principle of horizontal federated learning provided by an embodiment of the invention;
fig. 4 is a schematic flow chart illustrating another system parameter processing method according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for model training according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of a method for adjusting system parameters according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a system parameter processing apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of another system parameter processing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a system parameter processing device according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The distributed systems underlying federated learning are complex and have many configurable parameters; manual tuning requires the user to invest substantial learning and debugging effort before the optimal parameters might be obtained. This process is time-consuming, labor-intensive, and inefficient.
To solve this problem, embodiments of the present invention provide a method that adjusts system parameters based on a machine learning model. However, the volume of system-parameter data held by a single user is too small to meet the data requirements of a machine learning model, whereas federated learning is by nature a multi-party process and thus overcomes the insufficiency of any single user's data. Because the federated learning framework is an offline system that each user installs in their own hardware environment, a system-tuning model with stronger tuning capability can be trained across multiple users through federated learning, and system parameters can then be adjusted effectively with that model.
In view of this, embodiments of the present invention provide a system parameter processing method, apparatus, device, and storage medium that construct a model for tuning system parameters and automate system parameter tuning so as to improve tuning efficiency.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present invention. As shown in Fig. 1, a plurality of participants in multi-party secure computation collaborate on data processing. To improve data-processing efficiency, the system parameters of each participant need to be adjusted. Each participant holds system parameters and corresponding system performance information. Through federated learning, the participants jointly train a model (hereinafter, the system parameter adjusting model) that predicts system performance information from system parameters, and each participant can then determine its local system parameters with the trained model and perform multi-party secure computation under the determined parameters.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The features of the embodiments and examples described below may be combined with each other without conflict between the embodiments.
Fig. 2 is a schematic flow chart of a system parameter processing method according to an embodiment of the present invention. The execution subject of the method provided by this embodiment may be any participant participating in the multi-party secure computing, and the participant may specifically be a server or a terminal device or a device cluster. As shown in fig. 2, the system parameter processing method of this embodiment may include:
step 201, at least one training sample is determined, wherein each training sample comprises a system parameter and corresponding system performance information when calculation is performed under the system parameter.
The system parameters refer to parameters of the system architecture of the participant, and may include hardware parameters and software parameters. Hardware parameters refer to parameters of the hardware device, and software parameters refer to parameters of the software configuration. The system performance information refers to information that can represent the system operation performance under corresponding system parameters, for example, information such as system performance scores may be determined by the processing speed, response time, and the like of the system.
Since system parameters influence system performance, the system parameters can serve as feature variables and the system performance as the target variable (i.e., the label), together forming a training sample.
Each participant can obtain historical system parameter data and the corresponding system performance data locally as its own training samples.
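As a minimal illustration of this step, each historical record of system parameters plus the measured performance becomes one feature-vector/label pair. The field names and values below are assumptions for illustration, not data from the patent:

```python
# Sketch: assemble local training samples from historical tuning records.
# Field names and values are illustrative assumptions, not defined by the patent.
history = [
    {"thread_pool_size": 8,  "packet_size_kb": 64,  "retry_wait_ms": 500, "perf_score": 72.0},
    {"thread_pool_size": 16, "packet_size_kb": 128, "retry_wait_ms": 200, "perf_score": 85.0},
    {"thread_pool_size": 32, "packet_size_kb": 256, "retry_wait_ms": 100, "perf_score": 80.5},
]

FEATURES = ["thread_pool_size", "packet_size_kb", "retry_wait_ms"]

def build_samples(records):
    """System parameters become the feature vector; performance is the label."""
    X = [[float(r[f]) for f in FEATURES] for r in records]
    y = [r["perf_score"] for r in records]
    return X, y
```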
Step 202, constructing a model for predicting system performance information according to the system parameters.
A functional relationship between the system parameters and the system performance information is determined, and the model is constructed according to this relationship. This model is the system parameter adjusting model described above.
Specifically, the model may be a linear regression model, a logistic regression model, a neural network model, and the like, which is not limited herein. In practical applications, the model type can be selected according to the characteristics of different models.
And step 203, performing federated training on the model by combining at least part of other participants in the multiple participants according to at least one training sample, wherein the trained model is used for any participant in the multiple participants to adjust system parameters.
Specifically, the training sample size can be enlarged by acquiring training samples of other participants, so as to improve the accuracy of model training. In order to guarantee data privacy, sample data can be encrypted and transmitted, and encryption calculation is also used in the model training process of the participator.
The parameters of the trained model are sent to the other participants to share the model parameters, so that every participant can adjust its local system parameters with the trained model.
The system parameter processing method provided by this embodiment is applied to a first participant among a plurality of participants in multi-party secure computation and comprises: determining at least one training sample, where each training sample comprises system parameters and the system performance information observed when computation is performed under those system parameters; constructing a model for predicting system performance information from system parameters; and federally training the model together with at least some of the other participants according to the at least one training sample, where the trained model is used by any of the participants to adjust its system parameters. A model that predicts system performance information from system parameters is constructed and trained jointly by the participants through multi-party secure computation, and the trained model can be used to adjust the system parameters of any participant. Tuning of system parameters is thereby automated, and tuning efficiency is improved.
Before the system parameter model is finally determined, multiple rounds of training may be required. Optionally, a number of training rounds may be preset, and training ends when that number is reached. Alternatively, training ends when the model converges: for example, a loss tolerance may be preset, the loss is computed during training, and once the loss reaches the preset tolerance the loss function is deemed to have converged and training ends.
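The two stopping rules just described (a preset round count, or a preset loss tolerance) can be sketched as follows; the specific numbers are illustrative assumptions:

```python
# Stop training after a fixed number of rounds, or earlier once the
# loss falls to a preset tolerance (convergence).
MAX_ROUNDS = 100        # illustrative round cap
LOSS_TOLERANCE = 1e-4   # illustrative loss tolerance

def should_stop(round_idx, loss):
    return round_idx >= MAX_ROUNDS or loss <= LOSS_TOLERANCE
```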
In addition, the number of training rounds and the final model parameters can also be determined by evaluating the model's actual effect.
Correspondingly, step 203 of federally training the model together with at least some of the other participants according to the at least one training sample may specifically include: federally training the model together with at least some of the other participants according to the at least one training sample; then repeatedly executing the following operations until a training-end condition is met: adjusting the system parameters of the first participant according to the trained model; judging whether the system performance information corresponding to the adjusted system parameters is better than the system performance before adjustment; if so, continuing the federated training of the model together with at least some of the other participants; if not, determining that the training-end condition is met.
After at least one round of federated training together with at least some of the other participants according to the at least one training sample, the trained model is applied to adjust the system parameters of the first participant, and it is determined whether system performance improves relative to before the adjustment. If performance improves, the model from this round is better than that of the previous round, i.e., further optimization may be possible, and the next round of training can proceed. If performance does not improve, this round of training did not improve the model, i.e., the model may have reached its optimum, so no further round is needed and training can end.
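The round-by-round procedure above can be sketched as a loop in which training continues only while applying the newly trained model improves measured performance. Here `train_round`, `tune_with_model`, and `measure_performance` are hypothetical stand-ins for the federated-training, parameter-adjustment, and benchmarking steps:

```python
# Sketch of the outer training loop: keep training federally as long as
# tuning with the freshly trained model improves measured performance.
def train_until_no_gain(train_round, tune_with_model, measure_performance):
    model = train_round()                 # initial round of federated training
    best = measure_performance()          # performance before any adjustment
    while True:
        tune_with_model(model)            # adjust local system parameters
        current = measure_performance()
        if current <= best:               # no improvement: end condition met
            return model
        best = current                    # improved: train another round
        model = train_round()
```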
In this embodiment, the trained model is used directly to adjust actual system parameters and the adjusted system performance is analyzed, so the degree of training can be determined. This serves as a guiding indicator for the training process and allows it to be monitored efficiently: it avoids both the poor model quality caused by too few training rounds and the waste of time and resources caused by too many, thereby improving training efficiency.
In a possible implementation manner, the adjusting the system parameter of the first participant according to the trained model may specifically include: randomly generating a plurality of parameter adjusting schemes, wherein each parameter adjusting scheme comprises system parameters used for inputting to the model; determining system performance information corresponding to each parameter adjusting scheme according to the model; selecting a parameter adjusting scheme from a plurality of parameter adjusting schemes according to system performance information corresponding to each parameter adjusting scheme; adjusting system parameters of the first participant according to the selected parameter adjustment scheme.
First, a plurality of parameter adjusting schemes are randomly generated, each corresponding to a group of system parameters. Inputting each group of system parameters into the model yields the corresponding system performance information (i.e., the label). Using the system performance information of each group as the screening basis, the system parameters of one scheme are selected as the new system parameters, and the first participant's original system parameters are adjusted to the new ones.
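A minimal sketch of this random-search step follows; the parameter ranges are illustrative assumptions, and `predict` stands for the trained performance-prediction model:

```python
# Randomly generate candidate parameter adjusting schemes, score each
# with the trained prediction model, and pick the best-scoring scheme.
import random

def random_scheme(rng):
    # Illustrative parameter ranges, not values specified by the patent.
    return {
        "thread_pool_size": rng.choice([4, 8, 16, 32, 64]),
        "packet_size_kb": rng.choice([32, 64, 128, 256]),
        "retry_wait_ms": rng.choice([100, 200, 500, 1000]),
    }

def pick_best_scheme(predict, n_schemes=20, seed=0):
    """predict maps a scheme (dict of system parameters) to a performance score."""
    rng = random.Random(seed)
    schemes = [random_scheme(rng) for _ in range(n_schemes)]
    return max(schemes, key=predict)
```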
The model-based parameter adjustment method of this embodiment applies both to checking model performance to decide whether training should continue and to actually adjusting system parameters with the finally trained model. Randomly generating parameter adjusting schemes and analyzing the system performance of each increases the probability of producing optimal system parameters; that is, it improves both the accuracy of judging model performance and the efficiency of parameter adjustment.
One way of selecting a parameter adjusting scheme from the multiple parameter adjusting schemes according to the system performance information corresponding to each parameter adjusting scheme may be to select a parameter adjusting scheme with the optimal system performance information from the multiple parameter adjusting schemes, and use the corresponding system parameter as a new system parameter.
Another way of selecting a parameter adjusting scheme from a plurality of parameter adjusting schemes according to system performance information corresponding to each parameter adjusting scheme may specifically include: selecting a parameter adjusting scheme with system performance information meeting preset conditions from a plurality of parameter adjusting schemes as an alternative scheme according to the system performance information corresponding to each parameter adjusting scheme; if a plurality of alternative schemes exist, calculating a preset data set based on system parameters of the alternative schemes for each alternative scheme, and detecting corresponding system performance information during calculation; and selecting a parameter adjusting scheme from the multiple alternative schemes according to the system performance information corresponding to each alternative scheme obtained through detection.
The preset condition may be that the system performance information is greater than or equal to a certain threshold; or, the system performance information is ranked according to the merits, and the best ones are selected, etc. Correspondingly, selecting an alternative scheme meeting preset conditions from the multiple parameter adjusting schemes; and utilizing the alternative schemes to actually calculate the preset data set and detect the actual system performance information corresponding to the preset data set, and accordingly, selecting an optimal parameter adjusting scheme from the alternative schemes and taking the system parameter corresponding to the optimal parameter adjusting scheme as a new system parameter.
The system performance information may be a computation speed or a computation duration on a preset data set. Scoring may be based on the computation speed, e.g., a speed greater than or equal to a certain value corresponds to a system performance of 80 points; or on the computation duration, e.g., a duration less than or equal to a certain value corresponds to a system performance of 80 points.
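The duration-based scoring idea can be sketched as a simple step function; the thresholds below are assumptions, since the patent only gives "80 points" as an example:

```python
# Map the measured duration of the preset-data-set computation to a
# performance score; shorter durations score higher. Thresholds are
# illustrative assumptions.
def score_from_duration(seconds):
    if seconds <= 10:
        return 100
    if seconds <= 20:
        return 80   # e.g. at most a certain duration corresponds to 80 points
    if seconds <= 40:
        return 60
    return 40
```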
In this embodiment, candidates are screened and their actual system performance information is analyzed on the basis of a real computation, which eliminates analysis error in the system parameter adjustment model and further improves the effect of parameter adjustment.
In one possible implementation, continuing the federated training of the model together with at least some of the other participants includes: constructing a new training sample according to the system performance information obtained by detection; and federally training the model together with at least some of the other participants according to the new training sample.
The training process may involve actually computing a preset data set under a candidate scheme, and actual parameter adjustment with the model involves computation on real data sets under the chosen scheme. These processes generate new "system parameter, system performance information" data pairs, which can serve as new sample data. Updating the training samples with the newly added data and performing federated training again increases the sample count, improves model precision, and extracts the maximum value from the data.
In one possible implementation, the model may be a linear regression model; the system parameters corresponding to the model comprise environment parameters and distributed parameters; wherein the environmental parameter may comprise at least one of: CPU information, memory information and hard disk information; the distributed parameters may include at least one of: thread pool information, network packet information, and retry latency.
The environment parameters correspond to the hardware parameters in step 201 and are basic parameters of the hardware device. The distributed parameters correspond to the software parameters and describe the software configuration of the distributed system. Environment parameters are generally fixed and not adjustable, but the same environment combined with different distributed parameters yields different system performance, so both are included in the training samples used to train the model.
Among the distributed parameters, the thread pool information may be the size of the thread pool in multithreaded processing; the network packet information may be the size of the network packets used when transmitting data to other participants; and the retry waiting time may be the interval after which, if no response is received from another participant following a transmission, the information is retransmitted.
Each system parameter is taken as a feature variable, a weight is set for each feature variable, the system performance information is taken as the target variable, and a linear regression function is constructed.
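The linear regression just described can be sketched as follows. The feature order and the parameter names (`cpu_cores`, `thread_pool_size`, and so on) are illustrative assumptions, not names fixed by the embodiment:

```python
# Assumed feature order: [cpu_cores, memory_gb, disk_gb,
#                         thread_pool_size, packet_size_kb, retry_wait_ms]
def make_features(env, dist):
    """Flatten environment and distributed parameters into one feature vector."""
    return [env["cpu_cores"], env["memory_gb"], env["disk_gb"],
            dist["thread_pool_size"], dist["packet_size_kb"],
            dist["retry_wait_ms"]]

def predict_performance(theta, features):
    """Linear regression: h_theta(x) = theta0 + theta1*x1 + ... + thetan*xn."""
    return theta[0] + sum(t * x for t, x in zip(theta[1:], features))
```

Training then amounts to fitting the weight vector `theta` so that predicted scores match the observed performance scores in the samples.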
As explained in the above embodiments, the model may also be of other types, such as a logistic regression model, a neural network model, and the like.
The above embodiments illustrate the model training process on any single participant; after each participant completes training, a model with better-optimized parameters can be obtained through model aggregation. Specifically, this includes: obtaining models constructed and trained by at least some other participants for predicting system performance information from system parameters; and aggregating the obtained models with the model constructed and trained by the first participant to obtain an aggregated model. The model used by any of the plurality of participants to adjust system parameters is the aggregated model.
The participants taking part in model training may be some or all of the multiple participants in the multi-party secure computation. The participant executing the method performs federated training in conjunction with the other participants taking part in model training.
Specifically, each participant taking part in model training also executes steps 201-203: each provides its own local training samples, constructs a system parameter-adjustment model, and performs federated training in combination with the others.
Because each participant can provide target-variable data, the scenario fits horizontal federated learning, so horizontal federated learning can be adopted for model training. Each participant computes model parameters from its local sample data, and the participants' model parameters are combined to obtain the final model parameters. Data transmission is involved in training, so the transmitted data can be encrypted; the encryption algorithm may be fully or partially homomorphic encryption.
Fig. 3 is a schematic diagram of the principle of horizontal federated learning according to an embodiment of the present invention. As shown in fig. 3, the participants are clients 1, 2, and so on. Each client computes its model parameters from local sample data and transmits them to the server (the coordinator of the horizontal federated learning); the server aggregates the model parameters of all participants to obtain the final model parameters and distributes them back to each participant, completing the training process. Data is encrypted during transmission.
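The server-side aggregation step can be sketched as below. This is a minimal FedAvg-style average; weighting by each client's local sample count is an assumption (a plain average also works), and the encryption of transmitted parameters is omitted for clarity:

```python
def aggregate(client_thetas, client_sizes):
    """Server-side aggregation for horizontal federated learning.

    client_thetas: one coefficient vector per client, all the same length.
    client_sizes:  each client's local sample count, used as the weight.
    Returns the weighted average of the coefficient vectors.
    """
    total = sum(client_sizes)
    n = len(client_thetas[0])
    return [
        sum(theta[j] * size for theta, size in zip(client_thetas, client_sizes)) / total
        for j in range(n)
    ]
```

The averaged vector is then redistributed to every client for the next local training round.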
Fig. 4 is a schematic flow chart of another system parameter processing method according to an embodiment of the present invention, where the method of this embodiment is applicable to adjusting system parameters by using a system parameter tuning model, and the method includes:
step 401, a model for predicting system performance information according to system parameters is obtained.
The model is a system parameter adjusting model obtained based on the method of the embodiment.
At step 402, at least one parameter-adjustment scheme is generated, each scheme including system parameters for input to the model.
Similar to the parameter adjustment schemes in the above embodiments, each parameter adjustment scheme corresponds to a set of system parameters, and each set of system parameters may include an environmental parameter and a distributed parameter.
The environment parameters can be directly obtained from the equipment, and the distributed parameters can be randomly generated.
And 403, obtaining system performance information corresponding to each parameter adjusting scheme according to the model, and adjusting system parameters according to the system performance information of each parameter adjusting scheme.
The system parameters of each parameter-adjustment scheme are input into the model, which outputs the corresponding predicted system performance information; system parameters can then be adjusted according to the performance information of each scheme.
The trained model can accurately predict the system performance information corresponding to a set of system parameters, and the prediction is close to the actual performance. Therefore, with the method of this embodiment, multiple parameter-adjustment schemes can be compared quickly and accurately, and the optimal scheme can be determined among them for adjusting system parameters, greatly improving overall efficiency compared with the prior art.
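The generate-predict-select loop of steps 402-403 can be sketched as follows. The parameter names, candidate value ranges, and the `predict` callback (standing in for the trained model) are illustrative assumptions:

```python
import random

def tune(env, predict, n_schemes=1000, seed=0):
    """Randomly enumerate distributed-parameter schemes and keep the best.

    env:     the device's fixed environment parameters.
    predict: hypothetical callback querying the trained model for a score.
    Returns the highest-scoring scheme and its predicted score.
    """
    rng = random.Random(seed)
    best_scheme, best_score = None, float("-inf")
    for _ in range(n_schemes):
        # Candidate values below are illustrative; real ranges depend on the system.
        dist = {"thread_pool_size": rng.choice([4, 8, 16, 32, 64]),
                "packet_size_kb": rng.choice([16, 64, 256, 1024]),
                "retry_wait_ms": rng.choice([100, 500, 1000, 5000])}
        score = predict(env, dist)
        if score > best_score:
            best_scheme, best_score = dist, score
    return best_scheme, best_score
```

Because each evaluation is a single model prediction rather than a real benchmark run, thousands of combinations can be compared in well under a second.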
The detailed implementation of some similar technical features can refer to the description in the above embodiments.
Fig. 5 is a flowchart of a method for training a model according to an embodiment of the present invention. Institutions A and B have both deployed a federated learning distributed framework. First, initial framework parameters are collected, and the system parameter-adjustment model is initialized with them. Model parameters are then trained through federated modelling to obtain a new model. After each round of modelling, the new model is used to optimize the system parameters; if system performance improves over the previous round, the federated modelling has successfully optimized the system parameters. These steps are repeated until system performance no longer improves, at which point the federated optimization modelling is complete.
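The train-optimize-measure loop of fig. 5 can be sketched as below; `train_round`, `optimize`, and `measure` are hypothetical callbacks standing in for one federated training pass, the model-driven parameter update, and the performance benchmark, respectively:

```python
def federated_tuning_loop(train_round, optimize, measure, max_rounds=20):
    """Repeat federated modelling and parameter optimization until the
    measured system performance stops improving, then return the best score."""
    best = measure()                # baseline performance before tuning
    for _ in range(max_rounds):
        model = train_round()       # one pass of federated modelling
        optimize(model)             # apply the model's suggested parameters
        score = measure()
        if score <= best:           # no improvement: tuning has converged
            break
        best = score
    return best
```

The `max_rounds` cap is an added safeguard against oscillating measurements; the embodiment itself stops purely on lack of improvement.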
The system parameter-adjustment model in this embodiment is fitted with a multi-feature linear regression model. In this model, institution A has the following features:
[Table: institution A's environment parameters, distributed system parameters, and corresponding performance scores; original table image omitted]
Institution B also has similar parameter and score data.
The data include environment parameters and distributed system parameters: the former are the user's server environment parameters, such as CPU core count, memory size, and hard disk size; the latter are the parameters of the distributed system. The performance score is the federated learning performance score (on a scale of 0-100) under the corresponding environment and distributed system parameters. These parameters are fitted into the model:
h_θ(x) = θ0 + θ1*x1 + θ2*x2 + θ3*x3 + θ4*x4 + … + θn*xn
where each x represents one of the environment parameters and distributed system parameters (the x values are the features), and h_θ(x) is the model of this embodiment. Every institution can provide the same kinds of features x (each institution has its own environment parameters and distributed system parameters). The problem can thus be converted into federated multi-feature linear regression modelling: the parameters above are the features, and the model h_θ(x) is obtained by training. By jointly performing federated multi-feature linear regression modelling among multiple institutions, their data (different environment parameters, distributed system parameters, and the corresponding performance scores) can be aggregated, greatly increasing the amount of training data and yielding a better model h_θ(x).
After the federated multi-feature linear regression modelling is completed, the model h_θ(x) is obtained. The invention then uses an enumeration method for parameter selection. For example, once the model h_θ(x) is available, a newly added institution C enters its own environment parameters (which tend to be fixed for a particular institution) and randomly generates a large number of distributed system parameters, as shown in fig. 6. The randomly generated and/or enumerated distributed system parameters are combined with the server's own environment parameters to form many system parameter combinations; inputting these into the model yields a predicted performance score for every combination, and the combination with the highest score is selected as the new system parameters. The more combinations are tried, the better the final result; this saves the institution the time of obtaining performance scores through real algorithm runs and greatly reduces cost.
In the original federated learning system, each user's system parameters are isolated, and during system tuning users can obtain optimal parameters only through personal experience and repeated experiments. The invention instead combines the parameters of multiple users in federated learning to obtain a better system model. This eliminates the need for users to repeatedly adjust parameters while using the federated learning system, and at the same time pools more parameter data to obtain a better parameter model and a better tuning effect.
Fig. 7 is a schematic structural diagram of a system parameter processing apparatus according to an embodiment of the present invention. As shown in fig. 7, the system parameter processing apparatus 700 may include: a training sample determination module 701, a model construction module 702, and a model training module 703.
A training sample determining module 701, configured to determine at least one training sample, where each training sample includes a system parameter and system performance information corresponding to the system parameter when performing calculation under the system parameter;
a model construction module 702 for constructing a model for predicting system performance information based on system parameters;
a model training module 703, configured to federate the model with at least some other participants of the multiple participants according to at least one training sample, wherein the trained model is used for any participant of the multiple participants to adjust system parameters.
In one possible implementation, the model training module 703 is specifically configured to:
federating the model in accordance with at least one training sample in conjunction with at least some other participants of the plurality of participants;
repeatedly executing the following operations until the training end condition is met:
adjusting system parameters of the first participant according to the model obtained by training;
judging whether the system performance information corresponding to the adjusted system parameters is superior to the corresponding system performance before adjustment;
if so, continuing federated training of the model in conjunction with at least some other participants of the multiple participants;
if not, determining that the training end condition is met.
In a possible implementation manner, when the model training module 703 adjusts a system parameter of the first party according to the trained model, it is specifically configured to:
randomly generating a plurality of parameter adjusting schemes, wherein each parameter adjusting scheme comprises system parameters for inputting to the model;
determining system performance information corresponding to each parameter adjusting scheme according to the model;
selecting a parameter adjusting scheme from a plurality of parameter adjusting schemes according to system performance information corresponding to each parameter adjusting scheme;
and adjusting the system parameters of the first participant according to the selected parameter adjusting scheme.
In a possible implementation manner, when the model training module 703 selects a parameter adjusting scheme from the multiple parameter adjusting schemes according to the system performance information corresponding to each parameter adjusting scheme, the model training module is specifically configured to:
selecting a parameter adjusting scheme of which the system performance information meets a preset condition from a plurality of parameter adjusting schemes as an alternative scheme according to the system performance information corresponding to each parameter adjusting scheme;
if a plurality of alternative schemes exist, calculating a preset data set based on system parameters of the alternative schemes for each alternative scheme, and detecting corresponding system performance information during calculation;
and selecting a parameter adjusting scheme from the multiple alternative schemes according to the system performance information corresponding to each alternative scheme obtained through detection.
In one possible implementation, the model training module 703 is specifically configured to, when federating at least some other participants of the multiple participants to continue the federated training of the model:
constructing a new training sample according to the system performance information obtained by detection;
and federating the model in association with at least some other participants of the plurality of participants based on the new training sample.
In one possible implementation, the model is a linear regression model; the system parameters corresponding to the model comprise environment parameters and distributed parameters;
wherein the environmental parameter comprises at least one of: CPU information, memory information and hard disk information;
the distributed parameters include at least one of: thread pool information, network packet information, and retry latency.
In one possible implementation, the apparatus 700 further includes:
an obtaining module 704, configured to obtain a model, which is constructed and trained by at least some other participants and used for predicting system performance information according to system parameters;
the model aggregation module 705 is configured to aggregate the obtained model and the model constructed and trained by the first participant to obtain an aggregated model;
wherein the model for any of the plurality of participants to adjust the system parameters is an aggregated model.
The system parameter processing apparatus provided in this embodiment may be configured to execute the technical solutions provided in any of the foregoing method embodiments, and the implementation principle and the technical effects are similar, which are not described herein again.
Fig. 8 is a schematic structural diagram of another system parameter processing apparatus according to an embodiment of the present invention. As shown in fig. 8, the system parameter processing apparatus 800 may include: an obtaining module 801, a parameter adjusting scheme generating module 802, and a parameter adjusting module 803.
An obtaining module 801, configured to obtain a model for predicting system performance information according to system parameters, where the model is obtained based on the apparatus according to any one of the foregoing embodiments;
a parameter tuning scheme generating module 802 for generating at least one parameter tuning scheme, each parameter tuning scheme comprising system parameters for input to the model;
and the parameter adjusting module 803 is configured to obtain system performance information corresponding to each parameter adjusting scheme according to the model, and adjust system parameters according to the system performance information of each parameter adjusting scheme.
The system parameter processing apparatus provided in this embodiment may be configured to execute the technical solution provided in any of the foregoing method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 9 is a schematic structural diagram of a system parameter processing device according to an embodiment of the present invention. As shown in fig. 9, the system parameter processing device may include: a memory 901, a processor 902 and a data processing program stored on the memory 901 and executable on the processor 902, the data processing program implementing the steps of the system parameter processing method according to any of the embodiments as described above when executed by the processor 902.
Alternatively, the memory 901 may be separate or integrated with the processor 902.
For the implementation principle and the technical effect of the device provided by this embodiment, reference may be made to the foregoing embodiments, and details are not described here.
The embodiment of the present invention further provides a computer-readable storage medium, where a data processing program is stored on the computer-readable storage medium, and when the data processing program is executed by a processor, the steps of the system parameter processing method according to any of the foregoing embodiments are implemented.
An embodiment of the present invention further provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the method for processing system parameters according to any of the foregoing embodiments is implemented.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device) or a processor execute part of the steps of the methods according to the embodiments of the present invention.
It should be understood that the Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
The memory may comprise a high-speed RAM memory, and may further comprise a non-volatile storage NVM, such as at least one disk memory, and may also be a usb disk, a removable hard disk, a read-only memory, a magnetic or optical disk, etc.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the storage medium may also reside as discrete components in an electronic device or host device.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (13)

1. A system parameter processing method applied to a first participant of a plurality of participants participating in a multi-party security computation, the method comprising:
determining at least one training sample, wherein each training sample comprises system parameters and corresponding system performance information when calculation is carried out under the system parameters;
constructing a model for predicting system performance information according to system parameters;
and according to the at least one training sample, federately training the model in combination with at least part of other participants in the plurality of participants, wherein the trained model is used for any participant in the plurality of participants to adjust system parameters.
2. The method of claim 1, wherein the federating the model in conjunction with at least some other participants of the plurality of participants according to the at least one training sample comprises:
federating the model in accordance with the at least one training sample in conjunction with at least some other of the plurality of participants;
repeatedly executing the following operations until the training end condition is met:
adjusting system parameters of the first participant according to the trained model;
judging whether the system performance information corresponding to the adjusted system parameters is superior to the corresponding system performance before adjustment;
if so, continuing federated training of the model in conjunction with at least some other participants of the plurality of participants;
if not, determining that the training end condition is met.
3. The method of claim 2, wherein adjusting the system parameters of the first participant according to the trained model comprises:
randomly generating a plurality of parameter-adjustment schemes, each scheme comprising system parameters for input to the model;
determining system performance information corresponding to each parameter adjusting scheme according to the model;
selecting a parameter adjusting scheme from the plurality of parameter adjusting schemes according to system performance information corresponding to each parameter adjusting scheme;
adjusting system parameters of the first participant according to the selected parameter adjustment scheme.
4. The method of claim 3, wherein selecting a tuning parameter scheme from the plurality of tuning parameter schemes according to the system performance information corresponding to each tuning parameter scheme comprises:
selecting a parameter adjusting scheme with system performance information meeting preset conditions from the plurality of parameter adjusting schemes as an alternative scheme according to the system performance information corresponding to each parameter adjusting scheme;
if the alternative schemes are multiple, calculating a preset data set based on the system parameters of the alternative schemes for each alternative scheme, and detecting corresponding system performance information during calculation;
and selecting a parameter adjusting scheme from the multiple alternative schemes according to the system performance information corresponding to each alternative scheme obtained through detection.
5. The method of claim 4, wherein federated training to continue the model in conjunction with at least some other participants in the plurality of participants comprises:
constructing a new training sample according to the system performance information obtained by detection;
federating the model in accordance with the new training sample in conjunction with at least some other of the plurality of participants.
6. The method of claim 1, wherein the model is a linear regression model; the system parameters corresponding to the model comprise environment parameters and distributed parameters;
wherein the environmental parameter comprises at least one of: CPU information, memory information and hard disk information;
the distributed parameters include at least one of: thread pool information, network packet information, and retry latency.
7. The method of any one of claims 1-6, further comprising:
obtaining a model which is constructed and trained by at least part of other participants and used for predicting system performance information according to system parameters;
aggregating the obtained model and the model constructed and trained by the first participant to obtain an aggregated model;
wherein the model for any of the plurality of parties to adjust system parameters is the aggregated model.
8. A system parameter processing method is characterized by comprising the following steps:
obtaining a model for predicting system performance information according to system parameters, wherein the model is obtained based on the method of any one of claims 1-7;
generating at least one parameter-adjustment scheme, each scheme comprising system parameters for input to the model;
and obtaining system performance information corresponding to each parameter adjusting scheme according to the model, and adjusting system parameters according to the system performance information of each parameter adjusting scheme.
9. A system parameter processing apparatus, comprising:
the training sample determining module is used for determining at least one training sample, wherein each training sample comprises system parameters and corresponding system performance information when calculation is carried out under the system parameters;
the model construction module is used for constructing a model for predicting system performance information according to system parameters;
and the model training module is used for carrying out federal training on the model by combining at least part of other participants in the multiple participants according to the at least one training sample, wherein the trained model is used for any participant in the multiple participants to adjust system parameters.
10. A system parameter processing apparatus, comprising:
an obtaining module, configured to obtain a model for predicting system performance information according to system parameters, where the model is obtained based on the apparatus of claim 9;
a parameter adjustment scheme generation module for generating at least one parameter adjustment scheme, each parameter scheme comprising system parameters for input to the model;
and the parameter adjusting module is used for obtaining the system performance information corresponding to each parameter adjusting scheme according to the model and adjusting the system parameters according to the system performance information of each parameter adjusting scheme.
11. A system parameter processing apparatus, characterized in that the system parameter processing apparatus comprises: a memory, a processor and a system parameter processing program stored on the memory and executable on the processor, the system parameter processing program when executed by the processor implementing the steps of the system parameter processing method according to any one of claims 1 to 8.
12. A computer-readable storage medium, having stored thereon a system parameter processing program which, when executed by a processor, implements the steps of the system parameter processing method according to any one of claims 1 to 8.
13. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, carries out the system parameter processing method of any one of claims 1 to 8.
CN202011625722.8A 2020-12-30 2020-12-30 System parameter processing method, device, equipment and storage medium Pending CN114692888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011625722.8A CN114692888A (en) 2020-12-30 2020-12-30 System parameter processing method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114692888A true CN114692888A (en) 2022-07-01

Family

ID=82133693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011625722.8A Pending CN114692888A (en) 2020-12-30 2020-12-30 System parameter processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114692888A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115826544A (en) * 2023-02-17 2023-03-21 江苏御传新能源科技有限公司 Production parameter adjusting system for automobile parts

Similar Documents

Publication Publication Date Title
CN110610242B (en) Method and device for setting weights of participants in federal learning
EP3711000B1 (en) Regularized neural network architecture search
CN110782042A (en) Method, device, equipment and medium for combining horizontal federation and vertical federation
CN110807109A (en) Data enhancement strategy generation method, data enhancement method and device
JP2021501417A (en) Neural architecture search
CN112990478B (en) Federal learning data processing system
CN111340233B (en) Training method and device of machine learning model, and sample processing method and device
CN111797320A (en) Data processing method, device, equipment and storage medium
CN112766402A (en) Algorithm selection method and device and electronic equipment
CN112365007A (en) Model parameter determination method, device, equipment and storage medium
CN112437022B (en) Network traffic identification method, device and computer storage medium
CN114692888A (en) System parameter processing method, device, equipment and storage medium
Tembine Mean field stochastic games: Convergence, Q/H-learning and optimality
CN115577797B (en) Federal learning optimization method and system based on local noise perception
CN115829717B (en) Wind control decision rule optimization method, system, terminal and storage medium
CN115659807A (en) Method for predicting talent performance based on Bayesian optimization model fusion algorithm
CN112948582B (en) Data processing method, device, equipment and readable medium
CN112804304B (en) Task node distribution method and device based on multi-point output model and related equipment
CN114443970A (en) Artificial intelligence and big data based digital content pushing method and AI system
CN112070162A (en) Multi-class processing task training sample construction method, device and medium
CN112463964A (en) Text classification and model training method, device, equipment and storage medium
CN116739111A (en) Training method, device, equipment and medium for joint learning model
CN110688371B (en) Data adjustment method, device, electronic equipment and storage medium
US11973695B2 (en) Information processing apparatus and information processing method
CN112861951B (en) Image neural network parameter determining method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination