CN113158223A - Data processing method, device, equipment and medium based on state transition kernel optimization - Google Patents

Data processing method, device, equipment and medium based on state transition kernel optimization

Info

Publication number
CN113158223A
CN113158223A (application number CN202110115051.9A)
Authority
CN
China
Prior art keywords
participant
preset
state
model parameters
federated
Prior art date
Legal status
Pending
Application number
CN202110115051.9A
Other languages
Chinese (zh)
Inventor
姜迪
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202110115051.9A priority Critical patent/CN113158223A/en
Priority to PCT/CN2021/101998 priority patent/WO2022160578A1/en
Publication of CN113158223A publication Critical patent/CN113158223A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Accounting & Taxation (AREA)
  • Software Systems (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a data processing method, apparatus, device, and medium based on state transition kernel optimization. The method comprises: during each round of training of a first participant's local model parameters, dynamically determining a state sampling algorithm for preset local sample data according to the first participant's resource attribute information, so as to obtain identification state information of the preset local sample data and thereby determine the combined state information of all preset local model parameters of the first participant; determining target model parameters to be federated according to the combined state information; and performing federated training with each second participant based on the target model parameters to be federated, to obtain a preset prediction model of the first participant. The method aims to solve the technical problems in the prior art that determining the recognition probabilities of different states of sample data in a fixed manner leads to poor resource adaptability during model training and easily compromises user privacy.

Description

Data processing method, device, equipment and medium based on state transition kernel optimization
Technical Field
The present application relates to the field of artificial intelligence technology for financial technology (Fintech), and in particular, to a data processing method, apparatus, device, and medium based on state transition kernel optimization.
Background
With the continuous development of financial technologies, especially internet technology and finance, more and more technologies are applied in the financial field, but the financial industry also puts higher requirements on the technologies, for example, the financial industry also has higher requirements on data processing based on state transition kernel optimization.
At present, when a participant trains a model through machine learning, it usually exchanges data directly with other participants. Such direct data exchange violates user privacy and creates security risks. In addition, when a participant trains a model, sample data often has different identification states. For example, in a speech recognition process (which recognizes a data frame as a state, combines states into factors, and combines factors into words), a data frame may be recognized as an A state, a B state, a C state, and so on, where each state type has a different identification probability. The prior art determines the identification probabilities of these different states in a fixed manner, which results in poor resource adaptability during model training.
Disclosure of Invention
The application mainly aims to provide a data processing method, apparatus, device, and medium based on state transition kernel optimization, and aims to solve the technical problems in the prior art that determining the recognition probabilities of different states of sample data in a fixed manner leads to poor resource adaptability during model training and easily compromises user privacy.
In order to achieve the above object, the present application provides a data processing method based on state transition core optimization, which is applied to a first party, where the first party and a second party perform federated communication connection, and the data processing method based on state transition core optimization includes:
in the process of training local model parameters of a first participant each time, dynamically determining a state sampling algorithm of preset local sample data according to the resource attribute information of the first participant to obtain identification state information of the preset local sample data so as to determine combined state information of all preset local model parameters of the first participant;
determining target model parameters to be federated according to the combined state information;
and carrying out federal training with each second participant based on the target model parameters to be federal, so as to obtain a preset prediction model of the first participant.
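The three steps above can be sketched as a minimal, runnable control flow. Every function name, the memory threshold, and the toy "aggregation = averaging" rule are illustrative assumptions; the patent does not specify any of them.

```python
# Minimal runnable sketch of the claimed three-step flow. The resource
# check, sampler choice, and averaging-based federation are all toy
# stand-ins for the unspecified mechanisms in the text.

def choose_sampling_algorithm(resource_info):
    # Step 1a: pick a sampler from resource attributes (here: memory only).
    if resource_info["memory_gb"] >= 8:
        return lambda xs: [("A", x) for x in xs]   # cheap exhaustive labelling
    return lambda xs: [("A", xs[0])]               # constrained: subsample

def train_one_round(resource_info, local_samples, local_weight, peer_weights):
    # Step 1: recognition-state info for the local samples.
    sampler = choose_sampling_algorithm(resource_info)
    state_info = sampler(local_samples)
    # Step 2: target parameter to be federated (toy: scale by coverage).
    target = local_weight * len(state_info) / len(local_samples)
    # Step 3: federate with second participants (toy: plain averaging).
    return sum([target] + peer_weights) / (1 + len(peer_weights))

print(train_one_round({"memory_gb": 16}, [0.1, 0.2], 1.0, [2.0, 3.0]))  # 2.0
```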
Optionally, in the process of training the local model parameter of the first participant each time, dynamically determining a state sampling algorithm of preset local sample data according to the resource attribute information of the first participant to obtain identification state information of the preset local sample data, so as to determine combined state information of all preset local model parameters of the first participant, including:
determining an upper limit of memory consumption according to the resource attribute information of a first participant in the process of training local model parameters of the first participant each time;
and dynamically determining a state sampling algorithm of preset local sample data according to the upper memory consumption limit and a preset sampling consumption calculation rule to obtain the identification state information of the preset local sample data, and determining the state information of each preset local model parameter of the first participant to determine the combined state information of all the preset local model parameters of the first participant.
Optionally, the step of dynamically determining a state sampling algorithm of preset local sample data according to the upper memory consumption limit and a preset sampling consumption calculation rule to obtain identification state information of the preset local sample data, and determining state information of each preset local model parameter of the first participant to determine combined state information of all preset local model parameters of the first participant includes:
respectively determining the upper limit of the consumption of the sub-memories of each preset local model parameter;
determining the type and the number of each preset local model parameter, and determining a state sampling algorithm of preset local sample data in a traversal mode according to the upper limit of the consumption of the sub-memory, the calculation rule of the preset sampling consumption and the type and the number of the states;
determining a minimum state transition route of each preset local model parameter under the corresponding state sampling algorithm;
and obtaining the identification state of the preset local sample data according to the state sampling algorithm and the minimum state transfer route so as to determine the combined state information of all preset local model parameters of the first participant.
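One hedged reading of the traversal in the steps above: for each preset local model parameter, try every candidate sampling algorithm and keep the cheapest one whose estimated memory cost fits that parameter's sub-memory budget. The candidate list and per-state cost figures are invented for illustration.

```python
# Illustrative traversal over candidate state sampling algorithms under a
# per-parameter sub-memory budget. All costs are made-up numbers.

CANDIDATES = [
    # (name, per-state memory cost in MB) -- assumptions, not from the patent
    ("with_replacement", 12.0),
    ("without_replacement", 8.0),
    ("metropolis_hastings", 3.0),
]

def pick_sampler(sub_budget_mb, n_states):
    feasible = [(cost * n_states, name)
                for name, cost in CANDIDATES if cost * n_states <= sub_budget_mb]
    if not feasible:
        raise ValueError("no sampler fits the sub-memory budget")
    return min(feasible)[1]   # cheapest feasible sampler

print(pick_sampler(30.0, 3))
```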
Optionally, the step of respectively determining the sub-memory consumption upper limit of each preset local model parameter includes:
determining the influence degree of each preset local model parameter on a model training result;
and determining the upper limit of the consumption of the sub-memory of each preset local model parameter according to the influence degree.
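One plausible concretization of the two steps above: split the total memory budget across parameters in proportion to each parameter's influence on the training result. The proportional rule and all numbers are assumptions.

```python
# Allocate per-parameter sub-memory limits proportionally to influence.
# The proportional split is an illustrative choice; the patent only says
# the sub-limits are determined "according to the influence degree".

def sub_memory_limits(total_mb, influence):
    total_influence = sum(influence.values())
    return {p: total_mb * w / total_influence for p, w in influence.items()}

limits = sub_memory_limits(500.0, {"w1": 3.0, "w2": 1.0, "w3": 1.0})
print(limits["w1"])  # 300.0
```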
Optionally, in the process of dynamically determining a state sampling algorithm for presetting local sample data, a sampling intermediate parameter to be stored is further determined according to the upper memory consumption limit and a preset sampling consumption calculation rule.
Optionally, the step of performing federal training with each second participant based on the first model parameter to be federal to obtain the preset prediction model of the first participant includes:
executing a preset federated procedure based on the first model parameters to be federated, and aggregating them with the second model parameters to be federated of each second participant to obtain aggregation parameters, then performing a replacement update of the first model parameters to be federated based on the aggregation parameters, to obtain the replacement-updated model parameters of the first participant;
and continuously and dynamically determining the state sampling algorithm of the model parameters after replacement and update so as to continuously determine other model parameters of the first party to be federal, and continuously carrying out iterative training until a preset training completion condition is reached to obtain a preset prediction model.
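The aggregate-replace-iterate loop described above can be made concrete with a runnable toy: each round, the first participant's parameter is replaced by the average over all participants, then nudged by a stand-in local update. The averaging rule, learning rate, and fixed round count are illustrative assumptions.

```python
# Toy federated loop: aggregate (average), replace, continue local
# training, repeat until the (here: fixed-round) completion condition.

def federated_loop(local, peers, rounds=5, lr=0.1):
    for _ in range(rounds):
        aggregated = sum([local] + peers) / (1 + len(peers))
        local = aggregated            # replacement update
        local = local - lr * local    # stand-in for continued local training
    return local

result = federated_loop(1.0, [3.0])
```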
Optionally, the first party is connected with the second party in a federal communication mode through a third party;
the step of executing a preset federated procedure based on the first model parameters to be federated, aggregating them with the second model parameters to be federated of each second participant to obtain aggregation parameters, and performing a replacement update of the first model parameters to be federated based on the aggregation parameters, to obtain the replacement-updated model parameters of the first participant, includes:
encrypting and sending the first model parameters to be federated to a third party so that the third party can aggregate the received second model parameters to be federated of each second participant based on the first model parameters to be federated to obtain aggregation parameters;
and receiving the aggregation parameters sent by the third party in encrypted form, and performing a replacement update of the first model parameters to be federated based on the aggregation parameters, to obtain the replacement-updated model parameters of the first participant.
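The third-party exchange above can be sketched with pairwise additive masking as a stand-in for the unspecified encryption scheme (real deployments might use homomorphic encryption or secure aggregation). The shared seed, mask range, and two-party setup are all illustrative.

```python
# Two participants share a seed; one adds the mask, the other subtracts
# it, so the third party's aggregate equals the true average without the
# third party ever seeing raw parameter values. Purely illustrative.
import random

def pairwise_masks(seed, n):
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

m = pairwise_masks(42, 1)[0]
p1, p2 = 0.6, 0.4                 # hypothetical raw parameters
masked = [p1 + m, p2 - m]         # what the third party receives
aggregated = sum(masked) / 2      # masks cancel in the aggregate
assert abs(aggregated - 0.5) < 1e-9
```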
The present application further provides a data processing apparatus based on state transition kernel optimization, which is applied to a first party, where the first party and a second party perform federated communication connection, and the data processing apparatus based on state transition kernel optimization includes:
the first determining module is used for dynamically determining a state sampling algorithm of preset local sample data according to the resource attribute information of the first participant in the process of training local model parameters of the first participant each time so as to obtain the identification state information of the preset local sample data and determine the combined state information of all the preset local model parameters of the first participant;
the second determining module is used for determining target model parameters to be federated according to the combined state information;
and the federation module is used for carrying out federation training with each second participant based on the target model parameters to be federated to obtain a preset prediction model of the first participant.
Optionally, the first determining module includes:
the first determining unit is used for determining an upper memory consumption limit according to the resource attribute information of a first participant in the process of training local model parameters of the first participant each time;
and the second determining unit is used for dynamically determining a state sampling algorithm of preset local sample data according to the upper memory consumption limit and a preset sampling consumption calculation rule so as to obtain identification state information of the preset local sample data, so as to determine state information of each preset local model parameter of the first participant, and so as to determine combined state information of all preset local model parameters of the first participant.
Optionally, the second determining unit includes:
the first determining subunit is used for respectively determining the upper limit of the consumption of the sub-memories of each preset local model parameter;
the second determining subunit is used for determining the type and the quantity of each preset local model parameter, and determining a state sampling algorithm of preset local sample data in a traversal mode according to the upper limit of the consumption of the sub-memory, the calculation rule of the preset sampling consumption and the type and the quantity of the states;
the third determining subunit is used for determining the minimum state transition route of each preset local model parameter under the corresponding state sampling algorithm;
and the fourth determining subunit is configured to obtain the identification state of the preset local sample data according to the state sampling algorithm and the minimum state transition route, so as to determine the combined state information of all preset local model parameters of the first participant.
Optionally, the first determining subunit is configured to implement:
determining the influence degree of each preset local model parameter on a model training result;
and determining the upper limit of the consumption of the sub-memory of each preset local model parameter according to the influence degree.
Optionally, in the process of dynamically determining a state sampling algorithm for presetting local sample data, a sampling intermediate parameter to be stored is further determined according to the upper memory consumption limit and a preset sampling consumption calculation rule.
Optionally, the federation module includes:
the aggregation unit is used for aggregating the first model parameters to be federated with the second model parameters to be federated of each second participant by executing a preset federated procedure based on the first model parameters to be federated to obtain aggregation parameters, and performing replacement updating on the first model parameters to be federated based on the aggregation parameters to obtain replacement updated model parameters of the first participant;
and the third determining unit is used for continuously and dynamically determining the state sampling algorithm of the model parameters after replacement and update so as to continuously determine other model parameters of the first party to be federated, and continuously performing iterative training until a preset training completion condition is reached to obtain a preset prediction model.
Optionally, the first party is connected with the second party in a federal communication mode through a third party;
the third determination unit includes:
the sending unit is used for encrypting and sending the first model parameters to be federated to a third party so that the third party can aggregate the first model parameters to be federated based on the federation and the received second model parameters to be federated of each second participant to obtain aggregation parameters;
and the receiving unit is used for receiving the aggregation parameters sent by the third party in encrypted form, and performing a replacement update of the first model parameters to be federated based on the aggregation parameters, to obtain the replacement-updated model parameters of the first participant.
The present application further provides a data processing device based on state transition kernel optimization. The device is an entity device and includes a memory, a processor, and a program of the data processing method based on state transition kernel optimization that is stored in the memory and executable on the processor; when the program is executed by the processor, the steps of the data processing method based on state transition kernel optimization are implemented.
The present application also provides a medium having a program for implementing the data processing method based on state transition core optimization stored thereon, where the program for implementing the data processing method based on state transition core optimization implements the steps of the data processing method based on state transition core optimization as described above when being executed by a processor.
The present application also provides a computer program product, comprising a computer program, which when executed by a processor implements the steps of the above-described data processing method based on state transition kernel optimization.
Compared with the prior art, in which different participants directly exchange data and the recognition probabilities of different states of sample data are determined in a fixed manner, causing poor resource adaptability during model training and invading user privacy, the present application proceeds as follows: during each round of training of a first participant's local model parameters, a state sampling algorithm for preset local sample data is dynamically determined according to the first participant's resource attribute information, so as to obtain the identification state information of the preset local sample data and thereby determine the combined state information of all preset local model parameters of the first participant; target model parameters to be federated are determined according to the combined state information; and federated training is performed with each second participant based on the target model parameters to be federated, to obtain a preset prediction model of the first participant. Because the first participant performs federated training with each second participant, the privacy and security hazards caused by direct data interaction between participants are avoided. Furthermore, because the state sampling algorithm is dynamically determined from the first participant's resource attribute information rather than fixed in advance, the identification state information of the sample data (such as its recognition probabilities) adapts to the available resources. Resource adaptability during model training is therefore improved, solving the technical problems of the prior art that a fixed determination of recognition probabilities leads to poor resource adaptability and easily invades user privacy.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive labor.
FIG. 1 is a flow chart showing a first embodiment of a data processing method based on state transition kernel optimization according to the present application;
fig. 2 is a flowchart illustrating the detailed sub-steps of dynamically determining, during each round of training of the first participant's local model parameters, a state sampling algorithm for preset local sample data according to the first participant's resource attribute information, so as to obtain the identification state information of the preset local sample data and determine the combined state information of all preset local model parameters of the first participant;
fig. 3 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the data processing method based on state transition core optimization, referring to fig. 1, the data processing method based on state transition core optimization is applied to a first participant, where the first participant and a second participant perform federated communication connection, and the data processing method based on state transition core optimization includes:
step S10, in the process of training local model parameters of a first participant each time, dynamically determining a state sampling algorithm of preset local sample data according to the resource attribute information of the first participant to obtain the identification state information of the preset local sample data so as to determine the combined state information of all preset local model parameters of the first participant;
step S20, determining target model parameters to be federal according to the combined state information;
and step S30, carrying out federal training with each second participant based on the target model parameters to be federal, and obtaining a preset prediction model of the first participant.
The method comprises the following specific steps:
step S10, in the process of training local model parameters of a first participant each time, dynamically determining a state sampling algorithm of preset local sample data according to the resource attribute information of the first participant to obtain the identification state information of the preset local sample data so as to determine the combined state information of all preset local model parameters of the first participant;
in this embodiment, it should be noted that the data processing method based on the state transition core optimization can be applied to the data processing system based on the state transition core optimization (in particular, applied to the first participant in the data processing system based on the state transition core optimization), the state transition core optimization based data processing system is subordinate to the state transition core optimization based data processing apparatus, for the data processing system based on the state transition core optimization, a second participant may be built in or connected in communication with the second participant, it should be noted that, the first party and the second party (which may all belong to the data processing system based on the state transition core optimization) may directly perform the federal communication connection, in addition, the first party and the second party can also indirectly carry out the federal communication connection through a third party.
Before the first participant federates with the second participant, the first participant needs to locally train its own model parameters. Specifically, for example, after the first participant performs 500 rounds of local iterative training (training the model parameters), it performs federated communication with the second participant to obtain aggregation parameters, replaces its local model parameters with the aggregation parameters as the replacement-updated model parameters, and continues the next round of iterative training based on them, until the required model is finally obtained.
In this embodiment, the data processing method based on state transition kernel optimization is applied to the process in which a first participant trains its own model parameters. In this process, a state sampling algorithm for preset local sample data is dynamically determined according to the resource attribute information of the first participant, and the recognition state information of the preset local sample data is then obtained with that algorithm. The resource attribute information includes information such as computing resources, storage resources, and transmission resources. The candidate state sampling algorithms for the preset local sample data include sampling with replacement, sampling without replacement, federated Metropolis-Hastings sampling, optimized federated Metropolis-Hastings sampling, and other sampling algorithms, where each state sampling algorithm may be pre-stored locally at the first participant, or may be called or generated on demand. Obtaining the identification state information of the preset local sample data according to a state sampling algorithm specifically means obtaining the identification probability of each identification state of the preset local sample data, and further obtaining the value of the model parameter corresponding to each identification state. Regardless of which sampling algorithm is used, the identification probability of each identification state is consistent, or consistent within a preset error range; however, different sampling algorithms differ in resource consumption and sampling rate. For example, during local model parameter training of the first participant, a certain sample datum may have an A state, a B state, and a C state. Under the different sampling algorithms, the A state may account for 70%, the B state for 20%, and the C state for 10%; but with the federated Metropolis-Hastings sampling algorithm, obtaining the 70% for the A state may consume M1 memory, with the optimized federated Metropolis-Hastings sampling algorithm it may consume M2 memory, and with sampling with replacement it may consume M3 memory, where the M2 memory consumption is the smallest.
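The point made above — different samplers recover roughly the same state probabilities while differing only in resource profile — can be checked with a small runnable comparison between direct sampling with replacement and a simple Metropolis chain over three states. The target distribution matches the 70%/20%/10% example; everything else is illustrative.

```python
# Compare two samplers targeting the same state distribution: direct
# sampling with replacement vs. a Metropolis chain (symmetric uniform
# proposal). Both should recover approximately the same probabilities.
import random

TARGET = {"A": 0.7, "B": 0.2, "C": 0.1}   # the example distribution above

def direct_estimate(n, rng):
    counts = {s: 0 for s in TARGET}
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for s, p in TARGET.items():
            acc += p
            if r < acc:
                counts[s] += 1
                break
    return {s: c / n for s, c in counts.items()}

def metropolis_estimate(n, rng):
    states, cur = list(TARGET), "A"
    counts = {s: 0 for s in states}
    for _ in range(n):
        prop = rng.choice(states)
        if rng.random() < TARGET[prop] / TARGET[cur]:  # Metropolis acceptance
            cur = prop
        counts[cur] += 1
    return {s: c / n for s, c in counts.items()}

rng = random.Random(0)
d = direct_estimate(50_000, rng)
m = metropolis_estimate(50_000, rng)
# d and m agree to within sampling error; the samplers differ mainly in
# their resource profile, which is the adaptivity the patent exploits.
```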
It should be noted that sample data is composed of multiple sample features, and dynamically determining the state sampling algorithm of preset local sample data according to the resource attribute information of the first participant includes: dynamically determining the state sampling algorithms for the different sample features in the preset local sample data according to the resource attribute information of the first participant. That is, different sample features may correspond to different pre-stored state sampling algorithms, from which the identification probabilities of the different states of the corresponding sample features are obtained. In contrast, the prior art determines the recognition probabilities of the states corresponding to a sample feature with a fixed state sampling algorithm, which makes it difficult to account for resource allocation during model training.
It should be noted that, because the preset local sample data contains different sample features, and each sample feature may correspond to a different state sampling algorithm, the identification state information of the preset local sample data is obtained once the identification state information of its different sample features has been obtained through their respective state sampling algorithms. From this, the combined state information of all preset local model parameters of the first participant is determined; that is, the identification state information of the preset local sample data implies the combined state information of the corresponding preset local model parameters. Specifically, for example, suppose a sample feature has three states, i.e., an A state, a B state, and a C state, with identification probabilities of 70%, 20%, and 10% respectively; then when the sample feature is identified as the A state, the corresponding identification model parameter, such as a weight, may be 0.7.
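The worked example above maps the most probable recognition state of a sample feature directly to an identification weight. A tiny sketch of that mapping, where using the probability itself as the weight is the example's own assumption:

```python
# Map a feature's recognition-state probabilities to (state, weight),
# taking the most probable state and using its probability as the weight,
# exactly as in the A/B/C example above.

def identification_weight(state_probs):
    best = max(state_probs, key=state_probs.get)
    return best, state_probs[best]

print(identification_weight({"A": 0.7, "B": 0.2, "C": 0.1}))  # ('A', 0.7)
```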
Referring to fig. 2, the step of dynamically determining, in the process of the first participant training local model parameters each time, a state sampling algorithm for the preset local sample data according to the resource attribute information of the first participant to obtain the identification state information of the preset local sample data, so as to determine the combined state information of all the preset local model parameters of the first participant, includes:
step S11, in the process of training local model parameters of a first participant each time, determining an upper limit of memory consumption according to the resource attribute information of the first participant;
in this embodiment, in the process of the first participant training the local model parameters each time, the upper limit of memory consumption is determined according to the resource attribute information of the first participant, that is, according to the memory capacity of the first participant's server.
Step S12, dynamically determining a state sampling algorithm of the preset local sample data according to the upper memory consumption limit and the preset sampling consumption calculation rule to obtain the identification state information of the preset local sample data, and determining the state information of each preset local model parameter of the first participant to determine the combined state information of all preset local model parameters of the first participant.
The state sampling algorithm for the preset local sample data is dynamically determined according to the upper limit of memory consumption, such as 500G, and the preset sampling consumption calculation rule, such as the consumption calculation rule for a single sampling operation and the consumption calculation rules for different state types, to obtain the identification state information of the preset local sample data, to determine the state information of each preset local model parameter of the first participant, and thereby to determine the combined state information of all the preset local model parameters of the first participant. It should be noted that the resource consumption of obtaining the identification state information of the preset local sample data differs between state sampling algorithms: a first preset association relationship exists between the different state sampling algorithms and their resource consumption, or a second preset association relationship exists between the different state sampling algorithms, the state types, and the resource consumption.
The step of dynamically determining a state sampling algorithm of preset local sample data according to the upper memory consumption limit and a preset sampling consumption calculation rule to obtain identification state information of the preset local sample data, and determining state information of each preset local model parameter of a first participant to determine combined state information of all preset local model parameters of the first participant includes:
Step A1, respectively determining the sub-memory consumption upper limit of each preset local model parameter;
The sub-memory consumption upper limit of each preset local model parameter is determined respectively, wherein the manner of determining each sub-memory consumption upper limit includes:
Manner one: determining the corresponding sub-memory consumption upper limit according to the type of each preset local model parameter;
Manner two: determining the corresponding sub-memory consumption upper limit according to the weight of each preset local model parameter.
Step A2, determining the state types and quantity of each preset local model parameter, and determining a state sampling algorithm for the preset local sample data in a traversal manner according to the sub-memory consumption upper limit, the preset sampling consumption calculation rule, and the state types and quantity;
after the sub-memory consumption upper limit is determined, a state sampling algorithm for the preset local sample data is determined in a traversal manner according to the sub-memory consumption upper limit, the preset sampling consumption calculation rule, and the state types and quantity; that is, the state sampling algorithm for the preset local sample data is selected from all the state sampling algorithms pre-stored in the first participant.
Specifically, for example, if the sub-memory consumption upper limit is 100 consumption metering values, each state sampling algorithm is traversed according to the preset sampling consumption calculation rule and the state types and quantity (to save resources, the traversal only calculates the consumption without performing actual sampling operations). If the memory consumption of the candidate state sampling algorithms is calculated to be 200, 300, 150, and 90 consumption metering values, respectively, then the algorithm requiring 90 consumption metering values, the only one within the upper limit, is selected for sampling.
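The traversal selection described above can be sketched as follows. The cost model (consumption scaling with the number of state types and samples) and all names are illustrative assumptions; the candidate totals mirror the 200/300/150/90 example against a sub-memory upper limit of 100.

```python
def estimate_consumption(cost_per_sample, num_states, samples):
    """Preset sampling-consumption calculation rule (assumed form):
    consumption scales with the number of samples and state types."""
    return cost_per_sample * num_states * samples

def select_algorithm(candidates, sub_memory_limit, num_states, samples):
    """Traverse the candidate state sampling algorithms, computing (not
    sampling) each one's consumption, and return the first candidate whose
    estimate does not exceed the sub-memory consumption upper limit."""
    for name, cost_per_sample in candidates:
        if estimate_consumption(cost_per_sample, num_states, samples) <= sub_memory_limit:
            return name
    return None  # no pre-stored algorithm fits the budget

# Four candidates whose estimated totals come out to 200/300/150/90.
candidates = [("alg1", 20), ("alg2", 30), ("alg3", 15), ("alg4", 9)]
chosen = select_algorithm(candidates, sub_memory_limit=100, num_states=2, samples=5)
```

With the figures above only the last candidate (90) fits under the limit of 100, so it is the one selected.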
Step A3, determining the minimum state transition route of each preset local model parameter under the corresponding state sampling algorithm;
The minimum state transition route of each preset local model parameter under the corresponding state sampling algorithm is determined. Specifically, the minimum state transition route may refer to a sampling route with the minimum number of state combinations. For example, if an A state, a B state, and a D state exist, and during sampling the A state and the B state may be treated as one group while the D state forms its own group, the minimum number of state combinations is 2 groups; if instead the A state, the B state, and the D state must each be treated as a separate group, the minimum number of state combinations is 3 groups.
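A minimal sketch of counting the minimum number of state combinations, assuming a relation that records which states may share a group during sampling (as in the A/B/D example above). The greedy grouping rule and all names are illustrative assumptions, not from the patent.

```python
def minimum_state_groups(states, mergeable):
    """Greedily place each state into an existing group it can merge with,
    otherwise open a new group; return the resulting number of groups.
    `mergeable` is a set of state pairs that may share one group."""
    groups = []
    for s in states:
        for g in groups:
            if all((s, other) in mergeable or (other, s) in mergeable for other in g):
                g.append(s)
                break
        else:
            groups.append([s])
    return len(groups)

# A and B may be grouped together while D stands alone: 2 groups.
two = minimum_state_groups(["A", "B", "D"], {("A", "B")})
# No states may be merged: each forms its own group, giving 3 groups.
three = minimum_state_groups(["A", "B", "D"], set())
```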
Step A4, obtaining the identification state of the preset local sample data according to the state sampling algorithm and the minimum state transition route, so as to determine the combined state information of all the preset local model parameters of the first participant.
The identification state of the preset local sample data is obtained by combination according to the state sampling algorithm and each minimum state transition route, so as to determine the combined state information of all the preset local model parameters of the first participant.
Step S20, determining target model parameters to be federated according to the combined state information;
and step S30, performing federated training with each second participant based on the target model parameters to be federated, obtaining a preset prediction model of the first participant.
In this embodiment, the target model parameters to be federated are determined according to the combined state information. For example, suppose a sample feature has three states, the A state, the B state, and the C state, with identification probabilities of 70%, 20%, and 10%, respectively; if the sample feature is identified as the A state, the corresponding identification model parameter may be 0.7. After the target model parameters are obtained, federated training is performed with each second participant based on the target model parameters to be federated, obtaining the preset prediction model of the first participant.
Compared with the prior art, in which different participants exchange data directly and the recognition probabilities of the different states of sample data are determined in a fixed manner, resulting in poor resource applicability during model training and invasion of user privacy, the present application, in the process of the first participant training local model parameters each time, dynamically determines a state sampling algorithm for the preset local sample data according to the resource attribute information of the first participant to obtain the identification state information of the preset local sample data and determine the combined state information of all the preset local model parameters of the first participant; determines the target model parameters to be federated according to the combined state information; and performs federated training with each second participant based on the target model parameters to be federated to obtain the preset prediction model of the first participant. In the present application, because the first participant and each second participant perform federated training, the privacy and security risks caused by direct data interaction between different participants are avoided. In addition, because the state sampling algorithm for the preset local sample data is dynamically determined according to the resource attribute information of the first participant during each training of the local model parameters, the identification state of the local sample data, and thus the combined state information of all the preset local model parameters of the first participant, is determined dynamically based on the resource attribute information rather than by fixing the identification probabilities of the different states of the sample data. Resource adaptability during model training is therefore improved, and the technical problems in the prior art that determining the recognition probabilities of different states of sample data in a fixed manner leads to poor resource adaptability during model training and easy invasion of user privacy are solved.
Further, based on the first embodiment in the present application, another embodiment is provided, in which the step of respectively determining the sub-memory consumption upper limit of each preset local model parameter includes:
step A1, determining the influence degree of each preset local model parameter on the model training result;
The influence degree of each preset local model parameter on the model training result is determined, wherein the manner of determining this influence degree includes:
determining the weight of each preset local model parameter to determine its influence degree on the model training result, or determining the influence factor of each preset local model parameter to determine its influence degree on the model training result.
Step A2, determining the sub-memory consumption upper limit of each preset local model parameter according to the influence degree.
The sub-memory consumption upper limit of each preset local model parameter is determined according to its influence degree: if the influence degree is large, the sub-memory consumption upper limit of that preset local model parameter is set high. The influence degree may be determined through the influence factor.
In this embodiment, the influence degree of each preset local model parameter on the model training result is determined, and the sub-memory consumption upper limit of each preset local model parameter is determined according to that influence degree. The sub-memory consumption upper limit of each preset local model parameter is thereby accurately determined.
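The allocation described in this embodiment can be sketched as a proportional split of the overall memory upper limit across parameters by influence degree. The proportional rule and all names are assumptions for illustration; the embodiment only requires that a larger influence degree yield a higher sub-memory consumption upper limit.

```python
def allocate_sub_limits(total_memory_limit, influence_degrees):
    """Split the overall memory upper limit across the preset local model
    parameters in proportion to each parameter's influence degree, so that
    a larger influence degree yields a larger sub-memory upper limit."""
    total = sum(influence_degrees.values())
    return {name: total_memory_limit * degree / total
            for name, degree in influence_degrees.items()}

# Influence degrees (e.g. weights or influence factors) are illustrative.
limits = allocate_sub_limits(500, {"w1": 0.5, "w2": 0.3, "w3": 0.2})
```

Here `w1`, with the largest influence degree, receives the largest sub-memory consumption upper limit.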
Further, based on the first embodiment of the present application, another embodiment is provided. In this embodiment, specifically, in the process of dynamically determining the state sampling algorithm for the preset local sample data, the sampling intermediate parameters that need to be saved are further determined according to the upper limit of memory consumption and the preset sampling consumption calculation rule.
In this embodiment, when memory resources are abundant, memory may be traded for efficiency. Specifically, the sampling intermediate parameters to be saved are determined according to the upper limit of memory consumption and the preset sampling consumption calculation rule. For example, suppose the sample data includes a Q1 feature, a Q2 feature, and a Q3 feature for each sample, and each sample feature has different states, such as the A, B, and C states of the Q1 feature, the D and E states of the Q2 feature, and the F, G, and H states of the Q3 feature. Combined state information is obtained by combining one state of each sample feature, such as the Q1 feature A state, Q2 feature D state, and Q3 feature F state, or the Q1 feature B state, Q2 feature D state, and Q3 feature F state, and so on. In this embodiment, to improve combination efficiency, the combination is performed sequentially: during combination, the Q1 feature A state results (which occupy a certain amount of memory that can be determined according to the upper limit of memory consumption and the preset sampling consumption calculation rule) may be saved, the Q1 feature A state is combined with the Q2 feature states and the Q3 feature states, and then the Q1 feature B state is combined with the Q2 feature states and the Q3 feature states, thereby improving the efficiency of obtaining the output data. Specifically, in this embodiment, an alias table may be set for each feature, and the combination of states is then performed based on each alias table, further improving efficiency.
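The per-feature alias table mentioned above is commonly realized with Walker's alias method, which builds the table in O(n) and then draws each state in O(1), trading memory for sampling efficiency exactly as this embodiment describes. The sketch below is a standard construction offered as an assumption about the intended mechanism, not code from the patent.

```python
import random

def build_alias_table(probs):
    """Build (probability, alias) tables for the given state probabilities
    using Walker's alias method."""
    n = len(probs)
    states = list(probs)
    scaled = {s: probs[s] * n for s in states}
    small = [s for s in states if scaled[s] < 1.0]
    large = [s for s in states if scaled[s] >= 1.0]
    prob, alias = {}, {}
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]  # donate the leftover mass
        (small if scaled[l] < 1.0 else large).append(l)
    for s in small + large:
        prob[s], alias[s] = 1.0, s
    return states, prob, alias

def alias_sample(states, prob, alias, rng):
    """Draw one state in O(1): pick a column, then flip its biased coin."""
    s = rng.choice(states)
    return s if rng.random() < prob[s] else alias[s]

# One feature's states with identification probabilities 70%/20%/10%.
states, prob, alias = build_alias_table({"A": 0.7, "B": 0.2, "C": 0.1})
rng = random.Random(0)
draws = [alias_sample(states, prob, alias, rng) for _ in range(10000)]
```

Over many draws the empirical frequency of each state approaches its identification probability, while each individual draw costs constant time.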
Further, based on the first embodiment and the second embodiment of the present application, the step of performing federated training with each second participant based on the first model parameters to be federated to obtain the preset prediction model of the first participant includes:
step B1, based on the first model parameter to be federated, aggregating the first model parameter to be federated with the second model parameter to be federated of each second participant by executing a preset federated procedure to obtain an aggregated parameter, and replacing and updating the first model parameter to be federated based on the aggregated parameter to obtain a replaced and updated model parameter of the first participant;
in this embodiment, based on the first model parameters to be federated, a preset federated procedure is executed to aggregate the first model parameters to be federated with the second model parameters to be federated of each second participant, obtaining aggregation parameters; the first model parameters to be federated are then replaced and updated based on the aggregation parameters, obtaining the replacement-updated model parameters of the first participant.
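A hedged sketch of the aggregation in step B1: the first participant's model parameters to be federated are averaged element-wise with those of each second participant, and the result replaces the first participant's parameters. Plain unweighted averaging is an illustrative assumption; the embodiment itself only states that the parameters are aggregated.

```python
def aggregate(first_params, second_params_list):
    """Average the first participant's model parameters to be federated with
    the second model parameters to be federated of every second participant
    (unweighted element-wise mean, assumed for illustration)."""
    all_params = [first_params] + second_params_list
    n = len(all_params)
    return [sum(vals) / n for vals in zip(*all_params)]

first = [0.7, 0.2]                   # first model parameters to be federated
seconds = [[0.5, 0.4], [0.3, 0.6]]   # second participants' parameters
updated = aggregate(first, seconds)  # replaces the first participant's params
```

The aggregation parameters then replace and update the first participant's local parameters before the next training iteration.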
And step B2, continuing to dynamically determine the state sampling algorithm for the replacement-updated model parameters so as to continue determining the first participant's further model parameters to be federated, and continuing the iterative training until a preset training completion condition is reached, obtaining the preset prediction model.
Iterative training is continued until a preset training completion condition is reached, such as convergence of a preset loss function, to obtain the preset prediction model.
The first participant and the second participant are in federated communication connection through a third party;
the step of aggregating the first model parameters to be federated based on the first model parameters to be federated with the second model parameters to be federated of each second participant by executing a preset federated procedure to obtain aggregated parameters, and performing replacement update on the first model parameters to be federated based on the aggregated parameters to obtain the replacement updated model parameters of the first participant, includes:
step C1, encrypting and sending the first model parameters to be federated to the third party, so that the third party aggregates the first model parameters to be federated with the received second model parameters to be federated of each second participant to obtain aggregation parameters;
the first model parameters to be federated are encrypted and sent to the third party, so as to avoid model parameter leakage, and so that the third party can aggregate the first model parameters to be federated with the received second model parameters to be federated of each second participant to obtain the aggregation parameters.
And step C2, receiving the aggregation parameters sent by the third party in an encrypted manner, and performing replacement updating on the first model parameters to be federated based on the aggregation parameters to obtain the replacement-updated model parameters of the first participant.
The aggregation parameters sent by the third party in an encrypted manner are received, and the first model parameters to be federated are replaced and updated based on the aggregation parameters, obtaining the replacement-updated model parameters of the first participant.
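The third-party flow of steps C1 and C2 can be sketched with a toy additive mask standing in for real encryption: participants mask their parameters with a shared secret, the third party averages the masked values without ever seeing plaintexts, and the first participant removes the mask from the returned aggregation parameters. The masking scheme and all names are assumptions for illustration only; a real deployment would use an actual cryptographic scheme.

```python
SHARED_MASK = 13.0  # toy secret known to the participants, not the third party

def encrypt(params):
    """C1: mask the model parameters to be federated before sending."""
    return [p + SHARED_MASK for p in params]

def third_party_aggregate(encrypted_lists):
    """The third party averages the masked parameter vectors it receives;
    the shared mask survives averaging, so plaintexts stay hidden from it."""
    n = len(encrypted_lists)
    return [sum(vals) / n for vals in zip(*encrypted_lists)]

def decrypt(masked_aggregation):
    """C2: the first participant removes the mask from the returned
    aggregation parameters before the replacement update."""
    return [v - SHARED_MASK for v in masked_aggregation]

first = encrypt([0.7, 0.2])                           # first participant
seconds = [encrypt([0.5, 0.4]), encrypt([0.3, 0.6])]  # second participants
agg = decrypt(third_party_aggregate([first] + seconds))
```

With the values above, `agg` equals the plain average of the three parameter vectors, which would then replace and update the first participant's model parameters to be federated.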
In this embodiment, the preset prediction model is accurately obtained through federated training.
Referring to fig. 3, fig. 3 is a schematic diagram of an apparatus structure of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 3, the data processing device based on state transition kernel optimization may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the aforementioned processor 1001.
Optionally, the data processing device based on state transition kernel optimization may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. The user interface may include a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and optionally may also include a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the structure of the data processing device based on state transition kernel optimization shown in fig. 3 does not constitute a limitation on the device, which may include more or fewer components than those shown, a combination of some components, or a different arrangement of components.
As shown in fig. 3, the memory 1005, as a medium, may include an operating system, a network communication module, and a data processing program based on state transition kernel optimization. The operating system is a program that manages and controls the hardware and software resources of the device and supports the operation of the data processing program based on state transition kernel optimization as well as other software and/or programs. The network communication module is used to implement communication between the components inside the memory 1005 and with other hardware and software in the data processing system based on state transition kernel optimization.
In the data processing device based on state transition kernel optimization shown in fig. 3, the processor 1001 is configured to execute the data processing program based on state transition kernel optimization stored in the memory 1005, implementing the steps of the data processing method based on state transition kernel optimization described in any one of the above.
The specific implementation of the data processing device based on state transition kernel optimization in the present application is substantially the same as the embodiments of the data processing method based on state transition kernel optimization described above, and is not described herein again.
The present application further provides a data processing apparatus based on state transition kernel optimization, which is applied to a first party, where the first party and a second party perform federated communication connection, and the data processing apparatus based on state transition kernel optimization includes:
the first determining module is used for dynamically determining a state sampling algorithm of preset local sample data according to the resource attribute information of the first participant in the process of training local model parameters of the first participant each time so as to obtain the identification state information of the preset local sample data and determine the combined state information of all the preset local model parameters of the first participant;
the second determining module is used for determining target model parameters to be federated according to the combined state information;
and the federation module is used for carrying out federation training with each second participant based on the target model parameters to be federated to obtain a preset prediction model of the first participant.
Optionally, the first determining module includes:
the first determining unit is used for determining an upper memory consumption limit according to the resource attribute information of a first participant in the process of training local model parameters of the first participant each time;
and the second determining unit is used for dynamically determining a state sampling algorithm of preset local sample data according to the upper memory consumption limit and a preset sampling consumption calculation rule so as to obtain identification state information of the preset local sample data, so as to determine state information of each preset local model parameter of the first participant, and so as to determine combined state information of all preset local model parameters of the first participant.
Optionally, the second determining unit includes:
the first determining subunit is used for respectively determining the upper limit of the consumption of the sub-memories of each preset local model parameter;
the second determining subunit is used for determining the type and the quantity of each preset local model parameter, and determining a state sampling algorithm of preset local sample data in a traversal mode according to the upper limit of the consumption of the sub-memory, the calculation rule of the preset sampling consumption and the type and the quantity of the states;
the third determining subunit is used for determining the minimum state transition route of each preset local model parameter under the corresponding state sampling algorithm;
and the fourth determining subunit is configured to obtain the identification state of the preset local sample data according to the state sampling algorithm and the minimum state transition route, so as to determine the combined state information of all preset local model parameters of the first participant.
Optionally, the first determining subunit is configured to implement:
determining the influence degree of each preset local model parameter on a model training result;
and determining the upper limit of the consumption of the sub-memory of each preset local model parameter according to the influence degree.
Optionally, in the process of dynamically determining a state sampling algorithm for presetting local sample data, a sampling intermediate parameter to be stored is further determined according to the upper memory consumption limit and a preset sampling consumption calculation rule.
Optionally, the federation module includes:
the aggregation unit is used for aggregating the first model parameters to be federated with the second model parameters to be federated of each second participant by executing a preset federated procedure based on the first model parameters to be federated to obtain aggregation parameters, and performing replacement updating on the first model parameters to be federated based on the aggregation parameters to obtain replacement updated model parameters of the first participant;
and the third determining unit is used for continuing to dynamically determine the state sampling algorithm for the replacement-updated model parameters so as to continue determining the first participant's further model parameters to be federated, and continuing the iterative training until a preset training completion condition is reached to obtain the preset prediction model.
Optionally, the first participant is in federated communication connection with the second participant through a third party;
the third determination unit includes:
the sending unit is used for encrypting and sending the first model parameters to be federated to the third party, so that the third party aggregates the first model parameters to be federated with the received second model parameters to be federated of each second participant to obtain aggregation parameters;
and the receiving unit is used for receiving the aggregation parameters sent by the third party in an encrypted manner, and performing replacement updating on the first model parameters to be federated based on the aggregation parameters to obtain the replacement-updated model parameters of the first participant.
The specific implementation of the data processing apparatus based on state transition kernel optimization in the present application is substantially the same as the embodiments of the data processing method based on state transition kernel optimization, and is not described herein again.
The present application provides a medium storing one or more programs, which are executable by one or more processors to implement the steps of the data processing method based on state transition kernel optimization described in any one of the above.
The specific implementation of the medium of the present application is substantially the same as that of each embodiment of the data processing method based on state transition kernel optimization, and is not described herein again.
The present application also provides a computer program product, comprising a computer program, which when executed by a processor implements the steps of the above-described data processing method based on state transition kernel optimization.
The specific implementation of the computer program product of the present application is substantially the same as the embodiments of the data processing method based on state transition kernel optimization, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better embodiment. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the present specification and drawings, or used directly or indirectly in other related fields, are included in the scope of the present invention.

Claims (11)

1. A data processing method based on state transition core optimization is applied to a first participant, the first participant and a second participant are in federated communication connection, and the data processing method based on state transition core optimization comprises the following steps:
in the process of training local model parameters of a first participant each time, dynamically determining a state sampling algorithm of preset local sample data according to the resource attribute information of the first participant to obtain identification state information of the preset local sample data so as to determine combined state information of all preset local model parameters of the first participant;
determining target model parameters to be federated according to the combined state information;
and performing federated training with each second participant based on the target model parameters to be federated to obtain a preset prediction model of the first participant.
2. The data processing method based on state transition kernel optimization of claim 1, wherein the step of dynamically determining a state sampling algorithm of preset local sample data according to the resource attribute information of the first participant during each training of the local model parameters by the first participant to obtain the identification state information of the preset local sample data so as to determine the combined state information of all the preset local model parameters of the first participant comprises:
determining an upper limit of memory consumption according to the resource attribute information of a first participant in the process of training local model parameters of the first participant each time;
and dynamically determining a state sampling algorithm of preset local sample data according to the upper memory consumption limit and a preset sampling consumption calculation rule to obtain the identification state information of the preset local sample data, and determining the state information of each preset local model parameter of the first participant to determine the combined state information of all the preset local model parameters of the first participant.
3. The data processing method based on state transition kernel optimization according to claim 2, wherein the step of dynamically determining a state sampling algorithm of preset local sample data according to the upper memory consumption limit and a preset sampling consumption calculation rule to obtain the identification state information of the preset local sample data, to determine the state information of each preset local model parameter of the first participant, to determine the combined state information of all the preset local model parameters of the first participant, comprises:
respectively determining a sub-memory consumption upper limit for each preset local model parameter;
determining the type and number of each preset local model parameter, and determining a state sampling algorithm of preset local sample data by traversal according to the sub-memory consumption upper limits, the preset sampling consumption calculation rule, and the determined types and numbers;
determining a minimum state transition route of each preset local model parameter under the corresponding state sampling algorithm;
and obtaining the identification state of the preset local sample data according to the state sampling algorithm and the minimum state transition route, so as to determine the combined state information of all the preset local model parameters of the first participant.
4. The data processing method based on state transition kernel optimization of claim 3, wherein the step of determining the sub-memory consumption upper limit of each preset local model parameter respectively comprises:
determining the degree of influence of each preset local model parameter on the model training result;
and determining the sub-memory consumption upper limit of each preset local model parameter according to the degree of influence.
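Claim 4 leaves the allocation rule open; one plausible reading, sketched below under that assumption, is to split the total memory budget across parameters in proportion to each parameter's influence score. The scores and function name are illustrative only.

```python
# Hypothetical sketch: per-parameter sub-memory upper limits allocated
# in proportion to each parameter's (made-up) influence score.

def split_budget(total_bytes, influence):
    """Return sub-memory upper limits proportional to influence."""
    total_influence = sum(influence.values())
    return {name: int(total_bytes * score / total_influence)
            for name, score in influence.items()}

limits = split_budget(1_000_000, {"w1": 3.0, "w2": 1.0, "b1": 1.0})
```

A parameter judged three times as influential receives three times the sub-memory budget, so the sampler can track its states at finer granularity.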
5. The data processing method based on state transition kernel optimization of claim 2, wherein, in the process of dynamically determining the state sampling algorithm of preset local sample data, the sampling intermediate parameters to be stored are further determined according to the upper memory consumption limit and the preset sampling consumption calculation rule.
6. The data processing method based on state transition kernel optimization of claim 1, wherein the step of performing federated training with each second participant based on the first model parameters to be federated to obtain the preset prediction model of the first participant comprises:
executing a preset federated procedure based on the first model parameters to be federated, and aggregating the first model parameters to be federated with the second model parameters to be federated of each second participant to obtain aggregation parameters, so as to replace and update the first model parameters to be federated based on the aggregation parameters and obtain the replacement-updated model parameters of the first participant;
and continuing to dynamically determine the state sampling algorithm for the replacement-updated model parameters, so as to determine further model parameters of the first participant to be federated, and continuing iterative training until a preset training completion condition is reached to obtain the preset prediction model.
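The claim names an aggregate-then-replace loop without fixing the aggregation rule; the sketch below assumes plain averaging (one common choice in federated learning) purely for illustration, with a fixed round count standing in for the preset training-completion condition. All names are hypothetical.

```python
# Hypothetical sketch of claim 6's loop: aggregate the to-be-federated
# value across participants, replace the local copy, and iterate.
# Plain averaging is an assumption, not the patent's method.

def aggregate(values):
    """Average the to-be-federated parameter over all participants."""
    return sum(values) / len(values)

def train_round(local, peers):
    """One federated round: aggregate, then replace-update locally."""
    return aggregate([local] + peers)

def run(local, peers, rounds=3):
    for _ in range(rounds):   # stands in for the preset completion condition
        local = train_round(local, peers)
    return local

final = run(local=4.0, peers=[1.0, 1.0])
```

Each round pulls the local value toward the group mean, so the replacement-updated parameter converges as the rounds proceed.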
7. The data processing method based on state transition kernel optimization of claim 6, wherein the first participant is in federated communication connection with each second participant via a third party;
the step of executing a preset federated procedure based on the first model parameters to be federated, aggregating the first model parameters to be federated with the second model parameters to be federated of each second participant to obtain aggregation parameters, and replacing and updating the first model parameters to be federated based on the aggregation parameters to obtain the replacement-updated model parameters of the first participant comprises:
encrypting and sending the first model parameters to be federated to the third party, so that the third party aggregates the first model parameters to be federated with the received second model parameters to be federated of each second participant to obtain the aggregation parameters;
and receiving the aggregation parameters sent by the third party in encrypted form, and replacing and updating the first model parameters to be federated based on the aggregation parameters to obtain the replacement-updated model parameters of the first participant.
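Claim 7 specifies encrypted upload to a third party but not the encryption scheme. Purely as a stand-in, the sketch below uses additive masking (masks that cancel in the sum), which lets the coordinator compute the aggregate without seeing any individual parameter; this is one well-known construction, not the patent's. All names are invented.

```python
# Hypothetical sketch of claim 7's topology: participants mask their
# parameters before upload; the third-party coordinator sees only masked
# values, yet the masks cancel so the average is exact.
# Additive masking is an illustrative stand-in for the unspecified encryption.
import random

def zero_sum_masks(n_parties, seed=7):
    """Random masks that sum to exactly zero across participants."""
    rng = random.Random(seed)
    masks = [rng.uniform(-1.0, 1.0) for _ in range(n_parties - 1)]
    masks.append(-sum(masks))
    return masks

def masked_upload(values, masks):
    """Each participant sends its parameter plus its mask."""
    return [v + m for v, m in zip(values, masks)]

def coordinator_aggregate(uploads):
    """Third party averages the masked uploads; masks cancel in the sum."""
    return sum(uploads) / len(uploads)

values = [4.0, 1.0, 1.0]
uploads = masked_upload(values, zero_sum_masks(len(values)))
agg = coordinator_aggregate(uploads)
```

The coordinator returns only the aggregate, which each participant then uses to replace-update its local to-be-federated parameters, matching the claim's receive-and-update step.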
8. A data processing device based on state transition kernel optimization, applied to a first participant, the first participant being in federated communication connection with a second participant, the data processing device based on state transition kernel optimization comprising:
the first determining module is used for dynamically determining a state sampling algorithm of preset local sample data according to the resource attribute information of the first participant in each round of training of the local model parameters of the first participant, so as to obtain the identification state information of the preset local sample data and determine the combined state information of all the preset local model parameters of the first participant;
the second determining module is used for determining target model parameters to be federated according to the combined state information;
and the federation module is used for carrying out federation training with each second participant based on the target model parameters to be federated to obtain a preset prediction model of the first participant.
9. A data processing apparatus based on state transition kernel optimization, the data processing apparatus comprising a memory and a processor,
the memory being used for storing a program implementing the data processing method based on state transition kernel optimization;
the processor being configured to execute the program to implement the steps of the data processing method based on state transition kernel optimization according to any one of claims 1 to 7.
10. A medium having stored thereon a program implementing a data processing method based on state transition kernel optimization, the program being executable by a processor to implement the steps of the data processing method based on state transition kernel optimization according to any one of claims 1 to 7.
11. A computer program product comprising a computer program, characterized in that the computer program realizes the method of any of claims 1 to 7 when executed by a processor.
CN202110115051.9A 2021-01-27 2021-01-27 Data processing method, device, equipment and medium based on state transition kernel optimization Pending CN113158223A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110115051.9A CN113158223A (en) 2021-01-27 2021-01-27 Data processing method, device, equipment and medium based on state transition kernel optimization
PCT/CN2021/101998 WO2022160578A1 (en) 2021-01-27 2021-06-24 State transition core optimization-based data processing method, apparatus and device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110115051.9A CN113158223A (en) 2021-01-27 2021-01-27 Data processing method, device, equipment and medium based on state transition kernel optimization

Publications (1)

Publication Number Publication Date
CN113158223A true CN113158223A (en) 2021-07-23

Family

ID=76878905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110115051.9A Pending CN113158223A (en) 2021-01-27 2021-01-27 Data processing method, device, equipment and medium based on state transition kernel optimization

Country Status (2)

Country Link
CN (1) CN113158223A (en)
WO (1) WO2022160578A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110297848A (en) * 2019-07-09 2019-10-01 深圳前海微众银行股份有限公司 Recommended models training method, terminal and storage medium based on federation's study
CN111275491A (en) * 2020-01-21 2020-06-12 深圳前海微众银行股份有限公司 Data processing method and device
CN111553470A (en) * 2020-07-10 2020-08-18 成都数联铭品科技有限公司 Information interaction system and method suitable for federal learning
CN111898768A (en) * 2020-08-06 2020-11-06 深圳前海微众银行股份有限公司 Data processing method, device, equipment and medium
WO2021008017A1 (en) * 2019-07-17 2021-01-21 深圳前海微众银行股份有限公司 Federation learning method, system, terminal device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711529B (en) * 2018-11-13 2022-11-08 中山大学 Cross-domain federated learning model and method based on value iterative network
CN110263908B (en) * 2019-06-20 2024-04-02 深圳前海微众银行股份有限公司 Federal learning model training method, apparatus, system and storage medium
CN110874649B (en) * 2020-01-16 2020-04-28 支付宝(杭州)信息技术有限公司 Federal learning execution method, system, client and electronic equipment
CN111882133B (en) * 2020-08-03 2022-02-01 重庆大学 Prediction-based federated learning communication optimization method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUAN WANG: "Interpret Federated Learning with Shapley Values", arXiv, 30 September 2019 (2019-09-30), pages 6 *
LI JIANMENG: "Research and Application of Big Data Risk Control Technology Based on Federated Learning", China Masters' Theses Full-text Database (Information Science and Technology), 15 August 2020 (2020-08-15), pages 138-307 *

Also Published As

Publication number Publication date
WO2022160578A1 (en) 2022-08-04

Similar Documents

Publication Publication Date Title
CN110351111A (en) A kind of subscription processing method, network node and customer data base
CN111222647A (en) Federal learning system optimization method, device, equipment and storage medium
WO2017045450A1 (en) Resource operation processing method and device
CN108023788A (en) Monitoring data method for uploading, device, equipment, system and storage medium
CN108804484A (en) The data measures and procedures for the examination and approval, equipment and computer readable storage medium
CN106326062A (en) Method and device for controlling running state of application program
CN112329010A (en) Adaptive data processing method, device, equipment and storage medium based on federal learning
CN107329916A (en) A kind of USB device control method, device and computing device
CN113807926A (en) Recommendation information generation method and device, electronic equipment and computer readable medium
CN115454561A (en) Customized interface display method, device, equipment and storage medium
CN104601448A (en) Method and device for handling virtual card
CN114168293A (en) Hybrid architecture system and task scheduling method based on data transmission time consumption
CN112861165A (en) Model parameter updating method, device, equipment, storage medium and program product
CN110264035B (en) Workflow configuration method, workflow configuration device, terminal and storage medium
CN113158223A (en) Data processing method, device, equipment and medium based on state transition kernel optimization
CN116094907A (en) Complaint information processing method, complaint information processing device and storage medium
CN110119429A (en) Data processing method, device, computer equipment and storage medium
CN114866970A (en) Policy control method, system and related equipment
CN112380411B (en) Sensitive word processing method, device, electronic equipment, system and storage medium
CN112417259B (en) Media resource processing method, device, equipment and storage medium
CN113706097A (en) Business approval method, device, equipment and storage medium
CN111639918A (en) Approval method and device, electronic equipment and computer readable medium
CN112270529A (en) Method and device for examining and approving business form, electronic equipment and storage medium
CN109559225A (en) A kind of method of commerce and device
CN108924668A (en) Picture load, data offering method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination