CN111753996A - Optimization method, device, equipment and storage medium of scheme determination model - Google Patents

Optimization method, device, equipment and storage medium of scheme determination model

Info

Publication number
CN111753996A
CN111753996A (application CN202010591886.7A)
Authority
CN
China
Prior art keywords
participants
participant
contribution
prediction result
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010591886.7A
Other languages
Chinese (zh)
Inventor
霍昱光
王雪
王梓桐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCB Finetech Co Ltd
Original Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp, CCB Finetech Co Ltd filed Critical China Construction Bank Corp
Priority to CN202010591886.7A priority Critical patent/CN111753996A/en
Publication of CN111753996A publication Critical patent/CN111753996A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention discloses an optimization method, device, equipment and storage medium of a scheme determination model. The method comprises the following steps: acquiring to-be-processed data of at least two participants; predicting the to-be-processed data of the at least two participants based on pre-trained scheme prediction submodels of the at least two participants to obtain a prediction result; and determining a target participant among the at least two participants and sending the prediction result to the target participant. By determining the target participant that receives the prediction result, the embodiment of the invention solves the prior-art problem that the prediction result can only be sent to the party providing the label, shortens the transmission path of the prediction result, prevents the prediction result from being tampered with during transmission, and improves the transmission efficiency and security of the prediction result.

Description

Optimization method, device, equipment and storage medium of scheme determination model
Technical Field
The embodiment of the invention relates to computer technology, in particular to a method, a device, equipment and a storage medium for optimizing a scheme determination model.
Background
Federated learning techniques in the field of data security are becoming increasingly popular. Sample-aligned federated learning, also referred to as "vertical (longitudinal) federated learning", is one such technique in which each participant provides different features of the modeling samples, and each participant ultimately holds a part of the model. In the prediction stage after training is finished, the partial models held by all the participants act together to complete the prediction.
However, only the participant that provides the label in the training phase (party A) can obtain the prediction result in the prediction phase. A participant that does not provide the label (party B) can only obtain the prediction result after party A exports it from the model and manually sends it to party B. As a result, the prediction result is obtained inefficiently, and there is a possibility that party A modifies the prediction result, which reduces its authenticity.
Disclosure of Invention
The embodiment of the invention provides an optimization method, device, equipment and storage medium of a scheme determination model, which are used to improve the authenticity of the prediction result and the efficiency of sending it, and to realize the optimization of the scheme determination model.
In a first aspect, an embodiment of the present invention provides an optimization method for a solution determination model, where the method includes:
acquiring data to be processed of at least two participants;
predicting to-be-processed data of at least two participants to obtain a prediction result based on a scheme prediction submodel of the at least two participants trained in advance;
determining a target participant of the at least two participants, and sending the prediction result to the target participant.
In a second aspect, an embodiment of the present invention further provides an optimization apparatus for a solution determination model, where the apparatus includes:
the data acquisition module is used for acquiring data to be processed of at least two participants;
the result prediction module is used for predicting the data to be processed of the at least two participants to obtain a prediction result based on the pre-trained scheme prediction submodels of the at least two participants;
and the result sending module is used for determining a target participant in at least two participants and sending the prediction result to the target participant.
In a third aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the method for optimizing a solution determination model according to any embodiment of the present invention.
In a fourth aspect, embodiments of the present invention further provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the method for optimizing a solution-determination model according to any of the embodiments of the present invention.
The embodiment of the invention obtains the scheme prediction submodels of at least two participants through federated learning training, obtains the prediction result through the trained submodels, determines the target participant capable of receiving the prediction result, and sends the prediction result to that participant. This solves the prior-art problem that the prediction result can only be sent to the participant who provides the label in the training stage: all target participants can obtain the prediction result, and the label-providing participant does not need to forward the prediction result to other participants after prediction is finished. The data transmission process is shortened, the data transmission efficiency is improved, the label-providing participant is prevented from tampering with the prediction result during transmission, and the security and authenticity of the prediction result obtained by other participants are improved.
Drawings
FIG. 1 is a schematic flow chart of a method for optimizing a solution determination model according to a first embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for optimizing a solution determination model according to a second embodiment of the present invention;
FIG. 3 is a block diagram of an optimization apparatus for a scheme determination model according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device in the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a schematic flowchart of an optimization method for a solution determination model according to an embodiment of the present invention, which may be applied to a case where a prediction result is obtained by using the solution determination model, and the method may be performed by an optimization apparatus for the solution determination model. As shown in fig. 1, the method specifically includes the following steps:
and step 110, acquiring data to be processed of at least two participants.
The scheme determination model is trained and used for prediction by a federated learning system. The federated learning system may be a sample-aligned federated learning system, also called a vertical (longitudinal) federated learning system, and the scheme determination model may be, for example, a model that provides a fund compensation scheme for the participants. The federated learning system trains a scheme prediction submodel for each of the at least two participants, and the submodels are combined to realize the scheme determination model. When a participant uses the scheme determination model, a model prediction instruction is sent to its local storage facility, each locally stored scheme prediction submodel is loaded into the federated learning system, and the to-be-processed data to be predicted is input, so that the federated learning system obtains the to-be-processed data and predicts the scheme.
And step 120, predicting the data to be processed of the at least two participants to obtain a prediction result based on the pre-trained scheme prediction submodels of the at least two participants.
A scheme prediction submodel is first trained for each of the at least two participants. Each participant provides part of the features of the training samples, and one participant additionally provides the labels. Each participant first performs local computation and encryption on its own data, then interacts with the other participants multiple times to exchange intermediate data, and finally the multi-party intermediate data is aggregated at the label-providing party to evaluate the training loss, where the intermediate data is the intermediate result values of the model during the interactive computation. The label-providing party decides whether to continue training according to the loss value; if the loss value meets a preset loss threshold, training of each participant's scheme prediction submodel is finished.
After the scheme prediction submodels are trained, the to-be-processed data of each participant is input into its corresponding trained scheme prediction submodel. Each submodel first performs local computation and encryption on its own party's data and then exchanges intermediate data multiple times. The party that provided the labels in the training stage performs the final aggregation of the multi-party intermediate data to obtain the prediction result. For example, if there are two participants, party A and party B, and in the training phase party A provides part of the features and the labels of the training samples while party B provides the other part of the features, then the loss value in the training phase is determined by party A, and in the prediction phase party A also performs the final calculation of the prediction result.
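The following is a minimal sketch of this prediction-phase aggregation, assuming a logistic-regression-style vertical federated model in which each party computes a partial score on its own features and the label-providing party aggregates them. The names (PartySubmodel, aggregate_and_predict) and the plaintext exchange are illustrative assumptions, not the patent's API; a real deployment would exchange the intermediate values under encryption.

```python
# Illustrative sketch only: each party scores its own features locally and only
# the intermediate partial scores are sent to the label-providing party (party A),
# which aggregates them into the final prediction.
import numpy as np

class PartySubmodel:
    """Holds one participant's share of the joint model (its feature weights)."""
    def __init__(self, weights: np.ndarray):
        self.weights = weights

    def partial_score(self, features: np.ndarray) -> np.ndarray:
        # Local operation on the party's own features; only this intermediate
        # result value is passed onward, never the raw data.
        return features @ self.weights

def aggregate_and_predict(partial_scores: list[np.ndarray], bias: float = 0.0) -> np.ndarray:
    # The label-providing party sums the intermediate results and applies the
    # link function to obtain the final prediction result.
    logits = np.sum(partial_scores, axis=0) + bias
    return 1.0 / (1.0 + np.exp(-logits))

# Example: party A holds 3 features, party B holds 2, for the same aligned samples.
party_a = PartySubmodel(np.array([0.4, -0.2, 0.1]))
party_b = PartySubmodel(np.array([0.3, 0.5]))
x_a = np.array([[1.0, 0.5, 2.0], [0.2, 1.1, 0.3]])
x_b = np.array([[0.7, 1.2], [1.5, 0.1]])
prediction = aggregate_and_predict([party_a.partial_score(x_a), party_b.partial_score(x_b)])
```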
Step 130, determining a target party of the at least two parties, and sending the prediction result to the target party.
After the prediction result is obtained, the target participant capable of receiving the prediction result needs to be determined so that the prediction result can be sent to that participant. In the prior art, the prediction result can only be sent to the participant who provides the labels in the training stage. For example, suppose the participants are party A and party B: in the training stage, party A provides part of the features and the labels of the training samples and party B provides the other part of the features, so party A is the target participant and only party A can obtain the prediction result. If party B wants to obtain the prediction result, it needs to negotiate with party A, and party A then sends the prediction result to party B.
In this embodiment, optionally, determining a target participant of the at least two participants includes: acquiring a contribution threshold in a current incentive mechanism; acquiring contribution values of the at least two participants in the model training stage; and if the contribution value of a participant is larger than the contribution threshold, taking that participant as a target participant.
Specifically, the federated learning system determines and stores the contribution values of all participants to model training during the model training stage. The contribution value may represent the role played by the training samples provided by each participant in the model training process, and may be determined according to data validity, data authenticity, and the like. The data validity may represent the magnitude of the improvement in overall model performance after the training sample features provided by the participant are added, and the data authenticity represents whether the training sample features provided by the participant are sufficiently truthful.
An incentive mechanism is preset in the federated learning system, and the incentive mechanism may include a sending rule for the prediction result in which a contribution threshold is set. In the prediction stage, the contribution threshold in the current incentive mechanism and the contribution values of all participants in the model training stage are acquired, and each participant's contribution value is compared with the contribution threshold. If a participant's contribution value is larger than the contribution threshold, that participant is taken as a target participant and can obtain the prediction result, and the federated learning system instructs the participant that provided the labels in the training stage to send the prediction result to the participants meeting the contribution threshold requirement. If a participant's contribution value does not exceed the contribution threshold and the participant did not provide the labels in the training stage, the participant is not a target participant and cannot receive the prediction result. The benefit of this arrangement is that the contribution of each participant in the model training process can be obtained quickly through the comparison, the target participants are determined according to the contribution, the enthusiasm of participants that can only provide features in the training stage is greatly improved, the prediction result is prevented from being maliciously tampered with while being transmitted among the participants after prediction is finished, and the transmission efficiency and authenticity of the prediction result are improved.
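A small sketch of this incentive-mechanism rule follows, assuming the recorded training-phase contribution values and the threshold are already available; the field names, the threshold value, and the convention that the label provider always receives the result are illustrative assumptions.

```python
# Illustrative sketch: any participant whose training-phase contribution value
# exceeds the configured threshold becomes a target participant and receives the
# prediction result.
def select_target_participants(contributions: dict[str, float],
                               contribution_threshold: float,
                               label_provider: str) -> set[str]:
    targets = {label_provider}  # the label-providing party always obtains the result
    for participant, contribution in contributions.items():
        if contribution > contribution_threshold:
            targets.add(participant)
    return targets

# Example: B exceeds the threshold and is added; C does not and is excluded.
targets = select_target_participants(
    contributions={"A": 0.45, "B": 0.32, "C": 0.08},
    contribution_threshold=0.2,
    label_provider="A",
)
# targets == {"A", "B"}
```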
In this embodiment, optionally, obtaining the contribution values of the at least two participants in the model training phase includes: determining the contribution values of the at least two participants in the model training phase according to the intermediate data of the at least two participants in the model training phase.
Specifically, the contribution value is determined in the model training stage. In the model training stage, the participants exchange intermediate data, and each participant's contribution value to model training is determined according to this intermediate data. For example, party A transmits the data it has computed locally to party B; party B performs further computation on that data using its own training sample features to obtain intermediate data, and the degree to which party B's training sample features promote the training of the model can be determined from this intermediate data, thereby yielding party B's contribution value to model training. If there is a party C, party B transmits the intermediate data to party C, and party C's contribution value is determined in the same way. The benefit of this arrangement is that the effect of the training sample features provided by each participant on model training can be determined; the greater the effect, the higher the contribution value, so the target participants can be determined according to the contribution values and the optimization of the scheme determination model is realized.
In this embodiment, optionally, determining contribution values of the at least two participants in the model training phase according to intermediate data of the at least two participants in the model training phase includes: determining and recording the information quantity of the characteristic variables of the training samples of the at least two participants according to the intermediate data of the at least two participants in the model training stage, wherein the information quantity represents the predictive capability of the characteristic variables of the training samples; and determining contribution values of the at least two participants in a model training stage according to the information quantity of the characteristic variables of the at least two participant training samples.
Specifically, the contribution value may be determined according to the data validity of each participant's training sample features, where the data validity is the improvement in overall model performance after the features provided by the participant are added, and the IV (Information Value) may be used as the criterion for judging data validity. The IV is mainly used to evaluate the predictive power of an encoded input variable, and its magnitude may be used as the information quantity representing the strength of the predictive power of a training sample feature variable. An IV value can be calculated from the intermediate data, and the contribution value of each participant in the model training stage is determined according to the information quantity of the feature variables of the training samples of the at least two participants. The higher the IV value, the stronger the predictive power of the variable and the higher the contribution value. The WOE (Weight of Evidence) can be determined by a binning method, and the IV value can be calculated from the WOE. The WOE and IV values can be calculated by the following formulas:
$$\mathrm{WOE}_i = \ln\!\left(\frac{b_i / b_{\mathrm{total}}}{g_i / g_{\mathrm{total}}}\right)$$

$$\mathrm{IV} = \sum_i \left(\frac{b_i}{b_{\mathrm{total}}} - \frac{g_i}{g_{\mathrm{total}}}\right)\cdot \mathrm{WOE}_i$$

where i is the bin index, WOE_i is the weight of evidence of the current bin, b_i is the number of bad samples in the current bin, g_i is the number of good samples in the current bin, b_total is the total number of bad samples, and g_total is the total number of good samples. The beneficial effects of this arrangement are that calculating the IV value improves the calculation precision of the contribution value, provides a judgment standard for the contribution value, accurately determines the target participants, and realizes the optimization of the scheme determination model.
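A brief sketch of this binned WOE/IV calculation, following the formulas above, is given below. The bin edges, the small epsilon guard, and the sample data are illustrative assumptions; in the federated setting each party would compute this only over the intermediate values it is allowed to see.

```python
# Illustrative sketch: quantile binning, per-bin WOE, and the summed IV value.
import numpy as np

def woe_iv(feature: np.ndarray, label: np.ndarray, n_bins: int = 5, eps: float = 1e-9):
    """label: 1 = bad sample, 0 = good sample. Returns (per-bin WOE list, total IV)."""
    edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1))
    bin_idx = np.clip(np.digitize(feature, edges[1:-1]), 0, n_bins - 1)
    b_total, g_total = label.sum(), (1 - label).sum()
    woes, iv = [], 0.0
    for i in range(n_bins):
        in_bin = bin_idx == i
        b_i = label[in_bin].sum() + eps          # bad samples in bin i
        g_i = (1 - label[in_bin]).sum() + eps    # good samples in bin i
        woe_i = np.log((b_i / b_total) / (g_i / g_total))
        iv += (b_i / b_total - g_i / g_total) * woe_i
        woes.append(woe_i)
    return woes, iv

# Example with synthetic data: the feature is predictive, so IV comes out well above 0.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = (x + rng.normal(scale=1.5, size=1000) > 0).astype(int)
_, iv_value = woe_iv(x, y)   # higher IV -> stronger predictive power -> higher contribution
```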
According to this technical scheme, the scheme prediction submodels of at least two participants are obtained through federated learning training, the prediction result is obtained through the trained submodels, the target participants capable of receiving the prediction result are determined, and the prediction result is sent to those participants. This solves the prior-art problem that the prediction result can only be sent to the participant who provides the label in the training stage: all target participants can obtain the prediction result, the label-providing participant does not need to forward the prediction result to other participants after prediction is finished, the data transmission process is shortened, the data transmission efficiency is improved, the label-providing participant is prevented from tampering with the scheme during transmission, the security and authenticity of the scheme obtained by other participants are improved, and the optimization of the scheme determination model is realized.
Example two
Fig. 2 is a schematic flow chart of an optimization method for a scheme determination model according to a second embodiment of the present invention, which is further optimized on the basis of the above embodiment; the method may be executed by an optimization apparatus for the scheme determination model. As shown in fig. 2, the method specifically includes the following steps:
step 210, obtaining data to be processed of at least two participants.
And step 220, predicting the data to be processed of the at least two participants to obtain a prediction result based on the pre-trained scheme prediction submodels of the at least two participants.
Step 230, at least two participant identifications are obtained.
Each participant is assigned an identification before the model is trained, for example A and B. Before the target participant is determined, the participant identifications of all participants are obtained so that the target participant can be determined according to the participant identifications.
And 240, comparing the at least two participant identifications with preset candidate participant identifications for receiving the prediction result, determining the participant consistent with the candidate participant identifications in the at least two participants as a target participant, and sending the prediction result to the target participant.
Before the participants use the scheme determination model for prediction, they negotiate and determine which participants may receive the prediction result, and the identifications of those participants are stored as candidate participant identifications.
After the federated learning system obtains the prediction result, the identifications of all participants in the prediction are compared with the preset candidate participant identifications, the participants whose identifications are consistent with the candidate participant identifications are found among all participants in the prediction, and the participants found are determined as the target participants that can receive the prediction result. For example, if the participant identifications are A, B and C, and the negotiated candidate participant identifications that may receive the prediction result are A and B, then participants A and B can receive the prediction result and C cannot. The benefit of this arrangement is that the target participants can be determined directly through negotiation without calculation, which improves the efficiency of determining the target participants; the prediction result can be sent directly to the target participants, malicious tampering of the prediction result is avoided, and the optimization of the scheme determination model is realized.
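A minimal sketch of this negotiated-identifier rule follows: the system intersects the identifiers of the participants in the current prediction with the pre-agreed candidate identifiers. The identifier values and function name are illustrative assumptions.

```python
# Illustrative sketch: only participants whose identifications appear in the
# negotiated candidate set become target participants.
def determine_targets_by_identifier(participant_ids: set[str],
                                    candidate_ids: set[str]) -> set[str]:
    return participant_ids & candidate_ids

# Parties A, B and C take part in prediction; A and B were negotiated as candidates.
targets = determine_targets_by_identifier({"A", "B", "C"}, {"A", "B"})
# targets == {"A", "B"}; C does not receive the prediction result
```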
The embodiment of the invention obtains the scheme prediction submodels of at least two participants through federated learning training, obtains the prediction result using the trained submodels, determines the target participants capable of receiving the prediction result according to the participants' negotiation, and sends the prediction result directly to those participants. This solves the prior-art problem that the prediction result can only be sent to the participant who provides the label in the training stage: all target participants can obtain the prediction result, the label-providing participant does not need to forward the prediction result to other participants after prediction is finished, the data transmission process is shortened, the data transmission efficiency and the efficiency of determining the target participants are improved, the label-providing participant is prevented from tampering with the scheme during transmission, the security and authenticity of the scheme obtained by other participants are improved, and the optimization of the scheme determination model is realized.
EXAMPLE III
Fig. 3 is a structural block diagram of an optimization apparatus for a scheme determination model according to a third embodiment of the present invention, which is capable of executing the optimization method for a scheme determination model according to any embodiment of the present invention and has the functional modules and beneficial effects corresponding to executing the method. As shown in fig. 3, the apparatus specifically includes:
a data obtaining module 301, configured to obtain to-be-processed data of at least two parties;
the result prediction module 302 is configured to predict to-be-processed data of at least two participants to obtain a prediction result based on a scheme prediction submodel of the at least two participants trained in advance;
and a result sending module 303, configured to determine a target participant of the at least two participants, and send the prediction result to the target participant.
Optionally, the result sending module 303 includes:
the threshold value obtaining unit is used for obtaining a contribution threshold in a current incentive mechanism;
the contribution value acquisition unit is used for acquiring the contribution values of at least two participants in a model training stage;
and the target determining unit is used for taking the participant as the target participant if the contribution value of the participant is greater than the contribution threshold value.
Optionally, the contribution value obtaining unit includes:
and the contribution value determining unit is used for determining the contribution values of the at least two participants in the model training stage according to the intermediate data of the at least two participants in the model training stage.
Optionally, the contribution value determining unit is specifically configured to:
determining and recording the information quantity of the characteristic variables of the training samples of the at least two participants according to the intermediate data of the at least two participants in the model training stage, wherein the information quantity represents the predictive capability of the characteristic variables of the training samples;
and determining contribution values of the at least two participants in a model training stage according to the information quantity of the characteristic variables of the at least two participant training samples.
Optionally, the result sending module 303 is further specifically configured to:
acquiring at least two participant identifications;
and comparing the at least two participant identifications with preset candidate participant identifications for receiving the prediction result, and determining the participant consistent with the candidate participant identifications as a target participant in the at least two participants.
The embodiment of the invention obtains the scheme prediction submodels of at least two participants through federated learning training, obtains the prediction result through the trained submodels, determines the target participants capable of receiving the prediction result, and sends the prediction result to those participants. This solves the prior-art problem that the prediction result can only be sent to the participant who provides the label in the training stage: all target participants can obtain the prediction result, the label-providing participant does not need to forward the prediction result to other participants after prediction is finished, the data transmission process is shortened, the data transmission efficiency is improved, the label-providing participant is prevented from tampering with the scheme during transmission, the security and authenticity of the scheme obtained by other participants are improved, and the optimization of the scheme determination model is realized.
Example four
Fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary computer device 400 suitable for use in implementing embodiments of the present invention. The computer device 400 shown in fig. 4 is only an example and should not bring any limitations to the functionality or scope of use of the embodiments of the present invention.
As shown in fig. 4, computer device 400 is in the form of a general purpose computing device. The components of computer device 400 may include, but are not limited to: one or more processors or processing units 401, a system memory 402, and a bus 403 that couples the various system components (including the system memory 402 and the processing unit 401).
Bus 403 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Computer device 400 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 400 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 402 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)404 and/or cache memory 405. The computer device 400 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 406 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 403 by one or more data media interfaces. Memory 402 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 408 having a set (at least one) of program modules 407 may be stored, for example, in memory 402. Such program modules 407 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. Program modules 407 generally perform the functions and/or methods of the described embodiments of the invention.
The computer device 400 may also communicate with one or more external devices 409 (e.g., keyboard, pointing device, display 410, etc.), with one or more devices that enable a user to interact with the computer device 400, and/or with any devices (e.g., network card, modem, etc.) that enable the computer device 400 to communicate with one or more other computing devices. Such communication may be through input/output (I/O) interface 411. Moreover, computer device 400 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via network adapter 412. As shown, network adapter 412 communicates with the other modules of computer device 400 over bus 403. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 400, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 401 executes various functional applications and data processing by running a program stored in the system memory 402, for example, to implement an optimization method of the solution determination model provided by the embodiment of the present invention, including:
acquiring data to be processed of at least two participants;
predicting to-be-processed data of at least two participants to obtain a prediction result based on a scheme prediction submodel of at least two participants trained in advance;
a target participant of the at least two participants is determined and the prediction result is sent to the target participant.
EXAMPLE five
The fifth embodiment of the present invention further provides a storage medium containing computer-executable instructions, where a computer program is stored on the storage medium, and when the computer program is executed by a processor, the method for optimizing a solution determination model provided by the embodiments of the present invention is implemented, the method including:
acquiring data to be processed of at least two participants;
predicting to-be-processed data of at least two participants to obtain a prediction result based on a scheme prediction submodel of at least two participants trained in advance;
a target participant of the at least two participants is determined and the prediction result is sent to the target participant.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for optimizing a solution determination model, comprising:
acquiring data to be processed of at least two participants;
predicting to-be-processed data of at least two participants to obtain a prediction result based on a scheme prediction submodel of the at least two participants trained in advance;
determining a target participant of the at least two participants, and sending the prediction result to the target participant.
2. The method of claim 1, wherein determining a target participant of the at least two participants comprises:
acquiring a contribution threshold in a current incentive mechanism;
acquiring contribution values of at least two participants in a model training stage;
and if the contribution degree value of the participant is larger than the contribution degree threshold value, taking the participant as the target participant.
3. The method of claim 2, wherein obtaining contribution values of at least two participants during a model training phase comprises:
and determining the contribution values of the at least two participants in the model training phase according to the intermediate data of the at least two participants in the model training phase.
4. The method of claim 3, wherein determining the contribution values of at least two participants in the model training phase according to the intermediate data of the at least two participants in the model training phase comprises:
determining and recording the information quantity of the characteristic variables of the training samples of at least two participants according to the intermediate data of the at least two participants in the model training stage, wherein the information quantity represents the predictive capability of the characteristic variables of the training samples;
and determining the contribution values of the at least two participants in the model training stage according to the information quantity of the characteristic variables of the at least two participant training samples.
5. The method of claim 1, wherein determining a target participant of at least two participants further comprises:
acquiring at least two participant identifications;
and comparing the at least two participant identifications with preset candidate participant identifications for receiving the prediction result, and determining the participant consistent with the candidate participant identifications in the at least two participants as the target participant.
6. An apparatus for optimizing a solution determination model, comprising:
the data acquisition module is used for acquiring data to be processed of at least two participants;
the result prediction module is used for predicting the data to be processed of the at least two participants to obtain a prediction result based on the pre-trained scheme prediction submodels of the at least two participants;
and the result sending module is used for determining a target participant in at least two participants and sending the prediction result to the target participant.
7. The apparatus of claim 6, wherein the result sending module comprises:
the threshold value obtaining unit is used for obtaining a contribution threshold in a current incentive mechanism;
the contribution value acquisition unit is used for acquiring the contribution values of at least two participants in a model training stage;
and the target determining unit is used for taking the participant as the target participant if the contribution degree value of the participant is greater than the contribution degree threshold value.
8. The apparatus according to claim 7, wherein the contribution value obtaining unit includes:
and the contribution value determining unit is used for determining the contribution values of the at least two participants in the model training stage according to the intermediate data of the at least two participants in the model training stage.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements a method of optimizing a solution determination model according to any of claims 1-5.
10. A storage medium containing computer-executable instructions for performing a method of optimizing a solution determination model according to any one of claims 1-5 when executed by a computer processor.
CN202010591886.7A 2020-06-24 2020-06-24 Optimization method, device, equipment and storage medium of scheme determination model Pending CN111753996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010591886.7A CN111753996A (en) 2020-06-24 2020-06-24 Optimization method, device, equipment and storage medium of scheme determination model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010591886.7A CN111753996A (en) 2020-06-24 2020-06-24 Optimization method, device, equipment and storage medium of scheme determination model

Publications (1)

Publication Number Publication Date
CN111753996A true CN111753996A (en) 2020-10-09

Family

ID=72677222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010591886.7A Pending CN111753996A (en) 2020-06-24 2020-06-24 Optimization method, device, equipment and storage medium of scheme determination model

Country Status (1)

Country Link
CN (1) CN111753996A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418446A (en) * 2020-11-18 2021-02-26 脸萌有限公司 Model processing method, system, device, medium and electronic equipment
CN112418446B (en) * 2020-11-18 2024-04-09 脸萌有限公司 Model processing method, system, device, medium and electronic equipment
CN113011521A (en) * 2021-04-13 2021-06-22 上海嗨普智能信息科技股份有限公司 Chain-based multi-label federal learning method, controller and medium
CN113011521B (en) * 2021-04-13 2022-09-30 上海嗨普智能信息科技股份有限公司 Chain-based multi-label prediction method, controller and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220909

Address after: 12 / F, 15 / F, 99 Yincheng Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 200120

Applicant after: Jianxin Financial Science and Technology Co.,Ltd.

Address before: 25 Financial Street, Xicheng District, Beijing 100033

Applicant before: CHINA CONSTRUCTION BANK Corp.

Applicant before: Jianxin Financial Science and Technology Co.,Ltd.