CN112200713B - Business data processing method, device and equipment in federal learning - Google Patents

Business data processing method, device and equipment in federal learning Download PDF

Info

Publication number
CN112200713B
CN112200713B · CN202011173171.6A
Authority
CN
China
Prior art keywords
homomorphic
digits
integer
integers
fragment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011173171.6A
Other languages
Chinese (zh)
Other versions
CN112200713A (en)
Inventor
张君涛
周启贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202011173171.6A priority Critical patent/CN112200713B/en
Publication of CN112200713A publication Critical patent/CN112200713A/en
Application granted granted Critical
Publication of CN112200713B publication Critical patent/CN112200713B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of this specification disclose a business data processing method, apparatus, and device in federal learning. The scheme comprises the following steps: determining a homomorphic operation to be executed in federal learning; determining an integer parameter to be used by the homomorphic operation according to business data provided by participants of federal learning; converting the integer parameter into a plurality of fragment integers, the number of bits of each fragment integer being less than the number of bits of the integer parameter; and obtaining the plurality of fragment integers through the GPU, and distributing corresponding homomorphic multiplications and/or homomorphic additions to a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers, so as to execute them in parallel through a plurality of corresponding GPU threads and complete the homomorphic operation.

Description

Business data processing method, device and equipment in federal learning
Technical Field
The present disclosure relates to the field of computer software technologies, and in particular, to a method, an apparatus, and a device for processing service data in federal learning.
Background
Federal learning is a privacy-preserving machine learning scheme that has been increasingly adopted in recent years. It can effectively help multiple institutions use data and build machine learning models while meeting the requirements of user privacy protection, data security, and legal regulations.
Without raw data leaving its own domain, federal learning synchronizes intermediate results and gradient information among multiple participants to jointly carry out model training and prediction. Federal learning may homomorphically encrypt the related business data to protect user privacy; in that case, further calculation is required on the homomorphic encryption results, and these calculations in federal learning are usually performed on integers with a very large number of bits (called large integers), for example 1024 or 2048 bits, or even more.
Based on this, a solution that enables more efficient federal learning is needed.
Disclosure of Invention
One or more embodiments of the present disclosure provide a method, an apparatus, a device, and a storage medium for processing business data in federal learning, so as to solve the technical problem identified above: the need for a solution that enables federal learning to be performed more efficiently.
To solve the above technical problems, one or more embodiments of the present specification are implemented as follows:
one or more embodiments of the present disclosure provide a business data processing method in federal learning, including:
determining homomorphic operation to be executed in federal learning;
Determining integer parameters to be used by the homomorphic operation according to the business data provided by the federally learned participants;
converting the integer parameter into a plurality of fragment integers, the number of bits of the fragment integers being less than the number of bits of the integer parameter;
and acquiring the plurality of fragment integers through a graphics processor (Graphics Processing Unit, GPU), and distributing corresponding homomorphic multiplication and/or homomorphic addition to a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers so as to execute in parallel through a plurality of corresponding GPU threads to finish the homomorphic operation.
One or more embodiments of the present disclosure provide a business data processing apparatus in federal learning, including:
the homomorphic operation determining module is used for determining homomorphic operation to be executed in federal learning;
the integer parameter determining module is used for determining integer parameters to be used for homomorphic operation according to the business data provided by the federal learning participants;
an integer parameter conversion module for converting the integer parameter into a plurality of segment integers, wherein the number of bits of the segment integers is smaller than that of the integer parameter;
and the homomorphic operation executing module is used for acquiring the plurality of fragment integers through the GPU, distributing corresponding homomorphic multiplication and/or homomorphic addition for a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers so as to execute the corresponding homomorphic operation in parallel through a plurality of GPU threads, and completing the homomorphic operation.
One or more embodiments of the present specification provide a business data processing apparatus in federal learning, including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
determining homomorphic operation to be executed in federal learning;
determining integer parameters to be used by the homomorphic operation according to the business data provided by the federally learned participants;
converting the integer parameter into a plurality of fragment integers, the number of bits of the fragment integers being less than the number of bits of the integer parameter;
and obtaining the plurality of fragment integers through the GPU, and distributing corresponding homomorphic multiplication and/or homomorphic addition for a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers so as to execute in parallel through a plurality of corresponding GPU threads to finish the homomorphic operation.
One or more embodiments of the present specification provide a non-volatile computer storage medium storing computer-executable instructions configured to:
Determining homomorphic operation to be executed in federal learning;
determining integer parameters to be used by the homomorphic operation according to the business data provided by the federally learned participants;
converting the integer parameter into a plurality of fragment integers, the number of bits of the fragment integers being less than the number of bits of the integer parameter;
and obtaining the plurality of fragment integers through the GPU, and distributing corresponding homomorphic multiplication and/or homomorphic addition for a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers so as to execute in parallel through a plurality of corresponding GPU threads to finish the homomorphic operation.
The above-mentioned at least one technical solution adopted by one or more embodiments of the present disclosure can achieve the following beneficial effect: a large integer parameter to be used in homomorphic operation in federal learning can be converted into a plurality of fragment integers with fewer bits, so that corresponding small-integer batch computing tasks can be generated accordingly and executed in parallel in a GPU by a plurality of arithmetic logic units in a multithreaded manner, which plays to the GPU's strength in small-integer batch computation and thereby accelerates federal learning.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some of the embodiments described in the present description, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow diagram of a business data processing method in federal learning according to one or more embodiments of the present disclosure;
FIG. 2 is a schematic diagram of a longitudinal federal learning principle provided by one or more embodiments of the present disclosure;
FIG. 3 is a schematic diagram of a lateral federal learning principle provided by one or more embodiments of the present disclosure;
FIG. 4 is a schematic diagram of a process for recursively converting integer parameters in an application scenario provided in one or more embodiments of the present disclosure;
FIG. 5 is a schematic structural diagram of a business data processing device in federal learning according to one or more embodiments of the present disclosure;
fig. 6 is a schematic structural diagram of a business data processing device in federal learning according to one or more embodiments of the present disclosure.
Detailed Description
The embodiment of the specification provides a business data processing method, a device, equipment and a storage medium in federal learning.
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
In one or more embodiments of the present disclosure, considering that GPU performance has developed rapidly and that the GPU is well suited to executing large numbers of computing tasks, while federal learning must frequently perform homomorphic operations that comprise a large number of computing tasks, the GPU is applied in federal learning to accelerate its homomorphic operations and thereby improve federal learning efficiency.
For example, homomorphic encryption is performed on the plaintext of gradient information provided by multiple scattered parties in federal learning to provide privacy protection, and homomorphic calculations such as homomorphic multiplication or homomorphic addition can be further performed on the ciphertext to obtain overall gradient information for training. Homomorphic calculation is a ciphertext-domain calculation mode: it allows algebraic calculations of a specific form on ciphertext, producing a result that is still encrypted, and decrypting that result gives the same answer as performing the calculation on the plaintext.
Homomorphic operations usually involve a large number of large-integer computations, but GPUs are often not good at processing them, and their large-integer computation efficiency does not necessarily meet practical requirements. Based on this, the present scheme converts large-integer parameters (for example, 1024-bit or 2048-bit integers) involved in the computation into integers with fewer bits; following the same conversion concept, the fewer-bit integers can even be recursively converted again to further reduce the bit count. A large-integer computation can thus be decomposed into computations over a plurality of small integers (for example, 16-bit or 32-bit integers), which the GPU executes before combining the computation results. Small-integer computation is exactly what the GPU is good at: it executes small-integer batch computing tasks with high efficiency.
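The decomposition idea can be sketched in a few lines; the 16-bit fragment width and the function names below are illustrative assumptions for this sketch, not details fixed by the patent:

```python
# Illustrative sketch: a large integer such as a 1024-bit ciphertext value can be
# represented as many 16-bit "fragment integers", each small enough for a GPU
# arithmetic logic unit to process efficiently.

def to_fragments(x: int, frag_bits: int = 16) -> list[int]:
    """Split a non-negative integer into base-2**frag_bits digits (lowest first)."""
    mask = (1 << frag_bits) - 1
    frags = []
    while x:
        frags.append(x & mask)
        x >>= frag_bits
    return frags or [0]

def from_fragments(frags: list[int], frag_bits: int = 16) -> int:
    """Losslessly reassemble the original integer from its fragments."""
    x = 0
    for f in reversed(frags):
        x = (x << frag_bits) | f
    return x

big = (1 << 1024) - 12345          # a 1024-bit integer
frags = to_fragments(big)
assert len(frags) == 64            # 1024 / 16 = 64 fragments
assert from_fragments(frags) == big
```

Because the round trip is exact, no error is introduced into later homomorphic operations by this representation alone.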
The concept is suitable for not only the scene of accelerating federal learning by using the GPU, but also other scenes trapped by the bottleneck of large integer computing efficiency. Based on such a concept, a scene of federal learning will be specifically described below as an example.
Fig. 1 is a flow chart of a business data processing method in federal learning according to one or more embodiments of the present disclosure. Federal learning involves staged or final integration of data from multiple parties; the execution subject of the flow in Fig. 1 is, for example, a computing device of the operator that performs these integrations.
The flow in fig. 1 may include the steps of:
s102: homomorphic operation to be performed in federal learning is determined.
In one or more embodiments of the present description, homomorphic operations may include homomorphic encryption operations performed on plaintext, and may also include homomorphic calculations such as homomorphic multiplication and homomorphic addition further performed on ciphertext. By executing step S102, federal learning can enumerate these homomorphic operations, designate at least one of them according to actual needs, and carry out the subsequent flow for it.
S104: and determining integer parameters to be used by the homomorphic operation according to the business data provided by the federally learned participants.
In one or more embodiments of the present description, the business data is, for example, a feature vector of a user in a certain business area. The feature vector itself, or an intermediate result of further processing it (for example, homomorphic encryption) to provide privacy protection, may be used as an integer parameter of the homomorphic operation. Taking homomorphic multiplication as an example, the integer parameters to be used are the multiplicand and/or the multiplier. It should be noted that if the original multiplicand and/or multiplier contains a decimal, the decimal part can also be converted into an integer by a shifting operation before further processing, which does not prevent the above idea from being applied.
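The shifting operation mentioned above can be illustrated with a fixed-point sketch; the scale factor of 2**16 is an illustrative choice for this example, not a value specified by the patent:

```python
# Hedged sketch: a decimal multiplicand is scaled into an integer by a fixed-point
# shift, so that subsequent processing deals with integers only.

SCALE_BITS = 16

def to_fixed_point(value: float) -> int:
    """Convert a decimal to an integer by shifting (multiplying by 2**SCALE_BITS)."""
    return round(value * (1 << SCALE_BITS))

def from_fixed_point(fixed: int, shifts: int = 1) -> float:
    """Undo the scaling; after multiplying two fixed-point values, use shifts=2."""
    return fixed / float((1 << SCALE_BITS) ** shifts)

a, b = 3.25, 0.5
fa, fb = to_fixed_point(a), to_fixed_point(b)
product = fa * fb                              # integer-only multiplication
assert abs(from_fixed_point(product, shifts=2) - a * b) < 1e-6
```

Values whose fractional parts are not exact binary fractions incur a small rounding error, which is the usual trade-off of fixed-point scaling.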
S106: the integer parameter is converted to a plurality of fragment integers having fewer bits than the integer parameter.
In one or more embodiments of the present disclosure, processing ordinary large integers is computationally inefficient for some computing devices, yet processing certain special integers, even large ones, can still achieve good efficiency. The integer parameters may be converted based on such characteristics. Such special forms include, for example: an integer that is a power of 2, one containing many consecutive 0s (especially when all trailing binary digits are zero), one whose digits vary with a periodic law, one with few binary digits equal to 1, and so on.
In one or more embodiments of the present disclosure, the original integer parameter can be restored from the plurality of fragment integers without losing information, so that no error is introduced into the execution of subsequent homomorphic operations.
In one or more embodiments of the present disclosure, the integer parameter may also be converted lossily, within an acceptable error range, in exchange for improved computing efficiency. For example, if the integer parameter ends in a small number of binary 1s, these 1s may be converted into 0s, which are easier to process; for another example, if the integer parameter contains a sequence of consecutive binary 1s, 1 may be added at the last digit of the sequence, so that it becomes a single 1 followed by 0s, which is also easier to process.
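The two lossy conversions just described can be sketched as follows; the helper names are assumptions for this example, and the second helper is simplified to the case where the value itself is a run of 1s:

```python
# Illustrative sketch of lossy conversions that trade a small, bounded error for
# easier-to-process bit patterns.

def clear_trailing_ones(x: int, max_ones: int = 3) -> int:
    """If x ends in a small number of binary 1s, turn them into 0s."""
    n = 0
    while (x >> n) & 1:
        n += 1
    return x & ~((1 << n) - 1) if 0 < n <= max_ones else x

def round_up_run_of_ones(x: int) -> int:
    """If x is a run of consecutive 1s (e.g. 0b0111), add 1 to get a single 1
    followed by 0s (0b1000): a power of two, which is easier to process."""
    return x + 1 if x > 0 and (x & (x + 1)) == 0 else x

assert clear_trailing_ones(0b10110011) == 0b10110000   # error of only 3
assert round_up_run_of_ones(0b0111) == 0b1000          # 7 -> 8, error of 1
assert round_up_run_of_ones(0b0110) == 0b0110          # not a run of 1s: unchanged
```

In both cases the introduced error is at most 2^k for a run of length k, which bounds the compensation needed later in training.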
Further, if the integer parameter undergoes lossy conversion, corresponding compensation can be applied during subsequent training to reduce the error. For example, the weight of the corresponding part of the training data can be reduced; for another example, extra training data generated by applying the lossy conversion in the opposite direction (for example, decreasing the integer parameter where it was previously increased) can be used as part of the samples.
In one or more embodiments of the present description, the conversion of the integer parameter may be performed multiple times through an iterative process. For example, the integer parameter is first converted into several fragment integers with fewer bits, and these fragment integers are then iteratively converted into fragment integers with still fewer bits. The specific number of iterations may be determined with reference to the capabilities of the computing device; for example, if the computing device performs homomorphic calculation on 16-bit integers most efficiently, the integer parameter may be iteratively converted into a plurality of 16-bit fragment integers.
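The iterative conversion can be sketched as a recursive dichotomy that keeps halving the digit count until each fragment fits the target width; 16 bits and a 64-bit input are illustrative assumptions here:

```python
# Minimal sketch of recursive conversion by dichotomy down to 16-bit fragments.
# Assumes the bit count is a power-of-two multiple of the target width, so all
# fragments come out exactly target_bits wide.

def split_recursive(x: int, bits: int, target_bits: int = 16) -> list[int]:
    """Recursively split an N-bit integer into target_bits-wide fragments."""
    if bits <= target_bits:
        return [x]
    half = bits // 2
    high, low = x >> half, x & ((1 << half) - 1)
    return (split_recursive(high, bits - half, target_bits)
            + split_recursive(low, half, target_bits))

def join(frags: list[int], target_bits: int = 16) -> int:
    """Reassemble fragments produced by split_recursive (highest first)."""
    x = 0
    for f in frags:
        x = (x << target_bits) | f
    return x

p = 0xDEADBEEFCAFEBABE                      # a 64-bit integer parameter
frags = split_recursive(p, 64)
assert frags == [0xDEAD, 0xBEEF, 0xCAFE, 0xBABE]
assert join(frags) == p                     # lossless round trip
```

Two levels of dichotomy (64 to 32 to 16 bits) produce four equally sized fragments, matching the "groups of equal granularity" property discussed later for the dichotomy.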
S108: and obtaining the plurality of fragment integers through the GPU, and distributing corresponding homomorphic multiplication and/or homomorphic addition for a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers so as to execute in parallel through a plurality of corresponding GPU threads to finish the homomorphic operation.
In one or more embodiments of the present disclosure, the step of computing with the integer parameter in the homomorphic operation is converted into computing with the fragment integers, and executing these fragment-integer computations yields a homomorphic operation result that is the same as, or close to, the result of performing the homomorphic operation directly with the integer parameter. How close an approximation is acceptable depends on actual needs and is not limited here.
In practical applications, compared with a central processing unit (Central Processing Unit, CPU), the GPU has fewer control logic units, but a large number of logic operation units and a large number of GPU threads, which is suitable for efficiently executing batch computing tasks in parallel. Especially for small integer batch computing tasks with fewer bits, the number of tasks is large, the single task computing amount is small, and more logic operation units and GPU threads can be occupied in parallel, so that the overall computing efficiency is improved, namely the GPU has the advantage of high execution efficiency of the small integer batch computing tasks.
Homomorphic operation comprises large integer calculation, and if large integer calculation is directly executed, the number of execution threads is small, even single-thread execution is possible, parallelism is poor, and calculation efficiency is low. Based on the above, the large integer computing task is decomposed into a batch of small integer tasks according to the segment integers, and is executed by the GPU, thereby fully exerting the advantages of the GPU.
Specifically, a batch computing task set may be generated in the GPU according to the plurality of fragment integers and the steps of the homomorphic operation, the computing tasks in the batch computing task set including homomorphic multiplications and/or homomorphic additions involving at least two fragment integers; the batch computing task set is split and distributed to a plurality of arithmetic logic units of the GPU; and the arithmetic logic units and their corresponding threads execute the batch computing task set to complete the homomorphic operation. It should be noted that besides the batch computing task set, the homomorphic operation may also include further computing tasks, such as tasks integrating the results of the batch computing tasks, or other tasks that do not involve fragment integers.
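A batch computing task set of this kind can be sketched for large-integer multiplication: every fragment pair becomes one independent small-integer task, which a GPU could dispatch across arithmetic logic units. The sketch below runs the tasks serially (the fragment width and function names are assumptions for illustration):

```python
# Hedged sketch: expand a large-integer product into a batch of independent
# small-integer multiply tasks (one per fragment pair), then integrate the
# partial results, mirroring the integration step described in the text.

FRAG_BITS = 32

def fragments(x: int) -> list[int]:
    out = []
    while x:
        out.append(x & ((1 << FRAG_BITS) - 1))
        x >>= FRAG_BITS
    return out or [0]

def big_multiply(p: int, q: int) -> int:
    fp, fq = fragments(p), fragments(q)
    # Batch computing task set: each (i, j) pair is one independent small task.
    tasks = [(i, j, a, b) for i, a in enumerate(fp) for j, b in enumerate(fq)]
    partial = [a * b << (FRAG_BITS * (i + j)) for i, j, a, b in tasks]
    return sum(partial)  # integration of the batch results

p = 123456789012345678901234567890
q = 987654321098765432109876543210
assert big_multiply(p, q) == p * q
```

Each task touches only two 32-bit operands, so the task list is exactly the kind of uniform, fine-grained workload that occupies many GPU threads in parallel.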
Of course, the step of converting the integer parameters to a plurality of fragment integers may also be performed in the GPU. Compared with the scheme that the CPU is utilized to perform federal learning in some schemes, the scheme based on the GPU is higher in efficiency, can further accelerate federal learning and liberate the CPU, enables the CPU to concentrate on processing works such as logic control and the like which are good for the CPU, and realizes optimal configuration of resources.
In one or more embodiments of the present disclosure, after the homomorphic operation is performed, if its result contains training data or gradient information, the result may be used to train the machine learning model corresponding to federal learning, thereby helping to improve training efficiency.
By the method of Fig. 1, a large integer parameter to be used in homomorphic operation in federal learning can be converted into a plurality of fragment integers with fewer bits, so that corresponding small-integer batch computing tasks can be generated accordingly and executed in parallel in the GPU by a plurality of arithmetic logic units in a multithreaded manner, improving the efficiency of the homomorphic operation and thus of federal learning.
Based on the method of fig. 1, the present specification also provides some specific embodiments and extensions of the method, and the following description will proceed.
Federal learning currently comes in two types: longitudinal (vertical) federal learning and transverse (horizontal) federal learning. Taking two participants as an example, each provides one data set, giving two data sets in total, which are used to train the machine learning model corresponding to federal learning.
When the two data sets overlap heavily in users but little in user features, the data sets can be split longitudinally (i.e., along the feature dimension), and the portion of data where the users are the same but the user features are not identical is taken out for training.
When the two data sets overlap heavily in user features but little in users, the data sets can be split transversely (i.e., along the user dimension), and the portion of data where the user features are the same but the users are not identical is taken out for training.
Of course, the training scenario is similar when more data providers supply more data sets; an appropriate learning scheme can be selected with reference to the two types above.
The homomorphic operations involved vary with the type of federal learning, as described with reference to Figs. 2 and 3.
Fig. 2 is a schematic diagram of the longitudinal federal learning principle provided in one or more embodiments of the present disclosure. Fig. 2 shows two participants A and B of longitudinal federal learning, each providing its own business data for training the machine learning model corresponding to federal learning. In the model training stage, A and B each homomorphically encrypt intermediate calculation results and send them to the other party, and gradient information of the two sides is synchronized by homomorphic multiplication. It follows that longitudinal federal learning involves at least homomorphic multiplication during the model training phase.
Fig. 3 is a schematic diagram of the transverse federal learning principle provided in one or more embodiments of the present disclosure. Fig. 3 shows one coordinator of transverse federal learning and three participants A, B, and C. A, B, and C are all data providers, each supplying its own business data for training the machine learning model corresponding to federal learning, while the coordinator integrates the data from A, B, and C. In the model training stage, A, B, and C each perform plaintext model training within their own domains and send the trained gradient information, after homomorphic encryption, to the coordinator, which performs homomorphic addition on the data in the ciphertext domain. It follows that transverse federal learning involves at least homomorphic addition during the model training phase.
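The ciphertext-domain homomorphic addition performed by the coordinator can be illustrated with a toy Paillier cryptosystem. Paillier is a standard additively homomorphic scheme of the kind such a coordinator could use; it is offered here as an illustrative assumption, not as the patent's specific scheme, and the tiny primes are for demonstration only:

```python
import random

# Toy Paillier cryptosystem (simplified variant with g = n + 1).
# Homomorphic addition of plaintexts = multiplication of ciphertexts mod n^2.

p, q = 1117, 1471                    # toy primes; real use needs 1024+ bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1)              # Euler's phi; lcm (Carmichael) also works
mu = pow(lam, -1, n)                 # with g = n + 1, L(g^lam mod n^2) = lam mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n   # the L(u) = (u - 1) / n function
    return (L * mu) % n

g1, g2, g3 = 123, 456, 789           # gradient values from participants A, B, C
c = encrypt(g1) * encrypt(g2) % n2   # homomorphic addition = ciphertext product
c = c * encrypt(g3) % n2
assert decrypt(c) == g1 + g2 + g3    # coordinator recovers the aggregated gradient
```

Note that the coordinator never sees any individual gradient in plaintext; only the aggregate is decryptable, which is exactly the privacy property the training flow above relies on.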
Based on this, if the federal learning is transverse federal learning, it may be determined that the homomorphic operation to be performed in federal learning includes homomorphic addition; if it is longitudinal federal learning, that the homomorphic operation to be performed includes homomorphic multiplication.
The calculation formulas of transverse or longitudinal federal learning are not complex, but they are executed repeatedly and over batches of data, which makes them suitable for processing with the GPU; that is, the GPU can be used to execute the homomorphic multiplications and homomorphic additions involved in the training of Figs. 2 and 3.
In addition to the type of federal learning, homomorphic operations to be performed in federal learning may be determined based on business data provided by the participants. For example, if the party provides original plaintext service data and at least homomorphic encryption is required to be performed on the plaintext service data in order to protect user privacy, homomorphic operation to be performed in federal learning at least includes homomorphic encryption; for another example, if one of the participants provides service data in one dimension of a certain user and the other of the participants provides service data in another dimension of the certain user, then the subsequent training is likely to need to superimpose the service data to completely learn the characteristics of the certain user, and based on this, homomorphic operations to be performed in federal learning are more likely to include homomorphic addition.
In one or more embodiments of the present disclosure, a plurality of digit sets are partitioned from the digits of the integer parameter, and based on these digit sets the integer parameter is converted into a plurality of fragment integers from which it can be restored. This is a lossless conversion process that avoids introducing errors into subsequent homomorphic operation results. For ease of processing by a computing device, the digits here are typically binary digits, and some embodiments below take binary digits as examples; if the computing device is sufficiently capable, digits in a base that is a higher power of 2 (e.g., octal or hexadecimal digits) are also possible, allowing a more compact representation and processing of the integer parameter.
The manner in which the multiple digit sets are partitioned varies. Dividing consecutive digits equally is one typical approach. For example, an integer parameter with an even number of bits (denoted N; for an odd number of bits, divide unevenly, or set aside 1 bit and then divide equally) is split from the middle: the first N/2 digits (the high-order part) are divided into one digit set, and the last N/2 digits (the low-order part) into another. Of course, more flexible divisions over consecutive digits are also possible. For example, if a stretch of the integer parameter contains first 5 consecutive 1s, then 3 consecutive 0s, then 6 consecutive 1s, these 5 ones, 3 zeros, and 6 ones may each be divided into a digit set of their own. In addition, non-consecutive digits can be divided according to some strategy; for example, each digit equal to 1 is placed in its own digit set, and when the homomorphic operation is later executed, identical digit positions of different integer parameters are processed uniformly to improve efficiency.
In one or more embodiments of the present description, an order of the digits of the integer parameter is determined, a split point is determined in that order, and at least one non-empty digit set is divided before the split point and at least one after it. A compact concrete division is the dichotomy (placing the split point as close to the middle as possible). Its benefit is that the digits are divided into groups of as equal granularity as possible (groups at the same level having the same or basically the same size), which makes further iterative division convenient, correspondingly yields small integers aligned in bit count, and facilitates efficient processing by the computing device.
For example, assume that consecutive digits of the integer parameter are in the divided digit sets, and that the digits are binary digits. Further, for a digit set among the digit sets, determine the total value represented in the integer parameter by all the digits in that set. If the set does not contain the lowest digit of the integer parameter, convert the total value into two fragment integers; otherwise, use the total value as one fragment integer. Of the two fragment integers, one is obtained by concatenating, in order, the digits of the integer parameter at all the positions of the set, and the other is 2 raised to the power m, where m is the number of digit positions in the integer parameter lower than all the positions of the set. Denote the number of bits of the integer parameter as N; then, under dichotomy, the high-order digit set covers the upper N/2 positions (so that m = N/2 and the power term is 2^(N/2)), and the low-order digit set covers the lower N/2 positions.
More intuitively, consider the conversion scheme in the previous paragraph. Assume a large-integer multiplication, denoted P×Q, is to be performed, where P and Q are large integers with N bits each.
Equally divide the digits of P to obtain digit sets corresponding to the high-order part and the low-order part respectively, each representing the digits at its positions, so that P converts to:

P = P_high · 2^(N/2) + P_low

where P_high corresponds to the high-order part, P_low corresponds to the low-order part, and P_high, P_low are the fragment integers obtained by converting P; of course, more completely, P_high · 2^(N/2) can also be regarded as a fragment integer obtained from the conversion of P.
For ease of understanding, consider the 8-bit integer "11001010" as an example: assuming P is this integer, then N=8, P_high=1100, P_low=1010.
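The high/low split in this example can be reproduced with shift-and-mask operations; the sketch below (plain Python integers, hypothetical helper name, not the patented implementation) verifies that the split is lossless:

```python
# Split an N-bit integer into its high and low halves by shift and mask,
# then verify the lossless reconstruction P = P_high * 2^(N/2) + P_low.
def split_halves(p: int, n_bits: int):
    half = n_bits // 2
    p_high = p >> half               # upper N/2 bits
    p_low = p & ((1 << half) - 1)    # lower N/2 bits
    return p_high, p_low

p = 0b11001010                       # the 8-bit example from the text
p_high, p_low = split_halves(p, 8)
assert p_high == 0b1100 and p_low == 0b1010
assert p == (p_high << 4) | p_low    # lossless: P = P_high * 2^4 + P_low
```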
Similarly, converting Q yields:

Q = Q_high · 2^(N/2) + Q_low
Further, P×Q is losslessly converted into a product form over integers with the smaller bit width of N/2 (4 bits in the example above):

P×Q = P_high·Q_high·2^N + (P_high·Q_low + P_low·Q_high)·2^(N/2) + P_low·Q_low

It can be seen that the large-integer multiplication is converted into multiplications and additions of integers with fewer bits. Power-of-2 terms such as 2^N and 2^(N/2) can be computed efficiently by shift operations on a computing device, and generally, for computing devices such as GPUs, converting first and then computing can be more efficient than computing directly without conversion. For example, P_high·Q_high, P_high·Q_low, P_low·Q_high, and P_low·Q_low can each constitute a computing task, together forming the above-described batch computing task set; in this case each computing task includes a multiplication of two fragment integers.
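As a minimal sketch of this decomposition (a CPU illustration with an illustrative function name, not the patented GPU implementation), the four fragment products can be combined using only shifts, for the power-of-2 terms, and additions:

```python
# Compute P*Q from the four fragment products, combining them with
# shift operations (for 2^N and 2^(N/2)) and additions.
def fragment_multiply(p: int, q: int, n_bits: int) -> int:
    half = n_bits // 2
    mask = (1 << half) - 1
    p_high, p_low = p >> half, p & mask
    q_high, q_low = q >> half, q & mask
    # The four computing tasks of the batch computing task set:
    hh = p_high * q_high
    hl = p_high * q_low
    lh = p_low * q_high
    ll = p_low * q_low
    # P*Q = hh*2^N + (hl + lh)*2^(N/2) + ll
    return (hh << n_bits) + ((hl + lh) << half) + ll

assert fragment_multiply(0b11001010, 0b10110001, 8) == 0b11001010 * 0b10110001
```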
Further, following the same conversion idea, P and Q can be iteratively converted multiple times: after P_high, P_low, Q_high, Q_low are obtained, continue converting each of P_high, P_low, Q_high, Q_low into integers with fewer bits, and so on, until the desired bit width, e.g., 16 bits or 32 bits, is reached; see fig. 4.
Fig. 4 is a schematic diagram of a process for recursively converting integer parameters in an application scenario provided in one or more embodiments of the present disclosure.
In fig. 4, after the first conversion of P×Q, the four multiplicative terms P_high·Q_high, P_high·Q_low, P_low·Q_high, and P_low·Q_low are obtained for continuing the iterative conversion.

Taking P_high·Q_high as an example, convert P_high into P_hh·2^(N/4) + P_hl and Q_high into Q_hh·2^(N/4) + Q_hl; then:

P_high·Q_high = P_hh·Q_hh·2^(N/2) + (P_hh·Q_hl + P_hl·Q_hh)·2^(N/4) + P_hl·Q_hl

Thereby the four multiplicative terms P_hh·Q_hh, P_hh·Q_hl, P_hl·Q_hh, and P_hl·Q_hl are obtained for continuing the iterative conversion; by analogy, the recursive conversion can continue, yielding more and smaller computing tasks, from which a larger batch computing task set is formed.
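The recursion depicted in fig. 4 can be sketched as follows (a CPU-only illustration; the function name and the leaf bit width are assumptions): the multiplicands are split until they reach a target bit width such as 16 bits, at which point the multiplication is performed directly.

```python
# Recursively convert a large-integer multiplication into multiplications
# of integers at or below a target bit width (e.g. 16 bits), as in fig. 4.
def recursive_multiply(p: int, q: int, n_bits: int, leaf_bits: int = 16) -> int:
    if n_bits <= leaf_bits:
        return p * q                        # small enough: direct computing task
    half = n_bits // 2
    mask = (1 << half) - 1
    p_high, p_low = p >> half, p & mask
    q_high, q_low = q >> half, q & mask
    # Four smaller computing tasks, each converted recursively in turn.
    hh = recursive_multiply(p_high, q_high, half, leaf_bits)
    hl = recursive_multiply(p_high, q_low, half, leaf_bits)
    lh = recursive_multiply(p_low, q_high, half, leaf_bits)
    ll = recursive_multiply(p_low, q_low, half, leaf_bits)
    return (hh << n_bits) + ((hl + lh) << half) + ll

p, q = 0x123456789ABCDEF0, 0x0FEDCBA987654321
assert recursive_multiply(p, q, 64) == p * q   # lossless after two levels of recursion
```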
Based on the same thought, one or more embodiments of the present disclosure further provide apparatuses and devices corresponding to the above method, as shown in fig. 5 and fig. 6.
Fig. 5 is a schematic structural diagram of a business data processing device in federal learning according to one or more embodiments of the present disclosure, in which a dashed box represents an optional module, and the device includes:
the homomorphic operation determining module 502 determines homomorphic operation to be executed in federal learning;
an integer parameter determining module 504, configured to determine integer parameters to be used by the homomorphic operation according to service data provided by the federally learned participants;
an integer parameter conversion module 506 that converts the integer parameter into a plurality of fragment integers, the number of bits of the fragment integers being less than the number of bits of the integer parameter;
And the homomorphic operation executing module 508 acquires the plurality of fragment integers through the GPU, and completes the homomorphic operation in the GPU according to the plurality of fragment integers by executing corresponding homomorphic multiplication and/or homomorphic addition.
Optionally, the homomorphic operation executing module 508 generates, in the GPU, a set of batch computing tasks according to the plurality of segment integers and the homomorphic operation steps, where the computing tasks in the set of batch computing tasks include homomorphic multiplication and/or homomorphic addition involving at least two of the segment integers;
and completing the homomorphic operation by executing the batch computing task set.
Optionally, the apparatus further comprises:
the model training module 510 trains the machine learning model corresponding to the federation learning according to the result of the homomorphic operation after the homomorphic operation executing module 508 completes the homomorphic operation.
Optionally, the homomorphic operation determining module 502 determines homomorphic operations to be performed in federal learning according to the service data or the type of federal learning.
Optionally, the homomorphic operation determining module 502 determines that the homomorphic operation to be performed in the federal learning includes homomorphic multiplication if the federal learning is longitudinal federal learning; and/or,
if the federal learning is lateral federal learning, determines that the homomorphic operation to be performed in the federal learning includes homomorphic addition.
Optionally, the integer parameter conversion module 506 specifically includes:
a digital set dividing module 5062 for dividing a plurality of digital sets according to the plurality of digits of the integer parameter;
the fragment integer generation module 5064 converts the integer parameter to a plurality of fragment integers that can restore the integer parameter based on the plurality of digit sets.
Optionally, the digital set partitioning module 5062 determines an order of the digits of the integer parameter, and determines a partitioning point in the order;
at least one non-empty set of digits is partitioned before the partition point and at least one non-empty set of digits is partitioned after the partition point.
Optionally, consecutive digits of the integer parameter are in the digit set, and the digits are binary digits;
the fragment integer generation module 5064, for a digit set of the plurality of digit sets, determines a total value represented by all digits of the digit set in the integer parameter;
if the digit set does not contain the lowest digit of the integer parameter, converting the total value into two fragment integers, otherwise, taking the total value as a fragment integer;
one of the two fragment integers is obtained by concatenating, in order, the digits of the integer parameter at all the digit positions of the set; the other is 2 raised to the power m, where m is the number of digit positions in the integer parameter lower than all the positions of the set.
Optionally, the integer parameter conversion module 506 further performs, after the segment integer generation module 5064 converts the total value into two segment integers, or after the total value is taken as one segment integer:
the obtained fragment integer is converted into a fragment integer having a smaller number of bits by iterative processing.
Optionally, the apparatus further comprises:
the homomorphic encryption module 512 protects the service data by homomorphic encryption before the homomorphic operation execution module 508 completes the homomorphic operation according to the plurality of fragment integers.
Fig. 6 is a schematic structural diagram of a business data processing device in federal learning according to one or more embodiments of the present disclosure, where the device includes:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
Determining homomorphic operation to be executed in federal learning;
determining integer parameters to be used by the homomorphic operation according to the business data provided by the federally learned participants;
converting the integer parameter into a plurality of fragment integers, the number of bits of the fragment integers being less than the number of bits of the integer parameter;
and obtaining the plurality of fragment integers through the GPU, and completing homomorphic operation in the GPU according to the plurality of fragment integers by executing corresponding homomorphic multiplication and/or homomorphic addition.
The processor and the memory may communicate over a bus, and the device may also include input/output interfaces to communicate with other devices.
Based on the same considerations, one or more embodiments of the present specification further provide a non-volatile computer storage medium corresponding to the above method, storing computer-executable instructions configured to:
determining homomorphic operation to be executed in federal learning;
determining integer parameters to be used by the homomorphic operation according to the business data provided by the federally learned participants;
converting the integer parameter into a plurality of fragment integers, the number of bits of the fragment integers being less than the number of bits of the integer parameter;
And obtaining the plurality of fragment integers through the GPU, and completing homomorphic operation in the GPU according to the plurality of fragment integers by executing corresponding homomorphic multiplication and/or homomorphic addition.
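The instructions above can be sketched end to end; the following CPU-only illustration uses a thread pool as a stand-in for the GPU's arithmetic logic units, and all names are hypothetical rather than part of the patented implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# CPU-only sketch of the flow: convert two integer parameters into
# fragment integers, form a batch computing task set of fragment
# multiplications, execute the tasks in parallel (a thread pool stands
# in for the GPU's arithmetic logic units), and combine the results.
def to_fragments(x: int, n_bits: int):
    half = n_bits // 2
    return x >> half, x & ((1 << half) - 1)

def batch_multiply(p: int, q: int, n_bits: int) -> int:
    half = n_bits // 2
    p_high, p_low = to_fragments(p, n_bits)
    q_high, q_low = to_fragments(q, n_bits)
    # Batch computing task set: each task multiplies two fragment
    # integers and applies the task's power-of-2 shift.
    tasks = [(p_high, q_high, n_bits), (p_high, q_low, half),
             (p_low, q_high, half), (p_low, q_low, 0)]
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda t: (t[0] * t[1]) << t[2], tasks)
    return sum(partials)

assert batch_multiply(0b11001010, 0b10110001, 8) == 0b11001010 * 0b10110001
```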
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many improvements to method flows today can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely in computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps such that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a kind of hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or even the means for performing the various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function. Of course, when implementing the present specification, the functions of the units may be implemented in one or more pieces of software and/or hardware.
It will be appreciated by those skilled in the art that the present description may be provided as a method, system, or computer program product. Accordingly, the present specification embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description embodiments may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for apparatus, devices, non-volatile computer storage medium embodiments, the description is relatively simple, as it is substantially similar to method embodiments, with reference to the section of the method embodiments being relevant.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is merely one or more embodiments of the present description and is not intended to limit the present description. Various modifications and alterations to one or more embodiments of this description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like, which is within the spirit and principles of one or more embodiments of the present description, is intended to be included within the scope of the claims of the present description.

Claims (21)

1. A business data processing method in federal learning includes:
determining homomorphic operation to be executed in federal learning;
determining integer parameters to be used by the homomorphic operation according to the business data provided by the federally learned participants;
converting the integer parameter into a plurality of fragment integers, the fragment integers having fewer bits than the integer parameter, the fragment integers being partitioned according to the same number of digits, the digits being binary digits, wherein lossily converting the integer parameter comprises: if there are a small number of binary 1 digits at the end of the integer parameter, considering converting these 1s to 0s; alternatively, if the integer parameter contains a sequence of consecutive binary 1 digits, considering adding 1 at the last digit of the sequence, so that the sequence carries over by 1 digit;
corresponding compensation is given in the subsequent training process, which comprises the following steps: reducing the weight of the training data of the corresponding portion; or, taking the training data correspondingly generated based on the lossy conversion as a part of samples, reversely performing the lossy conversion on the integer parameter, taking the correspondingly generated training data as a compensation sample of the part of samples, and training by using both the part of samples and the compensation sample;
And acquiring the plurality of fragment integers through a graphics processing unit (GPU), and distributing corresponding homomorphic multiplications and/or homomorphic additions to a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers, so that the homomorphic operation is completed by executing in parallel on a plurality of corresponding GPU threads.
2. The method according to claim 1, wherein the allocating, according to the plurality of segment integers, corresponding homomorphic multiplications and/or homomorphic additions to the plurality of arithmetic logic units of the GPU specifically comprises:
generating a batch computing task set in the GPU according to the plurality of fragment integers and the homomorphic operation, wherein computing tasks in the batch computing task set comprise homomorphic multiplication and/or homomorphic addition participated by at least two fragment integers;
and after the batch computing task set is split, distributing the batch computing task set to a plurality of arithmetic logic units of the GPU.
3. The method of claim 1, after the homomorphic operation is completed, further comprising:
and training a machine learning model corresponding to the federal learning according to the homomorphic operation result.
4. The method according to claim 1, wherein the determining homomorphic operation to be performed in federal learning specifically comprises:
And determining homomorphic operation to be executed in the federation learning according to the service data or the type of the federation learning.
5. The method according to claim 4, wherein the determining homomorphic operation to be performed in the federal study according to the federal study type specifically comprises:
if the federation learning is longitudinal federation learning, determining that the homomorphic operation to be performed in the federation learning comprises homomorphic multiplication; and/or,
if the federation learning is lateral federation learning, determining that the homomorphic operation to be performed in the federation learning comprises homomorphic addition.
6. The method according to claim 1, wherein said converting said integer parameter into a plurality of fragment integers, comprises:
dividing a plurality of digit sets according to a plurality of digits of the integer parameter;
the integer parameter is converted into a plurality of fragment integers that can restore the integer parameter according to the plurality of digit sets.
7. The method of claim 6, wherein the dividing the plurality of digit sets according to the plurality of digits of the integer parameter specifically comprises:
determining the order of the digits of the integer parameter, and determining a division point in the order;
At least one non-empty set of digits is partitioned before the partition point and at least one non-empty set of digits is partitioned after the partition point.
8. The method of claim 6, wherein successive digits of the integer parameter are in the set of digits, the digits being binary digits;
the converting the integer parameter into a plurality of fragment integers capable of restoring the integer parameter according to the plurality of digit sets specifically includes:
determining, for a set of digits in the plurality of sets of digits, a total value represented by all digits in the set of digits in the integer parameter;
if the digit set does not contain the lowest digit of the integer parameter, converting the total value into two fragment integers, otherwise, taking the total value as a fragment integer;
one of the two fragment integers is obtained by concatenating, in order, the digits of the integer parameter at all the digit positions of the set; the other is 2 raised to the power m, where m is the number of digit positions in the integer parameter lower than all the positions of the set.
9. The method of claim 8, after said converting said total value into two segment integers, or after said taking said total value as one segment integer, further comprising:
The obtained fragment integer is converted into a fragment integer having a smaller number of bits by iterative processing.
10. The method of any of claims 1-9, the method further comprising, prior to the completion of the homomorphic operation:
and homomorphic encryption is carried out on the service data so as to provide privacy protection.
11. A business data processing device in federal learning, comprising:
the homomorphic operation determining module is used for determining homomorphic operation to be executed in federal learning;
the integer parameter determining module is used for determining integer parameters to be used for homomorphic operation according to the business data provided by the federal learning participants;
an integer parameter conversion module that converts the integer parameter into a plurality of fragment integers, the fragment integers having fewer bits than the integer parameter, the fragment integers being partitioned according to the same number of digits, the digits being binary digits, wherein lossily converting the integer parameter comprises: if there are a small number of binary 1 digits at the end of the integer parameter, considering converting these 1s to 0s; alternatively, if the integer parameter contains a sequence of consecutive binary 1 digits, considering adding 1 at the last digit of the sequence, so that the sequence carries over by 1 digit;
Corresponding compensation is given in the subsequent training process, which comprises the following steps: reducing the weight of the training data of the corresponding portion; or, taking the training data correspondingly generated based on the lossy conversion as a part of samples, reversely performing the lossy conversion on the integer parameter, taking the correspondingly generated training data as a compensation sample of the part of samples, and training by using both the part of samples and the compensation sample;
and the homomorphic operation execution module acquires the plurality of fragment integers through a graphic processor GPU, and distributes corresponding homomorphic multiplication and/or homomorphic addition for a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers so as to execute in parallel through a plurality of corresponding GPU threads to finish the homomorphic operation.
12. The apparatus of claim 11, the homomorphic operation execution module to generate, in the GPU, a set of bulk computing tasks according to the plurality of segment integers and the homomorphic operation, the computing tasks in the set of bulk computing tasks comprising homomorphic multiplications and/or homomorphic additions participated by at least two of the segment integers;
and after the batch computing task set is split, distributing the batch computing task set to a plurality of arithmetic logic units of the GPU.
13. The apparatus of claim 11, the apparatus further comprising:
and the model training module is used for training the machine learning model corresponding to the federation learning according to the result of the homomorphic operation after the homomorphic operation execution module finishes the homomorphic operation.
14. The apparatus of claim 11, the homomorphic operation determination module to determine homomorphic operations to be performed in federal learning based on the business data or a type of federal learning.
15. The apparatus of claim 14, the homomorphic operation determination module to determine that the homomorphic operation to be performed in the federal learning comprises homomorphic multiplication if the federal learning is longitudinal federal learning; and/or,
if the federal learning is lateral federal learning, to determine that the homomorphic operation to be performed in the federal learning comprises homomorphic addition.
16. The apparatus of claim 11, the integer parameter conversion module specifically comprising:
the digital set dividing module is used for dividing a plurality of digital sets according to a plurality of digits of the integer parameter;
and the fragment integer generation module is used for converting the integer parameter into a plurality of fragment integers capable of restoring the integer parameter according to the plurality of digit sets.
17. The apparatus of claim 16, the digit set partitioning module to determine an order of digits of the integer parameter and to determine partitioning points in the order;
at least one non-empty set of digits is partitioned before the partition point and at least one non-empty set of digits is partitioned after the partition point.
18. The apparatus of claim 16, wherein the digits in each digit set are consecutive digits of the integer parameter, the digits being binary digits;
the fragment integer generation module to determine, for a digit set in the plurality of digit sets, the total value represented in the integer parameter by all digits in the digit set;
if the digit set does not contain the lowest digit of the integer parameter, convert the total value into two fragment integers, and otherwise take the total value as one fragment integer;
wherein one of the two fragment integers is obtained by sequentially concatenating the digits of the integer parameter at all the digits in the digit set, the other is 2 to the power of n, and n is the number of digits in the integer parameter lower than all the digits in the digit set.
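The two-fragment split of claim 18 can be sketched in a few lines; the (fragment, multiplier) pairing and the helper name are assumptions for illustration. The product of the two fragment integers restores a digit set's total value, and the totals of all digit sets sum back to the integer parameter:

```python
def decompose(x, digit_sets):
    """Split integer x into fragment integers per claim 18.

    digit_sets: lists of consecutive bit positions (0 = lowest bit).
    Returns (fragment, multiplier) pairs whose products sum to x.
    """
    fragments = []
    for digits in digit_sets:
        n = min(digits)                       # digits of x below this set
        mask = sum(1 << d for d in digits)
        total = x & mask                      # total value of these digits in x
        if n == 0:                            # set contains the lowest digit:
            fragments.append((total, 1))      # take the total value itself
        else:                                 # split into two fragment integers:
            fragments.append((total >> n, 1 << n))  # concatenated digits, 2**n
    return fragments

# 0b110101 = 53, partitioned at bit 3: low set {0,1,2}, high set {3,4,5}
parts = decompose(0b110101, [[0, 1, 2], [3, 4, 5]])
```

Here the high digit set yields the fragments 6 (= 0b110, the concatenated digits) and 8 (= 2**3); each fragment has fewer binary digits than the original 6-digit parameter.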
19. The apparatus of claim 18, the integer parameter conversion module further performing, after the fragment integer generation module converts the total value into two fragment integers or takes the total value as one fragment integer:
converting the obtained fragment integers into fragment integers having a smaller number of digits through iterative processing.
20. The apparatus of any one of claims 11 to 19, further comprising:
and a homomorphic encryption module to protect the business data through homomorphic encryption before the homomorphic operation execution module completes the homomorphic operation.
21. A business data processing device in federal learning, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
determining homomorphic operation to be executed in federal learning;
determining integer parameters to be used by the homomorphic operation according to the business data provided by the federally learned participants;
converting the integer parameter into a plurality of fragment integers capable of restoring the integer parameter, the fragment integers having fewer digits than the integer parameter and being divided according to digit sets of consecutive digits, the digits being binary digits, wherein lossily converting the integer parameter comprises: if there is a small number of binary 1 digits at the end of the integer parameter, converting these 1s to 0; or, if the integer parameter has a sequence of consecutive binary 1 digits, adding 1 at the last digit of the sequence so that the sequence carries forward by one digit;
giving corresponding compensation in the subsequent training process, which comprises: reducing the weight of the corresponding portion of the training data; or, taking the training data generated based on the lossy conversion as a part of the samples, performing the lossy conversion on the integer parameter in the reverse direction, taking the training data generated accordingly as compensation samples for that part of the samples, and training with both the part of the samples and the compensation samples;
and acquiring the plurality of fragment integers through a graphics processing unit (GPU), and distributing corresponding homomorphic multiplications and/or homomorphic additions to a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers, so as to execute in parallel through a plurality of corresponding GPU threads to complete the homomorphic operation.
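The lossy conversion described in claim 21 (clearing a small number of trailing binary 1s, or letting a run of consecutive 1s carry into one higher digit) can be sketched as follows; the run-length limit and the helper names are illustrative assumptions:

```python
def clear_trailing_ones(x, limit=2):
    """Lossy conversion, variant (i): if the integer parameter ends in at
    most `limit` binary 1 digits, convert them to 0 (rounds down)."""
    t = (x ^ (x + 1)).bit_length() - 1  # length of the trailing run of 1s
    if 0 < t <= limit:
        return x & (x + 1)              # clears the trailing run, e.g. 0b1011 -> 0b1000
    return x

def carry_ones_run(x):
    """Lossy conversion, variant (ii): if the parameter ends in a run of
    consecutive 1s, add 1 so the run carries forward by one digit
    (rounds up), e.g. 0b0111 -> 0b1000."""
    t = (x ^ (x + 1)).bit_length() - 1
    return x + 1 if t > 0 else x
```

Either variant leaves fewer 1 digits to decompose into fragment integers; the claimed compensation (sample re-weighting or reverse-direction conversion) would then offset the rounding error during training.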
CN202011173171.6A 2020-10-28 2020-10-28 Business data processing method, device and equipment in federal learning Active CN112200713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011173171.6A CN112200713B (en) 2020-10-28 2020-10-28 Business data processing method, device and equipment in federal learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011173171.6A CN112200713B (en) 2020-10-28 2020-10-28 Business data processing method, device and equipment in federal learning

Publications (2)

Publication Number Publication Date
CN112200713A CN112200713A (en) 2021-01-08
CN112200713B true CN112200713B (en) 2023-04-21

Family

ID=74011973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011173171.6A Active CN112200713B (en) 2020-10-28 2020-10-28 Business data processing method, device and equipment in federal learning

Country Status (1)

Country Link
CN (1) CN112200713B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011632B (en) * 2021-01-29 2023-04-07 招商银行股份有限公司 Enterprise risk assessment method, device, equipment and computer readable storage medium
CN113259363B (en) * 2021-05-26 2022-09-02 中国人民解放军战略支援部队信息工程大学 Covert communication method and device
CN113537508B (en) * 2021-06-18 2024-02-02 百度在线网络技术(北京)有限公司 Processing method and device for federal calculation, electronic equipment and storage medium
CN113541921B (en) * 2021-06-24 2022-06-10 电子科技大学 Method for realizing fully homomorphic encryption by using GPU
CN113407979B (en) * 2021-08-16 2021-11-26 深圳致星科技有限公司 Heterogeneous acceleration method, device and system for longitudinal federated logistic regression learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110955907A (en) * 2019-12-13 2020-04-03 支付宝(杭州)信息技术有限公司 Model training method based on federal learning
CN111723948A (en) * 2020-06-19 2020-09-29 深圳前海微众银行股份有限公司 Federal learning method, device, equipment and medium based on evolution calculation

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9367372B2 (en) * 2013-06-18 2016-06-14 Advanced Micro Devices, Inc. Software only intra-compute unit redundant multithreading for GPUs
US10970402B2 (en) * 2018-10-19 2021-04-06 International Business Machines Corporation Distributed learning preserving model security
CN111563267B (en) * 2020-05-08 2024-04-05 京东科技控股股份有限公司 Method and apparatus for federal feature engineering data processing
CN111371544B (en) * 2020-05-27 2020-09-08 支付宝(杭州)信息技术有限公司 Prediction method and device based on homomorphic encryption, electronic equipment and storage medium
CN111813526A (en) * 2020-07-10 2020-10-23 深圳致星科技有限公司 Heterogeneous processing system, processor and task processing method for federal learning
CN111831330B (en) * 2020-07-10 2022-02-01 深圳致星科技有限公司 Heterogeneous computing system device interaction scheme for federated learning


Also Published As

Publication number Publication date
CN112200713A (en) 2021-01-08

Similar Documents

Publication Publication Date Title
CN112200713B (en) Business data processing method, device and equipment in federal learning
CN112199707B (en) Data processing method, device and equipment in homomorphic encryption
Roy et al. FPGA-based high-performance parallel architecture for homomorphic computing on encrypted data
US11159305B2 (en) Homomorphic data decryption method and apparatus for implementing privacy protection
CN109063825B (en) Convolutional neural network accelerator
US9900147B2 (en) Homomorphic encryption with optimized homomorphic operations
US10153894B2 (en) Homomorphic encryption with optimized encoding
CN106445471A (en) Processor and method for executing matrix multiplication on processor
US20170134156A1 (en) Homomorphic Encryption with Optimized Parameter Selection
JP2017515195A (en) Solve digital logic constraint problems via adiabatic quantum computation
RU2701716C2 (en) Electronic computer for performing arithmetic with obfuscation
CN115344236B (en) Polynomial multiplication method, polynomial multiplier, device and medium
US20190235834A1 (en) Optimization apparatus and control method thereof
RU2698764C2 (en) Electronic computing device for performing concealed arithmetic operations
CN115034358A (en) Processing method and processing device of neural network computation graph
CN113467750A (en) Large integer bit width division circuit and method for SRT algorithm with radix of 4
CN112162723B (en) Quantum subtraction operation method, device, electronic device and storage medium
CN112214200B (en) Quantum subtraction operation method, device, electronic device and storage medium
CN117435855A (en) Method for performing convolution operation, electronic device, and storage medium
US9762285B1 (en) Compression using mu-law approximation
CN115834018A (en) Multi-party data processing method, system and equipment for protecting privacy
CN112162724B (en) Quantum division operation method and device with precision
RU2559771C2 (en) Device for primary division of molecular numbers
CN113554163B (en) Convolutional neural network accelerator
CN103023519A (en) Method and device for transforming Fermat number

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40044438

Country of ref document: HK

GR01 Patent grant