CN112200713A - Business data processing method, device and equipment in federated learning


Info

Publication number
CN112200713A
CN112200713A
Authority
CN
China
Prior art keywords
homomorphic
integer
integers
digits
learning
Prior art date
Legal status
Granted
Application number
CN202011173171.6A
Other languages
Chinese (zh)
Other versions
CN112200713B (en)
Inventor
张君涛
周启贤
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202011173171.6A priority Critical patent/CN112200713B/en
Publication of CN112200713A publication Critical patent/CN112200713A/en
Application granted granted Critical
Publication of CN112200713B publication Critical patent/CN112200713B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

Embodiments of this specification disclose a method, an apparatus, and a device for processing business data in federated learning. The scheme comprises the following steps: determining a homomorphic operation to be performed in federated learning; determining an integer parameter to be used by the homomorphic operation according to business data provided by a participant of the federated learning; converting the integer parameter into a plurality of fragment integers, the number of bits of each fragment integer being less than the number of bits of the integer parameter; and obtaining the plurality of fragment integers through a GPU, and distributing corresponding homomorphic multiplications and/or homomorphic additions to a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers, so as to execute them in parallel through a plurality of corresponding GPU threads to complete the homomorphic operation.

Description

Business data processing method, device and equipment in federated learning
Technical Field
The present disclosure relates to the field of computer software technologies, and in particular, to a method, an apparatus, and a device for processing business data in federated learning.
Background
Federated learning is a privacy-preserving machine learning scheme that has been increasingly popularized and applied in recent years. It can effectively help multiple organizations use data and build machine learning models jointly while meeting requirements on user privacy protection, data security, and legal regulations.
Under the constraint that raw data must not leave each party's own domain, federated learning achieves joint model training and prediction by synchronizing intermediate results and gradient information among multiple parties during training. To protect user privacy, federated learning may homomorphically encrypt the relevant business data; in this case, further computation must be performed on the homomorphic encryption results. At present, computation in federated learning is often computation on integers with a large number of bits (called large integers), such as 1024-bit or 2048-bit integers, or even more bits.
Based on this, there is a need for a scheme that enables more efficient implementation of federal learning.
Disclosure of Invention
One or more embodiments of the present specification provide a method, an apparatus, a device, and a storage medium for processing business data in federated learning, so as to solve the following technical problem: there is a need for a scheme that enables more efficient implementation of federated learning.
To solve the above technical problem, one or more embodiments of the present specification are implemented as follows:
one or more embodiments of the present specification provide a method for processing business data in federated learning, including:
determining a homomorphic operation to be performed in federated learning;
determining an integer parameter to be used by the homomorphic operation according to business data provided by a participant of the federated learning;
converting the integer parameter into a plurality of fragment integers, the number of bits of each fragment integer being less than the number of bits of the integer parameter;
and obtaining the plurality of fragment integers through a Graphics Processing Unit (GPU), and distributing corresponding homomorphic multiplications and/or homomorphic additions to a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers, so as to execute them in parallel through a plurality of corresponding GPU threads to complete the homomorphic operation.
One or more embodiments of the present specification provide a business data processing apparatus in federated learning, including:
a homomorphic operation determining module, configured to determine a homomorphic operation to be performed in the federated learning;
an integer parameter determining module, configured to determine an integer parameter to be used by the homomorphic operation according to business data provided by a participant of the federated learning;
an integer parameter conversion module, configured to convert the integer parameter into a plurality of fragment integers, the number of bits of each fragment integer being less than the number of bits of the integer parameter;
and a homomorphic operation execution module, configured to obtain the plurality of fragment integers through the GPU, distribute corresponding homomorphic multiplications and/or homomorphic additions to the plurality of arithmetic logic units of the GPU according to the plurality of fragment integers, and execute them in parallel through the plurality of corresponding GPU threads to complete the homomorphic operation.
One or more embodiments of the present specification provide a business data processing device in federated learning, including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
determining a homomorphic operation to be performed in federated learning;
determining an integer parameter to be used by the homomorphic operation according to business data provided by a participant of the federated learning;
converting the integer parameter into a plurality of fragment integers, the number of bits of each fragment integer being less than the number of bits of the integer parameter;
and obtaining the plurality of fragment integers through a GPU, and distributing corresponding homomorphic multiplications and/or homomorphic additions to a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers, so as to execute them in parallel through a plurality of corresponding GPU threads to complete the homomorphic operation.
One or more embodiments of the present specification provide a non-transitory computer storage medium storing computer-executable instructions configured to:
determining a homomorphic operation to be performed in federated learning;
determining an integer parameter to be used by the homomorphic operation according to business data provided by a participant of the federated learning;
converting the integer parameter into a plurality of fragment integers, the number of bits of each fragment integer being less than the number of bits of the integer parameter;
and obtaining the plurality of fragment integers through a GPU, and distributing corresponding homomorphic multiplications and/or homomorphic additions to a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers, so as to execute them in parallel through a plurality of corresponding GPU threads to complete the homomorphic operation.
At least one technical solution adopted in one or more embodiments of the present specification can achieve the following beneficial effect: a large integer parameter that requires homomorphic operation in federated learning can be converted into a plurality of fragment integers with fewer bits, so that corresponding small-integer batch computation tasks can be generated accordingly, and these batch tasks on integers with fewer bits can be executed in parallel, in a multithreaded manner, by the multiple arithmetic logic units of the GPU.
Drawings
To describe the technical solutions in the embodiments of this specification or in the prior art more clearly, the following briefly introduces the accompanying drawings needed in the description of the embodiments or the prior art. Obviously, the drawings described below are merely some embodiments recorded in this specification, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for processing business data in federated learning according to one or more embodiments of this specification;
Fig. 2 is a schematic diagram of vertical federated learning provided in one or more embodiments of this specification;
Fig. 3 is a schematic diagram of horizontal federated learning provided in one or more embodiments of this specification;
Fig. 4 is a schematic diagram of a process of recursively converting an integer parameter in an application scenario provided in one or more embodiments of this specification;
Fig. 5 is a schematic structural diagram of a business data processing apparatus in federated learning according to one or more embodiments of this specification;
Fig. 6 is a schematic structural diagram of a business data processing device in federated learning according to one or more embodiments of this specification.
Detailed Description
Embodiments of this specification provide a method, an apparatus, a device, and a storage medium for processing business data in federated learning.
To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part, rather than all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of this specification without creative effort shall fall within the protection scope of the present application.
In one or more embodiments of this specification, considering that GPU performance has developed rapidly and GPUs are well suited to executing large numbers of computation tasks, while federated learning requires frequent execution of homomorphic operations that themselves comprise a large number of computation tasks, the GPU is applied to federated learning to accelerate its homomorphic operations and thereby improve the efficiency of federated learning.
For example, in federated learning, the plaintext gradient information provided separately by multiple parties is homomorphically encrypted to provide privacy protection, and homomorphic computations such as homomorphic multiplication or homomorphic addition can then be performed on the ciphertexts to obtain the overall gradient information for training. Homomorphic computation is computation in the ciphertext domain: it allows specific forms of algebraic computation on ciphertexts whose results remain encrypted, and decrypting those results yields the same values as performing the same computation on the plaintexts.
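For illustration only, the following is a minimal Python sketch of a Paillier-style additively homomorphic scheme. It is not taken from this patent: the primes, plaintext values, and function names are assumptions, and the toy key size is far below the 1024-bit-plus primes used in practice.

```python
import random
from math import gcd

# Toy Paillier parameters for illustration only; production keys use primes
# of 1024+ bits, which is why real ciphertext arithmetic involves the
# 1024/2048-bit large integers discussed in this specification.
p, q = 10007, 10009
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                          # valid for generator g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    # (1 + n)^m mod n^2 simplifies to 1 + m*n
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

# Homomorphic addition: multiplying ciphertexts adds the underlying plaintexts.
g1, g2 = 12345, 67890                  # e.g. two parties' scaled gradient values
c_sum = encrypt(g1) * encrypt(g2) % n2
assert decrypt(c_sum) == g1 + g2       # equals the plaintext-domain sum
```

Even in this toy setting the ciphertexts live modulo n^2; at realistic key sizes they become exactly the large integers whose arithmetic the present scheme sets out to accelerate.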
Based on this, the solution of this specification converts the large-integer parameters (e.g., 1024-bit or 2048-bit integers) involved in the computation into integers with fewer bits, and may even recursively convert those integers again following the same conversion idea to further reduce the bit width. A large-integer computation can thus be decomposed into computations on many small integers (e.g., 16-bit or 32-bit integers); the GPU then performs the many small-integer computations and combines their results. Small-integer computations, especially small-integer batch computations, are what GPUs excel at: a GPU executes small-integer batch computation tasks with high efficiency. Through such processing, the GPU's large-integer computation burden can be effectively reduced and the overall computation efficiency improved, as described in detail later.
This idea is applicable not only to scenarios that use GPUs to accelerate federated learning, but also to other scenarios bottlenecked by large-integer computation efficiency. Based on these ideas, the federated learning scenario is described below as a specific example.
Fig. 1 is a schematic flowchart of a method for processing business data in federated learning according to one or more embodiments of this specification. Federated learning involves staged or final integration of data from multiple parties; the executing entity of the process in Fig. 1 is, for example, a computing device of the party performing the integration.
The process in fig. 1 may include the following steps:
s102: a homomorphic operation to be performed in federated learning is determined.
In one or more embodiments of this specification, the homomorphic operation may include a homomorphic encryption operation performed on plaintext, and may also include homomorphic computations performed further on ciphertext, such as homomorphic multiplication and homomorphic addition. Federated learning often involves the homomorphic operations listed above; by executing step S102, at least one of these homomorphic operations is specified according to actual requirements, and the subsequent process is carried out for it.
S104: and determining integer parameters to be used by the homomorphic operation according to the business data provided by the participants of the federal study.
In one or more embodiments of this specification, the business data is, for example, a feature vector of a user in a certain business domain. The feature vector itself, or an intermediate result obtained by further processing the feature vector (e.g., homomorphically encrypting it to provide privacy protection), may serve as the integer parameter to be used by the homomorphic operation. Taking homomorphic multiplication as an example, the integer parameters to be used are the multiplicand and/or the multiplier. It should be noted that if the original multiplicand and/or multiplier contains a decimal fraction, the fractional part can be converted into an integer through a shift operation for further processing, which does not hinder the application of the idea described above.
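As a small illustration of the shift-based conversion of fractional values just mentioned, the following sketch uses an assumed scale of 16 fractional bits (the scale is an arbitrary illustrative choice):

```python
FRAC_BITS = 16                         # assumed number of fractional bits

def to_fixed(x: float) -> int:
    # Shift the fractional part into the integer part: round(x * 2^16).
    return round(x * (1 << FRAC_BITS))

a, b = to_fixed(0.125), to_fixed(2.5)  # decimals become plain integers
prod = a * b                           # integer arithmetic only
# One multiplication doubles the scale, so shift back by 2 * FRAC_BITS.
assert prod / (1 << (2 * FRAC_BITS)) == 0.125 * 2.5
```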
S106: converting the integer parameter into a plurality of fractional integers, the fractional integers having a number of bits less than the number of bits of the integer parameter.
In one or more embodiments of this specification, some computing devices may be inefficient at handling general large integers yet still achieve good efficiency when handling computations on certain integers of special form, even large ones. The integer parameter may be converted based on such characteristics. Examples of such special forms include: integers that are powers of 2, integers containing many consecutive 0s (especially integers whose trailing binary digits are all zero), integers whose digits vary with a periodic pattern, integers containing only a single binary 1, and so on.
In one or more embodiments of this specification, the plurality of fragment integers obtained by the split can restore the original integer parameter without loss of information, which avoids introducing errors into the execution of the subsequent homomorphic operation.
In one or more embodiments of this specification, the integer parameter may also be converted lossily to improve computation efficiency within an acceptable error range. For example, if the integer parameter ends with a small number of binary 1s, those 1s may be converted into 0s, which is easier to process. As another example, if the integer parameter contains a run of consecutive binary 1s, 1 can be added at the lowest digit of the run so that the carry propagates through it, leaving a single 1 immediately above the run followed by all 0s, which is likewise easier to process.
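The two lossy adjustments described above can be sketched as follows (the helper names and bit positions are illustrative assumptions):

```python
def drop_trailing_ones(x: int, k: int) -> int:
    # Lossy: clear the lowest k bits, turning trailing 1s into 0s and
    # making the value a multiple of 2^k, which is cheaper to process.
    return x & ~((1 << k) - 1)

def round_up_run_of_ones(x: int, lo: int) -> int:
    # Lossy: if a run of 1s starts at bit lo, adding 2^lo carries through
    # the run, leaving a single 1 just above the run followed by 0s.
    return x + (1 << lo)

assert drop_trailing_ones(0b10100111, 3) == 0b10100000
assert round_up_run_of_ones(0b00111100, 2) == 0b01000000
```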
Further, if the integer parameter is converted lossily, corresponding compensation can be applied in the subsequent training process to reduce the error. For example, the weight of the corresponding part of the training data can be reduced. As another example, the training data generated based on the lossy conversion can be used as one part of the samples, the integer parameter can additionally be converted lossily in the opposite direction (e.g., decreased if it was originally increased) with the correspondingly generated training data used as compensation samples for that part, and training can be performed using both parts of the samples together.
In one or more embodiments of this specification, the conversion of the integer parameter may be performed multiple times through an iterative process. For example, the integer parameter is first converted into several fragment integers with a reduced number of bits, and those fragment integers are then iteratively converted into fragment integers whose bit width is reduced further. The specific number of iterations may be determined with reference to the capability of the computing device; for example, if the computing device performs homomorphic computation most efficiently on 16-bit integers, the integer parameter may be iteratively converted into a plurality of 16-bit fragment integers.
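A minimal sketch of such an iterative conversion down to a fixed fragment width (16-bit fragments, as in the example above; the function names are illustrative assumptions):

```python
def to_fragments(x: int, frag_bits: int = 16) -> list[int]:
    # Split a large integer into fragment integers of frag_bits each,
    # least-significant fragment first.
    mask = (1 << frag_bits) - 1
    frags = []
    while True:
        frags.append(x & mask)
        x >>= frag_bits
        if x == 0:
            return frags

def from_fragments(frags: list[int], frag_bits: int = 16) -> int:
    # Lossless inverse: the fragments restore the original parameter exactly.
    return sum(f << (i * frag_bits) for i, f in enumerate(frags))

p = 0x1234_5678_9ABC_DEF0_1122_3344_5566_7788  # 128-bit stand-in for 1024/2048 bits
assert from_fragments(to_fragments(p)) == p
```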
S108: and acquiring the fragment integers by the GPU, and distributing corresponding homomorphic multiplication and/or homomorphic addition to a plurality of arithmetic logic units of the GPU according to the fragment integers so as to execute in parallel by a plurality of corresponding GPU threads to finish homomorphic operation.
In one or more embodiments of this specification, the step in the homomorphic operation that would use the integer parameter directly is replaced by computation on the fragment integers, and the homomorphic operation result obtained this way is identical or close to the result that would be obtained by using the integer parameter directly. How close an approximation is acceptable depends on actual requirements and is not limited here.
In practical applications, compared with a Central Processing Unit (CPU), a GPU has fewer control logic units but a large number of arithmetic logic units and a large number of GPU threads, making it suitable for efficiently executing batch computation tasks in parallel. This holds especially for small-integer batch computation tasks with few bits: the number of tasks is large while the computation amount of a single task is small, so more arithmetic logic units and GPU threads can be occupied in parallel and the overall computation efficiency improves. This is the aforementioned advantage of GPUs in executing small-integer batch computation tasks efficiently.
Homomorphic operations include large-integer computations. If those large-integer computations are executed directly, only a few threads can be used, possibly even a single thread; parallelism is poor and computation efficiency is low. Therefore, the large-integer computation task is decomposed into batches of small-integer tasks according to the fragment integers and executed by the GPU, so that the GPU's strengths are fully exploited.
Specifically, a batch computation task set may be generated in the GPU according to the plurality of fragment integers and the steps of the homomorphic operation, where the computation tasks in the batch computation task set include homomorphic multiplications and/or homomorphic additions in which at least two fragment integers participate; the batch computation task set is split and then distributed to the multiple arithmetic logic units of the GPU; and the batch computation task set is executed through these arithmetic logic units and the corresponding threads to complete the homomorphic operation. It should be noted that besides the batch computation task set, the homomorphic operation may also include further computation tasks, such as tasks that integrate the results of the batch computation tasks and other tasks that do not involve fragment integers.
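A host-side sketch of how such a batch computation task set might be generated and its results combined (the task representation and the inline combination are assumptions; a real implementation would dispatch the independent multiplication tasks to the GPU's arithmetic logic units and threads rather than loop on the CPU):

```python
from itertools import product

def build_task_set(p_frags: list[int], q_frags: list[int]):
    # One small multiplication task per fragment pair; the tasks are
    # independent, so a GPU can map each one to its own thread/ALU.
    return [(i, j, pi, qj) for (i, pi), (j, qj)
            in product(enumerate(p_frags), enumerate(q_frags))]

def run_tasks_and_combine(tasks, frag_bits: int = 16) -> int:
    # Stand-in for the parallel GPU stage: each task multiplies two fragment
    # integers; the partial products are then shifted into place and summed.
    return sum((pi * qj) << ((i + j) * frag_bits) for i, j, pi, qj in tasks)

p, q = 0xDEAD_BEEF_CAFE_F00D, 0x0123_4567_89AB_CDEF
pf, qf = to_fragments(p), to_fragments(q)  # from the earlier sketch
assert run_tasks_and_combine(build_task_set(pf, qf)) == p * q
```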
Of course, the step of converting the integer parameter into the plurality of fragment integers may also be performed in the GPU. Compared with schemes that use the CPU for federated learning, the GPU-based scheme is more efficient; it further accelerates federated learning and frees the CPU, so that the CPU can concentrate on work it is good at, such as logic control, achieving optimal allocation of resources.
In one or more embodiments of this specification, after the homomorphic operation is performed, if the result of the homomorphic operation contains training data or gradient information, the result may be used to train the machine learning model corresponding to the federated learning, which helps improve training efficiency.
Through the method of Fig. 1, a large integer parameter requiring homomorphic operation in federated learning can be converted into a plurality of fragment integers with fewer bits, so that corresponding small-integer batch computation tasks can be generated accordingly and executed in parallel, in a multithreaded manner, by the multiple arithmetic logic units in the GPU.
Based on the process of Fig. 1, this specification also provides some specific implementations and extensions of the process, which are described below.
Federated learning currently comes in two types: vertical federated learning and horizontal federated learning. Taking federated learning with two participants as an example, the participants each provide a data set, two data sets in total, which are used to train the machine learning model corresponding to the federated learning.
When the users of the two data sets overlap heavily but the user features overlap little, the data sets can be divided vertically (i.e., along the feature dimension), and the portion of data where the users are the same but the user features are not entirely the same is taken out for training; this learning scheme is vertical federated learning.
When the user features of the two data sets overlap heavily but the users overlap little, the data sets can be divided horizontally (i.e., along the user dimension), and the portion of data where the user features are the same but the users are not entirely the same is taken out for training; this learning scheme is horizontal federated learning.
Of course, the training scenario is similar when more data providers supply more data sets; an appropriate learning scheme can be selected by analogy with the two types above.
Different types of federated learning also differ in the homomorphic operations involved, as explained with reference to Fig. 2 and Fig. 3.
Fig. 2 is a schematic diagram of the principle of vertical federated learning provided in one or more embodiments of this specification. Fig. 2 shows two participants A and B of vertical federated learning, both of which provide their own business data for training the machine learning model corresponding to the federated learning. In the model training stage, A and B homomorphically encrypt their intermediate computation results and send them to each other, and the gradient information on the two sides is synchronized through homomorphic multiplication. It follows that vertical federated learning involves at least homomorphic multiplication during the model training stage.
Fig. 3 is a schematic diagram of the principle of horizontal federated learning provided in one or more embodiments of this specification. Fig. 3 shows one coordinator and three participants A, B, and C of horizontal federated learning. A, B, and C are all data providers supplying their own business data for training the machine learning model corresponding to the federated learning, and the coordinator integrates the data from A, B, and C. In the training stage, A, B, and C each perform plaintext model training within their own domains, homomorphically encrypt the trained gradient information, and send it to the coordinator, which performs homomorphic addition on the data in the ciphertext domain. It follows that horizontal federated learning involves at least homomorphic addition during the model training stage.
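Continuing the hypothetical Paillier sketch given earlier (an assumption, since this specification does not fix a particular homomorphic encryption scheme), the coordinator's ciphertext-domain aggregation can be pictured as:

```python
# Reuses encrypt, decrypt, and n2 from the earlier Paillier sketch.
gA, gB, gC = 100, 250, 175             # assumed scaled gradients from A, B, C
agg = encrypt(gA) * encrypt(gB) % n2   # homomorphic addition in ciphertext domain
agg = agg * encrypt(gC) % n2
assert decrypt(agg) == gA + gB + gC    # the coordinator never sees the plaintexts
```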
Based on this, if the federated learning is horizontal federated learning, it can be determined that the homomorphic operation to be performed includes homomorphic addition; and if the federated learning is vertical federated learning, it can be determined that the homomorphic operation to be performed includes homomorphic multiplication.
Whether for horizontal or vertical federated learning, the computation formulas are not complex, but they are executed repeatedly many times and on batches of data, which makes them well suited to processing with a GPU; that is, the homomorphic multiplications and homomorphic additions involved in training in Fig. 2 and Fig. 3 are executed with the GPU.
Besides the type of federated learning, the homomorphic operation to be performed can also be determined according to the business data provided by the participants. For example, if a participant provides original plaintext business data, then to protect user privacy the plaintext business data must at least be homomorphically encrypted, so the homomorphic operations to be performed in the federated learning include at least homomorphic encryption. As another example, if one participant provides business data for some dimensions of a certain user and another participant provides business data for other dimensions of that user, subsequent training will very likely need to combine the business data to learn the user's characteristics completely; in that case the homomorphic operation to be performed is more likely to include homomorphic addition.
In one or more embodiments of this specification, a plurality of digit sets are divided according to the plurality of digits of the integer parameter, and the integer parameter is converted, according to the plurality of digit sets, into a plurality of fragment integers capable of restoring the integer parameter. This is a lossless conversion process and avoids introducing errors into the subsequent homomorphic operation result. For ease of processing by computing devices, the digits are generally binary digits, and some embodiments below likewise use binary digits as examples; of course, if the computing device is sufficiently capable, digits in higher power-of-two bases, such as octal or hexadecimal digits, are also possible, which allows integer parameters to be expressed and processed more concisely.
There are various ways to divide the plurality of digit sets. Dividing the consecutive digits equally is a typical method: for an integer parameter with an even number of bits (denoted N; if N is odd, the digits can be divided as equally as possible, or 1 digit can be set aside and the rest divided equally), divide from the middle, putting the upper N/2 digits into one digit set and the lower N/2 digits into another. Of course, the division can also be made more flexibly based on runs of identical digits. For example, if within a stretch of consecutive digits of the integer parameter there appear first 5 consecutive 1s, then 3 consecutive 0s, and then 6 consecutive 1s, the 5 ones, the 3 zeros, and the 6 ones may each be divided into their own digit set. In addition, non-consecutive digits may also be divided according to some policy; for example, the non-consecutive digits equal to 1 may each be placed in their own digit set, and when the subsequent homomorphic operation is executed, identical digit positions of different integer parameters are processed uniformly to improve efficiency.
In one or more embodiments of this specification, an order of the digits of the integer parameter is determined, and a division point is determined in that order; at least one non-empty digit set is divided out before the division point, and at least one non-empty digit set is divided out after it. A simple concrete division method is dichotomy (placing the division point as close to the middle as possible). Its advantage is that, because the digits are divided at as equal a granularity as possible (digit sets at the same level have the same or essentially the same size), further iterative division is convenient, and the corresponding conversion yields many small integers with aligned bit widths, which computing devices can process efficiently.
For example, assume the digits in each of the divided digit sets are consecutive digits of the integer parameter and that the digits are binary. For a digit set among the plurality of digit sets, determine the total value that all the digits of that set represent within the integer parameter. If the set does not contain the lowest digit of the integer parameter, convert the total value into two fragment integers; otherwise, take the total value itself as one fragment integer. Of the two fragment integers, one is obtained by concatenating, in order, the integer parameter's digits at those positions, and the other is 2 to the power m, where m is the number of digit positions in the integer parameter below those digits. Denoting the bit width of the integer parameter as N, under the dichotomy m = N/2 for the upper digit set.
More intuitively, following the conversion scheme in the previous paragraph, suppose a large-integer multiplication, denoted P × Q, is to be performed, where P and Q are both large integers with N bits.
Equally divide the digits of P into digit sets corresponding to an upper part and a lower part, and express each set using the digits at the corresponding positions, thereby converting P into:

P = P_high × 2^(N/2) + P_low

where P_high corresponds to the upper part and P_low to the lower part; P_high and P_low are the fragment integers obtained from converting P. More completely, 2^(N/2) can also be regarded as a fragment integer obtained from the conversion.
For ease of understanding, take the 8-bit integer 11001010 as an example: assuming P is this integer, N = 8, P_high = 1100, and P_low = 1010.
Similarly, converting Q yields:

Q = Q_high × 2^(N/2) + Q_low

Further, P × Q is losslessly converted into a combination of products of integers with half as many digits:

P × Q = P_high × Q_high × 2^N + (P_high × Q_low + P_low × Q_high) × 2^(N/2) + P_low × Q_low

It can be seen that the large-integer multiplication is converted into multiplications and additions of integers with fewer bits. Power-of-2 terms such as 2^N and 2^(N/2) can be computed efficiently by shift operations on computing devices, and in general, for some computing devices such as GPUs, computing after the conversion can be more efficient than computing directly without it. For example, P_high × Q_high, P_high × Q_low, P_low × Q_high, and P_low × Q_low can each serve as a computation task, together constituting the aforementioned batch computation task set; in this case each computation task includes a multiplication of two fragment integers.
Further, P and Q may be iteratively converted multiple times following the same conversion idea: after P_high, P_low, Q_high, and Q_low are obtained, each of them is converted into integers with fewer bits, and so on, until the desired bit width is reached, e.g., 16 or 32 bits; see Fig. 4.
Fig. 4 is a schematic diagram of a process of performing recursive conversion on integer parameters in an application scenario provided in one or more embodiments of the present disclosure.
In Fig. 4, the first conversion of P × Q yields the four multiplication terms P_high × Q_high, P_high × Q_low, P_low × Q_high, and P_low × Q_low, which are used to continue the iterative conversion.

Taking P_high × Q_high as an example, convert P_high into:

P_high = P_hh × 2^(N/4) + P_hl

and Q_high into:

Q_high = Q_hh × 2^(N/4) + Q_hl

Then:

P_high × Q_high = P_hh × Q_hh × 2^(N/2) + (P_hh × Q_hl + P_hl × Q_hh) × 2^(N/4) + P_hl × Q_hl

This yields the four multiplication terms P_hh × Q_hh, P_hh × Q_hl, P_hl × Q_hh, and P_hl × Q_hl for continuing the iterative conversion. By analogy, the recursive conversion can continue, producing ever more and ever smaller computation tasks, which together constitute a larger batch computation task set.
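A compact sketch of this recursive conversion (illustrative only: it multiplies the leaf fragments inline, whereas the scheme described here would instead collect them into a batch computation task set for the GPU):

```python
def split_mul(p: int, q: int, n_bits: int, leaf_bits: int = 16) -> int:
    # Recursively convert an n_bits-wide product into half-width products
    # until the operands reach leaf_bits, mirroring the Fig. 4 recursion.
    if n_bits <= leaf_bits:
        return p * q                   # small-integer task, GPU-friendly
    half = n_bits // 2
    mask = (1 << half) - 1
    ph, pl = p >> half, p & mask
    qh, ql = q >> half, q & mask
    hh = split_mul(ph, qh, half, leaf_bits)
    hl = split_mul(ph, ql, half, leaf_bits)
    lh = split_mul(pl, qh, half, leaf_bits)
    ll = split_mul(pl, ql, half, leaf_bits)
    return (hh << n_bits) + ((hl + lh) << half) + ll

p = 0x1234_5678_9ABC_DEF0
q = 0xFEDC_BA98_7654_3210
assert split_mul(p, q, 64) == p * q    # same result as direct multiplication
```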
Based on the same idea, one or more embodiments of the present specification further provide apparatuses and devices corresponding to the above-described method, as shown in fig. 5 and fig. 6.
Fig. 5 is a schematic structural diagram of a business data processing apparatus in federated learning according to one or more embodiments of this specification, where a dashed-line box in the figure indicates an optional module. The apparatus includes:
a homomorphic operation determining module 502, configured to determine the homomorphic operation to be performed in the federated learning;
an integer parameter determining module 504, configured to determine the integer parameter to be used by the homomorphic operation according to business data provided by a participant of the federated learning;
an integer parameter conversion module 506, configured to convert the integer parameter into a plurality of fragment integers, the number of bits of each fragment integer being less than the number of bits of the integer parameter;
and a homomorphic operation execution module 508, configured to obtain the plurality of fragment integers through the GPU and complete the homomorphic operation by executing corresponding homomorphic multiplications and/or homomorphic additions in the GPU according to the plurality of fragment integers.
Optionally, the homomorphic operation execution module 508 generates, in the GPU, a batch computation task set according to the plurality of fragment integers and the steps of the homomorphic operation, where the computation tasks in the batch computation task set include homomorphic multiplications and/or homomorphic additions in which at least two fragment integers participate;
and completes the homomorphic operation by executing the batch computation task set.
Optionally, the apparatus further comprises:
and a model training module 510, configured to train the machine learning model corresponding to the federated learning according to the result of the homomorphic operation after the homomorphic operation execution module 508 completes the homomorphic operation.
Optionally, the homomorphic operation determining module 502 determines the homomorphic operation to be performed in the federated learning according to the business data or the type of the federated learning.
Optionally, the homomorphic operation determining module 502 determines that the homomorphic operation to be performed in the federated learning includes homomorphic multiplication if the federated learning is vertical federated learning; and/or,
determines that the homomorphic operation to be performed in the federated learning includes homomorphic addition if the federated learning is horizontal federated learning.
Optionally, the integer parameter conversion module 506 specifically includes:
a digit set dividing module 5062, configured to divide a plurality of digit sets according to the plurality of digits of the integer parameter;
a fragment integer generation module 5064, configured to convert the integer parameter, according to the plurality of digit sets, into a plurality of fragment integers capable of restoring the integer parameter.
Optionally, the digit set dividing module 5062 determines an order of the digits of the integer parameter and determines a division point in that order;
and divides at least one non-empty digit set before the division point and at least one non-empty digit set after the division point.
Optionally, the digits in the digit set are consecutive digits of the integer parameter, the digits being binary digits;
the fragment integer generation module 5064 determines, for a digit set of the plurality of digit sets, the total value that all the digits of the digit set represent in the integer parameter;
if the digit set does not contain the lowest digit of the integer parameter, converts the total value into two fragment integers; otherwise, takes the total value as one fragment integer;
wherein one of the two fragment integers is obtained by concatenating, in order, the integer parameter's digits at those positions, and the other is 2 to the power m, where m is the number of digit positions in the integer parameter below those digits.
Optionally, the integer parameter conversion module 506 further performs, after the fragment integer generation module 5064 converts the total value into two fragment integers or takes the total value as one fragment integer:
converting the obtained fragment integers into fragment integers with fewer bits through an iterative process.
Optionally, the apparatus further comprises:
a homomorphic encryption module 512, configured to protect the business data through homomorphic encryption before the homomorphic operation execution module 508 completes the homomorphic operation according to the plurality of fragment integers.
Fig. 6 is a schematic structural diagram of a business data processing device in federated learning according to one or more embodiments of this specification, where the device includes:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
determining a homomorphic operation to be performed in federated learning;
determining an integer parameter to be used by the homomorphic operation according to business data provided by a participant of the federated learning;
converting the integer parameter into a plurality of fragment integers, the number of bits of each fragment integer being less than the number of bits of the integer parameter;
and obtaining the plurality of fragment integers through a GPU, and completing the homomorphic operation in the GPU by executing corresponding homomorphic multiplications and/or homomorphic additions according to the plurality of fragment integers.
The processor and the memory may communicate via a bus, and the device may further include an input/output interface for communicating with other devices.
Based on the same idea, one or more embodiments of the present specification further provide a non-volatile computer storage medium corresponding to the above method, storing computer-executable instructions configured for:
determining a homomorphic operation to be performed in federated learning;
determining an integer parameter to be used by the homomorphic operation according to business data provided by a participant of the federated learning;
converting the integer parameter into a plurality of fragment integers, the number of bits of each fragment integer being less than the number of bits of the integer parameter;
and obtaining the plurality of fragment integers through a GPU, and completing the homomorphic operation in the GPU by executing corresponding homomorphic multiplications and/or homomorphic additions according to the plurality of fragment integers.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement to a method flow). However, as technology has developed, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program to "integrate" a digital system onto a single PLD, without needing a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development and writing; the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. It should also be clear to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by lightly programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application-Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included in it for implementing various functions can also be regarded as structures within the hardware component. Or, the means for implementing various functions can even be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above apparatus is described as being divided into various units by function, which are described separately. Of course, when implementing this specification, the functions of the units may be implemented in one or more pieces of software and/or hardware.
As will be appreciated by one skilled in the art, the present specification embodiments may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in computer-readable media, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or device that comprises the element.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the device, and the nonvolatile computer storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and for the relevant points, reference may be made to the partial description of the embodiments of the method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is merely one or more embodiments of the present disclosure and is not intended to limit the present disclosure. Various modifications and alterations to one or more embodiments of the present description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of one or more embodiments of the present specification should be included in the scope of the claims of the present specification.

Claims (21)

1. A business data processing method in federated learning, comprising:
determining a homomorphic operation to be performed in federated learning;
determining an integer parameter to be used by the homomorphic operation according to business data provided by a participant of the federated learning;
converting the integer parameter into a plurality of fragment integers, the number of bits of each fragment integer being less than the number of bits of the integer parameter;
and obtaining the plurality of fragment integers through a GPU (Graphics Processing Unit), and distributing corresponding homomorphic multiplications and/or homomorphic additions to a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers, so as to execute them in parallel through a plurality of corresponding GPU threads to complete the homomorphic operation.
2. The method of claim 1, wherein distributing corresponding homomorphic multiplications and/or homomorphic additions to the plurality of arithmetic logic units of the GPU according to the plurality of fragment integers specifically comprises:
generating a batch computation task set in the GPU according to the plurality of fragment integers and the steps of the homomorphic operation, wherein the computation tasks in the batch computation task set comprise homomorphic multiplications and/or homomorphic additions in which at least two fragment integers participate;
and splitting the batch computation task set and then distributing it to the plurality of arithmetic logic units of the GPU.
3. The method of claim 1, wherein after the homomorphic operation is completed, the method further comprises:
training a machine learning model corresponding to the federated learning according to the result of the homomorphic operation.
4. The method according to claim 1, wherein determining the homomorphic operation to be performed in the federated learning specifically comprises:
determining the homomorphic operation to be performed in the federated learning according to the business data or the type of the federated learning.
5. The method of claim 4, wherein determining, according to the type of the federated learning, the homomorphic operation to be performed in the federated learning specifically comprises:
if the federated learning is vertical federated learning, determining that the homomorphic operations to be performed in the federated learning include homomorphic multiplication; and/or,
if the federated learning is horizontal federated learning, determining that the homomorphic operations to be performed in the federated learning include homomorphic addition.
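As a hedged illustration of claims 4 and 5, a dispatch routine might map the federated-learning type to the homomorphic operations it requires; the string labels below are assumptions for readability, not terms fixed by the patent.

```python
# Hypothetical dispatch for claims 4-5: choose homomorphic operations
# from the federated-learning type.
def required_homomorphic_ops(fl_type: str) -> set:
    ops = set()
    if fl_type == "vertical":    # participants hold different features
        ops.add("homomorphic_multiplication")
    if fl_type == "horizontal":  # participants hold different samples
        ops.add("homomorphic_addition")
    return ops

assert required_homomorphic_ops("horizontal") == {"homomorphic_addition"}
```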
6. The method of claim 1, wherein converting the integer parameter into a plurality of fragment integers specifically comprises:
dividing the digits of the integer parameter into a plurality of digit sets;
converting the integer parameter, according to the plurality of digit sets, into a plurality of fragment integers from which the integer parameter can be restored.
7. The method of claim 6, wherein dividing the digits of the integer parameter into a plurality of digit sets specifically comprises:
determining an ordering of the digits of the integer parameter and determining a division point in the ordering;
dividing out at least one non-empty digit set before the division point and at least one non-empty digit set after the division point.
8. The method of claim 6, wherein the digits in each digit set are consecutive digits of the integer parameter, the digits being binary digits;
and wherein converting the integer parameter, according to the plurality of digit sets, into a plurality of fragment integers from which the integer parameter can be restored specifically comprises:
determining, for a digit set among the plurality of digit sets, the total value that all digits in the digit set represent within the integer parameter;
if the digit set does not contain the lowest digit of the integer parameter, converting the total value into two fragment integers; otherwise, taking the total value as one fragment integer;
wherein one of the two fragment integers is obtained by concatenating, in order, the digits of the integer parameter at all the digit positions of the set, and the other is 2 raised to the power m, where m is the number of digits of the integer parameter lower than all the digit positions of the set.
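A worked example helps here. For a binary parameter split after its four lowest digits, the lower digit set yields one fragment integer, while the higher set's total value factors into two fragment integers: the concatenated higher digits and the power of two counting the lower digits. The sketch below is illustrative; the function name and the single split point are assumptions.

```python
# Hypothetical sketch of claims 6-8: split the binary digits at one
# point and express each digit set's value through fragment integers.
def digit_set_fragments(x: int, split_point: int):
    low = x & ((1 << split_point) - 1)  # set containing the lowest digit
    hi = x >> split_point               # higher digits, concatenated
    pow2 = 1 << split_point             # 2**m, m = number of lower digits
    return low, hi, pow2

# Worked example: x = 0b10110110 (182), split after its 4 lowest bits.
x = 0b10110110
low, hi, pow2 = digit_set_fragments(x, 4)
assert (low, hi, pow2) == (0b0110, 0b1011, 16)
assert low + hi * pow2 == x  # the fragment integers restore the parameter
```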
9. The method of claim 8, wherein after converting the total value into two fragment integers, or after taking the total value as one fragment integer, the method further comprises:
converting the obtained fragment integers, through an iterative process, into fragment integers with fewer bits.
10. The method of any one of claims 1 to 9, wherein before completing the homomorphic operation, the method further comprises:
homomorphically encrypting the business data to provide privacy protection.
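Claim 10 leaves the encryption scheme open. For intuition only, the toy Paillier cryptosystem below shows the additive homomorphism such schemes provide: multiplying ciphertexts adds the underlying plaintexts. The key sizes are deliberately tiny and insecure, and this is not the patent's own construction.

```python
# Toy Paillier sketch (insecure parameters!) illustrating the additive
# homomorphism behind claim 10. Requires Python 3.9+ for math.lcm.
import math, random

p, q = 1000003, 1000033          # toy primes; real keys use 1024+ bits
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)     # Carmichael function of n
mu = pow(lam, -1, n)             # simplification valid for g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)   # assumes gcd(r, n) == 1, true w.h.p.
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

a, b = 123, 456
assert decrypt(encrypt(a) * encrypt(b) % n2) == a + b  # homomorphic add
assert decrypt(pow(encrypt(a), 7, n2)) == 7 * a        # scalar multiply
```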
11. A business data processing apparatus in federated learning, comprising:
a homomorphic operation determining module, configured to determine a homomorphic operation to be performed in federated learning;
an integer parameter determining module, configured to determine integer parameters to be used by the homomorphic operation according to business data provided by participants of the federated learning;
an integer parameter conversion module, configured to convert the integer parameter into a plurality of fragment integers, wherein the number of bits of each fragment integer is less than the number of bits of the integer parameter;
and a homomorphic operation execution module, configured to obtain the plurality of fragment integers through a graphics processing unit (GPU), allocate corresponding homomorphic multiplications and/or homomorphic additions to a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers, and execute them in parallel through a plurality of corresponding GPU threads to complete the homomorphic operation.
12. The apparatus of claim 11, wherein the homomorphic operation execution module generates, in the GPU, a batch computing task set according to the plurality of fragment integers and the steps of the homomorphic operation, a computing task in the batch computing task set comprising a homomorphic multiplication and/or homomorphic addition in which at least two of the fragment integers participate;
and, after splitting the batch computing task set, distributes it to the plurality of arithmetic logic units of the GPU.
13. The apparatus of claim 11, further comprising:
a model training module, configured to train a machine learning model corresponding to the federated learning according to a result of the homomorphic operation after the homomorphic operation execution module completes the homomorphic operation.
14. The apparatus of claim 11, wherein the homomorphic operation determining module determines the homomorphic operation to be performed in the federated learning according to the business data or the type of the federated learning.
15. The apparatus of claim 14, wherein the homomorphic operation determining module determines that the homomorphic operations to be performed in the federated learning include homomorphic multiplication if the federated learning is vertical federated learning; and/or,
determines that the homomorphic operations to be performed in the federated learning include homomorphic addition if the federated learning is horizontal federated learning.
16. The apparatus of claim 11, wherein the integer parameter conversion module specifically comprises:
a digit set dividing module, configured to divide the digits of the integer parameter into a plurality of digit sets;
and a fragment integer generating module, configured to convert the integer parameter, according to the plurality of digit sets, into a plurality of fragment integers from which the integer parameter can be restored.
17. The apparatus of claim 16, wherein the digit set dividing module determines an ordering of the digits of the integer parameter and a division point in the ordering,
and divides out at least one non-empty digit set before the division point and at least one non-empty digit set after the division point.
18. The apparatus of claim 16, wherein the digits in each digit set are consecutive digits of the integer parameter, the digits being binary digits;
the fragment integer generating module determines, for a digit set among the plurality of digit sets, the total value that all digits in the digit set represent within the integer parameter;
if the digit set does not contain the lowest digit of the integer parameter, converts the total value into two fragment integers; otherwise, takes the total value as one fragment integer;
wherein one of the two fragment integers is obtained by concatenating, in order, the digits of the integer parameter at all the digit positions of the set, and the other is 2 raised to the power n, where n is the number of digits of the integer parameter lower than all the digit positions of the set.
19. The apparatus of claim 18, wherein after the fragment integer generating module converts the total value into two fragment integers, or takes the total value as one fragment integer, the integer parameter conversion module further:
converts the obtained fragment integers, through an iterative process, into fragment integers with fewer bits.
20. The apparatus of any one of claims 11 to 19, further comprising:
a homomorphic encryption module, configured to protect the business data through homomorphic encryption before the homomorphic operation execution module completes the homomorphic operation.
21. A business data processing device in federated learning, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
determine a homomorphic operation to be performed in federated learning;
determine integer parameters to be used by the homomorphic operation according to business data provided by participants of the federated learning;
convert the integer parameter into a plurality of fragment integers from which the integer parameter can be restored, the number of bits of each fragment integer being less than the number of bits of the integer parameter;
and obtain the plurality of fragment integers through a graphics processing unit (GPU), allocate corresponding homomorphic multiplications and/or homomorphic additions to a plurality of arithmetic logic units of the GPU according to the plurality of fragment integers, and execute them in parallel through a plurality of corresponding GPU threads to complete the homomorphic operation.
CN202011173171.6A 2020-10-28 2020-10-28 Business data processing method, device and equipment in federated learning Active CN112200713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011173171.6A CN112200713B (en) Business data processing method, device and equipment in federated learning


Publications (2)

Publication Number Publication Date
CN112200713A true CN112200713A (en) 2021-01-08
CN112200713B CN112200713B (en) 2023-04-21

Family

ID=74011973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011173171.6A Active CN112200713B (en) Business data processing method, device and equipment in federated learning

Country Status (1)

Country Link
CN (1) CN112200713B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140368513A1 (en) * 2013-06-18 2014-12-18 Advanced Micro Devices, Inc. Software Only Intra-Compute Unit Redundant Multithreading for GPUs
US20200125739A1 (en) * 2018-10-19 2020-04-23 International Business Machines Corporation Distributed learning preserving model security
CN110955907A (en) * 2019-12-13 2020-04-03 支付宝(杭州)信息技术有限公司 Model training method based on federal learning
CN111563267A (en) * 2020-05-08 2020-08-21 京东数字科技控股有限公司 Method and device for processing federal characteristic engineering data
CN111371544A (en) * 2020-05-27 2020-07-03 支付宝(杭州)信息技术有限公司 Prediction method and device based on homomorphic encryption, electronic equipment and storage medium
CN111723948A (en) * 2020-06-19 2020-09-29 深圳前海微众银行股份有限公司 Federal learning method, device, equipment and medium based on evolution calculation
CN111813526A (en) * 2020-07-10 2020-10-23 深圳致星科技有限公司 Heterogeneous processing system, processor and task processing method for federal learning
CN111831330A (en) * 2020-07-10 2020-10-27 深圳致星科技有限公司 Heterogeneous computing system device interaction scheme for federated learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TANG Tianze et al., "GPU-Accelerated Implementation of Large-Number Multiplication", Application Research of Computers *
JIANG Baoshang, "Clustar Chief Scientist HU Shuihai: Exploring GPUs in Federated Machine Learning", https://t.cj.sina.com.cn/articles/view/6552764637/1869340dd01900u0w5?from=tech *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011632A (en) * 2021-01-29 2021-06-22 招商银行股份有限公司 Enterprise risk assessment method, device, equipment and computer readable storage medium
CN113011632B (en) * 2021-01-29 2023-04-07 招商银行股份有限公司 Enterprise risk assessment method, device, equipment and computer readable storage medium
CN113259363A (en) * 2021-05-26 2021-08-13 中国人民解放军战略支援部队信息工程大学 Covert communication method and device
CN113259363B (en) * 2021-05-26 2022-09-02 中国人民解放军战略支援部队信息工程大学 Covert communication method and device
CN113537508A (en) * 2021-06-18 2021-10-22 百度在线网络技术(北京)有限公司 Federal calculation processing method and device, electronic equipment and storage medium
WO2022262183A1 (en) * 2021-06-18 2022-12-22 百度在线网络技术(北京)有限公司 Federated computing processing method and apparatus, electronic device, and storage medium
CN113537508B (en) * 2021-06-18 2024-02-02 百度在线网络技术(北京)有限公司 Processing method and device for federal calculation, electronic equipment and storage medium
CN113541921A (en) * 2021-06-24 2021-10-22 电子科技大学 Fully homomorphic encryption GPU high-performance implementation method
CN113541921B (en) * 2021-06-24 2022-06-10 电子科技大学 Method for realizing fully homomorphic encryption by using GPU
CN113407979A (en) * 2021-08-16 2021-09-17 深圳致星科技有限公司 Heterogeneous acceleration method, device and system for longitudinal federated logistic regression learning
CN113407979B (en) * 2021-08-16 2021-11-26 深圳致星科技有限公司 Heterogeneous acceleration method, device and system for longitudinal federated logistic regression learning

Also Published As

Publication number Publication date
CN112200713B (en) 2023-04-21

Similar Documents

Publication Publication Date Title
CN112199707B (en) Data processing method, device and equipment in homomorphic encryption
CN112200713A (en) Business data processing method, device and equipment in federated learning
US11159305B2 (en) Homomorphic data decryption method and apparatus for implementing privacy protection
US9900147B2 (en) Homomorphic encryption with optimized homomorphic operations
US10075289B2 (en) Homomorphic encryption with optimized parameter selection
CN112162723B (en) Quantum subtraction operation method, device, electronic device and storage medium
US20170134157A1 (en) Homomorphic Encryption with Optimized Encoding
CN109756442B (en) Data statistics method, device and equipment based on garbled circuit
CN115622684B (en) Privacy computation heterogeneous acceleration method and device based on fully homomorphic encryption
US11164484B2 (en) Secure computation system, secure computation device, secure computation method, and program
RU2701716C2 (en) Electronic computer for performing arithmetic with obfuscation
RU2698764C2 (en) Electronic computing device for performing concealed arithmetic operations
EP3791331A1 (en) Efficient data encoding for deep neural network training
CN115344236B (en) Polynomial multiplication method, polynomial multiplier, device and medium
CN114095149A (en) Information encryption method, device, equipment and storage medium
CN112434317A (en) Data processing method, device, equipment and storage medium
CN115952526B (en) Ciphertext ordering method, equipment and storage medium
CN117155572A (en) Method for realizing large integer multiplication in cryptographic technology based on GPU (graphics processing Unit) parallel
WO2023000577A1 (en) Data compression method and apparatus, electronic device, and storage medium
CN115276952A (en) Private data processing method and device
CN108075889B (en) Data transmission method and system for reducing complexity of encryption and decryption operation time
CN116737390B (en) Atomic operation processing method and device, electronic equipment and storage medium
CN116738494B (en) Model training method and device for multiparty security calculation based on secret sharing
US9336579B2 (en) System and method of performing multi-level integration
CN113067694B (en) Method, device and equipment for comparing safety of two parties in communication optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (country code: HK; legal event code: DE; document number: 40044438)

GR01 Patent grant