CN113946846B - Ciphertext computing device and method for federal learning and privacy computing - Google Patents


Info

Publication number
CN113946846B
CN113946846B (application CN202111196023.0A)
Authority
CN
China
Prior art keywords
ciphertext
plaintext
modular exponentiation
calculation
layer
Prior art date
Legal status
Active
Application number
CN202111196023.0A
Other languages
Chinese (zh)
Other versions
CN113946846A (en)
Inventor
戴蒙
王玮
陈沫
Current Assignee
Shenzhen Zhixing Technology Co Ltd
Original Assignee
Shenzhen Zhixing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhixing Technology Co Ltd filed Critical Shenzhen Zhixing Technology Co Ltd
Priority to CN202111196023.0A
Publication of CN113946846A
Application granted
Publication of CN113946846B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/602 Providing cryptographic facilities or services
    • G06F 7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F 7/60 Methods or arrangements for performing computations using a digital non-denominational number representation, i.e. number representation without radix
    • G06F 7/72 Such computations using residue arithmetic
    • G06F 7/728 Such computations using residue arithmetic, using Montgomery reduction

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Storage Device Security (AREA)

Abstract

The application relates to a ciphertext computing device and method for federated learning and privacy computing. The device comprises: a Montgomery modular exponentiation module, whose first-layer data distribution module sends one of a plurality of ciphertext plaintext pairs at a time and selects, in a polling manner, one of the parallel modular exponentiation calculation engines of the first-layer modular exponentiation calculation module to receive it; an accumulation modular multiplication module, which performs accumulated modular multiplications to obtain the accumulated modular multiplication result of the plurality of ciphertext plaintext pairs; and a de-Montgomery operation module. This improves computational efficiency.

Description

Ciphertext computing device and method for federal learning and privacy computing
Technical Field
The present application relates to the technical field of data security and privacy protection, and in particular to the fields of privacy computing, private data, and federated learning; it specifically concerns a ciphertext computing device and method for federated learning and privacy computing.
Background
With the development of application fields such as artificial intelligence and big data mining and analysis, the demand for data keeps growing. For example, training artificial intelligence models requires large amounts of training data with appropriate labels or feature values. High-quality data often comes from application data generated and accumulated in business activities. However, application data is usually distributed among different organizations and individuals; for example, transaction data is distributed among financial institutions and medical diagnosis data among medical institutions. Application data is also dispersed across industries and domains; for example, social attribute data and e-commerce transaction data in the internet domain are controlled by different entities. As data ownership, user privacy, and data security receive ever more emphasis, and as laws and regulations impose stricter constraints and requirements on data collection and processing, the organizations or individuals who hold application data are often unwilling, or lack appropriate means, to cooperate with each other, so the data held by each party can hardly work together. This dilemma in data sharing and collaboration is referred to as the data island problem. To solve the problem of cross-industry and cross-organization data cooperation, and in particular the key problems of privacy protection and data security, the concept of Federated Learning (FL) was proposed. In federated learning, each participant who owns data exchanges model-related information in encrypted form, on the premise that protected private data is not shared and its own data is not transmitted externally, thereby achieving collaborative optimization of the federated learning model.
According to the distribution of the data feature space and the sample space of the training data, federated learning can be divided into horizontal federated learning, where the data feature spaces overlap heavily and the sample spaces overlap little; vertical federated learning, where the data feature spaces overlap little and the sample spaces overlap heavily; and federated transfer learning, where both the data feature spaces and the sample spaces overlap little.
In application scenarios related to federated learning and privacy computing, each participant holding data generally encrypts its original data, also called plaintext, using Homomorphic Encryption (HE), and then uses the encrypted data, also called ciphertext, in joint network model training, gradient computation, model parameter training, and the like. An HE algorithm satisfies the homomorphic property of ciphertexts: after a plaintext is homomorphically encrypted into a ciphertext and a specific computation is performed on the ciphertext, homomorphically decrypting the result yields the same value as performing the corresponding computation directly on the plaintext. An HE algorithm that supports arbitrary computations on ciphertexts is called Fully Homomorphic Encryption (FHE). HE algorithms that support only addition, only multiplication, or limited numbers of additions and multiplications are called Somewhat Homomorphic Encryption (SWHE) or Partially Homomorphic Encryption (PHE). Common PHE algorithms include the RSA, ElGamal, and Paillier algorithms. In general, any computation performed on ciphertexts can be constructed from additions and multiplications, i.e., expanded into the two basic forms of ciphertext multiplication and ciphertext addition, plus more complex forms composed of these two. However, ciphertext computation, particularly ciphertext computation based on HE algorithms, involves large amounts of high-dimensional data; the moduli involved often have large bit widths, such as 2048 bits, so huge numbers of modular exponentiations and modular multiplications on big integers are required, and the data volume involved can reach hundreds of millions of operations.
These requirements pose significant challenges to storage resources and computing performance. For this reason, a ciphertext computing apparatus and method for federated learning and privacy computing is needed that can cope with complex and variable situations and efficiently handle ciphertext computation demands.
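To make the homomorphic properties above concrete, the following is a minimal textbook Paillier sketch in Python, using toy primes rather than the 2048-bit moduli mentioned above. It shows that multiplying ciphertexts modulo n² adds the underlying plaintexts, and exponentiating a ciphertext by a plaintext scalar multiplies the underlying plaintext by that scalar:

```python
import math
import random

def paillier_keygen(p, q):
    # Textbook Paillier with g = n + 1; p and q are toy primes for illustration only.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    n2 = n * n
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(n + 1, lam, n2) - 1) // n, -1, n)
    return n, (lam, mu)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:          # the blinding factor r must be a unit mod n
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(n, sk, c):
    lam, mu = sk
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

n, sk = paillier_keygen(101, 103)       # n = 10403
n2 = n * n
c1, c2 = encrypt(n, 111), encrypt(n, 222)
# Ciphertext multiplication realizes plaintext addition:
assert decrypt(n, sk, c1 * c2 % n2) == 333
# Ciphertext exponentiation realizes plaintext scalar multiplication:
assert decrypt(n, sk, pow(c1, 5, n2)) == 555
```

Because any combination of these two primitives reduces to accumulated products of the form C1^K1 · C2^K2 · … mod n², accelerating ciphertext computation amounts to accelerating large-integer modular exponentiation and modular multiplication.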
Disclosure of Invention
In a first aspect, an embodiment of the present application provides a ciphertext computing apparatus, applied to federated learning and privacy computing. The ciphertext computing apparatus comprises: a Montgomery modular exponentiation module comprising a first-layer data distribution module and a first-layer modular exponentiation calculation module connected to it, wherein the first-layer modular exponentiation calculation module comprises a plurality of parallel modular exponentiation calculation engines, the first-layer data distribution module is configured to send one ciphertext plaintext pair of a plurality of ciphertext plaintext pairs at a time and to select, in a polling manner, one of the plurality of modular exponentiation calculation engines to receive the sent pair, the selected modular exponentiation calculation engine performs a Montgomery transformation on the ciphertext in the sent pair to obtain a Montgomerized ciphertext and performs a Montgomery modular exponentiation on the Montgomerized ciphertext with the plaintext in the sent pair to obtain the Montgomery modular exponentiation result of the selected engine, and the Montgomery modular exponentiation results generated by the plurality of modular exponentiation calculation engines are merged to obtain the Montgomery modular exponentiation results of the plurality of ciphertext plaintext pairs; an accumulation modular multiplication module comprising a second-layer data distribution module and a second-layer modular multiplication calculation module, wherein the second-layer modular multiplication calculation module comprises a plurality of parallel modular multiplication calculation engines, the second-layer data distribution module is configured to select one of the plurality of modular multiplication calculation engines to receive the Montgomery modular exponentiation results of the plurality of ciphertext plaintext pairs, and the selected modular multiplication calculation engine performs an accumulated modular multiplication on the Montgomery modular exponentiation results of the plurality of ciphertext plaintext pairs to obtain the accumulated modular multiplication result of the plurality of ciphertext plaintext pairs; and a de-Montgomery operation module configured to perform a de-Montgomery operation on the accumulated modular multiplication result of the plurality of ciphertext plaintext pairs to obtain the ciphertext calculation result of the plurality of ciphertext plaintext pairs.
The technical solution described in the first aspect can efficiently process the massive data volumes involved in the computation, cope with the complex and variable situations arising in application scenarios related to federated learning and privacy computing, and efficiently handle ciphertext computation demands.
According to a possible implementation manner of the technical solution of the first aspect, the embodiments of the present application further provide that the plurality of ciphertext plaintext pairs are determined according to a ciphertext data group and a plaintext data group to be subjected to ciphertext multiplication.
According to a possible implementation manner of the technical solution of the first aspect, the embodiment of the present application further provides that the plurality of ciphertext plaintext pairs are determined according to a row vector of a ciphertext matrix and a column vector of a plaintext matrix to be subjected to ciphertext matrix multiplication, or the plurality of ciphertext plaintext pairs are determined according to a column vector of the ciphertext matrix and a row vector of the plaintext matrix.
According to a possible implementation manner of the technical solution of the first aspect, the embodiments of the present application further provide that the number of the plurality of modular exponentiation calculation engines and the number of the plurality of modular multiplication calculation engines are adjustable.
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that the ciphertext computing apparatus further includes: a parameter calculation and distribution module configured to calculate parameters, distribute the parameters for modular exponentiation calculation to the first-layer modular exponentiation calculation module through the first-layer data distribution module, and distribute the parameters for modular multiplication calculation to the second-layer modular multiplication calculation module through the second-layer data distribution module.
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that the ciphertext computing apparatus further includes: the task management module is configured to generate configuration information corresponding to the plurality of ciphertext plaintext pairs, where the configuration information includes a data header identifier and a data trailer identifier, the data header identifier indicates a ciphertext plaintext pair that is sent first among the plurality of ciphertext plaintext pairs, and the data trailer identifier indicates a ciphertext plaintext pair that is sent last among the plurality of ciphertext plaintext pairs.
According to a possible implementation manner of the technical solution of the first aspect, the embodiment of the present application further provides that the configuration information further includes a data bit width and a ciphertext computation mode identifier.
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that the first-layer data distribution module generates a first-layer batch completion signal according to the configuration information and sends it to each of the plurality of modular exponentiation calculation engines after sending the plurality of ciphertext plaintext pairs, and the second-layer data distribution module generates a second-layer batch completion signal according to the configuration information and sends it to the selected modular multiplication calculation engine after sending the Montgomery modular exponentiation results of the plurality of ciphertext plaintext pairs.
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that each of the plurality of modular exponentiation calculation engines determines, according to the first-layer batch completion signal, whether it is ready to receive new parameters for modular exponentiation calculation, and each of the plurality of modular multiplication calculation engines determines, according to the second-layer batch completion signal, whether it is ready to receive new parameters for modular multiplication calculation.
According to a possible implementation manner of the technical solution of the first aspect, the embodiment of the present application further provides that the configuration information is added to the data stream corresponding to the plurality of ciphertext plaintext pairs so as to be sent to the first layer data distribution module together with the plurality of ciphertext plaintext pairs, or the configuration information is sent to the first layer data distribution module through a configuration information channel, where the configuration information channel is different from a data channel through which the plurality of ciphertext plaintext pairs are sent to the first layer data distribution module.
According to a possible implementation manner of the technical solution of the first aspect, an embodiment of the present application further provides that the second-layer data distribution module being configured to select one of the plurality of modular multiplication calculation engines to receive the Montgomery modular exponentiation results of the plurality of ciphertext plaintext pairs includes: the second-layer data distribution module is configured to select one of the plurality of modular multiplication calculation engines in a round-robin manner to receive the Montgomery modular exponentiation results of the plurality of ciphertext plaintext pairs.
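The configuration information described in the implementations above (data header and trailer identifiers, data bit width, computation mode identifier) can be modeled as a small record attached in-band to the stream of ciphertext plaintext pairs. The field names and the Python representation here are illustrative assumptions, not the patent's actual encoding:

```python
from dataclasses import dataclass

@dataclass
class PairConfig:
    head: bool       # identifies the first ciphertext plaintext pair of the batch
    tail: bool       # identifies the last pair; triggers the batch completion signal
    bit_width: int   # data bit width, e.g. 2048
    mode: int        # ciphertext computation mode identifier (hypothetical encoding)

def tag_batch(pairs, bit_width=2048, mode=0):
    # Attach head/tail configuration to an in-band stream of (C, K) pairs.
    last = len(pairs) - 1
    return [(c, k, PairConfig(i == 0, i == last, bit_width, mode))
            for i, (c, k) in enumerate(pairs)]

tagged = tag_batch([(17, 3), (23, 5), (29, 7)])
assert tagged[0][2].head and not tagged[0][2].tail
assert tagged[-1][2].tail and not tagged[-1][2].head
```

Sending the configuration through a separate channel, the patent's alternative, would simply move the `PairConfig` record out of the tuple into its own stream.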
In a second aspect, an embodiment of the present application provides a ciphertext computing system. The ciphertext computing system comprises: a ciphertext adder configured to perform ciphertext addition; the ciphertext computing apparatus of any implementation of the first aspect; and a main processor configured to split a ciphertext calculation formula into a combination of ciphertext addition formulas and ciphertext multiplication formulas, invoke the ciphertext adder to complete the ciphertext addition formulas, and invoke the ciphertext computing apparatus to complete the ciphertext multiplication formulas.
The technical solution described in the second aspect can efficiently process the massive data volumes involved in the computation, cope with the complex and variable situations arising in application scenarios related to federated learning and privacy computing, and efficiently handle ciphertext computation demands.
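The main processor's split described in the second aspect can be sketched for the Paillier case, where ciphertext addition is a modular multiplication of ciphertexts and ciphertext multiplication by a plaintext is a modular exponentiation. The function names are illustrative and the `n2` modulus is a toy value:

```python
def dense_add(a, b, n2):
    # Ciphertext addition (the ciphertext adder's job): multiply ciphertexts mod n^2.
    return a * b % n2

def dense_mul(c, k, n2):
    # Ciphertext multiplication by plaintext k (the ciphertext computing
    # apparatus's job): modular exponentiation of the ciphertext.
    return pow(c, k, n2)

def dense_dot(ciphers, plains, n2):
    # Main-processor-style split of a ciphertext dot product into per-pair
    # ciphertext multiplications followed by ciphertext additions.
    acc = 1  # multiplicative identity mod n^2
    for c, k in zip(ciphers, plains):
        acc = dense_add(acc, dense_mul(c, k, n2), n2)
    return acc

n2 = 10403 ** 2
result = dense_dot([123456, 234567, 345678], [3, 5, 7], n2)
assert result == pow(123456, 3, n2) * pow(234567, 5, n2) * pow(345678, 7, n2) % n2
```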
In a third aspect, an embodiment of the present application provides a ciphertext calculation method, applied to federated learning and privacy computing. The method comprises: obtaining a plurality of ciphertext plaintext pairs to be subjected to ciphertext multiplication; sending, through a first-layer data distribution module, one ciphertext plaintext pair of the plurality of ciphertext plaintext pairs at a time, and selecting, in a polling manner, one modular exponentiation calculation engine among the plurality of parallel modular exponentiation calculation engines included in a first-layer modular exponentiation calculation module to receive the sent pair, wherein the first-layer modular exponentiation calculation module is connected with the first-layer data distribution module; performing, by the selected modular exponentiation calculation engine, a Montgomery transformation on the ciphertext in the sent pair to obtain a Montgomerized ciphertext, and performing a Montgomery modular exponentiation on the Montgomerized ciphertext with the plaintext in the sent pair to obtain the Montgomery modular exponentiation result of the selected engine; merging the Montgomery modular exponentiation results generated by the plurality of modular exponentiation calculation engines to obtain the Montgomery modular exponentiation results of the plurality of ciphertext plaintext pairs; selecting, through a second-layer data distribution module, one modular multiplication calculation engine among the plurality of parallel modular multiplication calculation engines included in a second-layer modular multiplication calculation module to receive the Montgomery modular exponentiation results of the plurality of ciphertext plaintext pairs, wherein the second-layer modular multiplication calculation module is connected with the second-layer data distribution module; performing, by the selected modular multiplication calculation engine, an accumulated modular multiplication on the Montgomery modular exponentiation results of the plurality of ciphertext plaintext pairs to obtain the accumulated modular multiplication result of the plurality of ciphertext plaintext pairs; and performing, by a de-Montgomery operation module, a de-Montgomery operation on the accumulated modular multiplication result to obtain the ciphertext calculation result of the plurality of ciphertext plaintext pairs.
The technical solution described in the third aspect can efficiently process the massive data volumes involved in the computation, cope with the complex and variable situations arising in application scenarios related to federated learning and privacy computing, and efficiently handle ciphertext computation demands.
According to a possible implementation manner of the technical solution of the third aspect, the embodiment of the present application further provides that the number of the plurality of modular exponentiation calculation engines and the number of the plurality of modular multiplication calculation engines are adjustable.
According to a possible implementation manner of the technical solution of the third aspect, an embodiment of the present application further provides that the method further includes: distributing the parameters for modular exponentiation calculation to the first-layer modular exponentiation calculation module through the first-layer data distribution module, and distributing the parameters for modular multiplication calculation to the second-layer modular multiplication calculation module through the second-layer data distribution module.
According to a possible implementation manner of the technical solution of the third aspect, an embodiment of the present application further provides that the method further includes: generating, by a task management module, configuration information corresponding to the plurality of ciphertext plaintext pairs, wherein the configuration information includes a data header identifier indicating a ciphertext plaintext pair that is a first sent of the plurality of ciphertext plaintext pairs and a data trailer identifier indicating a ciphertext plaintext pair that is a last sent of the plurality of ciphertext plaintext pairs.
According to a possible implementation manner of the technical solution of the third aspect, the embodiment of the present application further provides that the configuration information further includes a data bit width and a ciphertext computation mode identifier.
According to a possible implementation manner of the technical solution of the third aspect, an embodiment of the present application further provides that the first-layer data distribution module generates a first-layer batch completion signal according to the configuration information and sends it to each of the plurality of modular exponentiation calculation engines after sending the plurality of ciphertext plaintext pairs, and the second-layer data distribution module generates a second-layer batch completion signal according to the configuration information and sends it to the selected modular multiplication calculation engine after sending the Montgomery modular exponentiation results of the plurality of ciphertext plaintext pairs.
According to a possible implementation manner of the technical solution of the third aspect, an embodiment of the present application further provides that each of the plurality of modular exponentiation calculation engines determines, according to the first-layer batch completion signal, whether it is ready to receive new parameters for modular exponentiation calculation, and each of the plurality of modular multiplication calculation engines determines, according to the second-layer batch completion signal, whether it is ready to receive new parameters for modular multiplication calculation.
According to a possible implementation manner of the technical solution of the third aspect, the embodiment of the present application further provides that the configuration information is added to the data stream corresponding to the plurality of ciphertext-plaintext pairs so as to be sent to the first layer data distribution module together with the plurality of ciphertext-plaintext pairs, or the configuration information is sent to the first layer data distribution module through a configuration information channel, where the configuration information channel is different from a data channel through which the plurality of ciphertext-plaintext pairs are sent to the first layer data distribution module.
Drawings
To explain the technical solutions in the embodiments or the background of the present application, the drawings needed in their description are introduced below.
Fig. 1 shows a block diagram of a ciphertext computing apparatus according to an embodiment of the present application.
Fig. 2 shows a block diagram of a ciphertext computing system including the ciphertext computing apparatus shown in fig. 1 according to an embodiment of the present application.
Fig. 3 shows a flowchart of the ciphertext calculation method according to the embodiment of the present application.
Detailed Description
To solve the technical problem of coping with complex and variable situations and efficiently handling ciphertext computation demands in application scenarios related to federated learning and privacy computing, the embodiments of the present application provide a ciphertext computing device and method for federated learning and privacy computing. They can efficiently process the massive data volumes involved in the computation, cope with the complex and variable situations in such scenarios, and efficiently handle ciphertext computation demands.
The embodiments of the present application can be applied to application scenarios including, but not limited to, federated learning, privacy computing, ciphertext computation based on homomorphic encryption algorithms, and other application scenarios involving large numbers of big-integer modular multiplication operations.
The embodiments of the present application may be modified and improved according to specific application environments, and are not limited herein.
To help those skilled in the art better understand the present application, the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 shows a block diagram of a ciphertext computing apparatus according to an embodiment of the present application. As shown in fig. 1, the ciphertext computing apparatus 100 includes a Montgomery modular exponentiation module 110, an accumulation modular multiplication module 120, and a de-Montgomery operation module 130; the ciphertext computing apparatus 100 further includes a parameter calculation and distribution module 102 and a task management module 104. The ciphertext computing apparatus 100 is configured to perform ciphertext calculation on input data and output the calculation result. The input data can be understood as a plurality of ciphertexts, i.e. encrypted data records (C1, C2, up to Cn), and a plurality of plaintexts (K1, K2, up to Kn), where n is the total number of ciphertexts or plaintexts involved in the ciphertext calculation. The ciphertext calculation on the input data is a ciphertext multiplication: each ciphertext of the input data is combined with its paired plaintext, the results are accumulated, and finally the modulus is taken. The ciphertext multiplication can be realized with modular multiplication in the Montgomery domain, so a modular exponentiation is first performed on each ciphertext with its plaintext, and an accumulated modular multiplication is then performed on the modular exponentiation results.
Specifically, the input data can be understood as a plurality of ciphertext plaintext pairs, each comprising a paired ciphertext C and plaintext K. The ciphertext C undergoes a Montgomery transformation to obtain MC, and a modular exponentiation is then performed on MC with the plaintext K to obtain the modular exponentiation result V; an accumulated modular multiplication is performed over the modular exponentiation results V of the plurality of ciphertext plaintext pairs; and finally a de-Montgomery operation is performed to obtain the calculation result. It should be understood that a ciphertext calculation formula under homomorphic encryption has two basic forms, a ciphertext multiplication formula and a ciphertext addition formula, and any ciphertext calculation formula can be expressed as a combination or composition of these two basic forms. The input data may be a one-dimensional vector, or may include one-dimensional vectors and/or two-dimensional matrices; for example, ciphertext multiplication is performed on input data that are both one-dimensional vectors or both two-dimensional matrices. A ciphertext matrix multiplication can be converted into ciphertext data groups and plaintext data groups to be subjected to ciphertext multiplication by extracting the corresponding row or column vectors of the matrices. For example, the plurality of ciphertext plaintext pairs is determined based on a ciphertext data group and a plaintext data group to be subjected to ciphertext multiplication.
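The per-pair flow just described (Montgomerize C into MC, modular exponentiation of MC by K, accumulated modular multiplication of the results V, and a final de-Montgomery step) can be sketched in Python with the classical REDC reduction. The toy modulus and the sequential loop are illustrative; the apparatus performs the same steps with parallel engines on e.g. 2048-bit data:

```python
def mont_setup(N, k):
    # N must be odd and R = 2^k > N, so that gcd(R, N) = 1.
    R = 1 << k
    R_inv = pow(R, -1, N)
    N_prime = (R * R_inv - 1) // N     # satisfies N * N_prime = -1 (mod R)
    return R, N_prime

def redc(T, N, k, R, N_prime):
    # Montgomery reduction: T * R^-1 mod N using shifts and masks, no division by N.
    m = ((T & (R - 1)) * N_prime) & (R - 1)
    t = (T + m * N) >> k
    return t - N if t >= N else t

def mont_pow(mc, e, N, k, R, N_prime):
    # Square-and-multiply on a Montgomerized base; the result stays in the
    # Montgomery domain, ready for the accumulated modular multiplication.
    result = R % N                      # Montgomery form of 1
    while e:
        if e & 1:
            result = redc(result * mc, N, k, R, N_prime)
        mc = redc(mc * mc, N, k, R, N_prime)
        e >>= 1
    return result

def ciphertext_dot(pairs, N, k):
    R, N_prime = mont_setup(N, k)
    acc = R % N                         # Montgomery form of 1
    for C, K in pairs:
        MC = (C << k) % N               # Montgomerize the ciphertext: MC = C * R mod N
        V = mont_pow(MC, K, N, k, R, N_prime)   # first layer: modular exponentiation
        acc = redc(acc * V, N, k, R, N_prime)   # second layer: accumulated modular mult.
    return redc(acc, N, k, R, N_prime)  # final step: de-Montgomerize

N = 10403                               # toy odd modulus; real moduli are e.g. 2048-bit
pairs = [(17, 3), (23, 5), (29, 7)]
expected = 1
for C, K in pairs:
    expected = expected * pow(C, K, N) % N
assert ciphertext_dot(pairs, N, k=16) == expected
```

The point of the Montgomery domain is that `redc` replaces the expensive division by N inside every modular multiplication with shifts and masks, which is what makes the hardware engines efficient.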
For another example, the plurality of ciphertext plaintext pairs are determined according to a row vector of a ciphertext matrix and a column vector of a plaintext matrix to be subjected to ciphertext matrix multiplication, or the plurality of ciphertext plaintext pairs are determined according to a column vector of the ciphertext matrix and a row vector of the plaintext matrix. The construction and operation of the ciphertext computing apparatus 100 will be further described with reference to fig. 1.
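The conversion of a ciphertext matrix multiplication into groups of ciphertext plaintext pairs, one group per element of the output matrix, can be sketched as follows; the function name and list-of-lists representation are illustrative, not from the patent:

```python
def matmul_to_pair_groups(cipher_matrix, plain_matrix):
    # Output element (i, j) of the ciphertext matrix product is computed from one
    # group of pairs: row i of the ciphertext matrix paired element-wise with
    # column j of the plaintext matrix. Each group can then be fed to the
    # apparatus as one batch of ciphertext plaintext pairs.
    inner = len(plain_matrix)
    groups = {}
    for i in range(len(cipher_matrix)):
        for j in range(len(plain_matrix[0])):
            groups[(i, j)] = [(cipher_matrix[i][t], plain_matrix[t][j])
                              for t in range(inner)]
    return groups

groups = matmul_to_pair_groups([[11, 12], [21, 22]], [[1, 2], [3, 4]])
assert groups[(0, 1)] == [(11, 2), (12, 4)]
```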
Referring to fig. 1, the Montgomery modular exponentiation module 110 includes a first-layer data distribution module 112 and a first-layer modular exponentiation calculation module 114 connected to it. The first-layer modular exponentiation calculation module 114 includes a plurality of parallel modular exponentiation calculation engines (not shown). The first-layer data distribution module 112 is configured to send one ciphertext-plaintext pair of the plurality of pairs at a time and to select, in a polling (round-robin) manner, one of the plurality of modular exponentiation calculation engines to receive it. The selected engine Montgomerizes the ciphertext in the received pair to obtain a Montgomery-form ciphertext, and then performs a Montgomery modular exponentiation of that Montgomery-form ciphertext with the plaintext in the pair as the exponent, producing the selected engine's Montgomery modular exponentiation result. The Montgomery modular exponentiation results generated by the modular exponentiation calculation engines are integrated by the first-layer data merging module 116 to obtain the Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs.
Here, sending one ciphertext-plaintext pair at a time and selecting an engine in a polling manner means the following: the first-layer data distribution module 112 queries the modular exponentiation calculation engines one by one, in a fixed order, as to whether the next engine is suitable for receiving the pair; once a suitable engine is found, the pair is sent to it and the current query ends, and a new round of querying begins when the next pair is to be sent. An engine is suitable for receiving the pair when it has completed its previous calculation task and can start a new one.
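The polling behaviour just described can be sketched as follows; this is a software model with assumed names, where a busy flag stands in for the hardware criterion of having finished the previous task:

```python
class RoundRobinDispatcher:
    """Queries engines in a fixed order and sends to the first ready one."""

    def __init__(self, engines):
        self.engines = engines   # objects exposing .busy and .submit(pair)
        self.next_idx = 0

    def dispatch(self, pair):
        n = len(self.engines)
        for step in range(n):                  # query in a fixed order
            idx = (self.next_idx + step) % n
            if not self.engines[idx].busy:     # engine ready for a new task
                self.engines[idx].submit(pair)
                self.next_idx = (idx + 1) % n  # next round starts after it
                return idx
        return None                            # all engines busy
```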
With continued reference to fig. 1, in one possible implementation, the first-layer data distribution module 112 may itself contain one or more stages of data distribution modules (not shown). These stages distribute the input data downward, stage by stage, in a polling manner: each stage sends the input data to the next stage by polling, and each stage may include one or more data distribution units. Sending data to the next stage in a polling manner means querying, in a fixed order, whether each data distribution unit of the next stage is suitable for receiving the input data; if so, the data is sent and the query ends, and a new round of querying takes place the next time input data is to be sent. The last stage among the one or more stages contained in the first-layer data distribution module 112 is configured to send the input data to the first-layer modular exponentiation calculation module 114; specifically, each data distribution unit of the last stage is connected to one or more of the plurality of modular exponentiation calculation engines.
In this way, by configuring the first-layer data distribution module 112 with multiple stages of data distribution modules, where each stage contains several data distribution units and each unit is connected to more than one (two or more) units in the next stage, querying one data distribution unit is equivalent to querying all the next-stage units connected to it. Continuing stage by stage in this polling manner, until the units of the last stage query the modular exponentiation calculation engines connected to them, a single query can determine whether at least one of the plurality of modular exponentiation calculation engines is suitable for receiving the input data. Moreover, because the data distribution units of each stage are queried in a fixed order, that is, the polling pattern is fixed or preset, the scheme simplifies control design and hardware implementation and helps ensure that idle computing resources are utilized to the maximum extent. The number of data distribution units in each stage and the connections between units of different stages are adjustable, for example by hardware programming or reconfiguration of an FPGA.
In other exemplary embodiments, any number of data distribution units may be deployed, but as a whole, from the first stage to the last stage, the number of data distribution units per stage should increase, or at least not decrease, which makes it possible to query many modular exponentiation calculation engines with a single poll. For example, the first stage may have two data distribution units, the second stage 75, and the third stage 150 units connected to 300 modular exponentiation calculation engines. The two units of the first stage may be connected to 35 and 40 second-stage units respectively, or to 25 and 50 respectively; the number of connections from units of one stage to units of the next may be uniform or non-uniform. Specifically, the number of data distribution units in a given stage, and the number of connections of each unit in that stage (a unit's connection count being the number of next-stage units or modular exponentiation calculation engines connected to it), are adjustable and variable, and may be tuned according to actual needs and application scenarios, available computing resources, the input data to be calculated, or the calculation requirements.
By flexibly adjusting the number of units per stage and their respective connection counts according to such factors, the apparatus can better adapt to demand and maximize resource utilization efficiency and parallel computing speed.
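The stage-by-stage query can be modelled as a tree walk; the following is a hypothetical software rendering (names and structure assumed) of why one top-level poll suffices to discover a ready engine anywhere below:

```python
class DistributionNode:
    """A data distribution unit; leaf nodes wrap one calculation engine."""

    def __init__(self, children=None, engine=None):
        self.children = children or []
        self.engine = engine

    def ready(self):
        # Querying a unit is equivalent to querying the subtree beneath it.
        if self.engine is not None:
            return not self.engine.busy
        return any(child.ready() for child in self.children)
```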
With continued reference to fig. 1, in one possible implementation, the first-layer data merging module 116 may likewise include one or more stages of data merging modules (not shown), each stage sending calculation results to the next stage in a polling manner. In some exemplary embodiments, within a given Montgomery modular exponentiation module 110 and taking its first-layer modular exponentiation calculation module 114 as the boundary, the data distribution stages pass input data stage by stage toward the first-layer modular exponentiation calculation module 114, and the calculation results are then passed stage by stage onward through the data merging stages. A data flow direction inside the Montgomery modular exponentiation module 110 can thus be defined, running from the first data distribution stage, which receives the input data earliest, through the first-layer modular exponentiation calculation module 114, to the last data merging stage, which passes on the calculation result last; along this direction, the stage that transmits data is upstream and the stage that receives it is downstream.
Along the data flow direction defined in this way, and with the first-layer modular exponentiation calculation module 114 as the boundary, the number of data distribution units and the connection count of each data distribution stage, counted against the data flow direction (from the first-layer modular exponentiation calculation module 114 back to the first distribution stage), correspond one-to-one with the number of data merging units and the connection count of each data merging stage, counted along the data flow direction (from the first-layer modular exponentiation calculation module 114 to the last merging stage). In other words, the data distribution stages and data merging stages are distributed as mirror images of each other, the internal structures of the first-layer data distribution module 112 and the first-layer data merging module 116 being mirror-symmetric about the first-layer modular exponentiation calculation module 114.
Referring to fig. 1, the Montgomery modular exponentiation module 110 thus uses multi-stage data distribution and multi-stage data merging: input data is passed stage by stage to the modular exponentiation calculation engines by polling, and the calculation results output when an engine reaches its completion condition are passed on stage by stage, also by polling. A single query therefore covers many engines at once, improving resource utilization efficiency and parallel speed. Moreover, the unit counts and connection counts of the distribution stages and of the merging stages are mirror-symmetric about the first-layer modular exponentiation calculation module 114 along the reverse and forward data flow directions respectively, which helps the polling mechanism take full effect. The internal structure, that is, how many stages of distribution or merging modules there are and how many units and connections each stage contains, is adjustable according to one or a combination of the following factors: actual requirements, application scenarios, available computing resources, the input data to be calculated, the overall calculation requirements, the expected resource and time cost, and so on.
By flexibly adjusting the number of data distribution or merging units per stage and their respective connection counts according to such factors, the apparatus can better adapt to demand and maximize resource utilization efficiency and parallel computing speed.
With reference to fig. 1, the accumulated modular multiplication module 120 includes a second-layer data distribution module 122 and a second-layer modular multiplication calculation module 124 connected to it. The second-layer modular multiplication calculation module 124 includes a plurality of parallel modular multiplication calculation engines (not shown). The second-layer data distribution module 122 is configured to select one of these engines to receive the Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs; it is connected to the first-layer data merging module 116, from which it receives those integrated results. The selected modular multiplication calculation engine performs an accumulated modular multiplication over the Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs to obtain their accumulated modular multiplication result. The accumulated modular multiplication module 120 further includes a second-layer data merging module 126, which integrates these accumulated modular multiplication results and sends them to the de-Montgomerization operation module 130. The second-layer data distribution module 122 may select one of the plurality of modular multiplication calculation engines in any suitable manner.
In one possible embodiment, the second-layer data distribution module 122 is structured like the first-layer data distribution module 112, that is, it includes one or more stages of data distribution modules (not shown), and the second-layer data merging module 126 is structured like the first-layer data merging module 116, that is, it includes one or more stages of data merging modules (not shown); both distribute downward stage by stage in a polling manner. The last data distribution stage of the second-layer data distribution module 122 is configured to send the input data to the second-layer modular multiplication calculation module 124; specifically, each data distribution unit of that last stage is connected to one or more of the plurality of modular multiplication calculation engines.
As with the first layer, configuring the second-layer data distribution module 122 with multiple stages, each containing several data distribution units connected to more than one (two or more) units of the next stage, makes querying one unit equivalent to querying all the next-stage units connected to it; the query continues stage by stage in a polling manner until the units of the last stage query the modular multiplication calculation engines connected to them, so a single query can determine whether at least one of the plurality of modular multiplication calculation engines is suitable for receiving the input data. If more than one engine is suitable, the last-stage data distribution units may select one according to a preset rule, for example defaulting to the smallest or the largest sequence number, to receive the Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs; this is not limited here. Because the data distribution units of each stage are queried in a fixed order, that is, the polling pattern is fixed or preset, the scheme again simplifies control design and hardware implementation and helps ensure that idle computing resources are utilized to the maximum extent.
The number of data distribution units in each stage and the connections between units of different stages are adjustable, for example by hardware programming or reconfiguration of an FPGA. In other exemplary embodiments, any number of data distribution units may be deployed, but as a whole, from the first stage to the last stage, the number of units per stage should increase or at least not decrease, which makes it possible to query many modular multiplication calculation engines with a single poll. In addition, in some exemplary embodiments, the data distribution stages and data merging stages are distributed as mirror images about the second-layer modular multiplication calculation module 124, the internal structures of the second-layer data distribution module 122 and the second-layer data merging module 126 being mirror-symmetric about it.
Referring to fig. 1, the accumulated modular multiplication module 120 likewise uses multi-stage data distribution and multi-stage data merging: input data is passed stage by stage to the modular multiplication calculation engines by polling, and the calculation results output when an engine meets its completion condition are passed on stage by stage, also by polling, so that a single query covers many engines at once and resource utilization efficiency and parallel speed are improved. The unit counts and connection counts of the distribution stages and the merging stages are mirror-symmetric about the second-layer modular multiplication calculation module 124 along the reverse and forward data flow directions respectively, which helps the polling mechanism take full effect. The internal structure, that is, how many stages of distribution or merging modules there are and how many units and connections each stage contains, is adjustable according to one or a combination of the following factors: actual requirements, application scenarios, available computing resources, the input data to be calculated, the overall calculation requirements, the expected resource and time cost, and so on.
By flexibly adjusting the number of data distribution or merging units per stage and their respective connection counts according to such factors, the apparatus can better adapt to demand and maximize resource utilization efficiency and parallel computing speed.
Referring to fig. 1, the de-Montgomerization operation module 130 is connected to the second-layer data merging module 126 and configured to perform a de-Montgomerization operation on the accumulated modular multiplication results of the ciphertext-plaintext pairs, yielding the ciphertext calculation results of those pairs. Thus, through the cooperation of the Montgomery modular exponentiation module 110, the accumulated modular multiplication module 120, and the de-Montgomerization operation module 130, the ciphertext computing apparatus 100 implements secret multiplication based on modular exponentiation in the Montgomery domain: the ciphertexts are raised to their paired plaintexts by modular exponentiation, the modular exponentiation results are combined by accumulated modular multiplication, and the result is converted out of Montgomery form. The ciphertext computing apparatus 100 can accordingly handle various types of input data: for example, the plurality of ciphertext-plaintext pairs may be determined from a ciphertext data group and a plaintext data group to be multiplied in the secret state; or from a row vector of a ciphertext matrix and a column vector of a plaintext matrix to be subjected to ciphertext matrix multiplication, or from a column vector of the ciphertext matrix and a row vector of the plaintext matrix.
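For reference, the mathematics of Montgomerization and de-Montgomerization can be sketched with the textbook REDC algorithm (toy parameters assumed; the patent's hardware pipeline is not reproduced here). A value a is held as a·R mod N during the computation, and one final reduction removes the factor R, which is exactly the role of the de-Montgomerization module:

```python
def montgomery_params(n, r_bits):
    """n must be odd and smaller than R = 2^r_bits."""
    r = 1 << r_bits
    n_prime = -pow(n, -1, r) % r   # n * n' ≡ -1 (mod R)
    return r, n_prime

def redc(t, n, r_bits, n_prime):
    """Montgomery reduction: returns t * R^-1 mod n, for 0 <= t < n*R."""
    r_mask = (1 << r_bits) - 1
    m = (t & r_mask) * n_prime & r_mask
    u = (t + m * n) >> r_bits
    return u - n if u >= n else u
```

Multiplying two Montgomery-form values and applying REDC keeps the product in Montgomery form, so a whole chain of modular multiplications needs only one conversion into the domain per operand and one conversion out at the very end.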
Moreover, the ciphertext computing apparatus 100 incorporates further optimizations, in the polling mechanism, the multi-stage data distribution and data merging modules, the mirror-symmetric layout, and the adjustable internal structure (for example, the numbers of modular exponentiation calculation engines and of modular multiplication calculation engines are adjustable), so as to better adapt to demand, maximize resource utilization efficiency and parallel computing speed, and cope efficiently with the complicated and variable ciphertext computing requirements of application scenarios involving federated learning and privacy computing. The polling mechanism's selection of the downstream module may be implemented in any suitable manner, for example with a channel-selection counter: after each piece of data is sent to the selected downstream module, the counter is incremented by 1, and it is cleared once every channel has been sent data.
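The channel-selection counter mentioned above might look as follows in software (a hedged sketch; the class and method names are assumptions for illustration):

```python
class ChannelSelector:
    """Advances by 1 after each send; clears once every channel is served."""

    def __init__(self, num_channels):
        self.num_channels = num_channels
        self.counter = 0

    def next_channel(self):
        ch = self.counter
        self.counter += 1
        if self.counter == self.num_channels:  # every channel has been served
            self.counter = 0                   # clear the counter
        return ch
```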
With continued reference to fig. 1, the parameter calculation distribution module 102 calculates the parameters and distributes those used for modular exponentiation to the first-layer modular exponentiation calculation module 114 through the first-layer data distribution module 112, and those used for modular multiplication to the second-layer modular multiplication calculation module 124 through the second-layer data distribution module 122. For example, the parameter calculation distribution module 102 may send the calculated parameters to the second-stage data distribution module (if any) of the first-layer data distribution module 112. The task management module 104 is configured to generate configuration information corresponding to the plurality of ciphertext-plaintext pairs. The configuration information includes a data-header identifier indicating the first ciphertext-plaintext pair to be sent and a data-tail identifier indicating the last.
It should be understood that, to improve data throughput, the embodiments of the present application add this data information, that is, the configuration information with its data-header and data-tail identifiers, so that: after sending the plurality of ciphertext-plaintext pairs, the first-layer data distribution module 112 generates a first-layer batch completion signal from the configuration information and sends it to each of the plurality of modular exponentiation calculation engines; and after sending the Montgomery modular exponentiation results of the plurality of pairs, the second-layer data distribution module 122 generates a second-layer batch completion signal from the configuration information and sends it to the selected modular multiplication calculation engine. The first-layer batch completion signal lets each modular exponentiation calculation engine judge whether it is ready to receive new parameters for modular exponentiation; the second-layer batch completion signal likewise lets each modular multiplication calculation engine judge whether it is ready to receive new parameters for modular multiplication.
Thus, by adding the configuration information, the ciphertext computing apparatus 100 can pipeline input data of different batches while keeping the batches distinguishable, which increases data throughput and makes full use of the available computing resources. The first-layer batch completion signal can also instruct the receiving modular exponentiation calculation engine to perform whatever operations are necessary to separate batches, which effectively converts a control operation into a data-stream-driven control flow and simplifies the control design; the second-layer batch completion signal serves the modular multiplication calculation engines in the same way. In some exemplary embodiments, the first-layer data distribution module 112 generates the first-layer batch completion signal by recognizing, in the configuration information, the data-tail identifier of the current batch and the data-header identifier of the next batch. The configuration information may further include data bit widths, ciphertext computation mode identifiers, and any other suitable information.
It should be understood that the configuration information may either be added to the data stream carrying the plurality of ciphertext-plaintext pairs and sent to the first-layer data distribution module along with them, or be sent to the first-layer data distribution module through a dedicated configuration information channel, distinct from the data channel carrying the pairs. For example, if the data to be transmitted is 1024 bits wide and a single data channel is 64 bits wide, 8 bits may be added to the data channel to carry the configuration information, embedding it in the data stream; alternatively, an extra channel may be added alongside the data channel to serve as the configuration information channel.
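Embedding the configuration information in the data stream could be sketched as widening each 64-bit beat by 8 sideband bits; the flag assignments below are assumptions for illustration, not taken from the patent:

```python
HEADER_FLAG = 0x01   # marks the first beat of a batch (data-header identifier)
TAIL_FLAG   = 0x02   # marks the last beat of a batch (data-tail identifier)

def frame_batch(words):
    """words: 64-bit integers for one batch of ciphertext-plaintext pairs.

    Returns 72-bit beats: 8 sideband flag bits above 64 data bits.
    """
    beats = []
    for i, w in enumerate(words):
        flags = 0
        if i == 0:
            flags |= HEADER_FLAG
        if i == len(words) - 1:
            flags |= TAIL_FLAG
        beats.append((flags << 64) | w)
    return beats
```

Seeing a TAIL_FLAG followed by a HEADER_FLAG is exactly the condition under which a batch completion signal would be generated downstream.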
It should be understood that the ciphertext computing apparatus 100 shown in fig. 1 may further include other modules (not shown) providing auxiliary or control functions, including but not limited to: a task management module for parsing upper-level commands, distributing input data, and processing calculation results; a memory management module for storing upper-level commands, input data, and calculation results; and a data transmission module for transmitting commands and data, for example using the PCIE (high-speed serial computer expansion bus) standard together with DMA (direct memory access). These modules may be provided separately, as parts of the modules shown in fig. 1, or their functions and the necessary circuit structures may be added to the modules shown in fig. 1; for example, the first-layer data distribution module 112 or the task management module 104 may have a PCIE DMA data interface to receive input data at high speed. Such modifications are part of the disclosure of the present application, may be adapted to actual needs or applications, and are not specifically limited here.
Fig. 2 shows a block diagram of a ciphertext computing system including the ciphertext computing apparatus shown in fig. 1, according to an embodiment of the present application. As shown in fig. 2, the ciphertext computing system 210 includes the ciphertext computing apparatus 200, a ciphertext adder 212, and a main processor 214, as well as a receiving module 216. The ciphertext computing apparatus 200 has a structure and function similar to those of the ciphertext computing apparatus 100 shown in fig. 1, and therefore provides the beneficial technical effects described above, which are not repeated here. The ciphertext adder 212 performs the secret addition calculations. The main processor 214 splits a secret computation formula into a combination of secret addition formulas and secret multiplication formulas, calling the ciphertext adder 212 to evaluate the former and the ciphertext computing apparatus 200 to evaluate the latter. The ciphertext computing system 210 can therefore efficiently handle operations involving large amounts of data and cope with the complicated and variable ciphertext computing requirements of application scenarios involving federated learning and privacy computing.
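The role split performed by the main processor can be sketched as a small expression walker (a hypothetical software model; the expression encoding and device interfaces are assumptions): multiplications are dispatched to the ciphertext computing apparatus and additions to the ciphertext adder.

```python
def evaluate(expr, mul_device, add_device):
    """expr: a leaf value, or a nested tuple ('add', a, b) / ('mul', a, b).

    mul_device / add_device are callables standing in for the ciphertext
    computing apparatus and the ciphertext adder, respectively.
    """
    if not isinstance(expr, tuple):
        return expr
    op, lhs, rhs = expr
    a = evaluate(lhs, mul_device, add_device)
    b = evaluate(rhs, mul_device, add_device)
    if op == 'mul':
        return mul_device(a, b)   # dispatched to the ciphertext computing apparatus
    return add_device(a, b)       # dispatched to the ciphertext adder
```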
Fig. 3 shows a flowchart of the ciphertext calculation method according to the embodiment of the present application. As shown in fig. 3, the ciphertext computation method 300 may include the following steps.
S310: a plurality of ciphertext-plaintext pairs to be subjected to ciphertext multiplication are obtained.
S312: through a first-layer data distribution module, one ciphertext-plaintext pair of the plurality of ciphertext-plaintext pairs is sent at a time, and one modular exponentiation calculation engine among a plurality of parallel modular exponentiation calculation engines included in a first-layer modular exponentiation calculation module is selected in a polling (round-robin) manner to receive the sent ciphertext-plaintext pair, the first-layer modular exponentiation calculation module being connected with the first-layer data distribution module.
S314: the selected modular exponentiation calculation engine performs a Montgomery-ization operation on the ciphertext in the sent ciphertext-plaintext pair to obtain a Montgomery ciphertext, and performs a Montgomery modular exponentiation operation on the Montgomery ciphertext and the plaintext in the sent ciphertext-plaintext pair to obtain a Montgomery modular exponentiation result of the selected modular exponentiation calculation engine.
S316: the Montgomery modular exponentiation results generated by the plurality of modular exponentiation calculation engines are integrated to obtain Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs.
S318: through a second-layer data distribution module, one modular multiplication calculation engine among a plurality of parallel modular multiplication calculation engines included in a second-layer modular multiplication calculation module is selected to receive the Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs, the second-layer modular multiplication calculation module being connected with the second-layer data distribution module.
S320: the selected modular multiplication calculation engine performs an accumulated modular multiplication operation on the Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs to obtain accumulated modular multiplication results of the plurality of ciphertext-plaintext pairs.
S322: a Montgomery removal operation module performs a Montgomery removal operation on the accumulated modular multiplication results of the plurality of ciphertext-plaintext pairs to obtain ciphertext calculation results of the plurality of ciphertext-plaintext pairs.
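As a rough software mirror of steps S310 through S322, the following Python sketch Montgomery-izes each ciphertext, exponentiates in the Montgomery domain, folds the results into an accumulated modular multiplication, and finally de-Montgomery-izes. The modulus N, the radix R, and the sample pairs are arbitrary assumptions, and `mont_mul` uses a precomputed R^-1 for brevity rather than the division-free Montgomery reduction a hardware engine would implement:

```python
# Software mirror of steps S310-S322; values are illustrative assumptions.
N = 323 * 323        # odd modulus (a toy Paillier n^2), so gcd(R, N) = 1
R = 1 << 17          # Montgomery radix, a power of two with R > N
R_INV = pow(R, -1, N)

def to_mont(x):      # S314 (first half): Montgomery-ize a value
    return x * R % N

def from_mont(x):    # S322: Montgomery removal (de-Montgomery-ize)
    return x * R_INV % N

def mont_mul(a, b):  # core primitive: a * b * R^-1 mod N
    return a * b * R_INV % N

def mont_pow(c_bar, e):
    # S314 (second half): square-and-multiply in the Montgomery domain
    result, base = to_mont(1), c_bar
    while e:
        if e & 1:
            result = mont_mul(result, base)
        base = mont_mul(base, base)
        e >>= 1
    return result

def ciphertext_dot(pairs):
    # S312/S316-S320: each pair's modexp result is folded into an
    # accumulated modular multiplication, then de-Montgomery-ized.
    acc = to_mont(1)
    for c, m in pairs:
        acc = mont_mul(acc, mont_pow(to_mont(c), m))
    return from_mont(acc)

pairs = [(12345, 7), (54321, 11)]
assert ciphertext_dot(pairs) == (pow(12345, 7, N) * pow(54321, 11, N)) % N
```

Staying in the Montgomery domain between S314 and S320 is the point of the two-layer design: only one conversion in per ciphertext and one conversion out per accumulated result are needed.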
For details of the steps of the ciphertext computing method 300 shown in fig. 3, reference may be made to the structure and functions of the ciphertext computing apparatus 100 shown in fig. 1. The ciphertext computing method 300 can therefore efficiently process the massive data volumes involved in the operations, cope with the complex and variable circumstances of application scenarios related to federated learning and privacy computing, and efficiently handle ciphertext computation requirements.
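The polling (round-robin) selection performed by the first-layer data distribution module in step S312 can be sketched as a simple dispatcher; the engine count of 4 below is an arbitrary assumption, not a value fixed by the patent:

```python
from collections import defaultdict
from itertools import cycle

# Round-robin (polling) dispatch sketch for the first-layer data
# distribution module; the engine count of 4 is an arbitrary assumption.
NUM_ENGINES = 4

def dispatch(pairs, num_engines=NUM_ENGINES):
    """Send one ciphertext-plaintext pair per beat to the next engine in turn."""
    queues = defaultdict(list)
    engines = cycle(range(num_engines))
    for pair in pairs:
        queues[next(engines)].append(pair)
    return queues

loads = dispatch([(c, m) for c, m in zip(range(10), range(10))])
assert [len(loads[e]) for e in range(NUM_ENGINES)] == [3, 3, 2, 2]
```

Because a Montgomery modular exponentiation takes roughly the same time for fixed-width operands, simple polling keeps the parallel engines evenly loaded without per-engine back-pressure logic.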
The embodiments provided herein may be implemented in any one or combination of hardware, software, firmware, or solid state logic circuitry, and may be implemented in connection with signal processing, control, and/or application specific circuitry. Particular embodiments of the present application provide an apparatus or device that may include one or more processors (e.g., microprocessors, controllers, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), etc.) that process various computer-executable instructions to control the operation of the apparatus or device. Particular embodiments of the present application provide an apparatus or device that can include a system bus or data transfer system that couples the various components together. A system bus can include any of a variety of different bus structures or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. The devices or apparatuses provided in the embodiments of the present application may be provided separately, or may be part of a system, or may be part of other devices or apparatuses.
Particular embodiments provided herein may include or be combined with computer-readable storage media, such as one or more storage devices capable of providing non-transitory data storage. The computer-readable storage medium/storage device may be configured to store data, programs, and/or instructions that, when executed by a processor of an apparatus or device provided by embodiments of the present application, cause the apparatus or device to perform the associated operations. The computer-readable storage medium/storage device may include one or more of the following features: volatile, non-volatile, dynamic, static, read/write, read-only, random access, sequential access, location addressability, file addressability, and content addressability. In one or more exemplary embodiments, the computer-readable storage medium/storage device may be integrated into a device or apparatus provided in the embodiments of the present application or belong to a common system. The computer-readable storage medium/storage device may include optical, semiconductor, and/or magnetic memory devices, and may also include Random Access Memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a recordable and/or rewriteable Compact Disc (CD), a Digital Versatile Disc (DVD), a mass storage media device, or any other form of suitable storage media.
The above is an implementation manner of the embodiments of the present application, and it should be noted that the steps in the method described in the embodiments of the present application may be sequentially adjusted, combined, and deleted according to actual needs. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. It is to be understood that the embodiments of the present application and the structures shown in the drawings are not to be construed as specifically limiting the devices or systems involved. In other embodiments of the present application, an apparatus or system may include more or fewer components than the specific embodiments and figures, or may combine certain components, or may separate certain components, or may have a different arrangement of components. Those skilled in the art will understand that various modifications and changes may be made in the arrangement, operation, and details of the methods and apparatus described in the specific embodiments without departing from the spirit and scope of the embodiments herein; without departing from the principles of embodiments of the present application, several improvements and modifications may be made, and such improvements and modifications are also considered to be within the scope of the present application.

Claims (20)

1. A ciphertext computing apparatus for federated learning and privacy computations, the ciphertext computing apparatus comprising:
a Montgomery modular exponentiation module, comprising a first-layer data distribution module and a first-layer modular exponentiation calculation module connected with the first-layer data distribution module, wherein the first-layer modular exponentiation calculation module comprises a plurality of parallel modular exponentiation calculation engines, the first-layer data distribution module is configured to transmit one ciphertext-plaintext pair of a plurality of ciphertext-plaintext pairs at a time and to select, in a polling manner, one modular exponentiation calculation engine of the plurality of modular exponentiation calculation engines to receive the transmitted ciphertext-plaintext pair, the selected modular exponentiation calculation engine performs a Montgomery-ization operation on the ciphertext in the transmitted ciphertext-plaintext pair to obtain a Montgomery ciphertext and performs a Montgomery modular exponentiation operation on the Montgomery ciphertext and the plaintext in the transmitted ciphertext-plaintext pair to obtain a Montgomery modular exponentiation result of the selected modular exponentiation calculation engine, and the Montgomery modular exponentiation results generated by the plurality of modular exponentiation calculation engines are integrated to obtain Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs;
a second-layer data distribution module and a second-layer modular multiplication calculation module connected with the second-layer data distribution module, wherein the second-layer modular multiplication calculation module comprises a plurality of parallel modular multiplication calculation engines, the second-layer data distribution module is configured to select one of the plurality of modular multiplication calculation engines to receive the Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs, and the selected modular multiplication calculation engine performs an accumulated modular multiplication operation on the Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs to obtain accumulated modular multiplication results of the plurality of ciphertext-plaintext pairs; and
a Montgomery removal operation module configured to perform a Montgomery removal operation on the accumulated modular multiplication results of the plurality of ciphertext-plaintext pairs to obtain ciphertext calculation results of the plurality of ciphertext-plaintext pairs.
2. The ciphertext computing apparatus of claim 1, wherein the plurality of ciphertext-plaintext pairs are determined based on a ciphertext data set and a plaintext data set to be cryptographically multiplied.
3. The ciphertext computing apparatus according to claim 1, wherein the plurality of ciphertext-plaintext pairs are determined according to a row vector of a ciphertext matrix and a column vector of a plaintext matrix to be subjected to ciphertext matrix multiplication, or wherein the plurality of ciphertext-plaintext pairs are determined according to a column vector of the ciphertext matrix and a row vector of the plaintext matrix.
4. The ciphertext computing apparatus of any one of claims 1 to 3, wherein the number of the plurality of modular exponentiation calculation engines and the number of the plurality of modular multiplication calculation engines are adjustable.
5. The ciphertext computing apparatus of claim 1, further comprising:
a parameter calculation and distribution module configured to perform parameter calculation, to distribute parameters for modular exponentiation calculation to the first-layer modular exponentiation calculation module through the first-layer data distribution module, and to distribute parameters for modular multiplication calculation to the second-layer modular multiplication calculation module through the second-layer data distribution module.
6. The ciphertext computing apparatus of claim 1, further comprising:
the task management module is configured to generate configuration information corresponding to the plurality of ciphertext-plaintext pairs, where the configuration information includes a data head identifier and a data tail identifier, the data head identifier indicates a ciphertext-plaintext pair that is a first transmitted ciphertext-plaintext pair among the plurality of ciphertext-plaintext pairs, and the data tail identifier indicates a ciphertext-plaintext pair that is a last transmitted ciphertext-plaintext pair among the plurality of ciphertext-plaintext pairs.
7. The ciphertext computing apparatus of claim 6, wherein the configuration information further comprises a data bit width and a ciphertext computation mode identifier.
8. The ciphertext computing apparatus of claim 6, wherein the first-layer data distribution module generates a first-layer batch completion signal according to the configuration information and sends the first-layer batch completion signal to each of the plurality of modular exponentiation calculation engines after sending the plurality of ciphertext-plaintext pairs, and the second-layer data distribution module generates a second-layer batch completion signal according to the configuration information and sends the second-layer batch completion signal to the selected modular multiplication calculation engine after sending the Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs.
9. The ciphertext computing apparatus of claim 8, wherein each of the plurality of modular exponentiation calculation engines determines, based on the first-layer batch completion signal, whether it is suitable to receive new parameters for modular exponentiation calculation, and each of the plurality of modular multiplication calculation engines determines, based on the second-layer batch completion signal, whether it is suitable to receive new parameters for modular multiplication calculation.
10. The ciphertext computing apparatus of claim 8, wherein the configuration information is added to a data stream corresponding to the plurality of ciphertext plaintext pairs to be transmitted to the first tier data distribution module with the plurality of ciphertext plaintext pairs, or wherein the configuration information is transmitted to the first tier data distribution module via a configuration information channel that is different from a data channel through which the plurality of ciphertext plaintext pairs are transmitted to the first tier data distribution module.
11. The ciphertext computing apparatus of claim 1, wherein the layer two data distribution module is configured to select one of the plurality of modular multiplication computing engines to receive a result of a Montgomery modular exponentiation of the plurality of ciphertext plaintext pairs, comprising:
the layer-two data distribution module is configured to select one of the plurality of modular multiplication calculation engines to receive Montgomery modular exponentiation results of the plurality of ciphertext plaintext pairs in a round-robin manner.
12. A ciphertext computing system, the ciphertext computing system comprising:
a ciphertext adder configured to perform ciphertext addition calculation;
the ciphertext computing apparatus of any one of claims 1 to 11; and
and a main processor configured to split a ciphertext computation expression into a combination of a ciphertext addition expression and a ciphertext multiplication expression, to call the ciphertext adder to complete the ciphertext addition expression, and to call the ciphertext computing apparatus to complete the ciphertext multiplication expression.
13. A ciphertext computing method applied to federated learning and privacy computing is characterized by comprising the following steps:
obtaining a plurality of ciphertext-plaintext pairs to be subjected to ciphertext multiplication;
through a first-layer data distribution module, one ciphertext plaintext pair in a plurality of ciphertext plaintext pairs is sent each time, one modular exponentiation calculation engine in a plurality of parallel modular exponentiation calculation engines included in the first-layer modular exponentiation calculation module is selected according to a polling mode to receive the sent ciphertext plaintext pair, wherein the first-layer modular exponentiation calculation module is connected with the first-layer data distribution module;
performing Montgomery operation on the ciphertext in the sent ciphertext plaintext pair by using the selected modular exponentiation engine to obtain a Montgomery ciphertext, and performing Montgomery modular exponentiation on the Montgomery ciphertext and the plaintext in the sent ciphertext plaintext pair to obtain a Montgomery modular exponentiation result of the selected modular exponentiation engine;
integrating Montgomery modular exponentiation results generated by the multiple modular exponentiation calculation engines to obtain Montgomery modular exponentiation results of the multiple ciphertext plaintext pairs;
selecting one modular multiplication calculation engine in a plurality of parallel modular multiplication calculation engines included in a second layer of modular multiplication calculation module to receive Montgomery modular exponentiation operation results of the plurality of ciphertext plaintext pairs through a second layer of data distribution module, wherein the second layer of modular multiplication calculation module is connected with the second layer of data distribution module;
performing accumulated modular multiplication operation on Montgomery modular exponentiation operation results of the plurality of ciphertext plaintext pairs through the selected modular multiplication calculation engine to obtain accumulated modular multiplication operation results of the plurality of ciphertext plaintext pairs; and
and performing Montgomery removal operation on accumulated modular multiplication operation results of the plurality of ciphertext plaintext pairs through a Montgomery removal operation module to obtain ciphertext calculation results of the plurality of ciphertext plaintext pairs.
14. The ciphertext computing method of claim 13, wherein the number of the plurality of modular exponentiation engines and the number of the plurality of modular multiplication engines are adjustable.
15. The ciphertext computation method of claim 13, further comprising:
performing parameter calculation by a parameter calculation distribution module, an
And distributing the parameters for modular exponentiation calculation to the first layer modular exponentiation calculation module through the first layer data distribution module and distributing the parameters for modular multiplication calculation to the second layer modular multiplication calculation module through the second layer data distribution module.
16. The ciphertext computing method of claim 13, further comprising:
generating, by a task management module, configuration information corresponding to the plurality of ciphertext plaintext pairs, wherein the configuration information includes a data header identifier indicating a ciphertext plaintext pair that is a first sent of the plurality of ciphertext plaintext pairs and a data trailer identifier indicating a ciphertext plaintext pair that is a last sent of the plurality of ciphertext plaintext pairs.
17. The ciphertext computing method of claim 16, wherein the configuration information further comprises a data bit width and a ciphertext computing mode identifier.
18. The ciphertext computing method of claim 16, wherein the first-layer data distribution module generates a first-layer batch completion signal according to the configuration information and sends the first-layer batch completion signal to each of the plurality of modular exponentiation calculation engines after sending the plurality of ciphertext-plaintext pairs, and the second-layer data distribution module generates a second-layer batch completion signal according to the configuration information and sends the second-layer batch completion signal to the selected modular multiplication calculation engine after sending the Montgomery modular exponentiation results of the plurality of ciphertext-plaintext pairs.
19. The ciphertext computing method of claim 18, wherein each of the plurality of modular exponentiation calculation engines determines, based on the first-layer batch completion signal, whether it is suitable to receive new parameters for modular exponentiation calculation, and each of the plurality of modular multiplication calculation engines determines, based on the second-layer batch completion signal, whether it is suitable to receive new parameters for modular multiplication calculation.
20. The ciphertext computation method of claim 18, wherein the configuration information is added to a data stream corresponding to the plurality of ciphertext-plaintext pairs to be sent to the first tier data distribution module with the plurality of ciphertext-plaintext pairs, or wherein the configuration information is sent to the first tier data distribution module through a configuration information channel that is different from a data channel through which the plurality of ciphertext-plaintext pairs are sent to the first tier data distribution module.
CN202111196023.0A 2021-10-14 2021-10-14 Ciphertext computing device and method for federal learning and privacy computing Active CN113946846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111196023.0A CN113946846B (en) 2021-10-14 2021-10-14 Ciphertext computing device and method for federal learning and privacy computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111196023.0A CN113946846B (en) 2021-10-14 2021-10-14 Ciphertext computing device and method for federal learning and privacy computing

Publications (2)

Publication Number Publication Date
CN113946846A CN113946846A (en) 2022-01-18
CN113946846B true CN113946846B (en) 2022-07-12

Family

ID=79329813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111196023.0A Active CN113946846B (en) 2021-10-14 2021-10-14 Ciphertext computing device and method for federal learning and privacy computing

Country Status (1)

Country Link
CN (1) CN113946846B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114793155A (en) * 2022-04-12 2022-07-26 支付宝(杭州)信息技术有限公司 Multi-party secure computing method and device
CN114721913B (en) * 2022-05-12 2022-08-23 华控清交信息科技(北京)有限公司 Method and device for generating data flow graph

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001022653A2 (en) * 1999-09-22 2001-03-29 Raytheon Company Key escrow systems
CN102207847A (en) * 2011-05-06 2011-10-05 广州杰赛科技股份有限公司 Data encryption and decryption processing method and device based on Montgomery modular multiplication operation
CN109814838A (en) * 2019-03-28 2019-05-28 贵州华芯通半导体技术有限公司 Obtain method, hardware device and the system of the intermediate result group in encryption and decryption operation
CN112070222A (en) * 2020-11-10 2020-12-11 深圳致星科技有限公司 Processing architecture, accelerator and method for federal learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8073892B2 (en) * 2005-12-30 2011-12-06 Intel Corporation Cryptographic system, method and multiplier
US20210256394A1 (en) * 2020-02-14 2021-08-19 Zymergen Inc. Methods and systems for the optimization of a biosynthetic pathway
CN111832050B (en) * 2020-07-10 2021-03-26 深圳致星科技有限公司 Paillier encryption scheme based on FPGA chip implementation for federal learning
CN112100673A (en) * 2020-09-29 2020-12-18 深圳致星科技有限公司 Federal learning accelerator and RSA intersection calculation method for privacy calculation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001022653A2 (en) * 1999-09-22 2001-03-29 Raytheon Company Key escrow systems
CN102207847A (en) * 2011-05-06 2011-10-05 广州杰赛科技股份有限公司 Data encryption and decryption processing method and device based on Montgomery modular multiplication operation
CN109814838A (en) * 2019-03-28 2019-05-28 贵州华芯通半导体技术有限公司 Obtain method, hardware device and the system of the intermediate result group in encryption and decryption operation
CN112070222A (en) * 2020-11-10 2020-12-11 深圳致星科技有限公司 Processing architecture, accelerator and method for federal learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Nir Drucker et al.; "Paillier-encrypted databases with fast aggregated queries"; IEEE; 2017-07-20; 2331-9860 *
Zhou Jun et al.; "A survey of security and privacy protection research in federated learning"; Journal of Xihua University (Natural Science Edition); 2020-06-30; vol. 39, no. 4, pp. 9-17 *

Also Published As

Publication number Publication date
CN113946846A (en) 2022-01-18

Similar Documents

Publication Publication Date Title
CN112070222B (en) Processing device, accelerator and method for federal learning
CN112865954B (en) Accelerator, chip and system for Paillier decryption
CN113946846B (en) Ciphertext computing device and method for federal learning and privacy computing
CN110008717B (en) Decision tree classification service system and method supporting privacy protection
CN112988237B (en) Paillier decryption system, chip and method
CN112883408B (en) Encryption and decryption system and chip for private calculation
CN112733161A (en) Device and method for federated learning ciphertext operation
CN114021734B (en) Parameter calculation device, system and method for federal learning and privacy calculation
WO2020199785A1 (en) Processing method and computing method for private data, and applicable device
CN113407979B (en) Heterogeneous acceleration method, device and system for longitudinal federated logistic regression learning
CN112100673A (en) Federal learning accelerator and RSA intersection calculation method for privacy calculation
EP3522137B1 (en) Secure equijoin system, secure equijoin device, secure equijoin method, and program
CN113656823B (en) Secret addition computing device and system for federal learning and privacy computing
CN113900828B (en) Special processor for federal learning, federal learning processing chip and chip
CN113553191B (en) Heterogeneous processing system for federated learning and privacy computing
CN110266481A (en) Rear quantum Encrypt and Decrypt method and decryption device based on matrix
EP3246900B1 (en) Matrix and key generation device, matrix and key generation system, matrix coupling device, matrix and key generation method, and program
WO2020169996A1 (en) Matrix-based cryptographic methods and apparatus
Patil Enhanced-elliptic curve Diffie Hellman algorithm for secure data storage in multi cloud environment
WO2023276142A1 (en) Secret equijoin device, secret equijoin method, and program
WO2021027598A1 (en) Method and apparatus for determining model parameter, and electronic device
CN116663064B (en) Privacy protection neural network prediction method and system
CN115242374A (en) Intelligent contract engine implementation method and system supporting FHE
CN107508673A (en) The method and relevant apparatus that key obtains between ERP and third party's component
Moyal A pathwise comparison of parallel queues

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant