WO2022156159A1 - Method, device, storage medium and program product for adjusting model parameters - Google Patents
- Publication number
- WO2022156159A1 (PCT/CN2021/105839)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- variable
- classification
- value
- result
- quotient
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/21—Design, administration or maintenance of databases
- G06F16/215—Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2458—Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
- G06F16/2465—Query processing support for facilitating data mining operations in structured databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- the present application relates to secure multi-party computing fields such as multi-party machine learning, federated learning, and joint modeling, and in particular, to a method, device, storage medium, and program product for adjusting model parameters.
- secure multi-party computing technology is often used to protect the data privacy of all parties involved in machine learning.
- the exponential operation on the classification prediction value generates a large value that easily exceeds the data representation range of secure multi-party computation, causing data overflow, which reduces the accuracy of the trained classification model and can even make the accuracy abnormally unusable.
- the present application provides a method, device, storage medium and program product for adjusting model parameters.
- a method for adjusting model parameters comprising:
- using a classification model trained based on secure multi-party computation, the input data is classified to obtain the classification prediction value of the classification model; the classification prediction value is reduced, and the reduced classification prediction value is normalized to obtain a normalized result of the classification prediction value;
- the parameters of the classification model are updated according to the normalized result of the classification prediction value.
- a device for adjusting model parameters including:
- the classification processing module is used for classifying the input data by using the classification model obtained based on the secure multi-party computation training to obtain the classification prediction value of the classification model;
- a reduction processing module, configured to perform reduction processing on the classification prediction value;
- a normalization processing module, configured to normalize the reduced classification prediction value to obtain a normalized result of the classification prediction value;
- a parameter updating module configured to update the parameters of the classification model according to the normalized result of the classification prediction value.
- an electronic device, comprising: at least one processor, and a memory communicatively connected to the at least one processor;
- the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method of the first aspect.
- a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to perform the method of the first aspect.
- a computer program product, comprising a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the storage medium and execute it to cause the electronic device to perform the method of the first aspect.
- the technology according to the present application improves the accuracy of a classification model trained based on secure multi-party computation.
- FIG. 1 is a schematic diagram of a machine learning framework based on multi-party secure computing according to an embodiment of the present application
- FIG. 2 is a flowchart of a method for adjusting model parameters provided by the first embodiment of the present application
- FIG. 3 is a flowchart of a method for adjusting model parameters provided by the second embodiment of the present application.
- FIG. 4 is a flowchart of a long division method based on secure multi-party computation provided by the second embodiment of the present application.
- FIG. 5 is a schematic flowchart of a plaintext long division method provided by the second embodiment of the present application.
- FIG. 6 is a schematic diagram of a device for adjusting model parameters provided by the third embodiment of the present application.
- FIG. 7 is a schematic diagram of a device for adjusting model parameters provided by the fourth embodiment of the present application.
- FIG. 8 is a block diagram of an electronic device for implementing the method for adjusting model parameters according to an embodiment of the present application.
- the present application provides a method, device, storage medium and program product for adjusting model parameters, applied to secure multi-party computing fields such as multi-party machine learning, federated learning, and joint modeling, so as to improve the accuracy of classification models trained based on secure multi-party computation.
- the method for adjusting model parameters provided in this embodiment can be applied to the machine learning framework based on multi-party secure computing as shown in FIG. 1 .
- in multi-party secure computing, multiple data-holding parties (Figure 1 illustrates three data holders: data holder 1, data holder 2, and data holder 3) jointly train the classification model on the private data each holds, based on a multi-party secure computation (MPC) protocol; the private data of any one party is never learned by the other two parties, securing each party's own private data while increasing the value extracted from the data.
- the softmax function needs to be used to normalize the classification prediction value of the current classification model.
- the exponential operation in the softmax function can generate values that exceed the data representation range of secure multi-party computation, causing data overflow, which leads to a decrease in the accuracy of the trained classification model, and the accuracy may even become abnormally unusable.
- the method for adjusting model parameters provided in this embodiment can avoid data overflow in the normalization processing based on secure multi-party computation, thereby improving the accuracy of the classification model trained based on secure multi-party computation.
- the data processing involved in the method for adjusting model parameters provided in this application is based on secure multi-party computation; unless otherwise specified, all data involved are ciphertext and all processing is performed on ciphertext.
- FIG. 2 is a flowchart of a method for adjusting model parameters provided by the first embodiment of the present application. As shown in Figure 2, the specific steps of the method are as follows:
- Step S201 using a classification model trained based on secure multi-party computation, to classify the input data to obtain a classification prediction value of the classification model.
- the classification prediction value of the classification model includes the prediction value corresponding to each classification of the classification model.
- the current classification model obtained by training based on secure multi-party computation is used to classify the input data, and the classification prediction value of the current classification model is obtained.
- the input data may be data in a training set or a test set, and may be various types of data such as images, texts, and voices, which are not specifically limited in this embodiment.
- ciphertext is generally represented by integers, and floating-point data is represented by fixed-point numbers and scaled up to integers; therefore, the range of data that can actually be represented in secure multi-party computation is limited.
- Step S202 reducing the classification prediction value.
- in data processing based on secure multi-party computation, the method (softmax) for normalizing multiple input values x_0, x_1, ..., x_i, ..., x_k is defined as softmax(x_i) = exp(x_i) / Σ_j exp(x_j), where softmax(x_i) represents the normalized result of the input value x_i, k is the number of input values and is a positive integer greater than 1, and exp(x_i) represents the exponential operation on the input value x_i. In the softmax method based on secure multi-party computation, the exponential operation amplifies the input value exponentially, and the exponential operation and its accumulated sum can easily generate data that exceeds the data length the MPC can handle, resulting in data overflow.
- a set value can be subtracted from the classification prediction value; the set value can be chosen and adjusted according to the data range the MPC can handle and the range of the classification prediction values in the specific application scenario, and this embodiment does not specifically restrict it.
- the classification prediction value can also be divided by a larger value; for example, each classification prediction value can be divided by the maximum of all classification prediction values, reducing each value to the range (0, 1]. The result of exponentiating a reduced classification prediction value is then bounded above by the constant e, so it does not overflow.
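The two reduction strategies above can be illustrated in plaintext Python (the concrete values are illustrative assumptions; in the protocol every operation below runs on ciphertext via MPC):

```python
import math

# Hypothetical plaintext classification prediction values.
logits = [18.0, 45.0, 90.0]

# Strategy 1: subtract a set value (here the maximum) before exponentiating.
shifted = [x - max(logits) for x in logits]   # every value <= 0
exp_shifted = [math.exp(x) for x in shifted]  # every result in (0, 1]

# Strategy 2: divide each prediction by the maximum.
scaled = [x / max(logits) for x in logits]    # every value in (0, 1]
exp_scaled = [math.exp(x) for x in scaled]    # every result bounded by e
```

Either way, the inputs to the exponential stay small, so the exponentials stay within the representable range.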
- Step S203 performing a normalization process on the classification prediction value after the reduction process, to obtain a normalization result of the classification prediction value.
- Step S204 Update the parameters of the classification model according to the normalized result of the classification prediction value.
- the normalization result of the classification prediction value can be used as the prediction probability value of the corresponding classification
- the classification result of the classification model can be determined according to the prediction probability value of each classification
- the parameters of the current classification model can be updated according to the classification result, so as to realize the training of the classification model and improve its training accuracy.
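The text does not fix a particular update rule, so the following is only a hedged sketch of one common choice, a softmax cross-entropy gradient step on plaintext values (the function name, learning rate, and shapes are assumptions; in the protocol these operations would run on secret-shared data):

```python
def sgd_step(weights, features, probs, label, lr=0.1):
    """One gradient step for a linear classifier: weights is a list of
    per-class weight rows, probs the normalized prediction per class."""
    # Softmax cross-entropy gradient w.r.t. the logits: p_i - 1{i == label}
    grad = [p - (1.0 if i == label else 0.0) for i, p in enumerate(probs)]
    # Update each class row: w <- w - lr * g_i * x
    return [[w - lr * g * x for w, x in zip(row, features)]
            for row, g in zip(weights, grad)]
```

The normalized results serve as the per-class prediction probabilities that drive the gradient.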
- in this embodiment, the classification model trained based on secure multi-party computation classifies the input data to obtain the classification prediction values of the classification model; the classification prediction values are reduced, and the reduced values are normalized to obtain the normalized results; the parameters of the classification model are then updated according to the normalized results. Reducing the classification prediction values reduces the input values of the normalization method, which avoids data overflow during normalization and thereby improves the training accuracy of the classification model.
- FIG. 3 is a flowchart of a method for adjusting model parameters provided by the second embodiment of the present application.
- in this embodiment, the classification prediction values are reduced by determining the maximum among the classification prediction values and dividing each classification prediction value by that maximum. Further, the quotient of each classification prediction value and the maximum can be calculated by long division based on secure multi-party computation. As shown in Figure 3, the specific steps of the method are as follows:
- Step S301 using a classification model trained based on secure multi-party computation to perform classification processing on the input data to obtain a classification prediction value of the classification model.
- the classification prediction value of the classification model includes the prediction value corresponding to each classification of the classification model.
- the current classification model obtained by training based on secure multi-party computation is used to classify the input data, and the classification prediction value of the current classification model is obtained.
- the input data may be data in a training set or a test set, and may be various types of data such as images, texts, and voices, which are not specifically limited in this embodiment.
- ciphertext is generally represented by integers, and floating-point data is represented by fixed-point numbers and scaled up to integers; therefore, the range of data that can actually be represented in secure multi-party computation is limited.
- the input value of the exponential operation is the classification prediction value of the classification model. When the input value is too large, it is easy to cause the result of the exponential operation to exceed the data length that the MPC can handle, resulting in data overflow.
- in this embodiment, each classification prediction value is divided by the maximum of all classification prediction values, reducing each classification prediction value to the range (0, 1]; the result of exponentiating a reduced classification prediction value is then bounded above by the constant e, which avoids data overflow during normalization.
- secure multi-party computation protocols can basically implement basic arithmetic operations such as addition, subtraction, multiplication, and comparison, as well as Boolean operations such as AND, OR, NOT, and XOR.
- common secure multi-party computation protocols include: ABY, ABY3 and other protocols.
- the arithmetic operations and Boolean operations involved in this application are all operations based on secure multi-party computation protocols; they can be implemented from the basic arithmetic and Boolean operations of those protocols, using a Boolean circuit or a garbled circuit.
- Step S302 determining the maximum value among the classification prediction values.
- determining the maximum value among the plurality of classification prediction values can be implemented based on the method of calculating the maximum value among the two classification prediction values.
- the k classification prediction values can be grouped into pairs, the maximum of the two classification prediction values in each pair calculated, and then the maximum of the pairwise maxima calculated in the same manner, until a single maximum remains.
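The pairwise (tournament) maximum described above can be sketched in plaintext Python, with the two-argument `max` standing in for a secure comparison plus an oblivious selection:

```python
def tournament_max(values):
    """Reduce k values to their maximum using only pairwise max operations."""
    while len(values) > 1:
        # Compare adjacent pairs; each pair yields one survivor.
        paired = [max(values[i], values[i + 1])
                  for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:        # an odd leftover advances unchanged
            paired.append(values[-1])
        values = paired
    return values[0]
```

This takes about log2(k) rounds of pairwise comparisons, which matters in MPC where each comparison round costs communication.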
- the determination of the maximum value among the multiple classification prediction values may also be implemented in other manners, which will not be repeated in this embodiment.
- Step S303 Divide each classification prediction value by the maximum value, and the quotient of each classification prediction value obtained and the maximum value is the classification prediction value after reduction processing.
- after the maximum of all classification prediction values is determined, each classification prediction value is divided by the maximum to reduce it; the resulting quotient of each classification prediction value and the maximum is the reduced classification prediction value, which lies in the range (0, 1]. The result of exponentiating a reduced classification prediction value is then bounded above by the constant e, which prevents an overly large input value from causing data overflow during normalization.
- Step S304 obtaining a parameter threshold.
- the exponential function can be expressed by its limit definition e^x = lim_{n→∞} (1 + x/n)^n, where n is the parameter of the exponential function, x is the variable on which the exponential operation is performed, and e^x is the value of the exponential function of x.
- by setting the parameter n in the exponential function to a large parameter threshold that approximates infinity, the exponential operation is converted into multiplication operations, realizing an approximate exponential operation; when the parameter threshold is large, the error is small.
- the parameter thresholds may be set and adjusted according to actual application scenarios, which are not specifically limited in this embodiment.
- Step S305 Set the value of the function parameter n in the exponential function as the parameter threshold, and calculate the exponential function value of each reduced classification predicted value.
- the value of the function parameter n in the exponential function is set as the parameter threshold, and the exponential function value of each reduced classification prediction value is calculated.
- in this way, the exponential operation can be converted into multiplication operations, and the approximate exponential operation can be realized by multiplication based on secure multi-party computation.
- when the parameter threshold is large, the error is small and no data overflow occurs.
- the exponential function value of the reduced classification prediction value is approximately calculated by the following Formula 1: e^x ≈ (1 + x/M)^M, where e is the natural constant, x is the reduced classification prediction value, e^x is the exponential function value of the reduced classification prediction value, and M is the parameter threshold, which is a constant.
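A plaintext sketch of Formula 1: for a large threshold M, e^x ≈ (1 + x/M)^M. Choosing M as a power of two (the value 2^10 here is an assumption) lets the M-th power be computed with O(log M) multiplications by square-and-multiply, which maps directly onto MPC multiplication:

```python
import math

def approx_exp(x, M=2**10):
    """Approximate e**x as (1 + x/M)**M by square-and-multiply."""
    base = 1.0 + x / M
    result = 1.0
    while M:                 # M must be a non-negative integer
        if M & 1:
            result *= base   # multiply in the current bit of M
        base *= base         # square for the next bit
        M >>= 1
    return result
```

For x = 1 and M = 1024 this gives about 2.7170, close to the true value e ≈ 2.71828; larger M shrinks the error further, matching the claim above.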
- Step S306 according to each exponential function value and the sum of all exponential function values, determine the normalized result of each reduced classification prediction value.
- in this way, the normalized result of each reduced classification prediction value, that is, the normalized result corresponding to each classification prediction value, is obtained as softmax(x_i) = exp(x_i / x_max) / Σ_j exp(x_j / x_max), where softmax(x_i) represents the normalized result corresponding to the classification prediction value x_i, x_max is the maximum among x_0, x_1, ..., x_i, ..., x_k, and exp(x_i / x_max) is the exponential function value of x_i / x_max.
- Step S307 Update the parameters of the classification model according to the normalized result of the classification prediction value.
- the parameters of the classification model can be updated according to the normalization result of the classification prediction value, so as to train the classification model.
- the classification result of the classification model can be determined according to the normalized result of the classification prediction value; the parameters of the classification model can be updated according to the classification result, thereby realizing the training of the classification model and improving the accuracy of the classification model.
- the normalized result of the classification prediction value may be used as the prediction probability value of the corresponding classification, and the classification result of the classification model is determined according to the prediction probability value of each classification.
- in this embodiment, the value of the function parameter n in the exponential function is set to the parameter threshold, converting the exponential operation into multiplication operations, and an approximate exponential operation is realized by multiplication based on secure multi-party computation; when the parameter threshold is large, the error is small and no data overflow occurs.
- in secure multi-party computation, division is usually calculated by the Newton approximation method.
- this method requires the initial values of its algorithm parameters to be set according to the magnitude ranges of the input dividend and divisor, and places certain restrictions on the data length.
- during model training, the range of intermediate results is difficult to predict.
- when such intermediate results are used as input data for division, it is difficult to set appropriate initial values for correct calculation, so the division operation easily overflows.
- the division operation based on secure multi-party computation involved in the present application is realized by long division based on secure multi-party computation, specifically through the following steps:
- Step S401: determine a first variable and a second variable according to the dividend and the divisor, where the number of significant digits of the second variable is greater than or equal to that of the first variable.
- Figure 5 shows an implementation of long division in plaintext.
- the processing shown in sequence number 2 needs to be performed before the iteration (processing flow corresponding to sequence numbers 3-12 shown in Figure 5 ) is performed.
- align the most significant bits of the dividend and the divisor so that the number of significant digits of the divisor is greater than or equal to that of the dividend after alignment.
- both the dividend and the divisor are ciphertext, and the most significant bits of the dividend and the divisor cannot be directly aligned.
- the second variable can be obtained by increasing the number of significant digits of the divisor; the dividend is directly used as the first variable.
- the second variable can be obtained by increasing the significand of the divisor so that the significand of the divisor is greater than or equal to the significand of the dividend.
- specifically, the divisor can be shifted left by the maximum possible number of significant digits of the dividend to obtain the second variable. In this way, the dividend and the divisor are aligned without determining their actual numbers of significant digits. If the difference between the numbers of significant digits of the dividend and the divisor is known, shifting the divisor left by that difference is enough to make the divisor's significant digits no fewer than the dividend's.
- the shift operations performed on the data are all shift operations based on secure multi-party computation, which can be specifically implemented using a Boolean circuit or a garbled circuit.
- the quotient of the dividend and the divisor can be initialized to 0.
- the dividend, the divisor and the quotient are all represented in binary.
- the following steps S402-S404 are performed iteratively, with the value of one bit of the quotient determined in each iteration and appended to the end of the quotient; the number of iterations is determined by the significant-digit length of the quotient. In this way, long division based on secure multi-party computation can be realized for a ciphertext dividend and divisor, effectively avoiding data overflow.
- Step S402 compare whether the first variable is greater than or equal to the second variable.
- the comparison operation for comparing the two values is a comparison operation based on secure multi-party computation, which can be specifically implemented using a Boolean circuit or a garbled circuit.
- Step S403 Determine the current bit value of the quotient according to the comparison result, and update the first variable and the second variable.
- in each iteration, the value of the current bit of the quotient is determined and the first and second variables are updated. Specifically: if the first variable is greater than or equal to the second variable, append 1 to the end of the quotient, subtract the second variable from the first variable, and shift the second variable right by one bit; if the first variable is smaller than the second variable, append 0 to the end of the quotient, shift the first variable left by one bit, and keep the second variable unchanged.
- the current bit value of the quotient is determined, which can be implemented in the following manner:
- the comparison result in this step is obtained by a comparison operation based on secure multi-party computation.
- the comparison result is ciphertext, and a plaintext comparison result cannot be obtained. Determining the current bit value of the quotient and updating the first and second variables without access to the plaintext comparison result is a technical difficulty.
- adding 1 or 0 to the end of the quotient is implemented in the following manner: adding 0 to the end of the quotient; and performing an XOR operation on the quotient after adding 0 to the end and the comparison result.
- in plaintext computation, 1 or 0 could simply be appended to the end of the quotient directly: 1 when the first variable is greater than or equal to the second variable, and 0 when the first variable is smaller than the second variable; with ciphertext, this cannot be done directly.
- if the first variable is greater than or equal to the second variable, the comparison result is 1 (ciphertext); if the first variable is smaller than the second variable, the comparison result is 0 (ciphertext).
- by appending 0 (ciphertext) to the end of the quotient and performing an XOR operation between the quotient with the appended 0 and the comparison result: when the current comparison result is 1 (ciphertext), the XOR-ed bit is 1, so the appended 0 (ciphertext) becomes 1 (ciphertext), that is, 1 is appended to the end of the quotient when the first variable is greater than or equal to the second variable; when the current comparison result is 0 (ciphertext), the XOR-ed bit is 0, so the 0 (ciphertext) appended to the end of the quotient remains 0 (ciphertext), that is, 0 is appended when the first variable is smaller than the second variable.
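The append-and-XOR trick above can be modeled on plaintext integers, with the comparison bit c standing in for the ciphertext comparison result:

```python
def append_quotient_bit(quotient, c):
    """Append one bit to the quotient: add 0 at the end, then XOR with c."""
    quotient = quotient << 1   # "add 0 to the end of the quotient"
    return quotient ^ c        # the new low bit becomes 1 iff c == 1
```

Because XOR is a native Boolean-circuit gate, this appends the correct bit without ever branching on, or revealing, the value of c.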
- updating the first variable according to the comparison result can be implemented in the following manner:
- a first update process is performed on the first variable to obtain a first update result; the first update process is the update applied to the first variable when the first variable is greater than or equal to the second variable. A second update process is performed on the first variable to obtain a second update result; the second update process is the update applied to the first variable when the first variable is smaller than the second variable. The first product of the comparison result and the first update result, and the second product of the NOT of the comparison result and the second update result, are then calculated; the sum of the first product and the second product is taken as the updated first variable.
- the first variable can be directly updated when the plaintext comparison result is unknown.
- the first update process is the update process of the first variable when the first variable is greater than or equal to the second variable
- the first update process is specifically: subtract the second variable from the first variable through a subtraction operation based on secure multi-party computation to obtain the first update result.
- the second update process is the update of the first variable when the first variable is smaller than the second variable; specifically, shift the first variable left by one bit through a shift operation based on secure multi-party computation to obtain the second update result.
- if the first variable is greater than or equal to the second variable, the comparison result is 1 (ciphertext) and the NOT of the comparison result is 0 (ciphertext); if the first variable is smaller than the second variable, the comparison result is 0 (ciphertext) and the NOT of the comparison result is 1 (ciphertext).
- by calculating the first product of the comparison result and the first update result, and the second product of the NOT of the comparison result and the second update result: when the first variable is greater than or equal to the second variable, the first product equals the first update result and the second product is 0 (ciphertext), so the sum of the two products, the updated first variable, equals the first update result; when the first variable is smaller than the second variable, the first product equals 0 (ciphertext) and the second product equals the second update result, so the updated first variable equals the second update result.
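The branch-free selection just described can likewise be modeled in plaintext: both candidate updates are computed, then blended with the comparison bit c so that neither branch is revealed (the function and variable names are illustrative):

```python
def update_first_variable(v1, v2, c):
    """Oblivious update of the first variable with comparison bit c."""
    first_update = v1 - v2     # candidate result if v1 >= v2
    second_update = v1 << 1    # candidate result if v1 < v2
    # c * first + (1 - c) * second selects without branching on c
    return c * first_update + (1 - c) * second_update
```

The second variable is updated by the same pattern, with the right-shifted value and the unchanged value as the two candidates.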
- updating the second variable according to the comparison result can be implemented in the following manner:
- a third update process is performed on the second variable, and a third update result is obtained.
- the third update process is the update process of the second variable if the first variable is greater than or equal to the second variable.
- the third product of the comparison result and the third update result, and the fourth product of the NOT of the comparison result and the second variable, are calculated; the sum of the third product and the fourth product is taken as the updated second variable. In this way, based on the ciphertext comparison result, the second variable can be updated directly without access to the plaintext comparison result.
- the third update process is the update of the second variable when the first variable is greater than or equal to the second variable; specifically, shift the second variable right by one bit through a shift operation based on secure multi-party computation to obtain the third update result. If the first variable is smaller than the second variable, the second variable remains unchanged.
- if the first variable is greater than or equal to the second variable, the comparison result is 1 (ciphertext) and the NOT of the comparison result is 0 (ciphertext); if the first variable is smaller than the second variable, the comparison result is 0 (ciphertext) and the NOT of the comparison result is 1 (ciphertext).
- by calculating the third product of the comparison result and the third update result, and the fourth product of the NOT of the comparison result and the second variable: when the first variable is greater than or equal to the second variable, the third product equals the third update result and the fourth product is 0 (ciphertext), so the sum of the two products, the updated second variable, equals the third update result; when the first variable is smaller than the second variable, the third product equals 0 (ciphertext) and the fourth product equals the second variable, so the updated second variable remains the second variable, unchanged.
- Step S404: determine whether the number of bits of the quotient is equal to the preset significant-bit length.
- if the number of bits of the quotient is not equal to the preset significant-bit length, execution returns to step S402, and the next iteration is performed on the updated first variable and second variable.
- if the number of bits of the quotient is equal to the preset significant-bit length, step S405 is executed.
- Step S405: take the currently obtained quotient as the quotient of the dividend and the divisor.
- each classification prediction value is divided by the maximum value among all classification prediction values; using the method shown in FIG. 4, with each classification prediction value as the dividend and the maximum value as the divisor, the quotient of each classification prediction value and the maximum value is calculated.
- the first variable corresponding to the classification prediction value and the second variable corresponding to the maximum value are iteratively processed through the above steps S402-S404, determining the value of one bit of the quotient in each iteration and appending it to the end of the quotient, until the number of iterations is greater than or equal to the significant-bit length of the quotient, yielding the quotient of the classification prediction value and the maximum value.
- long division based on secure multi-party computation can thus be performed on the ciphertext classification prediction values and the maximum value among them, which effectively avoids the data overflow caused by a direct division operation based on secure multi-party computation.
- by adopting long division based on secure multi-party computation and implementing the calculation of the quotient of each classification prediction value and the maximum value with a Boolean circuit or a garbled circuit, no data overflow occurs and high accuracy is achieved.
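In the clear, the per-bit loop of steps S402-S404 is ordinary restoring long division. A minimal plaintext sketch follows; the helper name and the fixed iteration count `n_bits` are our assumptions, and the MPC version runs the same loop inside a Boolean or garbled circuit, replacing the Python branch with the oblivious product-sum updates described above.

```python
def long_division_bits(dividend, divisor, n_bits):
    remainder = dividend                # first variable
    shifted = divisor << (n_bits - 1)   # second variable, aligned left
    quotient = 0
    for _ in range(n_bits):             # one quotient bit per iteration
        quotient <<= 1                  # append a 0 at the end of the quotient
        bit = 1 if remainder >= shifted else 0
        quotient ^= bit                 # XOR the comparison result into the last bit
        remainder -= bit * shifted      # subtract only when the bit is 1 (branch-free form)
        shifted >>= 1                   # shift the divisor right by one bit
    return quotient
```

After `n_bits` iterations the quotient equals the integer quotient of dividend and divisor whenever that value fits in `n_bits` bits.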
- FIG. 6 is a schematic diagram of a device for adjusting model parameters provided by the third embodiment of the present application.
- the device for adjusting model parameters provided in the embodiments of the present application may execute the processing flow provided in the method embodiments for adjusting model parameters.
- the model parameter adjustment device 60 includes: a classification processing module 601 , a reduction processing module 602 , a normalization processing module 603 and a parameter updating module 604 .
- the classification processing module 601 is configured to perform classification processing on the input data by using the classification model trained based on secure multi-party computation to obtain the classification prediction value of the classification model.
- the reduction processing module 602 is used to perform reduction processing on the classification prediction value.
- the normalization processing module 603 is configured to perform normalization processing on the reduced classification predicted value to obtain a normalized result of the classification predicted value.
- the parameter updating module 604 is configured to update the parameters of the classification model according to the normalized result of the classification prediction value.
- the device provided in this embodiment of the present application may be specifically used to execute the method embodiment provided in the foregoing first embodiment, and the specific functions will not be repeated here.
- the classification model trained based on secure multi-party computation is used to classify the input data and obtain the classification prediction values of the classification model; the classification prediction values are reduced, and the reduced classification prediction values are normalized to obtain the normalization results of the classification prediction values; the parameters of the classification model are then updated according to the normalization results. Reducing the classification prediction values shrinks the input values of the normalization method, which avoids data overflow during normalization and thereby improves the training accuracy of the classification model.
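As a plaintext illustration of this overall flow (the helper name `reduced_softmax` is hypothetical, and the real computation runs on secret-shared values), dividing every prediction by the maximum keeps the inputs of the exponential small, so the normalization step cannot overflow:

```python
import math

def reduced_softmax(preds):
    m = max(preds)                      # maximum classification prediction value
    reduced = [p / m for p in preds]    # reduction: divide each value by the maximum
    exps = [math.exp(r) for r in reduced]
    total = sum(exps)
    return [e / total for e in exps]    # normalization result for each prediction
```

For non-negative predictions every reduced value lies in [0, 1], so each exponential is bounded by e.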
- FIG. 7 is a schematic diagram of a device for adjusting model parameters provided by the fourth embodiment of the present application.
- the model parameter adjustment device 70 includes: a classification processing module 701 , a reduction processing module 702 , a normalization processing module 703 and a parameter updating module 704.
- the classification processing module 701 is configured to perform classification processing on the input data by using the classification model trained based on secure multi-party computation, and obtain the classification prediction value of the classification model.
- the reduction processing module 702 is used to perform reduction processing on the classification prediction value.
- the normalization processing module 703 is configured to perform normalization processing on the reduced classification predicted value to obtain a normalized result of the classification predicted value.
- the parameter updating module 704 is configured to update the parameters of the classification model according to the normalized result of the classification prediction value.
- the reduction processing module 702 includes:
- the maximum value determination unit 7021 is used to determine the maximum value among the classification prediction values.
- the division operation unit 7022 is used to divide each classification prediction value by the maximum value.
- the division unit 7022 includes:
- the aligning subunit is used to take each classification prediction value and the maximum value as the first variable and the second variable respectively, where the number of significant bits of the second variable is greater than or equal to that of the first variable.
- the iterative subunit is further used for:
- a third update process is performed on the second variable, and a third update result is obtained.
- the third update process is the update process of the second variable if the first variable is greater than or equal to the second variable.
- calculating the third product of the comparison result and the third update result, and the fourth product of the NOT of the comparison result and the second variable; and taking the sum of the third product and the fourth product as the updated second variable.
- the normalization processing module 703 includes:
- a parameter threshold value obtaining unit 7031 configured to obtain a parameter threshold value.
- the exponential operation unit 7032 is configured to set the value of the function parameter n in the exponential function to the parameter threshold, and calculate the exponential function value of each reduced classification prediction value.
- the normalization unit 7033 is configured to determine the normalization result of each reduced classification prediction value according to each exponential function value and the sum of all exponential function values.
- the exponential operation unit 7032 is further configured to:
- the exponential function value of each reduced classification prediction value is calculated by the following formula, the limit definition of the exponential function with the function parameter n set to the parameter threshold M:
- e^x ≈ (1 + x/M)^M
- where e is the natural constant, x is the reduced classification prediction value, e^x is the exponential function value of the reduced classification prediction value, and M is the parameter threshold, a constant.
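This substitution can be sketched as follows. The sketch is our reading of the approximation and assumes the threshold is a power of two, M = 2**k, so that raising to the M-th power reduces to k squarings (i.e. k secure multiplications in the MPC setting); the exact choice of M is not specified here.

```python
def approx_exp(x, k):
    m = 2 ** k           # parameter threshold M, assumed to be a power of two
    y = 1.0 + x / m      # (1 + x/M): the exponential becomes a multiplication input
    for _ in range(k):   # repeated squaring computes (1 + x/M) ** M with k multiplies
        y = y * y
    return y
```

A larger M yields a smaller approximation error while keeping every intermediate value close to 1, so no data overflow occurs.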
- the device provided in this embodiment of the present application may be specifically used to execute the method embodiment provided by the foregoing second embodiment, and the specific functions will not be repeated here.
- the value of the function parameter n in the exponential function is set to the parameter threshold, converting the exponential operation into a multiplication operation, so that an approximate exponential operation is realized through multiplications based on secure multi-party computation.
- when the parameter threshold is large, the error is small and no data overflow occurs.
- a Boolean circuit or a garbled circuit is used to calculate the quotient of each classification prediction value and the maximum value, without data overflow and with high accuracy.
- the present application further provides an electronic device and a readable storage medium.
- the present application further provides a computer program product, comprising a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program to cause the electronic device to execute the solution provided by any of the foregoing embodiments.
- FIG. 8 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present application.
- Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
- Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular phones, smart phones, wearable devices, and other similar computing devices.
- the components shown herein, their connections and relationships, and their functions are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
- the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or a computer program loaded from a storage unit 808 into a random access memory (RAM) 803. The RAM 803 can also store various programs and data required for the operation of the device 800.
- the computing unit 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804.
- An input/output (I/O) interface 805 is also connected to bus 804 .
- Various components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard or a mouse; an output unit 807, such as various types of displays and speakers; a storage unit 808, such as a magnetic disk or an optical disk; and a communication unit 809, such as a network card, a modem, or a wireless communication transceiver.
- the communication unit 809 allows the device 800 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
- Computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various specialized artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, and the like.
- the computing unit 801 performs the various methods and processes described above, such as the method of model parameter adjustment.
- the method of model parameter adjustment may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 808 .
- part or all of the computer program may be loaded and/or installed on device 800 via ROM 802 and/or communication unit 809.
- the computer program When the computer program is loaded into RAM 803 and executed by computing unit 801, one or more steps of the method of model parameter adjustment described above may be performed.
- the computing unit 801 may be configured to perform the method of model parameter adjustment by any other suitable means (eg, by means of firmware).
- Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
- These various embodiments may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and which can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
- Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that when executed by the processor or controller, the functions/acts specified in the flowcharts and/or block diagrams are implemented.
- the program code may execute entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on the remote machine or server.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
- to provide interaction with a user, the systems and techniques described herein may be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and a pointing device (e.g., a mouse or trackball) through which the user can provide input to the computer.
- Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user can be received in any form (including acoustic, voice, or tactile input).
- the systems and techniques described herein may be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer having a graphical user interface or web browser through which a user may interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
- the components of the system may be interconnected by any form or medium of digital data communication (eg, a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
- a computer system can include clients and servers. Clients and servers are generally remote from each other and usually interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
- the server can be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the defects of difficult management and weak business scalability found in traditional physical host and VPS ("Virtual Private Server") services.
- the server can also be a server of a distributed system, or a server combined with a blockchain.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Mathematical Physics (AREA)
- Medical Informatics (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Bioethics (AREA)
- Computational Linguistics (AREA)
- Fuzzy Systems (AREA)
- Probability & Statistics with Applications (AREA)
- Quality & Reliability (AREA)
- Computer Security & Cryptography (AREA)
- Computer Hardware Design (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Claims (23)
- A method for adjusting model parameters, comprising: classifying input data using a classification model trained based on secure multi-party computation to obtain classification prediction values of the classification model; performing reduction processing on the classification prediction values; normalizing the reduced classification prediction values to obtain normalization results of the classification prediction values; and updating parameters of the classification model according to the normalization results of the classification prediction values.
- The method according to claim 1, wherein performing reduction processing on the classification prediction values comprises: determining the maximum value among the classification prediction values; and dividing each classification prediction value by the maximum value.
- The method according to claim 2, wherein dividing each classification prediction value by the maximum value comprises: taking each classification prediction value and the maximum value as a first variable and a second variable respectively, the number of significant bits of the second variable being greater than or equal to that of the first variable; comparing whether the first variable is greater than or equal to the second variable; determining the value of the current bit of the quotient according to the comparison result, and updating the first variable and the second variable; and comparing the updated first variable and second variable and determining the value of the next bit of the quotient according to the comparison result, until the number of bits of the quotient reaches a preset significant-bit length, the obtained quotient being the result of dividing each classification prediction value by the maximum value.
- The method according to claim 3, wherein determining the value of the current bit of the quotient according to the comparison result comprises: appending 1 or 0 to the end of the quotient according to the comparison result, wherein 1 is appended to the end of the quotient if the first variable is greater than or equal to the second variable, and 0 is appended if the first variable is less than the second variable.
- The method according to claim 4, wherein appending 1 or 0 to the end of the quotient according to the comparison result comprises: appending 0 to the end of the quotient; and performing an XOR operation between the quotient with the appended 0 and the comparison result.
- The method according to claim 3, wherein updating the first variable according to the comparison result comprises: performing a first update process on the first variable to obtain a first update result, the first update process being the update of the first variable for the case where the first variable is greater than or equal to the second variable; performing a second update process on the first variable to obtain a second update result, the second update process being the update of the first variable for the case where the first variable is less than the second variable; calculating a first product of the comparison result and the first update result, and a second product of the NOT of the comparison result and the second update result; and taking the sum of the first product and the second product as the updated first variable.
- The method according to claim 3, wherein updating the second variable according to the comparison result comprises: performing a third update process on the second variable to obtain a third update result, the third update process being the update of the second variable for the case where the first variable is greater than or equal to the second variable; calculating a third product of the comparison result and the third update result, and a fourth product of the NOT of the comparison result and the second variable; and taking the sum of the third product and the fourth product as the updated second variable.
- The method according to claim 1, wherein normalizing the reduced classification prediction values to obtain the normalization results comprises: obtaining a parameter threshold; setting the value of the function parameter n in the exponential function to the parameter threshold and calculating the exponential function value of each reduced classification prediction value; and determining the normalization result of each reduced classification prediction value according to each exponential function value and the sum of all exponential function values.
- The method according to any one of claims 1-9, wherein updating the parameters of the classification model according to the normalization results comprises: determining the classification result of the classification model according to the normalization results of the classification prediction values; and updating the parameters of the classification model according to the classification result.
- A device for adjusting model parameters, comprising: a classification processing module configured to classify input data using a classification model trained based on secure multi-party computation to obtain classification prediction values of the classification model; a reduction processing module configured to perform reduction processing on the classification prediction values; a normalization processing module configured to normalize the reduced classification prediction values to obtain normalization results of the classification prediction values; and a parameter updating module configured to update parameters of the classification model according to the normalization results of the classification prediction values.
- The device according to claim 11, wherein the reduction processing module comprises: a maximum value determination unit configured to determine the maximum value among the classification prediction values; and a division operation unit configured to divide each classification prediction value by the maximum value.
- The device according to claim 12, wherein the division operation unit comprises: an aligning subunit configured to take each classification prediction value and the maximum value as a first variable and a second variable respectively, the number of significant bits of the second variable being greater than or equal to that of the first variable; and an iterative subunit configured to: compare whether the first variable is greater than or equal to the second variable; determine the value of the current bit of the quotient according to the comparison result, and update the first variable and the second variable; and compare the updated first variable and second variable and determine the value of the next bit of the quotient according to the comparison result, until the number of bits of the quotient reaches a preset significant-bit length, the obtained quotient being the result of dividing each classification prediction value by the maximum value.
- The device according to claim 13, wherein the iterative subunit is further configured to: append 1 or 0 to the end of the quotient according to the comparison result, wherein 1 is appended to the end of the quotient if the first variable is greater than or equal to the second variable, and 0 is appended if the first variable is less than the second variable.
- The device according to claim 14, wherein the iterative subunit is further configured to: append 0 to the end of the quotient; and perform an XOR operation between the quotient with the appended 0 and the comparison result.
- The device according to claim 13, wherein the iterative subunit is further configured to: perform a first update process on the first variable to obtain a first update result, the first update process being the update of the first variable for the case where the first variable is greater than or equal to the second variable; perform a second update process on the first variable to obtain a second update result, the second update process being the update of the first variable for the case where the first variable is less than the second variable; calculate a first product of the comparison result and the first update result, and a second product of the NOT of the comparison result and the second update result; and take the sum of the first product and the second product as the updated first variable.
- The device according to claim 13, wherein the iterative subunit is further configured to: perform a third update process on the second variable to obtain a third update result, the third update process being the update of the second variable for the case where the first variable is greater than or equal to the second variable; calculate a third product of the comparison result and the third update result, and a fourth product of the NOT of the comparison result and the second variable; and take the sum of the third product and the fourth product as the updated second variable.
- The device according to claim 11, wherein the normalization processing module comprises: a parameter threshold obtaining unit configured to obtain a parameter threshold; an exponential operation unit configured to set the value of the function parameter n in the exponential function to the parameter threshold and calculate the exponential function value of each reduced classification prediction value; and a normalization unit configured to determine the normalization result of each reduced classification prediction value according to each exponential function value and the sum of all exponential function values.
- The device according to any one of claims 11-19, wherein the parameter updating module comprises: a classification result determination unit configured to determine the classification result of the classification model according to the normalization results of the classification prediction values; and a parameter updating unit configured to update the parameters of the classification model according to the classification result.
- An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-10.
- A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer to perform the method according to any one of claims 1-10.
- A computer program product, comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-10.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21920540.8A EP4195084A1 (en) | 2021-01-22 | 2021-07-12 | Method and device for adjusting model parameters, and storage medium and program product |
KR1020237009058A KR20230044318A (ko) | 2021-01-22 | 2021-07-12 | 모델 파라미터 조정 방법, 기기, 저장매체 및 프로그램 제품 |
US18/181,032 US20230206133A1 (en) | 2021-01-22 | 2023-03-09 | Model parameter adjusting method and device, storage medium and program product |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110088699.1A CN112818387A (zh) | 2021-01-22 | 2021-01-22 | 模型参数调整的方法、设备、存储介质及程序产品 |
CN202110088699.1 | 2021-01-22 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/181,032 Continuation US20230206133A1 (en) | 2021-01-22 | 2023-03-09 | Model parameter adjusting method and device, storage medium and program product |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022156159A1 true WO2022156159A1 (zh) | 2022-07-28 |
Family
ID=75858832
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/105839 WO2022156159A1 (zh) | 2021-01-22 | 2021-07-12 | 模型参数调整的方法、设备、存储介质及程序产品 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230206133A1 (zh) |
EP (1) | EP4195084A1 (zh) |
KR (1) | KR20230044318A (zh) |
CN (1) | CN112818387A (zh) |
WO (1) | WO2022156159A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117854156A (zh) * | 2024-03-07 | 2024-04-09 | 腾讯科技(深圳)有限公司 | 一种特征提取模型的训练方法和相关装置 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112818387A (zh) * | 2021-01-22 | 2021-05-18 | 百度在线网络技术(北京)有限公司 | 模型参数调整的方法、设备、存储介质及程序产品 |
CN114972928B (zh) * | 2022-07-26 | 2022-11-11 | 深圳比特微电子科技有限公司 | 一种图像识别模型训练方法及装置 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106055891A (zh) * | 2016-05-27 | 2016-10-26 | 大连楼兰科技股份有限公司 | 基于人工智能Softmax回归方法建立分车型远程定损系统及方法 |
CN110890985A (zh) * | 2019-11-27 | 2020-03-17 | 北京邮电大学 | 虚拟网络映射方法及其模型训练方法、装置 |
CN111798047A (zh) * | 2020-06-30 | 2020-10-20 | 平安普惠企业管理有限公司 | 风控预测方法、装置、电子设备及存储介质 |
CN112818387A (zh) * | 2021-01-22 | 2021-05-18 | 百度在线网络技术(北京)有限公司 | 模型参数调整的方法、设备、存储介质及程序产品 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109784490B (zh) * | 2019-02-02 | 2020-07-03 | 北京地平线机器人技术研发有限公司 | 神经网络的训练方法、装置和电子设备 |
-
2021
- 2021-01-22 CN CN202110088699.1A patent/CN112818387A/zh active Pending
- 2021-07-12 EP EP21920540.8A patent/EP4195084A1/en active Pending
- 2021-07-12 WO PCT/CN2021/105839 patent/WO2022156159A1/zh active Application Filing
- 2021-07-12 KR KR1020237009058A patent/KR20230044318A/ko unknown
-
2023
- 2023-03-09 US US18/181,032 patent/US20230206133A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106055891A (zh) * | 2016-05-27 | 2016-10-26 | 大连楼兰科技股份有限公司 | 基于人工智能Softmax回归方法建立分车型远程定损系统及方法 |
CN110890985A (zh) * | 2019-11-27 | 2020-03-17 | 北京邮电大学 | 虚拟网络映射方法及其模型训练方法、装置 |
CN111798047A (zh) * | 2020-06-30 | 2020-10-20 | 平安普惠企业管理有限公司 | 风控预测方法、装置、电子设备及存储介质 |
CN112818387A (zh) * | 2021-01-22 | 2021-05-18 | 百度在线网络技术(北京)有限公司 | 模型参数调整的方法、设备、存储介质及程序产品 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117854156A (zh) * | 2024-03-07 | 2024-04-09 | 腾讯科技(深圳)有限公司 | 一种特征提取模型的训练方法和相关装置 |
CN117854156B (zh) * | 2024-03-07 | 2024-05-07 | 腾讯科技(深圳)有限公司 | 一种特征提取模型的训练方法和相关装置 |
Also Published As
Publication number | Publication date |
---|---|
US20230206133A1 (en) | 2023-06-29 |
KR20230044318A (ko) | 2023-04-03 |
CN112818387A (zh) | 2021-05-18 |
EP4195084A1 (en) | 2023-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022156159A1 (zh) | 模型参数调整的方法、设备、存储介质及程序产品 | |
WO2022126993A1 (zh) | 多方安全计算方法、装置、电子设备和存储介质 | |
US20190156213A1 (en) | Gradient compressing apparatus, gradient compressing method, and non-transitory computer readable medium | |
CN114374440B (zh) | 量子信道经典容量的估计方法及装置、电子设备和介质 | |
CN112560996B (zh) | 用户画像识别模型训练方法、设备、可读存储介质及产品 | |
WO2024001023A1 (zh) | 隐私数据的安全处理方法和装置 | |
US20220398834A1 (en) | Method and apparatus for transfer learning | |
WO2023020456A1 (zh) | 网络模型的量化方法、装置、设备和存储介质 | |
CN112615852A (zh) | 数据的处理方法、相关装置及计算机程序产品 | |
CN113098624B (zh) | 量子态测量方法、装置、设备、存储介质及系统 | |
CN114528916A (zh) | 样本聚类处理方法、装置、设备及存储介质 | |
US20230141932A1 (en) | Method and apparatus for question answering based on table, and electronic device | |
US20220300848A1 (en) | Function Processing Method and Device and Electronic Apparatus | |
WO2023029464A1 (zh) | 数据处理装置、方法、芯片、计算机设备及存储介质 | |
CN114238611B (zh) | 用于输出信息的方法、装置、设备以及存储介质 | |
CN114048863A (zh) | 数据处理方法、装置、电子设备以及存储介质 | |
CN111382233A (zh) | 一种相似文本检测方法、装置、电子设备及存储介质 | |
CN111950689A (zh) | 神经网络的训练方法及装置 | |
CN113824546B (zh) | 用于生成信息的方法和装置 | |
CN114124360B (zh) | 加密装置及方法、设备和介质 | |
CN113362428B (zh) | 用于配置颜色的方法、装置、设备、介质和产品 | |
US20230367548A1 (en) | Computing method | |
WO2020024243A1 (zh) | 增量核密度估计器的生成方法、装置和计算机可读存储介质 | |
CN117540345A (zh) | 一种全局回归模型训练方法、装置及设备 | |
CN117792643A (zh) | 点预存表的生成方法、解密方法及其装置、设备和介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21920540 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2021920540 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2021920540 Country of ref document: EP Effective date: 20230309 |
|
ENP | Entry into the national phase |
Ref document number: 20237009058 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |