CN112394974B - Annotation generation method and device for code change, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112394974B
CN112394974B (application CN202011322526.3A)
Authority
CN
China
Prior art keywords: annotation, model parameters, model, loss, local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011322526.3A
Other languages
Chinese (zh)
Other versions
CN112394974A (en)
Inventor
王健宗
李泽远
何安珣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202011322526.3A
Publication of CN112394974A
Priority to PCT/CN2021/083079 (published as WO2021208701A1)
Application granted
Publication of CN112394974B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/70: Software maintenance or management
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/10: Protecting distributed programs or content, e.g. vending or licensing of copyrighted material; Digital rights management [DRM]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Technology Law (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Storage Device Security (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to artificial intelligence technology and discloses an annotation generation method for code changes, which comprises the following steps: a server randomly generates model parameters and a loss lower limit, and sends the model parameters and the loss lower limit to a plurality of clients; each client trains a pre-constructed annotation generation model according to the model parameters and the loss lower limit to obtain a loss value and local model parameters, and encrypts and transmits the loss value and the local model parameters to the server; the server updates the model parameters and the loss lower limit and sends them back to the clients; each client returns to the training step until a preset termination condition is met, obtaining a trained annotation generation model; an annotation is then generated for the changed code data using the annotation generation model. The invention also provides an annotation generation device for code changes, an electronic device and a computer-readable storage medium. In addition, the invention relates to blockchain technology, and the changed code data may be stored in blockchain nodes. The method and the device can improve the accuracy of the annotations generated by the annotation generation model and protect code privacy.

Description

Annotation generation method and device for code change, electronic equipment and storage medium
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a method and apparatus for generating comments on a code change, an electronic device, and a computer readable storage medium.
Background
Code changes refer to the addition, deletion and replacement of source code performed by programmers during source code maintenance. A code change annotation is a natural language description of a programmer's code change behavior and is an important resource for understanding software evolution.
At present, annotations are mainly generated automatically by deep learning models, and training such models usually requires a large amount of data. However, as privacy awareness grows, enterprises and organizations pay increasing attention to protecting code privacy, so the amount of available training data is often too small and the accuracy of the generated annotations is correspondingly low.
Disclosure of Invention
The invention provides a code-changed annotation generation method, a code-changed annotation generation device, electronic equipment and a computer-readable storage medium, and aims to improve the accuracy of generating annotations by an annotation generation model and protect code privacy.
In order to achieve the above object, the present invention provides an annotation generation method for code modification applied to a client, including:
receiving model parameters and a loss lower limit sent by a server;
obtaining a difference file and a standard annotation generated according to the changed code to obtain a local training data set and a tag set;
Training a pre-constructed annotation generation model by using the local training data set according to the model parameters to obtain an annotation result output by the annotation generation model, obtaining a loss value according to the annotation result and the tag set, and acquiring the model parameters of the annotation generation model as local model parameters and transmitting the loss value and the local model parameters to the server in an encrypted manner when the loss value is smaller than the loss lower limit;
receiving updated model parameters and loss lower limits transmitted by the server side, and decrypting the updated model parameters and the loss lower limits;
Returning to the step of training the pre-constructed annotation generation model by using the local training data set according to the model parameters and the loss lower limit until a preset termination condition is met, so as to obtain a trained annotation generation model;
And generating an annotation according to the change code data input by the user by using the trained annotation generation model.
Optionally, training a pre-constructed annotation generation model by using the local training data set according to the model parameters to obtain an annotation result output by the annotation generation model, obtaining a loss value according to the annotation result and the tag set, and when the loss value is smaller than the loss lower limit, obtaining the model parameters of the annotation generation model as local model parameters and transmitting the loss value and the local model parameters to the server in an encrypted manner, where the method includes:
initializing the pre-constructed annotation generation model by using the model parameters;
Performing iterative training on the initialized annotation generation model by using the local training data set, and calculating an initial loss value according to an annotation result output by the annotation generation model and the tag set;
comparing the initial loss value to the lower loss limit;
when the initial loss value is larger than the loss lower limit, adjusting parameters of the annotation generation model, and returning to the step of performing iterative training on the initialized annotation generation model by using the local training data set;
And when the initial loss value is smaller than or equal to the loss lower limit, encrypting the initial loss value and the parameters of the annotation generation model and transmitting them to the server.
Optionally, the performing iterative training on the initialized annotation generation model by using the local training data set, and calculating an initial loss value according to an annotation result output by the annotation generation model and the tag set, including:
Step A: inputting the local training data set into the initialized annotation generation model to obtain an annotation result output by the initialized annotation generation model;
Step B: according to the tag set, calculating a training loss value of the annotation result by using a preset objective function;
Repeating the step A and the step B until the training loss value converges to obtain an initial loss value.
Optionally, the calculating the training loss value of the annotation result using a preset objective function includes:
Obtaining the annotation result, wherein the annotation result comprises a target sequence and a prediction probability of each word in the target sequence;
calculating a training loss value of the annotation result using the following objective function:
L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{l}\log p_{ij}
where L is the training loss value, N is the total number of samples in the local training data set, l is the length of the target sequence in the annotation result, and p_{ij} is the predicted probability of the j-th word in the target sequence corresponding to the i-th sample in the local training data set.
In order to achieve the above object, the present invention provides an annotation generation method for code modification applied to a server, including:
Randomly generating model parameters and loss lower limits, and transmitting the model parameters and the loss lower limits to a plurality of clients;
Receiving encrypted local model parameters and loss values sent by each client;
Calculating a weight vector according to the local model parameters of all the clients, calculating updated model parameters according to the weight vector and the local model parameters of all the clients, and calculating an updated loss lower limit according to the loss values of all the clients;
And sending the updated model parameters and the updated loss lower limit to each client.
Optionally, the calculating a weight vector according to the local model parameters of all the clients, and calculating updated model parameters according to the weight vector and the local model parameters of all the clients includes:
calculating the weight vector and updated model parameters by using the following functions:
\alpha_i = \frac{[W_i][W_i]^T}{\sum_{k=1}^{n}[W_k][W_k]^T}, \qquad \bar{w}_j = \sum_{i=1}^{n}\alpha_i w_{i,j}
where \alpha_i is the weight vector of the i-th client, [W_i] is the parameter matrix consisting of the i-th client's local model parameters, [W_i]^T is the transpose of that parameter matrix, \bar{w}_j is the j-th parameter in the updated model parameters, n is the total number of clients, and w_{i,j} is the j-th parameter of the local model parameters of the i-th client.
In order to solve the above-mentioned problems, the present invention also provides an annotation generation apparatus applied to code modification of a client, the apparatus comprising:
the parameter receiving module is used for receiving the model parameters and the loss lower limit sent by the server;
the model training module is used for acquiring a difference file and a standard annotation generated according to the changed code to obtain a local training data set and a tag set;
Training a pre-constructed annotation generation model by using the local training data set according to the model parameters to obtain an annotation result output by the annotation generation model, obtaining a loss value according to the annotation result and the tag set, and acquiring the model parameters of the annotation generation model as local model parameters and transmitting the loss value and the local model parameters to the server in an encrypted manner when the loss value is smaller than the loss lower limit;
The model updating module is used for receiving the updated model parameters and the loss lower limit transmitted by the server side and decrypting the updated model parameters and the loss lower limit;
And the model generation module is used for obtaining a trained annotation generation model when the training of the pre-constructed annotation generation model meets the preset termination condition, and generating an annotation according to the change code data input by the user by using the trained annotation generation model.
In order to solve the above-mentioned problem, the present invention also provides an annotation generating device applied to code modification of a server, the device comprising:
the parameter generation module is used for randomly generating model parameters and loss lower limits and sending the model parameters and the loss lower limits to a plurality of clients;
The data receiving module is used for receiving the encrypted local model parameters and the loss values sent by each client;
The parameter updating module is used for calculating weight vectors according to the local model parameters of all the clients, calculating updated model parameters according to the weight vectors and the local model parameters of all the clients, and calculating updated loss lower limits according to the loss values of all the clients;
And the data sending module is used for sending the updated model parameters and the updated loss lower limit to each client.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
a memory storing at least one computer program; and
And a processor executing the computer program stored in the memory to implement the annotation generation method of the code change.
In order to solve the above-mentioned problems, the present invention also provides a computer-readable storage medium including a storage data area storing created data and a storage program area storing a computer program; wherein the computer program, when executed by a processor, implements the annotation generation method for code modification described above.
According to the embodiment of the invention, the annotation generation model is trained through a plurality of clients, each client uses a local data set for training, the local data set is obtained according to the code change file of each client, the training sample of the model is enlarged, and the accuracy of the annotation generation model in generating the annotation when the code is changed is improved; meanwhile, each client does not need to exchange the respective original code data, only needs to transmit the model parameters to the server, and calculates and updates the model parameters by using the server, so that the privacy and the safety of the original code data can be protected. Therefore, the annotation generation method, the annotation generation device and the computer readable storage medium for the code change can improve the annotation generation accuracy of the code change and protect the privacy of the code.
Drawings
FIG. 1 is a flowchart illustrating a method for generating annotations of code changes applied to a client according to a first embodiment of the present invention;
Fig. 2 is a flowchart illustrating an annotation generation method applied to code modification of a server according to a second embodiment of the present invention;
FIG. 3 is a schematic block diagram of an annotation generating device for code modification according to a third embodiment of the present invention;
Fig. 4 is a schematic diagram of an internal structure of an electronic device implementing an annotation generation method for code modification according to a fourth embodiment of the present invention;
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the application provides a code change annotation generation method. The execution subject of the annotation generation method of code modification includes, but is not limited to, at least one of a server, a terminal, and the like, which can be configured to execute the method provided by the embodiment of the application. In other words, the annotation generation method of the code change may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The service end includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
The annotation generation method for code changes adopts a horizontal (transverse) federated learning architecture comprising a server side and a plurality of clients. The server is responsible for initializing the federated learning, performing secure aggregation of the encrypted parameters sent by the clients, computing the related parameters, and sending the computation results, including the model parameters and the loss lower limit, to the clients. Each client trains the model on its local database according to the model parameters and the loss lower limit sent by the server, and sends the training result back to the server for the next round of federated learning.
Referring to fig. 1, a flowchart of an annotation generation method for code modification according to a first embodiment of the present invention is shown. Preferably, the annotation generation method for code modification provided in the first embodiment of the present invention is applied to a client, and includes:
s1, receiving model parameters and a loss lower limit sent by a server.
The model parameters in the embodiment of the invention are the parameters of the annotation generation model. The loss lower limit is the criterion for ending training when the annotation generation model is trained.
S2, obtaining a difference file and standard annotation generated according to the changed code to obtain a local training data set and a tag set.
The difference file (Diff file) in the embodiment of the present invention is a data file describing the condition of code change after the code change, and may be obtained from the code change system. The standard annotation is generated according to the difference file and can be used as a real label for calculating the loss of the annotation generation model.
The tag set includes the standard annotations, which correspond to the difference file. When the annotation generating model is trained, the tag set can be used as a standard data set, the annotation generating model can calculate the probability of each word in the output annotation according to the tag set, and the probability is higher when the similarity between the word in the output annotation and the tag in the tag set is higher.
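As an illustrative sketch of how a client might assemble the local training data set and tag set described above, the following Python snippet pairs each difference file with its standard annotation; the directory layout, file naming and helper name are assumptions made only for illustration:

    import os

    def load_local_dataset(diff_dir, annotation_dir):
        """Pair each diff file with its standard annotation.

        Assumes, for illustration, that a diff stored as <name>.diff has its
        human-written standard annotation stored as <name>.txt.
        """
        training_set, tag_set = [], []
        for file_name in sorted(os.listdir(diff_dir)):
            if not file_name.endswith(".diff"):
                continue
            base = file_name[:-len(".diff")]
            with open(os.path.join(diff_dir, file_name), encoding="utf-8") as f:
                diff_text = f.read()
            with open(os.path.join(annotation_dir, base + ".txt"), encoding="utf-8") as f:
                standard_annotation = f.read().strip()
            training_set.append(diff_text)       # local training data set
            tag_set.append(standard_annotation)  # tag set (ground-truth labels)
        return training_set, tag_set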
And S3, training a pre-constructed annotation generation model by using the local training data set according to the model parameters to obtain an annotation result output by the annotation generation model, obtaining a loss value according to the annotation result and the tag set, and acquiring the model parameters of the annotation generation model as local model parameters and transmitting the loss value and the local model parameters to the server in an encrypted manner when the loss value is smaller than the loss lower limit.
The annotation generation model according to the embodiment of the invention is a language model based on a Recurrent Neural Network (RNN), and may be a seq2seq model. The annotation generation model may predict the input sequence based on a machine learning algorithm, such as predicting the data at time t+1 after inputting the data at time t, and calculate its probability using an activation function.
In detail, step S3 includes the following sub-steps (a code sketch of this flow is given after the list):
Step a, initializing the pre-constructed annotation generation model by using the model parameters;
Step b, performing iterative training on the initialized annotation generation model by using the local training data set, and calculating an initial loss value according to an annotation result output by the annotation generation model and the tag set;
Step c, comparing the initial loss value with the loss lower limit;
Step d: when the initial loss value is greater than the loss lower limit, adjusting the parameters of the annotation generation model and returning to step b;
Step e: when the initial loss value is smaller than or equal to the loss lower limit, encrypting the initial loss value and the parameters of the annotation generation model and transmitting them to the server.
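The following is a minimal Python sketch of the control flow of steps a to e; init_model, train_until_convergence, adjust_parameters and encrypt are hypothetical helpers standing in for the model initialization, the iterative training of steps A and B described below, the parameter adjustment, and the homomorphic encryption discussed later:

    def client_training_round(model_params, loss_lower_limit, training_set, tag_set):
        # Step a: initialize the pre-constructed annotation generation model
        model = init_model(model_params)
        while True:
            # Step b: iterative training on the local data, yielding an initial loss value
            initial_loss = train_until_convergence(model, training_set, tag_set)
            # Steps c and d: if the loss is still above the lower limit, adjust and retrain
            if initial_loss > loss_lower_limit:
                adjust_parameters(model)
                continue
            # Step e: encrypt the loss value and the local model parameters for the server
            return encrypt(initial_loss), encrypt(model.parameters())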
In the embodiment of the invention, the model parameters are used for initializing the pre-constructed annotation generation model, and the model parameters are assigned as parameters in the annotation generation model.
Further, the performing iterative training on the initialized annotation generation model by using the local training data set, and calculating an initial loss value according to the annotation result output by the annotation generation model and the tag set, including:
Step A: inputting the local training data set into the initialized annotation generation model to obtain an annotation result output by the initialized annotation generation model;
Step B: according to the tag set, calculating a training loss value of the annotation result by using a preset objective function;
Repeating the step A and the step B until the training loss value converges to obtain an initial loss value, wherein the convergence of the training loss value means that the training loss value is kept unchanged.
Further, the calculating the training loss value of the annotation result by using a preset objective function includes:
Obtaining the annotation result, wherein the annotation result comprises a target sequence and a prediction probability of each word in the target sequence;
calculating a training loss value of the annotation result using the following objective function:
L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{l}\log p_{ij}
where L is the training loss value, N is the total number of samples in the local training data set, l is the length of the target sequence in the annotation result, and p_{ij} is the predicted probability of the j-th word in the target sequence corresponding to the i-th sample in the local training data set.
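A minimal Python sketch of this objective function, assuming the annotation result is available as per-word predicted probabilities for each training sample:

    import math

    def training_loss(predicted_probs):
        """predicted_probs[i][j] is the predicted probability of the j-th word of the
        target sequence for the i-th sample; probabilities are assumed to be non-zero."""
        n = len(predicted_probs)
        total = 0.0
        for sample in predicted_probs:   # i = 1 .. N
            for p in sample:             # j = 1 .. l
                total += math.log(p)
        return -total / n                # L = -(1/N) * sum_i sum_j log p_ij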
Preferably, the embodiment of the invention encrypts the loss value and the local model parameters by using a homomorphic encryption algorithm. A homomorphic encryption algorithm is an encryption scheme in which operations can be performed directly on the ciphertext and the result can still be restored to the correct plaintext by the corresponding decryption algorithm; it comprises both an encryption algorithm and a decryption algorithm, which effectively prevents data leakage and improves data security.
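The embodiment does not prescribe a specific homomorphic encryption scheme; as one illustrative possibility (an assumption, not part of the method itself), the additively homomorphic Paillier scheme from the python-paillier (phe) package could be used to encrypt the loss value and the local model parameters before transmission:

    from phe import paillier

    # Key generation; in practice the key pair would be agreed for the whole federation
    public_key, private_key = paillier.generate_paillier_keypair()

    loss_value = 0.42                    # example loss value
    local_params = [0.13, -0.87, 1.05]   # example local model parameters

    encrypted_loss = public_key.encrypt(loss_value)
    encrypted_params = [public_key.encrypt(p) for p in local_params]

    # Additive homomorphism: ciphertexts can be summed without decrypting them first
    aggregated = encrypted_params[0] + encrypted_params[1]
    print(private_key.decrypt(aggregated))   # approximately 0.13 + (-0.87) = -0.74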
S4, receiving the updated model parameters and the loss lower limit transmitted by the server side, and decrypting the updated model parameters and the loss lower limit.
After receiving the updated model parameters and the loss lower limit transmitted by the server, the embodiment of the invention decrypts the updated model parameters and the loss lower limit according to a decryption algorithm in the homomorphic encryption algorithm to obtain the corresponding model parameters and loss lower limit.
And S5, updating the annotation generation model according to the updated model parameters, and judging whether a preset termination condition is met.
In detail, the embodiment of the invention uses the updated model parameters to replace the local model parameters in the annotation generation model to finish updating the annotation generation model and judges whether a preset termination condition is reached.
The termination condition in the embodiment of the invention includes that the lower loss limit is smaller than a preset threshold.
And when the termination condition is not satisfied, returning to the step S2.
And when the termination condition is met, executing S6: stopping training to obtain the trained annotation generation model.
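Putting S1 to S6 together, the client-side federated learning loop might look like the following sketch; client_training_round is the routine sketched after steps a to e above, and receive_from_server, send_to_server, decrypt, build_model and TERMINATION_THRESHOLD are hypothetical placeholders for the communication, decryption and termination details:

    def run_client(training_set, tag_set):
        # S1: receive the initial model parameters and loss lower limit from the server
        model_params, loss_lower_limit = receive_from_server()
        while True:
            # S2/S3: train locally and send the encrypted loss and parameters back
            enc_loss, enc_params = client_training_round(
                model_params, loss_lower_limit, training_set, tag_set)
            send_to_server(enc_loss, enc_params)
            # S4: receive and decrypt the updated parameters and loss lower limit
            model_params, loss_lower_limit = decrypt(receive_from_server())
            # S5/S6: stop once the updated loss lower limit falls below the preset threshold
            if loss_lower_limit < TERMINATION_THRESHOLD:
                break
        return build_model(model_params)   # trained annotation generation model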
And S7, generating comments on the change code data input by the user by using the trained comment generation model.
The annotation generation model in the embodiment of the invention comprises an encoder and a decoder. Wherein the encoder and the decoder may be a network structure consisting of a bi-directional recurrent neural network.
The code changing data comprises a difference file generated according to comparison after code changing. To further ensure the privacy and security of the change code data, the change code data may be stored in a node of a blockchain.
The embodiment of the invention can automatically generate the annotation to the changed code part by using the annotation generation model when the service system code is changed.
In detail, step S7 includes:
inputting change code data into the annotation generation model after training;
Coding the changed code data through an encoder of the annotation generation model after training to obtain a hidden state sequence;
and decoding the hidden state sequence by a decoder of the trained annotation generation model to generate annotations corresponding to the change code data.
In the embodiment of the invention, the encoder encodes an input diff file into a hidden state sequence, which is a set of word vectors obtained through linear transformation. The hidden state sequence is used as the input of the decoder; based on natural language processing techniques and a pre-constructed dictionary, the decoder predicts the probability of each word and selects words according to these probabilities to generate the target sequence, namely the annotation of the input diff file.
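As an illustrative sketch (an assumption, not the exact network of this embodiment), a bidirectional GRU encoder and a GRU decoder of the kind described above could be written in PyTorch as follows; the vocabulary size, hidden size and greedy decoding loop are illustrative choices:

    import torch
    import torch.nn as nn

    class DiffAnnotator(nn.Module):
        """Seq2seq sketch: encode a tokenized diff, decode an annotation."""
        def __init__(self, vocab_size, hidden=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
            self.decoder = nn.GRU(hidden, 2 * hidden, batch_first=True)
            self.out = nn.Linear(2 * hidden, vocab_size)

        def forward(self, diff_ids, max_len=30, bos_id=1, eos_id=2):
            # Encoder: diff tokens -> hidden state sequence (enc_out) and final state (h);
            # a full model could additionally let the decoder attend over enc_out
            enc_out, h = self.encoder(self.embed(diff_ids))
            dec_h = torch.cat([h[0], h[1]], dim=-1).unsqueeze(0)
            # Greedy decoding: pick the most probable word at each step
            token = torch.full((diff_ids.size(0), 1), bos_id, dtype=torch.long)
            annotation = []
            for _ in range(max_len):
                dec_out, dec_h = self.decoder(self.embed(token), dec_h)
                probs = torch.softmax(self.out(dec_out[:, -1]), dim=-1)
                token = probs.argmax(dim=-1, keepdim=True)
                annotation.append(token)
                if (token == eos_id).all():
                    break
            return torch.cat(annotation, dim=1)   # predicted annotation token ids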
Preferably, the embodiment of the invention is based on a horizontal federated learning framework and combines a plurality of clients to train the annotation generation model, which not only expands the training samples but also ensures the confidentiality of each client's code.
Referring to fig. 2, a flowchart of an annotation generation method for code modification according to a second embodiment of the present invention is shown. In this embodiment, the annotation generation method for code modification provided in the second embodiment is applied to a server, and includes:
s11, randomly generating model parameters and loss lower limits, and sending the model parameters and the loss lower limits to a plurality of clients.
The embodiment of the invention can randomly generate the model parameters and the loss lower limit by adopting a random algorithm, and send the model parameters and the loss lower limit to a plurality of clients.
Optionally, the model parameter is a parameter set corresponding to an annotation generation model owned by each client, and the loss lower limit is used for a training termination condition when each client trains the annotation generation model.
S12, receiving the encrypted local model parameters and the loss values sent by each client.
In the embodiment of the invention, each client trains the annotation generation model by using a local data set, and after training is completed, local model parameters and loss values corresponding to the annotation generation model are sent to the server.
S13, calculating weight vectors according to the local model parameters of all the clients, calculating updated model parameters according to the weight vectors and the local model parameters of all the clients, and calculating updated loss lower limits according to the loss values of all the clients.
In detail, in the embodiment of the present invention, calculating the weight vector according to the local model parameters of all the clients includes:
The weight vector for each client is calculated using the following formula:
\alpha_i = \frac{[W_i][W_i]^T}{\sum_{k=1}^{n}[W_k][W_k]^T}
where \alpha_i is the updated weight vector of the i-th client, n is the total number of clients, [W_i] is a parameter matrix composed of the local model parameters of the i-th client, and [W_i]^T is the transpose of that parameter matrix.
In detail, the calculating updated model parameters according to the weight vector and the local model parameters of all the clients includes:
the updated model parameters were calculated using the following formula:
\bar{w}_j = \sum_{i=1}^{n}\alpha_i w_{i,j}
where \bar{w}_j is the j-th parameter in the updated model parameters, n is the total number of clients, \alpha_i is the weight vector of the i-th client, and w_{i,j} is the j-th parameter of the local model parameters of the i-th client.
In detail, in the embodiment of the present invention, calculating the updated lower loss limit according to the loss values of all the clients includes: and calculating the median of the loss values of all the clients, and taking the median as the updated loss lower limit.
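A minimal server-side sketch of the aggregation in S13, using NumPy and operating, for simplicity of illustration, on plaintext parameter vectors; the weight computation follows the normalized form reconstructed above, which is an assumption, and with an additively homomorphic scheme the weighted sum could equally be evaluated on ciphertexts:

    import numpy as np

    def aggregate(client_params, client_losses):
        """client_params: one 1-D NumPy parameter vector per client;
        client_losses: one scalar loss value per client."""
        # Weight for each client, here taken proportional to [W_i][W_i]^T (squared norm)
        raw = np.array([w @ w for w in client_params])
        alpha = raw / raw.sum()
        # Updated model parameters: weighted combination of the local parameters
        updated_params = sum(a * w for a, w in zip(alpha, client_params))
        # Updated loss lower limit: median of the clients' loss values
        updated_lower_limit = float(np.median(client_losses))
        return updated_params, updated_lower_limit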
And S14, sending the updated model parameters and the updated loss lower limit to each client.
In detail, the updated model parameters and the updated loss lower limit are sent to each client, and each client can update a local annotation generation model according to the updated model parameters and re-use a local data set for training.
According to the embodiment of the invention, the annotation generation model is trained through a plurality of clients, each client uses a local data set for training, the local data set is obtained according to the code change file of each client, the training sample of the model is enlarged, and the accuracy of the annotation generation model in generating the annotation when the code is changed is improved; meanwhile, each client does not need to exchange the respective original code data, only needs to transmit the model parameters to the server, and calculates and updates the model parameters by using the server, so that the privacy and the safety of the original code data can be protected. Therefore, the annotation generation method, the annotation generation device and the computer readable storage medium for the code change can improve the annotation generation accuracy of the code change and protect the privacy of the code.
Fig. 3 is a schematic block diagram of an annotation generation device for code modification according to a third embodiment of the present invention.
In the embodiment of the present invention, the code-change annotation generating device may be divided into a first code-change annotation generating device 100 and a second code-change annotation generating device 200. The first code-change annotation generating device 100 is installed in a client, and the second code-change annotation generating device 200 is installed in a server.
Depending on the implemented functionality, the first code-change annotation generating device 100 applied to the client may comprise a parameter receiving module 101, a model training module 102, a model updating module 103 and a model generating module 104; and the second code-change annotation generating device 200 applied to the server may include a parameter generating module 201, a data receiving module 202, a parameter updating module 203, and a data transmitting module 204. A module of the invention, which may also be referred to as a unit, refers to a series of computer program segments that are stored in the memory of the electronic device, can be executed by the processor of the electronic device, and perform a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the parameter receiving module 101 is configured to receive the model parameter and the loss lower limit sent by the server.
The model parameters in the embodiment of the invention are the parameters of the annotation generation model. The loss lower limit is the criterion for ending training when the annotation generation model is trained.
The model training module 102 is configured to obtain a local training data set and a tag set according to a difference file and a standard annotation generated by a changed code;
Training a pre-constructed annotation generation model according to the model parameters and the loss lower limit to obtain a loss value and a local model parameter, and encrypting and transmitting the loss value and the local model parameter to the server when the loss value is smaller than the loss lower limit.
The difference file (Diff file) in the embodiment of the present invention is a data file describing the condition of code change after the code change, and may be obtained from the code change system. The standard annotation is generated according to the difference file and can be used as a real label for calculating the loss of the annotation generation model.
The tag set includes the standard annotations, which correspond to the difference file. When the annotation generating model is trained, the tag set can be used as a standard data set, the annotation generating model can calculate the probability of each word in the output annotation according to the tag set, and the probability is higher when the similarity between the word in the output annotation and the tag in the tag set is higher.
The annotation generation model according to the embodiment of the invention is a language model based on a Recurrent Neural Network (RNN), and may be a seq2seq model. The annotation generation model may predict the input sequence based on a machine learning algorithm, such as predicting the data at time t+1 after inputting the data at time t, and calculate its probability using an activation function.
In detail, the model training module 102 is specifically configured to:
Step a: initializing the pre-constructed annotation generation model by using the model parameters;
step b: performing iterative training on the initialized annotation generation model by using the local training data set, and calculating an initial loss value according to an annotation result output by the annotation generation model and the tag set;
Step c: comparing the initial loss value to the lower loss limit;
Step d: when the initial loss value is greater than the loss lower limit, adjusting the parameters of the annotation generation model and returning to step b;
Step e: when the initial loss value is smaller than or equal to the loss lower limit, encrypting the initial loss value and the parameters of the annotation generation model and transmitting them to the server.
In the embodiment of the invention, the model parameters are used for initializing the pre-constructed annotation generation model, and the model parameters are assigned as parameters in the annotation generation model.
Further, when the initialized annotation generation model is iteratively trained by using the local training data set, and an initial loss value is calculated according to an annotation result output by the annotation generation model and the tag set, the model training module 102 specifically performs the following operations:
Step A: inputting the local training data set into the initialized annotation generation model to obtain an annotation result output by the initialized annotation generation model;
Step B: according to the tag set, calculating a training loss value of the annotation result by using a preset objective function;
Repeating the step A and the step B until the training loss value converges to obtain an initial loss value, wherein the convergence of the training loss value means that the training loss value is kept unchanged.
Further, when calculating the training loss value of the annotation result using a preset objective function, the model training module 102 specifically performs the following operations:
Obtaining the annotation result, wherein the annotation result comprises a target sequence and a prediction probability of each word in the target sequence;
calculating a training loss value of the annotation result using the following objective function:
L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{l}\log p_{ij}
where L is the training loss value, N is the total number of samples in the local training data set, l is the length of the target sequence in the annotation result, and p_{ij} is the predicted probability of the j-th word in the target sequence corresponding to the i-th sample in the local training data set.
Preferably, the embodiment of the invention encrypts the loss value and the local model parameters by using a homomorphic encryption algorithm. A homomorphic encryption algorithm is an encryption scheme in which operations can be performed directly on the ciphertext and the result can still be restored to the correct plaintext by the corresponding decryption algorithm; it comprises both an encryption algorithm and a decryption algorithm, which effectively prevents data leakage and improves data security.
The model updating module 103 is configured to receive the updated model parameters and the loss lower limit transmitted by the server, and decrypt the updated model parameters and the loss lower limit.
After receiving the updated model parameters and the loss lower limit transmitted by the server, the embodiment of the invention decrypts the updated model parameters and the loss lower limit according to a decryption algorithm in the homomorphic encryption algorithm to obtain the corresponding model parameters and loss lower limit.
The model generating module 104 is configured to obtain a trained annotation generating model when training of the pre-constructed annotation generating model meets a preset termination condition, and generate an annotation for the change code data input by the user by using the trained annotation generating model.
In detail, the embodiment of the invention uses the updated model parameters to replace the local model parameters in the annotation generation model to finish updating the annotation generation model and judges whether a preset termination condition is reached.
The termination condition in the embodiment of the invention includes that the lower loss limit is smaller than a preset threshold.
And when the termination condition is not met, continuing to execute training on a pre-constructed annotation generation model according to the model parameters and the loss lower limit by the model training module 102 to obtain a loss value and a local model parameter, and encrypting and transmitting the loss value and the local model parameter to the server when the loss value is smaller than the loss lower limit.
When the termination condition is satisfied, training is stopped, and the model generation module 104 obtains a trained annotation generation model. The annotation generation model in the embodiment of the invention comprises an encoder and a decoder. Wherein the encoder and the decoder may be a network structure consisting of a bi-directional recurrent neural network.
The code changing data comprises a difference file generated according to comparison after code changing. To further ensure the privacy and security of the change code data, the change code data may be stored in a node of a blockchain.
The embodiment of the invention can automatically generate the annotation to the changed code part by using the annotation generation model when the service system code is changed.
In detail, when generating an annotation on the change code data input by the user using the trained annotation generation model, the model generation module 104 is specifically configured to:
inputting change code data into the annotation generation model after training;
Coding the changed code data through an encoder of the annotation generation model after training to obtain a hidden state sequence;
and decoding the hidden state sequence by a decoder of the trained annotation generation model to generate annotations corresponding to the change code data.
In the embodiment of the invention, the encoder encodes an input diff file into a hidden state sequence, which is a set of word vectors obtained through linear transformation. The hidden state sequence is used as the input of the decoder; based on natural language processing techniques and a pre-constructed dictionary, the decoder predicts the probability of each word and selects words according to these probabilities to generate the target sequence, namely the annotation of the input diff file.
Preferably, the embodiment of the invention is based on a horizontal federated learning framework and combines a plurality of clients to train the annotation generation model, which not only expands the training samples but also ensures the confidentiality of each client's code.
The parameter generating module 201 is configured to randomly generate model parameters and a lower loss limit, and send the model parameters and the lower loss limit to a plurality of clients.
The embodiment of the invention can randomly generate the model parameters and the loss lower limit by adopting a random algorithm, and send the model parameters and the loss lower limit to a plurality of clients.
Optionally, the model parameter is a parameter set corresponding to an annotation generation model owned by each client, and the loss lower limit is used for a training termination condition when each client trains the annotation generation model.
The data receiving module 202 is configured to receive the encrypted local model parameters and the loss values sent by each client.
In the embodiment of the invention, each client trains the annotation generation model by using a local data set, and after training is completed, local model parameters and loss values corresponding to the annotation generation model are sent to the server.
The parameter updating module 203 is configured to calculate a weight vector according to local model parameters of all the clients, calculate updated model parameters according to the weight vector and the local model parameters of all the clients, and calculate updated lower loss limits according to loss values of all the clients.
In detail, in the embodiment of the present invention, the weight vector of each client is calculated using the following formula:
\alpha_i = \frac{[W_i][W_i]^T}{\sum_{k=1}^{n}[W_k][W_k]^T}
where \alpha_i is the updated weight vector of the i-th client, n is the total number of clients, [W_i] is a parameter matrix composed of the local model parameters of the i-th client, and [W_i]^T is the transpose of that parameter matrix.
In detail, in the embodiment of the present invention, the updated model parameters are calculated using the following formula:
\bar{w}_j = \sum_{i=1}^{n}\alpha_i w_{i,j}
where \bar{w}_j is the j-th parameter in the updated model parameters, n is the total number of clients, \alpha_i is the weight vector of the i-th client, and w_{i,j} is the j-th parameter of the local model parameters of the i-th client.
In detail, in the embodiment of the present invention, calculating the updated lower loss limit according to the loss values of all the clients includes: and calculating the median of the loss values of all the clients, and taking the median as the updated loss lower limit.
The data sending module 204 is configured to send the updated model parameters and the updated lower loss limit to each client.
In detail, the updated model parameters and the updated loss lower limit are sent to each client, and each client can update a local annotation generation model according to the updated model parameters and re-use a local data set for training.
Fig. 4 is a schematic structural diagram of an electronic device implementing an annotation generation method for code modification according to a fourth embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a code-change annotation generation program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, a mobile hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a smart memory card (SMART MEDIA CARD, SMC), a Secure Digital (SD) card, a flash memory card (FLASH CARD) or the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as codes of the comment generation program 12 for code change, but also for temporarily storing data that has been output or is to be output.
The processor 10 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is the control unit (Control Unit) of the electronic device; it connects the respective components of the entire electronic device using various interfaces and lines, runs or executes programs or modules stored in the memory 11 (e.g., the code-change annotation generation program), and invokes data stored in the memory 11 to perform various functions of the electronic device 1 and process data.
The bus may be a peripheral component interconnect standard (PERIPHERAL COMPONENT INTERCONNECT, PCI) bus, or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable a connection communication between the memory 11 and at least one processor 10 etc.
Fig. 4 shows only an electronic device with components, it being understood by a person skilled in the art that the structure shown in fig. 4 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or may combine certain components, or may be arranged in different components.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
Further, the electronic device 1 may also comprise a network interface, optionally the network interface may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the electronic device 1 and other electronic devices.
The electronic device 1 may optionally further comprise a user interface, which may be a Display, an input unit, such as a Keyboard (Keyboard), or a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only and are not limited to this configuration in the scope of the patent application.
The annotation generation program 12 of code alterations stored in the memory 11 of the electronic device 1 is a combination of a plurality of computer programs, which when run in the processor 10, can implement:
receiving model parameters and a loss lower limit sent by a server;
obtaining a difference file and a standard annotation generated according to the changed code to obtain a local training data set and a tag set;
Training a pre-constructed annotation generation model by using the local training data set according to the model parameters to obtain an annotation result output by the annotation generation model, obtaining a loss value according to the annotation result and the tag set, and acquiring the model parameters of the annotation generation model as local model parameters and transmitting the loss value and the local model parameters to the server in an encrypted manner when the loss value is smaller than the loss lower limit;
receiving updated model parameters and loss lower limits transmitted by the server side, and decrypting the updated model parameters and the loss lower limits;
Returning to the step of training the pre-constructed annotation generation model by using the local training data set according to the model parameters and the loss lower limit until a preset termination condition is met, so as to obtain a trained annotation generation model;
And generating an annotation according to the change code data input by the user by using the trained annotation generation model.
In another embodiment of the present invention, the annotation generating program 12 of the code modification stored in the memory 11 of the electronic device 1 may further implement:
Randomly generating model parameters and loss lower limits, and transmitting the model parameters and the loss lower limits to a plurality of clients;
Receiving encrypted local model parameters and loss values sent by each client;
Calculating a weight vector according to the local model parameters of all the clients, calculating updated model parameters according to the weight vector and the local model parameters of all the clients, and calculating an updated loss lower limit according to the loss values of all the clients;
And sending the updated model parameters and the updated loss lower limit to each client.
Further, the modules/units integrated in the electronic device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM).
Further, the computer-usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims should not be considered as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, encryption algorithm and the like. The blockchain (Blockchain), essentially a de-centralized database, is a string of data blocks that are generated in association using cryptographic methods, each of which contains information from a batch of network transactions for verifying the validity (anti-counterfeit) of its information and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (8)

1. A method for annotation generation of code changes, the method being applied to a client and comprising:
receiving model parameters and a loss lower limit sent by a server;
obtaining a difference file and a standard annotation generated according to the changed code to obtain a local training data set and a tag set;
Training a pre-constructed annotation generation model by using the local training data set according to the model parameters to obtain an annotation result output by the annotation generation model, obtaining a loss value according to the annotation result and the tag set, and acquiring the model parameters of the annotation generation model as local model parameters and transmitting the loss value and the local model parameters to the server in an encrypted manner when the loss value is smaller than the loss lower limit;
receiving updated model parameters and an updated loss lower limit transmitted by the server, and decrypting the updated model parameters and the updated loss lower limit;
returning to the step of training the pre-constructed annotation generation model by using the local training data set according to the model parameters and the loss lower limit until a preset termination condition is met, so as to obtain a trained annotation generation model;
generating an annotation according to the change code data input by the user by using the trained annotation generation model;
wherein training a pre-constructed annotation generation model by using the local training data set according to the model parameters to obtain an annotation result output by the annotation generation model, obtaining a loss value according to the annotation result and the tag set, and, when the loss value is smaller than the loss lower limit, acquiring the model parameters of the annotation generation model as local model parameters and transmitting the loss value and the local model parameters to the server in an encrypted manner comprises the following steps: initializing the pre-constructed annotation generation model by using the model parameters; performing iterative training on the initialized annotation generation model by using the local training data set, and calculating an initial loss value according to the annotation result output by the annotation generation model and the tag set; comparing the initial loss value with the loss lower limit; when the initial loss value is larger than the loss lower limit, adjusting the parameters of the annotation generation model, and returning to the step of performing iterative training on the initialized annotation generation model by using the local training data set; and when the initial loss value is smaller than or equal to the loss lower limit, encrypting the initial loss value and the parameters of the annotation generation model and transmitting them to the server.
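To make the client-side flow of claim 1 easier to follow, the sketch below outlines one client round in Python. The model interface (set_params, train_epoch, get_params) and the compute_loss, encrypt, and send_to_server callables are illustrative placeholders assumed for this sketch, not interfaces defined by the patent.

def client_round(model, global_params, loss_lower_limit, train_set, tag_set,
                 compute_loss, encrypt, send_to_server, max_local_rounds=100):
    # Initialize the local annotation generation model with the server's parameters.
    model.set_params(global_params)

    loss = float("inf")
    for _ in range(max_local_rounds):
        # Train on the local diff-file / standard-annotation pairs and compute
        # the loss of the produced annotation results against the tag set.
        annotations = model.train_epoch(train_set)
        loss = compute_loss(annotations, tag_set)

        # Once the loss reaches the lower limit sent by the server, upload the
        # loss and the local model parameters in encrypted form.
        if loss <= loss_lower_limit:
            payload = encrypt({"loss": loss, "local_params": model.get_params()})
            send_to_server(payload)
            break

    return loss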
2. The method for annotation generation of code changes according to claim 1, wherein performing iterative training on the initialized annotation generation model by using the local training data set and calculating an initial loss value according to the annotation result output by the annotation generation model and the tag set comprises:
Step A: inputting the local training data set into the initialized annotation generation model to obtain an annotation result output by the initialized annotation generation model;
Step B: calculating, according to the tag set, a training loss value of the annotation result by using a preset objective function;
repeating step A and step B until the training loss value converges, so as to obtain the initial loss value.
3. The method for annotation generation of code changes according to claim 2, wherein the calculating of the training loss value of the annotation result using a preset objective function comprises:
obtaining the annotation result, wherein the annotation result comprises a target sequence and a prediction probability of each word in the target sequence;
calculating a training loss value of the annotation result using the following objective function:
L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{T_i}\log p_{i,j}
where L is the training loss value, N is the total number of samples in the local training data set, T_i is the length of the target sequence in the annotation result for the i-th sample, and p_{i,j} is the predictive probability value of the j-th word in the target sequence corresponding to the i-th data in the local training data set.
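A minimal Python rendering of the objective in claims 2 and 3, assuming the reconstructed form L = -(1/N) * sum_i sum_j log p_{i,j}; the function name and example values are illustrative.

import math

def annotation_loss(predicted_probs):
    # predicted_probs: N target sequences, each a list of the predicted
    # probability p_{i,j} of every word in that sequence.
    n = len(predicted_probs)
    total_log_prob = sum(math.log(p) for seq in predicted_probs for p in seq)
    return -total_log_prob / n

# Example: two samples whose target sequences have lengths 3 and 2.
example_loss = annotation_loss([[0.9, 0.8, 0.7], [0.6, 0.95]])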
4. A method for annotation generation of code changes, wherein the method is applied to a server and comprises:
randomly generating model parameters and a loss lower limit, and transmitting the model parameters and the loss lower limit to a plurality of clients;
receiving the encrypted local model parameters and loss values sent by each client;
calculating a weight vector according to the local model parameters of all the clients, calculating updated model parameters according to the weight vector and the local model parameters of all the clients, and calculating an updated loss lower limit according to the loss values of all the clients;
sending the updated model parameters and the updated loss lower limit to each client;
wherein the calculating of the weight vector according to the local model parameters of all the clients and the calculating of the updated model parameters according to the weight vector and the local model parameters of all the clients comprise:
calculating the weight vector and the updated model parameters by using the following functions:
\alpha_i = W_i W_i^{\mathsf{T}}, \qquad \bar{w}_j = \frac{1}{n}\sum_{i=1}^{n} \alpha_i w_{i,j}
wherein \alpha_i is the weight vector of the i-th client, W_i is a parameter matrix consisting of the local model parameters of the i-th client, W_i^{\mathsf{T}} is the transpose of the parameter matrix, \bar{w}_j is the j-th parameter in the updated model parameters, n is the total number of clients, and w_{i,j} is the j-th parameter of the local model parameters of the i-th client.
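The following Python sketch illustrates the server-side aggregation of claim 4 under explicitly hedged assumptions: each client's weight is taken as W_i W_i^T over its own parameter vector, the updated parameters average the weighted client parameters over the n clients, and the updated loss lower limit is taken as the mean of the client losses; the patent text does not fix these choices.

import numpy as np

def aggregate(local_params, local_losses):
    # local_params: one row of local model parameters per client, shape (n_clients, n_params).
    W = np.asarray(local_params, dtype=float)
    n = W.shape[0]

    # alpha_i = W_i . W_i^T for each client i (row-wise dot product).
    alpha = np.einsum("ij,ij->i", W, W)

    # j-th updated parameter combines the j-th parameter of every client.
    updated_params = (alpha[:, None] * W).sum(axis=0) / n

    # Updated loss lower limit derived from all client losses (assumed mean).
    updated_lower_limit = float(np.mean(local_losses))
    return updated_params, updated_lower_limit

# Example with three clients, each holding four local parameters.
params, lower = aggregate([[0.1, 0.2, 0.3, 0.4],
                           [0.2, 0.1, 0.4, 0.3],
                           [0.3, 0.3, 0.2, 0.2]], [0.9, 1.1, 1.0])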
5. An annotation generation apparatus for code changes, for implementing the method for annotation generation of code changes according to any one of claims 1 to 3, wherein the apparatus is applied to a client and comprises:
the parameter receiving module is used for receiving the model parameters and the loss lower limit sent by the server;
the model training module is used for acquiring a difference file and a standard annotation generated according to the changed code to obtain a local training data set and a tag set;
and for training a pre-constructed annotation generation model by using the local training data set according to the model parameters to obtain an annotation result output by the annotation generation model, obtaining a loss value according to the annotation result and the tag set, and, when the loss value is smaller than the loss lower limit, acquiring the model parameters of the annotation generation model as local model parameters and transmitting the loss value and the local model parameters to the server in an encrypted manner;
The model updating module is used for receiving the updated model parameters and the loss lower limit transmitted by the server side and decrypting the updated model parameters and the loss lower limit;
and the model generation module is used for obtaining a trained annotation generation model when the training of the pre-constructed annotation generation model meets the preset termination condition, and generating an annotation for the change code data by using the trained annotation generation model.
6. An annotation generation apparatus for code changes, for implementing the method for annotation generation of code changes according to claim 4, wherein the apparatus is applied to a server and comprises:
the parameter generation module is used for randomly generating model parameters and a loss lower limit, and sending the model parameters and the loss lower limit to a plurality of clients;
The data receiving module is used for receiving the encrypted local model parameters and the loss values sent by each client;
The parameter updating module is used for calculating weight vectors according to the local model parameters of all the clients, calculating updated model parameters according to the weight vectors and the local model parameters of all the clients, and calculating updated loss lower limits according to the loss values of all the clients;
and the data sending module is used for sending the updated model parameters and the updated loss lower limit to each client.
7. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method for annotation generation of code changes according to any one of claims 1 to 4.
8. A computer-readable storage medium comprising a storage data area storing created data and a storage program area storing a computer program, wherein the computer program, when executed by a processor, implements the method for annotation generation of code changes according to any one of claims 1 to 4.
CN202011322526.3A 2020-11-23 2020-11-23 Annotation generation method and device for code change, electronic equipment and storage medium Active CN112394974B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011322526.3A CN112394974B (en) 2020-11-23 2020-11-23 Annotation generation method and device for code change, electronic equipment and storage medium
PCT/CN2021/083079 WO2021208701A1 (en) 2020-11-23 2021-03-25 Method, apparatus, electronic device, and storage medium for generating annotation for code change

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011322526.3A CN112394974B (en) 2020-11-23 2020-11-23 Annotation generation method and device for code change, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112394974A CN112394974A (en) 2021-02-23
CN112394974B true CN112394974B (en) 2024-05-07

Family

ID=74606950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011322526.3A Active CN112394974B (en) 2020-11-23 2020-11-23 Annotation generation method and device for code change, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112394974B (en)
WO (1) WO2021208701A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112394974B (en) * 2020-11-23 2024-05-07 平安科技(深圳)有限公司 Annotation generation method and device for code change, electronic equipment and storage medium
CN112965748B (en) * 2021-04-08 2022-04-15 武汉众邦银行股份有限公司 Configurable method for automatically adding code annotation
CN113052334B (en) * 2021-04-14 2023-09-29 中南大学 Federal learning realization method, system, terminal equipment and readable storage medium
US20230316090A1 (en) * 2022-03-01 2023-10-05 Qualcomm Incorporated Federated learning with training metadata
CN116841609B (en) * 2023-08-28 2023-11-24 中国兵器装备集团兵器装备研究所 Method, system, electronic device and storage medium for supplementing code annotation information
CN117891531B (en) * 2024-03-14 2024-06-14 蒲惠智造科技股份有限公司 System parameter configuration method, system, medium and electronic equipment for SAAS software

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509199A (en) * 2018-03-09 2018-09-07 平安科技(深圳)有限公司 Automatically generate the method, apparatus, equipment and storage medium of Chinese annotation
CN109783079A (en) * 2018-12-21 2019-05-21 南京航空航天大学 A kind of code annotation generation method based on program analysis and Recognition with Recurrent Neural Network
CN109960506A (en) * 2018-12-03 2019-07-02 复旦大学 A kind of code annotation generation method based on structure perception
CN110018820A (en) * 2019-04-08 2019-07-16 浙江大学滨海产业技术研究院 A method of the Graph2Seq based on deeply study automatically generates Java code annotation
US10380236B1 (en) * 2017-09-22 2019-08-13 Amazon Technologies, Inc. Machine learning system for annotating unstructured text
CN110908709A (en) * 2019-11-25 2020-03-24 中山大学 Code submission annotation prediction method based on code change key class judgment
CN111090461A (en) * 2019-11-18 2020-05-01 中山大学 Code annotation generation method based on machine translation model

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309607B (en) * 2020-02-12 2023-06-02 扬州大学 Software defect positioning method of code method level
CN111522581B (en) * 2020-04-22 2021-06-25 山东师范大学 Enhanced code annotation automatic generation method and system
CN112394974B (en) * 2020-11-23 2024-05-07 平安科技(深圳)有限公司 Annotation generation method and device for code change, electronic equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10380236B1 (en) * 2017-09-22 2019-08-13 Amazon Technologies, Inc. Machine learning system for annotating unstructured text
CN108509199A (en) * 2018-03-09 2018-09-07 平安科技(深圳)有限公司 Automatically generate the method, apparatus, equipment and storage medium of Chinese annotation
CN109960506A (en) * 2018-12-03 2019-07-02 复旦大学 A kind of code annotation generation method based on structure perception
CN109783079A (en) * 2018-12-21 2019-05-21 南京航空航天大学 A kind of code annotation generation method based on program analysis and Recognition with Recurrent Neural Network
CN110018820A (en) * 2019-04-08 2019-07-16 浙江大学滨海产业技术研究院 A method of the Graph2Seq based on deeply study automatically generates Java code annotation
CN111090461A (en) * 2019-11-18 2020-05-01 中山大学 Code annotation generation method based on machine translation model
CN110908709A (en) * 2019-11-25 2020-03-24 中山大学 Code submission annotation prediction method based on code change key class judgment

Also Published As

Publication number Publication date
WO2021208701A1 (en) 2021-10-21
CN112394974A (en) 2021-02-23

Similar Documents

Publication Publication Date Title
CN112394974B (en) Annotation generation method and device for code change, electronic equipment and storage medium
CN109284313B (en) Federal modeling method, device and readable storage medium based on semi-supervised learning
Ghazal et al. Private blockchain-based encryption framework using computational intelligence approach
CN113505882B (en) Data processing method based on federal neural network model, related equipment and medium
CN113761577B (en) Big data desensitization method, device, computer equipment and storage medium
US20150172044A1 (en) Order-preserving encryption system, encryption device, decryption device, encryption method, decryption method, and programs thereof
CN112380439B (en) Target object recommendation method and device, electronic equipment and computer readable storage medium
CN114611008B (en) User service strategy determination method and device based on federal learning and electronic equipment
CN114124502B (en) Message transmission method, device, equipment and medium
CN112508200A (en) Method, apparatus, device, medium, and program for processing machine learning model file
CN112149174A (en) Model training method, device, equipment and medium
CN114186256A (en) Neural network model training method, device, equipment and storage medium
CN112990374B (en) Image classification method, device, electronic equipment and medium
CN114417374A (en) Intelligent contract business card method, device, equipment and storage medium based on block chain
CN113055153B (en) Data encryption method, system and medium based on fully homomorphic encryption algorithm
WO2022121183A1 (en) Text model training method, recognition method, apparatus, device and storage medium
CN116502732B (en) Federal learning method and system based on trusted execution environment
CN113849828A (en) Anonymous generation and attestation of processed data
CN113240461A (en) Method, system and medium for identifying potential customers based on longitudinal federal learning
CN113051586A (en) Federal modeling system and method, and federal model prediction method, medium, and device
CN116340918A (en) Full-secret-text face comparison method, device, equipment and storage medium
Achar et al. Confimizer: A novel algorithm to optimize cloud resource by confidentiality-cost trade-off using bilstm network
CN116192386A (en) Multi-platform intercommunication method and device based on blockchain privacy calculation
CN114021732B (en) Proportional risk regression model training method, device and system and storage medium
CN115527686A (en) Multi-data-source medical data analysis model training method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant