CN116644816A - Metering automation terminal chip encryption method based on asynchronous federated learning - Google Patents

Metering automation terminal chip encryption method based on asynchronous federated learning

Info

Publication number
CN116644816A
CN116644816A
Authority
CN
China
Prior art keywords
terminal
parameter
asynchronous
model
gradient value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310472675.5A
Other languages
Chinese (zh)
Inventor
张益鸣
赵毅涛
杨子阳
孙立元
杨茗
杨昊
代盛国
赵永辉
杨晓华
刘兴龙
艾渊
任建宇
茶建华
李家浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan Power Grid Co Ltd
Original Assignee
Yunnan Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan Power Grid Co Ltd
Priority to CN202310472675.5A
Publication of CN116644816A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/602: Providing cryptographic facilities or services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218: Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. a local or distributed file system or database
    • G06F 21/6245: Protecting personal data, e.g. for financial or medical purposes
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04: INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S: SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 40/00: Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them
    • Y04S 40/20: Information technology specific aspects, e.g. CAD, simulation, modelling, system security


Abstract

The application discloses a metering automation terminal chip encryption method based on asynchronous federated learning, which comprises the following steps: the sub-nodes download and initialize the global model from the server, then collect and upload the terminal's unique information so that terminals can be distinguished; the server confirms the unique information of each terminal, creates a different key for each terminal, and transmits the key to the terminal; the terminal encrypts according to its key, performs a round of federated learning, and uploads its gradient value after learning is completed; the gradient values are corrected, valid gradient values are identified, and the server updates the global parameter model according to the uploaded terminal gradient values. In the metering automation terminal chip encryption method based on asynchronous federated learning provided by the application, an asynchronous federated learning algorithm replaces the traditional federated learning algorithm, which increases the speed at which each node collects and uploads its parameter model and reduces the risk of single-node privacy leakage. The application achieves better results in terms of confidentiality, terminal training efficiency, and resistance to attacks.

Description

Metering automation terminal chip encryption method based on asynchronous federated learning
Technical Field
The application relates to the technical field of data encryption, and in particular to a metering automation terminal chip encryption method based on asynchronous federated learning.
Background
To guarantee the privacy and security of metering automation terminals, encrypting the terminal's security chip is indispensable: once the security chip password is broken, users and companies suffer huge economic losses, and traditional differential privacy and homomorphic encryption alone are far from sufficient. Federated learning, a novel form of distributed machine learning, has great advantages over traditional machine learning in privacy protection. It focuses on end-to-end encryption, makes the key of each edge node unique, and generates a new parameter model whenever the parameter server performs a global update, thereby ensuring the safety of the terminal security chip. However, traditional synchronous federated learning depends heavily on the global model update of the parameter server, and that update requires every node to collect and upload its own parameter model; a single node uploading its parameters slowly therefore drags down the update rate of the global parameter model, and there is a risk of single-node privacy leakage.
Therefore, a metering automation terminal chip encryption method based on asynchronous federated learning is needed that solves the problems of the traditional technical scheme, such as low model training efficiency, idle resources at each node, and single-node privacy leakage, and improves the privacy and usability of the terminal.
Disclosure of Invention
This section is intended to outline some aspects of embodiments of the application and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, as well as in the abstract and the title of the application; such simplifications or omissions may not be used to limit the scope of the application.
The present application has been made in view of the above-described problems.
Therefore, the technical problem solved by the application is as follows: the existing metering automation terminal chip encryption methods suffer from low model training efficiency, idle resources at each node, and single-node privacy leakage; the question is how to improve the privacy and usability of the terminal.
In order to solve the above technical problems, the application provides the following technical scheme: a metering automation terminal chip encryption method based on asynchronous federated learning, comprising the following steps:
the sub-nodes download and initialize the global model from the server, then collect and upload the terminal's unique information so that terminals can be distinguished;
the server confirms the unique information of each terminal, creates a different key for each terminal, and transmits the key to the terminal; the terminal encrypts according to its key, performs a round of federated learning, and uploads its gradient value after learning is completed;
the gradient values are corrected, valid gradient values are identified, and the server updates the global parameter model according to the uploaded terminal gradient values.
As a preferred scheme of the metering automation terminal chip encryption method based on asynchronous federated learning of the present application: the unique information comprises a terminal serial number, a terminal asset number, and a user number.
As a preferred scheme of the metering automation terminal chip encryption method based on asynchronous federated learning of the present application: the terminal encrypting according to the key comprises updating and perturbing the encrypted characteristic parameters according to each terminal's different gradient values;
the parameter server issues an initialization modelThe node terminal downloads the initialization model and then updates, calculates the local model gradient, and is expressed as:
wherein ,representing gradient calculations, x j and yj Representing a local training dataset and a sample dataset, respectively,/->For the local model calculated for the previous round, i is the number of learning rounds, j is the j data of the data set, and B i Indicating the batch size of the local data, L indicating the loss function, +.>The local model gradient value in the k-th round of local model iteration is obtained;
updating the local model, expressed as:
wherein ,ηi Is the learning rate of the model.
As a preferred scheme of the metering automation terminal chip encryption method based on asynchronous federated learning of the present application: the perturbation comprises allocating the privacy budget and adding noise to the key gradient values;
the privacy gradient is sampled and the gradient absolute value availability is calculated, expressed as:
wherein u represents the absolute value of the gradient;
total privacy budget epsilon 1 The assignment of r gradient values is expressed as:
wherein l represents the gradient value, pr represents the privacy budget private function, exp represents the exponential function based on e, and delta represents the increment;
adding privacy budget noise ε for key gradient values using Laplace mechanism 2 Increasing privacy budget noise ε for the remaining r-n of r gradient values 3 Expressed as:
where, when l=1, 2, …, r, m=2, when l=r+1, r+2, …, n, m=3,is a parameter of +.>And q represents a scale parameter, and calculatesObtaining gradient value after disturbance->
As a preferred scheme of the metering automation terminal chip encryption method based on asynchronous federated learning of the present application: the gradient value correction comprises identifying valid gradient values through the dual weight calculation of asynchronous federated learning, the dual weights comprising a sample weight and a parameter weight, the sample weight being expressed as:

$$w_{\mathrm{sample}}^{i} = \frac{D_i}{D}$$

where $D_i$ represents the number of samples of a single node terminal and $D$ represents the total number of samples, expressed as:

$$D = \sum_{j} D_j$$

where $D_j$ indicates the number of samples held by terminal node $j$.
As a preferred scheme of the metering automation terminal chip encryption method based on asynchronous federated learning of the present application: the asynchronous federated learning comprises the parameter server performing parameter updates and staleness calculation synchronously, the parameter staleness being expressed as:

$$\mu_{\mathrm{staleness}} = I_{\mathrm{upload}} - I_{\mathrm{download}}$$

where $\mu_{\mathrm{staleness}}$ represents the number of parameter iterations performed by the parameter server while the terminal node completes one parameter iteration, and $I$ represents the number of learning rounds ($I_{\mathrm{download}}$ when the parameters are downloaded, $I_{\mathrm{upload}}$ when the gradient is uploaded).
As a preferred scheme of the metering automation terminal chip encryption method based on asynchronous federated learning of the present application: the gradient value correction further comprises slowing down the attenuation process of the terminal node, the set attenuation function being expressed as:

$$w_{\mathrm{stale}}^{i} = \left(1 + \mu_{\mathrm{staleness}}^{i}\right)^{-a}$$

where $\mu_{\mathrm{staleness}}^{i}$ represents the degree of staleness of node $i$, and the adjustable attenuation parameter $a$ reflects the speed of the terminal node's attenuation.
As a preferred scheme of the metering automation terminal chip encryption method based on asynchronous federated learning of the present application: the server updating the global parameter model comprises the gradient value uploaded by the node terminal participating in the optimization and updating of the global parameter model after dual weight correction, the dual weight correction being expressed as:

$$\theta' = w_{\mathrm{sample}}^{i} \cdot w_{\mathrm{stale}}^{i} \cdot \theta$$

where $\theta'$ represents the model parameters after dual weight correction and $\theta$ represents the original model parameters;

when the uploaded valid gradient values reach the preset update requirement of the global parameter model, the parameter server optimizes and updates the global parameter model according to the gradient values of the current round; after the update is finished, each node downloads the new parameter model and carries out a new round of asynchronous federated learning.
As a preferred scheme of the metering automation terminal chip encryption method based on asynchronous federated learning of the present application: a computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the metering automation terminal chip encryption method based on asynchronous federated learning.
As a preferred scheme of the metering automation terminal chip encryption method based on asynchronous federated learning of the present application: a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the metering automation terminal chip encryption method based on asynchronous federated learning.
The application has the following beneficial effects: in the metering automation terminal chip encryption method based on asynchronous federated learning provided by the application, an asynchronous federated learning algorithm replaces the traditional federated learning algorithm, which increases the speed at which each node collects and uploads its parameter model and reduces the risk of single-node privacy leakage. Through the two stages of updating and perturbation, the privacy gradient sampling method ensures that the model's gradient values cannot be cracked by reverse reasoning, improving the privacy protection of the model. By using a differential privacy mechanism within asynchronous federated learning, each node terminal has its own exclusive key, and the password changes as federated learning iterates, ensuring the security of the chip password. The application achieves better results in terms of confidentiality, terminal training efficiency, and resistance to attacks.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort. In the drawings:
fig. 1 is an overall flowchart of a metering automation terminal chip encryption method based on asynchronous federal learning according to an embodiment of the present application.
Fig. 2 is a schematic diagram of parameter staleness of a metering automation terminal chip encryption method based on asynchronous federal learning according to a first embodiment of the present application.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present application can be understood in detail, a more particular description of the application, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present application is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the application. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
While the embodiments of the present application have been illustrated and described in detail in the drawings, the cross-sectional views of device structures are, for ease of illustration, not drawn to scale, and the drawings are merely exemplary and should not be construed as limiting the scope of the application. In addition, the three-dimensional dimensions of length, width, and depth should be taken into account in actual fabrication.
Also in the description of the present application, it should be noted that the orientation or positional relationship indicated by the terms "upper, lower, inner and outer", etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present application and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first, second, or third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected, and coupled" should be construed broadly in this disclosure unless otherwise specifically indicated and defined, such as: can be fixed connection, detachable connection or integral connection; it may also be a mechanical connection, an electrical connection, or a direct connection, or may be indirectly connected through an intermediate medium, or may be a communication between two elements. The specific meaning of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art.
Example 1
Referring to Fig. 1, one embodiment of the present application provides a metering automation terminal chip encryption method based on asynchronous federated learning, comprising:
s1: the sub-nodes download and initialize the global model from the server, collect and upload unique information of the terminal, and make distinction.
Further, the unique information includes: terminal serial number, terminal asset number, and user number.
It should be noted that the terminal's unique information makes it possible to distinguish the data of different terminals in asynchronous federated learning, ensuring the privacy and security of the data. These data identify each terminal and also help the system administrator track the terminal device. By collecting and encrypting them, the privacy of the terminal device and user information can be protected, and sensitive data can be kept from unauthorized access.
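As a rough illustration of this step, the sketch below derives a per-terminal key from the three pieces of unique information. The HKDF-style extract/expand construction, the server_secret, and all names are assumptions made for illustration; the patent only states that the server creates a different key for each terminal from its unique information.

```python
import hashlib
import hmac
import os

def derive_terminal_key(serial: str, asset_no: str, user_no: str,
                        server_secret: bytes) -> bytes:
    """Derive a per-terminal key from the terminal's unique information.
    HKDF-SHA256 over the serial/asset/user numbers is an assumption here."""
    unique_info = f"{serial}|{asset_no}|{user_no}".encode()
    # Extract step: mix the server-side secret with the terminal identity.
    prk = hmac.new(server_secret, unique_info, hashlib.sha256).digest()
    # Expand step: one HMAC block suffices for a 32-byte chip key.
    return hmac.new(prk, b"chip-key" + b"\x01", hashlib.sha256).digest()

server_secret = os.urandom(32)          # held only by the parameter server
key = derive_terminal_key("SN-001", "ASSET-42", "USER-7", server_secret)
print(key.hex())
```

Because the key is a deterministic function of the terminal identity plus a server secret, each node terminal obtains a distinct key, matching the per-terminal key requirement of step S2.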
S2: the server confirms the unique information of each terminal, creates a different key for each terminal, and transmits the key to the terminal; the terminal encrypts according to its key, performs a round of federated learning, and uploads its gradient value after learning is completed.
Furthermore, since the model update in traditional federated learning requires each node terminal to share its local model, and the local model contains characteristic information of that terminal's training data, the local model must be processed by a differential privacy mechanism, especially when encrypting the data of a highly confidential security chip. The characteristic parameters of the local model are encrypted according to each terminal's different gradient values, in two stages: updating and perturbation.
It should be noted that asynchronous federated learning does not require collecting the gradient values of all edge nodes before performing a model update, so it is not affected by a small number of slow nodes.
It should be noted that the terminal encrypting according to the key comprises: updating and perturbing the encrypted characteristic parameters according to the terminal's different gradient values.
In the updating stage, owing to the nature of asynchronous federated learning, the model update does not depend on every node terminal's uploaded local model: updating of the global parameter model can begin as soon as a sufficient number of valid models have been received.
The parameter server issues an initialization model $\omega_0$. The node terminal downloads the initialization model and then updates it, calculating the local model gradient, expressed as:

$$g_i^k = \frac{1}{B_i} \sum_{j \in B_i} \nabla L\left(\omega_i^{k-1}; x_j, y_j\right)$$

where $\nabla$ represents the gradient calculation, $x_j$ and $y_j$ represent the local training data set and the sample data set respectively, $\omega_i^{k-1}$ is the local model calculated in the previous round, $i$ is the number of learning rounds, $j$ indexes the $j$-th data of the data set, $B_i$ indicates the batch size of the local data, $L$ indicates the loss function, and $g_i^k$ is the local model gradient value in the $k$-th round of local model iteration. The local model is then updated, expressed as:

$$\omega_i^k = \omega_i^{k-1} - \eta_i g_i^k$$

where $\eta_i$ is the learning rate of the model; for simplicity of calculation, the learning rates of all terminals are generally prescribed to be uniform.
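As a minimal numeric sketch of this local round, the snippet below uses a linear model with a squared loss as a stand-in for the unspecified local model; the function name and toy data are illustrative assumptions only.

```python
import numpy as np

def local_round(w_prev, x_batch, y_batch, eta):
    # One local round on a node terminal: the batch-averaged gradient
    # g_i^k = (1/B_i) * sum_j grad L(w_i^{k-1}; x_j, y_j), followed by
    # the SGD update w_i^k = w_i^{k-1} - eta_i * g_i^k.
    # The loss L(w; x, y) = 0.5 * (w.x - y)^2 is an assumed stand-in model.
    residuals = x_batch @ w_prev - y_batch              # shape (B_i,)
    grad = (x_batch * residuals[:, None]).mean(axis=0)  # averaged over batch
    return w_prev - eta * grad, grad

rng = np.random.default_rng(0)
w0 = rng.normal(size=3)               # initialization model from the server
x, y = rng.normal(size=(8, 3)), rng.normal(size=8)
w1, g = local_round(w0, x, y, eta=0.1)
```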
It should be noted that the traditional method cannot resist external reverse reasoning while ensuring that the uploaded gradient values reflect the model characteristics, so its privacy protection of the model is weak; through the two stages of updating and perturbation, the privacy gradient sampling method ensures that the model's gradient values cannot be cracked by reverse reasoning, improving the privacy protection of the model.
It should be noted that, in the perturbation stage, since feature information can be obtained from the gradient values by reverse reasoning, privacy leakage must be prevented with a differential privacy mechanism: a small privacy budget is allocated to the key gradient values, adding a large amount of noise, while a large privacy budget is allocated to the non-key gradient values, adding a small amount of noise, thereby improving the privacy protection of the model.
It should also be noted that the perturbation comprises: allocating the privacy budget and adding noise to the key gradient values. The privacy gradient is sampled and the availability of the gradient absolute value is calculated, expressed as:

$$u_l = \left| g_l \right|$$

where $u_l$ represents the absolute value of gradient value $l$. The total privacy budget $\varepsilon_1$ is allocated over the $r$ key gradient values through the exponential mechanism, expressed as:

$$\Pr(l) = \frac{\exp\!\left(\varepsilon_1 u_l / (2\Delta u)\right)}{\sum_{l'} \exp\!\left(\varepsilon_1 u_{l'} / (2\Delta u)\right)}$$

where $l$ indexes the gradient values, $\Pr$ represents the privacy budget allocation probability, $\exp$ represents the exponential function with base $e$, and $\Delta u$ represents the sensitivity increment of $u$. Laplace-mechanism noise at privacy budget $\varepsilon_2$ is added to the key gradient values, and noise at privacy budget $\varepsilon_3$ is added to the remaining $n-r$ of the $n$ gradient values, expressed as:

$$\tilde{g}_l = g_l + \mathrm{Lap}\!\left(0, \frac{\Delta u}{\varepsilon_m}\right)$$

where $m=2$ when $l = 1, 2, \ldots, r$ and $m=3$ when $l = r+1, r+2, \ldots, n$, and $\mathrm{Lap}(0, q)$ denotes a Laplace distribution with location parameter $0$ and scale parameter $q$; the calculation yields the perturbed gradient value $\tilde{g}_l$.
Further, the larger the availability function $u$ of a gradient value, the more critical that gradient value is, the more noise is added to it, and the more likely it is to be allocated a small privacy budget, and vice versa.
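A sketch of the full perturbation stage under stated assumptions is given below: availability is the gradient's absolute value, the exponential mechanism (budget ε1) selects the r key gradient values, and Laplace noise is added at the small budget ε2 for key values (more noise) and at ε3 for the rest. The sensitivity Δu, the sampling details, and all names are assumptions; the patent names the mechanisms without fixing these choices.

```python
import numpy as np

def perturb_gradients(g, r, eps1, eps2, eps3, delta_u, rng):
    """Two-stage perturbation sketch (assumed details, see lead-in)."""
    u = np.abs(g)                                  # availability u_l = |g_l|
    # Exponential mechanism: Pr[l] proportional to exp(eps1 * u_l / (2*delta_u))
    scores = np.exp(eps1 * u / (2.0 * delta_u))
    probs = scores / scores.sum()
    key_idx = rng.choice(len(g), size=r, replace=False, p=probs)
    mask = np.zeros(len(g), dtype=bool)
    mask[key_idx] = True
    noisy = g.copy()
    # Key gradients: small budget eps2 -> larger Laplace scale -> more noise.
    noisy[mask] += rng.laplace(loc=0.0, scale=delta_u / eps2, size=r)
    # Remaining n - r gradients: larger budget eps3 -> less noise.
    noisy[~mask] += rng.laplace(loc=0.0, scale=delta_u / eps3, size=len(g) - r)
    return noisy

rng = np.random.default_rng(1)
g = rng.normal(size=10)
g_tilde = perturb_gradients(g, r=3, eps1=1.0, eps2=0.2, eps3=1.0,
                            delta_u=1.0, rng=rng)
```

Note that eps2 < eps3 in the example, so the key gradient values receive more noise, consistent with the allocation rule described above.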
S3: the gradient values are corrected, valid gradient values are identified, and the server updates the global parameter model according to the uploaded terminal gradient values.
It should be noted that, in the dual weight calculation of asynchronous federated learning, the dual weights are the sample weight and the parameter weight. The sample weight is determined by the proportion of the terminal's data samples to the total number of samples; the parameter weight is determined by the similarity, along the time dimension, between the global parameters and the gradient's target parameters.
Further, the gradient value correction comprises: identifying valid gradient values through the dual weight calculation of asynchronous federated learning, the dual weights comprising the sample weight and the parameter weight. The sample weight is expressed as:

$$w_{\mathrm{sample}}^{i} = \frac{D_i}{D}$$

where $D_i$ represents the number of samples of a single node terminal and $D$ represents the total number of samples, expressed as:

$$D = \sum_{j} D_j$$

where $D_j$ indicates the number of samples held by terminal node $j$.
It should be noted that asynchronous federated learning involves the parameter server performing parameter updates and staleness calculation synchronously, the staleness being expressed as:

$$\mu_{\mathrm{staleness}} = I_{\mathrm{upload}} - I_{\mathrm{download}}$$

where $\mu_{\mathrm{staleness}}$ represents the number of parameter iterations performed by the parameter server while the terminal node completes one parameter iteration, and $I$ represents the number of learning rounds ($I_{\mathrm{download}}$ when the parameters are downloaded, $I_{\mathrm{upload}}$ when the gradient is uploaded).
It should further be noted that, as shown in the staleness diagram of Fig. 2, from the perspective of the parameter server the whole asynchronous federated learning process is one of continuously iterating parameter optimization and updating: terminal nodes continuously upload new gradient values to update the global model parameters while continuously requesting to download the latest parameters. If terminal B performs an upload in the time between terminal A downloading the parameters and uploading its gradient values, the parameter server faces a staleness problem when performing the parameter update.
Furthermore, according to the definition of the parameter weight, a corresponding attenuation function must be set in order to slow down the attenuation process of terminal nodes with poor computing capability. The set attenuation function is expressed as:

$$w_{\mathrm{stale}}^{i} = \left(1 + \mu_{\mathrm{staleness}}^{i}\right)^{-a}$$

where $\mu_{\mathrm{staleness}}^{i}$ represents the degree of staleness of node $i$, and the adjustable attenuation parameter $a$ reflects the speed of the terminal node's attenuation, the attenuation being proportional to $a$.
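A small sketch of the staleness bookkeeping and the attenuation weight follows; the polynomial decay (1 + mu)^(-a) mirrors the reconstruction above and is a standard asynchronous-federated-learning choice rather than a form confirmed by the source.

```python
def staleness_weight(i_upload: int, i_download: int, a: float) -> float:
    """Parameter weight from staleness. mu = I_upload - I_download counts
    the server-side iterations that passed while the node trained; the
    decay (1 + mu) ** -a is an assumed standard form, with the tunable
    parameter `a` controlling how fast stale nodes are attenuated."""
    mu = i_upload - i_download
    return (1.0 + mu) ** -a

# Terminal A downloaded at server round 5 and uploads at round 9: mu = 4.
print(staleness_weight(9, 5, a=0.5))   # ~0.447, stale update is damped
```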
It should be noted that the server updating the global parameter model comprises the gradient value uploaded by the node terminal participating in the optimization and updating of the global parameter model after dual weight correction, the dual weight correction being expressed as:

$$\theta' = w_{\mathrm{sample}}^{i} \cdot w_{\mathrm{stale}}^{i} \cdot \theta$$

where $\theta'$ represents the model parameters after dual weight correction and $\theta$ represents the original model parameters. When the uploaded valid gradient values reach the preset update requirement of the global parameter model, the parameter server optimizes and updates the global parameter model according to the gradient values of the current round; after the update is finished, each node downloads the new parameter model and carries out a new round of asynchronous federated learning.
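Putting the dual weights together, a server-side update might look like the sketch below. Applying the combined weight to the uploaded gradient inside a gradient step is an assumption; the patent states only that corrected values participate in the optimization and updating of the global parameter model.

```python
import numpy as np

def apply_update(theta, grad, d_i, d_total, i_upload, i_download,
                 a=0.5, eta=0.1):
    """Asynchronous server-side update sketch: the uploaded gradient is
    scaled by the sample weight D_i / D and the staleness weight before
    it touches the global parameters (assumed combination, see lead-in)."""
    w_sample = d_i / d_total                        # sample weight
    w_stale = (1.0 + (i_upload - i_download)) ** -a # parameter weight
    return theta - eta * w_sample * w_stale * grad

theta = np.zeros(3)
grad = np.array([0.2, -0.1, 0.05])      # perturbed gradient from one node
theta = apply_update(theta, grad, d_i=500, d_total=5000,
                     i_upload=9, i_download=5)
```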
Example 2
In order to verify the beneficial effects of the application, scientific demonstration was carried out through economic benefit calculation and simulation experiments.
MATLAB and CloudSim were used to evaluate the algorithm. Simulations were run in an environment with an Intel processor and 16 GB of RAM; the operating system was 64-bit Windows 11 Ultimate. The node system was simulated in the MATLAB programming language, records were connected, and the data distribution was constructed.
Both the traditional technical scheme and the scheme of the present application were run for 1000 iterations of parameter updating, during which 200 external reverse-reasoning cracking attacks and 200 key attacks were launched at random, and a simulation experiment was carried out on the training efficiency of the metering automation node terminals. Since the number of attacks is fixed, an average value is not meaningful for representing training efficiency, which is therefore expressed as the total training time.
As shown in the encryption performance comparison of Table 1, the traditional method cannot resist external reverse reasoning and attack while ensuring that the uploaded gradient values reflect the model characteristics, so its privacy protection is weak and it is easily attacked; by contrast, through the two stages of updating and perturbation, the privacy gradient sampling method prevents the model's gradient values from being cracked by reverse reasoning and attack, improving the privacy protection of the model. The traditional encryption method uses a bidirectional key, which is only moderately effective and easy to break, whereas in the present application each node terminal has its own exclusive key, and the password changes as federated learning iterates, ensuring the security of the chip password. Traditional machine learning must train on a large amount of data, and chip-encrypted data is private data requiring special processing, so its efficiency is low and its total training time is long; the present application can update the model without collecting the gradient values of all edge nodes, its efficiency is not affected by a small number of slow nodes, and the training time is significantly shortened.
Table 1 Encryption performance comparison

Scheme                               Reverse-reasoning cracks (times)   Key-attack cracks (times)   Total training time (s)
Traditional technical scheme         17                                 11                          16346
Scheme of the present application    1                                  1                           3433
It should be noted that the above embodiments are intended only to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that the technical solution of the present application may be modified or equivalently substituted without departing from its spirit and scope, and such modifications are intended to be covered by the claims of the present application.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the part of it that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disk.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.

Claims (10)

1. A metering automation terminal chip encryption method based on asynchronous federated learning, characterized by comprising the following steps:
the sub-nodes download and initialize the global model from the server, then collect and upload the terminal's unique information so that terminals can be distinguished;
the server confirms the unique information of each terminal, creates a different key for each terminal, and transmits the key to the terminal; the terminal encrypts according to its key, performs a round of federated learning, and uploads its gradient value after learning is completed;
the gradient values are corrected, valid gradient values are identified, and the server updates the global parameter model according to the uploaded terminal gradient values.
2. The metering automation terminal chip encryption method based on asynchronous federated learning according to claim 1, wherein: the unique information comprises a terminal serial number, a terminal asset number, and a user number.
3. The metering automation terminal chip encryption method based on asynchronous federated learning according to claim 1 or 2, wherein: the terminal encrypting according to the key comprises updating and perturbing the encrypted characteristic parameters according to each terminal's different gradient values;
the parameter server issues an initialization model $\omega_0$; the node terminal downloads the initialization model and then updates it, calculating the local model gradient, expressed as:

$$g_i^k = \frac{1}{B_i} \sum_{j \in B_i} \nabla L\left(\omega_i^{k-1}; x_j, y_j\right)$$

where $\nabla$ represents the gradient calculation, $x_j$ and $y_j$ represent the local training data set and the sample data set respectively, $\omega_i^{k-1}$ is the local model calculated in the previous round, $i$ is the number of learning rounds, $j$ indexes the $j$-th data of the data set, $B_i$ indicates the batch size of the local data, $L$ indicates the loss function, and $g_i^k$ is the local model gradient value in the $k$-th round of local model iteration;

the local model is updated, expressed as:

$$\omega_i^k = \omega_i^{k-1} - \eta_i g_i^k$$

where $\eta_i$ is the learning rate of the model.
4. The metering automation terminal chip encryption method based on asynchronous federated learning according to claim 3, wherein: the perturbation comprises allocating the privacy budget and adding noise to the key gradient values;
the privacy gradient is sampled and the gradient absolute value availability is calculated, expressed as:
wherein u represents the absolute value of the gradient;
total privacy budget epsilon 1 The assignment of r gradient values is expressed as:
wherein l represents the gradient value, pr represents the privacy budget private function, exp represents the exponential function based on e, and delta represents the increment;
adding privacy budget noise ε for key gradient values using Laplace mechanism 2 Increasing privacy budget noise ε for the remaining r-n of r gradient values 3 Expressed as:
where, when l=1, 2, …, r, m=2, when l=r+1, r+2, …, n, m=3,is a parameter of +.>Q represents a scale parameter, and the gradient value after disturbance is calculated>
5. The metering automation terminal chip encryption method based on asynchronous federated learning according to claim 4, wherein: the gradient value correction comprises identifying valid gradient values through the dual weight calculation of asynchronous federated learning, the dual weights comprising a sample weight and a parameter weight, the sample weight being expressed as:

$$w_{\mathrm{sample}}^{i} = \frac{D_i}{D}$$

where $D_i$ represents the number of samples of a single node terminal and $D$ represents the total number of samples, expressed as:

$$D = \sum_{j} D_j$$

where $D_j$ indicates the number of samples held by terminal node $j$.
6. The metering automation terminal chip encryption method based on asynchronous federated learning according to claim 5, wherein: the asynchronous federated learning comprises the parameter server performing parameter updates and staleness calculation synchronously, the parameter staleness being expressed as:

$$\mu_{\mathrm{staleness}} = I_{\mathrm{upload}} - I_{\mathrm{download}}$$

where $\mu_{\mathrm{staleness}}$ represents the number of parameter iterations performed by the parameter server while the terminal node completes one parameter iteration, and $I$ represents the number of learning rounds.
7. The metering automation terminal chip encryption method based on asynchronous federated learning according to claim 6, wherein: the gradient value correction further comprises slowing down the attenuation process of the terminal node, the set attenuation function being expressed as:

$$w_{\mathrm{stale}}^{i} = \left(1 + \mu_{\mathrm{staleness}}^{i}\right)^{-a}$$

where $\mu_{\mathrm{staleness}}^{i}$ represents the degree of staleness of node $i$ and the adjustable attenuation parameter $a$ reflects the speed of the terminal node's attenuation.
8. The metering automation terminal chip encryption method based on asynchronous federated learning according to claim 1, wherein: the server updating the global parameter model comprises the gradient value uploaded by the node terminal participating in the optimization and updating of the global parameter model after dual weight correction, the dual weight correction being expressed as:

$$\theta' = w_{\mathrm{sample}}^{i} \cdot w_{\mathrm{stale}}^{i} \cdot \theta$$

where $\theta'$ represents the model parameters after dual weight correction and $\theta$ represents the original model parameters;

when the uploaded valid gradient values reach the preset update requirement of the global parameter model, the parameter server optimizes and updates the global parameter model according to the gradient values of the current round; after the update is finished, each node downloads the new parameter model and carries out a new round of asynchronous federated learning.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that: the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 8.
10. A computer-readable storage medium having a computer program stored thereon, characterized in that: the computer program, when executed by a processor, implements the metering automation terminal chip encryption method based on asynchronous federated learning according to any one of claims 1 to 8.
CN202310472675.5A 2023-04-27 2023-04-27 Metering automation terminal chip encryption method based on asynchronous federated learning Pending CN116644816A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310472675.5A CN116644816A (en) Metering automation terminal chip encryption method based on asynchronous federated learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310472675.5A CN116644816A (en) Metering automation terminal chip encryption method based on asynchronous federated learning

Publications (1)

Publication Number Publication Date
CN116644816A true CN116644816A (en) 2023-08-25

Family

ID=87623807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310472675.5A Pending CN116644816A (en) Metering automation terminal chip encryption method based on asynchronous federated learning

Country Status (1)

Country Link
CN (1) CN116644816A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117395083A (en) * 2023-12-11 2024-01-12 东信和平科技股份有限公司 Data protection method and system based on federal learning
CN117395083B (en) * 2023-12-11 2024-03-19 东信和平科技股份有限公司 Data protection method and system based on federal learning

Similar Documents

Publication Publication Date Title
CN105335411A (en) Method and system for data processing
CN116644816A (en) Metering automation terminal chip encryption method based on asynchronous federated learning
US20210142223A1 (en) Hierarchical federated learning using access permissions
CN111552849B (en) Searchable encryption method, system, storage medium, vehicle-mounted network and smart grid
CN110222874B (en) Information processing method and device, storage medium and computing equipment
CN104598539A (en) Internet event hot degree calculation method and terminal
CN112148883A (en) Embedding representation method of knowledge graph and related equipment
CN101984620B (en) Codebook generating method and convert communication system
CN114662157B (en) Block compressed sensing indistinguishable protection method and device for social text data stream
Kvet et al. Concept of temporal data retrieval: Undefined value management
CN103530390A (en) Webpage crawling method and device
Lian et al. Partially linear structure selection in Cox models with varying coefficients
CN115545210A (en) Method and related apparatus for quantum computing
Su et al. Parameter estimation for fractional diffusion process with discrete observations
CN117009539A (en) Entity alignment method, device, equipment and storage medium of knowledge graph
CN108683749A (en) A kind of judgment method, equipment and the medium of random email address
CN115099875A (en) Data classification method based on decision tree model and related equipment
CN111177565B (en) Interest point recommendation method based on correlation matrix and word vector model
Ma et al. [Retracted] The Construction of Big Data Computational Intelligence System for E‐Government in Cloud Computing Environment and Its Development Impact
CN114265560A (en) Self-standardization storage system for hundred million-level compliance index service data
Kabir et al. Optimal search algorithm in a big database using interpolation–extrapolation method
Samadi et al. Nonlocal fractional hybrid boundary value problems involving mixed fractional derivatives and integrals via a generalization of Darbo’s theorem
CN114430351B (en) Distributed database node secure communication method and system
Wang An adaptive variational mode decomposition technique with differential evolution algorithm and its application analysis
Singthongla et al. SEL series expansion and generalized model construction for the real number system via series of rationals

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination