CN116090014B - Differential privacy distributed random optimization method and system for smart grid - Google Patents

Differential privacy distributed random optimization method and system for smart grid

Info

Publication number: CN116090014B
Application number: CN202310361388.7A
Authority: CN (China)
Prior art keywords: algorithm, differential privacy, privacy, noise, representing
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: CN116090014A (Chinese)
Inventors: 张纪峰, 王继民, 赵延龙, 郭金
Current assignee: University of Science and Technology Beijing USTB; Academy of Mathematics and Systems Science of CAS (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by: University of Science and Technology Beijing USTB and Academy of Mathematics and Systems Science of CAS
Priority: CN202310361388.7A (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Events: application filed; publication of CN116090014A; application granted; publication of CN116090014B; status active; anticipated expiration

Classifications

    • G06F21/6245 — Protecting personal data, e.g. for financial or medical purposes (under G06F21/62: protecting access to data via a platform, e.g. using keys or access control rules; G06F21/00: security arrangements for protecting computers, components thereof, programs or data against unauthorised activity)
    • G06F17/18 — Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis (under G06F17/00: digital computing or data processing equipment or methods, specially adapted for specific functions)
    • G06Q50/06 — Energy or water supply (under G06Q50/00: information and communication technology specially adapted for implementation of business processes of specific business sectors)


Abstract

The invention provides a differential privacy distributed random optimization method and system for smart grids, in the technical field of data privacy protection. The method comprises the following steps: a differential privacy distributed random optimization algorithm based on output perturbation is defined as Algorithm I; a differential privacy distributed random optimization algorithm based on gradient perturbation is defined as Algorithm II; approximately infinite iterations of Algorithm I and Algorithm II are performed respectively, and the noise variance condition under which ε-differential privacy is satisfied over infinitely many iterations is derived; according to the noise variance condition, the convergence rates of Algorithm I and Algorithm II are calculated respectively, and the optimal point is determined; the user selects between Algorithm I and Algorithm II according to actual requirements, completing the smart-grid-oriented differential privacy distributed random optimization. By a variable-sample-size method, the invention establishes differential privacy with a finite cumulative privacy budget ε over an infinite number of iterations. By properly selecting the Lyapunov function, the algorithms achieve almost-sure and mean-square convergence.

Description

Differential privacy distributed random optimization method and system for smart grid
Technical Field
The invention relates to the technical field of data privacy protection, in particular to a differential privacy distributed random optimization method and system for a smart grid.
Background
In recent years, information and artificial intelligence technologies have been increasingly applied in emerging fields such as the Internet of Things, cloud-based control systems, intelligent buildings, and autonomous vehicles. The widespread use of such technologies provides attackers with more avenues to access sensitive information (e.g., eavesdropping on communication channels, hacking information processing centers, or colluding with participants in the system), rapidly increasing the risk of privacy disclosure.
In a centralized estimation scenario, all sensors transmit data to one fusion center. With the rapid development of sensor networks and wireless communication, system scale keeps growing, and the computation and communication burden increases rapidly with it. Moreover, in centralized processing, collecting measurements from all distributed sensors on the network may be infeasible in many practical situations due to limited communication capacity, energy consumption, packet loss, and the like.
A distributed random optimization algorithm operates on information exchanged with an individual's neighbors and on the individual's own (sampled) gradient information. An attacker can use the information sequences transmitted across the distributed network to infer an agent's sensitive information. In distributed random optimization, sensitive personal information is frequently embedded in each agent's gradient information, mainly because the gradient takes individual-specific data as input, which is often proprietary.
In localization based on decentralized optimization, revealing an agent's gradient is equivalent to revealing its location. In machine learning applications, gradients are computed directly from sensitive training data, so the training data is embedded in the gradient information. Information about gradients is therefore considered sensitive and should be protected from leakage while solving the distributed random optimization problem.
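A toy example of why gradients leak data, under assumptions invented purely for illustration: for a scalar least-squares term f(w) = 0.5(wx − y)², the gradient is (wx − y)x, so an observer who sees exact gradients at known parameter values can recover the private sample x up to sign.

```python
def grad(w, x, y):
    """Gradient of the least-squares term f(w) = 0.5 * (w * x - y)**2 with respect to w."""
    return (w * x - y) * x

# hypothetical private sample held by one agent
x_private, y_private = 3.7, 1.2

# an eavesdropper observes gradients at two known parameter values w1 and w2 ...
w1, w2 = 0.0, 1.0
g1 = grad(w1, x_private, y_private)
g2 = grad(w2, x_private, y_private)

# ... and recovers the sample from the difference, since g2 - g1 = (w2 - w1) * x**2
x_recovered = ((g2 - g1) / (w2 - w1)) ** 0.5
print(x_recovered)  # recovers |x_private|
```

This is exactly the kind of inference that adding privacy noise to the transmitted quantities is meant to prevent.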
Conventional distributed random optimization algorithms cannot protect sensitive information in such distributed networks. A new privacy-preserving distributed random optimization approach that addresses these privacy concerns is therefore necessary.
Disclosure of Invention
The invention provides a differential privacy distributed random optimization method and system for smart grids, solving the problem that conventional distributed random optimization algorithms cannot protect sensitive information in such distributed networks.
In order to achieve the above object, the technical solution provided by the invention is as follows: a differential privacy distributed random optimization method for smart grids, comprising the following steps:
S1: constructing a smart-grid-oriented differential privacy distributed random optimization model comprising two algorithms: a differential privacy distributed random optimization algorithm based on output perturbation, defined as Algorithm I; and a differential privacy distributed random optimization algorithm based on gradient perturbation, defined as Algorithm II;
S2: performing approximately infinite iterations of Algorithm I and Algorithm II respectively, and deriving the noise variance condition under which ε-differential privacy is satisfied over infinitely many iterations;
S3: respectively calculating the convergence rates of Algorithm I and Algorithm II according to the noise variance condition, and determining the optimal solution;
S4: the user selects between Algorithm I and Algorithm II according to actual requirements, optimizes the electricity-meter data in the smart grid, and completes the smart-grid-oriented differential privacy distributed random optimization.
Preferably, in step S2, performing infinitely many iterations of Algorithm I comprises:
masking the state of each agent in Algorithm I with an additive random variable;
each agent i in Algorithm I receiving the noise state \tilde{x}_j^k = x_j^k + d_j^k of its neighbor agent j and updating its estimated state according to the following equation (1):

x_i^{k+1} = x_i^k + \beta_k \sum_{j \in N_i} a_{ij} (\tilde{x}_j^k - x_i^k) - \alpha_k g_i(x_i^k)    (1)

where x_i^k is the estimated state of agent i at the k-th iteration, with k = 1, 2, 3, …, n; d_j^k is Laplace noise whose elements are zero-mean, independent and identically distributed with variance \sigma_k^2; a_{ij} denotes the elements of the adjacency matrix of the smart-grid network graph; g_i(x_i^k) denotes the optimization gradient of agent i at the k-th iteration; \alpha_k is the gradient step size; \beta_k is a newly introduced step size, according to which the information from neighbor agent j is weighted; N_i denotes the neighborhood of agent i.
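As a concrete illustration of the output-perturbation update (1), the following is a minimal numerical sketch, not the patented implementation: the complete-graph mixing matrix, the quadratic local objectives f_i(x) = 0.5(x − t_i)², and the step-size and noise schedules are all assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4                                     # number of agents
A = np.full((n, n), 1.0 / n)              # doubly stochastic adjacency matrix (assumed complete graph)
targets = np.array([1.0, 2.0, 3.0, 4.0])  # local objectives f_i(x) = 0.5 * (x - targets[i])**2
x = np.zeros(n)                           # estimated states x_i^k

for k in range(1, 2001):
    alpha = 1.0 / k                # gradient step size alpha_k (assumed schedule)
    beta = 1.0 / k ** 0.8          # neighbor-weighting step size beta_k (assumed schedule)
    sigma = 0.1 * k ** 0.1         # slowly increasing privacy-noise scale (assumed schedule)
    d = rng.laplace(scale=sigma, size=n)  # Laplace privacy noise d_j^k
    x_noisy = x + d                # each agent broadcasts only its noise state, never its true state
    grad = x - targets             # g_i(x_i^k) for the quadratic objectives
    x = x + beta * (A @ x_noisy - x) - alpha * grad

# all agents approach the common minimizer of the average objective, mean(targets) = 2.5
print(np.round(x, 2))
```

Because every agent broadcasts only its Laplace-masked state, an eavesdropper on the links never observes x_i^k directly, while the decaying step size β_k averages the injected noise out over the iterations.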
Preferably, in step S2, performing infinitely many iterations of Algorithm II comprises:
each agent i in Algorithm II receiving the state x_j^k of its neighbor agent j and updating its estimated state with a perturbed gradient according to the following equation (2):

x_i^{k+1} = x_i^k + \beta_k \sum_{j \in N_i} a_{ij} (x_j^k - x_i^k) - \alpha_k ( g_i(x_i^k) + d_i^k )    (2)

where d_i^k is the privacy noise added by agent i at each time k.
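For contrast, here is a minimal sketch of the gradient-perturbation update (2) under the same assumed setup as before (complete-graph mixing, quadratic local objectives, and illustrative step-size and noise schedules that are not the patent's):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 4
A = np.full((n, n), 1.0 / n)              # doubly stochastic mixing matrix (assumed complete graph)
targets = np.array([1.0, 2.0, 3.0, 4.0])  # local objectives f_i(x) = 0.5 * (x - targets[i])**2
x = np.zeros(n)

for k in range(1, 2001):
    alpha = 1.0 / k                                # gradient step size alpha_k (assumed schedule)
    beta = 1.0 / k ** 0.8                          # neighbor-weighting step size beta_k (assumed schedule)
    d = rng.laplace(scale=0.1 * k ** 0.1, size=n)  # privacy noise d_i^k
    noisy_grad = (x - targets) + d                 # g_i(x_i^k) + d_i^k: the noise perturbs the gradient itself
    x = x + beta * (A @ x - x) - alpha * noisy_grad

# agents again approach the common minimizer mean(targets) = 2.5
print(np.round(x, 2))
```

Here the noise enters the iterates only through the gradient term scaled by α_k, so its influence decays faster than in the output-perturbation sketch, at the cost of transmitting unmasked states.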
Preferably, in step S2, deriving the noise variance condition under which Algorithm I satisfies ε-differential privacy over infinitely many iterations comprises:
given a constant C greater than 0, two different samples of gradient information D and D' are adjacent if D and D' differ in exactly one data sample l;
where l and l' denote two different realizations of the random variable over its whole value range, and g(l) and g'(l') denote the corresponding realizations of the gradient on those data samples;
setting a critical quantity in Algorithm I, namely the sensitivity, through which the amount of noise that should be added in each iteration to achieve ε-differential privacy is determined; the sensitivity of the output map q at the k-th iteration is defined as the following equation (1-1):

\Delta_k = \sup_{D, D' \text{ adjacent}} \| q(D) - q(D') \|_1    (1-1)

where D and D' respectively denote two different samples of gradient information; A denotes the adjacency matrix; \|\cdot\|_1 denotes the 1-norm; \sup denotes the least upper bound (supremum);
the sensitivity at the k-th iteration of Algorithm I satisfies the following equation (1-2):

\Delta_{k+1} \le 2C \sum_{b=0}^{k} \Big( \prod_{t=b+1}^{k} (1 - \beta_t) \Big) \frac{\alpha_b}{n_b}    (1-2)

where d_j^k denotes Laplace-distributed noise with variance \sigma_k^2; C is any given positive number; b denotes the accumulation index; t denotes the multiplication index, and \beta_t denotes the step size entering the cumulative product; n_k is the number of sampled gradients used at the k-th iteration, so n_0 denotes the 0-th and n_b the b-th; \alpha_0 denotes the gradient step size of the 0-th iteration; \alpha_b denotes the gradient step size of the b-th iteration;
if the following equation (1-3) holds, then Algorithm I satisfies ε-differential privacy over infinitely many iterations:

\sum_{k=0}^{\infty} \frac{\Delta_k}{\sigma_k} \le \varepsilon    (1-3)

where \sigma_k denotes the privacy noise parameter.
Preferably, in step S2, deriving the noise variance condition under which Algorithm I satisfies ε-differential privacy over infinitely many iterations further comprises:
based on equation (1-2), let \alpha_1 denote the gradient step size of the 1st iteration and \alpha_2 the gradient step size of the 2nd iteration; if one of the following conditions is satisfied:
i) ;
ii) ;
iii) .
then Algorithm I has differential privacy with a finite cumulative privacy budget ε over infinitely many iterations, where η is a constant greater than 0.
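The finite-cumulative-budget claim can be sanity-checked numerically. Assuming, consistently with equation (1-3), that each iteration costs Δ_k/σ_k and that the sensitivity shrinks like 2C/n_k, a geometrically growing sample size n_k together with a polynomially growing noise scale σ_k makes the series converge. The schedules below are illustrative assumptions, not the patent's conditions i)-iii).

```python
def cumulative_budget(K, growth=1.5, gamma=0.2, C=1.0):
    """Sum illustrative per-iteration privacy costs eps_k = Delta_k / sigma_k over K iterations,
    with variable sample size n_k = growth**k (so Delta_k <= 2C/n_k) and noise scale sigma_k = k**gamma."""
    total = 0.0
    for k in range(1, K + 1):
        n_k = growth ** k        # geometrically growing sample size
        delta_k = 2 * C / n_k    # sensitivity: one changed sample out of n_k
        sigma_k = k ** gamma     # increasing Laplace noise scale
        total += delta_k / sigma_k
    return total

# the partial sums stabilize quickly, i.e. the cumulative privacy budget stays finite
print(cumulative_budget(50), cumulative_budget(500))
```

This is the variable-sample-size idea in miniature: growing n_k shrinks each iteration's sensitivity fast enough that infinitely many noisy releases still spend only a finite budget.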
Preferably, in step S2, deriving the noise variance condition under which Algorithm II satisfies ε-differential privacy over infinitely many iterations comprises:
setting the critical quantity, the sensitivity, in Algorithm II, through which the amount of noise that should be added in each iteration to achieve ε-differential privacy is determined; the sensitivity of the output map q at the k-th iteration is defined as in equation (1-1); the sensitivity of Algorithm II satisfies the following equation (2-1):

\Delta_k \le \frac{2C}{n_k}    (2-1)

where d_i^k is noise sampled from a Laplace distribution with variance \sigma_k^2; C is any given positive number; \xi denotes the random variable; g_i denotes the gradient that each agent i can obtain; if the following equation (2-2) holds, then Algorithm II satisfies ε-differential privacy over infinitely many iterations:

\sum_{k=0}^{\infty} \frac{\Delta_k}{\sigma_k} \le \varepsilon    (2-2)

when \sum_{k=0}^{\infty} \frac{2C}{n_k \sigma_k} < \infty, Algorithm II has differential privacy with a finite cumulative privacy budget ε over infinitely many iterations.
Preferably, in step S3, before respectively calculating the convergence rates of Algorithm I and Algorithm II, the following conditions are set:
defining, for Algorithm I or Algorithm II, the stacked vectors of the following equation (3):

X_k = [ (x_1^k)^T, …, (x_n^k)^T ]^T,  G(X_k) = [ g_1(x_1^k)^T, …, g_n(x_n^k)^T ]^T    (3)

where T denotes the transpose; g_i(x_i^k)^T denotes the transpose of the gradient; G(X_k) denotes the stacked matrix of the gradient vectors of each agent i, i = 1, 2, 3, …, n; let \bar{x}_k be the average of the x_i^k and \bar{g}_k be the average of the g_i(x_i^k), i.e. \bar{x}_k = \frac{1}{n} \sum_{i=1}^{n} x_i^k and \bar{g}_k = \frac{1}{n} \sum_{i=1}^{n} g_i(x_i^k); define the average noise \bar{d}_k = \frac{1}{n} \sum_{i=1}^{n} d_i^k; since A is doubly stochastic, the following equation (4) is obtained:

\bar{x}_{k+1} = \bar{x}_k + \beta_k \bar{d}_k - \alpha_k \bar{g}_k    (4)

The following condition settings are made on the basis of equation (4):
Condition 1: for all agents i, each function f_i is Lipschitz continuous, i.e. there exists L_i > 0 such that:

\| g_i(x) - g_i(y) \| \le L_i \| x - y \|  for all x, y

where L_i denotes a defined variable greater than zero;
each function f_i is μ-strongly convex if and only if, for all x and y, the following is satisfied:

( x - y )^T ( g_i(x) - g_i(y) ) \ge \mu \| x - y \|^2

where μ denotes a constant greater than 0;
Condition 2: for any fixed x and i, there exists a positive constant c such that the sampled gradient \nabla f_i(x, \xi) satisfies:

E[ \nabla f_i(x, \xi) ] = g_i(x)

and:

E[ \| \nabla f_i(x, \xi) - g_i(x) \|^2 ] \le c

where E denotes the expectation of the variable;
Condition 3: the undirected communication topology G is connected, and the adjacency matrix A = [a_{ij}] satisfies the following conditions: (i) there exists a positive constant a such that a_{ij} \ge a when agent j \in N_i, and a_{ij} = 0 when j \notin N_i; (ii) A is doubly stochastic, i.e. \sum_j a_{ij} = \sum_i a_{ij} = 1;
Condition 4: the step size \alpha_k, the privacy noise parameter \sigma_k, and the variable sample size n_k satisfy one of the following conditions 4a) or 4b):
4a)
4b)
Condition 5: the step size \alpha_k, the privacy noise parameter \sigma_k, and the variable sample size n_k satisfy one of the following conditions:
5a)
5b)
preferably, in step S3, the calculating the convergence speed of the first algorithm according to the noise variance condition, to determine the optimal solution includes:
When conditions 1, 2, 3 and 4 a are satisfied), then the algorithm pair is for all agentsConvergence, i.e. there is an optimal solution->Make->
When conditions 1, 2, 3 and 4 b) are satisfied, then the algorithm pairs allMean square convergence, i.e. there is an optimal solution +.>So thatWhere E represents the desire of the variable.
Preferably, in step S3, the calculating the convergence rate of the second algorithm according to the noise variance condition, to determine the optimal solution includes:
when conditions 1, 2, 3 and 5 a) are satisfied, algorithm two is for allConvergence, i.e. there is an optimal solution->So that
When conditions 1, 2, 3 and 5 b) are satisfied, then the algorithm pairs allMean square convergence, i.e. there is an optimal solution +.>So that
A differential privacy distributed random optimization system for smart grids, used for the above differential privacy distributed random optimization method for smart grids, comprises:
an algorithm construction module, used for constructing a differential privacy distributed random optimization model comprising two algorithms: a differential privacy distributed random optimization algorithm based on output perturbation, defined as Algorithm I; and a differential privacy distributed random optimization algorithm based on gradient perturbation, defined as Algorithm II;
an iteration module, used for performing approximately infinite iterations of Algorithm I and Algorithm II respectively, and deriving the noise variance condition under which ε-differential privacy is satisfied over infinitely many iterations;
a convergence calculation module, used for respectively calculating the convergence rates of Algorithm I and Algorithm II according to the noise variance condition, and determining the optimal point;
an optimization algorithm selection module, used for the user to select between Algorithm I and Algorithm II according to actual requirements, completing the smart-grid-oriented differential privacy distributed random optimization.
In one aspect, an electronic device is provided, which includes a processor and a memory, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the above-described smart grid-oriented differential privacy distributed random optimization method.
In one aspect, a computer readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the above-described smart grid-oriented differential privacy distributed random optimization method.
Compared with the prior art, the above technical solution has at least the following beneficial effects:
according to this solution, for both gradient and output perturbation, when the added privacy noise has increasing variance, the convergence of the algorithms and differential privacy with a finite cumulative privacy budget ε over infinitely many iterations are established simultaneously. The mean-square convergence rate of the algorithms is given rigorously, and how the added privacy noise affects the convergence rate is demonstrated.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow diagram of a differential privacy distributed random optimization method for a smart grid, which is provided by the embodiment of the invention;
FIG. 2 is a diagram of undirected interaction topology provided by an embodiment of the present invention;
FIG. 3 is a simulation diagram of the unbiased estimation of unknown parameters by Algorithm I provided by an embodiment of the present invention;
FIG. 4 is a graph showing how the privacy budget ε varies with the parameters η and γ in Algorithm I provided by an embodiment of the present invention;
FIG. 5 is a simulation diagram of the unbiased estimation of unknown parameters by Algorithm II provided by an embodiment of the present invention;
FIG. 6 is a graph showing how the privacy budget ε varies with the parameters η and γ in Algorithm II provided by an embodiment of the present invention;
fig. 7 is a block diagram of a differential privacy distributed random optimization system for a smart grid, which is provided by an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present invention. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without creative efforts, based on the described embodiments of the present invention fall within the protection scope of the present invention.
Aiming at the problem that conventional distributed random optimization algorithms cannot protect sensitive information in distributed networks, the invention provides a novel privacy-preserving distributed random optimization method for smart grids that can solve this privacy problem.
As shown in fig. 1, the embodiment of the invention provides a differential privacy distributed random optimization method for smart grids, which can be implemented by an electronic device. As shown in the flow chart of fig. 1, the method can comprise the following steps:
s101: constructing a differential privacy distributed random optimization model, wherein the differential privacy distributed random optimization model comprises two algorithms: the differential privacy distributed random optimization algorithm based on output disturbance is defined as algorithm I; the differential privacy distributed random optimization algorithm based on gradient disturbance is defined as an algorithm II.
In a possible embodiment, the invention establishes differential privacy with a finite cumulative privacy budget ε over an infinite number of iterations by a variable-sample-size method. By properly selecting the Lyapunov function, the algorithms achieve almost-sure and mean-square convergence.
In embodiments of the present invention, data privacy is an important issue in control systems, particularly when the data set contains sensitive information about individuals.
Differential privacy is a well-known privacy concept with applications in many areas. It has so far attracted considerable attention across the computer, control, and communication sciences, including data mining, social networking, machine learning, and distributed optimization. Roughly speaking, differential privacy ensures that any single individual participating in a database has no substantial impact on the output of the data processing. In this case, an adversary eavesdropping on the data-processing output cannot substantially compromise the privacy of any individual's private data.
In the embodiment of the invention, in a smart grid, real-time electricity consumption data and other sensitive data of users are collected and transmitted by smart meters, which brings a risk of privacy disclosure for individual users. By analyzing the collected electricity consumption data, an attacker can infer a user's activities and behavior patterns, such as whether the user is bathing or cooking, which appliances are running in the house, and whether anyone is home. More seriously, an attacker can obtain a user's identity and residence information from the data collected by the smart meter, enabling burglary. Clearly, exposing such sensitive information poses a threat to users' privacy. Meanwhile, an attacker may also cause economic losses to users or the power provider by injecting counterfeit information or making unreasonable demands. Therefore, it is necessary not only to ensure the confidentiality and integrity of the information transmitted in the smart grid, but also to protect users' private information in the smart grid from disclosure. By applying the algorithms of this solution, users' meter data can be optimized so that an attacker obtains false information and cannot further infer a user's living state, thereby protecting users' personal privacy.
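As a hedged illustration of the meter-data scenario (the readings, interval length, sensitivity bound, and budget below are all invented for this example), Laplace noise calibrated as sensitivity/ε can mask individual consumption spikes in what is reported:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical 15-minute smart-meter readings in kWh; the spike marks an appliance switching on
readings = np.array([0.05, 0.06, 0.05, 1.80, 1.75, 0.07, 0.05, 0.06])

sensitivity = 2.0              # assumed bound on how much one interval's reading can differ (kWh)
epsilon = 2.0                  # assumed per-release privacy budget
scale = sensitivity / epsilon  # Laplace scale b = Delta / epsilon

reported = readings + rng.laplace(scale=scale, size=readings.size)

# the reported trace no longer cleanly shows which intervals contained the appliance spike,
# while remaining centered on the true readings (the noise is zero-mean)
print(np.round(reported, 2))
```

An eavesdropper sees only the perturbed trace, so appliance-level activity patterns are obscured even though longer-horizon consumption trends remain estimable from many noisy releases.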
Some standard notation is used throughout this document. For a symmetric matrix P, P ≥ 0 denotes that P is positive semi-definite. 1 denotes a column vector of appropriate dimension with all elements equal to 1. R^n and R^{m×n} respectively denote the n-dimensional Euclidean space and the set of all m×n real matrices. For any x, y ∈ R^n, ⟨x, y⟩ denotes the standard inner product on R^n. ‖x‖ denotes the Euclidean norm of the vector x. I and 0 respectively denote the identity matrix and the zero matrix of appropriate dimensions. For a differentiable function f, ∇f(x) denotes the gradient of f at x. The expectation of a random variable ξ is denoted by E[ξ]. Given two real-valued functions g(k) and h(k) defined on the positive integers, with h(k) strictly positive for all sufficiently large k: if there exist c > 0 and k_0 such that |g(k)| ≤ c·h(k) for all k ≥ k_0, then g(k) = O(h(k)); if for any c > 0 there exists k_0 such that |g(k)| ≤ c·h(k) for all k ≥ k_0, then g(k) = o(h(k)).
S102: performing approximately infinite iterations of Algorithm I and Algorithm II respectively, and deriving the noise variance condition under which ε-differential privacy is satisfied over infinitely many iterations.
S103: respectively calculating the convergence rates of Algorithm I and Algorithm II according to the noise variance condition, and determining the optimal point.
In a possible implementation, in S102, the state of each agent i in Algorithm I is masked with an additive random variable;
performing infinitely many iterations of Algorithm I comprises:
each agent i in Algorithm I accepting the noise state \tilde{x}_j^k = x_j^k + d_j^k of its neighbor agent j and updating its estimated state according to the following equation (1):

x_i^{k+1} = x_i^k + \beta_k \sum_{j \in N_i} a_{ij} (\tilde{x}_j^k - x_i^k) - \alpha_k g_i(x_i^k)    (1)

where x_i^k is the estimated state of agent i at the k-th iteration, with k = 1, 2, 3, …, n; d_j^k is Laplace noise whose elements are zero-mean, independent and identically distributed with variance \sigma_k^2; a_{ij} denotes the elements of the adjacency matrix of the smart-grid network graph; g_i(x_i^k) denotes the optimization gradient of agent i at the k-th iteration; \alpha_k is the gradient step size; \beta_k is a newly introduced step size, according to which the information from neighbor agent j is weighted; N_i denotes the neighborhood of agent i.
In a possible implementation, a differential privacy distributed random optimization algorithm with variable sample size is proposed via output perturbation, called Algorithm I, in which the state of each agent is masked using an additive random variable. Specifically, in each iteration of Algorithm I, each agent i sends its current noise state \tilde{x}_i^k = x_i^k + d_i^k to each of its neighbors j, rather than its original state, where x_i^k is the estimated state of agent i at time k and d_i^k is zero-mean, independent and identically distributed Laplace noise with variance \sigma_k^2. Each agent i receives the noise states \tilde{x}_j^k from its neighbors and then updates x_i^k as above, where \tilde{x}_i^k is the noise state of agent i. Besides the gradient step size \alpha_k, a new step size \beta_k is introduced to determine the degree to which the information from the neighbors should be weighted. The information transmitted by neighbors can be of many kinds; in a multi-agent system, various information exchanges exist between an agent and its neighbors, so the transmitted information needs to be weighted in the algorithm, and different step sizes give different weighting degrees.
As previously described, to protect privacy, each agent i generates a noise state by adding a noise vector to its local state x_i^k, i.e. \tilde{x}_i^k = x_i^k + d_i^k. This approach to ensuring differential privacy is called output perturbation.
Next, the noise variance condition under which Algorithm I satisfies ε-differential privacy over infinitely many iterations is derived.
In differential privacy, a critical quantity, called the sensitivity, determines how much noise should be added in each iteration to achieve ε-differential privacy.
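The role of sensitivity can be made concrete with the standard scalar Laplace mechanism, a textbook construction rather than code from the patent: adding Laplace noise with scale Δ/ε to a query of sensitivity Δ yields ε-differential privacy, so per-iteration costs of the form Δ_k/σ_k accumulate across iterations.

```python
import numpy as np

rng = np.random.default_rng(7)

def laplace_mechanism(query_value, sensitivity, epsilon):
    """Release query_value perturbed by Laplace noise with scale sensitivity / epsilon."""
    return query_value + rng.laplace(scale=sensitivity / epsilon)

# mean query over n samples bounded in [0, 1]: changing one sample moves the mean by at most 1/n,
# so the sensitivity is 1/n and the required noise shrinks as the sample size grows
n = 1000
data = rng.uniform(0.0, 1.0, size=n)
sensitivity = 1.0 / n

released = laplace_mechanism(data.mean(), sensitivity, epsilon=0.1)
print(abs(released - data.mean()))  # perturbation on the order of sensitivity / epsilon = 0.01
```

The same intuition drives the variable-sample-size approach: larger per-iteration sample sizes mean lower sensitivity, hence less noise for the same privacy budget.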
In a possible implementation, step S102 derives a noise variance condition for the algorithm that satisfies epsilon-differential privacy in an infinite number of iterations, comprising:
Given a constant C greater than 0, different samples of two gradient information If->Just in the data sample +.>Different in thatThen->Are adjacent;
wherein ,representing the random variable +.>Two different realizations of l in the overall value range,/-> andRepresenting the random variable +.>In data sample->Is a different implementation of (a);
setting a critical quantity in the first algorithm, wherein the critical quantity is sensitivity, and judging the noise degree which should be added in each iteration when epsilon-differential privacy is realized through the sensitivity, wherein the sensitivity of the output graph q in the kth iteration is defined as the following formula (1-1):
(1-1)
wherein a represents an adjacency matrix; … 1 Representing norms, sup representing an upper bound with the smallest set;
the sensitivity at the kth iteration of algorithm one satisfies the following equation (1-2):
(1-2)
where the noise is Laplace distributed with variance σ_k²; ε is any given positive number; b denotes the accumulation index; t denotes the number of cumulative multiplications, together with the step size introduced during the cumulative-multiplication calculation; s_k is the number of sampled gradients used at the k-th iteration, with indices running from 0 to b; α_0 denotes the gradient step size at iteration 0 and α_b the gradient step size at the b-th iteration. The sensitivity of the output map q measures how much a single sampled gradient can change the output map q.
Let g and g′ be any two different samples of the gradient information, differing in one data sample at the k-th iteration; q is computed from g, and q′ is computed from g′.
For the two output maps computed from adjacent gradient samples, there is
for any given positive number, there is
furthermore, there is
if, in addition, the step sizes are chosen accordingly, then for any k there is
For a sequence that is (i) positive and monotonically increasing and (ii) divergent, then for real numbers and any positive integer:
If the following formula (1-3) holds, then Algorithm 1 satisfies ε-differential privacy over an infinite number of iterations:
(1-3).
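The role of a summability condition such as formula (1-3) can be illustrated numerically. Under the assumed (not patented) scaling ε_k ∝ α_k/σ_k with α_k = 1/k, an increasing noise scale σ_k = k^0.3 makes the cumulative privacy loss behave like Σ k^(-1.3), which stays finite, while constant noise gives a divergent harmonic-type sum.

```python
def cumulative_budget(K, sigma):
    """Partial sum of per-iteration privacy losses eps_k = alpha_k / sigma(k)
    with alpha_k = 1/k (illustrative scaling, not the exact bound (1-3))."""
    return sum(1.0 / (k * sigma(k)) for k in range(1, K + 1))

# growing noise scale: the partial sums flatten out (finite budget)
growing = [cumulative_budget(K, sigma=lambda k: k ** 0.3)
           for K in (10**2, 10**4, 10**6)]
# constant noise scale: the harmonic sum keeps growing (no finite budget)
constant = cumulative_budget(10**6, sigma=lambda k: 1.0)
```

This is the quantitative content of "differential privacy with a finite cumulative privacy budget over an infinite number of iterations": the per-iteration losses must form a convergent series.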
in a possible implementation, let α_1 denote the gradient step size at iteration 1 and α_2 the gradient step size at iteration 2; if one of the following conditions is satisfied:
i) ;
ii) ;
iii) .
then Algorithm 1 is ε-differentially private over an infinite number of iterations with a finite cumulative privacy budget.
In a possible implementation, in step S103, calculating the convergence speed of Algorithm 1 according to the noise variance condition to determine the optimal point comprises:
writing Algorithm 1 in the stacked-vector form of the following formula (1-4):
(1-4)
Let the averages of the stacked states and gradients be defined accordingly, together with the σ-algebra generated by the noise history; with this notation, formula (1-4) can be rewritten as follows:
Since A is doubly stochastic, the following formula (1-5) is obtained:
(1-5)
The following four conditions are imposed based on the above formula (1-5):
Condition 1: for all i, each function f_i is Lipschitz continuous, i.e., there exists L_i > 0 such that
each function f_i is μ-strongly convex if and only if μ > 0 satisfies the corresponding inequality.
Condition 2: for any fixed x and k, there is a positive constant such that the sampled gradient satisfies
and
Condition 3: the undirected communication topology G is connected, and the adjacency matrix A satisfies the following conditions: (i) there is a positive constant η such that a_ij ≥ η when agents i and j are neighbors, and a_ij = 0 otherwise; (ii) A is doubly stochastic, i.e., A1 = 1 and 1ᵀA = 1ᵀ.
Condition 4: the step size, the privacy noise parameter and the variable sample size satisfy one of the following conditions a) or b):
a)
b)
If conditions 1, 2, 3 and 4 b) are satisfied, then Algorithm 1 converges in mean square for all agents, i.e., there is an optimal solution such that the mean-square error converges to zero.
If conditions 1, 2 and 3 are satisfied, the convergence speed of Algorithm 1 is:
when , there is ;
when , there is .
This gives the convergence rate of the algorithm under two-time-scale stochastic-approximation step sizes; comparable results for distributed stochastic optimization were previously unavailable even without privacy protection. It can be seen from the above that, as the privacy noise parameter increases, the convergence of the algorithm slows. Thus, to achieve both convergence and a finite cumulative privacy budget over an infinite number of iterations, the privacy noise parameters should be chosen appropriately.
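The trade-off just stated can be seen in a toy experiment. The setup is an assumption for illustration (one agent, f(x) = x²/2, step α_k = 1/k, Laplace gradient noise whose scale grows like k^e): the faster the privacy-noise variance grows, the larger the residual error.

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_gradient_descent(noise_exp, iters=2000, x0=5.0):
    """Minimise f(x) = x**2/2 with step 1/k and Laplace privacy noise
    whose scale grows like k**noise_exp (illustrative scalings)."""
    x = x0
    for k in range(1, iters + 1):
        g = x + rng.laplace(0.0, k ** noise_exp / np.sqrt(2))  # perturbed gradient
        x -= g / k
    return abs(x)  # distance to the optimum x* = 0

# average final error over repeated runs for increasing noise growth
errors = {e: float(np.mean([dp_gradient_descent(e) for _ in range(200)]))
          for e in (0.0, 0.2, 0.4)}
```

Larger noise exponents slow convergence, matching the statement that growing privacy noise must be balanced against the finite-budget requirement.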
From the above conclusions, both the mean-square convergence of Algorithm 1 and differential privacy with a finite cumulative privacy budget can be established simultaneously, as shown in the following conclusion:
Conclusion 1: if the stated step-size and noise conditions hold, then the mean-square convergence of Algorithm 1 and differential privacy with a finite cumulative privacy budget are established simultaneously over an infinite number of iterations.
Note that Conclusion 1 holds when the added privacy noise has increasing variance; for example, suitable increasing choices of the noise variance satisfy the conditions of Conclusion 1. In this case, the mean-square convergence of Algorithm 1 and differential privacy with a finite cumulative privacy budget hold simultaneously over an infinite number of iterations. By contrast, if differential privacy is only guaranteed per iteration, the cumulative privacy loss after k rounds grows without bound.
In a possible implementation manner, a privacy-preserving distributed random optimization algorithm for changing the sample size, namely an algorithm two, is provided based on a gradient disturbance method.
Performing an infinite number of iterations on algorithm two, including:
according to the following formula (2), each agent i in Algorithm 2 receives the noise states of its neighbor agents j and updates its estimated state:
(2)
where α_k and β_k are the step sizes of the algorithm, and d_{i,k} is the privacy noise added by agent i at each time k.
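A single iteration of the gradient-perturbation update can be sketched as follows; the ring topology, mixing weights and quadratic costs are assumptions chosen for illustration, not the patent's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Doubly stochastic mixing matrix for a 4-agent ring (rows and columns sum to 1).
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def gradient_perturbation_step(X, grads, alpha, beta, sigma):
    """One Algorithm-2-style update: Laplace noise is added to each agent's
    gradient (not to the broadcast state), then states are mixed with A."""
    noise = rng.laplace(0.0, sigma / np.sqrt(2), size=grads.shape)
    mixed = X + beta * (A @ X - X)          # consensus with neighbor states
    return mixed - alpha * (grads + noise)  # noisy gradient step

X = rng.normal(size=(4, 2))                 # 4 agents, 2-dimensional estimates
# for f_i(x) = ||x||^2 / 2 the gradient is x itself, so grads = X here
X1 = gradient_perturbation_step(X, grads=X, alpha=0.1, beta=0.5, sigma=0.01)
```

The design difference from output perturbation is visible in the code: the broadcast states in `A @ X` are exact, and privacy enters only through the perturbed gradient term.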
In a possible implementation, in step S102, deriving the noise variance condition under which Algorithm 2 satisfies ε-differential privacy over an infinite number of iterations comprises:
adding the privacy noise directly to the gradient; a key quantity, the sensitivity, is set in Algorithm 2 to determine how much noise must be added in each iteration to achieve ε-differential privacy, where the sensitivity of Algorithm 2 is given by the following formula (2-1):
(2-1)
where the noise is sampled from a Laplace distribution with variance σ_k², and ε is any given positive number. If the following formula (2-2) holds, then Algorithm 2 satisfies ε-differential privacy over an infinite number of iterations:
(2-2).
In a possible embodiment, when the corresponding noise condition holds, Algorithm 2 is ε-differentially private over an infinite number of iterations with a finite cumulative privacy budget.
In a possible embodiment, the convergence analysis requires conditions on the step sizes, the privacy noise parameter and the variable sample size.
In step S103, calculating the convergence speed of Algorithm 2 according to the noise variance condition to determine the optimal point comprises:
the step size, the privacy noise parameter and the variable sample size satisfy one of the following conditions:
a)
b)
If conditions 1, 2, 3 and 5 a) are satisfied, then Algorithm 2 converges almost surely for all agents, i.e., there is an optimal solution such that:
if conditions 1, 2, 3 and 5 b) are satisfied, then Algorithm 2 converges in mean square for all agents, i.e., there is an optimal solution such that
If conditions 1, 2 and 3 are satisfied, the convergence speed of Algorithm 2 is:
when , there is ;
when , there is .
Conclusion 2: if the stated step-size and noise conditions hold, then the mean-square convergence of Algorithm 2 and ε-differential privacy over an infinite number of iterations are established simultaneously.
For example, the privacy noise parameter can be chosen accordingly; from Conclusion 2, both the mean-square convergence of Algorithm 2 and differential privacy with a finite cumulative privacy budget over an infinite number of iterations can then be established.
S104: the user selects between Algorithm 1 and Algorithm 2 according to actual requirements, completing the smart-grid-oriented differentially private distributed stochastic optimization.
In one possible implementation, the present application proposes two different differentially private distributed stochastic optimization algorithms. Both Algorithm 1 and Algorithm 2 achieve differential privacy with a finite cumulative privacy budget while guaranteeing algorithm convergence, and each has its advantages: Algorithm 1 is simple to implement and has low computational complexity, while Algorithm 2 can be extended to the analysis of other network problems, such as differentially private distributed stochastic optimization under event triggering or quantization.
A simulation example is carried out according to the above optimization method:
Consider a network of spatially distributed sensors that aims to estimate an unknown multi-dimensional parameter. Each sensor i collects a set of scalar measurements generated by the following noise-corrupted linear regression model, where the regression vector is accessible to individual i and the measurement noise is zero-mean Gaussian.
If the regression vectors and the measurement noises are mutually independent Gaussian sequences, the distributed parameter estimation problem can be modeled as a distributed stochastic quadratic optimization problem,
where each local cost is convex. Using the observed regression vectors and the corresponding measured values, the sampled gradient satisfies condition 2.
The vector dimension and the true parameter are set, and the initial parameter estimates of the individuals are chosen accordingly. Each covariance matrix
is positive definite, so each local cost is strongly convex; the topology is shown in FIG. 2.
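The simulation set-up described above can be reproduced in miniature. All sizes, step sizes and noise scales below are illustrative assumptions, not the values behind the patent's figures; the structure (complete graph, output-perturbed states, sampled quadratic gradients) follows the text.

```python
import numpy as np

rng = np.random.default_rng(4)

n, m, iters = 5, 3, 4000
theta = np.ones(m)                    # unknown true parameter
A = np.full((n, n), 1.0 / n)          # complete graph, doubly stochastic

X = np.zeros((n, m))                  # each row: one sensor's estimate
for k in range(1, iters + 1):
    alpha, beta, sigma = 1.0 / (k + 10), k ** -0.6, 0.1
    # output perturbation: broadcast states are masked with Laplace noise
    Z = X + rng.laplace(0.0, sigma / np.sqrt(2), size=X.shape)
    grads = np.empty_like(X)
    for i in range(n):
        phi = rng.normal(size=m)               # regressor seen by sensor i
        y = phi @ theta + 0.1 * rng.normal()   # noisy scalar measurement
        grads[i] = (phi @ X[i] - y) * phi      # sampled quadratic gradient
    X = X + beta * (A @ Z - X) - alpha * grads

rmse = float(np.linalg.norm(X - theta) / np.sqrt(n))
```

The estimates drift to a consensus near θ while only Laplace-masked states ever cross the network, mirroring the asymptotic convergence reported for FIG. 3.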
First, the step size, sample size and privacy noise parameter are set; the cumulative privacy budget over an infinite number of iterations is then finite. The estimation error of Algorithm 1 is shown in FIG. 3, which shows that the generated iterates asymptotically converge to the true parameter. Furthermore, FIG. 4 shows how this quantity is affected by the algorithm parameters; as shown in the figure, it decreases as those parameters increase.
Next, the step sizes, sample size and privacy noise parameter are set for Algorithm 2.
The cumulative privacy budget over an infinite number of iterations is finite. The estimation error of Algorithm 2 is shown in FIG. 5, indicating that the generated iterates asymptotically converge to the true parameter. Furthermore, FIG. 6 shows how the corresponding quantity is affected by the parameters; as shown, it decreases as they do, consistent with the theoretical analysis.
In the embodiment of the invention, two differentially private distributed stochastic optimization algorithms with variable sample sizes are studied. The sensitive information of each individual is protected by gradient perturbation and output perturbation. Under two-time-scale stochastic-approximation step-size conditions, the algorithms converge to the optimal point in the almost-sure and mean-square senses, while achieving differential privacy with a finite cumulative privacy budget over an infinite number of iterations. In addition, it is shown how the added privacy noise affects the convergence speed of the algorithms.
Fig. 7 is a schematic diagram of a differentially private distributed stochastic optimization system for a smart grid according to the present invention; the system 200 is used for the above smart-grid-oriented differentially private distributed stochastic optimization, and the system 200 comprises:
the algorithm construction module 210, configured to construct a differentially private distributed stochastic optimization model comprising two algorithms: a differentially private distributed stochastic optimization algorithm based on output perturbation, defined as Algorithm 1; and a differentially private distributed stochastic optimization algorithm based on gradient perturbation, defined as Algorithm 2;
the iteration module 220, configured to perform infinite iterations of Algorithm 1 and Algorithm 2 and to derive the noise variance condition satisfying ε-differential privacy in the infinite iterations;
the convergence calculation module 230, configured to calculate the convergence speeds of Algorithm 1 and Algorithm 2 respectively according to the noise variance condition, so as to determine the optimal point;
and the optimization algorithm selection module 240, configured to let the user select between Algorithm 1 and Algorithm 2 according to actual requirements, completing the smart-grid-oriented differentially private distributed stochastic optimization.
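The four modules can be mirrored in a small driver class. Everything here (class and method names, the toy scalar update rules) is an illustrative assumption about how the modules fit together, not the patented system.

```python
import random

class DPDistributedOptimizer:
    """Sketch of the four-module pipeline: construct the algorithms (210),
    iterate them (220), compute convergence curves (230), select one (240)."""

    def __init__(self, algorithms):
        self.algorithms = algorithms        # e.g. {"output": f1, "gradient": f2}

    def iterate(self, name, state, steps):
        history = [state]
        for k in range(1, steps + 1):
            state = self.algorithms[name](state, k)
            history.append(state)
        return history

    def convergence_curve(self, history, optimum):
        return [abs(h - optimum) for h in history]

    def select(self, curves):
        # pick the algorithm whose final error is smallest
        return min(curves, key=lambda name: curves[name][-1])

random.seed(0)
opt = DPDistributedOptimizer({
    # toy scalar updates on f(x) = x**2/2 with different DP noise levels
    "output":   lambda x, k: x - (x + random.gauss(0, 0.1)) / k,
    "gradient": lambda x, k: x - (x + random.gauss(0, 0.3)) / k,
})
curves = {name: opt.convergence_curve(opt.iterate(name, 5.0, 500), 0.0)
          for name in opt.algorithms}
best = opt.select(curves)
```

The selection module simply compares the convergence curves produced by the other modules, which is the role step S104 assigns to the user's choice between the two algorithms.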
Preferably, the iteration module 220 is further configured to mask the state of each agent in Algorithm 1 with an additive random variable;
each agent i in Algorithm 1 receives the noise states of its neighbor agents j according to the following formula (1) and updates its estimated state:
(1)
wherein x_{i,k} is the estimated state of agent i at the k-th iteration, with k = 1, 2, 3, …, n; each element of the privacy noise is zero-mean, independent and identically distributed Laplace noise with variance σ_k²; a_ij represents the elements of the adjacency matrix of the smart grid network graph; g_{i,k} represents the optimization gradient of agent i at the k-th iteration; α_k is the gradient step size; β_k is the newly introduced step size, according to which the information from neighbor agent j is weighted; N_i represents the neighborhood of agent i.
Preferably, the iteration module 220 is further configured such that each agent i in Algorithm 2 receives the noise states of its neighbor agents j according to the following formula (2) and updates its estimated state:
(2)
wherein d_{i,k} is the privacy noise added by agent i at each time k.
Preferably, for the iteration module 220, given a constant C greater than 0, two different samples of gradient information are adjacent if they differ in exactly one data sample;
where the realizations refer to the underlying random variable's values over its whole range and at the given data sample, respectively;
a critical quantity, the sensitivity, is set in Algorithm 1, and the degree of noise to be added in each iteration to achieve ε-differential privacy is determined through the sensitivity, wherein the sensitivity of the output map q at the k-th iteration is defined as the following formula (1-1):
(1-1)
wherein the two quantities represent two different samples of gradient information; A represents the adjacency matrix; ‖·‖₁ represents the ℓ1-norm; and sup represents the supremum (least upper bound);
the sensitivity at the kth iteration of algorithm one satisfies the following equation (1-2):
(1-2)
where the noise is Laplace distributed with variance σ_k²; ε is any given positive number; b denotes the accumulation index; t denotes the number of cumulative multiplications, together with the step size introduced during the cumulative-multiplication calculation; s_k is the number of sampled gradients used at the k-th iteration, with indices running from 0 to b; α_0 denotes the gradient step size at iteration 0 and α_b the gradient step size at the b-th iteration;
then, if the following formula (1-3) holds, Algorithm 1 satisfies ε-differential privacy over an infinite number of iterations:
(1-3)
wherein ,representing privacy noise parameters.
Preferably, the iteration module 220 is further configured such that, based on formula (1-2), α_1 denotes the gradient step size at iteration 1 and α_2 denotes the gradient step size at iteration 2; if one of the following conditions is satisfied:
i) ;
ii) ;
iii) .
then Algorithm 1 is ε-differentially private over an infinite number of iterations with a finite cumulative privacy budget, where η is a constant greater than 0.
Preferably, the iteration module 220 is further configured to set the critical quantity, the sensitivity, in Algorithm 2, which determines how much noise should be added in each iteration to achieve ε-differential privacy; wherein the sensitivity of the output map q at the k-th iteration is defined as in formula (1-1), and the sensitivity of Algorithm 2 is shown in the following formula (2-1):
(2-1)
where the noise is sampled from a Laplace distribution with variance σ_k²; ε is any given positive number; the remaining symbols represent a random variable and the gradient obtainable by each agent i. If the following formula (2-2) holds, then Algorithm 2 satisfies ε-differential privacy over an infinite number of iterations:
(2-2)
When the corresponding condition holds, Algorithm 2 is ε-differentially private over an infinite number of iterations with a finite cumulative privacy budget.
Preferably, the convergence calculation module 230 is further configured to write Algorithm 1 or Algorithm 2 in the stacked-vector form of the following formula (3):
(3)
wherein T represents the transpose; the stacked gradient represents the matrix of the gradient vectors of the agents i = 1, 2, 3, …, n; the averages of the stacked states and gradients are defined accordingly, together with the corresponding σ-algebra; since A is doubly stochastic, the following formula (4) is obtained:
(4);
The following conditions are imposed based on the above formula (4):
Condition 1: for all agents i, each function f_i is Lipschitz continuous, i.e., there exists L_i > 0 such that:
wherein L_i represents a defined constant greater than zero;
each function f_i is μ-strongly convex if and only if μ satisfies:
wherein μ represents a constant greater than 0;
Condition 2: for any fixed x and k, there is a positive constant such that the sampled gradient satisfies:
and:
wherein E represents the expectation of the variable;
Condition 3: the undirected communication topology G is connected, and the adjacency matrix A satisfies the following conditions:
(i) there is a positive constant η such that a_ij ≥ η when agents i and j are neighbors, and a_ij = 0 otherwise; (ii) A is doubly stochastic, i.e., A1 = 1 and 1ᵀA = 1ᵀ;
Condition 4: the step size, the privacy noise parameter and the variable sample size satisfy one of the following conditions 4 a) or 4 b):
4a)
4b)
Condition 5: the step size, the privacy noise parameter and the variable sample size satisfy one of the following conditions:
5a):
5b):
preferably, the convergence computation module 230 is further configured to, when conditions 1, 2, 3, and 4, a are satisfied, algorithm one for all agentsConvergence, i.e. there is an optimal solution->So that
When conditions 1, 2, 3 and 4 b) are satisfied, then the algorithm pairs allMean square convergence, i.e. there is an optimal solution +.>So that
Where E represents the desire of the variable.
Preferably, the convergence calculation module 230 is further configured such that, when conditions 1, 2, 3 and 5 a) are satisfied, Algorithm 2 converges almost surely for all agents, i.e., there is an optimal solution such that
when conditions 1, 2, 3 and 5 b) are satisfied, Algorithm 2 converges in mean square for all agents, i.e., there is an optimal solution such that
In the embodiment of the invention, two differentially private distributed stochastic optimization algorithms with variable sample sizes are studied. The sensitive information of each individual is protected by gradient perturbation and output perturbation. Under two-time-scale stochastic-approximation step-size conditions, the algorithms converge to the optimal point in the almost-sure and mean-square senses, while achieving differential privacy with a finite cumulative privacy budget over an infinite number of iterations. In addition, it is shown how the added privacy noise affects the convergence speed of the algorithms.
Fig. 8 is a schematic structural diagram of an electronic device 300 according to an embodiment of the present invention. The electronic device 300 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 301 and one or more memories 302, where at least one instruction is stored in the memory 302 and is loaded and executed by the processor 301 to implement the following steps of the smart-grid-oriented differentially private distributed stochastic optimization method:
S1: constructing a differentially private distributed stochastic optimization model comprising two algorithms: a differentially private distributed stochastic optimization algorithm based on output perturbation, defined as Algorithm 1; and a differentially private distributed stochastic optimization algorithm based on gradient perturbation, defined as Algorithm 2;
S2: performing infinite iterations of Algorithm 1 and Algorithm 2 respectively, and deriving the noise variance condition satisfying ε-differential privacy in the infinite iterations;
S3: calculating the convergence speeds of Algorithm 1 and Algorithm 2 respectively according to the noise variance condition, and determining the optimal point;
S4: the user selects between Algorithm 1 and Algorithm 2 according to actual requirements, completing the smart-grid-oriented differentially private distributed stochastic optimization.
In an exemplary embodiment, a computer-readable storage medium is also provided, e.g., a memory comprising instructions executable by a processor in a terminal to perform the above smart-grid-oriented differentially private distributed stochastic optimization method. For example, the computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk or an optical disk.

Claims (2)

1. The differential privacy distributed random optimization method for the smart grid is characterized by comprising the following steps of:
S1: constructing a smart-grid-oriented differentially private distributed stochastic optimization model, wherein the input data of the model are the electricity-meter data in the smart grid; the model comprises two algorithms: a differentially private distributed stochastic optimization algorithm based on output perturbation, defined as Algorithm 1; and a differentially private distributed stochastic optimization algorithm based on gradient perturbation, defined as Algorithm 2;
S2: performing an approximately infinite number of iterations on Algorithm 1 and Algorithm 2 respectively, and deriving the noise variance condition satisfying ε-differential privacy in infinite iterations;
in the step S2, performing an infinite number of iterations on the first algorithm includes:
masking the state of each agent in Algorithm 1 with an additive random variable;
each agent in algorithm one according to the following equation (1)iAccepting the agentiNeighbor agent of (a)jNoise state of (a)Updating the estimated state:
(1)
wherein x_{i,k} is the estimated state of agent i at the k-th iteration, k takes the values k = 1, 2, 3, …, n, and n represents a positive integer; each element of the privacy noise is zero-mean, independent and identically distributed Laplace noise with variance σ_k²; a_ij represents the elements of the adjacency matrix of the smart grid network graph; g_{i,k} represents the optimization gradient of agent i at the k-th iteration; α_k is the gradient step size; β_k is the newly introduced step size, according to which the information from neighbor agent j is weighted; N_i represents the neighborhood of agent i;
in step S2, performing an infinite number of iterations on the second algorithm, including:
each agent in algorithm two according to the following equation (2) Accepting noise status of its neighbor agent j>Updating the estimated state:
(2)
wherein the privacy noise is Laplace-distributed with variance σ_k²;
in step S2, deriving the noise variance condition for the algorithm that satisfies epsilon-differential privacy in an infinite number of iterations includes:
given a constant C greater than 0, two different samples of gradient information are adjacent if they differ in exactly one data sample;
wherein the realizations refer to the underlying random variable's values over its whole range and at the given data sample, respectively;
setting a critical quantity, the sensitivity, in Algorithm 1, and determining through the sensitivity the degree of noise to be added in each iteration to achieve ε-differential privacy, wherein the sensitivity of the output map q at the k-th iteration is defined as the following formula (1-1):
(1-1)
wherein the two quantities represent two different samples of gradient information; A represents the adjacency matrix; ‖·‖ represents the norm, and sup represents the supremum (least upper bound);
when Algorithm 1 is at the k-th iteration, the sensitivity satisfies the following formula (1-2):
(1-2);
where the noise is Laplace distributed with variance σ_k²; ε is any given positive number; b denotes the accumulation index; t denotes the number of cumulative multiplications, together with the step size introduced during the cumulative-multiplication calculation; s_k is the number of sampled gradients used at the k-th iteration, with indices running from 0 to b; α_0 denotes the gradient step size at iteration 0 and α_b the gradient step size at the b-th iteration;
then, if the following formula (1-3) holds, Algorithm 1 satisfies ε-differential privacy over an infinite number of iterations:
(1-3)
wherein ,representing privacy noise parameters;
in the step S2, deriving the noise variance condition under which Algorithm 1 satisfies ε-differential privacy in infinite iterations further comprises:
taking any two different samples of gradient information, wherein a difference exists in one data sample at the k-th iteration;
then the following formula (1-4) holds:
(1-4);
wherein one output map is calculated from one sample and the other from the other sample;
based on formulas (1-2), (1-3) and (1-4), let α_1 represent the gradient step size at iteration 1 and α_2 the gradient step size at iteration 2; if one of the following conditions is satisfied:
i) ;
ii)
iii)
then Algorithm 1 is ε-differentially private over an infinite number of iterations with a finite cumulative privacy budget, wherein η is a constant value greater than 0;
in the step S2, deriving the noise variance condition under which Algorithm 2 satisfies ε-differential privacy in infinite iterations comprises:
setting a critical quantity, the sensitivity, in Algorithm 2, and determining through the sensitivity how much noise should be added in each iteration to achieve ε-differential privacy; wherein the sensitivity of the output map q at the k-th iteration is defined as in formula (1-1); the sensitivity of Algorithm 2 is shown in the following formula (2-1):
(2-1)
wherein the noise is sampled from a Laplace distribution with variance σ_k²; ε is any given positive number; the remaining symbols represent a random variable and a gradient function; then, if the following formula (2-2) holds, Algorithm 2 satisfies ε-differential privacy over an infinite number of iterations:
(2-2);
Algorithm 2 then has differential privacy with a finite cumulative privacy budget over an infinite number of iterations;
s3: respectively calculating convergence rates of the first algorithm and the second algorithm according to the noise variance condition, and determining an optimal solution;
in the step S3, before calculating the convergence rates of the first algorithm and the second algorithm, a condition setting is performed, including:
and (3) performing stack vector definition of the following formula (3) on the algorithm one or the algorithm two:
(3);
wherein T represents the transpose; the stacked gradient represents the matrix of the gradient vectors of the agents i, i = 1, 2, 3, …, n; the averages of the stacked states and gradients are defined accordingly, together with the corresponding σ-algebra; since A is doubly stochastic, the following formula (4) is obtained:
(4);
the following condition settings are made according to the above formula (4):
Condition 1: for all agents i, each function f_i is Lipschitz continuous, i.e., there exists L_i such that:
wherein L_i represents a defined constant greater than zero;
each function f_i is μ-strongly convex if and only if μ satisfies:
wherein μ represents a constant greater than 0;
Condition 2: for any fixed x and k, there is a positive constant such that the sampled gradient satisfies:
and:
wherein E represents the expectation of the variable;
Condition 3: the undirected communication topology G is connected and the adjacency matrix A satisfies the following conditions:
(i) there is a positive constant η such that a_ij ≥ η when agents i and j are neighbors, and a_ij = 0 otherwise; (ii) A is doubly stochastic, i.e., A1 = 1 and 1ᵀA = 1ᵀ;
Condition 4: the step size, the privacy noise parameter and the number of sampling gradients used at the k-th iteration satisfy one of the following conditions 4 a) or 4 b):
4a)
4b)
Condition 5: the step size, the privacy noise parameter and the number of sampling gradients used at the k-th iteration satisfy one of the following conditions:
5a)
5b)
in the step S3, the convergence speed of the first algorithm is calculated according to the noise variance condition, and an optimal solution is determined, including:
when conditions 1, 2, 3 and 4 a) are satisfied, Algorithm 1 converges almost surely for all agents, i.e., there is an optimal solution such that
when conditions 1, 2, 3 and 4 b) are satisfied, Algorithm 1 converges in mean square for all agents, i.e., there is an optimal solution such that
wherein E represents the expectation of the variable;
in the step S3, the convergence speed of Algorithm 2 is calculated according to the noise variance condition and an optimal solution is determined, comprising:
when conditions 1, 2, 3 and 5 a) are satisfied, Algorithm 2 converges almost surely for all agents, i.e., there is an optimal solution such that
when conditions 1, 2, 3 and 5 b) are satisfied, Algorithm 2 converges in mean square for all agents, i.e., there is an optimal solution such that
S4: based on the optimal solution, the user selects Algorithm 1 or Algorithm 2 according to actual requirements, and the electricity-meter data in the smart grid are optimized, completing the smart-grid-oriented differentially private distributed stochastic optimization.
2. A smart grid-oriented differential privacy distributed stochastic optimization system for use in a smart grid-oriented differential privacy distributed stochastic optimization method of claim 1, the system comprising:
The algorithm construction module is used for constructing a differential privacy distributed random optimization model, and the differential privacy distributed random optimization model comprises two algorithms: the differential privacy distributed random optimization algorithm based on output disturbance is defined as algorithm I; a differential privacy distributed random optimization algorithm based on gradient disturbance is defined as an algorithm II;
an iteration module, configured to iterate Algorithm I and Algorithm II respectively and to derive the noise variance conditions under which ε-differential privacy holds over infinitely many iterations;
a convergence calculation module, configured to calculate the convergence rates of Algorithm I and Algorithm II respectively according to the noise variance conditions and to determine the optimal solution;
and an optimization algorithm selection module, configured for the user to select Algorithm I or Algorithm II according to actual requirements, completing the smart-grid-oriented differential privacy distributed random optimization.
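The iteration module's requirement that ε-differential privacy survive infinitely many iterations can be sketched with a simple composition-style budget check. This is an assumption-based illustration (Laplace mechanism, simple composition, and all names are hypothetical), not the patent's actual noise variance conditions:

```python
def cumulative_epsilon(sensitivities, noise_scales):
    """Privacy loss of repeated Laplace releases under simple composition:
    a release with sensitivity s_k and Laplace scale b_k costs eps_k = s_k / b_k."""
    return sum(s / b for s, b in zip(sensitivities, noise_scales))

def satisfies_eps_dp(sensitivities, noise_scales, eps):
    """True if the summed per-iteration losses stay within the overall budget eps."""
    return cumulative_epsilon(sensitivities, noise_scales) <= eps
```

Under this accounting, a per-iteration sensitivity that decays geometrically (e.g., driven by a decaying step size) keeps the total loss finite over arbitrarily many iterations, whereas a constant sensitivity with constant noise exhausts any finite budget; this is the intuition behind tying the step size, noise parameter, and gradient sample count together in the conditions of claim 1.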
CN202310361388.7A 2023-04-07 2023-04-07 Differential privacy distributed random optimization method and system for smart grid Active CN116090014B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310361388.7A CN116090014B (en) 2023-04-07 2023-04-07 Differential privacy distributed random optimization method and system for smart grid

Publications (2)

Publication Number Publication Date
CN116090014A CN116090014A (en) 2023-05-09
CN116090014B true CN116090014B (en) 2023-10-10

Family

ID=86199432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310361388.7A Active CN116090014B (en) 2023-04-07 2023-04-07 Differential privacy distributed random optimization method and system for smart grid

Country Status (1)

Country Link
CN (1) CN116090014B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113158238A (en) * 2021-03-30 2021-07-23 中国科学院数学与系统科学研究院 Game control-oriented privacy protection method and system and readable storage medium
CN114118407A (en) * 2021-10-29 2022-03-01 华北电力大学 Deep learning-oriented differential privacy usability measurement method
CN114447924A (en) * 2022-01-18 2022-05-06 山东大学 Distributed differential privacy ADMM (advanced data mm) energy management and control method and system for smart grid
CN114529207A (en) * 2022-02-22 2022-05-24 国网湖北省电力有限公司电力科学研究院 Energy storage battery distributed economic dispatching method based on differential privacy mechanism
CN115378813A (en) * 2022-08-12 2022-11-22 大连海事大学 Distributed online optimization method based on differential privacy mechanism
CN115481431A (en) * 2022-08-31 2022-12-16 南京邮电大学 Dual-disturbance-based privacy protection method for federated learning counterreasoning attack


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jieming Ke et al.; "Differentiated Output-Based Privacy-Preserving Average Consensus"; IEEE Control Systems Letters; pp. 1369-1374 *


Similar Documents

Publication Publication Date Title
Zamzam et al. Physics-aware neural networks for distribution system state estimation
Yang et al. Distributed optimization based on a multiagent system in the presence of communication delays
Lopes et al. Incremental adaptive strategies over distributed networks
Xiao et al. A space-time diffusion scheme for peer-to-peer least-squares estimation
Lopes et al. Distributed adaptive incremental strategies: Formulation and performance analysis
Vorobyov et al. Robust iterative fitting of multilinear models
CN109829337B (en) Method, system and equipment for protecting social network privacy
Alaeddini et al. Adaptive communication networks with privacy guarantees
CN111475838B (en) Deep neural network-based graph data anonymizing method, device and storage medium
Kefayati et al. Secure consensus averaging in sensor networks using random offsets
Zhang et al. PWG-IDS: an intrusion detection model for solving class imbalance in IIoT networks using generative adversarial networks
Palanisamy et al. Spliteasy: A practical approach for training ml models on mobile devices
Zhang et al. Quasisynchronization of reaction–diffusion neural networks under deception attacks
CN115481441A (en) Difference privacy protection method and device for federal learning
Rahman et al. Deep learning-based improved cascaded channel estimation and signal detection for reconfigurable intelligent surfaces-assisted MU-MISO systems
Gratton et al. Distributed ridge regression with feature partitioning
Plata-Chaves et al. Distributed incremental-based RLS for node-specific parameter estimation over adaptive networks
Li et al. Iterative approach with optimization-based execution scheme for parameter identification of distributed parameter systems and its application in secure communication
CN104199884A (en) Social networking service viewpoint selection method based on R coverage rate priority
Zhu et al. Learning-empowered privacy preservation in beyond 5G edge intelligence networks
Wang et al. Decentralized cooperative online estimation with random observation matrices, communication graphs and time delays
CN116090014B (en) Differential privacy distributed random optimization method and system for smart grid
Zhao et al. VFLR: An efficient and privacy-preserving vertical federated framework for logistic regression
Ghazanfari-Rad et al. Optimal variable step-size diffusion LMS algorithms
Kesici et al. Detection of False Data Injection Attacks in Distribution Networks: A Vertical Federated Learning Approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant