CN112069631A - Distributed projection method considering communication time delay and based on variance reduction technology - Google Patents
- Publication number
- CN112069631A CN112069631A CN202010614853.XA CN202010614853A CN112069631A CN 112069631 A CN112069631 A CN 112069631A CN 202010614853 A CN202010614853 A CN 202010614853A CN 112069631 A CN112069631 A CN 112069631A
- Authority
- CN
- China
- Prior art keywords
- local
- optimization problem
- agent
- follows
- variance reduction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]; G06F30/10—Geometric CAD; G06F30/18—Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
- G06F30/20—Design optimisation, verification or simulation
- G06F2111/00—Details relating to CAD techniques; G06F2111/04—Constraint-based CAD
Abstract
The invention discloses a distributed projection method based on a variance reduction technique that accounts for communication time delay, comprising the following steps: Step 1, formulating an original optimization problem model (1) for a multi-agent system subject to both local set constraints and local equality constraints; Step 2, equivalently converting the original optimization problem model (1) obtained in Step 1 into a convex optimization problem model (2) amenable to distributed processing; Step 3, proposing a distributed projection algorithm (3) based on a variance reduction technique to solve the constrained convex optimization problem model (2), i.e., using a local stochastic average gradient as an unbiased estimate of the local full gradient, thereby relieving the heavy computational burden of evaluating the full gradients of all local objective functions at every iteration; Step 4, performing convergence analysis. The invention greatly reduces the computational cost of all agents in the network, thereby easing the communication and computation pressure on the whole multi-agent system, and has high practicability.
Description
Technical Field
The invention relates to the technical field of agent communication, and in particular to a distributed projection method based on a variance reduction technique that accounts for communication time delay.
Background
In recent years, rapid advances in high technology have given rise to emerging fields such as cloud computing and big data. Distributed optimization theory and its applications have attracted growing attention and gradually permeated many aspects of scientific research, engineering, and social life. Distributed optimization accomplishes an optimization task through cooperative coordination among multiple agents, and can solve large-scale, complex optimization problems that many centralized algorithms cannot handle. However, when facing a large-scale convex optimization problem with relatively complex local constraints, existing distributed optimization algorithms require a large amount of gradient computation, placing a heavy computational burden on the agents in the network and resulting in low computation and communication efficiency for the multi-agent system; they therefore fail to meet practical requirements.
Disclosure of Invention
The invention provides a distributed projection algorithm based on a variance reduction technique that can greatly reduce the computational cost of the agents in the network, thereby easing the communication and computation pressure on the whole multi-agent system.
The invention adopts the following technical scheme:
a distributed projection method based on variance reduction technology and considering communication delay comprises the following steps:
as a preferred technical scheme of the invention, the specific construction process and form of the original optimization problem model (1) in the step 1 are as follows:
firstly: define the agent set V = {1, …, m}, the communication network edge set E, and the adjacency matrix A = [a_ij]; the undirected communication network is G = (V, E, A), and the simple network G has no self-loops. When (i, j) ∈ E, a_ij = a_ji > 0; otherwise a_ij = a_ji = 0. The degree of agent i is d_i = Σ_j a_ij; for the diagonal matrix D = diag{d_1, d_2, …, d_m}, the Laplacian matrix of the undirected network G is defined as L = D − A. If the undirected network G is connected, the Laplacian matrix L is symmetric and positive semidefinite;
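The Laplacian construction above can be sketched numerically. The adjacency matrix below is an illustrative 3-agent ring chosen for the sketch, not data from the patent:

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A, where D = diag{d_1, ..., d_m} collects
    the agent degrees d_i = sum_j a_ij of the adjacency matrix A."""
    D = np.diag(A.sum(axis=1))
    return D - A

# Undirected 3-agent ring: a_ij = a_ji = 1 on every edge, no self-loops
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
L = laplacian(A)

assert np.allclose(L, L.T)                      # symmetric
assert np.all(np.linalg.eigvalsh(L) >= -1e-12)  # positive semidefinite
assert np.allclose(L @ np.ones(3), 0)           # row sums vanish
```

Since the ring is connected, L is symmetric positive semidefinite with a single zero eigenvalue, as the text states.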
secondly, the original optimization problem model (1) is embodied as follows
In the above formula, the objective function f_i represents the samples of the real problem to be processed by agent i, x ∈ R^n denotes the decision vector, and q_i denotes the total number of local samples assigned to agent i; the local objective function is further decomposed as f_i(x) = Σ_{h=1}^{q_i} f_i^h(x), where f_i^h, h ∈ {1, …, q_i}, is the h-th local sub-objective function. On this basis, the sets X_i are defined as closed convex sets with non-empty intersection X, the column-full-rank matrices B_i and vectors b_i are defined, and the optimal solution of the constrained convex optimization problem (1) is defined as x*.
As a preferred technical solution of the present invention, the convex optimization problem model (2) in step 2 has the following specific form:
The matrix B is defined as a column-full-rank block-diagonal matrix with diagonal blocks {B_1, …, B_m}, i.e., B = diag{B_1, …, B_m};
the stacked vector b = [b_1^T, …, b_m^T]^T is formed; let X = X_1 × … × X_m be the Cartesian product; let ⊗ denote the Kronecker product; the maximum and minimum values of q_i are denoted q_max and q_min, respectively (where q_min ≥ 1, i.e., each agent processes at least one sample); from the above, λ_min(BᵀB)q_min > 0. Based on the convex optimization problem model (2), the following assumptions and definitions are made:
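The block-diagonal stacking above can be sketched as follows; the local matrices B_i used here are hypothetical placeholders (scaled identities), chosen only so that each block has full column rank:

```python
import numpy as np

def block_diag(*blocks):
    """Assemble B = diag{B_1, ..., B_m} from the local constraint matrices."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols))
    r = c = 0
    for blk in blocks:
        out[r:r + blk.shape[0], c:c + blk.shape[1]] = blk
        r += blk.shape[0]
        c += blk.shape[1]
    return out

# Hypothetical local data for m = 3 agents with n = 2: each B_i has full
# column rank, so B does too and lambda_min(B^T B) > 0.
B_blocks = [np.eye(2) * (i + 1) for i in range(3)]
b_blocks = [np.ones(2) * (i + 1) for i in range(3)]

B = block_diag(*B_blocks)          # column-full-rank block diagonal
b = np.concatenate(b_blocks)       # stacked vector b = [b_1^T, ..., b_m^T]^T
q_min = 1                          # each agent holds at least one sample

lam_min = np.linalg.eigvalsh(B.T @ B).min()
assert lam_min * q_min > 0         # lambda_min(B^T B) q_min > 0
```

This verifies the claimed inequality λ_min(BᵀB)q_min > 0 for any column-full-rank B.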
Assumption 1: each local sub-objective function f_i^h is strongly convex and has a Lipschitz continuous gradient; i.e., for all i ∈ V, h ∈ {1, …, q_i}, and all x, y ∈ R^n, the following holds:
where 0 < μ ≤ L_f. Then, under Assumption 1, the globally optimal solution of the constrained convex optimization problem (2) is unique and denoted x*.
Assumption 2: the undirected network G is connected;
Assumption 3: the communication delays are uniformly bounded, i.e., there exists a positive integer B_0 bounding every delay τ_ij^k for all i, j ∈ V and all k ≥ 0.
Definition 1: define global vectors collecting the local variables x_{i,k}, y_{i,k}, w_{i,k}, and g_{i,k} as follows:
together with the locally delayed versions of the global vectors x_k and w_k:
Then, at the k-th iteration, the communication delay τ_ij^k, i, j ∈ V, is determined jointly by agent i and agent j, so the delayed global vectors x_k[i] and w_k[i] are held only by agent i.
As a preferred technical solution of the present invention, a specific iterative process of the distributed projection algorithm (3) based on the variance reduction technique in step 3 is as follows:
Initialization: k = 0
For each agent i = 1, …, m:
2: compute the local stochastic average gradient g_{i,k} as follows
4: update the variable x_{i,k+1} as follows
5: update the variable y_{i,k+1} as follows
y_{i,k+1} = y_{i,k} + B_i x_{i,k+1} − b_i
6: update the variable w_{i,k+1} as follows
w_{i,k+1} = w_{i,k} + β x_{i,k+1}
End for
Set k = k + 1 and repeat the loop until a stopping condition is met;
where x_{i,k}^h denotes the iterate associated with the sub-function f_i^h, h ∈ {1, …, q_i}, of the local objective function at the k-th iteration, and R^n denotes the set of n-dimensional real column vectors.
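The per-agent iteration above can be sketched in code. The y- and w-updates follow the patent text; the x-update shown here is a hypothetical projected primal-dual step standing in for the exact update formula, which is not recoverable from the extracted text, and the quadratic sub-functions are illustrative only:

```python
import numpy as np

def local_iteration(x, y, w, grad_table, grad_fn, Bi, bi, lo, hi,
                    alpha, beta, rng):
    """One iteration of agent i in the variance-reduced projection method.
    grad_table[h] stores the most recently evaluated gradient of f_i^h;
    grad_fn(h, x) returns the current gradient of f_i^h at x."""
    qi = grad_table.shape[0]
    h = int(rng.integers(qi))                  # draw one local sub-function
    g_new = grad_fn(h, x)
    # local stochastic average gradient (SAGA-style estimator)
    g = g_new - grad_table[h] + grad_table.mean(axis=0)
    grad_table[h] = g_new                      # refresh the stored gradient
    # hypothetical primal step, projected onto the local box set X_i
    x_new = np.clip(x - alpha * (g + Bi.T @ y + w), lo, hi)
    y_new = y + Bi @ x_new - bi                # y_{i,k+1} = y_{i,k} + B_i x_{i,k+1} - b_i
    w_new = w + beta * x_new                   # w_{i,k+1} = w_{i,k} + beta x_{i,k+1}
    return x_new, y_new, w_new

# Tiny illustrative run with sub-functions f_i^h(x) = 0.5 ||x - c_h||^2
rng = np.random.default_rng(0)
n, qi = 2, 4
C = rng.uniform(-1, 1, (qi, n))
grad_fn = lambda h, x: x - C[h]
x, y, w = np.zeros(n), np.zeros(n), np.zeros(n)
table = np.stack([grad_fn(h, x) for h in range(qi)])
for _ in range(50):
    x, y, w = local_iteration(x, y, w, table, grad_fn, np.eye(n), np.zeros(n),
                              -1.0, 1.0, alpha=0.1, beta=0.01, rng=rng)
assert np.all(x >= -1.0) and np.all(x <= 1.0)  # iterate stays inside X_i
```

Note how only one sub-gradient per iteration is freshly evaluated, which is the source of the claimed computational savings.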
As a preferable technical means of the present invention, the aboveThe iteration rule of (1) is as follows:
At iteration k, for agent i, the local stochastic average gradient is defined as:
Let F_k denote the σ-algebra generated by the local stochastic average gradients up to iteration k; then the following holds:
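The conditional-expectation property above — that the stale stored gradients cancel in expectation, leaving the true average gradient — can be checked numerically by enumerating every possible draw of h. The least-squares sub-functions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
q, n = 5, 3
A = rng.standard_normal((q, n))
b = rng.standard_normal(q)
grad = lambda h, x: (A[h] @ x - b[h]) * A[h]  # gradient of f^h(x) = 0.5 (a_h^T x - b_h)^2

x = rng.standard_normal(n)
# stale gradient table, evaluated at earlier (random) iterates
table = np.stack([grad(h, rng.standard_normal(n)) for h in range(q)])
avg_table = table.mean(axis=0)

# enumerate every possible draw of h to take the conditional expectation
estimates = np.stack([grad(h, x) - table[h] + avg_table for h in range(q)])
true_avg_grad = np.stack([grad(h, x) for h in range(q)]).mean(axis=0)

# E[g_k | F_k]: the stale-table terms cancel in expectation
assert np.allclose(estimates.mean(axis=0), true_avg_grad)
```

This is the unbiasedness that lets the estimator replace the full local gradient at a fraction of the cost.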
as a preferred technical solution of the present invention, the convergence analysis process in step 4 is as follows:
the following definitions are first made:
Definition 2: for 0 < α < 1/λ_max(L), define the positive semidefinite matrix P as:
where W = I − αL is a positive definite matrix.
Then, combining Assumptions 1–3 and Definitions 1–2 yields the following conclusion:
Consider the distributed projection algorithm (3) based on the variance reduction technique under Assumptions 1–3, with U_k and U* as in Definition 2. If the parameters η, φ and ξ satisfy:
0 < φ < 2μ (21b)
and the constant step size α and the algorithm parameter β satisfy:
then the sequence {U_k}_{k≥0} is bounded and convergent, and hence the sequence {x_k}_{k≥0} converges to the unique x*.
The invention has the beneficial effects that:
1. The proposed algorithm uses a local stochastic average gradient as an unbiased estimate of the local full gradient, which greatly reduces the computational cost of the agents in the network and eases the communication and computation pressure on the whole multi-agent system; it requires less gradient computation and fewer communication rounds to reach the same convergence accuracy;
2. Compared with existing distributed stochastic gradient optimization algorithms, the proposed algorithm can solve a more complex optimization problem, namely a convex optimization problem with both local set constraints and local equality constraints;
3. Compared with most existing optimization algorithms that consider communication delay, the proposed algorithm also preserves the privacy of each agent's local information while introducing communication delay, and thus has higher practical value.
Drawings
FIG. 1 is an undirected network connectivity diagram;
FIG. 2 is a graph comparing the performance of the algorithm of the present invention with that of the prior art;
FIG. 3 illustrates the transient behavior of the agents of the present invention without communication delay;
FIG. 4 illustrates the transient behavior of the agents of the present invention in the presence of communication delay;
Detailed Description
The invention will now be described in further detail with reference to the drawings and examples.
First, the following is defined for each symbol in the following formula:
a set of real numbers is represented as,representing an n-dimensional real column vector,the dimension-real matrix represents m × n;
the identity matrix is represented by I, the dimensions of which are determined by the context;
λ2(. -) represents the minimum non-zero eigenvalue of a semi-positive definite matrix, λ, for a real symmetric matrix Amax(A) And λmin(A) Respectively representing the maximum characteristic value and the minimum characteristic value;
xTand ATRepresents the transpose of vector x and the transpose of matrix a;
the Euclidean norm of the vector and the spectral norm of the matrix are uniformly expressed by | | · |;
for a semi-positive definite matrixAnd a vector x of the sum vector x,defining a scalar product<x,y>A=<x,Ay>And is andan A matrix weighted norm representing vector x;
e [ x ] represents the expectation for a random variable x;
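The weighted inner product and norm just defined can be sketched as small helpers; the matrix A below is an arbitrary symmetric positive semidefinite example:

```python
import numpy as np

def weighted_inner(x, y, A):
    """<x, y>_A = <x, A y> for a positive semidefinite matrix A."""
    return x @ (A @ y)

def weighted_norm(x, A):
    """||x||_A, the A-weighted norm of the vector x."""
    return np.sqrt(weighted_inner(x, x, A))

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric positive semidefinite
x = np.array([1.0, -1.0])
y = np.array([0.5, 2.0])

assert np.isclose(weighted_inner(x, y, A), x @ A @ y)
assert np.isclose(weighted_norm(x, A) ** 2, x @ A @ x)
```

Such P-weighted norms are exactly what the convergence proof later uses for its Fejér monotonicity argument.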
An embodiment of the invention is described below:
a distributed projection method based on variance reduction technology and considering communication delay comprises the following steps:
the specific construction process and form of the original optimization problem model (1) in the step 1 are as follows:
firstly: define the agent set V = {1, …, m}, the communication network edge set E, and the adjacency matrix A = [a_ij]; the undirected communication network is G = (V, E, A), and the simple network G has no self-loops;
when (i, j) ∈ E, a_ij = a_ji > 0; otherwise a_ij = a_ji = 0;
the degree of agent i is d_i = Σ_j a_ij; for the diagonal matrix D = diag{d_1, d_2, …, d_m}, the Laplacian matrix of the undirected network G is defined as L = D − A;
secondly, the original optimization problem model (1) is embodied as follows
In the above formula, the objective function f_i represents the samples of the real problem to be processed by agent i, x ∈ R^n denotes the decision vector, and q_i denotes the total number of local samples assigned to agent i;
the local objective function is further decomposed as f_i(x) = Σ_{h=1}^{q_i} f_i^h(x), where f_i^h, h ∈ {1, …, q_i}, is the h-th local sub-objective function;
on this basis, the sets X_i are defined as closed convex sets with non-empty intersection X, the column-full-rank matrices B_i and vectors b_i are defined, and the optimal solution of the constrained convex optimization problem (1) is defined as x*.
The concrete form of the convex optimization problem model (2) in the step 2 is as follows:
The matrix B is defined as a column-full-rank block-diagonal matrix with diagonal blocks {B_1, …, B_m}, i.e., B = diag{B_1, …, B_m};
the maximum and minimum values of q_i are denoted q_max and q_min, respectively (where q_min ≥ 1, i.e., each agent processes at least one sample);
from the above, λ_min(BᵀB)q_min > 0;
Based on the convex optimization problem model (2), the following assumptions and definitions are made:
Assumption 1: each local sub-objective function f_i^h is strongly convex and has a Lipschitz continuous gradient; i.e., for all i ∈ V, h ∈ {1, …, q_i}, and all x, y ∈ R^n, the following holds:
where 0 < μ ≤ L_f;
then, under Assumption 1, the globally optimal solution of the constrained convex optimization problem (2) is unique and denoted x*.
Assumption 2: the undirected network G is connected;
Assumption 3: the communication delays are uniformly bounded, i.e., there exists a positive integer B_0 bounding every delay τ_ij^k for all i, j ∈ V and all k ≥ 0.
Definition 1: define global vectors collecting the local variables x_{i,k}, y_{i,k}, w_{i,k}, and g_{i,k} as follows:
together with the locally delayed versions of the global vectors x_k and w_k:
Then, at the k-th iteration, the communication delay τ_ij^k, i, j ∈ V, is determined jointly by agent i and agent j, so the delayed global vectors x_k[i] and w_k[i] are held only by agent i.
The specific iterative process of the distributed projection algorithm (3) based on the variance reduction technology in the step 3 is as follows:
Initialization: k = 0
For each agent i = 1, …, m:
2: compute the local stochastic average gradient g_{i,k} as follows
4: update the variable x_{i,k+1} as follows
5: update the variable y_{i,k+1} as follows
y_{i,k+1} = y_{i,k} + B_i x_{i,k+1} − b_i
6: update the variable w_{i,k+1} as follows
w_{i,k+1} = w_{i,k} + β x_{i,k+1}
End for
Set k = k + 1 and repeat the loop until a stopping condition is met;
where x_{i,k}^h denotes the iterate associated with the sub-function f_i^h, h ∈ {1, …, q_i}, of the local objective function at the k-th iteration, and R^n denotes the set of n-dimensional real column vectors.
At iteration k, for agent i, the local stochastic average gradient is defined as:
Let F_k denote the σ-algebra generated by the local stochastic average gradients up to iteration k; then the following holds:
the convergence analysis process in step 4 is as follows:
First, the convergence analysis of this embodiment uses the following seven lemmas. Lemma 1: for any non-empty closed convex set X, the following two inequalities hold
where P_X[·] is the projection operator onto X;
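The two standard projection inequalities of Lemma 1 — the variational inequality and nonexpansiveness (a plausible reading of the lost formulas) — can be checked numerically on a box set:

```python
import numpy as np

def P(x, lo=-1.0, hi=1.0):
    """Projection operator P_X onto the box X = [lo, hi]^n."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(2)
for _ in range(200):
    x = rng.uniform(-3, 3, 4)
    z = rng.uniform(-3, 3, 4)
    y = rng.uniform(-1, 1, 4)              # an arbitrary point of X
    # variational inequality: (x - P_X[x])^T (P_X[x] - y) >= 0 for all y in X
    assert (x - P(x)) @ (P(x) - y) >= -1e-12
    # nonexpansiveness: ||P_X[x] - P_X[z]|| <= ||x - z||
    assert np.linalg.norm(P(x) - P(z)) <= np.linalg.norm(x - z) + 1e-12
```

Box sets are exactly the local constraints X_i = [−1_n, 1_n] used in the embodiment, where the projection is a componentwise clip.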
Lemma 2: under Assumption 1, the globally optimal solution x* of the constrained convex optimization problem (2) exists uniquely, and:
where the constant step size α > 0 and the parameter β > 0;
Lemma 3: consider the sequences generated by the distributed projection algorithm (3) based on the variance reduction technique under Assumptions 1–2, including {g_k}_{k≥0}; we have
where the auxiliary sequence {p_k}_{k≥0} is defined as:
and the sequence {p_k}_{k≥0} is non-negative under Assumption 1;
and (4) introduction: considering a distributed projection algorithm (3) and a sequence (13) based on a variance reduction technique, for the assumption that 1 holdsIs provided with
Lemma 5: under Assumption 3, consider a global vector v_k = [(v_{1,k})^T, …, (v_{m,k})^T]^T and its delayed version v_k[i]; we have:
where l and d are two non-negative scalars; then, summing over k from 0 to n yields
Lemma 6: under Assumptions 1–3, for the distributed projection algorithm (3) based on the variance reduction technique, the following inequality holds
The proof of the above conclusions is as follows:
According to Definition 1, the distributed projection algorithm (3) based on the variance reduction technique can be written in the following compact form:
yi,k+1=yi,k+Bixi,k+1-bi (9b)
wi,k+1=wi,k+βxi,k+1 (9c)
According to (9a), we have:
where the inequality uses the following facts:
(i) note that x_{k+1} = P_X[v_k]; then, by Lemma 1, the following holds:
(ii) Similar to [12], we have
where η and φ are positive constants; the first inequality uses Young's inequality, and the second uses the fact that the function f is strongly convex with a Lipschitz continuous gradient. Substituting (27) into (24) yields:
Next, we deal with the term 2α(x_{k+1} − x*)^T BᵀB(x_{k+1} − x_k):
Substituting the result of equation (29) into equation (28) gives the desired result:
where p_k is defined in (13); the first equality in (31) uses the standard variance decomposition E[‖a − E[a|F_k]‖² | F_k] = E[‖a‖² | F_k] − ‖E[a|F_k]‖², and the inequality uses the strong convexity and Lipschitz continuity of f. Next, substituting the conclusion of (31) into (30) yields:
Next, we introduce an important relation, where V is a positive semidefinite matrix. From this relation, we obtain the following three equations:
finally, the result of expression (33) is substituted into expression (32).
Lemma 7: under Assumption 3, the following two inequalities hold
where ξ_1, ξ_2 are two arbitrary positive constants; note that when the network has no delay, the delay terms vanish and the bounds follow directly.
The proof of the above conclusion is as follows:
We first prove (19a) in Lemma 7:
the second inequality uses Lemma 5, the last inequality uses Young's inequality, and ξ_1 is a positive constant; the proof of (19b) is similar to that of (19a) and is therefore omitted;
next, for the convenience of analysis, the following definitions are made:
Definition 2: for 0 < α < 1/λ_max(L), define the positive semidefinite matrix P as:
where W = I − αL is a positive definite matrix.
Then, combining Assumptions 1–3 and Definitions 1–2, the following conclusion is obtained:
Consider the distributed projection algorithm (3) based on the variance reduction technique under Assumptions 1–3, with U_k and U* as in Definition 2. If the parameters η, φ and ξ satisfy:
0 < φ < 2μ (21b)
and the constant step size α and the algorithm parameter β satisfy:
then the sequence {U_k}_{k≥0} is bounded and convergent, and hence the sequence {x_k}_{k≥0} converges to the unique x*.
The specific demonstration process is as follows:
For α > 0 and β > 0, substituting the results of Lemma 7 into Lemma 6 yields:
where the relevant constant is defined in Lemma 6. Next, according to Lemma 4, we move the term c(E[p_{k+1}|F_k] − p_k) to both sides of (35), obtaining:
By Lemma 3, the sequence p_k ≥ 0; thus, if η > 2L_f[L_f q_max + q_min(L_f − μ)]/λ_min(BᵀB)q_min and 4α q_max L_f/η ≤ c, then equation (36) can be rewritten as:
According to Definition 2, if 0 < α < 1/λ_max(L) and 0 < β < 1, we have
To handle the first term on the right-hand side of inequality (38), we set ξ_1 = ξ_2 = ξ, with 0 < ξ < 2μ and 0 < ξ < (2μ − φ)/(1 + β), and define a non-negative constant; based on this definition, (38) can be rewritten as:
Summing (39) over k from 0 to n yields:
Under conditions (21) and (22), we define a positive semidefinite matrix
so the inequality (40) can be rewritten as:
when n approaches infinity, we have
The above formula indicates that the right side of formula (39) is harmonizable. Thus, the sequenceInternal accumulation<·,·>PFitting Fej pir monotonous; we can directly derive the sequenceIs bounded and converged; thus, the sequence { U }k}k≥0Is bounded and converged; finally we can get the sequence xk}k≥0Converge on x*(ii) a Under the condition that the assumption 1 holds, we know the global optimal solution x*Are present only.
Example 1
To demonstrate the effectiveness of the proposed algorithm, we consider using a multi-agent network with m = 10 agents to solve the following least-squares optimization problem:
where the abscissa in the figures represents the computational cost, measured in units of one full-gradient evaluation over all samples. We set n = 10, p_i = 1, and the total number of samples Q = 1000; the samples are randomly and evenly distributed among the agents in the network, so each agent i ∈ V processes q_i = Q/m samples. The local parameters are randomly selected from [−1, 1] and [−n, n], respectively. The equality constraint is defined such that B_i is 1 when j = i and 0 otherwise, and b_i is always 1. The local set constraint of agent i is defined as X_i = [−1_n, 1_n], where 1_n denotes the n-dimensional column vector of all ones.
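The experimental setup above can be sketched as data generation. The uniform distributions and the identity reading of B_i are assumptions where the extracted text is ambiguous (only the intervals [−1, 1] and [−n, n] and the 0/1 structure of B_i are stated):

```python
import numpy as np

m, n, Q = 10, 10, 1000        # agents, decision dimension, total samples
qi = Q // m                   # q_i = Q/m = 100 samples per agent
rng = np.random.default_rng(3)

# Hypothetical least-squares data matching the stated ranges: regressors
# drawn from [-1, 1] and responses from [-n, n].
C = [rng.uniform(-1.0, 1.0, (qi, n)) for _ in range(m)]
d = [rng.uniform(-float(n), float(n), qi) for _ in range(m)]

# Equality constraint: B_i has 1 where j = i and 0 otherwise (read here as
# an identity block); b_i is always 1
B = [np.eye(n) for _ in range(m)]
b = [np.ones(n) for _ in range(m)]

# Local set constraint X_i = [-1_n, 1_n]
lo, hi = -np.ones(n), np.ones(n)

assert sum(Ci.shape[0] for Ci in C) == Q    # samples evenly distributed
```

With this data in hand, each agent's local objective is the sum of its q_i least-squares sub-functions, matching the decomposition f_i = Σ_h f_i^h used throughout.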
The results of applying the network and the embodiment are shown in FIGS. 1 to 4; specifically:
FIG. 1 shows the experimental communication network, in which the communication rate of the network is 0.5;
FIG. 2 compares the performance of the algorithm of the present invention with a prior-art algorithm, namely the algorithm disclosed in "Q. Liu, S. Yang, and Y. Hong, 'Constrained consensus algorithms with fixed step size for distributed convex optimization over multiagent networks,' IEEE Transactions on Automatic Control, vol. 62, no. 8, pp. 4259–4265, 2017"; it is apparent from FIG. 2 that the proposed algorithm performs best, i.e., converges fastest;
FIG. 3 shows the transient behavior of agents Nos. 2, 4, 6, 8 and 10 without communication delay;
FIG. 4 shows the transient behavior of agents Nos. 2, 4, 6, 8 and 10 in the presence of communication delay (with a maximum communication delay of 10 per iteration);
comparing FIGS. 3 and 4 shows that communication delay has a considerable influence on the transient behavior of the agents.
Finally, it should be noted that these embodiments merely illustrate the invention and do not limit its scope. It will be apparent to those skilled in the art that various other changes and modifications can be made based on the above description; it is neither necessary nor possible to enumerate all embodiments. Obvious variations or modifications made without departing from the scope of the invention remain within its protection.
Claims (6)
1. A distributed projection method based on variance reduction technology and considering communication delay comprises the following steps:
step 1, formulating an original optimization problem model (1) for a multi-agent system subject to both local set constraints and local equality constraints;
step 2, equivalently converting the original optimization problem model (1) obtained in step 1 into a convex optimization problem model (2) amenable to distributed processing;
step 3, proposing a distributed projection algorithm (3) based on a variance reduction technique to solve the constrained convex optimization problem model (2), i.e., using a local stochastic average gradient as an unbiased estimate of the local full gradient, thereby relieving the heavy computational burden of evaluating the full gradients of all local objective functions at every iteration;
and 4, carrying out convergence analysis on the distributed projection algorithm (3) based on the variance reduction technology, which is provided in the step 3.
2. The distributed projection method based on variance reduction technology considering communication delay as claimed in claim 1, wherein:
the specific construction process and form of the original optimization problem model (1) in the step 1 are as follows:
firstly: define the agent set V = {1, …, m}, the communication network edge set E, and the adjacency matrix A = [a_ij]; the undirected communication network is G = (V, E, A), and the simple network G has no self-loops;
when (i, j) ∈ E, a_ij = a_ji > 0; otherwise a_ij = a_ji = 0;
the degree of agent i is d_i = Σ_j a_ij; for the diagonal matrix D = diag{d_1, d_2, …, d_m}, the Laplacian matrix of the undirected network G is defined as L = D − A;
secondly, the original optimization problem model (1) is embodied as follows
In the above formula, the objective function f_i represents the samples of the real problem to be processed by agent i, x denotes the decision vector, and q_i denotes the total number of local samples assigned to agent i;
the local objective function is further decomposed as f_i(x) = Σ_{h=1}^{q_i} f_i^h(x), where f_i^h is the h-th local sub-objective function;
3. The distributed projection method based on variance reduction technology considering communication delay as claimed in claim 2, wherein:
the concrete form of the convex optimization problem model (2) in the step 2 is as follows:
The matrix B is defined as a column-full-rank block-diagonal matrix with diagonal blocks {B_1, …, B_m}, i.e., B = diag{B_1, …, B_m}; the stacked vector b = [b_1^T, …, b_m^T]^T is formed; let X = X_1 × … × X_m be the Cartesian product; let ⊗ denote the Kronecker product; the maximum and minimum values of q_i are denoted q_max and q_min, respectively (where q_min ≥ 1, i.e., each agent processes at least one sample); from the above, λ_min(BᵀB)q_min > 0;
Based on the convex optimization problem model (2), the following assumptions and definitions are made:
Assumption 1: each local sub-objective function f_i^h is strongly convex and has a Lipschitz continuous gradient; i.e., for all i ∈ V, h ∈ {1, …, q_i}, and all x, y ∈ R^n, the following formula holds:
where 0 < μ ≤ L_f;
then, under Assumption 1, the globally optimal solution of the constrained convex optimization problem (2) is unique and denoted x*.
Assumption 2: the undirected network G is connected;
Assumption 3: the communication delays are uniformly bounded, i.e., there exists a positive integer B_0 bounding every delay τ_ij^k for all i, j ∈ V and all k ≥ 0.
Definition 1: defining global vectors to collect local variables xi,k,yi,k,wi,k,gi,kAndthe following were used:
and a global vector xkAnd wkVersion of local delay:
4. The distributed projection method based on variance reduction technology considering communication delay as claimed in claim 3, wherein:
the specific iterative process of the distributed projection algorithm (3) based on the variance reduction technology in the step 3 is as follows:
Initialization: k = 0
For each agent i = 1, …, m:
2: compute the local stochastic average gradient g_{i,k} as follows
4: update the variable x_{i,k+1} as follows
5: update the variable y_{i,k+1} as follows
y_{i,k+1} = y_{i,k} + B_i x_{i,k+1} − b_i
6: update the variable w_{i,k+1} as follows
w_{i,k+1} = w_{i,k} + β x_{i,k+1}
End for
Set k = k + 1 and repeat the loop until a stopping condition is met;
5. The distributed projection method based on variance reduction technology considering communication delay as claimed in claim 4, wherein:
At iteration k, for agent i, the local stochastic average gradient is defined as:
Let F_k denote the σ-algebra generated by the local stochastic average gradients up to iteration k; then the following holds:
6. the distributed projection method based on variance reduction technology considering communication delay as claimed in claim 5, wherein:
the convergence analysis process in step 4 is as follows:
the following definitions are first made:
Definition 2: for 0 < α < 1/λ_max(L), define the positive semidefinite matrix P as:
where W = I − αL is a positive definite matrix.
Then, combining Assumptions 1–3 and Definitions 1–2 yields the following conclusion:
Consider the distributed projection algorithm (3) based on the variance reduction technique under Assumptions 1–3, with U_k and U* as in Definition 2; if the parameters η, φ and ξ satisfy:
0 < φ < 2μ (21b)
and the constant step size α and the algorithm parameter β satisfy:
then the sequence {U_k}_{k≥0} is bounded and convergent, and hence the sequence {x_k}_{k≥0} converges to the unique x*.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010614853.XA CN112069631B (en) | 2020-06-30 | 2020-06-30 | Distributed projection method based on variance reduction technology and considering communication time delay |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010614853.XA CN112069631B (en) | 2020-06-30 | 2020-06-30 | Distributed projection method based on variance reduction technology and considering communication time delay |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112069631A true CN112069631A (en) | 2020-12-11 |
CN112069631B CN112069631B (en) | 2024-05-24 |
Family
ID=73656196
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113076662A (en) * | 2021-05-01 | 2021-07-06 | 群智未来人工智能科技研究院(无锡)有限公司 | Linear convergence distributed discrete time optimization algorithm for constraint optimization problem |
CN115691675A (en) * | 2022-11-10 | 2023-02-03 | 西南大学 | Efficient mushroom toxicity identification method based on asynchronous distributed optimization algorithm |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130198372A1 (en) * | 2011-12-15 | 2013-08-01 | Massachusetts Institute Of Technology | Distributed newton method and apparatus for network utility maximization |
CN108430047A (en) * | 2018-01-19 | 2018-08-21 | 南京邮电大学 | A kind of distributed optimization method based on multiple agent under fixed topology |
WO2019134254A1 (en) * | 2018-01-02 | 2019-07-11 | 上海交通大学 | Real-time economic dispatch calculation method using distributed neural network |
CN110311388A (en) * | 2019-05-28 | 2019-10-08 | 广东电网有限责任公司电力调度控制中心 | Control method for frequency of virtual plant based on distributed projection subgradient algorithm |
CN111259327A (en) * | 2020-01-15 | 2020-06-09 | 桂林电子科技大学 | Subgraph processing-based optimization method for consistency problem of multi-agent system |
Non-Patent Citations (2)
Title |
---|
AVINASH KUMAR ROY: "Development of event-triggered-based minimum variance recursive estimator for the NLNS using multi-model approach", IET Signal Processing, vol. 13, no. 9, pages 766-777, XP006088022, DOI: 10.1049/iet-spr.2018.5546 * |
REN Fangfang; LI Dequan: "Distributed stochastic gradient-free optimization algorithm under time delays", Journal of Anhui University of Science and Technology (Natural Science), no. 01, pages 34-39 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |