CN113408741B - Distributed ADMM machine learning method of self-adaptive network topology - Google Patents

Distributed ADMM machine learning method of self-adaptive network topology

Info

Publication number
CN113408741B
Authority
CN
China
Prior art keywords
node
nodes
machine learning
iteration
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110691239.8A
Other languages
Chinese (zh)
Other versions
CN113408741A (en)
Inventor
曾帅
张烨
肖俊
林海韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202110691239.8A priority Critical patent/CN113408741B/en
Publication of CN113408741A publication Critical patent/CN113408741A/en
Application granted granted Critical
Publication of CN113408741B publication Critical patent/CN113408741B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/231 Hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram

Abstract

The invention discloses a distributed ADMM machine learning method for a self-adaptive network topology, belonging to the field of machine learning. The nodes are divided into 1 management node and a plurality of working nodes, and the working nodes are abstracted into upper-layer and lower-layer nodes. For a connected network, a global convex optimization problem is decomposed into a plurality of local convex optimization problems; the local problems are solved, and the global optimal solution is obtained by coordinating the local optimal solutions. The machine learning method comprises two parts, node detection and iterative computation. During node detection the working nodes run the updates of the iterative computation part, and in addition each upper-layer node reports the completion of every single iteration to the management node. When the positions of the upper-layer nodes are selected, a greedy idea avoids traversing all possibilities, and dynamic selection keeps the influence of link delay in the network as small as possible.

Description

Distributed ADMM machine learning method adaptive to network topology
Technical Field
The invention belongs to the technical field of machine learning, and particularly relates to a distributed ADMM machine learning method based on an adaptive network topology.
Background
In recent years, with the rapid development of the information industry, the Internet has kept expanding, and big data and machine learning are used ever more frequently in business. In the field of machine learning, large amounts of high-dimensional data come from different nodes, placing high demands on computing capacity; in this situation a single node can hardly solve the problem, whereas a distributed machine learning algorithm adapts to it much better.
The Alternating Direction Method of Multipliers (ADMM) is a constrained-optimization method widely used in machine learning. By decomposing a global problem into local problems it greatly reduces the cost of each single problem, and the local solutions can be coordinated into the final solution of the global problem. The method offers great room for extension and optimization: from the early dual ascent, dual decomposition and augmented Lagrangian multiplier methods, to the ADMM presented by Stephen Boyd, and onward, variants of ADMM have continually been proposed for specific situations, and its strengths on convex optimization problems have been applied in many fields.
According to the iterative communication pattern of the ADMM machine learning process, methods can be broadly divided into centralized and distributed ones. Centralized here differs from conventional single-machine centralization: the system is still spread over several nodes, but all nodes communicate with one specific node. The network contains one central node and several common nodes; no two common nodes can communicate with each other, while any common node can exchange data with the central node. After communicating with all common nodes, the central node obtains the optimal solutions of all local subproblems, coordinates all the solutions into the result of one completed iteration, and then issues that result back to all nodes for recalculation. Unlike the centralized case, the distributed mode has no single fixed central node; the network may contain several intermediate nodes, common nodes can select a nearby intermediate node, and the data are finally summarized and coordinated by a central node. The pressure on one node is thus spread over several nodes, which better matches the development direction of the Internet. This mode can avoid poor-quality links by selecting suitable intermediate nodes, thereby accelerating iteration, and because there are several intermediate nodes, the paralysis of one central node cannot crash the whole system.
In distributed computing, nodes must communicate continuously to exchange data and thereby speed up convergence. However, since inter-node communication travels over the network, the whole computation is affected by network conditions. If an ill-chosen node interaction in the computation is hit by network delay, the entire calculation process slows down considerably, and an ill-chosen communication partner can pollute the data of the corresponding communication group, slowing convergence and increasing the convergence error. How to select a suitable group of communication partners for each individual node is therefore an unavoidable problem. Aiming at the influence of link delay on a distributed system, the invention provides a distributed ADMM machine learning method based on a self-adaptive network topology.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. While improving the robustness of the system, it distributes the nodes more reasonably by adapting to network conditions, reducing the influence of network link delay. The technical scheme of the invention is as follows:
a distributed ADMM machine learning method of adaptive network topology, comprising the steps of:
dividing the nodes into 1 management node and a plurality of working nodes, and abstracting the working nodes into upper-layer and lower-layer nodes; for a connected network, decomposing a global convex optimization problem into a plurality of local convex optimization problems, solving the local problems, and obtaining the global optimal solution by coordinating the local optimal solutions, wherein the machine learning method comprises two parts, node detection and iterative computation. The node detection part comprises upper/lower-layer node attribution updating, upper/lower-layer node communication, and communication between the management node and the upper-layer nodes; the iterative computation part comprises data communication between related upper- and lower-layer nodes and single iterative computation. During node detection the working nodes run the updates of the iterative computation part, and in addition each upper-layer node feeds back the completion of every single iteration to the management node. When the positions of the upper-layer nodes are selected, a greedy idea avoids traversing all possibilities, and dynamic selection keeps the influence of link delay in the network as small as possible.
Further, the method is used to solve a regularized linear regression problem, i.e.
$$\min_x \ \frac{1}{2}\|Ax - b\|_2^2 + \lambda\|x\|_1,$$

which for a connected network of N lower-layer nodes takes the consensus form

$$\min_{x_i,\,z} \ \sum_{i=1}^{N} \frac{1}{2}\|A_i x_i - b_i\|_2^2 + \lambda\|z\|_1 \quad \text{subject to} \quad x_i - z = 0,\ i = 1,\dots,N$$
Where A is an m x n order matrix, b is an m order vector, λ is a constant, and x is an n order vector.
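As an illustrative sketch (not part of the claimed method), this objective can be set up in Python/NumPy as follows; the problem sizes, the random data, and the value of λ are assumptions chosen purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 200, 50, 0.1           # illustrative sizes and regularization weight (assumed)
A = rng.standard_normal((m, n))    # the m x n matrix A
b = rng.standard_normal(m)         # the m-order vector b

def lasso_objective(x: np.ndarray) -> float:
    """(1/2)||Ax - b||_2^2 + lam*||x||_1, the regularized linear regression objective."""
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
```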
Further, the node detection part specifically includes the following steps:
1) The management node issues a detection-start (start) instruction to the upper-layer nodes and records the current time t_s;
2) Upper-layer node i receives the start instruction and then sends data to the nodes in its corresponding lower-layer node list L_i;
3) Lower-layer node j receives and stores the data until the data of all nodes in its corresponding upper-layer node list U_j have arrived, performs the calculation, and after the calculation returns the result to all corresponding upper-layer nodes in U_j;
4) Upper-layer node i receives and stores the data until the data of all nodes in its corresponding lower-layer node list L_i have arrived, performs the calculation, and after the calculation returns the result to all corresponding lower-layer nodes in L_i;
5) The upper-layer node sends an iteration-complete (iter over) instruction to the management node;
6) The management node waits to receive and record the iter over instructions of all upper-layer nodes; when only one upper-layer node's message remains unreceived, that upper-layer node is cancelled, the current time t_c is obtained, and t_c − t_s gives the time t_l required for a single complete system iteration, which is stored as t_i in the iteration time set T; if t_i is the minimum value of T, i.e. t_i = min T, the current upper-layer node set U of the system is saved, otherwise U is not updated;
7) The management node issues a node-attribution update instruction to all upper- and lower-layer nodes, which perform the corresponding attribution-update operation on receipt;
8) Steps 3 to 7 are repeated until only one upper-layer node remains in the network; the upper-layer node set U stored by the management node is then the final upper-layer node set;
9) The management node sends a detection-complete instruction to all upper- and lower-layer nodes, which perform the node-attribution update operation on receipt.
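The management-node side of this detection loop can be sketched as follows. This is a minimal sketch assuming a single-threaded simulation; run_system_iteration (standing in for steps 2 to 5 and returning the upper-layer node whose iter over message arrived last) and update_attribution (step 7) are hypothetical helpers, not part of the patent:

```python
import time

def probe_upper_sets(initial_upper, run_system_iteration, update_attribution):
    """Greedy elimination of steps 1)-9): time one full system iteration per
    candidate upper-node set, drop the slowest upper node each round, and
    remember the fastest set seen (the final set U)."""
    upper = list(initial_upper)
    best_time, best_set = float("inf"), list(upper)
    while len(upper) > 1:
        t_s = time.monotonic()                 # step 1): record start time t_s
        slowest = run_system_iteration(upper)  # steps 2)-5): one probe iteration
        t_i = time.monotonic() - t_s           # step 6): single-iteration time
        if t_i < best_time:                    # t_i = min T  ->  save this U
            best_time, best_set = t_i, list(upper)
        upper.remove(slowest)                  # cancel the slowest upper node
        update_attribution(upper)              # step 7): node attribution update
    return best_set                            # final upper-layer node set U
```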
furthermore, the upper node and the lower node contain a relative node attribution relationship, namely the lower node is related to the upper node with neighbor relationship and the upper node closest to the lower node; the upper node is related to the lower node with neighbor relation; if the distance from the lower node to the upper node is the shortest among all the upper nodes, the upper node is also related to the lower node; each lower level node may be associated with multiple upper level nodes, and each upper level node may also be associated with multiple lower level nodes.
Furthermore, each lower-layer node stores a local variable x and a variable u_i corresponding to each related upper-layer node i, and each upper-layer node stores a local variable z; all variables x, u and z are n-order vectors whose initial state is the n-order zero vector.
Further, the iterative computation part includes the following steps:
In the k-th iterative computation the upper-layer node sends z^k down to its related lower-layer nodes, and the lower-layer node updates its local x and u via

$$x^{k+1} = (A^T A + \rho I)^{-1}\left(A^T b + \rho(z^k - u^k)\right),$$

$$u^{k+1} = u^k + x^{k+1} - z^{k+1},$$

obtaining the results x^{k+1}, z^{k+1} and u^{k+1} of the k-th iterative computation, where the variables lie in an n-dimensional real closed convex set, I is the n-order identity matrix, and ρ > 0 is a penalty parameter. The updated x and u are returned to the related upper-layer node, which updates its local z via

$$z^{k+1} = S_{\lambda/(\rho N)}\!\left(\bar{x}^{k+1} + \bar{u}^{k}\right),$$

where S_{\lambda/(\rho N)} denotes the soft-threshold operator

$$S_{\kappa}(a) = \begin{cases} a - \kappa, & a > \kappa, \\ 0, & |a| \le \kappa, \\ a + \kappa, & a < -\kappa. \end{cases}$$
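Under the consensus-lasso reading above, the three updates can be sketched in NumPy as below; the function and parameter names are illustrative assumptions, and N is the number of lower-layer results averaged at the upper-layer node:

```python
import numpy as np

def soft_threshold(a: np.ndarray, kappa: float) -> np.ndarray:
    """S_kappa(a): shrink each component of a toward zero by kappa."""
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

def lower_x_update(A, b, z_k, u_k, rho):
    """x^{k+1} = (A^T A + rho I)^{-1}(A^T b + rho(z^k - u^k)), run at a lower node."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b + rho * (z_k - u_k))

def upper_z_update(x_bar, u_bar, lam, rho, N):
    """z^{k+1} = S_{lam/(rho N)}(x_bar^{k+1} + u_bar^k), run at an upper node."""
    return soft_threshold(x_bar + u_bar, lam / (rho * N))

# once z^{k+1} is received back, the lower node closes the round with
# u^{k+1} = u^k + x^{k+1} - z^{k+1}
```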
Further, the management node stores a list of nodes that have finished the current iteration. With N the number of upper-layer nodes in the current network, when the length of this finished-iteration list reaches N−1, the management node notifies the whole network to remove the only upper-layer node not in the list, clears the list, and stores the current system's upper-layer node list S_{N−1} together with the time interval t_{N−1} between two list clearings; this is repeated until only one upper-layer node remains in the network.
Further, for i ∈ {1, …, N−1}, the minimum t_i and its corresponding upper-layer node list S_i are selected; these upper-layer nodes serve as the upper-layer nodes of the final system, and all working nodes are notified to carry out the formal iterative computation part. After receiving the notification, a working node initializes its local x, u and z variables, updates its related-node attribution, and then begins iterative-computation communication with its related nodes; iterative computation stops when the number of iterations reaches the system's preset maximum.
The invention has the following advantages and beneficial effects:
according to the invention, the calculation pressure is dispersed from a single node to the nodes in the whole system through a distributed method, the whole calculation speed is not limited by the hardware processing capacity of the single node any more, the high-dimensional data is split, the iterative calculation pressure of the single node is reduced after the dimensionality is reduced, and the speed is accelerated. And the number of the upper nodes in the system is at least 2, namely, a centralized star mode can not appear, and the reliability of the system is ensured. Finally, the number and the positions of the upper nodes are determined by an algorithm. And continuously removing slowest upper-layer nodes by adopting a similar hierarchical clustering mode from all the initial nodes to obtain a corresponding upper-layer node set with the fastest iterative convergence of the single system until the final comparison. A greedy idea is taken to avoid enumerating all possibilities. The upper node set is determined by taking into account the computing power of each device, rather than just taking into account network latency and ignoring differences between nodes. And because the calculation time of the node detection part equipment and the network delay are calculated in the single iteration time, the calculation capability of each node does not need to be known in advance. The influence of network attributes such as network delay and topology change on distributed computation is reduced to a certain extent.
Drawings
FIG. 1 shows the node correspondence of a simple-topology embodiment;
FIG. 2 is a diagram of a preferred embodiment of a small world simulation network topology provided by the present invention;
FIG. 3 is a flow chart of a node probe section;
FIG. 4 is a diagram of a formal iterative computation process.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly in the following with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
a distributed ADMM machine learning method based on self-adaptive network topology is used for decomposing a global convex optimization problem into a plurality of local convex optimization problems aiming at a connected network and solving the local convex optimization problems, and obtaining a global optimal solution by coordinating the local optimal solution.
Furthermore, the whole machine learning method is decomposed into two parts of node detection and iterative computation.
Further, the nodes in the system are divided into a management node and other working nodes, and the working nodes are abstracted into two attributes, namely upper-layer node and lower-layer node, wherein a working node is necessarily a lower-layer node but not necessarily an upper-layer node; this is determined by the algorithm.
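As a sketch, the two node roles can be represented by a small data structure; the field names here are illustrative assumptions rather than the patent's terminology:

```python
from dataclasses import dataclass, field

@dataclass
class WorkNode:
    """A working node: always usable as a lower-layer node, and possibly
    also acting as an upper-layer node while detection runs."""
    node_id: int
    is_upper: bool = True                            # all nodes start as upper nodes
    upper_list: list = field(default_factory=list)   # U_self: related upper-layer nodes
    lower_list: list = field(default_factory=list)   # L_self: related lower-layer nodes
```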
Furthermore, a related-node attribution method is provided; since the nodes are divided into upper-layer and lower-layer nodes, the attribution update is likewise divided into an upper part and a lower part.
The flow of the lower-layer node's own attribution update is:
1) Traverse all neighbor nodes; if a neighbor node is an upper-layer node, add it to the node's own upper-layer node list U_self;
2) Traverse all upper-layer nodes, obtaining the delay T_u from this lower-layer node to each upper-layer node u; select the shortest T_x and add the corresponding upper-layer node x to this lower-layer node's upper-layer node list U_self.
Because the upper/lower node relations correspond to each other, once all lower-layer nodes have determined their related upper-layer nodes, the related lower-layer nodes of each upper-layer node are determined as well, so the related lower-layer node list L_u of upper-layer node u can be deduced in reverse from the lower-layer attribution-update process:
1) Traverse all neighbor nodes and add them to the corresponding lower-layer node list L_u;
2) Traverse all lower-layer nodes; among the delays {T_1, …, T_n} from a lower-layer node to all upper-layer nodes, if the delay T_i to the current upper-layer node satisfies T_i = min{T_1, …, T_n}, add that lower-layer node to the corresponding lower-layer node list L_u.
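A sketch of both update flows, assuming delay[j][u] holds the measured delay from lower-layer node j to upper-layer node u and neighbors[j] is node j's neighbor set (both hypothetical structures):

```python
def lower_attribution_update(j, uppers, neighbors, delay):
    """Steps 1)-2) for lower node j: neighboring upper nodes plus the
    shortest-delay upper node form U_self."""
    u_self = [u for u in neighbors[j] if u in uppers]   # 1) neighboring upper nodes
    nearest = min(uppers, key=lambda u: delay[j][u])    # 2) shortest T_x
    if nearest not in u_self:
        u_self.append(nearest)
    return u_self

def upper_attribution_update(u, lowers, uppers, neighbors, delay):
    """The reverse deduction for upper node u: its neighbors, plus every
    lower node for which u is the minimum-delay upper node, form L_u."""
    l_u = [j for j in lowers if u in neighbors[j]]      # 1) neighboring lower nodes
    for j in lowers:                                    # 2) T_i = min{T_1,...,T_n}
        if delay[j][u] == min(delay[j][v] for v in uppers) and j not in l_u:
            l_u.append(j)
    return l_u
```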
Further, each lower level node may be associated with a plurality of upper level nodes, and each upper level node may be associated with a plurality of lower level nodes.
Furthermore, each lower-layer node stores a local variable x and a variable u_i corresponding to each related upper-layer node i, and each upper-layer node stores a local variable z.
Further, in the k-th iterative computation the upper-layer node sends z^k down to its related lower-layer nodes, and the lower-layer node updates its local x and u via

$$x^{k+1} := (A^T A + \rho I)^{-1}\left(A^T b + \rho(z^k - u^k)\right),$$

$$u^{k+1} := u^k + x^{k+1} - z^{k+1},$$

where I is the n-order identity matrix; the updated x and u are returned to the related upper-layer nodes, and the upper-layer node updates its local z via

$$z^{k+1} := S_{\lambda/(\rho N)}\!\left(\bar{x}^{k+1} + \bar{u}^{k}\right).$$
Furthermore, in the node detection process, the working node executes the updating of the iterative computation part, and in addition, the upper node feeds back the completion of a single iteration to the management node when each iteration is completed.
Further, the management node stores a list of nodes that have finished the current iteration; with N the number of upper-layer nodes in the current network, if the length of this list reaches N−1, the management node notifies the whole network to remove the only upper-layer node not in the list, clears the list, and stores the current system's upper-layer node list S_{N−1} and the time interval t_{N−1} between two list clearings, repeating until only one upper-layer node remains in the network.
Furthermore, after the working node receives the upper node update instruction notified by the management node, new relevant node attribution update is performed, and the process is repeated until only one upper node is left in the network.
Further, for i ∈ {1, …, N−1}, the minimum t_i and its corresponding upper-layer node list S_i are selected; these upper-layer nodes serve as the upper-layer nodes in the final system, and all working nodes are notified to carry out the formal iterative computation part.
Further, after receiving the notification, the working node initializes its local x, u and z variables, performs the related-node attribution update, and then starts iterative-computation communication with the related nodes. Iterative computation stops when the number of iterations reaches the maximum preset by the system.
The invention abstracts the working nodes into upper-layer and lower-layer nodes and solves the convex optimization problem with a distributed ADMM algorithm, while taking into account the influence of network link delay on communication speed during iterative computation between different nodes; when selecting the positions of the upper-layer nodes, a greedy idea avoids traversing all possibilities, and dynamic selection keeps the influence of link delay in the network as small as possible. As shown in fig. 1, a simple topology network consists of 5 working nodes and 1 management node, where upper and lower nodes with the same serial number represent the same physical device; since each working node may have more than one related node, many-to-many relationships can arise, such as upper-layer nodes 1, 3, 5 and lower-layer nodes 2, 3, 4 in fig. 1.
For the small-world simulation network, 1 management node and 16 working nodes are selected; every working node has 4 neighbor working nodes, and each link has a 90% probability of being rewired between two other nodes. The approximate topology is shown in fig. 2, in which the management node is not included.
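Such a topology can be generated, for example, with NetworkX's Watts-Strogatz model; using this library is an editorial assumption (the patent names no tooling), with n=16 nodes, k=4 neighbors, and the 90% rewiring probability stated above:

```python
import networkx as nx

# 16 working nodes, each starting with 4 ring neighbors; every edge is
# rewired to a random other node with probability 0.9 (management node excluded)
G = nx.watts_strogatz_graph(n=16, k=4, p=0.9, seed=1)
neighbors = {v: set(G[v]) for v in G.nodes}   # adjacency for attribution updates
```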
Initially, all working nodes are set as both upper-layer and lower-layer nodes, and the node detection part is entered first:
1) The management node issues a detection-start (start) instruction to the upper-layer nodes and records the current time t_s;
2) Upper-layer node i receives the start instruction and then sends data to the nodes in its corresponding lower-layer node list L_i;
3) Lower-layer node j receives and stores the data until the data of all nodes in its corresponding upper-layer node list U_j have arrived, performs the calculation, and after the calculation returns the result to all corresponding upper-layer nodes in U_j;
4) Upper-layer node i receives and stores the data until the data of all nodes in its corresponding lower-layer node list L_i have arrived, performs the calculation, and after the calculation returns the result to all corresponding lower-layer nodes in L_i;
5) The upper-layer node sends an iteration-complete (iter over) instruction to the management node;
6) The management node waits to receive and record the iter over instructions of all upper-layer nodes; when only one upper-layer node's message remains unreceived, that upper-layer node is cancelled, the current time t_c is obtained, and t_c − t_s gives the time t_l required for a single complete system iteration, which is stored as t_i in the iteration time set T; if t_i is the minimum value of T, i.e. t_i = min T, the current system's upper-layer node set U is saved, otherwise U is not updated;
7) The management node issues a node-attribution update instruction to all upper- and lower-layer nodes, which perform the corresponding attribution-update operation on receipt;
8) Steps 3 to 7 are repeated until only one upper-layer node remains in the network; the upper-layer node set U stored by the management node is the final upper-layer node set;
9) The management node issues a detection-complete instruction to all upper- and lower-layer nodes, which perform the node-attribution update operation on receipt.
fig. 3 shows the communication process between the different nodes for a single clustering detection in the above steps; the detection process continues until only one upper-layer node remains in the network, whereupon the upper-layer nodes of the system are finally determined.
The final upper-layer nodes in the network are now determined and stored in U, i.e., the number and positions of the node clustering centers are fixed. Once the upper-layer nodes are determined, the attribution relationships of the lower-layer nodes follow easily from the attribution-update part. It should be noted that the selected upper-layer nodes are only the current optimal solution of each detection iteration, as dictated by the greedy idea; compared with traversing all possibilities and spending most of the time in the clustering-detection part before the formal calculation, selecting this locally optimal solution better meets practical requirements.
After the upper-layer nodes are determined, to solve

$$\min_x \ \frac{1}{2}\|Ax - b\|_2^2 + \lambda\|x\|_1, \tag{1}$$

the updates of x, u and z are as follows:

$$x^{k+1} = (A^T A + \rho I)^{-1}\left(A^T b + \rho(z^k - u^k)\right), \tag{2}$$

$$z^{k+1} = S_{\lambda/(\rho N)}\!\left(\bar{x}^{k+1} + \bar{u}^{k}\right), \tag{3}$$

$$u^{k+1} = u^k + x^{k+1} - z^{k+1}. \tag{4}$$
As shown in FIG. 4, each upper-layer node stores its own z_i, and each lower-layer node stores its own x_i together with as many u_i as it has related upper-layer nodes, i.e., every related upper-layer node has a corresponding u_i at the lower-layer node. Initially the upper-layer node sends its z_i to all its related lower-layer nodes; on receiving it, a lower-layer node updates its own u_i and x_i through formulas (2) and (4) and returns both to its related upper-layer nodes; once an upper-layer node has received the u_i and x_i of all its related lower-layer nodes, it updates its own z_i through formula (3), completing one iteration.
After each iteration is completed, the upper-layer node evaluates the result L_i of problem (1) from its current x, u and z and stores it locally. After the specified number of iterations have finished, the per-iteration results L_i of all upper-layer nodes are averaged to obtain

$$\bar{L}_i = \frac{1}{|U|} \sum_{u \in U} L_i^{(u)},$$

and finally, from the behaviour of \bar{L}_i over the iterations, the result convergence time and the number of iterations i required for convergence are determined.
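Putting formulas (1)–(4) together, a single-process simulation of the iteration and of the averaged-objective convergence check might read as follows; the data split, ρ, and the tolerance are assumptions, and the whole loop is a sketch of the behaviour rather than the patent's distributed implementation:

```python
import numpy as np

def soft_threshold(a, kappa):
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

def simulate(A_parts, b_parts, lam=0.1, rho=1.0, max_iter=100, tol=1e-6):
    """Consensus-lasso ADMM over N lower-layer data shards with one logical
    upper-layer node holding z; returns the solution and iterations used."""
    N = len(A_parts)
    n = A_parts[0].shape[1]
    us = [np.zeros(n) for _ in range(N)]
    z = np.zeros(n)
    L_prev = float("inf")
    for k in range(max_iter):
        # lower-layer nodes: x-update, formula (2)
        xs = [np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b + rho * (z - u))
              for A, b, u in zip(A_parts, b_parts, us)]
        # upper-layer node: z-update on the returned averages, formula (3)
        z = soft_threshold(np.mean(xs, axis=0) + np.mean(us, axis=0), lam / (rho * N))
        # lower-layer nodes: u-update, formula (4)
        us = [u + x - z for u, x in zip(us, xs)]
        # averaged objective used as the convergence measure
        L_bar = sum(0.5 * np.sum((A @ z - b) ** 2)
                    for A, b in zip(A_parts, b_parts)) + lam * np.sum(np.abs(z))
        if abs(L_prev - L_bar) < tol:
            return z, k + 1
        L_prev = L_bar
    return z, max_iter

# e.g. simulate(np.array_split(A, 4), np.array_split(b, 4)) with the A, b
# from the earlier sketch splits the data over 4 lower-layer nodes
```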
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (7)

1. A distributed ADMM machine learning method of self-adaptive network topology is characterized by comprising the following steps:
dividing the nodes into 1 management node and a plurality of working nodes, and abstracting the working nodes into upper-layer and lower-layer nodes; for a connected network, decomposing a global convex optimization problem into a plurality of local convex optimization problems, solving the local problems, and obtaining the global optimal solution by coordinating the local optimal solutions, wherein the machine learning method comprises two parts, node detection and iterative computation; the node detection part comprises upper/lower-layer node attribution updating, upper/lower-layer node communication, and communication between the management node and the upper-layer nodes; the iterative computation part carries out data communication between related upper- and lower-layer nodes and single iterative computation; during node detection the working nodes run the updates of the iterative computation part, and in addition each upper-layer node feeds back the completion of every single iteration to the management node; when the positions of the upper-layer nodes are selected, a greedy idea avoids traversing all possibilities, and dynamic selection keeps the influence of link delay in the network as small as possible;
the node detection part specifically comprises the following steps:
1) The management node issues a detection-start (clustering start) instruction to the upper-layer nodes and records the current time t_s;
2) Upper-layer node i receives the clustering start instruction and then sends data to the nodes in its corresponding lower-layer node list L_i;
3) Lower-layer node j receives and stores the data until the data of all nodes in its corresponding upper-layer node list U_j have arrived, performs the calculation, and after the calculation returns the result to all corresponding upper-layer nodes in U_j;
4) Upper-layer node i receives and stores the data until the data of all nodes in its corresponding lower-layer node list L_i have arrived, performs the calculation operation, and after the calculation returns the result to all corresponding lower-layer nodes in L_i;
5) The upper-layer node sends an iteration-complete (iter over) instruction to the management node;
6) The management node waits to receive and record the iter over instructions of all upper-layer nodes; when only one upper-layer node's message remains unreceived, that upper-layer node is cancelled, the current time t_c is obtained, and t_c − t_s gives the time t_l required for a single complete system iteration, which is stored as t_i in the iteration time set T; if t_i is the minimum value of T, i.e. t_i = min T, the current upper-layer node set U of the system is saved, otherwise U is not updated;
7) The management node issues a node-attribution update instruction to all upper- and lower-layer nodes, which perform the corresponding attribution-update operation on receipt;
8) Steps 3 to 7 are repeated until only one upper-layer node remains in the network; the upper-layer node set U stored by the management node is the final upper-layer node set;
9) The management node sends a detection-complete instruction to all upper- and lower-layer nodes, which perform the node-attribution update operation on receipt.
2. The distributed ADMM machine learning method of adaptive network topology of claim 1, for solving a regularized linear regression problem, namely
$$\min_x \ \frac{1}{2}\|Ax - b\|_2^2 + \lambda\|x\|_1$$
Where A is an m x n order matrix, b is an m order vector, λ is a constant, and x is an n order vector.
3. The distributed ADMM machine learning method of adaptive network topology according to claim 1, characterized in that the upper-layer and lower-layer nodes carry a relative node attribution relationship: a lower-layer node is related to the upper-layer nodes with which it has a neighbor relationship and to the upper-layer node closest to it; an upper-layer node is related to the lower-layer nodes with which it has a neighbor relationship, and if a lower-layer node's distance to the upper-layer node is the shortest among all upper-layer nodes, the upper-layer node is also related to that lower-layer node; each lower-layer node may be related to multiple upper-layer nodes, and each upper-layer node may likewise be related to multiple lower-layer nodes.
4. The distributed ADMM machine learning method of adaptive network topology according to claim 1, characterized in that each lower-layer node stores a local variable x and a variable u_i corresponding to each related upper-layer node i, and each upper-layer node stores a local variable z; all variables x, u and z are n-order vectors whose initial state is the n-order zero vector, of the same order as the variable x in claim 2.
5. The distributed ADMM machine learning method of adaptive network topology according to claim 1, characterized in that in the k-th iterative computation the upper-layer node sends z^k down to its related lower-layer nodes, and the lower-layer node updates its local x and u via

$$x^{k+1} = (A^T A + \rho I)^{-1}\left(A^T b + \rho(z^k - u^k)\right),$$

$$u^{k+1} = u^k + x^{k+1} - z^{k+1},$$

obtaining the results x^{k+1}, z^{k+1} and u^{k+1} of the k-th iterative computation, where the variables lie in an n-dimensional real closed convex set, I is the n-order identity matrix, and ρ > 0 is a penalty parameter; the updated x and u are returned to the related upper-layer node, which updates its local z via

$$z^{k+1} = S_{\lambda/(\rho N)}\!\left(\bar{x}^{k+1} + \bar{u}^{k}\right),$$

where S_{\lambda/(\rho N)} denotes the soft-threshold operator

$$S_{\kappa}(a) = \begin{cases} a - \kappa, & a > \kappa, \\ 0, & |a| \le \kappa, \\ a + \kappa, & a < -\kappa. \end{cases}$$
6. The method of claim 5, characterized in that the management node stores a list of nodes that have finished the current iteration; with N the number of upper-layer nodes in the current network, if the length of this finished-iteration list reaches N−1, the management node notifies the whole network to remove the only upper-layer node not in the list, clears the list, and stores the current system's upper-layer node list S_{N−1} and the time interval t_{N−1} between two list clearings; this is repeated until only one upper-layer node remains in the network.
7. The distributed ADMM machine learning method of adaptive network topology according to claim 6, characterized in that for i ∈ {1, …, N−1} the minimum t_i and its corresponding upper-layer node list S_i are selected; these upper-layer nodes serve as the upper-layer nodes of the final system, and all working nodes are notified to carry out the formal iterative computation part; after receiving the notification, a working node initializes its local x, u and z variables, updates its related-node attribution, and then begins iterative-computation communication with its related nodes; iterative computation stops when the number of iterations reaches the system's preset maximum.
CN202110691239.8A 2021-06-22 2021-06-22 Distributed ADMM machine learning method of self-adaptive network topology Active CN113408741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110691239.8A CN113408741B (en) 2021-06-22 2021-06-22 Distributed ADMM machine learning method of self-adaptive network topology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110691239.8A CN113408741B (en) 2021-06-22 2021-06-22 Distributed ADMM machine learning method of self-adaptive network topology

Publications (2)

Publication Number Publication Date
CN113408741A CN113408741A (en) 2021-09-17
CN113408741B 2022-12-27

Family

ID=77682381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110691239.8A Active CN113408741B (en) 2021-06-22 2021-06-22 Distributed ADMM machine learning method of self-adaptive network topology

Country Status (1)

Country Link
CN (1) CN113408741B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581221B (en) * 2022-05-05 2022-07-29 支付宝(杭州)信息技术有限公司 Distributed computing system and computer device


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10304008B2 (en) * 2015-03-20 2019-05-28 Nec Corporation Fast distributed nonnegative matrix factorization and completion for big data analytics
US20200327435A1 (en) * 2019-04-12 2020-10-15 General Electric Company Systems and methods for sequential power system model parameter estimation
CN111988185A (en) * 2020-08-31 2020-11-24 重庆邮电大学 Multi-step communication distributed optimization method based on Barzilai-Borwein step length

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458293A (en) * 2019-05-16 2019-11-15 重庆邮电大学 A kind of distributed ADMM machine learning method optimizing network delay
CN111935205A (en) * 2020-06-19 2020-11-13 东南大学 Distributed resource allocation method based on alternative direction multiplier method in fog computing network
CN112636338A (en) * 2020-12-11 2021-04-09 国网江苏省电力有限公司南通供电分公司 Load partition regulation and control system and method based on edge calculation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Distributed Alternating Direction Multiplier Method Based on Optimized Topology and Nodes Selection Strategy";Shuai Zeng等;《2020 3rd International Seminar on Research of Information Technology and Intelligent Systems (ISRITI)》;20210113;第2-6页 *

Also Published As

Publication number Publication date
CN113408741A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
Ilievski et al. Efficient hyperparameter optimization for deep learning algorithms using deterministic rbf surrogates
Zhao et al. Autoloss: Automated loss function search in recommendations
Kroese et al. Network reliability optimization via the cross-entropy method
CN112686428B (en) Subway passenger flow prediction method and device based on subway line network site similarity
CN113408741B (en) Distributed ADMM machine learning method of self-adaptive network topology
Feng et al. An online virtual metrology model with sample selection for the tracking of dynamic manufacturing processes with slow drift
CN110428015A (en) A kind of training method and relevant device of model
CN116402002A (en) Multi-target layered reinforcement learning method for chip layout problem
CN116681104A (en) Model building and realizing method of distributed space diagram neural network
Guo et al. Research of new strategies for improving CBR system
Ghesmoune et al. G-stream: Growing neural gas over data stream
CN112784123B (en) Cold start recommendation method for graph network
Zhou et al. Online recommendation based on incremental-input self-organizing map
Chang et al. A survey of some simulation-based algorithms for Markov decision processes
Zheng et al. Workload-aware shortest path distance querying in road networks
CN111626425B (en) Quantum register allocation method and system for two-dimensional neighbor quantum computing architecture
CN107818347A (en) The evaluation Forecasting Methodology of the GGA qualities of data
Flentge Locally weighted interpolating growing neural gas
Funabiki et al. A two-stage discrete optimization method for largest common subgraph problems
Bonet et al. Factored probabilistic belief tracking
Li Clustering with uncertainties: An affinity propagation-based approach
Xu et al. Efficiently answering k-hop reachability queries in large dynamic graphs for fraud feature extraction
Galindo et al. Faster quantum alternative to softmax selection in deep learning and deep reinforcement learning
Mohammed et al. Soft set decision/forecasting system based on hybrid parameter reduction algorithm
US20230244700A1 (en) System and method for identifying approximate k-nearest neighbors in web scale clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant