LU102143B1 - Conditional gradient based method for accelerated distributed online optimization - Google Patents

Conditional gradient based method for accelerated distributed online optimization Download PDF

Info

Publication number
LU102143B1
Authority
LU
Luxembourg
Prior art keywords
optimization
agents
local
distributed
distributed online
Prior art date
Application number
LU102143A
Other languages
German (de)
Inventor
Qiao Dong
Dequan Li
Xiuyu Shen
Original Assignee
Univ Anhui Sci & Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Anhui Sci & Technology filed Critical Univ Anhui Sci & Technology
Application granted granted Critical
Publication of LU102143B1 publication Critical patent/LU102143B1/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/0823Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/142Network analysis or design using statistical or mathematical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/145Network analysis or design involving simulating, designing, planning or modelling of a network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Algebra (AREA)
  • Probability & Statistics with Applications (AREA)
  • Economics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An accelerated distributed online conditional gradient optimization method based on conditional gradient is provided, so that the problem of high time complexity of the distributed online optimization algorithm can be effectively solved. According to the method, a network optimization objective function is decomposed into the sum of local objective functions of different nodes or agents. Thus, the agents obtain just their objective functions and cooperate to solve the optimization problem through the interaction of information with adjacent agents. The method avoids the deficiency that the traditional conditional gradient algorithm is not sensitive to the gradient size by constructing a new time-varying cost function with a regularization item to the local cost information of each agent. The method further improves the convergence rate of the algorithm exponentially by a local linear optimization setup instead of projection operation. Finally, the experimental results performed on various tasks indicate that the method works also well in practice and compares favorably to other distributed online optimization methods.

Description

BL-5165
CONDITIONAL GRADIENT BASED METHOD FOR ACCELERATED DISTRIBUTED ONLINE OPTIMIZATION
TECHNICAL FIELD
The present invention relates to a conditional gradient based method for accelerated distributed online optimization, and belongs to the field of machine learning.
BACKGROUND
Distributed convex optimization has drawn great attention from researchers in many fields. Classical problems such as distributed tracking, distributed estimation and distributed detection are essentially optimization problems. The distributed optimization problem mainly consists in performing a global optimization task that is assigned across the nodes in a network. Since each node has limited resources and only partial information about the task, the nodes collaborate to perform data collection and update their local estimates by sharing the collected information. Distributed optimization imposes a low computing burden on each node, and the whole network remains robust even if a certain node undergoes a local failure. It is therefore possible to effectively overcome the deficiencies of a single information processing unit in a centralized scenario.
Distributed optimization has been widely used in the case of time-invariant cost functions. However, in practice, distributed network systems usually operate in dynamic and uncertain environments. For example, in the problem of tracking a moving target, the aim is to track the position, velocity and acceleration of the object. Such problems have always been a main focus of online learning in the field of machine learning. Thus, combining online optimization with distributed optimization, and employing a time-varying cost function to represent the uncertainty of a multi-agent network system, can be effective for real-time processing of the dynamic data streams of network nodes.
With the rapid development of distributed online optimization, many traditional optimization algorithms have been extended to the distributed online setting. In recent years, traditional optimization algorithms such as gradient descent and dual averaging have been widely used in distributed online optimization. The conditional gradient algorithm (also known as Frank-Wolfe, FW) is essentially a first-order optimization method that can theoretically achieve a competitive convergence rate at a lower per-iteration cost than other effective optimization algorithms. In practical high-dimensional optimization problems, it is infeasible to use second-order information or other superlinear operations. In addition, the FW method has been proved to be a powerful tool for solving large-scale optimization problems, since it can effectively avoid critical issues such as the difficulty of calculating orthogonal projections in first-order optimization methods. Hence, with the introduction of a local linear optimization step into the existing conditional gradient algorithm, an accelerated distributed online conditional gradient algorithm is proposed, and the conditional gradient based online optimization algorithm is extended to the distributed setting.
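To illustrate why a linear optimization step can be cheaper than an orthogonal projection, the sketch below (an illustration only, not part of the claimed method; the function name and the choice of an L1 ball as the feasible set are assumptions) solves the FW linear subproblem over an L1 ball in closed form, touching a single coordinate:

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the L1 ball:
    argmin_{||s||_1 <= radius} <grad, s>.
    The minimizer puts all its mass on the largest-magnitude
    coordinate of the gradient, with opposite sign."""
    idx = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[idx] = -radius * np.sign(grad[idx])
    return s

g = np.array([0.5, -2.0, 1.0])
s = lmo_l1_ball(g, radius=1.0)
# all mass lands on coordinate 1 (largest |g|), with sign flipped
```

By contrast, an exact Euclidean projection onto the L1 ball requires a sort or a root-finding procedure over all coordinates, which is what the projection-free approach avoids.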
SUMMARY
The technical problem to be solved by the present invention is to provide a conditional gradient based method for accelerated distributed online optimization, which is aimed at accelerating the convergence of a model in a distributed network.
To solve the above technical problem, the following technical solution is adopted in the present invention.
In a distributed online convex optimization setup, each node represents an agent, and in each iteration, the agents generate decision-making information, submit the decision-making information independently and obtain corresponding cost functions. Different agents have varying degrees of importance relative to each other in the exchange of information, and with a weighted average, an agent having a higher degree of importance can be assigned a higher weight to provide more valuable information therefrom, thereby reducing the error of the entire distributed system. Besides, a local linear optimization step is introduced in the existing conditional gradient algorithm to accelerate the convergence of an entire network model.
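The weighted-average exchange described above can be sketched as repeated multiplication by a weight matrix (a minimal illustration under assumed names; the 4-agent ring topology and the specific weights are hypothetical, chosen only to be doubly stochastic):

```python
import numpy as np

# Hypothetical 4-agent ring network. Row i of A holds the importance
# (weight) that agent i assigns to each neighbour's estimate; rows and
# columns sum to 1 (doubly stochastic).
A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

def weighted_average_step(A, X):
    """One communication round: each agent replaces its estimate with
    the weighted average of its neighbours' estimates."""
    return A @ X

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # one scalar estimate per agent
for _ in range(50):
    X = weighted_average_step(A, X)
# repeated averaging drives every agent toward the network mean (2.5)
```

Assigning a larger entry of A to a more important agent lets its information dominate the average, which is the error-reduction mechanism the summary describes.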
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 illustrates the relative error versus the number of iterations when the present invention is applied to an L1 regularized logistic regression model problem.
FIG. 2 illustrates the relative error versus the number of iterations when the present invention is applied to an L2 regularized logistic regression model problem.
DETAILED DESCRIPTION
The present invention solves the distributed optimization problem on a connected undirected network and avoids the excessive communication cost of a central node caused by the deficiencies of a single information processing unit in a centralized scenario. The following specific steps are performed.
Step 1: a loss function f_t(x) = Σ_{i=1}^n f_{i,t}(x) is revealed.
Step 3: a subgradient of the information generated by the agents is calculated: g_{i,t} ∈ ∂f_{i,t}(x_{i,t}).
Step 4: regarding each agent i:
p_{i,t} = A(x_{i,t}; ρ, g_{i,t})
x̂_{i,t+1} = x_{i,t} + α(p_{i,t} − x_{i,t})
x_{i,t+1} = Σ_j a_{ij} x̂_{j,t+1}
In a distributed network, the transfer of information between agents is based on a weighted average (the third line in Step 4) to ensure that the information of an important agent is fully utilized. A local linear optimization setup A(x_{i,t}; ρ, g_{i,t}) is further introduced in our method for the purpose of accelerating the convergence of the entire network model. ρ is a parameter of the local linear optimization setup, and α is a learning rate.
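The per-agent update of Step 4 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the function names are hypothetical, and the linear optimization oracle here minimizes over an L2 ball, one of many feasible sets the method could use.

```python
import numpy as np

def lmo_l2_ball(grad, radius=1.0):
    """Linear optimization step: argmin_{||s||_2 <= radius} <grad, s>."""
    norm = np.linalg.norm(grad)
    return -radius * grad / norm if norm > 0 else np.zeros_like(grad)

def distributed_fw_round(X, grads, A, alpha, radius=1.0):
    """One round of a (sketched) distributed online conditional
    gradient update for all agents.

    X     : (n, d) current estimates, one row per agent
    grads : (n, d) local subgradients g_{i,t}
    A     : (n, n) doubly stochastic weight matrix
    alpha : learning rate
    """
    X_hat = np.empty_like(X)
    for i in range(len(X)):
        p = lmo_l2_ball(grads[i], radius)      # local linear optimization
        X_hat[i] = X[i] + alpha * (p - X[i])   # conditional gradient move
    return A @ X_hat                           # weighted-average consensus
```

Each round costs one linear minimization per agent instead of a projection, which is the source of the claimed acceleration over projection-based distributed online methods.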
The present invention will be further described below with reference to the accompanying drawings.
FIG. 1 shows the convergence result of the proposed method on the L1 regularized logistic regression model. In an online distributed learning setting, our objective is to solve the L1 regularized logistic regression problem. For the synthetic dataset, the numerical result is shown in FIG. 1. It can be seen that the accelerated distributed online conditional gradient algorithm performs better than the other algorithms. FIG. 1 also shows that the convergence of the algorithm is notably faster than that of the other algorithms at the beginning.
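For reference, the local cost each agent faces in the L1 regularized logistic regression experiment is a logistic loss; in a conditional gradient setting the L1 term is typically handled as the constraint {w : ||w||_1 <= r} rather than as a penalty. A minimal sketch (the function name and data shapes are illustrative assumptions):

```python
import numpy as np

def logistic_loss_grad(w, X, y):
    """Logistic loss and its gradient for labels y in {-1, +1}.
    The L1 regularization enters through the feasible set
    {w : ||w||_1 <= r}, handled by the conditional gradient step."""
    z = -y * (X @ w)                                  # per-sample margins
    loss = np.mean(np.logaddexp(0.0, z))              # mean log(1 + e^z)
    grad = -(X.T @ (y / (1.0 + np.exp(-z)))) / len(y)
    return loss, grad
```

Feeding these subgradients to the per-agent update gives the online learning loop whose relative error FIG. 1 tracks.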
FIG. 2 shows the convergence result of the proposed method on the L2 regularized logistic regression model. An experiment was carried out on a real dataset with satisfactory results. As shown in FIG. 2, the algorithm provided herein achieves the intended acceleration: its loss reaches the minimum rapidly, which indicates that the algorithm outperforms the other algorithms and may thus be more suitable for practical use.

Claims (3)

What is claimed is:
1. A conditional gradient-based method for accelerated distributed online optimization, wherein agents in a distributed network submit local information independently and then obtain local cost functions; the agents communicate with each other by a weighted average method, and find the next iteration direction by a local linear optimization step after the communications among the agents.
2. The method according to claim 1, wherein regarding that agents in a distributed network submit local information independently and then obtain local cost functions, in a distributed online convex optimization setup, each node represents an agent, and in each iteration, the agents generate decision-making information, submit the decision-making information independently and obtain corresponding cost functions.
3. The method according to claim 1, wherein regarding that the agents communicate with each other by a weighted average method, different agents have varying degrees of importance relative to each other in the exchange of information, and with a weighted average, an agent having a higher degree of importance is assigned a higher weight to provide more valuable information therefrom, thereby reducing the error of the entire distributed system.
LU102143A 2019-10-30 2020-10-16 Conditional gradient based method for accelerated distributed online optimization LU102143B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911045411.1A CN110768841A (en) 2019-10-30 2019-10-30 Acceleration distributed online optimization method based on condition gradient

Publications (1)

Publication Number Publication Date
LU102143B1 true LU102143B1 (en) 2021-04-16

Family

ID=69334538

Family Applications (1)

Application Number Title Priority Date Filing Date
LU102143A LU102143B1 (en) 2019-10-30 2020-10-16 Conditional gradient based method for accelerated distributed online optimization

Country Status (2)

Country Link
CN (1) CN110768841A (en)
LU (1) LU102143B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111580962A (en) * 2020-04-29 2020-08-25 安徽理工大学 Distributed self-adaptive online learning method with weight attenuation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109149568B (en) * 2018-09-10 2021-11-02 上海交通大学 Interconnected micro-grid based on distributed agent and scheduling price optimization method

Also Published As

Publication number Publication date
CN110768841A (en) 2020-02-07

Similar Documents

Publication Publication Date Title
Liu et al. Adaptive asynchronous federated learning in resource-constrained edge computing
CN109491790B (en) Container-based industrial Internet of things edge computing resource allocation method and system
Zhan et al. Consensus of sampled-data multi-agent networking systems via model predictive control
Li et al. Zoning for hierarchical network optimization in software defined networks
Yang et al. An information fusion approach to intelligent traffic signal control using the joint methods of multiagent reinforcement learning and artificial intelligence of things
Kotenko et al. Neural network approach to forecast the state of the internet of things elements
Tran et al. Change detection in streaming data in the era of big data: models and issues
LU102143B1 (en) Conditional gradient based method for accelerated distributed online optimization
CN104618149B (en) A kind of heterogeneous network SON intelligence operation management method
CN105205052A (en) Method and device for mining data
Levchuk et al. Learning and detecting patterns in multi-attributed network data
Kafaf et al. A web service-based approach for developing self-adaptive systems
Xu et al. Distributed event-triggered circular formation control for multiple anonymous mobile robots with order preservation and obstacle avoidance
Shen et al. Multi-objective time-cost optimization using Cobb-Douglas production function and hybrid genetic algorithm
CN117376355A (en) B5G mass Internet of things resource allocation method and system based on hypergraph
Gómez-Marín et al. Integrating multi-agent system and microsimulation for dynamic modeling of urban freight transport
Olaniyan et al. A fast edge-based synchronizer for tasks in real-time artificial intelligence applications
Lei et al. Adaptive stochastic ADMM for decentralized reinforcement learning in edge IoT
Fiosina et al. Decentralised cooperative agent-based clustering in intelligent traffic clouds
Sui et al. State-Observer-Based Adaptive Fuzzy Event-Triggered Formation Control for Nonlinear Multiagent System
Wu et al. Distributed fuzzy clustering based association rule mining: Design, deployment and implementation
Sengupta et al. Collaborative learning-based schema for predicting resource usage and performance in F2C paradigm
Miranda et al. Adaptation of parallel framework to solve traveling salesman problem using genetic algorithms and tabu search
Gao et al. A Scalable Two-Hop Multi-Sink Wireless Sensor Network for Data Collection in Large-Scale Smart Manufacturing Facilities.
Kong et al. Identifying Multiple Influential Nodes for Complex Networks Based on Multi-agent Deep Reinforcement Learning

Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20210416