AU2020103334A4 - Distributed estimation with adaptive clustering strategy based on element-wise distance over multitask networks. - Google Patents
- Publication number
- AU2020103334A4
- Authority
- AU
- Australia
- Prior art keywords
- clustering
- node
- estimation
- cluster
- wise
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Algebra (AREA)
- Computational Mathematics (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Databases & Information Systems (AREA)
- Complex Calculations (AREA)
Abstract
In big-data, high-performance computer cluster, and multi-agent network environments, to cope with service invocations from a large number of users, more and more multi-agent networks adopt distributed computing and service provision: they assign different multi-agent clusters to different tasks and return each cluster's task execution results to the users who invoke the service. In this patent, an adaptive clustering strategy based on element-wise distance for distributed estimation over multi-task networks is therefore proposed. Firstly, through the ATC d-LMS algorithm, we solve the global optimization problem by applying the gradient descent method to each single node's optimization problem in a distributed manner. Secondly, we carry out clustering detection according to the algorithm of distributed adaptive clustering based on element-wise distance. Thirdly, we update the combination weights according to the averaged element-wise distance. Finally, we use the Metropolis rule as the combination rule to combine the weights. Compared with previous methods, we make the following new contributions in this work. First, to minimize the deviation between the estimators and the system's true value, we derive a fully distributed adaptive clustering threshold. Second, considering the information in each dimension of the estimated parameters, we propose an adaptive clustering method constructed to further enhance clustering accuracy. Third, simulation results demonstrate that the proposed clustering strategy has better estimation performance and is also suitable for non-stationary environments.
Fig1. Performance of distributed estimation. Transient-state average MSDs for recursions with various λk. [Figure: MSD (dB) versus time i; curves for the proposed method with λk = 1.2, 1.75, and 2.]
Fig2. Network topology in the initial stage with no prior cluster information in case 1: (a) initial global network topology; (b), (c), and (d) initial topologies of clusters 1, 2, and 3, respectively.
Fig3. Performance of adaptive clustering. Network topology in a steady state after the
clustering process: (a) global network topology in steady state; (b), (c), and (d) topologies
of clusters 1, 2, and 3, respectively, in a steady state.
Description
Fig1. Performance of distributed estimation. Transient-state average MSDs for recursions with various λk. [Figure: MSD (dB) versus time i; curves for the proposed method with λk = 1.2, 1.75, and 2.]
Fig2. Network topology in the initial stage with no prior cluster information in case 1: (a) initial global network topology; (b), (c), and (d) initial topologies of clusters 1, 2, and 3, respectively.
Fig3. Performance of adaptive clustering. Network topology in a steady state after the clustering process: (a) global network topology in steady state; (b), (c), and (d) topologies of clusters 1, 2, and 3, respectively, in a steady state.
1. Background and Purpose
In big-data, high-performance computer cluster, and multi-agent network environments, to cope with service invocations from a large number of users, more and more multi-agent networks adopt distributed computing and service provision: they assign different multi-agent clusters to different tasks and return each cluster's task execution results to the users who invoke the service. This type of network is called a multitask network. In the centralized solution for task execution, such as estimating a system's parameters, each agent transmits its local data to a central fusion center for processing. Centralizing all streaming measurements at a single fusion node is risky, because the scheme is not robust to single-node failure and lacks scalability. By comparison, performing adaptive estimation in a distributed manner is a more robust and resource-saving way to solve inference problems autonomously and collaboratively, relying on every agent across the whole network. Several strategies have been proposed for distributed information processing over networks, including consensus strategies, incremental strategies, and diffusion strategies. Diffusion strategies have attracted the most attention owing to their low power consumption, enhanced adaptation performance, and wider stability ranges when constant step sizes are used to enable continuous learning, scalability, and robustness. In a single-task network adopting a diffusion approach, each node exchanges information only with its neighboring nodes, and processing is distributed among all nodes in the network through global diffusion. It is worth noting that adapt-then-combine (ATC) diffusion least mean square (d-LMS) is a diffusion-based adaptive solution of the LMS type to distributed optimization problems.
However, in multitask networks, as in multi-target positioning, neighboring agents that belong to different clusters pursue different goals, so arbitrary cooperation will degrade estimation performance. Hence, establishing an appropriate degree of cooperation to improve parameter estimation accuracy is extremely important, especially when prior cluster information is unknown. Thus, in multi-agent networks, particularly multitask networks, distributed adaptive clustering strategies need to be designed for learning, inference, adaptation, modeling, and optimization by networked nodes; such strategies are effective and popular in online supervised learning, reinforcement learning, and signal processing. In this patent, an adaptive clustering strategy based on element-wise distance for distributed estimation over multi-task networks is therefore proposed, which enables agents to
distinguish between sub-neighbors that belong to the same cluster and those that belong to a different one. First, we derive a fully distributed adaptive clustering threshold based on element-wise distance, accounting for the differences in each dimension. In contrast to the static, hand-set threshold imposed in traditional clustering methods, we devise a method for real-time clustering hypothesis detection, which uses a reliable adaptive clustering threshold as the reference and the averaged element-wise distance between tasks as the real-time clustering detection statistic. Second, we extend this by taking into consideration the significant differences among local optima, remedying a limitation of previous approaches. Third, we devise a distributed parameter estimation strategy based on the adaptive clustering method over multitask networks. Finally, simulations compare the proposed algorithm with traditional clustering strategies in both stationary and non-stationary environments. The effects of task differences on performance are also obtained, demonstrating that the clustering strategy developed here is more suitable, robust, and accurate than traditional clustering methods, regardless of task differences, in both stationary and non-stationary environments.
2. System Model
2.1 Multitask basic network model and data model
We consider a connected network with a node set $S=\{1,2,\ldots,N\}$ categorized into $s$ mutually exclusive clusters, denoted by $\{C_n\}$. Each node $k$ of the network can communicate with its adjacent agents $N_k$. We denote the real-time cluster containing node $k$ by $C_{k,i}=N^{+}_{k,i}\cap N_k$, where $N^{+}_{k,i}$ represents the sub-neighbors with the same objective as node $k$, collected through clustering detection at time $i$, and $N^{-}_{k,i}=N_k\setminus C_{k,i}$ represents the sub-neighbors with objectives different from node $k$'s at time $i$. However, which sub-neighbors lie in the same or a different cluster is not known in the initial stage. In the present invention, we consider a multitask network environment in which different clusters perform different estimation tasks. Each agent $k$ is assumed to be interested in estimating a unique $M\times 1$ unknown optimum weight vector $w_k^{0}$. Agents of the same cluster estimate the same optimum:
$$w_k^{0}=w_{C_n}^{0},\quad \forall k\in C_n, \qquad (1)$$
where the cluster $C_n\in\{C_1,C_2,\ldots,C_s\}$ is in the multitask network, and each node $k$ collects a scalar measurement $d_k(i)$ and a $1\times M$ regression data vector $u_{k,i}$ over successive time instants $i$. The measurements across all nodes are assumed to be related to the set of unknown $M\times 1$ optimum weight vectors $w^{0}=\operatorname{col}\{w_1^{0},\ldots,w_N^{0}\}$ via a linear regression model of the form
$$d_k(i)=u_{k,i}w_k^{0}+v_k(i), \qquad (2)$$
where $v_k(i)$ is zero-mean i.i.d. additive Gaussian measurement noise, assumed independent of all other signals, with covariance matrix $R_k=\sigma_{v,k}^{2}I_M$, and $w_k^{0}$ denotes the local optimum of node $k$.
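The linear data model in (2) is easy to simulate. The sketch below generates per-node measurements $d_k(i)=u_{k,i}w_k^{0}+v_k(i)$; the network size, noise level, and random cluster assignment are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 20                      # parameter length, number of nodes (assumed)
sigma_v = 0.1                     # measurement-noise standard deviation (assumed)

# Three clusters with distinct optima w_Cn^0; assignment is hypothetical.
cluster_of = rng.integers(0, 3, size=N)
w_opt_cluster = rng.standard_normal((3, M))
w_opt = w_opt_cluster[cluster_of]          # node k's local optimum w_k^0

def measure(k):
    """One instant of the linear model: d_k(i) = u_{k,i} w_k^0 + v_k(i)."""
    u = rng.standard_normal(M)             # 1 x M regression vector u_{k,i}
    d = u @ w_opt[k] + sigma_v * rng.standard_normal()
    return d, u
```

Averaging the residual $d_k(i)-u_{k,i}w_k^{0}$ over many instants recovers the zero-mean, $\sigma_v$-scaled noise statistics assumed in (2).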
3. Distributed Estimation with Adaptive Clustering Strategy Based on Element-wise Distance over Multitask Networks: Algorithm Description
3.1 Problem formulation
There are $s$ clusters $\{C_n\}$ in a multitask network. Since agents belonging to different clusters pursue different and unrelated goals, the objective of the nodes in the multitask network is equivalent to solving a clustered multitask problem by seeking the unique minimizer of each aggregate cluster cost function, which can be written as
$$\min_{w_{C_n}}\ J_{C_n}(w_{C_n})=\sum_{k\in C_n} J_k(w_{C_n}). \qquad (3)$$
To minimize all cluster cost functions defined by (3), agents need to cooperate only within their clusters: the objectives of different clusters are completely unrelated, and cooperation with neighbors that belong to different clusters may introduce bias due to misleading information from those neighbors. Thus, diffusion strategies can be applied only within each cluster. In the scenario where cluster information is completely unavailable, each node $k\in C_n$ performs a self-governed estimation task following the principles of distributed optimization, namely, estimating $w_k$ from the local optima set $w\triangleq\operatorname{col}\{w_k\}$ by seeking the minimizer of a twice-differentiable local cost function $J_k(w_k)\in\mathbb{R}$. Hence, the global cost function can be decomposed across single nodes and expressed in mean-square-error (MSE) form as
$$J^{\text{glob}}(w)=\sum_{n=1}^{s}\sum_{k\in C_n} J_k(w_{C_n})=\sum_{n=1}^{s}\sum_{k\in C_n}\mathbb{E}\,|d_k(i)-u_{k,i}w_{C_n}|^{2} \qquad (4)$$
for the cluster $C_n\in\{C_1,C_2,\ldots,C_s\}$. Through the ATC d-LMS algorithm, we solve the global optimization problem by applying the gradient descent method to each single node's optimization problem in a distributed manner, which can be expressed as
$$\begin{cases}\psi_{k,i}=w_{k,i-1}+\mu_k\,u_{k,i}^{T}\,\bigl(d_k(i)-u_{k,i}w_{k,i-1}\bigr),\\[4pt] w_{k,i}=\sum_{l\in N_k} c_{l,k}\,\psi_{l,i}.\end{cases} \qquad (5)$$
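The two relations of ATC d-LMS map onto two small functions; this is a minimal sketch, with the step size and vector dimensions being illustrative assumptions rather than values from the patent.

```python
import numpy as np

def adapt(w_prev, d, u, mu):
    """ATC adaptation step: psi_{k,i} = w_{k,i-1} + mu_k u^T (d_k(i) - u w_{k,i-1})."""
    return w_prev + mu * u * (d - u @ w_prev)

def combine(psis, weights):
    """ATC combination step: w_{k,i} = sum_l c_{l,k} psi_{l,i}."""
    return sum(c * psi for c, psi in zip(weights, psis))
```

Iterating the adaptation step alone on noiseless data drives the estimate toward the local optimum; the combination step then averages intermediate estimates over the cooperating set.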
3.2 Distributed estimation with adaptive clustering strategy based on element-wise distance over multitask networks
In this work, we aim at clustering (i.e., collecting) the sub-neighbors $l$ of node $k$ into $N^{+}_{k,i}$, where $N^{+}_{k,i}$ denotes the sub-neighbors belonging to the same cluster as node $k$ at time $i$. $\gamma^{2}_{k,i}$ represents the adaptive clustering threshold used as reference, and $\lambda_k$ is a constant, defined as a relaxation factor that relaxes the clustering conditions while ensuring cooperation with sub-neighbors. $\theta_{l,k,i}$ denotes the element-wise clustering detection statistic.
Step 1: Adapting the intermediate estimate. For each node $k$, based on the local measurement $\{d_k(i),u_{k,i}\}$, the local intermediate estimate $\psi_{k,i}$ is adapted at each instant $i$ as
$$\psi_{k,i}=w_{k,i-1}+\mu_k\,u_{k,i}^{T}\,\bigl(d_k(i)-u_{k,i}w_{k,i-1}\bigr). \qquad (6)$$
Step 2: One-step approximation. To reduce the MSD bias that results from cooperation among nodes executing different estimation tasks, an approximate optimal vector $w^{0}_{k,i}$ can be obtained from the intermediate estimator $\psi_{k,i}$ produced by data diffusion at time $i$. In particular, we make a local one-step approximation based on $\psi_{k,i}$, thereby obtaining $w^{0}_{k,i}$ as node $k$'s approximate optimal vector at time $i$, which acts as a more reliable reference than $\psi_{k,i}$:
$$w^{0}_{k,i}=\psi_{k,i}-\mu_k\nabla J_k(\psi_{k,i})=\psi_{k,i}+\mu_k\,u_{k,i}^{T}\,\bigl(d_k(i)-u_{k,i}\psi_{k,i}\bigr), \qquad (7)$$
which results from performing a further gradient descent step on the local cost function from the intermediate estimator $\psi_{k,i}$. Through this step, we reduce the intermediate estimator's deviation as much as possible in terms of decreasing the local cost. As $i\to\infty$, $w^{0}_{k,i}$ approaches the cluster objective, which lays the foundation for constructing an accurate clustering threshold.
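For the MSE cost, the one-step approximation in (7) is simply one more LMS-style gradient step evaluated at $\psi_{k,i}$. A minimal sketch (the step size below is an assumed value):

```python
import numpy as np

def one_step_approx(psi, d, u, mu):
    """Eq. (7): w0_{k,i} = psi + mu * u^T (d - u psi), i.e. one further gradient
    descent step on the instantaneous MSE cost, taken from psi_{k,i}."""
    return psi + mu * u * (d - u @ psi)
```

Each such step shrinks the instantaneous residual by the factor $(1-\mu\|u\|^{2})$, so for a sufficiently small step size the result sits closer (in local cost) to the optimum than $\psi_{k,i}$ does.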
Step 3: Computing the adaptive element-wise clustering threshold. In this step, we calculate the clustering threshold in an element-wise sense, which ensures that the optima of agents in the same cluster are similar in each dimension.
First, the estimator can be rewritten as
$$w_{k,i}=\sum_{l\in N_k} c_{l,k}\,\psi_{l,i}=\sum_{l\in N_k} c_{l,k}\,\bigl(w^{0}_{k,i}+\varepsilon_{l,k,i}\bigr)=w^{0}_{k,i}+\sum_{l\in N_k} c_{l,k}\,\varepsilon_{l,k,i}=w^{0}_{k,i}+C_i\theta_{k,i}, \qquad (8)$$
where $\theta_{k,i}=[\varepsilon_{1,k,i},\varepsilon_{2,k,i},\ldots,\varepsilon_{N,k,i}]^{T}$. Subtracting $w_k^{0}$ from both sides of (8), we have
$$\Delta w_{k,i}=\Delta w^{0}_{k,i}+C_i\theta_{k,i}. \qquad (9)$$
We then obtain the transient squared deviation between the d-LMS estimator $w_{k,i}$ and $w^{0}_{k,i}$ as
$$\|\Delta w_{k,i}\|^{2}=\|\Delta w^{0}_{k,i}\|^{2}+\|C_i\theta_{k,i}\|^{2}+2\,(\Delta w^{0}_{k,i})^{T}C_i\theta_{k,i}. \qquad (10)$$
To minimize the diffusion estimation bias, we require
$$\|C_i\theta_{k,i}\|^{2}\le-2\,(\Delta w^{0}_{k,i})^{T}C_i\theta_{k,i}. \qquad (11)$$
For this inequality to be satisfied, each node $l\in N_k$ must satisfy the condition $c_{l,k}^{2}\|\varepsilon_{l,k,i}\|^{2}\le-2\,c_{l,k}\,(\Delta w^{0}_{k,i})^{T}\varepsilon_{l,k,i}$. We relax the condition by taking the average of $\|\varepsilon_{l,k,i}\|^{2}$ with respect to all $M$ elements, obtaining an element-wise condition that holds for each element $\Delta w^{0}_{k,i}(m)\in\Delta w^{0}_{k,i}$. Recalling that $\|\varepsilon_{l,k,i}\|^{2}/M\le\gamma^{2}_{k,i}$ and $\sum_{l} c_{l,k}=1$, we have $c_{l,k}^{2}\le c_{l,k}\le 1$, and if $\gamma^{2}_{k,i}\le 4\,\bigl(\Delta w^{0}_{k,i}(m)\bigr)^{2}$ for each element $\Delta w^{0}_{k,i}(m)\in\Delta w^{0}_{k,i}$, then the transient squared deviation $\|\Delta w_{k,i}\|^{2}$ of the d-LMS system will be smaller than the deviation $\|\Delta w^{0}_{k,i}\|^{2}$ of the reliable approximate optimal vector. This is equivalent to $\gamma^{2}_{k,i}\le 4\,\bigl(\Delta w^{0}_{k,i}(\min)\bigr)^{2}$, where $\Delta w^{0}_{k,i}(\min)$ is the minimum element of $\Delta w^{0}_{k,i}$.
However, in practical applications, the actual value of $w_k^{0}$, and hence $\Delta w^{0}_{k,i}$, is unknown and must be estimated via the sample variance. Moreover, the minimum element $\Delta w^{0}_{k,i}(\min)\in\Delta w^{0}_{k,i}$ changes in real time. We relax the condition, without loss of accuracy, by using as an approximation the variance of the relatively reliable approximate optimal vector $w^{0}_{k,i}$, averaged over all $M$ elements:
$$\bar{g}_{k,i}=(1-\alpha)\,\bar{g}_{k,i-1}+\alpha\,w^{0}_{k,i}, \qquad (12)$$
$$\hat{\sigma}^{2}_{k,i}=(1-\alpha)\,\hat{\sigma}^{2}_{k,i-1}+\alpha\,\|w^{0}_{k,i}-\bar{g}_{k,i}\|^{2}/M, \qquad (13)$$
where $\bar{g}_{k,i}$ and $\hat{\sigma}^{2}_{k,i}$ are the mean and the averaged element-wise sample variance of $w^{0}_{k,i}$, respectively. Finally, we set $\gamma^{2}_{k,i}=4\,\hat{\sigma}^{2}_{k,i}$. The sample statistics converge to unbiased estimates of their corresponding true values, since $w^{0}_{k,i}$ approaches $w_k^{0}$ as $i\to\infty$. Hence, this setting for $\gamma^{2}_{k,i}$ ensures the accuracy of clustering in real time.
Step 4: Clustering detection with element-wise statistics. The averaged element-wise distance $\theta_{l,k,i}$ between node $l$'s intermediate estimate $\psi_{l,i}$ and the relatively reliable approximate optimal vector of the local optimum, $w^{0}_{k,i}$, is calculated as the clustering detection statistic at time $i$:
$$\theta_{l,k,i}=\|\psi_{l,i}-w^{0}_{k,i}\|^{2}/M, \qquad (14)$$
where $H_0$ denotes the hypothesis that node $l$ belongs to the sub-neighbors in the same cluster as node $k$, and $H_1$ denotes the opposite hypothesis. We add a relaxation factor $\lambda_k>1$ to relax the clustering restrictions, trading off maximum cooperation (for small steady-state estimation misalignment) against clustering accuracy. Thus, the clustering hypothesis test is: accept $H_0$ if
$$\theta_{l,k,i}\le\lambda_k\,\gamma^{2}_{k,i}, \qquad (15)$$
and $H_1$ otherwise.
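Steps 3 and 4 can be sketched together: an exponentially weighted mean/variance recursion realizes the threshold, and the resulting $\gamma^{2}_{k,i}$ gates the hypothesis test on the averaged element-wise distance. The forgetting factor and relaxation factor below are illustrative assumptions:

```python
import numpy as np

class ElementWiseClustering:
    """Adaptive threshold (gamma^2 = 4 * sample variance of w0_{k,i}) plus the
    element-wise detection test; alpha is an assumed forgetting factor."""
    def __init__(self, M, alpha=0.05, lam=1.75):
        self.M, self.alpha, self.lam = M, alpha, lam
        self.g_bar = np.zeros(M)   # running mean of w0_{k,i}
        self.var = 0.0             # averaged element-wise sample variance

    def update_threshold(self, w0):
        a = self.alpha
        self.g_bar = (1 - a) * self.g_bar + a * w0
        self.var = (1 - a) * self.var + a * np.sum((w0 - self.g_bar) ** 2) / self.M
        return 4.0 * self.var      # gamma_{k,i}^2

    def same_cluster(self, psi_l, w0, gamma2):
        theta = np.sum((psi_l - w0) ** 2) / self.M   # detection statistic
        return theta <= self.lam * gamma2            # accept H0?
```

A neighbor whose intermediate estimate stays within the relaxed threshold of node k's approximate optimum is treated as a same-cluster sub-neighbor; one far outside it is rejected.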
Clustering detection at time $i$ yields the following set of real-time sub-neighbors that are in the same cluster as node $k$ and able to cooperate:
$$N^{+}_{k,i}=\{\,l\in N_k:\ \theta_{l,k,i}\le\lambda_k\,\gamma^{2}_{k,i}\,\}\cup\{k\}. \qquad (16)$$
We use the Metropolis rule as the combination rule, in which the combination weight $c_{l,k,i}$ at time $i$ is defined as
$$c_{l,k,i}=\begin{cases}\dfrac{1}{\max(n_{k,i},\,n_{l,i})}, & l\in N^{+}_{k,i}\setminus\{k\},\\[6pt] 1-\sum_{l\in N^{+}_{k,i}\setminus\{k\}} c_{l,k,i}, & l=k,\\[6pt] 0, & \text{otherwise},\end{cases} \qquad (17)$$
where $n_{k,i}$ denotes the size of node $k$'s cooperating set $N^{+}_{k,i}$.
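A sketch of the Metropolis combination rule in (17); the name `degree` stands in for $n_{l,i}$ (the size of node $l$'s current cooperating set) and is our naming, not the patent's.

```python
def metropolis_weights(k, same_cluster_neighbors, degree):
    """Metropolis rule: c_{l,k} = 1/max(n_k, n_l) for same-cluster neighbors,
    with the residual mass assigned to node k itself."""
    c = {}
    for l in same_cluster_neighbors:
        if l != k:
            c[l] = 1.0 / max(degree[k], degree[l])
    c[k] = 1.0 - sum(c.values())   # self-weight closes the sum to 1
    return c
```

Because each neighbor weight is at most $1/n_{k,i}$ and there are at most $n_{k,i}-1$ neighbors, the self-weight is always non-negative and the weights form a convex combination.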
Step 5: Estimation with combination.
$$w_{k,i}=\sum_{l\in N^{+}_{k,i}} c_{l,k,i}\,\psi_{l,i}. \qquad (18)$$
The final estimate is obtained through cooperation among sub-neighbors of the same cluster.
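Putting Steps 1 through 5 together, one node's per-instant update can be sketched as below. For brevity this sketch combines with uniform weights over the detected set rather than the Metropolis rule, and all numeric parameters are illustrative assumptions.

```python
import numpy as np

def run_node(measurements, neighbor_psis, M, mu=0.05, lam=1.75, alpha=0.05):
    """Sketch of Steps 1-5 at a single node over time. neighbor_psis[i] maps
    neighbor id -> intermediate estimate exchanged at instant i; mu, lam, and
    alpha are assumed values, not taken from the patent."""
    w = np.zeros(M)
    g_bar, var = np.zeros(M), 0.0
    for (d, u), psis in zip(measurements, neighbor_psis):
        psi = w + mu * u * (d - u @ w)                      # Step 1: adapt
        w0 = psi + mu * u * (d - u @ psi)                   # Step 2: one-step approx
        g_bar = (1 - alpha) * g_bar + alpha * w0            # Step 3: threshold stats
        var = (1 - alpha) * var + alpha * np.sum((w0 - g_bar) ** 2) / M
        gamma2 = 4.0 * var
        pool = {l: p for l, p in psis.items()               # Step 4: detection
                if np.sum((p - w0) ** 2) / M <= lam * gamma2}
        pool[-1] = psi                                      # key -1: node's own estimate
        w = sum(pool.values()) / len(pool)                  # Step 5: uniform combine
    return w
```

With no neighbors, the sketch degenerates to plain LMS and still converges to the local optimum; with neighbors, only those passing the element-wise test enter the combination.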
Claims (1)
- The claims defining the invention are as follows: Distributed estimation with adaptive clustering strategy based on element-wise distance over multitask networks. In this patent, we aim at clustering (i.e., collecting) the sub-neighbors $l$ of node $k$ into $N^{+}_{k,i}$, where $N^{+}_{k,i}$ denotes the sub-neighbors belonging to the same cluster as node $k$ at time $i$; $\gamma^{2}_{k,i}$ represents the adaptive clustering threshold used as reference; $\lambda_k$ is a constant, defined as a relaxation factor that relaxes the clustering conditions while ensuring cooperation with sub-neighbors; and $\theta_{l,k,i}$ denotes the element-wise clustering detection statistic.
Step 1: Adapting the intermediate estimate. For each node $k$, based on the local measurement $\{d_k(i),u_{k,i}\}$, the local intermediate estimate $\psi_{k,i}$ is adapted at each instant $i$ as
$$\psi_{k,i}=w_{k,i-1}+\mu_k\,u_{k,i}^{T}\,\bigl(d_k(i)-u_{k,i}w_{k,i-1}\bigr). \qquad (1)$$
Step 2: One-step approximation. To reduce the MSD bias that results from cooperation among nodes executing different estimation tasks, an approximate optimal vector $w^{0}_{k,i}$ is obtained from the intermediate estimator $\psi_{k,i}$ via a local one-step approximation, which acts as a more reliable reference than $\psi_{k,i}$:
$$w^{0}_{k,i}=\psi_{k,i}-\mu_k\nabla J_k(\psi_{k,i})=\psi_{k,i}+\mu_k\,u_{k,i}^{T}\,\bigl(d_k(i)-u_{k,i}\psi_{k,i}\bigr). \qquad (2)$$
As $i\to\infty$, $w^{0}_{k,i}$ approaches the cluster objective, which lays the foundation for constructing an accurate clustering threshold.
Step 3: Computing the adaptive element-wise clustering threshold. The estimator can be rewritten as
$$w_{k,i}=\sum_{l\in N_k} c_{l,k}\,\psi_{l,i}=w^{0}_{k,i}+C_i\theta_{k,i}, \qquad (3)$$
where $\theta_{k,i}=[\varepsilon_{1,k,i},\varepsilon_{2,k,i},\ldots,\varepsilon_{N,k,i}]^{T}$. Subtracting $w_k^{0}$ from both sides gives $\Delta w_{k,i}=\Delta w^{0}_{k,i}+C_i\theta_{k,i}$ (4), so the transient squared deviation is
$$\|\Delta w_{k,i}\|^{2}=\|\Delta w^{0}_{k,i}\|^{2}+\|C_i\theta_{k,i}\|^{2}+2\,(\Delta w^{0}_{k,i})^{T}C_i\theta_{k,i}. \qquad (5)$$
To minimize the diffusion estimation bias, we require $\|C_i\theta_{k,i}\|^{2}\le-2\,(\Delta w^{0}_{k,i})^{T}C_i\theta_{k,i}$ (6). Relaxing this condition element-wise, and recalling that $\|\varepsilon_{l,k,i}\|^{2}/M\le\gamma^{2}_{k,i}$ and $\sum_l c_{l,k}=1$, the condition $\gamma^{2}_{k,i}\le 4\,\bigl(\Delta w^{0}_{k,i}(\min)\bigr)^{2}$ (7) guarantees that $\|\Delta w_{k,i}\|^{2}$ is smaller than $\|\Delta w^{0}_{k,i}\|^{2}$. Since $\Delta w^{0}_{k,i}$ is unknown in practice and its minimum element changes in real time, it is estimated via the sample statistics
$$\bar{g}_{k,i}=(1-\alpha)\,\bar{g}_{k,i-1}+\alpha\,w^{0}_{k,i},\qquad \hat{\sigma}^{2}_{k,i}=(1-\alpha)\,\hat{\sigma}^{2}_{k,i-1}+\alpha\,\|w^{0}_{k,i}-\bar{g}_{k,i}\|^{2}/M, \qquad (8)$$
where $\bar{g}_{k,i}$ and $\hat{\sigma}^{2}_{k,i}$ are the mean and the averaged element-wise sample variance of $w^{0}_{k,i}$, respectively; finally, $\gamma^{2}_{k,i}=4\,\hat{\sigma}^{2}_{k,i}$. The sample statistics converge to unbiased estimates of their true values, since $w^{0}_{k,i}$ approaches $w_k^{0}$ as $i\to\infty$, ensuring the accuracy of clustering in real time.
Step 4: Clustering detection with element-wise statistics. The averaged element-wise distance $\theta_{l,k,i}=\|\psi_{l,i}-w^{0}_{k,i}\|^{2}/M$ (9) serves as the clustering detection statistic, where $H_0$ denotes the hypothesis that node $l$ belongs to the same cluster as node $k$ and $H_1$ denotes the opposite hypothesis. With the relaxation factor $\lambda_k>1$, hypothesis $H_0$ is accepted if $\theta_{l,k,i}\le\lambda_k\,\gamma^{2}_{k,i}$ (10). Clustering detection at time $i$ yields the cooperating set $N^{+}_{k,i}=\{l\in N_k:\ \theta_{l,k,i}\le\lambda_k\,\gamma^{2}_{k,i}\}\cup\{k\}$ (11). The Metropolis rule defines the combination weight
$$c_{l,k,i}=\begin{cases}1/\max(n_{k,i},\,n_{l,i}), & l\in N^{+}_{k,i}\setminus\{k\},\\ 1-\sum_{l\in N^{+}_{k,i}\setminus\{k\}} c_{l,k,i}, & l=k,\\ 0, & \text{otherwise}.\end{cases} \qquad (12)$$
Step 5: Estimation with combination.
$$w_{k,i}=\sum_{l\in N^{+}_{k,i}} c_{l,k,i}\,\psi_{l,i}. \qquad (13)$$
The final estimate is obtained through cooperation among sub-neighbors of the same cluster.
Fig1. Performance of distributed estimation. Transient-state average MSDs for recursions with various λk.
Fig2. Network topology in the initial stage with no prior cluster information in case 1: (a) initial global network topology; (b), (c), and (d) initial topologies of clusters 1, 2, and 3, respectively.
Fig3. Performance of adaptive clustering. Network topology in a steady state after the clustering process: (a) global network topology in steady state; (b), (c), and (d) topologies of clusters 1, 2, and 3, respectively, in a steady state.
Fig4. Performance of distributed estimation. Network transient-state average MSDs for recursions.
Fig5. Performance of distributed estimation. Comparison of network MSD behavior in a non-stationary multitask environment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020103334A AU2020103334A4 (en) | 2020-11-09 | 2020-11-09 | Distributed estimation with adaptive clustering strategy based on element-wise distance over multitask networks. |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020103334A AU2020103334A4 (en) | 2020-11-09 | 2020-11-09 | Distributed estimation with adaptive clustering strategy based on element-wise distance over multitask networks. |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2020103334A4 true AU2020103334A4 (en) | 2021-01-21 |
Family
ID=74341071
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2020103334A Ceased AU2020103334A4 (en) | 2020-11-09 | 2020-11-09 | Distributed estimation with adaptive clustering strategy based on element-wise distance over multitask networks. |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU2020103334A4 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114594689A (en) * | 2022-03-15 | 2022-06-07 | 北京理工大学 | Distributed recursive grouping and autonomous aggregation control method of large-scale cluster system |
CN116467610A (en) * | 2023-03-13 | 2023-07-21 | 深圳市壹通道科技有限公司 | Data topology analysis method, device, equipment and storage medium based on 5G message |
CN116467610B (en) * | 2023-03-13 | 2023-10-10 | 深圳市壹通道科技有限公司 | Data topology analysis method, device, equipment and storage medium based on 5G message |
CN116367178A (en) * | 2023-05-31 | 2023-06-30 | 北京邮电大学 | Unmanned cluster self-adaptive networking method and device |
CN116367178B (en) * | 2023-05-31 | 2023-07-25 | 北京邮电大学 | Unmanned cluster self-adaptive networking method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2020103334A4 (en) | Distributed estimation with adaptive clustering strategy based on element-wise distance over multitask networks. | |
Khan et al. | BAS-ADAM: An ADAM based approach to improve the performance of beetle antennae search optimizer | |
Hu et al. | Delay compensation-based state estimation for time-varying complex networks with incomplete observations and dynamical bias | |
Sun et al. | Multi-sensor distributed fusion estimation with applications in networked systems: A review paper | |
Dai et al. | Event-triggered leader-following consensus for multi-agent systems with semi-Markov switching topologies | |
Qin et al. | Recent advances in consensus of multi-agent systems: A brief survey | |
Koshal et al. | A gossip algorithm for aggregative games on graphs | |
Gupta et al. | On a stochastic sensor selection algorithm with applications in sensor scheduling and sensor coverage | |
Xia et al. | Networked data fusion with packet losses and variable delays | |
Chen et al. | Complex-valued radial basic function network, part I: Network architecture and learning algorithms | |
Fan et al. | Sampling-based event-triggered consensus for multi-agent systems | |
Mohammadi et al. | Event-based estimation with information-based triggering and adaptive update | |
Yang et al. | Dynamic event-triggered leader-following consensus control of a class of linear multi-agent systems | |
Zhang et al. | Asynchronous constrained resilient robust model predictive control for Markovian jump systems | |
Liu et al. | Sampled-data based distributed convex optimization with event-triggered communication | |
Graham et al. | Spatial statistics and distributed estimation by robotic sensor networks | |
Pang et al. | Observer-based event-triggered adaptive control for nonlinear multiagent systems with unknown states and disturbances | |
You et al. | Proportional integral observer-based consensus control of discrete-time multi-agent Systems | |
Li et al. | Adaptive event-triggered group consensus of multi-agent systems with non-identical nonlinear dynamics | |
Zhang et al. | Distributed event-triggered tracking control of multi-agent systems with active leader | |
Chen et al. | Diffusion event-triggered sequential asynchronous state estimation algorithm for stochastic multiplicative noise systems | |
Raghuvamsi et al. | A review on distribution system state estimation uncertainty issues using deep learning approaches | |
Cui et al. | A Novel Adaptive Event-Triggered Consensus Control Approach to Multi-Agent Systems with Guaranteed Positive MIET | |
Yuanyuan Zhang et al. | Adaptive clustering based on element-wised distance for distributed estimation over multi-task networks | |
Wan et al. | Distributed filtering over networks based on diffusion strategy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGI | Letters patent sealed or granted (innovation patent) | ||
MK22 | Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry |