CN111639671A - Method for sparse multi-task adaptive network non-negative parameter vector estimation - Google Patents

Method for sparse multi-task adaptive network non-negative parameter vector estimation

Info

Publication number
CN111639671A
CN111639671A
Authority
CN
China
Prior art keywords
parameter
clusters
node
task
adaptive network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010325256.5A
Other languages
Chinese (zh)
Other versions
CN111639671B (en)
Inventor
Wang Zixuan (王紫璇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University
Priority to CN202010325256.5A
Publication of CN111639671A
Application granted
Publication of CN111639671B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2133 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on naturality criteria, e.g. with non-negative factorisation or negative correlation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2136 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/38 Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02 Preprocessing
    • G06F2218/04 Denoising
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method for estimating the non-negative parameter vectors of a sparse multi-task adaptive network. The sparse multi-task adaptive network comprises K nodes and is divided into Q clusters; the parameter vector estimated within each cluster is the same, the parameter vectors estimated by different clusters are different, and each node contains an adaptive filter. The clusters are used to model the parameter distribution of the multi-task system, so that the parameter vectors of the different task clusters remain associated. The adaptive filter estimates the unknown parameter vector with a method that introduces an L0-norm term into the cost function. The adaptive network is divided into a plurality of clusters; the parameter vector estimated by each cluster is the same and the parameter vectors estimated by different clusters are different, but a certain similarity exists between the clusters. The method has a high convergence rate and thereby solves the problem that conventional methods converge slowly when estimating a sparse system.

Description

Method for sparse multi-task adaptive network non-negative parameter vector estimation
Technical Field
The present invention relates to a method for sparse multi-task adaptive network non-negative parameter vector estimation, and in particular to parameter estimation that combines the mean square error with an L0 norm; it belongs to the field of wireless sensor networks.
Background
An adaptive network is a communication network consisting of a plurality of nodes dispersed over an area, each node being equipped with an adaptive filter for adaptively estimating an unknown parameter vector. Multitask adaptive networks are now applied very widely: each node in the network performs its own computation using the information exchanged with neighbouring nodes, which improves the identification accuracy of the whole network. Multitask adaptive networks have been widely used in fields such as machine learning and computer networks.
According to the cooperation mode of the nodes, adaptive networks can be divided into three types: incremental, diffusion, and probabilistic. Based on these structures and adaptive filtering frameworks, researchers have proposed a series of distributed network methods. In 2013, Chen et al. proposed the multitask diffusion least-mean-square method (abbreviated MD-LMS) [Multitask Diffusion Adaptation over Networks [J]. IEEE Journal of Selected Topics in Signal Processing, 2013, PP(99):1-1], which effectively extends the application range of adaptive networks.
In some physical phenomena, such as concentration fields and demographic statistics, the parameter vectors in a multitask adaptive network must satisfy non-negativity constraints. Adaptive filtering under a non-negativity constraint is essentially a constrained optimization problem. In 2011, Chen et al. proposed the non-negative least-mean-square method (abbreviated NNLMS) [Nonnegative Least-Mean-Square Algorithm [J]. IEEE Transactions on Signal Processing, 2011, 59(11): 5225-5235].
However, the existing multitask diffusion LMS method and multitask diffusion RLS method are only suitable for identifying unconstrained parameter vectors.
Therefore, an efficient method for non-negative parameter vector identification of a multitask adaptive network needs to be found.
Disclosure of Invention
To overcome the above drawbacks, the object of the present invention is a method for estimating the non-negative parameter vectors of a sparse multitask adaptive network, which fills the gap in the prior art concerning non-negative parameter vector identification in sparse multitask networks while also achieving a lower steady-state misadjustment.
In order to realize the above object, the invention adopts the following technical solution:
a method for sparse multi-task adaptive network non-negative parameter vector estimation, characterized in that: the sparse multi-task adaptive network comprises K nodes and is divided into Q clusters; the parameter vector estimated within each cluster is the same, while the parameter vectors estimated by different clusters are different,
each of said nodes comprising an adaptive filter;
the clusters are used to model the parameter distribution of the multi-task system, ensuring that the parameter vectors of the different task clusters remain associated;
the adaptive filter estimates the unknown parameter vector with a method that introduces an L0-norm term into the cost function. The method has a high convergence speed and thereby solves the problem that traditional methods converge slowly when estimating a sparse system.
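For reference, in the sparse adaptive filtering literature the L0 norm is usually handled through a smooth approximation whose gradient produces a zero-attraction term. The following form is included only as an assumption about what the patent's formula images likely contain, since those images are not reproduced in this text:
\[
\|w\|_0 \;\approx\; \sum_{i=1}^{M}\bigl(1 - e^{-\beta |w_i|}\bigr),
\qquad
\frac{\partial}{\partial w_i}\bigl(1 - e^{-\beta |w_i|}\bigr) \;=\; \beta\,\mathrm{sgn}(w_i)\,e^{-\beta |w_i|},
\]
which is consistent with the later description of β as controlling the action range and intensity of the zero-attraction factor.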
In one embodiment, the estimating comprises the steps of:
S1: determine the joint matrix C, the similarity matrix ρ and the system joint parameters a_lk of the network, wherein:
in the multitask adaptive network, the neighborhood of node k (including k itself) is denoted N_k and the cluster to which node k belongs is denoted C(k);
for nodes in the same cluster, a joint matrix C is defined (formula given as an image in the original publication), in which each joint parameter c_lk satisfies c_lk ≥ 0 together with a further constraint also given as an image in the original publication;
for nodes in different clusters, a similarity matrix ρ is defined (formula given as an image in the original publication), in which each similarity parameter ρ_kl satisfies ρ_kl ≥ 0 together with a further constraint also given as an image in the original publication.
S2: generating a joint estimate ψ of node k at time n +1 for an unknown parameterk(n +1), K ∈ {1,2, …, K }, using wk(n) represents the estimation of unknown parameters by node k at time nThe counting is carried out by the following steps of,
Figure BDA0002462966310000025
is represented by wk(n) the elements are diagonal matrices of diagonal elements, and the input signal at node k at time n is xk(n),
Error of the measurement
Figure BDA0002462966310000026
The joint estimation ψ of node k at time n +1 for the unknown parametersk(n +1) is represented by the formula
Figure BDA0002462966310000031
Generating, wherein mu, eta and lambda are step length parameters, and beta is the action range and the intensity of a zero absorption factor;
S3: generate the latest estimate w_k(n+1) of the unknown parameter at node k at time n+1, k ∈ {1, 2, …, K}. Let ψ_l(n+1) denote the joint estimate of the unknown parameter at node l at time n+1; the latest estimate of the unknown parameter at node k at time n+1 is then generated by a combination formula given as an image in the original publication.
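To make the structure of steps S1-S3 concrete, the following sketch shows one possible NumPy implementation of an adapt-then-combine diffusion iteration of this kind. It is only an illustration under stated assumptions: the patent's exact error, update and combination formulas are given as images and are not reproduced here, so the adaptation step below uses a generic non-negative LMS update with an exponential zero-attraction term, and the function name md_l0_nnlms_step, the data layout and the variable names (neighbors, clusters) are hypothetical.

import numpy as np

def zero_attractor(w, beta):
    # Zero-attraction term from the exponential approximation of the L0 norm
    # (an assumption; the patent's exact formula is given only as an image).
    return beta * np.sign(w) * np.exp(-beta * np.abs(w))

def md_l0_nnlms_step(w, x, d, C, rho, mu, eta, lam, beta, neighbors, clusters):
    """One hypothetical adapt-then-combine iteration over all K nodes.

    w         : (K, M) array, current estimates w_k(n)
    x         : (K, M) array, input regressors x_k(n)
    d         : (K,)   array, desired outputs at time n
    C, rho    : (K, K) joint and similarity weight matrices
    neighbors : list of neighbor index lists N_k (each includes k itself)
    clusters  : length-K list of cluster labels C(k)
    """
    K, M = w.shape
    psi = np.empty_like(w)
    for k in range(K):
        e = d[k] - x[k] @ w[k]                        # estimation error at node k
        adapt = mu * e * w[k] * x[k]                  # NNLMS-type step: mu * e * D_{w_k(n)} x_k(n)
        attract = lam * zero_attractor(w[k], beta)    # sparsity-promoting zero attraction
        coop = eta * sum(rho[k, l] * (w[l] - w[k])    # similarity regularization across clusters
                         for l in neighbors[k] if clusters[l] != clusters[k])
        psi[k] = w[k] + adapt - attract + coop        # intermediate (joint) estimate psi_k(n+1)
    w_new = np.empty_like(w)
    for k in range(K):                                # combination step within the node's own cluster
        in_cluster = [l for l in neighbors[k] if clusters[l] == clusters[k]]
        w_new[k] = sum(C[l, k] * psi[l] for l in in_cluster)
    return w_new, psi

Under this sketch, starting from a non-negative w and using a sufficiently small μ, the multiplicative factor w[k] * x[k] in the adaptation step tends to keep the iterates non-negative, which is the usual motivation for NNLMS-type updates.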
In one embodiment, in step S1, a_lk = c_kl is taken.
Advantageous effects
Compared with the prior art, the invention has the following advantages: the method not only keeps the convergence speed of the sparse multitask adaptive network high, but also ensures that the network attains a low steady-state misadjustment. The method can be widely applied to computer networks, distributed machine learning, disaster early warning, target positioning and cognitive radio.
Drawings
The invention is further described with reference to the following figures and examples:
FIG. 1 is a diagram illustrating a multitasking adaptive network according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the connections of a multitask adaptive network with 4 task clusters and 20 nodes according to an embodiment of the present application;
FIG. 3a and FIG. 3b are schematic diagrams of the weight parameter vector values of the 4 task clusters according to an embodiment of the present application;
FIG. 4 is the network mean square deviation curve when Gaussian noise is used as the system noise in an embodiment of the present application;
FIG. 5 is the network mean square deviation curve when uniform noise is used as the system noise in an embodiment of the present application.
Detailed Description
Examples
To better illustrate the objects and advantages of the present invention, the following detailed description of the invention is provided in conjunction with the accompanying drawings and examples. The following section further illustrates the above embodiments in conjunction with specific examples. It should be understood that these examples are for illustrative purposes and are not intended to limit the scope of the present invention. The conditions employed in the examples may be adjusted to suit the particular application, and the conditions not specified are typically those used in routine experimentation.
The invention discloses a method for estimating the non-negative parameter vectors of a sparse multi-task adaptive network. The sparse multi-task adaptive network comprises K nodes and is divided into Q clusters; the parameter vector estimated within each cluster is the same, the parameter vectors estimated by different clusters are different, and each node contains an adaptive filter. The clusters are used to model the parameter distribution of the multi-task system, so that the parameter vectors of the different task clusters remain associated. The adaptive filter estimates the unknown parameter vector with a method that introduces an L0-norm term into the cost function. The method has a high convergence speed and thereby solves the problem that traditional methods converge slowly when estimating a sparse system. In this implementation, the association between the parameter vectors of different task clusters means that those parameter vectors differ from each other while still maintaining a certain degree of similarity.
In this embodiment, an adaptive network using the MD-L0-NNLMS method (the multitask diffusion L0 non-negative LMS method proposed here) is used to identify the unknown parameter vector, and its performance is compared with that of an adaptive network using the MD-NNLMS method, where the MD-NNLMS method is obtained from MD-L0-NNLMS by using only the mean square error as the cost function. The performance of the different methods (algorithms) is evaluated with the normalized mean square deviation (NMSD), whose defining formula is given as an image in the original publication and is expressed in decibels (dB); the remaining quantity in that formula is also defined by an image in the original publication.
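Since the defining images are not reproduced here, the following is the standard form of the normalized mean square deviation used in the diffusion-adaptation literature; it is included only as an assumption about what the images likely contain, with $w_k^{o}$ denoting the non-negative optimal solution at node k:
\[
\mathrm{NMSD}(n) \;=\; \frac{1}{K}\sum_{k=1}^{K} 10\log_{10}\!\frac{\bigl\|w_k(n)-w_k^{o}\bigr\|^2}{\bigl\|w_k^{o}\bigr\|^2}\quad\text{(dB)}.
\]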
All experimental curves are results averaged over 20 runs with respect to the optimal solution without negative values. FIG. 1 is a schematic diagram of a multitask adaptive network; FIG. 2 is a schematic diagram of the multitask adaptive network used in the experiment, which contains 4 task clusters and 20 nodes. Because of the similarity between adjacent clusters, a linear model (given as an image in the original publication), with l ∈ {1, 2, 3, 4}, is used to obtain the weight parameter vector of cluster C(l). In this embodiment a multitask adaptive network with 4 task clusters and 20 nodes is adopted; in other embodiments the value of Q lies between 3 and 10 and the value of K between 10 and 50, without limiting the application scenario.
FIG. 3a shows the fixed part w* of the linear model used in the experiment, and FIG. 3b shows Δw_C(l) for the different clusters. The parameter vectors of the clusters are therefore not completely identical, but they contain the same original parameter vector, which reasonably reflects the parameter distribution of a multitask adaptive network.
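The fixed-part-plus-perturbation description above suggests a linear model of the following form; the exact expression is an image in the original publication, so this is only an assumed reconstruction:
\[
w^{\star}_{C(l)} \;=\; w^{\star} + \Delta w_{C(l)}, \qquad l \in \{1,2,3,4\},
\]
where $w^{\star}$ is the fixed component shared by all clusters (FIG. 3a) and $\Delta w_{C(l)}$ is the cluster-specific perturbation (FIG. 3b).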
The principle of the embodiment of the application is as follows: a multitask adaptive method under a non-negativity constraint is designed by combining the KKT conditions, the diffusion strategy and L0-regularization theory. The performance indicators of an adaptive network include the convergence rate and the steady-state misadjustment: the convergence rate determines the time the adaptive network needs to estimate the unknown parameter vector, while the steady-state misadjustment determines the accuracy with which the adaptive network can estimate the unknown parameter vector. The method proposed in this application is accordingly required to achieve a faster convergence rate or a lower steady-state misadjustment than the conventional least-mean-square method.
In this embodiment, an adaptive network using the MD-L0-NNLMS method estimates the unknown parameter vector w^o through the following steps:
S1: determine the joint matrix C, the similarity matrix ρ and the system joint parameters a_lk of the network.
In the multitask adaptive network, the neighborhood of node k (including k itself) is denoted N_k and the cluster to which node k belongs is denoted C(k). For nodes in the same cluster, a joint matrix C is defined (formula given as an image in the original publication), in which each joint parameter c_lk satisfies c_lk ≥ 0 together with a further constraint also given as an image in the original publication. For nodes in different clusters, a similarity matrix ρ is defined (formula given as an image in the original publication), in which each similarity parameter ρ_kl satisfies ρ_kl ≥ 0 together with a further constraint also given as an image in the original publication.
To simplify the system joint parameters, take a_lk = c_kl.
S2: generating a joint estimate ψ of node k at time n +1 for an unknown parameterkW for (n +1), K ∈ {1,2, …, K }k(n) represents the estimation of the unknown parameter by node k at time n,
Figure BDA0002462966310000055
is represented by wk(n) the elements are diagonal matrices of diagonal elements, and the input signal at node k at time n is xk(n) error
Figure BDA0002462966310000056
The joint estimation ψ of node k at time n +1 for the unknown parametersk(n +1) can be represented by the formula
Figure BDA0002462966310000057
Generating, wherein mu, eta and lambda are step length parameters, and beta is a zero absorption factor;
S3: generate the latest estimate w_k(n+1) of the unknown parameter at node k at time n+1, k ∈ {1, 2, …, K}. Let ψ_l(n+1) denote the joint estimate of the unknown parameter at node l at time n+1; the latest estimate of the unknown parameter at node k at time n+1 can then be generated by a combination formula given as an image in the original publication.
In this embodiment, the parameter vector to be estimated is a sparse vector of length M = 20 that contains negative values; the listed values of w* are 0, 0.3, 0, 0.5, 0.2, 0, 0.5, 0, -0.3, 0, 0.1, 0, 0.5, 0, 0.3, -0.2. The filters in all nodes have the same length. For the selection of the joint parameters c_lk and the similarity parameters ρ_lk, the averaging rule is used throughout, i.e. c_lk = |N_l ∩ C(l)|^(-1) for k ∈ N_l ∩ C(l), and ρ_lk = |N_k \ C(k)|^(-1) for l ∈ N_k \ C(k). In this embodiment, Gaussian noise with mean 0.5 and standard deviation 0.1 is used as input; the system noise is chosen as Gaussian noise and uniform noise, each with mean 0.05 and standard deviation 0.001.
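As an illustration of the averaging rule just described, the following short sketch builds the c_lk and ρ_lk weights from each node's neighborhood and cluster membership; the variable names (neighbors, clusters) are hypothetical, and only the rule itself is taken from the text above.

def averaging_rule_weights(K, neighbors, clusters):
    """Build joint weights c[l][k] and similarity weights rho[l][k] from the
    averaging rule c_lk = |N_l ∩ C(l)|^(-1) and rho_lk = |N_k \\ C(k)|^(-1)."""
    c = [[0.0] * K for _ in range(K)]
    rho = [[0.0] * K for _ in range(K)]
    for l in range(K):
        same_cluster = [k for k in neighbors[l] if clusters[k] == clusters[l]]
        for k in same_cluster:                # c_lk: uniform over in-cluster neighbors of l
            c[l][k] = 1.0 / len(same_cluster)
    for k in range(K):
        other_cluster = [l for l in neighbors[k] if clusters[l] != clusters[k]]
        for l in other_cluster:               # rho_lk: uniform over out-of-cluster neighbors of k
            rho[l][k] = 1.0 / len(other_cluster)
    return c, rho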
In this embodiment, the parameters are selected as follows:
When the system noise is Gaussian noise, the parameters of the MD-NNLMS method are taken as μ = 0.035 and η = 0.001, and the parameters of the MD-L0-NNLMS method are taken as μ = 0.035, η = 0.001, λ = 0.001 and β = 5; when the system noise is uniform noise, the parameters of the MD-NNLMS method are taken as μ = 0.035 and η = 0.001, and the parameters of the MD-L0-NNLMS method are taken as μ = 0.035, η = 0.001, λ = 0.001 and β = 5. In other embodiments, μ lies in the range 0.02-0.05, η in the range 0.0001-0.003, λ in the range 0.0001-0.003, and β in the range 1-10.
FIG. 4 and FIG. 5 show the normalized mean square deviation curves when Gaussian noise and uniform noise, respectively, are used as the system noise. The experimental results show that, under the same steady-state misadjustment, the sparse multitask adaptive network based on the MD-L0-NNLMS method disclosed by the invention has the fastest convergence speed.
The method for estimating the non-negative parameter vector of the sparse multitask adaptive network is also called an algorithm for estimating the non-negative parameter vector of the sparse multitask adaptive network.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and the purpose of the embodiments is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All modifications made according to the spirit of the main technical scheme of the invention are covered in the protection scope of the invention.

Claims (6)

1. A method for sparse multi-task adaptive network non-negative parameter vector estimation, characterized in that: the sparse multi-task adaptive network comprises K nodes and is divided into Q clusters; the parameter vector estimated within each cluster is the same, while the parameter vectors estimated by different clusters are different,
each of said nodes comprising an adaptive filter;
the clusters are used to model the parameter distribution of the multi-task system, ensuring that the parameter vectors of the different task clusters remain associated;
the adaptive filter introduces L based on cost function0The norm method estimates the unknown parameter vector.
2. The method of claim 1, wherein the estimation comprises the following steps:
S1: determine the joint matrix C, the similarity matrix ρ and the system joint parameters a_lk of the network, wherein:
in the multitask adaptive network, the neighborhood of node k (including k itself) is denoted N_k and the cluster to which node k belongs is denoted C(k);
for nodes in the same cluster, a joint matrix C is defined (formula given as an image in the original publication), in which each joint parameter c_lk satisfies constraints given as images in the original publication;
for nodes in different clusters, a similarity matrix ρ is defined (formula given as an image in the original publication), in which each similarity parameter ρ_kl satisfies constraints given as images in the original publication;
S2: generate the joint estimate ψ_k(n+1) of the unknown parameter at node k at time n+1, k ∈ {1, 2, …, K}. Let w_k(n) denote the estimate of the unknown parameter at node k at time n, introduce the diagonal matrix whose diagonal elements are the entries of w_k(n) (its symbol is given as an image in the original publication), and let x_k(n) denote the input signal at node k at time n; the estimation error is then defined by a formula given as an image in the original publication. The joint estimate ψ_k(n+1) of node k at time n+1 is generated by an update formula given as an image in the original publication, in which μ, η and λ are step-size parameters and β is the zero-attraction factor;
S3: generate the latest estimate w_k(n+1) of the unknown parameter at node k at time n+1, k ∈ {1, 2, …, K}. Let ψ_l(n+1) denote the joint estimate of the unknown parameter at node l at time n+1; the latest estimate of the unknown parameter at node k at time n+1 is then generated by a combination formula given as an image in the original publication.
3. The method of claim 2, wherein: in step S1, a_lk = c_kl is taken.
4. The method of claim 1, wherein: in the method, the value of Q is between 3 and 10, and the value of K is between 10 and 50.
5. The method of claim 4, wherein: in the method, the value of Q is 4, and the value of K is 20.
6. The method of claim 2, wherein in step S2: μ is 0.02-0.05, η is 0.0001-0.003, λ is 0.0001-0.003, and β is 1-10.
CN202010325256.5A 2020-04-23 2020-04-23 Method for estimating nonnegative parameter vector of sparse multitasking adaptive network Active CN111639671B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010325256.5A CN111639671B (en) 2020-04-23 2020-04-23 Method for estimating nonnegative parameter vector of sparse multitasking adaptive network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010325256.5A CN111639671B (en) 2020-04-23 2020-04-23 Method for estimating nonnegative parameter vector of sparse multitasking adaptive network

Publications (2)

Publication Number Publication Date
CN111639671A true CN111639671A (en) 2020-09-08
CN111639671B CN111639671B (en) 2023-07-28

Family

ID=72328870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010325256.5A Active CN111639671B (en) 2020-04-23 2020-04-23 Method for estimating nonnegative parameter vector of sparse multitasking adaptive network

Country Status (1)

Country Link
CN (1) CN111639671B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105871762A (en) * 2016-05-23 2016-08-17 苏州大学 Adaptive network used for estimation of sparse parameter vector
CN109687845A (en) * 2018-12-25 2019-04-26 苏州大学 A kind of sparse regularization multitask sef-adapting filter network of the cluster of robust

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105871762A (en) * 2016-05-23 2016-08-17 苏州大学 Adaptive network used for estimation of sparse parameter vector
CN109687845A (en) * 2018-12-25 2019-04-26 苏州大学 A kind of sparse regularization multitask sef-adapting filter network of the cluster of robust

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王紫璇;: "基于数据选择的非负自适应滤波算法" *
王艳;: "基于系数估值约束的改进LMS自适应滤波算法" *

Also Published As

Publication number Publication date
CN111639671B (en) 2023-07-28

Similar Documents

Publication Publication Date Title
Li et al. Adaptive fuzzy backstepping output feedback control for a class of MIMO time-delay nonlinear systems based on high-gain observer
CN112583633B (en) Distributed optimization method of directed multi-agent network based on rough information
Ji et al. Observability and estimation in distributed sensor networks
CN109687845B (en) Robust cluster sparse regularization multitask adaptive filter network
Guo et al. Robust formation tracking and collision avoidance for uncertain nonlinear multi-agent systems subjected to heterogeneous communication delays
EP4193304A1 (en) Normalization in deep convolutional neural networks
CN111639671A (en) Method for sparse multi-task adaptive network non-negative parameter vector estimation
CN110190832B (en) Regularization parameter multi-task adaptive filter network
CN105871762A (en) Adaptive network used for estimation of sparse parameter vector
CN112272385B (en) Multi-task adaptive network for non-negative parameter vector estimation
Abuzainab et al. A multiclass mean-field game for thwarting misinformation spread in the internet of battlefield things
CN109976974B (en) System monitoring method under cloud computing environment aiming at operation state judgment
Ma et al. Leader-following consensus control for multi-agent systems under measurement noises
CN108834043B (en) Priori knowledge-based compressed sensing multi-target passive positioning method
CN110516198A (en) A kind of distribution type non-linear kalman filter method
CN115499842A (en) Unmanned aerial vehicle distributed security estimation method under Byzantine attack and manipulation attack
Kurihara et al. Analysis of convergence property of PSO and its application to nonlinear blind source separation
CN110618607B (en) Behavior-based multi-UUV self-organizing coordination control method
Zhenxing et al. Quantized consensus for linear discrete-time multi-agent systems
Yu et al. Distributed blind system identification in sensor networks
CN108279564A (en) A kind of sparse multitask Adaptable System and alternative manner of robust
Zhang et al. Sparse Adaptive Channel Estimation Based on Multi-kernel Correntropy
Ampeliotis et al. Adapt-align-combine for diffusion-based distributed dictionary learning
Shang Synchronization in networks of coupled harmonic oscillators with stochastic perturbation and time delays
Kompella et al. Optimal curiosity-driven modular incremental slow feature analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant