CN116112022A - Multi-task clustering sparse reconstruction method based on message passing - Google Patents


Publication number
CN116112022A
CN116112022A (application number CN202211561223.6A)
Authority
CN
China
Prior art keywords
sparse
task
algorithm
value
distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211561223.6A
Other languages
Chinese (zh)
Inventor
何振清 (Zhenqing He)
梁应敞 (Ying-Chang Liang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangtze River Delta Research Institute of UESTC Huzhou
Original Assignee
Yangtze River Delta Research Institute of UESTC Huzhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangtze River Delta Research Institute of UESTC Huzhou
Priority to CN202211561223.6A
Publication of CN116112022A
Legal status: Pending

Classifications

    • H — Electricity
    • H03 — Electronic circuitry
    • H03M — Coding; decoding; code conversion in general
    • H03M7/00 — Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30 — Compression; expansion; suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3059 — Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
    • H03M7/3062 — Compressive sampling or sensing
    • Y — General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02 — Technologies or applications for mitigation or adaptation against climate change
    • Y02D — Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D30/00 — Reducing energy consumption in communication networks
    • Y02D30/70 — Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a multi-task clustering sparse reconstruction method based on message passing. The method exploits the joint clustered sparse structure shared by the sparse signals of different tasks, achieving better sparse reconstruction performance when few observation samples are available. Specifically, the sparse structure of the clustered sparse signals is described by a Markov spike-and-slab prior, a generalized approximate message passing (GAMP) algorithm is introduced to iteratively approximate the posterior mean of each unknown variable, and an expectation-maximization (EM) method iteratively updates the unknown parameters. Compared with traditional single-task Bayesian compressed sensing algorithms, the method achieves a significant performance improvement when few observation samples are acquired.

Description

Multi-task clustering sparse reconstruction method based on message passing
Technical Field
The invention belongs to the technical field of information and communications, and particularly relates to a multi-task clustering sparse reconstruction method based on message passing.
Background
Data in emerging technology industries such as the intelligent Internet of Things, ultra-wideband communication, semantic communication, and artificial intelligence are growing explosively, and this massive data poses unprecedented challenges for analysis and processing. In fact, because the amount of information in data grows much more slowly than the data dimension, high-dimensional massive data are often redundant, so their key information typically lies in an underlying low-dimensional structural pattern. Sparsity is the most important low-dimensional structural property of data, and provides a handle for representing, analyzing, and revealing the intrinsic properties of phenomena. Building on prior sparse structure, compressed sensing theory has developed: when the original signal, or its representation coefficients in some transform domain, is sufficiently sparse, the high-dimensional signal can be reconstructed from far fewer measurement samples than the original signal dimension. Compressed sensing provides a theoretical basis for sub-Nyquist acquisition of sparse signals and has inspired new ideas and methods in signal processing. In many practical applications (including but not limited to radar detection, sonar localization, image processing, and wireless communication), the object of interest can be regarded as sparse in the relevant domain. In compressed sensing, the central task is to reconstruct an unknown sparse signal from a small number of underdetermined measurements, and an efficient, robust sparse reconstruction method is a prerequisite for realizing these applications.
Traditional sparse reconstruction methods include greedy algorithms (such as the OMP algorithm in "Signal recovery from random measurements via orthogonal matching pursuit") and convex-optimization-based L1-norm reconstruction methods. Given suitable prior information, Bayesian sparse reconstruction methods have attracted wide attention and study. The paper "Sparse Bayesian learning and the relevance vector machine" introduced a hierarchical sparse prior model and first proposed a sparse reconstruction method based on Bayesian learning; however, that method involves matrix inversion, whose computational complexity becomes high when the sensing matrix is large, hindering real-time application. For the clustered sparse reconstruction problem, i.e., when the non-zero support of the sparse signal exhibits an unknown block-sparse structure, the paper "Generalized approximate message passing for estimation with random linear mixing" provides, under a general sparse prior, a message-passing-based sparse reconstruction algorithm with low computational complexity. However, the above methods are not directly applicable to the multi-task block sparse reconstruction considered in the present invention. A key feature of the multi-task block sparse reconstruction problem is that the sparse signals of the different subtasks share the same sparse support set. With proper modeling and reasonable exploitation of this common support, the reconstruction accuracy of the sparse signals can be effectively improved; hence a reconstruction algorithm with low computational complexity and high accuracy has important research value.
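By way of illustration only (not part of the claimed method), the greedy OMP algorithm referenced above can be sketched as follows; variable names and the fixed iteration count are our own choices, and the sensing matrix here is a generic random Gaussian matrix.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A x.

    A minimal sketch: pick the most correlated column, least-squares refit on
    the chosen support, repeat k times.
    """
    n = A.shape[1]
    residual = y.copy()
    support = []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Toy problem: a 3-sparse length-50 signal observed through 30 random projections.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50)) / np.sqrt(30)
x = np.zeros(50)
x[[4, 17, 33]] = [1.5, -2.0, 0.8]
x_hat = omp(A, A @ x, k=3)
print(np.linalg.norm(x_hat - x) < 1e-8)
```

In the noiseless, well-conditioned regime above, OMP typically recovers the support exactly after k steps; with noise or fewer measurements its accuracy degrades, which is the regime where the Bayesian methods discussed here are stronger.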
Disclosure of Invention
The invention mainly provides a multi-task clustering sparse reconstruction method based on Bayesian message passing. The method exploits the joint clustered sparse structure shared by the sparse signals of different tasks, achieving better sparse reconstruction performance when few observation samples are available. Specifically, the sparse structure of the clustered sparse signals is described by a Markov spike-and-slab prior, a generalized approximate message passing (GAMP) algorithm is introduced to iteratively approximate the posterior mean of each unknown variable, and an expectation-maximization (EM) method iteratively updates the unknown parameters. Compared with traditional single-task Bayesian compressed sensing algorithms, the method achieves a significant performance improvement when few observation samples are acquired.
The technical scheme adopted by the invention comprises the following steps:
S1. Establish the mathematical model of the multi-task block sparse reconstruction problem, which can be expressed as

y_k = A_k x_k + w_k, k = 1, ..., K

where y_k ∈ C^M denotes the measurement vector of the k-th task, x_k ∈ C^N is a block-sparse signal (the {x_k} have a joint block-sparse structure), w_k is an unknown zero-mean Gaussian noise vector, and A_k ∈ C^{M×N} (M < N) is the sensing matrix or dictionary of the k-th task model.
S2. Model the block-sparse structure of {x_k}. The invention describes this clustered sparsity with a Markov spike-and-slab prior: for any k,

p(x_k | s) = ∏_{n=1}^{N} p(x_{kn} | s_n), p(x_{kn} | s_n) = (1 − s_n) δ(x_{kn}) + s_n CN(x_{kn}; 0, τ)

where s = (s_1, ..., s_N) ∈ {0, 1}^N is the shared sparse support. Because s forms a Markov chain, the sparse support can be expressed as

p(s) = p(s_1) ∏_{n=2}^{N} p(s_n | s_{n−1})

where p(s_n | s_{n−1}) is the transition probability, with Bernoulli conditional distributions characterized by

p_10 = p(s_n = 1 | s_{n−1} = 0) and p_01 = p(s_n = 0 | s_{n−1} = 1).

The smaller p_10, the smaller the probability that s_n takes value 1 when s_{n−1} takes value 0, and the larger the average distance between two adjacent non-zero clusters. The smaller p_01, the smaller the probability that s_n takes value 0 when s_{n−1} takes value 1, and the larger the average size of a non-zero cluster. Marginalizing s_n out of p(x_{kn} | s_n) yields the marginal distribution of x_{kn}:

p(x_{kn}) = (1 − λ) δ(x_{kn}) + λ CN(x_{kn}; 0, τ)

where λ = p_10/(p_01 + p_10) represents the sparsity of x_k; that is, x_{kn} follows a Bernoulli-Gaussian distribution. Accordingly, the initial distribution p(s_1) is taken as

p(s_1) = λ s_1 + (1 − λ)(1 − s_1).
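As an illustration of the support prior above (a sketch, not part of the claimed method; function and variable names are our own), the Markov chain can be sampled directly, and the empirical fraction of non-zero positions should match the stationary sparsity λ = p_10/(p_01 + p_10):

```python
import numpy as np

def sample_support(n, p10, p01, rng):
    """Sample s_1, ..., s_n from the Markov support prior.

    p10 = P(s_i = 1 | s_{i-1} = 0), p01 = P(s_i = 0 | s_{i-1} = 1).
    The stationary sparsity is lam = p10 / (p01 + p10).
    """
    lam = p10 / (p01 + p10)
    s = np.zeros(n, dtype=int)
    s[0] = rng.random() < lam          # initial p(s_1) = lam*s_1 + (1-lam)*(1-s_1)
    for i in range(1, n):
        # Probability that s_i = 1 given the previous state.
        p_on = p10 if s[i - 1] == 0 else 1.0 - p01
        s[i] = rng.random() < p_on
    return s

rng = np.random.default_rng(1)
p10, p01 = 0.02, 0.18                  # small p10 -> long zero gaps; small p01 -> long clusters
s = sample_support(200_000, p10, p01, rng)
print(abs(s.mean() - p10 / (p01 + p10)) < 0.01)  # empirical sparsity close to lam = 0.1
```

Small p10 makes 0-runs long (large gaps between clusters) and small p01 makes 1-runs long (large clusters), matching the qualitative behavior described above.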
S3. Factor graph model representation.

By Bayes' rule, the joint probability distribution of Y = {y_1, ..., y_K}, X = {x_1, ..., x_K}, and s = {s_1, ..., s_N} is

p(Y, X, s) = p(Y | X, s) p(X | s) p(s).

When s is known, the factors p(y_k | x_k) p(x_k | s), k = 1, ..., K, are conditionally independent, so

p(Y, X, s) = ∏_{k=1}^{K} p(y_k | x_k) p(x_k | s) · p(s).

From the structure of the Markov spike-and-slab prior, the factors p(x_{kn} | s_n), n = 1, ..., N, are mutually independent, so the joint probability distribution of (Y, X, s) becomes

p(Y, X, s) = ∏_{k=1}^{K} [ p(y_k | x_k) ∏_{n=1}^{N} p(x_{kn} | s_n) ] · p(s_1) ∏_{n=2}^{N} p(s_n | s_{n−1}).

When x_k is known, the factors p(y_{km} | x_k), m = 1, ..., M, are mutually independent, so

p(y_k | x_k) = ∏_{m=1}^{M} p(y_{km} | z_{km})

where z_k = A_k x_k and z_{km} is the m-th element of z_k. Here p(y_{km} | z_{km}) is a Gaussian distribution with mean z_{km} and variance σ², i.e.,

p(y_{km} | z_{km}) = CN(y_{km}; z_{km}, σ²).
Since x_1, x_2, ..., x_K possess a joint clustered sparse structure, their sparsity can be expressed by the same Markov spike-and-slab prior distribution, and a factor graph model is built from the dependency relationships among the random variables above. From this factor graph model, the message passing and mean computation among the variables can be obtained based on the message-passing method of "Generalized approximate message passing for estimation with random linear mixing".
S4. Obtain the marginal posterior mean estimate of each variable through message passing, and then compute iterative updates of the unknown parameters (noise variance, etc.) according to the expectation-maximization method in "Pattern Recognition and Machine Learning". Steps S3 and S4 are iterated alternately until convergence.
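At the heart of a GAMP-style iteration is a componentwise scalar denoiser: given a pseudo-observation r = x + Gaussian noise of variance τ, compute the posterior mean of x under the Bernoulli-Gaussian marginal derived above. The following is a sketch for the real-valued case (the complex case is analogous); names and the specific parameterization are our own:

```python
import numpy as np

def bg_posterior_mean(r, tau, lam, v):
    """Posterior mean E[x | r] for x ~ (1-lam)*delta_0 + lam*N(0, v), r = x + N(0, tau).

    This is the scalar denoiser a GAMP-style algorithm applies componentwise
    at every iteration (real/Gaussian sketch).
    """
    # Likelihood of r under the "spike" (x = 0) and "slab" (x ~ N(0, v)) hypotheses.
    p_spike = (1 - lam) * np.exp(-r**2 / (2 * tau)) / np.sqrt(2 * np.pi * tau)
    p_slab = lam * np.exp(-r**2 / (2 * (tau + v))) / np.sqrt(2 * np.pi * (tau + v))
    pi = p_slab / (p_spike + p_slab)          # posterior probability that x != 0
    # Conditional on the slab, the posterior mean is the usual Gaussian shrinkage.
    return pi * (v / (v + tau)) * r

# Large pseudo-observations are barely shrunk; small ones are pulled toward 0.
print(bg_posterior_mean(np.array([5.0]), tau=0.1, lam=0.1, v=1.0))  # ≈ [4.545]
print(bg_posterior_mean(np.array([0.1]), tau=0.1, lam=0.1, v=1.0))  # ≈ [0.003]
```

The multi-task structure enters by sharing the support belief (the Markov messages on s) across all K tasks, so that evidence from every task jointly sharpens pi.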
The beneficial effects of the invention are as follows: the sparse structure of the clustered sparse signals is described by a Markov spike-and-slab prior, a generalized approximate message passing algorithm is introduced to iteratively approximate the posterior mean of each unknown variable, and an expectation-maximization method iteratively updates the unknown parameters. Compared with traditional single-task Bayesian compressed sensing algorithms, the method achieves a significant performance improvement when few observation samples are acquired.
Drawings
FIG. 1 is a probabilistic factor-graph representation of the multi-task clustered sparse signal reconstruction model;
FIG. 2 shows the original multi-task signal;
FIG. 3 shows the reconstruction result of the MT-GAMP algorithm, MSE = −21.7220 dB;
FIG. 4 shows the reconstruction result of the GAMP algorithm, MSE = −18.4699 dB;
FIG. 5 shows the reconstruction result of the VB-SBL algorithm, MSE = −15.0471 dB;
FIG. 6 shows the reconstruction result of the OMP algorithm, MSE = −11.7080 dB;
FIG. 7 shows the MSE as a function of the SNR (M = 150);
FIG. 8 shows the MSE as a function of the number of measurements (SNR = 20 dB).
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings.
The invention comprises the following steps:
S1. The simulation data generation model is set as

y_k = A_k x_k + w_k, k = 1, ..., K

where w_k ∈ C^M is additive circularly symmetric complex Gaussian noise, i.e., w_k ~ CN(0, σ² I_M), σ² is the noise variance, and I_M denotes the M × M identity matrix. The elements of the matrix A_k are mutually independent and obey a zero-mean complex Gaussian distribution with variance 1/N; the non-zero elements of the sparse signal x_k obey a zero-mean complex Gaussian distribution with variance 1; the number of clusters is L and the number of non-zero elements is H, so the sparsity is λ = H/N. The Gaussian noises {w_k} of the different tasks are mutually independent.
Next the signal-to-noise ratio and the mean-square error are defined. Since the non-zero elements of x_k obey a zero-mean Gaussian distribution with variance 1, the signal-to-noise ratio (SNR) is defined as

SNR = 10 log10( E‖A_k x_k‖² / E‖w_k‖² ) dB.

The mean-square error (MSE) is used as the performance metric, defined as

MSE = 10 log10( (1/Q) Σ_{q=1}^{Q} Σ_{k,n} |x̂_{kn}^{(q)} − x_{kn}|² / Σ_{k,n} |x_{kn}|² ) dB

where q = 1, ..., Q indexes the experiments and x̂_{kn}^{(q)} denotes the estimate of the original signal x_{kn} in the q-th experiment. The application and testing of the proposed method (EM-MT-GAMP) is carried out in the following steps:
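These metrics can be computed as follows (a sketch; the exact normalization used in the experiments is not shown in the source, so a standard normalized form consistent with the reported dB values is assumed, and all names are our own):

```python
import numpy as np

def snr_db(A_list, x_list, sigma2):
    """Empirical SNR in dB: average signal power over noise power (one standard choice)."""
    sig = np.mean([np.mean(np.abs(A @ x) ** 2) for A, x in zip(A_list, x_list)])
    return 10 * np.log10(sig / sigma2)

def mse_db(x_true, x_hat_runs):
    """Normalized mean-square error in dB, averaged over Q experiments."""
    num = np.mean([np.sum(np.abs(xh - x_true) ** 2) for xh in x_hat_runs])
    return 10 * np.log10(num / np.sum(np.abs(x_true) ** 2))

x = np.array([0.0, 1.0, -2.0, 0.0])
runs = [x + 0.01, x - 0.01]            # two hypothetical estimates
print(round(mse_db(x, runs), 2))       # → -40.97
```

Each run contributes squared error 4·(0.01)² = 4e-4 against signal energy 5, giving 10·log10(8e-5) ≈ −40.97 dB, which matches the printed value.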
S2. Model the block-sparse structure of {x_k}. The invention describes this clustered sparsity with a Markov spike-and-slab prior: for any k,

p(x_k | s) = ∏_{n=1}^{N} p(x_{kn} | s_n), p(x_{kn} | s_n) = (1 − s_n) δ(x_{kn}) + s_n CN(x_{kn}; 0, τ)

where s = (s_1, ..., s_N) ∈ {0, 1}^N is the shared sparse support. Because s forms a Markov chain, the sparse support can be expressed as

p(s) = p(s_1) ∏_{n=2}^{N} p(s_n | s_{n−1})

where p(s_n | s_{n−1}) is the transition probability, with Bernoulli conditional distributions characterized by

p_10 = p(s_n = 1 | s_{n−1} = 0) and p_01 = p(s_n = 0 | s_{n−1} = 1).

The smaller p_10, the smaller the probability that s_n takes value 1 when s_{n−1} takes value 0, and the larger the average distance between two adjacent non-zero clusters. The smaller p_01, the smaller the probability that s_n takes value 0 when s_{n−1} takes value 1, and the larger the average size of a non-zero cluster. Marginalizing s_n out of p(x_{kn} | s_n) yields the marginal distribution of x_{kn}:

p(x_{kn}) = (1 − λ) δ(x_{kn}) + λ CN(x_{kn}; 0, τ)

where λ = p_10/(p_01 + p_10) represents the sparsity of x_k; that is, x_{kn} follows a Bernoulli-Gaussian distribution. Accordingly, the initial distribution p(s_1) is taken as

p(s_1) = λ s_1 + (1 − λ)(1 − s_1).
S3. Factor graph model representation.

By Bayes' rule, the joint probability distribution of Y = {y_1, ..., y_K}, X = {x_1, ..., x_K}, and s = {s_1, ..., s_N} is

p(Y, X, s) = p(Y | X, s) p(X | s) p(s).

When s is known, the factors p(y_k | x_k) p(x_k | s), k = 1, ..., K, are conditionally independent, so

p(Y, X, s) = ∏_{k=1}^{K} p(y_k | x_k) p(x_k | s) · p(s).

From the structure of the Markov spike-and-slab prior, the factors p(x_{kn} | s_n), n = 1, ..., N, are mutually independent, so the joint probability distribution of (Y, X, s) becomes

p(Y, X, s) = ∏_{k=1}^{K} [ p(y_k | x_k) ∏_{n=1}^{N} p(x_{kn} | s_n) ] · p(s_1) ∏_{n=2}^{N} p(s_n | s_{n−1}).

When x_k is known, the factors p(y_{km} | x_k), m = 1, ..., M, are mutually independent, so

p(y_k | x_k) = ∏_{m=1}^{M} p(y_{km} | z_{km})

where z_k = A_k x_k and z_{km} is the m-th element of z_k. Here p(y_{km} | z_{km}) is a Gaussian distribution with mean z_{km} and variance σ², i.e.,

p(y_{km} | z_{km}) = CN(y_{km}; z_{km}, σ²).
Since x_1, x_2, ..., x_K possess a joint clustered sparse structure, their sparsity can be expressed by the same Markov spike-and-slab prior distribution, and a factor graph model is built from the dependency relationships among the random variables, as shown in FIG. 1. According to the factor graph model of FIG. 1, the message passing and mean computation among the variables can be obtained based on the message-passing method of "Generalized approximate message passing for estimation with random linear mixing".
S4. Obtain the marginal posterior mean estimate of each variable through message passing, and then compute iterative updates of the unknown parameters (noise variance, etc.) according to the expectation-maximization method in "Pattern Recognition and Machine Learning". Steps S3 and S4 are iterated alternately until convergence.
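One common closed-form EM update for the noise variance, computable from the posterior means and variances returned by the message-passing step, is sketched below (an illustrative form under the factorized-posterior approximation, not a verbatim transcription of the patented method; names are our own):

```python
import numpy as np

def em_update_sigma2(Y, A, x_mean, x_var):
    """One EM update of the noise variance from posterior means/variances.

    Uses E||y_k - A_k x_k||^2 = ||y_k - A_k E[x_k]||^2 + sum_n Var[x_kn] * ||a_kn||^2,
    valid when the posterior over x_k factorizes across components (as in the
    GAMP approximation); averages over all K*M scalar measurements.
    """
    total, count = 0.0, 0
    for yk, Ak, mk, vk in zip(Y, A, x_mean, x_var):
        col_energy = np.sum(np.abs(Ak) ** 2, axis=0)     # ||a_kn||^2 per column
        total += np.sum(np.abs(yk - Ak @ mk) ** 2) + np.sum(vk * col_energy)
        count += len(yk)
    return total / count

# Tiny check with a zero-variance posterior: reduces to the mean residual power.
yk = np.array([1.0, 2.0])
Ak = np.eye(2)
mk = np.array([1.0, 1.5])
vk = np.zeros(2)
print(em_update_sigma2([yk], [Ak], [mk], [vk]))  # → 0.125
```

Alternating this update with the message-passing pass realizes the "iterate S3 and S4 until convergence" loop: messages refine the posteriors for the current σ², and EM refits σ² to the refined posteriors.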
Next, a single reconstruction experiment on the multi-task clustered sparse signal is tested for the different algorithms. FIGS. 2-6 show, under SNR = 20 dB, number of clusters L = 2, number of non-zero elements H = 20, and number of tasks K = 5, the simulated original signal and the reconstruction results of the proposed EM-MT-GAMP algorithm and of the comparison methods (the GAMP, VB-SBL, and OMP algorithms). The mean-square errors of the EM-MT-GAMP, GAMP, VB-SBL, and OMP algorithms are −21.7220 dB, −18.4699 dB, −15.0471 dB, and −11.7080 dB, respectively. Observing how each algorithm recovers the zero-valued part of the signal: in the regions where the signal is zero, FIG. 3 shows almost no spurious spikes (i.e., values mistakenly recovered as non-zero), FIG. 4 shows a small number of spurious spikes, FIG. 5 shows a large number, and FIG. 6 also shows a small number. Observing how each algorithm recovers the non-zero values: the EM-MT-GAMP algorithm restores the magnitudes of the non-zero signal values well, whereas the non-zero values restored by the other algorithms show some distortion relative to the original signal. Therefore, the reconstruction performance of the EM-MT-GAMP algorithm on sparse signals is superior to that of the other three algorithms.
The mean-square error performance of the EM-MT-GAMP algorithm under different signal-to-noise ratios is then tested by simulation, to examine the influence of the SNR on the effectiveness of the algorithm. FIG. 7 shows, with number of clusters L = 3, number of non-zero elements H = 90, and number of tasks K = 30, the average mean-square error curves obtained from Q = 10 simulations at different SNRs using the EM-MT-GAMP, VB-SBL, and OMP algorithms. The performance of EM-MT-GAMP is consistently better than that of the OMP and VB-SBL algorithms. Comparing EM-MT-GAMP with GAMP: when more observations are available (M = 200), the proposed EM-MT-GAMP performs similarly to the single-task GAMP algorithm at all SNRs, whereas with fewer observations (M = 150) the proposed EM-MT-GAMP achieves a significant performance improvement over single-task GAMP at SNRs above 20 dB. Thus the advantage of the EM-MT-GAMP algorithm is that it obtains better recovery performance with less observed data; even with more observation data, its recovery remains at least as good as that of the single-task GAMP algorithm.
The mean-square error of the EM-MT-GAMP algorithm under different amounts of single-task observation data is analyzed by simulation, to examine the influence of the number of observations per task on the effectiveness of the algorithm. FIG. 8 shows, with SNR = 20 dB, number of clusters L = 3, number of non-zero elements H = 90, and number of tasks K = 30, the average mean-square error curves obtained from Q = 10 simulations at different numbers of single-task observations M using the proposed EM-MT-GAMP algorithm, the VB-SBL algorithm, and the OMP algorithm. Since the EM-MT-GAMP, GAMP, and OMP algorithms are consistently superior to the VB-SBL algorithm under these conditions, the comparison focuses on EM-MT-GAMP, GAMP, and OMP. Under these conditions, when the number of observations per task M is between 50 and 200, the EM-MT-GAMP algorithm achieves a significant performance improvement over the GAMP and OMP algorithms. When M exceeds 200, the reconstruction performance of the EM-MT-GAMP algorithm converges toward, but remains better than, that of the GAMP and OMP algorithms, demonstrating that EM-MT-GAMP obtains better recovery performance when less observation data is available.

Claims (1)

1. A multi-task clustering sparse reconstruction method based on message passing, characterized by comprising the following steps:

S1. Establish the mathematical model of the multi-task block sparse reconstruction problem:

y_k = A_k x_k + w_k, k = 1, ..., K

where y_k ∈ C^M denotes the measurement vector of the k-th task; x_k ∈ C^N is a block-sparse signal, and the {x_k} are defined to have a joint block-sparse structure; w_k is an unknown zero-mean Gaussian noise vector; and A_k ∈ C^{M×N} is the sensing matrix or dictionary of the k-th task model, with M < N.

S2. Model the block-sparse structure of {x_k}: describe the clustered sparsity with a Markov spike-and-slab prior, i.e., for any k,

p(x_k | s) = ∏_{n=1}^{N} p(x_{kn} | s_n), p(x_{kn} | s_n) = (1 − s_n) δ(x_{kn}) + s_n CN(x_{kn}; 0, τ).

Because s forms a Markov chain, the sparse support is represented as

p(s) = p(s_1) ∏_{n=2}^{N} p(s_n | s_{n−1})

where p(s_n | s_{n−1}) is the transition probability, with Bernoulli conditional distributions characterized by p_10 = p(s_n = 1 | s_{n−1} = 0) and p_01 = p(s_n = 0 | s_{n−1} = 1). The smaller p_10, the smaller the probability that s_n takes value 1 when s_{n−1} takes value 0, and the larger the average distance between two adjacent non-zero clusters; the smaller p_01, the smaller the probability that s_n takes value 0 when s_{n−1} takes value 1, and the larger the average size of a non-zero cluster. Marginalizing s_n out of p(x_{kn} | s_n) yields the marginal distribution of x_{kn}:

p(x_{kn}) = (1 − λ) δ(x_{kn}) + λ CN(x_{kn}; 0, τ)

where λ = p_10/(p_01 + p_10) represents the sparsity of x_k, i.e., x_{kn} follows a Bernoulli-Gaussian distribution; the initial distribution p(s_1) is therefore

p(s_1) = λ s_1 + (1 − λ)(1 − s_1).

S3. Factor graph model representation.

By Bayes' rule, the joint probability distribution of Y = {y_1, ..., y_K}, X = {x_1, ..., x_K}, and s = {s_1, ..., s_N} is

p(Y, X, s) = p(Y | X, s) p(X | s) p(s).

When s is known, the factors p(y_k | x_k) p(x_k | s), k = 1, ..., K, are conditionally independent, so

p(Y, X, s) = ∏_{k=1}^{K} p(y_k | x_k) p(x_k | s) · p(s).

From the structure of the Markov spike-and-slab prior, the factors p(x_{kn} | s_n), n = 1, ..., N, are mutually independent, so the joint probability distribution of (Y, X, s) is

p(Y, X, s) = ∏_{k=1}^{K} [ p(y_k | x_k) ∏_{n=1}^{N} p(x_{kn} | s_n) ] · p(s_1) ∏_{n=2}^{N} p(s_n | s_{n−1}).

When x_k is known, the factors p(y_{km} | x_k), m = 1, ..., M, are mutually independent, so

p(y_k | x_k) = ∏_{m=1}^{M} p(y_{km} | z_{km})

where z_k = A_k x_k and z_{km} is the m-th element of z_k; p(y_{km} | z_{km}) is a Gaussian distribution with mean z_{km} and variance σ², i.e.,

p(y_{km} | z_{km}) = CN(y_{km}; z_{km}, σ²).

Since x_1, x_2, ..., x_K possess a joint clustered sparse structure, their sparsity is expressed by the same Markov spike-and-slab prior distribution; a factor graph model is established from the dependency relationships among the random variables, and the messages and means among the variables in the factor graph model are then obtained by a message-passing method according to the factor graph model.

S4. Obtain the marginal posterior mean estimate of each variable through the message-passing method, and then compute iterative updates of the unknown parameters according to the expectation-maximization method.
CN202211561223.6A 2022-12-07 2022-12-07 Multi-task clustering sparse reconstruction method based on message passing Pending CN116112022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211561223.6A CN116112022A (en) 2022-12-07 2022-12-07 Multi-task clustering sparse reconstruction method based on message passing


Publications (1)

Publication Number Publication Date
CN116112022A true CN116112022A (en) 2023-05-12

Family

ID=86266524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211561223.6A Pending CN116112022A (en) 2022-12-07 2022-12-07 Multi-task clustering sparse reconstruction method based on message passing

Country Status (1)

Country Link
CN (1) CN116112022A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116626665A (en) * 2023-07-24 2023-08-22 无锡航征科技有限公司 Algorithm model, algorithm, current meter and storage medium for measuring flow rate by radar
CN116626665B (en) * 2023-07-24 2023-10-13 无锡航征科技有限公司 Method for measuring flow velocity by radar, flow velocity meter and storage medium

Similar Documents

Publication Publication Date Title
CN109683161B (en) Inverse synthetic aperture radar imaging method based on depth ADMM network
CN109993280B (en) Underwater sound source positioning method based on deep learning
Cevher et al. Sparse signal recovery using markov random fields
CN109116293B (en) Direction-of-arrival estimation method based on lattice-separated sparse Bayes
CN110109050B (en) Unknown mutual coupling DOA estimation method based on sparse Bayes under nested array
CN109375154B (en) Coherent signal parameter estimation method based on uniform circular array in impact noise environment
Tzagkarakis et al. Multiple-measurement Bayesian compressed sensing using GSM priors for DOA estimation
He et al. Improved FOCUSS method with conjugate gradient iterations
CN116112022A (en) Multi-task clustering sparse reconstruction method based on message passing
Butala et al. Tomographic imaging of dynamic objects with the ensemble Kalman filter
Aich et al. On application of OMP and CoSaMP algorithms for DOA estimation problem
Van Gorp et al. Active deep probabilistic subsampling
Ting et al. Sparse image reconstruction for molecular imaging
CN114720938A (en) Large-scale antenna array single-bit sampling DOA estimation method based on depth expansion
CN104407319A (en) Method and system for finding direction of target source of array signal
CN111798531B (en) Image depth convolution compressed sensing reconstruction method applied to plant monitoring
Lin et al. A local search enhanced differential evolutionary algorithm for sparse recovery
CN114624646A (en) DOA estimation method based on model-driven complex neural network
Qin Deep networks for direction of arrival estimation with sparse prior in low SNR
Bai et al. Space alternating variational estimation based sparse Bayesian learning for complex‐value sparse signal recovery using adaptive Laplace priors
Li et al. TEFISTA-Net: GTD parameter estimation of low-frequency ultra-wideband radar via model-based deep learning
CN107677988B (en) Efficient compressed sensing direction-finding method based on special inhomogeneous linear array
Yu et al. Fast reconstruction of 1D compressive sensing data using a deep neural network
Xia et al. Robust signal recovery using Bayesian compressed sensing based on Lomax prior
Bangun Signal recovery on the sphere from compressive and phaseless measurements

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination