CN116112022A - Multi-task clustering sparse reconstruction method based on message passing - Google Patents
- Publication number: CN116112022A (application CN202211561223.6A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/3059—Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
- H03M7/3062—Compressive sampling or sensing
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention provides a multi-task clustered sparse reconstruction method based on message passing. The method exploits the joint clustered sparse structure shared by the sparse signals of different tasks, thereby obtaining better sparse reconstruction performance when few observation samples are available. Specifically, the sparse structure of the clustered sparse signals is described by a Markov spike-and-slab prior, a generalized approximate message passing algorithm is introduced to iteratively approximate the posterior mean of each unknown variable, and an expectation-maximization method is adopted to iteratively update the unknown parameters. Compared with traditional single-task Bayesian compressed sensing algorithms, the method achieves a significant performance improvement when few observation samples are acquired.
Description
Technical Field
The invention belongs to the technical field of information and communication, and particularly relates to a multi-task clustered sparse reconstruction method based on message passing.
Background
Data in emerging technology industries such as the intelligent Internet of Things, ultra-wideband communication, semantic communication and artificial intelligence are growing explosively, and massive data bring unprecedented challenges to their analysis and processing. In fact, since the amount of information in data grows significantly more slowly than the data dimension, high-dimensional massive data are often redundant, so that their key information typically lies in a latent low-dimensional structural pattern. Sparsity is the most important low-dimensional structural characteristic of data, and provides a handle for representing and analyzing data and revealing the intrinsic properties of things. Compressed sensing theory developed on the basis of this sparsity prior: when the original signal, or its representation coefficients in some transform domain, is sufficiently sparse, the high-dimensional signal can be reconstructed from a number of measurement samples far below the original signal dimension. Compressed sensing provides a theoretical basis for sub-Nyquist acquisition of sparse signals and has inspired new ideas and methods in signal processing. In many practical applications (including but not limited to radar detection, sonar positioning, image processing and wireless communication), the object of interest can be considered sparse with respect to the application context. In compressed sensing, the central task is to reconstruct an unknown sparse signal from a small number of underdetermined measurements, and an efficient and robust sparse reconstruction method is a prerequisite for its application.
Traditional sparse reconstruction methods include greedy algorithms (such as the OMP algorithm in the document "Signal recovery from random measurements via orthogonal matching pursuit") and L1-norm reconstruction methods based on convex optimization. Given suitable prior information, Bayesian sparse reconstruction methods have attracted wide attention and study. The document "Sparse Bayesian learning and the relevance vector machine" discloses a hierarchical sparse prior model and first proposed a sparse reconstruction method based on Bayesian learning; however, that method involves matrix inversion, and its computational complexity is high when the sensing matrix dimension is large, which is unfavorable for real-time application. For the clustered sparse reconstruction problem, i.e. when the non-zero sparse support of the sparse signal exhibits an unknown block-sparse structure, the document "Generalized approximate message passing for estimation with random linear mixing" provides a message-passing-based sparse reconstruction algorithm under a general sparse prior, with low computational complexity. However, the above methods are not directly applicable to the multi-task block sparse reconstruction discussed in the present invention. One of the most important features of the multi-task block sparse reconstruction problem is that the sparse signals of the different subtasks share the same sparse support set. Through proper modeling and reasonable exploitation of this common sparse support, the reconstruction accuracy of the sparse signals can be effectively improved, so a reconstruction algorithm with low computational complexity and high accuracy has important research value.
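For concreteness, the greedy baseline mentioned above can be sketched as follows. This is a minimal, illustrative OMP implementation for a single task $y = Ax$, assuming the sparsity level is known; the function name, dimensions and test values are ours, not the document's.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Greedy orthogonal matching pursuit: pick the column most correlated
    with the residual, refit by least squares on the support, repeat."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef           # update the residual
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# Small noiseless example: 3 non-zero coefficients, 50 measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 0.8]
y = A @ x_true
x_hat = omp(A, y, 3)
```

With enough measurements relative to the sparsity level, the recovered support matches the true one; the residual norm always decreases across iterations because each least-squares refit is over a growing column set.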
Disclosure of Invention
The invention mainly provides a multi-task clustered sparse reconstruction method based on Bayesian message passing. The method exploits the joint clustered sparse structure shared by the sparse signals of different tasks, thereby obtaining better sparse reconstruction performance when few observation samples are available. Specifically, the sparse structure of the clustered sparse signals is described by a Markov spike-and-slab prior, a generalized approximate message passing algorithm is introduced to iteratively approximate the posterior mean of each unknown variable, and an expectation-maximization method is adopted to iteratively update the unknown parameters. Compared with traditional single-task Bayesian compressed sensing algorithms, the method achieves a significant performance improvement when few observation samples are acquired.
The technical scheme adopted by the invention comprises the following steps:
S1, establishing the mathematical model of the multi-task block sparse reconstruction problem, which can be expressed as

$$y_k = A_k x_k + w_k,\qquad k=1,\dots,K,$$

where $y_k\in\mathbb{R}^M$ represents the measurement vector of the $k$-th task, $x_k\in\mathbb{R}^N$ is a block-sparse signal (the signals $\{x_k\}$ have a joint block-sparse structure), $w_k$ is an unknown zero-mean Gaussian noise vector, and $A_k\in\mathbb{R}^{M\times N}$ ($M<N$) is the sensing matrix or dictionary of the $k$-th task model.
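As a sketch, the measurement model of S1 with a support shared across tasks can be simulated as below. The dimensions, cluster start positions and noise level are illustrative assumptions, not values from the invention.

```python
import numpy as np

# Sketch of the multi-task measurement model y_k = A_k x_k + w_k (S1).
rng = np.random.default_rng(0)

K, M, N = 3, 60, 100           # tasks, measurements per task, signal dimension
H, L_clusters = 20, 2          # shared non-zero count and number of clusters

# Common block-sparse support: L_clusters contiguous runs totalling H indices.
support = np.zeros(N, dtype=bool)
starts = [10, 60]              # illustrative cluster start positions
run = H // L_clusters
for s0 in starts:
    support[s0:s0 + run] = True

Y, A_list, X = [], [], []
for k in range(K):
    A_k = rng.normal(scale=np.sqrt(1.0 / N), size=(M, N))  # sensing matrix
    x_k = np.zeros(N)
    x_k[support] = rng.normal(size=H)   # joint support, independent amplitudes
    w_k = rng.normal(scale=0.05, size=M)
    Y.append(A_k @ x_k + w_k)
    A_list.append(A_k)
    X.append(x_k)
```

Every task shares the same support mask while the non-zero amplitudes differ per task, which is exactly the joint block-sparse structure the method exploits.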
S2, modeling the block-sparse structure of $\{x_k\}$. The invention uses a Markov spike-and-slab prior to describe this clustered sparsity: for any $k$,

$$p(x_{kn}\mid s_n) = (1-s_n)\,\delta(x_{kn}) + s_n\,\mathcal{N}(x_{kn};0,\tau),$$

where $s_n\in\{0,1\}$ is the sparse support shared by all tasks, $\delta(\cdot)$ is the Dirac delta and $\tau$ is the variance of the non-zero (slab) component. Because $s=\{s_1,\dots,s_N\}$ forms a Markov chain, the sparse support can be expressed as:

$$p(s) = p(s_1)\prod_{n=2}^{N} p(s_n\mid s_{n-1}),$$

where $p(s_n\mid s_{n-1})$ is a transition probability and has the following Bernoulli conditional probability distribution:

$$p(s_n\mid s_{n-1}) = \left(p_{10}^{\,s_n}(1-p_{10})^{1-s_n}\right)^{1-s_{n-1}}\left(p_{01}^{\,1-s_n}(1-p_{01})^{s_n}\right)^{s_{n-1}},$$

where $p_{10}=p(s_n=1\mid s_{n-1}=0)$ and $p_{01}=p(s_n=0\mid s_{n-1}=1)$. The smaller $p_{10}$ is, the smaller the probability that $s_n$ takes value 1 when $s_{n-1}$ takes value 0, and the greater the average distance between two adjacent non-zero clusters. The smaller $p_{01}$ is, the smaller the probability that $s_n$ takes value 0 when $s_{n-1}$ takes value 1, and the larger the average size of the non-zero clusters. Marginalizing $s_n$ out of $p(x_{kn}\mid s_n)$ gives the marginal distribution of $x_{kn}$:

$$p(x_{kn}) = (1-\lambda)\,\delta(x_{kn}) + \lambda\,\mathcal{N}(x_{kn};0,\tau),$$

where $\lambda = p_{10}/(p_{01}+p_{10})$ represents the sparsity of $x_k$; i.e., $x_{kn}$ follows a Bernoulli-Gaussian distribution. The initial distribution $p(s_1)$ is therefore:

$$p(s_1)=\lambda s_1 + (1-\lambda)(1-s_1).$$
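The Markov support chain above can be sketched in a few lines; sampling it also checks that the stationary fraction of ones equals the sparsity $\lambda = p_{10}/(p_{01}+p_{10})$. The transition values and chain length here are illustrative assumptions.

```python
import numpy as np

# Sketch: sampling the Markov support chain of the spike-and-slab prior (S2).
# p10 = p(s_n=1 | s_{n-1}=0), p01 = p(s_n=0 | s_{n-1}=1).
rng = np.random.default_rng(1)

p10, p01 = 0.05, 0.20          # illustrative transition probabilities
lam = p10 / (p01 + p10)        # stationary sparsity level = 0.2

def sample_support(N):
    s = np.empty(N, dtype=int)
    s[0] = rng.random() < lam                  # p(s_1) = lam*s1 + (1-lam)(1-s1)
    for n in range(1, N):
        p_one = (1 - p01) if s[n - 1] == 1 else p10
        s[n] = rng.random() < p_one            # Bernoulli transition
    return s

s = sample_support(200_000)
empirical_sparsity = s.mean()   # approaches lam for a long chain
```

Small $p_{10}$ makes long zero runs (large gaps between clusters), small $p_{01}$ makes long one runs (large clusters), matching the interpretation in the text.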
S3, factor graph model representation

From the Bayes formula, the joint probability distribution of $Y=\{y_k\}$, $X=\{x_k\}$ and $s=\{s_1,s_2,\dots,s_N\}$ is:

$$p(Y,X,s) = p(Y\mid X,s)\,p(X\mid s)\,p(s).$$

From the properties of the Markov spike-and-slab prior, the factors $p(x_{kn}\mid s_n)$ are mutually independent given $s$, so the joint probability distribution of $(Y,X,s)$ factorizes as:

$$p(Y,X,s) = \prod_{k=1}^{K}\prod_{m=1}^{M} p(y_{km}\mid z_{km})\prod_{k=1}^{K}\prod_{n=1}^{N} p(x_{kn}\mid s_n)\; p(s_1)\prod_{n=2}^{N} p(s_n\mid s_{n-1}),$$

where $z_{km}$ is the $m$-th element of $z_k = A_k x_k$, and $p(y_{km}\mid z_{km})$ is a Gaussian distribution with mean $z_{km}$ and variance $\sigma^2$, i.e.

$$p(y_{km}\mid z_{km}) = \mathcal{N}(y_{km};z_{km},\sigma^2).$$

Since $x_1,x_2,\dots,x_K$ have a joint clustered sparse structure, their sparsity can be expressed by the same Markov spike-and-slab prior distribution, and a factor graph model is built from the dependency relationships among these random variables. According to the factor graph model, the message passing and mean computation among the variables can be obtained with the message-passing method of the document "Generalized approximate message passing for estimation with random linear mixing".
S4, the marginal posterior mean estimate of each variable is obtained through message passing, and the iterative updates of the unknown parameters (noise variance, etc.) are then computed according to the expectation-maximization method in "Pattern recognition and machine learning". Steps S3 and S4 are iterated alternately until convergence.
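To make the posterior-mean step concrete, the following is a minimal sketch of the scalar MMSE computation that GAMP-style message passing reduces to, coordinate-wise, under the Bernoulli-Gaussian marginal of S2, using the standard GAMP pseudo-observation model $r = x + v$, $v\sim\mathcal{N}(0,\tau_r)$. The function names and test values are ours, and `tau` is the slab variance assumed above; this is a sketch of one inner step, not the full iterative algorithm.

```python
import numpy as np

def gauss(x, var):
    """Zero-mean Gaussian density evaluated at x."""
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def bg_posterior_mean(r, tau_r, lam, tau):
    """Posterior mean of x under p(x) = (1-lam)*delta(x) + lam*N(x; 0, tau),
    given the pseudo-observation r = x + N(0, tau_r)."""
    num = lam * gauss(r, tau + tau_r)        # evidence for the slab ("on")
    den = num + (1 - lam) * gauss(r, tau_r)  # + evidence for the spike ("off")
    pi = num / den                           # posterior activity probability
    return pi * (tau / (tau + tau_r)) * r    # Wiener shrinkage, gated by pi

r = np.array([0.0, 0.05, 3.0])
x_hat = bg_posterior_mean(r, tau_r=0.1, lam=0.2, tau=1.0)
```

Small inputs are shrunk almost entirely to zero (the spike dominates), while large inputs keep nearly the full Wiener-filter estimate $\tau/(\tau+\tau_r)\,r$.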
The beneficial effects of the invention are as follows: the sparse structure of the clustered sparse signals is described by a Markov spike-and-slab prior, a generalized approximate message passing algorithm is introduced to iteratively approximate the posterior mean of each unknown variable, and an expectation-maximization method is adopted to iteratively update the unknown parameters. Compared with traditional single-task Bayesian compressed sensing algorithms, the method achieves a significant performance improvement when few observation samples are acquired.
Drawings
FIG. 1 is the probabilistic factor graph representation of the multi-task clustered sparse signal reconstruction model;
FIG. 2 is the original multi-task signal;
FIG. 3 is the reconstruction result of the MT-GAMP algorithm, MSE = -21.7220 dB;
FIG. 4 is the reconstruction result of the GAMP algorithm, MSE = -18.4699 dB;
FIG. 5 is the reconstruction result of the VB-SBL algorithm, MSE = -15.0471 dB;
FIG. 6 is the reconstruction result of the OMP algorithm, MSE = -11.7080 dB;
FIG. 7 shows the MSE as a function of the SNR (M = 150);
FIG. 8 shows the MSE as a function of the number of measurements (SNR = 20 dB).
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings.
The invention comprises the following steps:
S1, the simulation data generation model is set as

$$y_k = A_k x_k + w_k,\qquad k=1,\dots,K,$$

where $w_k\in\mathbb{C}^M$ is additive circularly symmetric complex Gaussian noise, i.e. $w_k\sim\mathcal{CN}(0,\sigma^2 I_M)$, $\sigma^2$ is the noise variance and $I_M$ denotes the $M\times M$ identity matrix. The elements of each matrix $A_k$ are mutually independent and obey a zero-mean complex Gaussian distribution with variance $1/N$; the non-zero elements of the sparse signal $x_k$ obey a zero-mean complex Gaussian distribution with variance 1; with $L$ clusters and $H$ non-zero elements, the sparsity is $\lambda = H/N$. The Gaussian noise vectors $\{w_k\}$ of the different tasks are mutually independent.
The signal-to-noise ratio and the mean square error are defined next. Since the non-zero elements of $x_k$ obey a zero-mean complex Gaussian distribution with variance 1, the signal-to-noise ratio (SNR) is defined as

$$\mathrm{SNR} = 10\log_{10}\frac{\mathbb{E}\{\|A_k x_k\|^2\}}{\mathbb{E}\{\|w_k\|^2\}}.$$

The mean square error (MSE) is used as the performance metric, defined as

$$\mathrm{MSE} = 10\log_{10}\left(\frac{1}{Q}\sum_{q=1}^{Q}\frac{\sum_{k,n}\big|x_{kn}^{(q)}-\hat{x}_{kn}^{(q)}\big|^2}{\sum_{k,n}\big|x_{kn}^{(q)}\big|^2}\right),$$

where $q=1,\dots,Q$ indexes the experiments, $Q$ is the number of experiments, and $\hat{x}_{kn}^{(q)}$ represents the estimate of the original signal $x_{kn}^{(q)}$ in the $q$-th experiment. The application and testing of the method of the invention (EM-MT-GAMP) are carried out with the following steps:
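A hedged sketch of these two metrics in code; the exact normalization of the reported MSE over tasks and experiments is our assumption, matching the definition above.

```python
import numpy as np

def snr_db(Ax, w):
    """SNR = 10 log10(||A_k x_k||^2 / ||w_k||^2), estimated from samples."""
    return 10 * np.log10(np.sum(np.abs(Ax)**2) / np.sum(np.abs(w)**2))

def mse_db(x_true, x_hat):
    """Normalized mean-square error of one experiment, in dB."""
    num = np.sum(np.abs(x_true - x_hat)**2)
    den = np.sum(np.abs(x_true)**2)
    return 10 * np.log10(num / den)

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
small_err = mse_db(x, x + 0.01 * rng.standard_normal(100))  # a good estimate
```

An estimate equal to the all-zero vector gives 0 dB (all signal energy is error), and values around -20 dB as in the figures correspond to a residual energy of about 1% of the signal energy.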
S2, modeling the block-sparse structure of $\{x_k\}$. The invention uses a Markov spike-and-slab prior to describe this clustered sparsity: for any $k$,

$$p(x_{kn}\mid s_n) = (1-s_n)\,\delta(x_{kn}) + s_n\,\mathcal{N}(x_{kn};0,\tau),$$

where $s_n\in\{0,1\}$ is the sparse support shared by all tasks, $\delta(\cdot)$ is the Dirac delta and $\tau$ is the variance of the non-zero (slab) component. Because $s=\{s_1,\dots,s_N\}$ forms a Markov chain, the sparse support can be expressed as:

$$p(s) = p(s_1)\prod_{n=2}^{N} p(s_n\mid s_{n-1}),$$

where $p(s_n\mid s_{n-1})$ is a transition probability and has the following Bernoulli conditional probability distribution:

$$p(s_n\mid s_{n-1}) = \left(p_{10}^{\,s_n}(1-p_{10})^{1-s_n}\right)^{1-s_{n-1}}\left(p_{01}^{\,1-s_n}(1-p_{01})^{s_n}\right)^{s_{n-1}},$$

where $p_{10}=p(s_n=1\mid s_{n-1}=0)$ and $p_{01}=p(s_n=0\mid s_{n-1}=1)$. The smaller $p_{10}$ is, the smaller the probability that $s_n$ takes value 1 when $s_{n-1}$ takes value 0, and the greater the average distance between two adjacent non-zero clusters. The smaller $p_{01}$ is, the smaller the probability that $s_n$ takes value 0 when $s_{n-1}$ takes value 1, and the larger the average size of the non-zero clusters. Marginalizing $s_n$ out of $p(x_{kn}\mid s_n)$ gives the marginal distribution of $x_{kn}$:

$$p(x_{kn}) = (1-\lambda)\,\delta(x_{kn}) + \lambda\,\mathcal{N}(x_{kn};0,\tau),$$

where $\lambda = p_{10}/(p_{01}+p_{10})$ represents the sparsity of $x_k$; i.e., $x_{kn}$ follows a Bernoulli-Gaussian distribution. The initial distribution $p(s_1)$ is therefore:

$$p(s_1)=\lambda s_1 + (1-\lambda)(1-s_1).$$
S3, factor graph model representation

From the Bayes formula, the joint probability distribution of $Y=\{y_k\}$, $X=\{x_k\}$ and $s=\{s_1,s_2,\dots,s_N\}$ is:

$$p(Y,X,s) = p(Y\mid X,s)\,p(X\mid s)\,p(s).$$

From the properties of the Markov spike-and-slab prior, the factors $p(x_{kn}\mid s_n)$ are mutually independent given $s$, so the joint probability distribution of $(Y,X,s)$ factorizes as:

$$p(Y,X,s) = \prod_{k=1}^{K}\prod_{m=1}^{M} p(y_{km}\mid z_{km})\prod_{k=1}^{K}\prod_{n=1}^{N} p(x_{kn}\mid s_n)\; p(s_1)\prod_{n=2}^{N} p(s_n\mid s_{n-1}),$$

where $z_{km}$ is the $m$-th element of $z_k = A_k x_k$, and $p(y_{km}\mid z_{km})$ is a Gaussian distribution with mean $z_{km}$ and variance $\sigma^2$, i.e.

$$p(y_{km}\mid z_{km}) = \mathcal{N}(y_{km};z_{km},\sigma^2).$$

Since $x_1,x_2,\dots,x_K$ have a joint clustered sparse structure, their sparsity can be expressed by the same Markov spike-and-slab prior distribution, and a factor graph model is built from the dependency relationships among these random variables, as shown in FIG. 1. According to the factor graph model shown in FIG. 1, the message passing and mean computation among the variables can be obtained with the message-passing method of the document "Generalized approximate message passing for estimation with random linear mixing".

S4, the marginal posterior mean estimate of each variable is obtained through message passing, and the iterative updates of the unknown parameters (noise variance, etc.) are then computed according to the expectation-maximization method in "Pattern recognition and machine learning". Steps S3 and S4 are iterated alternately until convergence.
Next, single reconstruction experiments on the multi-task clustered sparse signal with different algorithms were tested. FIGS. 2, 3, 4, 5 and 6 show, for SNR = 20 dB, number of clusters L = 2, number of non-zero elements H = 20 and number of tasks K = 5, the simulated original signal and the reconstruction results obtained for this signal by the proposed EM-MT-GAMP algorithm and by the comparison methods (GAMP algorithm, VB-SBL algorithm and OMP algorithm). The mean square errors of the EM-MT-GAMP, GAMP, VB-SBL and OMP algorithms are -21.7220 dB, -18.4699 dB, -15.0471 dB and -11.7080 dB respectively. Observing first the recovery of the zero-valued entries: in the regions where the signal is zero, FIG. 3 shows almost no spurious spikes (i.e., values mistakenly reconstructed as non-zero), FIG. 4 shows a small amount of spurious spikes, FIG. 5 a large amount, and FIG. 6 also a small amount. Observing next the recovery of the non-zero values: the EM-MT-GAMP algorithm restores the magnitudes of the non-zero values well, whereas the non-zero values recovered by the other algorithms show some distortion compared with those of the original signal. The reconstruction performance of the EM-MT-GAMP algorithm on sparse signals is therefore superior to that of the other three algorithms.
The mean square error performance of the EM-MT-GAMP algorithm in different signal-to-noise-ratio environments is tested by simulation, so as to examine the influence of the SNR on the effectiveness of the algorithm. FIG. 7 shows the average mean square error curves obtained from Q = 10 simulations at different SNRs with the EM-MT-GAMP, VB-SBL and OMP algorithms, for number of clusters L = 3, number of non-zero elements H = 90 and number of tasks K = 30. It can be seen that the performance of EM-MT-GAMP is always better than that of the OMP and VB-SBL algorithms. Next, EM-MT-GAMP is compared with the GAMP algorithm. With more observations (M = 200), the proposed EM-MT-GAMP algorithm performs similarly to the single-task GAMP algorithm at all SNRs, whereas with fewer observations (M = 150) the proposed EM-MT-GAMP algorithm shows a significant performance improvement over the single-task GAMP algorithm at SNRs above 20 dB. The advantage of the EM-MT-GAMP algorithm is thus that it obtains better recovery performance with less observation data; even with more observation data, its recovery remains at least as good as that of the single-task GAMP algorithm.
The mean square error of the EM-MT-GAMP algorithm under different amounts of single-task observation data is analyzed by simulation, so as to examine the influence of the single-task observation count on the effectiveness of the algorithm. FIG. 8 shows, for SNR = 20 dB, number of clusters L = 3, number of non-zero elements H = 90 and number of tasks K = 30, the average mean square error curves obtained from Q = 10 simulations at different single-task observation counts M with the proposed EM-MT-GAMP algorithm, the VB-SBL algorithm and the OMP algorithm. Since the EM-MT-GAMP, GAMP and OMP algorithms are always superior to the VB-SBL algorithm under the above conditions, the comparison focuses on the EM-MT-GAMP, GAMP and OMP algorithms. It can be seen that, under the above conditions, when the single-task observation count M is between 50 and 200 the EM-MT-GAMP algorithm shows a significant performance improvement over the GAMP and OMP algorithms. When M exceeds 200, the reconstruction performance of the EM-MT-GAMP algorithm converges toward that of the GAMP and OMP algorithms but remains consistently better, which demonstrates that the EM-MT-GAMP algorithm obtains better recovery performance with less observation data.
Claims (1)
1. The multi-task clustering sparse reconstruction method based on message passing is characterized by comprising the following steps:

S1, establishing the mathematical model of the multi-task block sparse reconstruction problem:

$$y_k = A_k x_k + w_k,\qquad k=1,\dots,K,$$

where $y_k\in\mathbb{R}^M$ represents the measurement vector of the $k$-th task, $x_k\in\mathbb{R}^N$ is a block-sparse signal, the signals $\{x_k\}$ have a joint block-sparse structure, $w_k$ is an unknown zero-mean Gaussian noise vector, and $A_k\in\mathbb{R}^{M\times N}$ is the sensing matrix or dictionary of the $k$-th task model, with $M<N$;

S2, modeling the block-sparse structure of $\{x_k\}$: a Markov spike-and-slab prior is used to describe this clustered sparsity, for any $k$:

$$p(x_{kn}\mid s_n) = (1-s_n)\,\delta(x_{kn}) + s_n\,\mathcal{N}(x_{kn};0,\tau),$$

where $s_n\in\{0,1\}$ is the sparse support shared by all tasks; because $s=\{s_1,\dots,s_N\}$ forms a Markov chain, the sparse support is represented as

$$p(s) = p(s_1)\prod_{n=2}^{N} p(s_n\mid s_{n-1}),$$

where $p(s_n\mid s_{n-1})$ is a transition probability with the Bernoulli conditional probability distribution

$$p(s_n\mid s_{n-1}) = \left(p_{10}^{\,s_n}(1-p_{10})^{1-s_n}\right)^{1-s_{n-1}}\left(p_{01}^{\,1-s_n}(1-p_{01})^{s_n}\right)^{s_{n-1}},$$

where $p_{10}=p(s_n=1\mid s_{n-1}=0)$ and $p_{01}=p(s_n=0\mid s_{n-1}=1)$; the smaller $p_{10}$ is, the smaller the probability that $s_n$ takes value 1 when $s_{n-1}$ takes value 0, and the larger the average distance between two adjacent non-zero clusters; the smaller $p_{01}$ is, the smaller the probability that $s_n$ takes value 0 when $s_{n-1}$ takes value 1, and the larger the average size of the non-zero clusters; marginalizing $s_n$ out of $p(x_{kn}\mid s_n)$ gives the marginal distribution of $x_{kn}$:

$$p(x_{kn}) = (1-\lambda)\,\delta(x_{kn}) + \lambda\,\mathcal{N}(x_{kn};0,\tau),$$

where $\lambda = p_{10}/(p_{01}+p_{10})$ represents the sparsity of $x_k$, i.e. $x_{kn}$ follows a Bernoulli-Gaussian distribution; the initial distribution is therefore

$$p(s_1)=\lambda s_1 + (1-\lambda)(1-s_1);$$

S3, factor graph model representation: from the Bayes formula, the joint probability distribution of $Y$, $X$ and $s=\{s_1,s_2,\dots,s_N\}$ is

$$p(Y,X,s) = p(Y\mid X,s)\,p(X\mid s)\,p(s);$$

from the properties of the Markov spike-and-slab prior, the factors $p(x_{kn}\mid s_n)$ are mutually independent, so the joint probability distribution of $(Y,X,s)$ is

$$p(Y,X,s) = \prod_{k=1}^{K}\prod_{m=1}^{M} p(y_{km}\mid z_{km})\prod_{k=1}^{K}\prod_{n=1}^{N} p(x_{kn}\mid s_n)\; p(s_1)\prod_{n=2}^{N} p(s_n\mid s_{n-1}),$$

where $z_{km}$ is the $m$-th element of $z_k=A_k x_k$, and $p(y_{km}\mid z_{km})$ is a Gaussian distribution with mean $z_{km}$ and variance $\sigma^2$, i.e.

$$p(y_{km}\mid z_{km}) = \mathcal{N}(y_{km};z_{km},\sigma^2);$$

since $x_1,x_2,\dots,x_K$ have a joint clustered sparse structure, their sparsity is expressed by the same Markov spike-and-slab prior distribution, a factor graph model is established from the dependency relationships among the random variables, and the messages and means passed among the variables in the factor graph model are then obtained by a message passing method;

S4, obtaining the marginal posterior mean estimate of each variable through the message passing method, and then computing the iterative updates of the unknown parameters according to the expectation-maximization method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211561223.6A CN116112022A (en) | 2022-12-07 | 2022-12-07 | Multi-task clustering sparse reconstruction method based on message passing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116112022A true CN116112022A (en) | 2023-05-12 |
Family
ID=86266524
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116112022A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116626665A (en) * | 2023-07-24 | 2023-08-22 | 无锡航征科技有限公司 | Algorithm model, algorithm, current meter and storage medium for measuring flow rate by radar |
CN116626665B (en) * | 2023-07-24 | 2023-10-13 | 无锡航征科技有限公司 | Method for measuring flow velocity by radar, flow velocity meter and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||