CN114050832A - Sparse signal reconstruction method based on two-step deep unfolding strategy - Google Patents

Sparse signal reconstruction method based on two-step deep unfolding strategy

Info

Publication number
CN114050832A
CN114050832A (application CN202111374559.7A)
Authority
CN
China
Prior art keywords
algorithm
signal
training
sparse
deep unfolding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111374559.7A
Other languages
Chinese (zh)
Inventor
邵凯
闫力力
王光宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202111374559.7A priority Critical patent/CN114050832A/en
Publication of CN114050832A publication Critical patent/CN114050832A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30 Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent


Abstract

The invention relates to a sparse signal reconstruction method based on a two-step deep unfolding strategy, belonging to the technical field of signal reconstruction and comprising the following steps: S1: sparsifying the input signal; S2: improving the deep unfolding models of traditional sparse signal reconstruction algorithms with the TwDU algorithm; S3: reconstructing and recovering the original sparse signal with the improved sparse signal reconstruction algorithm. The method makes full use of the correlation between signals and, for both one-dimensional signals in wireless communication and two-dimensional picture signals, reconstructs the signal faster and with higher precision.

Description

Sparse signal reconstruction method based on two-step deep unfolding strategy
Technical Field
The invention belongs to the technical field of signal reconstruction and relates to a sparse signal reconstruction method based on a two-step deep unfolding strategy.
Background
Compressed sensing (CS) acquires discrete samples of an original signal, i.e. measurements of a sparse signal, with a sampling matrix at a rate far below the Nyquist sampling rate, and then reconstructs the original signal from these measurements with a nonlinear reconstruction algorithm. To reconstruct the original signal efficiently and with high accuracy, many excellent reconstruction algorithms have been proposed.
In recent years, deep learning has had a great influence on the research and design of compressed sensing sparse signal reconstruction algorithms thanks to its strong feature learning capability. These works fall into two main categories. The first is data-driven methods, which fit the data structure with a neural network model. Compared with traditional algorithms, data-driven methods have certain advantages: (1) the dependence on signal sparsity can be reduced; (2) the accuracy of signal reconstruction can be improved. However, because they rely mainly on a generic neural network model, they also have disadvantages: (1) the network architecture is usually generic, so the model lacks interpretability and is not very stable; (2) during training, the networks often need large numbers of signal samples and place high demands on the computing power and memory of the platform. The second category is model-driven methods, which combine the advantages of traditional algorithms with performance guarantees and of neural network models; they are widely applied in fields such as wireless communication and image processing, and are collectively called deep unfolding. Deep unfolding specifically refers to unrolling the iterative process of a traditional algorithm into a new hierarchical structure similar to a neural network. These hierarchical structures contain trainable parameter variables, which are trained in a supervised learning manner, and the algorithm parameters are updated through a back-propagation mechanism based on gradient descent. The deep unfolding method makes full use of the strong learning capability of deep learning and the deterministic internal structure of the traditional algorithm, so that the traditional algorithm acquires learning capability. Gregor and LeCun first used this method to propose the Learned ISTA (LISTA) network model, which abstracts the threshold and matrix variables of the ISTA algorithm into network training parameters and obtains better performance than ISTA. Borgerding and Schniter abstracted some parameters and the threshold in the Onsager correction term of AMP into network training parameters and proposed the Learned AMP (LAMP) network model, which performs better than LISTA and AMP. Recently, Ito and Takabe introduced an MMSE estimator into ISTA and proposed the Trainable ISTA (TISTA) deep unfolding model, which converges faster than LISTA and LAMP with a very small number of training parameters. Compared with the first category, deep unfolding algorithms have the following advantages: (1) high stability and guaranteed performance under the constraint of the traditional algorithm; (2) most deep unfolding algorithms have fewer parameters to train and therefore need fewer training samples; (3) deep unfolding algorithms are generally intuitive and interpretable and have low computing power and memory requirements.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a sparse signal reconstruction method based on a two-step deep unfolding strategy (Two-step Deep Unfolding, TwDU). TwDU makes full use of the correlation between signals to assign different weights to the previous two estimates during deep unfolding, so that they jointly determine the current result. The method can establish a dependency relationship between signals; this dependency is not fixed but adapts as the data change, thereby improving the reconstruction accuracy and convergence speed of compressed sensing sparse signal reconstruction.
In order to achieve the purpose, the invention provides the following technical scheme:
a sparse signal reconstruction method based on a two-step depth expansion strategy comprises the following steps:
s1: sparsifying the input signal;
s2: improving a depth expansion model of a traditional sparse signal reconstruction algorithm by using a TwDU algorithm;
s3: and reconstructing and recovering the original sparse signal by using the improved sparse signal reconstruction algorithm.
Further, in step S1, the method specifically includes:
in compressed sensing, let $s \in \mathbb{R}^{N}$ be the original signal vector, $\Phi \in \mathbb{R}^{M \times N}$ ($M < N$) the observation matrix, and $n \in \mathbb{R}^{M}$ a Gaussian white noise vector, so that $y = \Phi s + n$, where $y \in \mathbb{R}^{M}$ is the observation signal vector obtained by sampling $s$ through the observation matrix $\Phi$; assuming the original signal vector $s$ can be sparsified under a known orthonormal basis $\Psi \in \mathbb{R}^{N \times N}$, i.e. $s = \Psi x$, where $x \in \mathbb{R}^{N}$, called the original sparse signal, is the sparse representation of $s$ in the new transform domain $\Psi$; letting the sensing matrix $A = \Phi \Psi$, we get $y = Ax + n$.
Further, in step S1, the traditional reconstruction algorithm models the sparse reconstruction problem as the following convex optimization problem:

$$\hat{x} = \arg\min_{x} \frac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_1 \qquad (1)$$
further, in step S2, the depth expansion model of the conventional sparse signal reconstruction algorithm includes LISTA, LAMP, and TISTA.
Further, in step S2, the steps of improving the LISTA algorithm with the TwDU algorithm are as follows:

the ISTA algorithm is deeply unfolded; the symbol $\Gamma(\cdot)$ denotes the ISTA model and the symbol $U(\cdot)$ denotes the deep unfolding model of ISTA; ISTA is re-described as:

$$r_t = \hat{x}_t + \beta A^{T}(y - A\hat{x}_t) \qquad (2)$$

$$\hat{x}_{t+1} = \eta(r_t; \tau) \qquad (3)$$

where $\eta(\cdot; \tau)$ is the soft-threshold function with threshold $\tau$; letting $\beta A^{T} = B$ and $I_N - BA = S$ yields:

$$\hat{x}_{t+1} = \eta(S\hat{x}_t + By; \tau) \qquad (4)$$

the improvement of ISTA by the deep unfolding technique is denoted as

$$\hat{x}_{t+1} = U(\hat{x}_t) = \eta(S\hat{x}_t + By; \tau) \qquad (5)$$

formula (5) expresses that the traditional algorithm ISTA is deeply unfolded into LISTA to obtain the result of the next iteration; in the deep unfolding algorithm LISTA, the parameters $\theta = [B, S, \tau]$ are self-learned and updated through the deep learning technique by training on input data pairs $\{(y_d, x_d)\}_{d=1}^{D}$ to minimize the loss function (6):

$$L(\theta) = \frac{1}{D}\sum_{d=1}^{D}\left\|\hat{x}(y_d; \theta) - x_d\right\|_2^2 \qquad (6)$$
then, the deep unfolding algorithm is unfolded in two steps; the two-step deep unfolding strategy proposes that the result $\hat{x}_{t+1}$ depends not only on $\hat{x}_t$ but also on $\hat{x}_{t-1}$, i.e. the estimate of each deep unfolding step depends on the estimates of the previous two steps rather than only on the previous one, i.e. there exists a correlation between the signals, which is formulated as:

$$\hat{x}_{t+1} = \psi\,U(\hat{x}_t) + (1-\omega)\,\hat{x}_t + (\omega-\psi)\,\hat{x}_{t-1} \qquad (7)$$

in formula (7), $U(\cdot)$ is a generalized deep unfolding algorithm, including LISTA, LAMP and TISTA; the parameters to be trained through the deep learning technique in (7) are $\Theta = [\omega, \psi, \theta(U(\cdot))]$, where $\theta(U(\cdot))$ denotes the parameters to be trained in the respective deep unfolding algorithm; for TwDU-LISTA, $\theta(U(\cdot)) = [B, S, \tau]$; in formula (7), $\hat{x}_t$ and $\hat{x}_{t-1}$ themselves used the two-step deep unfolding strategy in their respective estimation processes; the training parameters of $\hat{x}_t$ and $\hat{x}_{t-1}$ were already trained in their respective two-step deep unfoldings and no longer participate in the current training, which reduces the computational burden. The influence factors $\omega$ and $\psi$ of the first two results $\hat{x}_t$ and $\hat{x}_{t-1}$ on the current result can be adjusted adaptively to the characteristics of the data using the strong learning capability of deep learning; when $\omega = \psi = 1.0$, then

$$\hat{x}_{t+1} = U(\hat{x}_t) \qquad (8)$$

the coefficient terms of $\hat{x}_t$ and $\hat{x}_{t-1}$ are 0, and the results of the previous two iterations have no influence on the current result; in this case, (8), like (5), is an ordinary deep unfolding scheme. In fact, the proposed scheme (7) is more general, and (5) can be seen as a special case of (7). However, as the number of iterations of the algorithm increases, the parameters $\omega$ and $\psi$ continuously self-optimize through the back-propagation mechanism of deep learning and finally stabilize with small fluctuations around an optimal value; at this point, the coefficient $1-\omega$ of $\hat{x}_t$ and the coefficient $\omega-\psi$ of $\hat{x}_{t-1}$ are no longer 0 and the performance of the algorithm improves significantly, which indirectly shows the important influence of the previous two results on the current one. The two-step deep unfolding strategy fully exploits the inherent characteristics of time-sequential signals: the estimates of the previous two deep unfolding steps jointly influence the current result with different weights.
Further, an incremental training mode of the parameters is adopted. In the TwDU-based method, the values of the parameters $\Theta = [\omega, \psi, \theta(U(\cdot))]$ directly affect the reconstruction quality of the sparse signal, so the method of training $\Theta$ is very important. In the training process of the invention, a batch of data is first divided into H mini-batches and fed into the algorithm network, and the network loss value decreases gradually as the mini-batches are trained. When the training of one batch of data is completed, a new batch of data is fed into the network for training again. Multiple experiments verify that this incremental training method is very effective in adjusting $\Theta$ and improving network performance, because incremental training not only alleviates the gradient-vanishing problem but also further improves the generalization capability of the network. The training data are randomly generated data pairs $(x, y)$, where $y$, obtained from $y = Ax + n$, is the sparsely sampled feature data that the two-step unfolding network needs to learn, and $x$ is the sparse label data. The TwDU algorithm learns the data features batch by batch with a stochastic gradient descent algorithm and reconstructs the original sparse signal $x$. In the t-th round of training, the optimizer minimizes the objective function of this round,

$$L_t(\Theta) = \frac{1}{D}\sum_{d=1}^{D}\left\|\hat{x}_t(y_d; \Theta) - x_d\right\|_2^2,$$

by adjusting $\Theta$. After H mini-batches of data have been processed, the objective function of the optimizer becomes $L_{t+1}(\Theta)$. Although the objective function changes continuously from the first layer to the last layer during network training, the parameters $\Theta$ take the previous result as the initial value of the current round of training, giving a certain consistency. In the present invention, all experiments, including control experiments, use the incremental training method to control variables.
Further, in step S2, the steps of improving the LAMP algorithm with the TwDU algorithm are as follows:

the AMP algorithm is a signal processing algorithm proposed in recent years that has attracted much attention for its fast convergence; its mathematical iteration is expressed as

$$v_t = y - A\hat{x}_t + b_t v_{t-1} \qquad (9a)$$

$$\hat{x}_{t+1} = \eta\left(\hat{x}_t + A^{T}v_t; \lambda_t\right) \qquad (9b)$$

where $v_t$ is the residual of the estimate $\hat{x}_t$ of the signal at the $t$-th iteration ($t = 0, 1, 2, \ldots, T-1$), with initialization $\hat{x}_0 = 0$ and $v_{-1} = 0$, Onsager correction coefficient $b_t = \frac{1}{M}\|\hat{x}_t\|_0$, and threshold $\lambda_t = \frac{\alpha}{\sqrt{M}}\|v_t\|_2$;

the deep unfolding algorithm LAMP simplifies AMP into the following form:

$$v_t = y - A_t\hat{x}_t + b_t v_{t-1} \qquad (10a)$$

$$\hat{x}_{t+1} = \eta\left(\hat{x}_t + B_t v_t; \lambda_t\right) \qquad (10b)$$

equations (10a) and (10b) generalize the AMP iterations (9a) and (9b), where the matrices $A$ and $A^{T}$ appear at iteration $t$ as $A_t$ and $B_t$. To reduce the training parameters required by the LAMP network and accelerate signal processing without changing the characteristics of the algorithm, let $A_t = \beta_t A$; then only the scalar $\beta_t$ of $A_t$ varies with the iteration number $t$. The LAMP network parameters $\theta$ are self-learned and updated by training on the input data pairs $\{(y_d, x_d)\}_{d=1}^{D}$ to minimize the loss function $L(\theta)$ of formula (6);

in the TwDU method, the iterative process of $\hat{x}_{t+1}$ in TwDU-LAMP is

$$\hat{x}_{t+1} = \psi\,\eta\left(\hat{x}_t + B_t v_t; \lambda_t\right) + (1-\omega)\,\hat{x}_t + (\omega-\psi)\,\hat{x}_{t-1} \qquad (11)$$

the trainable parameters $\omega$ and $\psi$ are added on the basis of the original training parameters of LAMP to establish the relation between signals; through the deep learning technique, $\omega$ and $\psi$ adaptively link the first two signal estimates $\hat{x}_{t-1}$ and $\hat{x}_t$ with the current signal estimate $\hat{x}_{t+1}$, and this link between the signals increases the reconstruction speed and the reconstruction precision of the signal.
Further, in step S2, the steps of improving the TISTA algorithm with the TwDU algorithm are as follows:

TISTA is another deep unfolding form of the ISTA algorithm; its mathematical expression is

$$r_t = \hat{x}_t + \gamma_t W(y - A\hat{x}_t) \qquad (12a)$$

$$\hat{x}_{t+1} = \eta_{\mathrm{MMSE}}\left(r_t; \tau_t^2\right) \qquad (12b)$$

$$v_t^2 = \max\left\{\frac{\|y - A\hat{x}_t\|_2^2 - M\sigma^2}{\operatorname{tr}(A^{T}A)},\ \epsilon\right\} \qquad (12c)$$

$$\tau_t^2 = \frac{v_t^2}{N}\left(N + (\gamma_t^2 - 2\gamma_t)M\right) + \frac{\gamma_t^2\sigma^2}{N}\operatorname{tr}(WW^{T}) \qquad (12d)$$

where the matrix $W = A^{T}(AA^{T})^{-1}$ is the pseudo-inverse of the matrix $A$, $\sigma^2$ is the noise variance, and $\eta_{\mathrm{MMSE}}$ is the minimum mean square error (MMSE) estimator:

$$\eta_{\mathrm{MMSE}}(r; \tau^2) = \frac{r\alpha^2}{\xi} \cdot \frac{p\,\mathcal{F}(r; \xi)}{(1-p)\,\mathcal{F}(r; \tau^2) + p\,\mathcal{F}(r; \xi)} \qquad (13)$$

where $\xi = \alpha^2 + \tau^2$ and $\mathcal{F}(r; v) = \frac{1}{\sqrt{2\pi v}}\exp\left(-\frac{r^2}{2v}\right)$; $\alpha^2$ is the variance of the non-zero elements of the input signal, $\tau^2$ is the error variance, and $p$ is the probability that an element of the input signal is non-zero;

from (12) it can be seen that the signal error variance estimates $v_t^2$ and $\tau_t^2$ in the deep unfolding algorithm TISTA have a crucial influence on the final sparse signal estimate. The scalar variable $\gamma_t$ is a step-size parameter used to control and adjust the error variance; it is also the parameter to be trained with the deep learning technique, and its number equals the number of network layers; the training parameter of the TISTA algorithm is $\theta = [\gamma_t]$;

in the TwDU method, the iterative process of $\hat{x}_{t+1}$ in TwDU-TISTA is

$$\hat{x}_{t+1} = \psi\,\eta_{\mathrm{MMSE}}\left(r_t; \tau_t^2\right) + (1-\omega)\,\hat{x}_t + (\omega-\psi)\,\hat{x}_{t-1} \qquad (14)$$

where the trainable parameters are $\Theta = [\omega, \psi, \gamma_t]$; in TwDU-TISTA, $\omega$ and $\psi$ likewise adaptively link the first two signal estimates $\hat{x}_{t-1}$ and $\hat{x}_t$ with the current signal estimate $\hat{x}_{t+1}$, improving the convergence speed of the deep unfolding algorithm TISTA.
Further, in step S3, the signal processed by the sparsification of step S1 and the algorithm of step S2 is reconstructed: the estimate $\hat{x}$ of the signal output in step S2 is input into the mean square error loss function

$$L(\Theta) = \frac{1}{D}\sum_{d=1}^{D}\left\|\hat{x}(y_d; \Theta) - x_d\right\|_2^2,$$

the algorithm parameters are updated by the deep learning technique through the back-propagation mechanism based on gradient descent, and signal reconstruction and recovery are performed in combination with the above incremental training mode.
The invention has the following beneficial effects: the sparse signal reconstruction method based on two-step deep unfolding can make full use of the correlation between signals, and, for both one-dimensional signals in wireless communication and two-dimensional picture signals, the proposed method reconstructs the signal faster and with higher precision.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is the t-th iteration of the deep unfolding network LAMP;
FIG. 2 is a block diagram of the TwDU-LAMP algorithm;
FIG. 3 is the t-th iteration of the deep unfolding network TISTA;
FIG. 4 is a block diagram of the TwDU-TISTA algorithm;
FIG. 5 shows the NMSE comparison of the deep unfolding algorithms at SNR = 40 dB;
FIG. 6 shows the NMSE comparison of the deep unfolding algorithms at SNR = 30 dB;
FIG. 7 shows the NMSE comparison of the deep unfolding algorithms at SNR = 20 dB.
Detailed Description
The embodiments of the present invention are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the disclosure of this specification. The invention may also be implemented or applied through other different specific embodiments, and the details in this specification may be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present invention. It should be noted that the drawings provided in the following embodiments only illustrate the basic idea of the invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.

The drawings are for the purpose of illustrating the invention only and are not intended to limit it; to better illustrate the embodiments of the invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings, and descriptions thereof, may be omitted.

The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, it should be understood that terms indicating an orientation or positional relationship, such as "upper", "lower", "left", "right", "front" and "rear", are based on the orientation or positional relationship shown in the drawings, are only for convenience of describing the invention and simplifying the description, and do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation; therefore, the terms describing positional relationships in the drawings are only for illustrative purposes and are not to be construed as limiting the present invention, and the specific meaning of these terms can be understood by those skilled in the art according to specific situations.
The invention provides a sparse signal reconstruction method based on a two-step deep unfolding strategy, which comprises the following steps.

Step one, input signal sparsification.

In compressed sensing, let $s \in \mathbb{R}^{N}$ be the original signal vector; it can be a signal acquired by a sensor in an Internet of Things environment, a signal in wireless communication, or a two-dimensional picture signal acquired by various electronic devices. Let $\Phi \in \mathbb{R}^{M \times N}$ ($M < N$) be the observation matrix and $n \in \mathbb{R}^{M}$ a Gaussian white noise vector, so that $y = \Phi s + n$, where $y \in \mathbb{R}^{M}$ is the observation signal vector obtained by sampling $s$ through the observation matrix $\Phi$. Assuming the original signal vector $s$ can be sparsified under a known orthonormal basis $\Psi \in \mathbb{R}^{N \times N}$, i.e. $s = \Psi x$, where $x \in \mathbb{R}^{N}$, called the original sparse signal, is the sparse representation of $s$ in the new transform domain $\Psi$, and letting the sensing matrix $A = \Phi \Psi$, we get $y = Ax + n$. The goal of the compressed sensing reconstruction algorithm is to reconstruct, in this underdetermined linear system, the estimate $\hat{x}$ of the original sparse signal $x$ from the observation signal $y$, and then to recover the original signal through $\hat{s} = \Psi \hat{x}$. The common reconstruction algorithm models the sparse reconstruction problem as the following convex optimization problem:

$$\hat{x} = \arg\min_{x} \frac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_1 \qquad (1)$$

The embodiment of the invention is directed to solving the problem posed in (1).
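For illustration, the measurement model $y = Ax + n$ of (1) can be set up in a few lines of NumPy. This is a minimal sketch, not part of the patent; the dimensions follow the simulation settings reported below (N = 500, M = 250, $A_{i,j} \sim \mathcal{N}(0, 1/M)$), while the non-zero probability p and the SNR are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, p = 500, 250, 0.1   # signal length, number of measurements, non-zero probability (p assumed)

# Sensing matrix with i.i.d. N(0, 1/M) entries, as in the experiments below
A = rng.normal(0.0, np.sqrt(1.0 / M), size=(M, N))

# Bernoulli-Gaussian sparse signal x: each entry is non-zero with probability p
x = rng.standard_normal(N) * (rng.random(N) < p)

# Noisy observation y = Ax + n at an assumed SNR of 40 dB
snr_db = 40.0
signal_power = np.mean((A @ x) ** 2)
n = rng.normal(0.0, np.sqrt(signal_power / 10 ** (snr_db / 10)), size=M)
y = A @ x + n
```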
Step two, the TwDU algorithm processes the input signal. The sparsified signal y is input into the algorithm for processing.

The traditional reconstruction algorithm is deeply unfolded. The symbol $\Gamma(\cdot)$ denotes a traditional reconstruction algorithm model and the symbol $U(\cdot)$ denotes its deep unfolding model. Taking the ISTA algorithm as an example,

$$r_t = \hat{x}_t + \beta A^{T}(y - A\hat{x}_t) \qquad (2)$$

$$\hat{x}_{t+1} = \eta(r_t; \tau) \qquad (3)$$

where $\eta(\cdot; \tau)$ is the soft-threshold function with threshold $\tau$. Letting $\beta A^{T} = B$ and $I_N - BA = S$, we obtain:

$$\hat{x}_{t+1} = \eta(S\hat{x}_t + By; \tau) \qquad (4)$$

The above is the traditional ISTA algorithm; its improvement by the deep unfolding technique is expressed as

$$\hat{x}_{t+1} = U(\hat{x}_t) = \eta(S\hat{x}_t + By; \tau) \qquad (5)$$

Equation (5) expresses that the traditional algorithm ISTA is deeply unfolded into LISTA, and then the result of the next iteration is obtained. In the deep unfolding algorithm LISTA, the parameters $\theta = [B, S, \tau]$ are self-learned and updated through the deep learning technique by training on input data pairs $\{(y_d, x_d)\}_{d=1}^{D}$ to minimize the loss function (6):

$$L(\theta) = \frac{1}{D}\sum_{d=1}^{D}\left\|\hat{x}(y_d; \theta) - x_d\right\|_2^2 \qquad (6)$$
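As a concrete illustration of (5) and (6), the following PyTorch sketch implements one LISTA layer with B, S and τ as trainable parameters. It is a minimal sketch under the notation above, not the patent's implementation; the initialization values are assumptions.

```python
import torch
import torch.nn as nn

class LISTALayer(nn.Module):
    """One unfolded ISTA iteration (5): x_{t+1} = eta(S x_t + B y; tau)."""
    def __init__(self, A: torch.Tensor, beta: float = 1.0):
        super().__init__()
        M, N = A.shape
        B0 = beta * A.t()
        self.B = nn.Parameter(B0.clone())             # trainable B, initialized to beta * A^T
        self.S = nn.Parameter(torch.eye(N) - B0 @ A)  # trainable S, initialized to I - B A
        self.tau = nn.Parameter(torch.tensor(0.1))    # trainable soft threshold (initial value assumed)

    def forward(self, x_t: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        r = x_t @ self.S.t() + y @ self.B.t()         # S x_t + B y for batched row vectors
        return torch.sign(r) * torch.clamp(r.abs() - self.tau, min=0.0)  # soft thresholding eta(.; tau)
```

Stacking T such layers and training each layer's [B, S, τ] against the loss (6) reproduces the LISTA structure described above.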
Then, the depth expansion algorithm is subjected to two-step expansion. The two-step deep unfolding strategy is proposed by
Figure BDA0003359329200000087
Not only do the results of
Figure BDA0003359329200000088
And also depends on
Figure BDA0003359329200000089
I.e. the estimate of each depth expansion algorithm depends on the estimates of the previous two depth expansion algorithms, rather than only on the previous time, i.e. the existence of a correlation between the signals, which is formulated as:
Figure BDA00033593292000000810
in the formula (7), U Γ · does not refer to LISTA in particular, but refers to various depth expansion algorithms such as LAMP, TISTA, and the like. (7) The parameters to be trained by the deep learning technique are theta, omega, psi, theta U]Wherein θ U is a parameter to be trained in each deep expansion algorithm, and if the parameter is TWD-LISTA, θ U ═ B, S, τ]. In formula (7)
Figure BDA00033593292000000811
And
Figure BDA00033593292000000818
a two-step deep unfolding strategy is utilized in each evaluation process. In addition to this, the present invention is,
Figure BDA00033593292000000813
and
Figure BDA00033593292000000814
the training parameters are trained in the respective two-step deep development, and do not participate in the current training any more, so that the calculation burden can be reduced. The first two calculation results (
Figure BDA00033593292000000815
And
Figure BDA00033593292000000816
) The influence factors omega and psi for the current calculation result can be adaptively adjusted along with the characteristics of data by utilizing the powerful learning capability of deep learning. When let ω be 1.0, then
Figure BDA00033593292000000817
Figure BDA0003359329200000091
And
Figure BDA0003359329200000092
the coefficient term is 0, and the results of the previous two iterations have no influence on the current result. In this case, (8) and (5) are both common depth deployment schemes. In fact, the proposed solution (7) is more general and (5) can be seen as a special case of (7). However, as the iteration number of the algorithm is continuously increased, the parameters ω and ψ are continuously self-optimized through a back propagation mechanism in deep learning and finally stabilize to fluctuate slightly above and below the optimal value. At this time, the process of the present invention,
Figure BDA0003359329200000093
coefficient 1-omega and
Figure BDA0003359329200000094
the coefficients ω - ψ terms of (c) are no longer 0, and the performance of the algorithm is significantly improved, and the side shows the important influence of the previous two calculation results on the current result. The two-step depth expansion strategy fully utilizes the inherent characteristics between time sequence signals, and the estimation values of the former two-step depth expansion algorithm exert influence on the current result together according to different weights.
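The two-step update can then be written as a thin wrapper around any unfolded layer U. The sketch below instantiates (7) in the reconstructed form $\hat{x}_{t+1} = \psi U(\hat{x}_t) + (1-\omega)\hat{x}_t + (\omega-\psi)\hat{x}_{t-1}$, which matches the coefficient description above; treating ω and ψ per layer, and the initial values ω = ψ = 1.0 (the plain unfolding case (8)), are assumptions.

```python
import torch
import torch.nn as nn

class TwDULayer(nn.Module):
    """Two-step deep unfolding (7): x_{t+1} = psi*U(x_t) + (1-omega)*x_t + (omega-psi)*x_{t-1}."""
    def __init__(self, unfolded_layer: nn.Module):
        super().__init__()
        self.U = unfolded_layer                       # e.g. the LISTALayer sketched above
        self.omega = nn.Parameter(torch.tensor(1.0))  # omega = psi = 1.0 recovers (8)
        self.psi = nn.Parameter(torch.tensor(1.0))

    def forward(self, x_t, x_prev, y):
        x_next = (self.psi * self.U(x_t, y)
                  + (1.0 - self.omega) * x_t
                  + (self.omega - self.psi) * x_prev)
        return x_next, x_t                            # new estimate; x_t becomes the "previous" estimate
```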
Incremental training mode of the parameters. In the TwDU-based method, the values of the parameters $\Theta = [\omega, \psi, \theta(U(\cdot))]$ directly affect the reconstruction quality of the sparse signal, so the method of training $\Theta$ is extremely important. In the training process of the invention, a batch of data is first divided into H mini-batches and fed into the algorithm network, and the network loss value decreases gradually as the mini-batches are trained. When the training of one batch of data is completed, a new batch of data is fed into the network for training again. Multiple experiments verify that this incremental training method is very effective in adjusting $\Theta$ and improving network performance, because incremental training not only alleviates the gradient-vanishing problem but also further improves the generalization capability of the network. The training data are randomly generated data pairs $(x, y)$, where $y$, obtained from $y = Ax + n$, is the sparsely sampled feature data that the two-step unfolding network needs to learn, and $x$ is the sparse label data. The TwDU algorithm learns the data features batch by batch with a stochastic gradient descent algorithm and reconstructs the original sparse signal $x$. In the t-th round of training, the optimizer minimizes the objective function of this round,

$$L_t(\Theta) = \frac{1}{D}\sum_{d=1}^{D}\left\|\hat{x}_t(y_d; \Theta) - x_d\right\|_2^2,$$

by adjusting $\Theta$. After H mini-batches of data have been processed, the objective function of the optimizer becomes $L_{t+1}(\Theta)$. Although the objective function changes continuously from the first layer to the last layer during network training, the parameters $\Theta$ take the previous result as the initial value of the current round of training, giving a certain consistency. In this context, all experiments, including control experiments, use the incremental training method to control variables.
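The incremental training mode described above might be sketched as follows: the network truncated at depth t is trained on H mini-batches against the stage objective $L_t(\Theta)$, then the depth moves to t + 1 with the learned parameters serving as initial values. This is an illustrative reading of the scheme, not the patent's code; the Adam optimizer matches the experiments below, while the learning rate and the choice to keep earlier layers trainable (their previous values serving as initialization) are assumptions.

```python
import torch

def incremental_train(layers, batches, T, H, lr=1e-3):
    """Incremental (layer-by-layer) training: stage t minimizes the MSE of x_hat_t over H mini-batches.

    layers: list of TwDU layers; batches: an iterator yielding (y, x) mini-batch pairs.
    """
    for t in range(1, T + 1):
        params = [p for layer in layers[:t] for p in layer.parameters()]
        opt = torch.optim.Adam(params, lr=lr)          # Adam, as in the experiments below
        for i, (y, x) in enumerate(batches):
            if i >= H:                                 # H mini-batches per stage
                break
            x_t = x_prev = torch.zeros_like(x)
            for layer in layers[:t]:                   # forward through the first t TwDU layers
                x_t, x_prev = layer(x_t, x_prev, y)
            loss = torch.mean(torch.sum((x_t - x) ** 2, dim=1))  # stage-t objective L_t
            opt.zero_grad()
            loss.backward()
            opt.step()
```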
The TwDU-LAMP and TwDU-TISTA algorithms are taken as examples of the two-step unfolding treatment.
1) TwDU-LAMP. The AMP algorithm is a signal processing algorithm proposed in recent years that has attracted much attention for its fast convergence; its mathematical iteration is expressed as

$$v_t = y - A\hat{x}_t + b_t v_{t-1} \qquad (9a)$$

$$\hat{x}_{t+1} = \eta\left(\hat{x}_t + A^{T}v_t; \lambda_t\right) \qquad (9b)$$

where $v_t$ is the residual of the estimate $\hat{x}_t$ of the signal at the $t$-th iteration ($t = 0, 1, 2, \ldots, T-1$), with initialization $\hat{x}_0 = 0$ and $v_{-1} = 0$, Onsager correction coefficient $b_t = \frac{1}{M}\|\hat{x}_t\|_0$, and threshold $\lambda_t = \frac{\alpha}{\sqrt{M}}\|v_t\|_2$.

The deep unfolding algorithm LAMP simplifies AMP into the following form:

$$v_t = y - A_t\hat{x}_t + b_t v_{t-1} \qquad (10a)$$

$$\hat{x}_{t+1} = \eta\left(\hat{x}_t + B_t v_t; \lambda_t\right) \qquad (10b)$$

Equations (10a) and (10b) generalize the AMP iterations (9a) and (9b), where the matrices $A$ and $A^{T}$ appear at iteration $t$ as $A_t$ and $B_t$. To reduce the training parameters required by the LAMP network and accelerate signal processing, Borgerding and Schniter let $A_t = \beta_t A$ without changing the characteristics of the algorithm; then only the scalar $\beta_t$ of $A_t$ varies with the iteration number $t$. The LAMP network parameters $\theta$ are self-learned and updated by training on the input data pairs $\{(y_d, x_d)\}_{d=1}^{D}$ to minimize the loss function $L(\theta)$ of formula (6). The iterative process in (10) is deeply unfolded as shown in FIG. 1.
In the TwDU method, the iterative process of $\hat{x}_{t+1}$ in TwDU-LAMP is

$$\hat{x}_{t+1} = \psi\,\eta\left(\hat{x}_t + B_t v_t; \lambda_t\right) + (1-\omega)\,\hat{x}_t + (\omega-\psi)\,\hat{x}_{t-1} \qquad (11)$$

Only 2 trainable parameters, $\omega$ and $\psi$, are added on the basis of the original training parameters of LAMP, and the link between the signals is established. The block diagram of TwDU-LAMP is shown in FIG. 2. Through the deep learning technique, $\omega$ and $\psi$ adaptively link the first two signal estimates $\hat{x}_{t-1}$ and $\hat{x}_t$ with the current signal estimate $\hat{x}_{t+1}$, and this link between the signals increases the reconstruction speed and the reconstruction precision of the signal.
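For concreteness, one LAMP-style layer in the sense of (10a) and (10b) might look as follows in PyTorch. This is a minimal sketch under the reconstructed formulas above (the Onsager coefficient $b_t = \|\hat{x}_t\|_0/M$ and threshold $\lambda_t = (\alpha/\sqrt{M})\|v_t\|_2$ follow the LAMP literature, and the initializations are assumptions). Combining it with the TwDULayer wrapper sketched earlier, with the residual v_t threaded through alongside the estimates, yields the TwDU-LAMP update (11).

```python
import torch
import torch.nn as nn

class LAMPLayer(nn.Module):
    """One unfolded AMP iteration (10): v_t = y - A_t x_t + b_t v_{t-1}; x_{t+1} = eta(x_t + B_t v_t; lambda_t)."""
    def __init__(self, A: torch.Tensor):
        super().__init__()
        M, N = A.shape
        self.register_buffer("A", A.clone())
        self.beta = nn.Parameter(torch.tensor(1.0))    # A_t = beta_t * A
        self.B = nn.Parameter(A.t().clone())           # B_t, initialized to A^T
        self.alpha = nn.Parameter(torch.tensor(1.0))   # threshold scale alpha_t
        self.M = M

    def forward(self, x_t, v_prev, y):
        b_t = (x_t != 0).float().sum(dim=1, keepdim=True) / self.M        # Onsager coefficient ||x_t||_0 / M
        v_t = y - self.beta * (x_t @ self.A.t()) + b_t * v_prev           # corrected residual (10a)
        lam = self.alpha * v_t.norm(dim=1, keepdim=True) / self.M ** 0.5  # lambda_t = alpha/sqrt(M) * ||v_t||_2
        r = x_t + v_t @ self.B.t()
        return torch.sign(r) * torch.clamp(r.abs() - lam, min=0.0), v_t   # x_{t+1} per (10b), residual for next layer
```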
2) TwDU-TISTA. TISTA is another deep unfolding form of the ISTA algorithm; its mathematical expression is

$$r_t = \hat{x}_t + \gamma_t W(y - A\hat{x}_t) \qquad (12a)$$

$$\hat{x}_{t+1} = \eta_{\mathrm{MMSE}}\left(r_t; \tau_t^2\right) \qquad (12b)$$

$$v_t^2 = \max\left\{\frac{\|y - A\hat{x}_t\|_2^2 - M\sigma^2}{\operatorname{tr}(A^{T}A)},\ \epsilon\right\} \qquad (12c)$$

$$\tau_t^2 = \frac{v_t^2}{N}\left(N + (\gamma_t^2 - 2\gamma_t)M\right) + \frac{\gamma_t^2\sigma^2}{N}\operatorname{tr}(WW^{T}) \qquad (12d)$$

where the matrix $W = A^{T}(AA^{T})^{-1}$ is the pseudo-inverse of the matrix $A$ and $\sigma^2$ is the noise variance. $\eta_{\mathrm{MMSE}}$ is the minimum mean square error (MMSE) estimator,

$$\eta_{\mathrm{MMSE}}(r; \tau^2) = \frac{r\alpha^2}{\xi} \cdot \frac{p\,\mathcal{F}(r; \xi)}{(1-p)\,\mathcal{F}(r; \tau^2) + p\,\mathcal{F}(r; \xi)} \qquad (13)$$

where $\xi = \alpha^2 + \tau^2$ and $\mathcal{F}(r; v) = \frac{1}{\sqrt{2\pi v}}\exp\left(-\frac{r^2}{2v}\right)$; $\alpha^2$ is the variance of the non-zero elements of the input signal, $\tau^2$ is the error variance, and $p$ is the probability that an element of the input signal is non-zero. From (12) it can be seen that the signal error variance estimates $v_t^2$ and $\tau_t^2$ in the deep unfolding algorithm TISTA have a crucial influence on the final sparse signal estimate. The scalar variable $\gamma_t$ is a step-size parameter used to control and adjust the error variance; it is also the parameter to be trained with the deep learning technique, and its number equals the number of network layers. The training parameter of the TISTA algorithm is $\theta = [\gamma_t]$. The iterative process in (12) is deeply unfolded as shown in FIG. 3.
In the TwDU method, the iterative process of $\hat{x}_{t+1}$ in TwDU-TISTA is

$$\hat{x}_{t+1} = \psi\,\eta_{\mathrm{MMSE}}\left(r_t; \tau_t^2\right) + (1-\omega)\,\hat{x}_t + (\omega-\psi)\,\hat{x}_{t-1} \qquad (14)$$

where the trainable parameters are $\Theta = [\omega, \psi, \gamma_t]$. The block diagram of TwDU-TISTA is shown in FIG. 4. In TwDU-TISTA, $\omega$ and $\psi$ likewise adaptively link the first two signal estimates $\hat{x}_{t-1}$ and $\hat{x}_t$ with the current signal estimate $\hat{x}_{t+1}$ through the deep learning technique, improving the convergence speed of the deep unfolding algorithm TISTA.
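The MMSE shrinkage (13) is elementwise and can be written directly; the sketch below is a NumPy illustration of the reconstructed formula, with the prior parameters $\alpha^2$ and p supplied by the caller (the default values are assumptions).

```python
import numpy as np

def eta_mmse(r, tau2, alpha2=1.0, p=0.1):
    """Elementwise MMSE shrinkage (13) for a Bernoulli-Gaussian prior (non-zero variance alpha2, probability p)."""
    xi = alpha2 + tau2
    gauss = lambda z, v: np.exp(-z ** 2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)
    num = p * gauss(r, xi)
    den = (1.0 - p) * gauss(r, tau2) + p * gauss(r, xi)
    return (r * alpha2 / xi) * (num / den)
```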
Step three, signal reconstruction and recovery.

The signal processed by the sparsification of step one and the algorithm of step two is reconstructed. The estimate $\hat{x}$ of the signal output in step two is input into the mean square error loss function

$$L(\Theta) = \frac{1}{D}\sum_{d=1}^{D}\left\|\hat{x}(y_d; \Theta) - x_d\right\|_2^2,$$

the algorithm parameters are updated by the deep learning technique through the back-propagation mechanism based on gradient descent, and signal reconstruction and recovery are performed in combination with the incremental training mode introduced above.
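Putting the sketches together, an assumed end-to-end flow (build a T-layer TwDU-LISTA network, train it incrementally, reconstruct) might read as follows; all class and function names come from the sketches above, and the batch size, H and SNR are assumptions.

```python
import torch

T, M, N, p = 12, 250, 500, 0.1
A = torch.randn(M, N) / M ** 0.5
layers = [TwDULayer(LISTALayer(A)) for _ in range(T)]   # T two-step unfolded layers

def make_batch(batch=64, snr_db=40.0):
    """Random Bernoulli-Gaussian training pairs (y, x) with y = A x + n."""
    x = torch.randn(batch, N) * (torch.rand(batch, N) < p).float()
    y0 = x @ A.t()
    n = torch.randn_like(y0) * y0.pow(2).mean().sqrt() / 10 ** (snr_db / 20)
    return y0 + n, x

batches = (make_batch() for _ in range(10 * T * 200))   # stream of mini-batches
incremental_train(layers, batches, T=T, H=200)          # H assumed

# Reconstruction: run an observation through the trained network
y, x_true = make_batch(batch=1)
x_t = x_prev = torch.zeros(1, N)
with torch.no_grad():
    for layer in layers:
        x_t, x_prev = layer(x_t, x_prev, y)
x_hat = x_t                                             # estimate of the original sparse signal
```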
The method is applicable not only to the above cases but also to other deep unfolding algorithms. The pseudo code of the algorithm of the invention is shown in Table 1.

TABLE 1: Two-step deep unfolding strategy reconstruction algorithm flow (the pseudo-code table is rendered as an image in the original publication)
To illustrate the feasibility and advantages of the invention, simulation experiments are performed. The experimental system is deployed on a Linux platform, uses the PyTorch 1.5.1 deep learning framework, and adopts the Adam optimizer. In the experiments, one-dimensional sparse signals obeying a Bernoulli-Gaussian distribution are used as simulation input signals, and the normalized mean square error (NMSE) is used as the criterion to measure the performance of each deep reconstruction algorithm. The length of the simulated sparse signal is N = 500, and the dimensions of the observation matrix A are M = 250 and N = 500. Each element of the matrix A follows a Gaussian distribution with mean 0 and variance 1/M, i.e. $A_{i,j} \sim \mathcal{N}(0, 1/M)$.
FIG. 5 shows the normalized mean square error (NMSE) of LISTA, TwDU-LISTA, TISTA, TwDU-TISTA, LAMP and TwDU-LAMP, each run for 12 iterations at an SNR of 40 dB. The NMSE is calculated as follows:

$$\mathrm{NMSE} = 10\log_{10}\frac{\|\hat{x} - x\|_2^2}{\|x\|_2^2} \qquad (15)$$
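In code, (15) is a one-liner; a NumPy version for reference:

```python
import numpy as np

def nmse_db(x_hat, x):
    """Normalized mean square error (15) in dB."""
    return 10.0 * np.log10(np.sum((x_hat - x) ** 2) / np.sum(x ** 2))
```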
as can be observed from fig. 5, the reconstruction algorithm based on the two-step depth expansion strategy can better reconstruct sparse signals compared with the common depth expansion algorithm. The initial phase of TwDU-lisa and LISTA has similar NMSE, and with the number of iterations increasing, the TwDU-lisa has about 9dB and 6dB gains compared to LISTA at t 9 and t 12, respectively. The tissta converged in the first 12 iterations with approximately-42 dB NMSE, whereas TwDU-tissta has reached-42 dB at t-8, which provides faster convergence speed than t-10 at the time of tissta convergence. Meanwhile, the NMSE performance of the TwDU-LAMP in the first 12 iterations is better than that of LAMP, and the convergence rate of the TwDU-LAMP is 2 periods earlier than that of LAMP.
FIG. 6 and FIG. 7 show the NMSE comparison of the deep unfolding algorithms at signal-to-noise ratios of 30 dB and 20 dB, respectively. It can be clearly observed that the reconstruction capability of each deep unfolding algorithm degrades as the signal-to-noise ratio decreases. Even so, the reconstruction algorithms based on the TwDU strategy still reconstruct sparse signals better than the ordinary deep unfolding algorithms. At an SNR of 30 dB, TwDU-LISTA gains at most 0.6 dB over LISTA, TwDU-TISTA at most 4.6 dB over TISTA, and TwDU-LAMP at most 3.9 dB over LAMP. At an SNR of 20 dB, the performance differences among the deep unfolding reconstruction algorithms shrink because of severe noise pollution, and the gain of the two-step strategy over the ordinary deep unfolding algorithms is within 1 dB.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (9)

1. A sparse signal reconstruction method based on a two-step deep unfolding strategy, characterized in that the method comprises the following steps:
S1: sparsifying the input signal;
S2: improving the deep unfolding models of traditional sparse signal reconstruction algorithms with the TwDU algorithm;
S3: reconstructing and recovering the original sparse signal with the improved sparse signal reconstruction algorithm.
2. The sparse signal reconstruction method based on the two-step deep unfolding strategy according to claim 1, wherein step S1 specifically comprises:

in compressed sensing, let $s \in \mathbb{R}^{N}$ be the original signal vector, $\Phi \in \mathbb{R}^{M \times N}$ ($M < N$) the observation matrix, and $n \in \mathbb{R}^{M}$ a Gaussian white noise vector, so that $y = \Phi s + n$, where $y \in \mathbb{R}^{M}$ is the observation signal vector obtained by sampling $s$ through the observation matrix $\Phi$; assuming the original signal vector $s$ can be sparsified under a known orthonormal basis $\Psi \in \mathbb{R}^{N \times N}$, i.e. $s = \Psi x$, where $x \in \mathbb{R}^{N}$, called the original sparse signal, is the sparse representation of $s$ in the new transform domain $\Psi$; letting the sensing matrix $A = \Phi \Psi$, we get $y = Ax + n$.
3. The sparse signal reconstruction method based on the two-step deep unfolding strategy according to claim 2, wherein in step S1 the traditional reconstruction algorithm models the sparse reconstruction problem as the following convex optimization problem:

$$\hat{x} = \arg\min_{x} \frac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_1 \qquad (1)$$
4. The sparse signal reconstruction method based on the two-step deep unfolding strategy according to claim 1, wherein in step S2 the deep unfolding models of traditional sparse signal reconstruction algorithms include LISTA, LAMP and TISTA.
5. The sparse signal reconstruction method based on the two-step deep unfolding strategy according to claim 4, wherein in step S2 the steps of improving the LISTA algorithm with the TwDU algorithm are as follows:

the ISTA algorithm is deeply unfolded; the symbol $\Gamma(\cdot)$ denotes the ISTA model and the symbol $U(\cdot)$ denotes the deep unfolding model of ISTA; ISTA is re-described as:

$$r_t = \hat{x}_t + \beta A^{T}(y - A\hat{x}_t) \qquad (2)$$

$$\hat{x}_{t+1} = \eta(r_t; \tau) \qquad (3)$$

letting $\beta A^{T} = B$ and $I_N - BA = S$ yields:

$$\hat{x}_{t+1} = \eta(S\hat{x}_t + By; \tau) \qquad (4)$$

the improvement of ISTA by the deep unfolding technique is denoted as

$$\hat{x}_{t+1} = U(\hat{x}_t) = \eta(S\hat{x}_t + By; \tau) \qquad (5)$$

formula (5) expresses that the traditional algorithm ISTA is deeply unfolded into LISTA to obtain the result of the next iteration; in the deep unfolding algorithm LISTA, the parameters $\theta = [B, S, \tau]$ are self-learned and updated through the deep learning technique by training on input data pairs $\{(y_d, x_d)\}_{d=1}^{D}$ to minimize the loss function (6):

$$L(\theta) = \frac{1}{D}\sum_{d=1}^{D}\left\|\hat{x}(y_d; \theta) - x_d\right\|_2^2 \qquad (6)$$

then the deep unfolding algorithm is unfolded in two steps, formulated as:

$$\hat{x}_{t+1} = \psi\,U(\hat{x}_t) + (1-\omega)\,\hat{x}_t + (\omega-\psi)\,\hat{x}_{t-1} \qquad (7)$$

in formula (7), $U(\cdot)$ refers to various deep unfolding algorithms, including LISTA, LAMP and TISTA; the parameters to be trained through the deep learning technique in formula (7) are $\Theta = [\omega, \psi, \theta(U(\cdot))]$, where $\theta(U(\cdot))$ denotes the parameters to be trained in the respective deep unfolding algorithm; for TwDU-LISTA, $\theta(U(\cdot)) = [B, S, \tau]$; in formula (7), $\hat{x}_t$ and $\hat{x}_{t-1}$ themselves used the two-step deep unfolding strategy in their respective estimation processes; the training parameters of $\hat{x}_t$ and $\hat{x}_{t-1}$ were already trained in their respective two-step deep unfoldings and no longer participate in the current training; the influence factors $\omega$ and $\psi$ of the first two results $\hat{x}_t$ and $\hat{x}_{t-1}$ on the current result are adjusted adaptively to the characteristics of the data using the strong learning capability of deep learning; when $\omega = \psi = 1.0$, then

$$\hat{x}_{t+1} = U(\hat{x}_t) \qquad (8)$$

the coefficient terms of $\hat{x}_t$ and $\hat{x}_{t-1}$ are 0 and the results of the previous two iterations have no influence on the current result; in this case, formula (8), like formula (5), is an ordinary deep unfolding scheme, and formula (5) is considered a special case of formula (7); as the number of iterations of the algorithm increases, the parameters $\omega$ and $\psi$ continuously self-optimize through the back-propagation mechanism of deep learning and finally stabilize with small fluctuations around an optimal value, at which point the coefficient $1-\omega$ of $\hat{x}_t$ and the coefficient $\omega-\psi$ of $\hat{x}_{t-1}$ are no longer 0.
6. The sparse signal reconstruction method based on the two-step deep unfolding strategy according to claim 5, wherein: in the training process, a batch of data is first divided into H mini-batches and fed into the algorithm network, and the network loss value decreases gradually as the mini-batches are trained; when the training of one batch of data is completed, a new batch of data is fed into the network for training again; the training data are randomly generated data pairs $(x, y)$, where $y$, obtained from $y = Ax + n$, is the sparsely sampled feature data that the two-step unfolding network needs to learn, and $x$ is the sparse label data; the TwDU algorithm learns the data features batch by batch with a stochastic gradient descent algorithm to reconstruct the original sparse signal $x$; in the t-th round of training, the optimizer minimizes the objective function of this round,

$$L_t(\Theta) = \frac{1}{D}\sum_{d=1}^{D}\left\|\hat{x}_t(y_d; \Theta) - x_d\right\|_2^2,$$

by adjusting $\Theta$; after H mini-batches of data have been processed, the objective function of the optimizer becomes $L_{t+1}(\Theta)$; in each round of training, the parameters $\Theta$ take the result of the previous round as the initial value of the current round.
7. The sparse signal reconstruction method based on the two-step deep unfolding strategy according to claim 4, wherein in step S2 the steps of improving the LAMP algorithm with the TwDU algorithm are as follows:

the mathematical iteration of the AMP algorithm is expressed as

$$v_t = y - A\hat{x}_t + b_t v_{t-1} \qquad (9a)$$

$$\hat{x}_{t+1} = \eta\left(\hat{x}_t + A^{T}v_t; \lambda_t\right) \qquad (9b)$$

where $v_t$ is the residual of the estimate $\hat{x}_t$ of the signal at the $t$-th iteration ($t = 0, 1, 2, \ldots, T-1$), with initialization $\hat{x}_0 = 0$ and $v_{-1} = 0$, Onsager correction coefficient $b_t = \frac{1}{M}\|\hat{x}_t\|_0$, and threshold $\lambda_t = \frac{\alpha}{\sqrt{M}}\|v_t\|_2$;

the deep unfolding algorithm LAMP simplifies AMP into the following form:

$$v_t = y - A_t\hat{x}_t + b_t v_{t-1} \qquad (10a)$$

$$\hat{x}_{t+1} = \eta\left(\hat{x}_t + B_t v_t; \lambda_t\right) \qquad (10b)$$

equations (10a) and (10b) generalize the AMP iterations (9a) and (9b), where the matrices $A$ and $A^{T}$ appear at iteration $t$ as $A_t$ and $B_t$; let $A_t = \beta_t A$, so that only the scalar $\beta_t$ of $A_t$ varies with the iteration number $t$; the LAMP network parameters $\theta$ are self-learned and updated by training on the input data pairs $\{(y_d, x_d)\}_{d=1}^{D}$ to minimize the loss function $L(\theta)$ of formula (6);

in the TwDU method, the iterative process of $\hat{x}_{t+1}$ in TwDU-LAMP is

$$\hat{x}_{t+1} = \psi\,\eta\left(\hat{x}_t + B_t v_t; \lambda_t\right) + (1-\omega)\,\hat{x}_t + (\omega-\psi)\,\hat{x}_{t-1} \qquad (11)$$

the trainable parameters $\omega$ and $\psi$ are added on the basis of the original training parameters of LAMP to establish the relation between signals; through the deep learning technique, $\omega$ and $\psi$ adaptively link the first two signal estimates $\hat{x}_{t-1}$ and $\hat{x}_t$ with the current signal estimate $\hat{x}_{t+1}$.
8. The sparse signal reconstruction method based on the two-step deep unfolding strategy according to claim 4, wherein in step S2 the steps of improving the TISTA algorithm with the TwDU algorithm are as follows:

TISTA is another deep unfolding form of the ISTA algorithm; its mathematical expression is

$$r_t = \hat{x}_t + \gamma_t W(y - A\hat{x}_t) \qquad (12a)$$

$$\hat{x}_{t+1} = \eta_{\mathrm{MMSE}}\left(r_t; \tau_t^2\right) \qquad (12b)$$

$$v_t^2 = \max\left\{\frac{\|y - A\hat{x}_t\|_2^2 - M\sigma^2}{\operatorname{tr}(A^{T}A)},\ \epsilon\right\} \qquad (12c)$$

$$\tau_t^2 = \frac{v_t^2}{N}\left(N + (\gamma_t^2 - 2\gamma_t)M\right) + \frac{\gamma_t^2\sigma^2}{N}\operatorname{tr}(WW^{T}) \qquad (12d)$$

where the matrix $W = A^{T}(AA^{T})^{-1}$ is the pseudo-inverse of the matrix $A$, $\sigma^2$ is the noise variance, and $\eta_{\mathrm{MMSE}}$ is the minimum mean square error (MMSE) estimator:

$$\eta_{\mathrm{MMSE}}(r; \tau^2) = \frac{r\alpha^2}{\xi} \cdot \frac{p\,\mathcal{F}(r; \xi)}{(1-p)\,\mathcal{F}(r; \tau^2) + p\,\mathcal{F}(r; \xi)} \qquad (13)$$

where $\xi = \alpha^2 + \tau^2$ and $\mathcal{F}(r; v) = \frac{1}{\sqrt{2\pi v}}\exp\left(-\frac{r^2}{2v}\right)$; $\alpha^2$ is the variance of the non-zero elements of the input signal, $\tau^2$ is the error variance, and $p$ is the probability that an element of the input signal is non-zero; the scalar variable $\gamma_t$ is a step-size parameter used to control and adjust the error variance; it is also the parameter to be trained with the deep learning technique, and its number equals the number of network layers; the training parameter of the TISTA algorithm is $\theta = [\gamma_t]$;

in the TwDU method, the iterative process of $\hat{x}_{t+1}$ in TwDU-TISTA is

$$\hat{x}_{t+1} = \psi\,\eta_{\mathrm{MMSE}}\left(r_t; \tau_t^2\right) + (1-\omega)\,\hat{x}_t + (\omega-\psi)\,\hat{x}_{t-1} \qquad (14)$$

where the trainable parameters are $\Theta = [\omega, \psi, \gamma_t]$; in TwDU-TISTA, $\omega$ and $\psi$ adaptively link the first two signal estimates $\hat{x}_{t-1}$ and $\hat{x}_t$ with the current signal estimate $\hat{x}_{t+1}$.
9. The sparse signal reconstruction method based on the two-step deep unfolding strategy according to claim 6, wherein: in step S3, the signal processed by the sparsification of step S1 and the algorithm of step S2 is reconstructed; the estimate $\hat{x}$ of the signal output in step S2 is input into the mean square error loss function

$$L(\Theta) = \frac{1}{D}\sum_{d=1}^{D}\left\|\hat{x}(y_d; \Theta) - x_d\right\|_2^2,$$

the algorithm parameters are updated by the deep learning technique through the back-propagation mechanism based on gradient descent, and signal reconstruction and recovery are performed in combination with the incremental training mode.
CN202111374559.7A 2021-11-17 2021-11-17 Sparse signal reconstruction method based on two-step deep unfolding strategy Pending CN114050832A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111374559.7A CN114050832A (en) 2021-11-17 2021-11-17 Sparse signal reconstruction method based on two-step deep unfolding strategy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111374559.7A CN114050832A (en) 2021-11-17 2021-11-17 Sparse signal reconstruction method based on two-step deep unfolding strategy

Publications (1)

Publication Number Publication Date
CN114050832A true CN114050832A (en) 2022-02-15

Family

ID=80210022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111374559.7A Pending CN114050832A (en) 2021-11-17 Sparse signal reconstruction method based on two-step deep unfolding strategy

Country Status (1)

Country Link
CN (1) CN114050832A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150287223A1 (en) * 2014-04-04 2015-10-08 The Board Of Trustees Of The University Of Illinois Highly accelerated imaging and image reconstruction using adaptive sparsifying transforms
US20180240219A1 (en) * 2017-02-22 2018-08-23 Siemens Healthcare Gmbh Denoising medical images by learning sparse image representations with a deep unfolding approach
CN112234994A (en) * 2020-09-29 2021-01-15 西南石油大学 Compressed sensing reconstruction signal processing method and system, computer equipment and application
CN113271269A (en) * 2021-04-22 2021-08-17 重庆邮电大学 Sparsity self-adaptive channel estimation method based on compressed sensing
CN113300714A (en) * 2021-04-23 2021-08-24 北京工业大学 Joint sparse signal dimension reduction gradient tracking reconstruction algorithm based on compressed sensing theory
CN113222812A (en) * 2021-06-02 2021-08-06 北京大学深圳研究生院 Image reconstruction method based on information flow reinforced deep expansion network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DI YOU et al.: "ISTA-NET++: Flexible Deep Unfolding Network for Compressive Sensing", 2021 IEEE International Conference on Multimedia and Expo (ICME), 9 June 2021 (2021-06-09), pages 1-6 *
LIU, ZHENYU: "Research on Channel State Information Feedback Based on Deep Learning for Massive MIMO Systems", China Doctoral Dissertations Full-text Database, Information Science and Technology, 15 January 2021 (2021-01-15), pages 136-232 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination