CN114050832A - Sparse signal reconstruction method based on two-step depth expansion strategy - Google Patents
- Publication number
- CN114050832A (application CN202111374559.7A)
- Authority
- CN
- China
- Prior art keywords
- algorithm
- signal
- training
- sparse
- depth expansion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention relates to a sparse signal reconstruction method based on a two-step depth expansion strategy, which belongs to the technical field of signal reconstruction and comprises the following steps: S1: sparsifying the input signal; S2: improving the depth expansion model of a traditional sparse signal reconstruction algorithm by using the TwDU algorithm; S3: reconstructing and recovering the original sparse signal by using the improved sparse signal reconstruction algorithm. The method makes full use of the correlation among signals and, both for one-dimensional signals in wireless communication and for two-dimensional picture signals, reconstructs the signals faster and with higher reconstruction precision.
Description
Technical Field
The invention belongs to the technical field of signal reconstruction, and relates to a sparse signal reconstruction method based on a two-step depth expansion strategy.
Background
Compressed Sensing (CS) refers to obtaining discrete samples of an original signal, i.e. a sparse signal, by means of a sampling matrix at a rate far below the Nyquist sampling rate, and then reconstructing the original signal from these samples through a nonlinear reconstruction algorithm. Many excellent reconstruction algorithms have been proposed to reconstruct the original signal efficiently and with high accuracy.
In recent years, deep learning technology has strongly influenced the research and design of compressed-sensing sparse signal reconstruction algorithms owing to its powerful feature learning capability. This work falls into two main categories. The first is data-driven methods, which fit the data structure by adjusting a neural network model. Compared with traditional algorithms, data-driven methods have certain advantages: (1) they reduce the dependence on signal sparsity; (2) they improve the accuracy of signal reconstruction. However, because they mainly adopt generic neural network models, they also have the following disadvantages: (1) the network architecture is usually generic, so the model lacks interpretability and is not highly stable; (2) during network training, such networks often need large batches of signal samples and place high demands on the computing power and memory of the platform. The second category is model-driven methods, which combine the performance guarantees of traditional algorithms with the advantages of neural network models; they are widely applied in wireless communication, image processing and other fields, and are collectively called Deep Unfolding. Deep unfolding specifically refers to unrolling the iterative process of a traditional algorithm into a new layered structure similar to a neural network. These layers contain trainable parameter variables, which are trained in a supervised learning manner; the algorithm parameters are updated with a back propagation mechanism based on gradient descent. The deep unfolding method thus makes full use of both the powerful learning capability of deep learning technology and the deterministic internal structure of the traditional algorithm, endowing the traditional algorithm with learning capability.
Gregor and LeCun first applied this method to propose the Learned ISTA (LISTA) network model, which abstracts the threshold and matrix variables of the ISTA algorithm into trainable network parameters and obtains better performance than ISTA. Borgerding and Schniter abstracted some parameters and the threshold in the Onsager correction term of AMP into trainable network parameters and proposed the Learned AMP (LAMP) network model, which outperforms both LISTA and AMP. Recently, Ito and Takabe introduced an MMSE estimator into ISTA and proposed the Trainable ISTA (TISTA) deep unfolding model, which converges faster than LISTA and LAMP with a very small number of training parameters. Compared with the first category, deep unfolding algorithms have the following advantages: (1) high stability and guaranteed performance under the constraints of the traditional algorithm; (2) most deep unfolding algorithms have fewer parameters to train and therefore require fewer training samples; (3) deep unfolding algorithms are generally intuitive and interpretable, with low computing power and memory requirements.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a sparse signal reconstruction method based on a Two-step Deep Unfolding strategy (TwDU). During deep unfolding, TwDU makes full use of the correlation between signals by assigning different weights to the previous two estimates, which jointly determine the current result. The method establishes a dependency relationship between signals that is not fixed but adapts to changes in the data, thereby improving the reconstruction accuracy and convergence speed of compressed-sensing sparse signal reconstruction.
In order to achieve the purpose, the invention provides the following technical scheme:
a sparse signal reconstruction method based on a two-step depth expansion strategy comprises the following steps:
s1: sparsifying the input signal;
s2: improving a depth expansion model of a traditional sparse signal reconstruction algorithm by using a TwDU algorithm;
s3: and reconstructing and recovering the original sparse signal by using the improved sparse signal reconstruction algorithm.
Further, in step S1, the method specifically includes:
in compressed sensing, let s ∈ R^N be the original signal vector, Φ ∈ R^{M×N} (M ≪ N) the observation matrix, and n ∈ R^M a white Gaussian noise vector, with y = Φs + n, where y ∈ R^M is the observation signal vector obtained by sampling s through the observation matrix Φ; assume the original signal vector s can be sparsified under a known orthonormal basis Ψ ∈ R^{N×N}, i.e. s = Ψx, where x ∈ R^N, called the original sparse signal, is the sparse representation of the original signal vector s in the new transform domain Ψ; letting the sensing matrix A = ΦΨ yields y = Ax + n.
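As a rough illustration (not part of the patent text), the measurement model y = Ax + n can be simulated as follows; the dimensions, sparsity level p and noise scale below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 256                    # far fewer measurements than signal entries
p = 0.1                           # probability that an entry of x is non-zero

x = rng.standard_normal(N) * (rng.random(N) < p)   # original sparse signal
A = rng.standard_normal((M, N)) / np.sqrt(M)       # stand-in sensing matrix A = Phi Psi
n = 0.01 * rng.standard_normal(M)                  # white Gaussian noise
y = A @ x + n                                      # observation vector
```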
Further, in step S1, the conventional reconstruction algorithm models the sparse reconstruction problem as the following convex optimization problem:

min_x (1/2)·‖y − Ax‖_2² + λ·‖x‖_1   (1)

where λ > 0 is a regularization parameter balancing the data-fidelity and sparsity terms.
further, in step S2, the depth expansion model of the conventional sparse signal reconstruction algorithm includes LISTA, LAMP, and TISTA.
Further, in step S2, the steps of improving the LISTA algorithm by using the TwDU algorithm are as follows:
and (3) carrying out depth expansion on the ISTA algorithm, adopting a symbol gamma (·) to represent an ISTA model, adopting a symbol U · to represent a depth expansion model for the ISTA, and representing the renewed description of the ISTA as:
let beta AT=B,IN-BA ═ S, yielding:
improvement of ISTA using depth unwrapping technique is denoted as
The formula (5) shows that the depth of the traditional algorithm ISTA is expanded into LISTA, and then the result of the next iteration is obtained; in the depth expansion algorithm LISTA, the parameter θ ═ B, S, τ]Training input data pairs through deep learning techniquesMinimizing a secondary loss function (6) to realize self-learning and updating:
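A minimal numerical sketch of the LISTA-style update x̂_{t+1} = η_τ(B·y + S·x̂_t) with fixed B, S and τ (illustrative only; in LISTA these quantities become per-layer trained parameters):

```python
import numpy as np

def soft_threshold(v, tau):
    # eta_tau(v): component-wise soft-thresholding operator
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, beta, tau, iters=100):
    # Fixed-point iteration x_{t+1} = eta_tau(B y + S x_t), B = beta A^T, S = I - B A
    B = beta * A.T
    S = np.eye(A.shape[1]) - B @ A
    x_hat = np.zeros(A.shape[1])
    for _ in range(iters):
        x_hat = soft_threshold(B @ y + S @ x_hat, tau)
    return x_hat
```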
then, performing two-step expansion on the depth expansion algorithm; the two-step deep unfolding strategy is proposed byNot only do the results ofAnd also depends onI.e. the estimate of each depth expansion algorithm depends on the estimates of the previous two depth expansion algorithms, rather than only on the previous time, i.e. the existence of a correlation between the signals, which is formulated as:
u Γ · is a generalized depth expansion algorithm in equation (7), the packageIncluding LITSTA, LAMP, TISTA; (7) the parameters to be trained by the deep learning technique are theta, omega, psi, theta U]Wherein θ U is a parameter to be trained in each deep expansion algorithm, and if the parameter is TWD-LISTA, θ U ═ B, S, τ](ii) a In formula (7)Andtwo-step deep development strategies are utilized in respective evaluation processes;andthe training parameters are trained in the respective two-step deep development, and do not participate in the current training any more, so that the calculation burden can be reduced. The results of the first two calculationsAndthe influence factors omega and psi of the current calculation result can be adaptively adjusted along with the characteristics of data by utilizing the strong learning capacity of deep learning; when let ω be 1.0, then
Andthe coefficient item is 0, and the previous two iteration results have no influence on the current result; in this case, (8) and (5) are both common depth deployment schemes. In fact, the proposed solution (7) is of more general significance,(5) this can be seen as a special case of (7). However, as the iteration times of the algorithm are continuously increased, the parameters ω and ψ are continuously self-optimized through a back propagation mechanism in the deep learning and finally stabilized to be small fluctuation above and below an optimal value, at this time,coefficient 1-omega andthe coefficients ω - ψ terms of (c) are no longer 0, and the performance of the algorithm is significantly improved, and the side shows the important influence of the previous two calculation results on the current result. The two-step depth expansion strategy fully utilizes the inherent characteristics between time sequence signals, and the estimation values of the former two-step depth expansion algorithm exert influence on the current result together according to different weights.
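Since the published formula images are not reproduced in this text, the following is a hypothetical sketch of a two-step combination consistent with the coefficients (1 − ω) and (ω − ψ) named in the description; the function name and signature are our own:

```python
import numpy as np

def twdu_step(unfold, x_t, x_tm1, omega, psi):
    """Hypothetical two-step combination: the next estimate mixes the unfolded
    update U(x_t) with the previous two estimates via learned weights omega, psi.
    Setting omega = psi = 1.0 recovers the plain deep-unfolding update U(x_t)."""
    return psi * unfold(x_t) + (omega - psi) * x_t + (1.0 - omega) * x_tm1
```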
Further, an incremental training mode is adopted for the parameters. In the method based on the TwDU strategy, the values of the parameters Θ = [ω, ψ, θ(U(·))] directly affect the reconstruction quality of the sparse signal, so the training method for Θ is very important. In the training process of the invention, a batch of data is first divided into H mini-batches (batches) and fed into the algorithm network, and the network loss value gradually decreases as the batches are trained. When training on one batch of data is complete, a new batch of data is fed into the network for training again. Multiple experiments verify that this incremental training method is very effective in adjusting Θ and improving network performance, because incremental training not only alleviates the vanishing-gradient problem but also further improves the generalization capability of the network. The training data are randomly generated data pairs (x, y), where y, obtained from y = Ax + n, is the sparsely sampled feature data that the two-step expansion network needs to learn, and x is the sparse label data. The TwDU algorithm learns the data features batch by batch with a stochastic gradient descent algorithm and reconstructs the original sparse signal x. In the t-th incremental training pass, the optimizer minimizes the training objective L_t(Θ) by adjusting Θ. After H mini-batches of data have been processed, the objective function of the optimizer becomes L_H(Θ). Although the objective function keeps changing from the first layer to the last in the course of network training, the parameters Θ take the previous result as the initial value of the current training in each pass, which gives a certain consistency. In the present invention, all experiments, including the control experiments, use incremental training in order to control variables.
Further, in step S2, the step of improving the LAMP algorithm using the TwDU algorithm is as follows:
the AMP algorithm is a signal processing algorithm proposed in recent years, and has attracted much attention due to its rapid convergence rate, and its mathematical iterative formula is expressed as
In the formula vtFor the T T th ═ 0,1,2, … T-1 iteration pair signalEstimating residual error, initializationv_10, and
the deep unfolding algorithm LAMP is simplified for AMP into the following form:
equations (10a) and (10b) are based on the generalization of AMP algorithms (9a) and (9b), where matrices A, ATAt iteration t, it appears as At,Bt. In order to reduce the training parameters required by the LAMP network and accelerate the signal processing speed, the A is controlled on the basis of not changing the characteristics of the algorithmt=βtA, at this time, AtIn only a scalar betatVarying with the number of iterations t. LAMP network parametersData pairs entered by trainingMinimizing the loss function L (theta) of the formula (6) to realize that the self-learning is updated;
Adding trainable parameters omega and psi on the basis of the original training parameters of LAMP to establish the relation between signals; omega and psi estimate the first two steps of signalsAndadaptive and current signal estimation by deep learning techniquesAnd establishing a connection, wherein the connection between the signals accelerates the reconstruction speed and the reconstruction precision of the signals.
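A plain-AMP loop of the kind described by (9a) and (9b) can be sketched as follows (illustrative only; LAMP would replace A^T with a learned B_t and train the per-layer scalars):

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(A, y, alpha=1.0, iters=30):
    # v_t = y - A x_t + (||x_t||_0 / M) v_{t-1};  lambda_t = alpha ||v_t||_2 / sqrt(M)
    M, N = A.shape
    x_hat, v = np.zeros(N), np.zeros(M)
    for _ in range(iters):
        v = y - A @ x_hat + (np.count_nonzero(x_hat) / M) * v   # Onsager-corrected residual
        lam = alpha * np.linalg.norm(v) / np.sqrt(M)
        x_hat = soft_threshold(x_hat + A.T @ v, lam)
    return x_hat
```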
Further, in step S2, the step of improving the TISTA algorithm by using the TwDU algorithm is as follows:
TISTA is another deep unfolding form of the ISTA algorithm; its mathematical expression is

r_t = x̂_t + γ_t·W·(y − A·x̂_t)
x̂_{t+1} = η_MMSE(r_t; τ_t²)
v_t² = max{ (‖y − A·x̂_t‖_2² − M·σ²) / tr(A^T·A), ε }
τ_t² = (v_t²/N)·(N + (γ_t² − 2γ_t)·M) + (γ_t²·σ²/N)·tr(W·W^T)   (11)

where the matrix W = A^T·(A·A^T)^{-1} is the pseudo-inverse of the matrix A, σ² is the noise variance, and η_MMSE is the Minimum Mean Square Error (MMSE) estimator:

η_MMSE(r; τ²) = (α²·r/ξ) · p·F(r; ξ) / ((1 − p)·F(r; τ²) + p·F(r; ξ))   (12)

where α² is the variance of the non-zero elements of the input signal, τ² is the error variance, p is the probability of occurrence of a non-zero element of the input signal, and

ξ = α² + τ²,  F(z; v) = (1/√(2πv))·exp(−z²/(2v))   (13)

Equation (12) shows that the signal error variance estimates v_t² and τ_t² have a crucial influence on the final sparse signal estimate. The scalar variable γ_t is a step size parameter that controls and adjusts the error variance; it is also the parameter to be trained with the deep learning technique, and its number equals the number of network layers. The training parameters of the TISTA algorithm are θ = [γ_t].

The trainable parameters are then Θ = [ω, ψ, γ_t]. In TwDU-TISTA, ω and ψ likewise adaptively link the first two signal estimates x̂_t and x̂_{t-1} with the current signal estimate x̂_{t+1}, improving the convergence speed of the depth expansion algorithm TISTA.
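The MMSE estimator for a Bernoulli-Gaussian prior, as used in TISTA, can be sketched as follows (the values of p and α² here are illustrative, not the patent's):

```python
import numpy as np

def gauss_pdf(z, var):
    return np.exp(-z**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def eta_mmse(r, tau2, p=0.1, alpha2=1.0):
    """Posterior-mean (MMSE) shrinkage for a Bernoulli-Gaussian prior:
    each entry is non-zero with probability p and then N(0, alpha2),
    observed as r = x + noise of variance tau2."""
    xi = alpha2 + tau2
    num = p * gauss_pdf(r, xi) * (alpha2 / xi) * r
    den = (1.0 - p) * gauss_pdf(r, tau2) + p * gauss_pdf(r, xi)
    return num / den
```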
Further, in step S3, after the signal sparsification of step S1 and the algorithmic processing of step S2, the signal is reconstructed: the estimated signal value x̂ output in step S2 is fed into the mean square error loss function L(Θ) = (1/D)·Σ_{d=1}^{D} ‖x̂_d − x_d‖_2², the algorithm parameters are updated by the deep learning technique with the back propagation mechanism based on gradient descent, and signal reconstruction and recovery are performed in combination with the incremental training mode described above.
The invention has the following beneficial effects: the sparse signal reconstruction method based on the two-step depth expansion strategy makes full use of the correlation among signals and, both for one-dimensional signals in wireless communication and for two-dimensional picture signals, reconstructs the signals faster and with higher reconstruction precision.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is the t-th iteration of the deep expansion network LAMP;
FIG. 2 is a block diagram of the TwDU-LAMP algorithm;
FIG. 3 is the t-th iteration of the deep expansion network TISTA;
FIG. 4 is a block diagram of the TwDU-TISTA algorithm;
FIG. 5 shows the depth expansion algorithm NMSE comparison with SNR = 40 dB;
FIG. 6 shows the depth expansion algorithm NMSE comparison with SNR = 30 dB;
FIG. 7 shows the depth expansion algorithm NMSE comparison with SNR = 20 dB.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are for the purpose of illustrating the invention only and are not intended to limit it; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings, and descriptions thereof, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
The invention provides a sparse signal reconstruction method based on a two-step depth expansion strategy, which comprises the following steps of:
step one, input signal sparsification.
In compressed sensing, let s ∈ R^N be the original signal vector; it can be a signal acquired by a sensor in an Internet-of-Things environment, a signal in wireless communication, or a two-dimensional picture signal acquired by various electronic devices. Let Φ ∈ R^{M×N} (M ≪ N) be the observation matrix and n ∈ R^M a white Gaussian noise vector, with y = Φs + n, where y ∈ R^M is the observation signal vector obtained by sampling s through the observation matrix Φ. Assume the original signal vector s can be sparsified under a known orthonormal basis Ψ ∈ R^{N×N}, i.e. s = Ψx, where x ∈ R^N, called the original sparse signal, is the sparse representation of the original signal vector s in the new transform domain Ψ. Letting the sensing matrix A = ΦΨ yields y = Ax + n. In such an underdetermined linear system, the objective of the compressed sensing reconstruction algorithm is to reconstruct from the observation signal y an estimate x̂ of the original sparse signal x, and thereby further reconstruct the original signal. The common reconstruction algorithm models the sparse reconstruction problem as the following convex optimization problem:

min_x (1/2)·‖y − Ax‖_2² + λ·‖x‖_1   (1)
the embodiment of the invention is directed to the solution proposed in (1).
Step two: the TwDU algorithm processes the input signal. The sparsified signal y is input into the algorithm for processing.
The conventional reconstruction algorithm is deeply unfolded. The symbol Γ(·) is adopted to represent a traditional reconstruction algorithm model and U(Γ(·)) to represent the depth expansion model of the traditional reconstruction algorithm; taking ISTA as an example, ISTA is written as

x̂_{t+1} = η_τ(x̂_t + β·A^T·(y − A·x̂_t))   (2)
x̂_{t+1} = η_τ(β·A^T·y + (I_N − β·A^T·A)·x̂_t)   (3)

Let β·A^T = B and I_N − B·A = S, obtaining:

x̂_{t+1} = η_τ(B·y + S·x̂_t)   (4)

The above is the traditional ISTA algorithm; the improved ISTA using the depth expansion technique is expressed as

x̂_{t+1} = U(Γ(x̂_t)) = η_{τ_t}(B_t·y + S_t·x̂_t)   (5)

Equation (5) represents deeply unfolding the traditional algorithm ISTA into LISTA to obtain the result of the next iteration. In the depth expansion algorithm LISTA, the parameters θ = [B, S, τ] are self-learned and updated by training on input data pairs (x_d, y_d) with the deep learning technique so as to minimize the quadratic loss function (6):

L(θ) = (1/D)·Σ_{d=1}^{D} ‖x̂(y_d; θ) − x_d‖_2²   (6)
Then, the depth expansion algorithm is subjected to a two-step expansion. The two-step deep unfolding strategy proposes that the result x̂_{t+1} depends not only on x̂_t but also on x̂_{t-1}; that is, the estimate of each depth expansion step depends on the estimates of the previous two steps rather than only on the previous one, i.e. there exists a correlation between the signals. This is formulated as:

x̂_{t+1} = ψ·U(Γ(x̂_t)) + (ω − ψ)·x̂_t + (1 − ω)·x̂_{t-1}   (7)

In formula (7), U(Γ(·)) does not refer to LISTA in particular, but to various depth expansion algorithms such as LAMP, TISTA, and the like. The parameters to be trained in (7) by the deep learning technique are Θ = [ω, ψ, θ(U(·))], where θ(U(·)) are the parameters to be trained in the chosen depth expansion algorithm; for TwDU-LISTA, θ(U(·)) = [B, S, τ]. In formula (7), x̂_t and x̂_{t-1} were themselves obtained with the two-step deep expansion strategy in their respective evaluation processes. In addition, their training parameters were already trained in those earlier two-step expansions and no longer participate in the current training, which reduces the computational burden. The influence factors ω and ψ of the first two calculation results (x̂_t and x̂_{t-1}) on the current result are adaptively adjusted according to the characteristics of the data by means of the powerful learning capability of deep learning. When ω = ψ = 1.0,

x̂_{t+1} = U(Γ(x̂_t))   (8)

the coefficient terms (1 − ω) of x̂_{t-1} and (ω − ψ) of x̂_t are 0, and the results of the previous two iterations have no direct influence on the current result; in this case, (8) and (5) are both the common depth expansion scheme. In fact, the proposed scheme (7) is more general, and (5) can be seen as a special case of (7). However, as the number of iterations of the algorithm increases, the parameters ω and ψ are continuously self-optimized through the back propagation mechanism in deep learning and finally stabilize with small fluctuations around an optimal value. At this point the coefficient terms (1 − ω) of x̂_{t-1} and (ω − ψ) of x̂_t are no longer 0 and the performance of the algorithm is significantly improved, which in turn shows the important influence of the previous two calculation results on the current result. The two-step depth expansion strategy fully exploits the inherent characteristics of time-sequence signals: the estimates of the previous two depth expansion steps jointly influence the current result with different weights.
Incremental training mode of the parameters. In the method based on the TwDU strategy, the values of the parameters Θ = [ω, ψ, θ(U(·))] directly affect the reconstruction quality of the sparse signal, so the training method for Θ is extremely important. In the training process of the invention, a batch of data is first divided into H mini-batches (batches) and fed into the algorithm network, and the network loss value gradually decreases as the batches are trained. When training on one batch of data is complete, a new batch of data is fed into the network for training again. Multiple experiments verify that this incremental training method is very effective in adjusting Θ and improving network performance, because incremental training not only alleviates the vanishing-gradient problem but also further improves the generalization capability of the network. The training data are randomly generated data pairs (x, y), where y, obtained from y = Ax + n, is the sparsely sampled feature data that the two-step expansion network needs to learn, and x is the sparse label data. The TwDU algorithm learns the data features batch by batch with a stochastic gradient descent algorithm and reconstructs the original sparse signal x. In the t-th incremental training pass, the optimizer minimizes the training objective L_t(Θ) by adjusting Θ. After H mini-batches of data have been processed, the objective function of the optimizer becomes L_H(Θ). Although the objective function keeps changing from the first layer to the last in the course of network training, the parameters Θ take the previous result as the initial value of the current training in each pass, which gives a certain consistency. In this context, all experiments, including the control experiments, use the incremental training method in order to control variables.
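The mini-batch scheme described above might be sketched as follows, with a plain linear map standing in for the unfolded network (all names and sizes here are illustrative assumptions):

```python
import numpy as np

def incremental_train(X, Y, H=8, lr=0.1, passes=3):
    """Split one batch into H mini-batches; the parameters learned on each
    mini-batch (a linear map W standing in for the unfolded network) serve
    as the initial value for training on the next mini-batch."""
    W = np.zeros((Y.shape[1], X.shape[1]))
    for xb, yb in zip(np.array_split(X, H), np.array_split(Y, H)):
        for _ in range(passes):
            err = xb @ W.T - yb                 # prediction error on this mini-batch
            W -= lr * err.T @ xb / len(xb)      # gradient step on the MSE loss
    return W
```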
The TwDU-LAMP and TwDU-TISTA algorithms are taken as examples for carrying out the expansion processing.
1) TwDU-LAMP. The AMP algorithm is a signal processing algorithm proposed in recent years, and has attracted much attention due to its rapid convergence rate, and its mathematical iterative formula is expressed as
the deep unfolding algorithm LAMP is simplified for AMP into the following form:
(10) based on the generalization of AMP algorithm (9), where the matrix A, ATAt iteration t, it appears as At,Bt. In order to reduce the training parameters required by the LAMP network and accelerate the signal processing speed, Borgerding and Schniter make A on the basis of not changing the characteristics of the algorithmt=βtA, at this time, AtIn only a scalar betatVarying with the number of iterations t. LAMP network parametersData pairs entered by trainingAnd (6) minimizing the loss function L (theta) to realize that the self-learning is updated. The iterative process in (10) is performed with depth expansion as shown in fig. 1.
On top of the original training parameters of LAMP, only two trainable parameters are added, namely ω and ψ, which establish the link between signals. The block diagram of TwDU-LAMP is shown in Fig. 2. Through the deep learning technique, ω and ψ adaptively connect the estimates of the previous two steps, x̂_{t−1} and x̂_{t−2}, with the current signal estimate x̂_t; this connection between signals improves both the reconstruction speed and the reconstruction accuracy.
2) TwDU-TISTA. TISTA is another deep-unfolded form of the ISTA algorithm. Its iteration is expressed as

r_t = x̂_t + γ_t W ( y − A x̂_t ),        (12a)
x̂_{t+1} = η_MMSE( r_t ; τ_t² ),          (12b)
where the matrix W = A^T(AA^T)^{−1} is the pseudo-inverse of the matrix A and σ² is the noise variance. η_MMSE is the minimum mean square error (MMSE) estimator,

η_MMSE(r; τ²) = (α² / (α² + τ²)) · r · p N(r; 0, α² + τ²) / [ p N(r; 0, α² + τ²) + (1 − p) N(r; 0, τ²) ],

where N(r; 0, v) denotes the Gaussian density with mean 0 and variance v, α² is the variance of the non-zero elements of the input signal, τ_t² is the error variance, and p is the probability that an element of the input signal is non-zero.
As (12) shows, the error-variance estimates v_t² and τ_t² in the deep-unfolded algorithm TISTA are crucial to the final sparse-signal estimate. The scalar variable γ_t is a step-size parameter that controls and adjusts the error variance; it is also the parameter to be trained by the deep learning technique, and its count equals the number of network layers. The training parameter of the TISTA algorithm is Θ = [γ_t]. The iterative process in (12) after depth unfolding is shown in Fig. 3.
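A rough sketch of the TISTA iteration (12) with the Bernoulli-Gaussian MMSE shrinkage. The error-variance update τ_t² used below is a simplified stand-in for TISTA's exact formula, and γ_t is frozen at 1 rather than learned:

```python
import numpy as np

rng = np.random.default_rng(2)

def eta_mmse(r, tau2, p=0.05, alpha2=1.0):
    """MMSE shrinkage for a Bernoulli-Gaussian prior: nonzero with prob. p,
    nonzero values ~ N(0, alpha2), observed through noise of variance tau2."""
    s2 = alpha2 + tau2
    g1 = p * np.exp(-r ** 2 / (2 * s2)) / np.sqrt(s2)             # nonzero hypothesis
    g0 = (1 - p) * np.exp(-r ** 2 / (2 * tau2)) / np.sqrt(tau2)   # zero hypothesis
    return (alpha2 / s2) * r * g1 / (g1 + g0)

M, N, T, sigma2 = 125, 250, 12, 1e-6
A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
W = A.T @ np.linalg.inv(A @ A.T)               # W = A^T (A A^T)^{-1}
x = np.where(rng.random(N) < 0.05, rng.normal(size=N), 0.0)
y = A @ x + np.sqrt(sigma2) * rng.normal(size=M)

gamma = 1.0                                    # learned step size gamma_t (fixed here)
trA = np.trace(A.T @ A)
x_hat = np.zeros(N)
for t in range(T):
    res = y - A @ x_hat
    v2 = max((res @ res - M * sigma2) / trA, 1e-9)   # error-variance estimate v_t^2
    tau2 = v2 / 2 + sigma2                           # simplified stand-in for tau_t^2
    x_hat = eta_mmse(x_hat + gamma * (W @ res), tau2)

nmse_db = 10 * np.log10(np.sum((x_hat - x) ** 2) / np.sum(x ** 2))
print(round(nmse_db, 1))
```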
Here the trainable parameters are Θ = [ω, ψ, γ_t]. The block diagram of TwDU-TISTA is shown in Fig. 4. In TwDU-TISTA, ω and ψ likewise establish a relation between the estimates of the previous two steps, x̂_{t−1} and x̂_{t−2}, and the current signal estimate x̂_t, which improves the convergence speed of the deep-unfolded TISTA algorithm.
Step three: signal reconstruction and recovery.
The signal is reconstructed through the signal sparsification of step one and the algorithmic processing of step two. The estimated signal x̂ output in step two is fed into the mean-square-error loss function L(Θ); the algorithm parameters are then updated by the deep learning technique through a back-propagation mechanism based on gradient descent, and signal reconstruction and recovery are performed in combination with the incremental training mode introduced above.
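As a small illustration of updating an algorithm parameter by gradient descent on the mean-square-error loss, the following sketch learns a single shrinkage threshold by hand-written back-propagation through the soft-threshold (a denoising special case with A = I; the data sizes and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def soft(r, tau):
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

# Data: noisy observations of a sparse signal (denoising special case A = I).
N = 2000
x = np.where(rng.random(N) < 0.1, rng.normal(size=N), 0.0)
r = x + 0.2 * rng.normal(size=N)

tau, lr = 0.0, 0.05
for step in range(200):
    x_hat = soft(r, tau)
    # Back-propagation through the shrinkage: d x_hat / d tau = -sign(r) on the
    # support |r| > tau, so for L = mean((x_hat - x)^2) the gradient is:
    active = np.abs(r) > tau
    grad = np.mean(2.0 * (x_hat - x) * (-np.sign(r)) * active)
    tau -= lr * grad

loss = np.mean((soft(r, tau) - x) ** 2)
print(round(tau, 3), round(loss, 4))
```

The learned threshold settles between the noise level and the signal level, and the final loss is below the raw noise variance of 0.04.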
The method is applicable not only to the above cases but also to other deep unfolding algorithms. The pseudo code of the algorithm of the invention is shown in Table 1.
TABLE 1 two-step depth expansion strategy reconstruction algorithm flow
To illustrate the feasibility and advantages of the invention, simulation experiments were performed. The experimental system is deployed on a Linux platform with the PyTorch 1.5.1 deep learning framework and the Adam optimizer. A one-dimensional sparse signal obeying a Bernoulli-Gaussian distribution serves as the simulated input signal, and the normalized mean square error (NMSE) is used as the criterion for measuring the performance of each deep reconstruction algorithm. The simulated sparse signal has length N = 500, and the observation matrix A has dimensions M × N with M = 250 and N = 500. Each element of A follows a Gaussian distribution with mean 0 and variance 1/M, i.e., A_{i,j} ~ N(0, 1/M).
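The experimental setup described above can be generated as follows (the non-zero probability p of the Bernoulli-Gaussian signal is an assumption, as the text does not state it):

```python
import numpy as np

rng = np.random.default_rng(4)

# Setup from the text: N = 500, M = 250, A_{i,j} ~ N(0, 1/M),
# Bernoulli-Gaussian sparse signal (nonzero probability p is an assumption).
M, N, p = 250, 500, 0.1
A = rng.normal(0.0, np.sqrt(1.0 / M), size=(M, N))
x = rng.binomial(1, p, size=N) * rng.normal(size=N)   # Bernoulli-Gaussian signal

snr_db = 40.0                                          # SNR used in Fig. 5
signal_power = np.mean((A @ x) ** 2)
noise_power = signal_power / 10 ** (snr_db / 10)
y = A @ x + np.sqrt(noise_power) * rng.normal(size=M)

print(A.shape, y.shape, np.count_nonzero(x))
```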
Fig. 5 shows the normalized mean square error (NMSE) of LISTA, TwDU-LISTA, TISTA, TwDU-TISTA, LAMP and TwDU-LAMP, each run for 12 iterations at an SNR of 40 dB. The NMSE is calculated as

NMSE = 10 log₁₀( ‖x̂ − x‖₂² / ‖x‖₂² ).
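A direct implementation of this NMSE measure:

```python
import numpy as np

def nmse_db(x_hat, x):
    """NMSE in dB: 10 * log10(||x_hat - x||^2 / ||x||^2)."""
    return 10.0 * np.log10(np.sum((x_hat - x) ** 2) / np.sum(x ** 2))

x = np.array([0.0, 1.0, 0.0, -2.0])
print(round(nmse_db(x + 0.01, x), 1))   # -41.0
```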
as can be observed from fig. 5, the reconstruction algorithm based on the two-step depth expansion strategy can better reconstruct sparse signals compared with the common depth expansion algorithm. The initial phase of TwDU-lisa and LISTA has similar NMSE, and with the number of iterations increasing, the TwDU-lisa has about 9dB and 6dB gains compared to LISTA at t 9 and t 12, respectively. The tissta converged in the first 12 iterations with approximately-42 dB NMSE, whereas TwDU-tissta has reached-42 dB at t-8, which provides faster convergence speed than t-10 at the time of tissta convergence. Meanwhile, the NMSE performance of the TwDU-LAMP in the first 12 iterations is better than that of LAMP, and the convergence rate of the TwDU-LAMP is 2 periods earlier than that of LAMP.
Figs. 6 and 7 show NMSE comparisons of the deep unfolding algorithms at signal-to-noise ratios of 30 dB and 20 dB, respectively. It can be clearly observed that the reconstruction capability of every deep unfolding algorithm degrades as the signal-to-noise ratio decreases. Even so, the reconstruction algorithms based on the TwDU strategy still reconstruct sparse signals better than the ordinary deep unfolding algorithms. At an SNR of 30 dB, TwDU-LISTA gains at most 0.6 dB over LISTA, TwDU-TISTA gains at most 4.6 dB over TISTA, and TwDU-LAMP gains at most 3.9 dB over LAMP. At an SNR of 20 dB, the performance differences among the deep unfolding reconstruction algorithms shrink because of severe noise pollution, and the gain of the two-step strategy over the ordinary deep unfolding algorithms stays within 1 dB.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.
Claims (9)
1. A sparse signal reconstruction method based on a two-step depth expansion strategy, characterized by comprising the following steps:
S1: sparsifying the input signal;
S2: improving the depth expansion model of a traditional sparse signal reconstruction algorithm by using the TwDU algorithm;
S3: reconstructing and recovering the original sparse signal by using the improved sparse signal reconstruction algorithm.
2. The sparse signal reconstruction method based on the two-step depth expansion strategy according to claim 1, wherein: in step S1, the method specifically includes:
in compressed sensing, let s ∈ R^N be the original signal vector, Φ ∈ R^{M×N} the observation matrix, and n ∈ R^M a Gaussian white noise vector, with y = Φs + n, where y ∈ R^M is the observation signal vector obtained by sampling s through the observation matrix Φ; assuming the original signal vector s can be sparsified under a known orthonormal basis Ψ ∈ R^{N×N}, i.e., s = Ψx, where x ∈ R^N, called the original sparse signal, is the sparse representation of the original signal vector s in the new transform domain Ψ, and letting the sensing matrix be A = ΦΨ, we get y = Ax + n.
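The relations in this claim can be checked numerically: with s = Ψx and A = ΦΨ, the observation y = Φs + n equals Ax + n (sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

M, N = 6, 12                                    # illustrative sizes
Phi = rng.normal(size=(M, N))                   # observation matrix
Psi, _ = np.linalg.qr(rng.normal(size=(N, N)))  # an orthonormal sparsifying basis
x = np.zeros(N); x[[2, 7]] = [1.5, -0.8]        # sparse representation
s = Psi @ x                                     # original signal s = Psi x
A = Phi @ Psi                                   # sensing matrix A = Phi Psi
n = 0.01 * rng.normal(size=M)

y1 = Phi @ s + n
y2 = A @ x + n
print(np.allclose(y1, y2))                      # True: y = Phi s + n = A x + n
```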
4. The sparse signal reconstruction method based on the two-step depth expansion strategy according to claim 1, wherein: in step S2, the depth expansion models of the traditional sparse signal reconstruction algorithms include LISTA, LAMP, and TISTA.
5. The sparse signal reconstruction method based on the two-step depth expansion strategy according to claim 4, wherein: in step S2, the steps of improving the LISTA algorithm using the TwDU algorithm are as follows:
depth unfolding is performed on the ISTA algorithm; the symbol Γ(·) is adopted to represent the ISTA model and U(·) to represent the deep unfolding model of ISTA; ISTA can be rewritten as

x_{t+1} = η_τ( x_t + β A^T ( y − A x_t ) ),

where η_τ is the soft-threshold shrinkage function with threshold τ and β is the step size;
let β A^T = B and I_N − BA = S, yielding:

x_{t+1} = η_τ( S x_t + B y );
the improvement of ISTA by the deep unfolding technique is denoted as

x_{t+1} = η_{τ_t}( S_t x_t + B_t y ).        (5)
Formula (5) unfolds the traditional ISTA algorithm in depth into LISTA, from which the result of the next iteration is obtained. In the deep unfolding algorithm LISTA, the parameters θ = [B, S, τ] are updated by self-learning: they are trained on the input data pairs (x, y) through the deep learning technique by minimizing the loss function (6):

L(θ) = (1/D) Σ_{d=1}^{D} ‖ x̂_d − x_d ‖₂².        (6)
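The ISTA iteration in the (B, S) form of equation (5) can be sketched as follows. Here B, S and the threshold τ are fixed by hand, whereas LISTA would learn per-layer B_t, S_t, τ_t; the threshold value and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def soft_threshold(r, tau):
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

M, N, T = 125, 250, 500
A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
x = np.where(rng.random(N) < 0.05, rng.normal(size=N), 0.0)
y = A @ x                                        # noiseless measurements

# ISTA in the (B, S) form of equation (5): B = beta * A^T, S = I - B A.
beta = 1.0 / np.linalg.norm(A, 2) ** 2           # step size 1/L, L = ||A||_2^2
tau = 0.01                                       # shrinkage threshold (hand-picked;
B = beta * A.T                                   #  LISTA learns B_t, S_t, tau_t)
S = np.eye(N) - B @ A

x_hat = np.zeros(N)
for t in range(T):
    x_hat = soft_threshold(S @ x_hat + B @ y, tau)

nmse_db = 10 * np.log10(np.sum((x_hat - x) ** 2) / np.sum(x ** 2))
print(round(nmse_db, 1))
```

The slow convergence of plain ISTA (hundreds of iterations here, versus ~12 for the unfolded networks in the experiments) is exactly what deep unfolding is meant to overcome.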
then the deep unfolding algorithm is unfolded in a further two steps, expressed as:

x̂_{t+1} = UΓ( ω x̂_t + (1−ω) ψ x̂_{t−1} + (1−ω)(1−ψ) x̂_{t−2} ).        (7)
UΓ in formula (7) refers to any of the various deep unfolding algorithms, including LISTA, LAMP and TISTA; the parameters to be trained by the deep learning technique in formula (7) are Θ = [ω, ψ, θ(U(·))], where θ(U(·)) denotes the parameters to be trained in each deep unfolding algorithm; for TwDU-LISTA, θ(U(·)) = [B, S, τ]. In formula (7), x̂_{t−1} and x̂_{t−2} themselves used the two-step deep unfolding strategy in their respective evaluation processes; their training parameters were trained in their own two-step unfoldings and no longer participate in the current training. The influence factors ω and ψ of the results of the previous two computations, x̂_{t−1} and x̂_{t−2}, on the current result can be adjusted adaptively according to the characteristics of the data by exploiting the strong learning capability of deep learning. When ω = 1.0,

x̂_{t+1} = UΓ( x̂_t ),        (8)

the coefficient terms of x̂_{t−1} and x̂_{t−2} are 0, and the previous two iteration results have no influence on the current result; in this case formula (8) coincides with the common deep unfolding scheme of formula (5), so formula (5) can be regarded as a special case of formula (7). As the number of iterations of the algorithm increases, the parameters ω and ψ are continuously self-optimized through the back-propagation mechanism in deep learning and finally stabilize, fluctuating slightly around an optimal value; at that point the coefficients of x̂_{t−1} and x̂_{t−2} are no longer 0.
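The two-step combination of the previous estimates can be sketched as below; the exact weighting in the patent's equation (7) is not fully legible in this text, so the convex-combination form used here is a reconstruction consistent with the stated special case ω = 1:

```python
import numpy as np

# Two-step combination of the previous three estimates (reconstructed form;
# the patent's exact equation (7) may weight the terms differently):
#   z_t = omega * x_t + (1 - omega) * (psi * x_{t-1} + (1 - psi) * x_{t-2})
def twdu_input(x_t, x_tm1, x_tm2, omega, psi):
    return omega * x_t + (1 - omega) * (psi * x_tm1 + (1 - psi) * x_tm2)

x_t = np.array([1.0, 2.0])
x_tm1 = np.array([0.5, 1.5])
x_tm2 = np.array([0.0, 1.0])

# With omega = 1 the coefficients of x_{t-1} and x_{t-2} vanish and the scheme
# reduces to the ordinary one-step deep unfolding, as stated in the claim.
print(np.allclose(twdu_input(x_t, x_tm1, x_tm2, 1.0, 0.7), x_t))   # True
print(twdu_input(x_t, x_tm1, x_tm2, 0.8, 0.5))
```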
6. The sparse signal reconstruction method based on the two-step depth expansion strategy according to claim 5, wherein: in the training process, a batch of data is first divided into H small batches and fed into the algorithm network, and the network loss value decreases gradually as the mini-batches are trained; when the training of one batch of data is finished, a new batch of data is fed into the network for training; the training data are randomly generated data pairs (x, y), where y, drawn from y = Ax + n, is the sparsely sampled feature data that the two-step unfolding network needs to learn, and x is the sparse label data; the TwDU algorithm learns the data features batch by batch with a stochastic gradient descent algorithm to reconstruct the original sparse signal x; during the t-th training stage, the optimizer minimizes the training objective L_t(Θ) by adjusting Θ; after H small batches have been processed, the objective of the optimizer becomes L_H(Θ); in each training stage the parameter Θ takes the result of the previous training as its initial value.
7. The sparse signal reconstruction method based on the two-step depth expansion strategy according to claim 4, wherein: in step S2, the steps of improving the LAMP algorithm using the TwDU algorithm are as follows:
the mathematical iteration of the AMP algorithm is expressed as

v_t = y − A x̂_t + b_t v_{t−1},        (9a)
x̂_{t+1} = η( x̂_t + A^T v_t ; λ_t );   (9b)
in the formulas, v_t is the residual of the estimate at iteration t, t = 0, 1, 2, …, T−1, with initialization x̂_0 = 0 and v_{−1} = 0, and b_t = ‖x̂_t‖₀ / M is the Onsager correction term;
the deep unfolding algorithm LAMP simplifies AMP into the following form:

v_t = y − A_t x̂_t + b_t v_{t−1},        (10a)
x̂_{t+1} = η( x̂_t + B_t v_t ; λ_t );     (10b)
equations (10a) and (10b) generalize the AMP equations (9a) and (9b): the matrices A and A^T appear at iteration t as A_t and B_t; letting A_t = β_t A, only the scalar β_t in A_t varies with the iteration number t; the LAMP network parameters are updated by self-learning, trained on the input data pairs (x, y) by minimizing the loss function L(Θ) of formula (6);
8. The sparse signal reconstruction method based on the two-step depth expansion strategy according to claim 4, wherein: in step S2, the steps of improving the TISTA algorithm using the TwDU algorithm are as follows:
TISTA is another deep-unfolded form of the ISTA algorithm, whose iteration is expressed as

r_t = x̂_t + γ_t W ( y − A x̂_t ),        (12a)
x̂_{t+1} = η_MMSE( r_t ; τ_t² ),          (12b)
wherein the matrix W = A^T(AA^T)^{−1} is the pseudo-inverse of the matrix A, σ² is the noise variance, and η_MMSE is the minimum mean square error (MMSE) estimator:

η_MMSE(r; τ²) = (α² / (α² + τ²)) · r · p N(r; 0, α² + τ²) / [ p N(r; 0, α² + τ²) + (1 − p) N(r; 0, τ²) ],

where N(r; 0, v) denotes the Gaussian density with mean 0 and variance v, α² is the variance of the non-zero elements of the input signal, τ_t² is the error variance, and p is the probability that an element of the input signal is non-zero;
the scalar variable γ_t is a step-size parameter used to control and adjust the error variance; it is also the parameter to be trained in the deep learning technique, and its count equals the number of network layers; the training parameter of the TISTA algorithm is Θ = [γ_t];
9. The sparse signal reconstruction method based on the two-step depth expansion strategy according to claim 6, wherein: in step S3, the signal is reconstructed through the signal sparsification of step S1 and the signal algorithm processing of step S2; the estimated signal x̂ output in step S2 is fed into the mean-square-error loss function L(Θ); the algorithm parameters are updated by the deep learning technique through a back-propagation mechanism based on gradient descent, and signal reconstruction and recovery are performed in combination with the incremental training mode.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111374559.7A CN114050832A (en) | 2021-11-17 | 2021-11-17 | Sparse signal reconstruction method based on two-step depth expansion strategy |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114050832A true CN114050832A (en) | 2022-02-15 |
Family
ID=80210022
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150287223A1 (en) * | 2014-04-04 | 2015-10-08 | The Board Of Trustees Of The University Of Illinois | Highly accelerated imaging and image reconstruction using adaptive sparsifying transforms |
US20180240219A1 (en) * | 2017-02-22 | 2018-08-23 | Siemens Healthcare Gmbh | Denoising medical images by learning sparse image representations with a deep unfolding approach |
CN112234994A (en) * | 2020-09-29 | 2021-01-15 | 西南石油大学 | Compressed sensing reconstruction signal processing method and system, computer equipment and application |
CN113222812A (en) * | 2021-06-02 | 2021-08-06 | 北京大学深圳研究生院 | Image reconstruction method based on information flow reinforced deep expansion network |
CN113271269A (en) * | 2021-04-22 | 2021-08-17 | 重庆邮电大学 | Sparsity self-adaptive channel estimation method based on compressed sensing |
CN113300714A (en) * | 2021-04-23 | 2021-08-24 | 北京工业大学 | Joint sparse signal dimension reduction gradient tracking reconstruction algorithm based on compressed sensing theory |
Non-Patent Citations (2)
Title |
---|
DI YOU et al.: "ISTA-NET++: Flexible Deep Unfolding Network for Compressive Sensing", 2021 IEEE International Conference on Multimedia and Expo (ICME), 9 June 2021, pages 1-6 *
LIU Zhenyu: "Research on Channel State Information Feedback in Massive MIMO Systems Based on Deep Learning", China Doctoral Dissertations Full-text Database, Information Science and Technology, 15 January 2021, pages 136-232 *
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||