CN109543822B - One-dimensional signal data restoration method based on convolutional neural network - Google Patents
One-dimensional signal data restoration method based on convolutional neural network
- Publication number
- CN109543822B (application CN201811445812.1A)
- Authority
- CN
- China
- Prior art keywords
- data
- neural network
- convolutional neural
- weight
- damaged
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention discloses a one-dimensional signal data restoration method based on a convolutional neural network. The method performs bound-limiting processing on the damaged data, constructs a convolutional neural network model based on an Encoder-decoder architecture, and weights the loss function of the model so that the restored data retains more transient characteristics. An L2 regular term is added to avoid overfitting of the network, and the dual condition of iteration count and loss-function change rate is selected as the stopping condition of training, which stabilizes the performance of the neural network and improves data restoration efficiency. The Adam optimization algorithm provides fast convergence and good noise immunity, so the convolutional neural network model can be corrected more quickly.
Description
Technical Field
The invention belongs to the technical field of data processing and data restoration, and particularly relates to a one-dimensional signal data restoration method based on a convolutional neural network.
Background
In a wireless sensor test network, node failures and wireless packet loss occasionally cause data loss during transmission.
Common one-dimensional data repair algorithms include nearest-neighbor interpolation, the EM algorithm, and the compressed sensing (CS) algorithm. A typical nearest-neighbor method is piecewise interpolation, which is often applied to slowly varying signals with obvious spatio-temporal correlation, such as environmental temperature and humidity measurements, but is unsuitable for restoring signals with high-frequency variation. The EM algorithm suits statistical data restoration but not scenarios with few samples and unknown probability density distributions. The compressed sensing algorithm can recover high-frequency data loss, but it requires considerable a priori knowledge, including the sparse basis of the repaired data. In practical applications, the temporal correlation of the signals monitored by a wireless sensor network is not necessarily obvious, spatial correlation may be entirely absent, and the sparse basis is difficult to determine from a priori knowledge.
Disclosure of Invention
In view of this, the present invention provides a one-dimensional signal data restoration method based on a convolutional neural network, which can effectively restore the lost data by fitting the damaged signal, without requiring any a priori knowledge.
A one-dimensional signal data restoration method based on a convolutional neural network comprises the following steps:
step one, obtaining a damaged data set x0 to be repaired;
Step two, for the damaged data set x0, limiting the upper and lower bounds of each undamaged data point to obtain a processed data set y,
wherein δ is a scaling margin with a positive value greater than 1, and bias is an offset that prevents the processed data values from being less than 0;
step three, establishing a convolutional neural network model based on a coding-decoding framework, and inputting the processed data set obtained in the step two into the convolutional neural network model for training; wherein an optimization goal of the convolutional neural network training is to minimize a loss function value;
the loss function is expressed as follows:
E(z; x0) = ||(z − x0) ⊙ m||² (2)
wherein z represents the reconstructed data output by the convolutional neural network; m is the mask of the lost data, whose elements are 0 or 1 and whose indices correspond one-to-one with the data indices in the damaged data set; when the data at an index position in the damaged data set is not lost, the element at the same index position of m is 1, otherwise it is 0;
and step four, outputting the reconstructed data after the training of the convolutional neural network model is finished.
Further, the loss function is weighted, and the weighted loss function is then used as the optimization target for training the convolutional neural network to obtain the reconstructed data; the weighted loss function is:
E(z; x0) = ||(z − x0) ⊙ m||² ⊙ η (3)
wherein η is a weight vector whose length equals the length of the damaged data set to be repaired, that is, each index position of the damaged data set corresponds to one weight. The weight assignment principle is: the closer an index position is to damaged data, the larger its weight; the weight at a damaged position is 0; and if damaged data at several index positions simultaneously affect the weight of a given undamaged data point, a candidate weight is computed from the distance between each damaged data point and the undamaged data point, and the maximum candidate is taken as the weight of that undamaged data point.
Preferably, the weight is a rectangular window weight, specifically: for each undamaged data point, if the distance between its index position and the index position of damaged data exceeds a set threshold, its weight is 0; otherwise its weight is 1.
Preferably, the weight is a Gaussian window weight, and the weight ηi of the undamaged data at index position i is:
in the formula, λ is used to control the maximum value of the weight, that is, the weight closest to the position of the lost data is λ; sigma is a parameter for controlling the attenuation speed of the weight, and the larger the sigma is, the slower the attenuation is; the index set for corrupted data is L, and k is the corrupted data index position.
Further, when training the convolutional neural network:
firstly, an L2 regular penalty term L(θ) is added to the loss function of the convolutional neural network, and the sparsity of the network parameters is adjusted by controlling the regular-term coefficient υ; θ denotes the parameters of the neural network, and the optimal parameter θ* is obtained by the Adam optimization algorithm, as expressed in formula (5):
E(z; x0) = ||(z − x0) ⊙ m||² ⊙ η + L(θ) (5)
the calculation method of L(θ) is as shown in formula (6):
then, controlling network convergence by adopting an Adam optimization algorithm, wherein the method comprises the following specific steps:
setting the learning speed to LR; the attenuation coefficient of the moment estimate is ρ1、ρ2(ii) a A small constant term n; initializing a neural network coefficient theta;
step (1), initializing the first-order moment variable s = 0, the second-order moment variable r = 0, and the iteration count t = 1;
step (2), training an input damaged data set to be repaired, and outputting a loss value e of a loss function;
step (3), computing the gradient with respect to the coefficients θ and updating the iteration count: g ← ∇θ ft(θ), t ← t + 1; wherein ∇θ ft(θ) denotes the gradient of the loss ft with respect to θ;
step (4), updating the first-moment variable: s ← ρ1·s + (1 − ρ1)·g
step (5), updating the second-moment variable: r ← ρ2·r + (1 − ρ2)·g ⊙ g
step (6), correcting the bias of the moment estimates: ŝ ← s/(1 − ρ1^t), r̂ ← r/(1 − ρ2^t);
step (7), computing the parameter update: Δθ ← −LR · ŝ/(√r̂ + n);
step (8), updating the coefficients: θ ← θ + Δθ, and repeating steps (2) to (8);
and finally, after the iteration times t reach a set value or the change rate of the loss function meets a set condition, stopping the iteration of the convolutional neural network, and outputting reconstructed data.
The invention has the following beneficial effects:
the invention relates to a one-dimensional signal data restoration method based on a convolutional neural network, which comprises the steps of performing limit processing on damaged data, constructing a convolutional neural network model based on an Encoder-decoder framework, and performing weighting processing on a loss function in the convolutional neural network model to ensure that the restored data has more instantaneous characteristics;
the L2 regular term is added to avoid the overfitting phenomenon of the network, the dual condition limits of the iteration times and the loss function change rate are selected as the stopping conditions of training, the performance stability of the neural network is facilitated, and the data restoration efficiency is improved.
By adopting the Adam optimization algorithm, the convergence rate is high, the anti-noise performance is good, and the convolutional neural network model can be corrected more quickly.
Drawings
FIG. 1 is a schematic block diagram of the convolutional neural network of the invention, based on an encoder-decoder architecture;
FIG. 2 is a flow chart of the method of the present invention.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
As shown in fig. 2, a method for repairing one-dimensional signal data based on a convolutional neural network includes the following steps:
step one, acquiring damaged data x0 to be repaired;
Step two, in order to match the network output with the training set, the damaged data needs to be processed by limiting the upper and lower bounds of the data; each undamaged data point is mapped by equation (1) to limit its value range:
where δ is a scaling margin that ensures the output repair data can exceed the maximum, or fall below the minimum, of the known data; it is generally a positive number greater than 1. bias is an offset used to shift the data to reasonable positive values, preventing values less than 0 from falling outside the output range of the sigmoid activation function; y is the processed data.
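As a concrete illustration of this bound-limiting step, here is a minimal NumPy sketch. Equation (1) itself is not reproduced in this text, so the min-max form below is an assumption consistent with the stated roles of δ (scaling margin greater than 1) and bias (keeps the data positive); the parameter values are illustrative only.

```python
import numpy as np

def limit_bounds(x0, delta=1.2, bias=0.01):
    """Map the known data into a bounded positive range (assumed form of eq. (1)).

    delta > 1 leaves headroom so the network can output repaired values
    above the known maximum or below the known minimum; bias keeps every
    processed value strictly positive.
    """
    x_min, x_max = x0.min(), x0.max()
    y = (x0 - x_min) / (delta * (x_max - x_min)) + bias
    return y
```

With delta = 1.2 the known data occupies roughly [bias, 1/1.2 + bias], leaving margin at both ends of the sigmoid output range.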
Step three, establishing a convolutional neural network model based on an encoding-decoding (Encoder-decoder) architecture, as shown in fig. 1, comprising an encoding unit and a decoding unit. The data set processed in step two is input into the convolutional neural network model; the encoding unit compresses and encodes the data set, encoding the long, low-dimensional data signal into a short, high-dimensional representation, and the decoding unit decodes this representation back into a long, low-dimensional data signal. In this embodiment, a 7-layer encoding unit and a 7-layer decoding unit are adopted, with per-layer channel counts of 2, 4, 8, 16, 32, 64, and 128, respectively.
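The dimension bookkeeping of the 7-layer encoder can be sketched as follows. The text does not state the stride of each layer, so the assumption here is that each encoding layer halves the signal length (e.g. stride-2 convolutions); the input length 1024 is likewise only an example.

```python
# Shape progression through the assumed 7-layer stride-2 encoder.
length = 1024                          # example input length (assumption)
channels = [2, 4, 8, 16, 32, 64, 128]  # per-layer channel counts from the text
shapes = []
for c in channels:
    length //= 2                       # each layer halves the length (assumption)
    shapes.append((c, length))         # (channels, length) after this layer

print(shapes[-1])  # deepest code: 128 channels of length 8
```

The decoder would then mirror this progression, expanding the short 128-channel code back to the original length.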
Step four: the optimization goal of the convolutional neural network training is to minimize the loss function value. The invention selects the mean square error function to compute the error of the reconstructed signal z relative to the known lossy data, and a mask models the data loss by masking out the index positions of the missing data. m is the mask of the lost data, whose elements are 0 or 1 and whose indices correspond one-to-one with the damaged-data indices; if the data at an index position in the damaged data is not lost, the element at the same index position of m is 1, otherwise it is 0. The loss function is as in equation (2).
E(z; x0) = ||(z − x0) ⊙ m||² (2)
Minimizing the loss function value is the optimization target during neural network training. In the data recovery task, the loss function of the neural network is given by equation (2). To make the repaired data retain more transient characteristics, weighting is applied on the basis of equation (2), yielding equation (3).
E(z; x0) = ||(z − x0) ⊙ m||² ⊙ η (3)
Wherein x0 is the damaged data and z represents the reconstructed data output by the convolutional neural network; m is the mask of the lost data, whose elements are 0 or 1 and whose indices correspond one-to-one with the data indices in the damaged data set; when the data at an index position in the damaged data set is not lost, the element at the same index position of m is 1, otherwise it is 0 (because of the mask m, the weight value at lost positions is unimportant). If damaged data at several index positions simultaneously affect the weight of a given undamaged data point, a candidate weight is computed from the distance between each damaged data point and the undamaged data point, and the maximum candidate is taken as the weight of that undamaged data point.
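A minimal NumPy sketch of the masked, weighted loss follows. The notation "||·||² ⊙ η" is read here as weighting the element-wise squared errors by η before summing (so setting η to all ones recovers equation (2)); this interpretation is an assumption.

```python
import numpy as np

def masked_weighted_loss(z, x0, m, eta):
    """Weighted masked squared error, a sketch of equations (2)/(3).

    z   : reconstructed signal output by the network
    x0  : damaged input signal
    m   : 0/1 mask, 1 where x0 is known (undamaged), 0 where data is lost
    eta : per-index weight vector (all ones recovers equation (2))
    """
    err = (z - x0) * m                  # mask out the lost positions
    return float(np.sum(err**2 * eta))  # weight the squared errors, then sum
```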
The invention sets two weight calculation methods: a rectangular window weight and a gaussian window weight.
(1) Weight of rectangular window
For simplicity, the weights can be set to 0 or 1: for each undamaged data point, if the distance between its index position and the index position of damaged data exceeds a set threshold, its weight is 0; otherwise its weight is 1. Let the index set of the damaged data be L, i the index position of undamaged data, k the index position of damaged data, and Th the distance threshold; the assignment rule is then as in formula (4).
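The rule behind formula (4) can be sketched directly from the description above (the formula itself is not reproduced in this text):

```python
import numpy as np

def rect_window_weights(n, damaged, th):
    """Rectangular-window weights (sketch of the rule behind formula (4)).

    n       : signal length
    damaged : set of damaged index positions (the set L in the text)
    th      : distance threshold Th
    """
    eta = np.zeros(n)
    for i in range(n):
        if i in damaged:
            continue                      # damaged positions keep weight 0
        d = min(abs(i - k) for k in damaged)
        eta[i] = 1.0 if d <= th else 0.0  # within Th of damaged data -> 1
    return eta
```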
(2) Gaussian window weights
In order to enhance the contribution of data adjacent to the missing positions, weights that decay with the index-position distance according to a Gaussian law can be set, so that the weight ηi of the undamaged data at index position i is:
where λ controls the maximum value of the weight, i.e. the weight closest to the location of the missing data is λ, and σ controls the decay rate of the weights, with larger σ decaying more slowly. To simplify the calculation, the weight is set to 0 at positions whose distance exceeds 3σ, following the Gaussian 3σ principle.
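A sketch of the Gaussian-window weighting follows. The display formula is not reproduced in this text, so the standard Gaussian decay λ·exp(−(i−k)²/(2σ²)) is assumed, combined with the maximum-candidate rule and the 3σ cutoff stated above.

```python
import numpy as np

def gauss_window_weights(n, damaged, lam=1.0, sigma=2.0):
    """Gaussian-window weights (assumed form; see lead-in).

    For each undamaged index i, a candidate weight is computed per damaged
    index k, and the maximum candidate is kept. Beyond 3*sigma the weight
    is 0 (the 3-sigma rule); damaged positions keep weight 0.
    """
    eta = np.zeros(n)
    for i in range(n):
        if i in damaged:
            continue
        for k in damaged:
            d = abs(i - k)
            if d > 3 * sigma:
                continue                            # 3-sigma cutoff
            w = lam * np.exp(-d**2 / (2 * sigma**2))
            eta[i] = max(eta[i], w)                 # largest candidate wins
    return eta
```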
Step five: an L2 regular penalty term L(θ) is added to the loss function to avoid overfitting of the network, and the sparsity of the network parameters is adjusted by controlling the regular-term coefficient υ; θ denotes the parameters of the neural network, and the optimal parameter θ* is obtained by the Adam optimization algorithm, as expressed in formula (6):
E(z; x0) = ||(z − x0) ⊙ m||² ⊙ η + L(θ) (6)
the calculation method of L(θ) is as shown in formula (7):
the regular term of L2 can enable the deep neural network to contain more non-zero parameters close to 0, and the sparsity of the network is improved while more characteristic information is retained. In order to embody the advantages of the deep neural network and enable the network to better find detailed characteristics in signals, an L2 regular term is adopted to avoid the overfitting phenomenon of the network during the training of the network provided by the invention, and the sparsity of network parameters is adjusted by controlling the coefficient upsilon of the regular term.
Step six, the Adam optimization algorithm is selected for network convergence, specifically as follows:
inputting: the learning rate LR; the moment-estimate decay coefficients ρ1 and ρ2; a small constant term n (typically 10⁻⁸); and initializing the neural network coefficients θ;
step (1), initializing the first-order moment variable s = 0, the second-order moment variable r = 0, and the iteration count t = 1;
and (3) training the data of the training set in the step (2) and outputting a loss value e.
step (3), computing the gradient and updating the iteration count: g ← ∇θ ft(θ), t ← t + 1; wherein ∇θ ft(θ) denotes the gradient of the loss ft with respect to θ;
step (4), updating the first-moment variable: s ← ρ1·s + (1 − ρ1)·g
step (5), updating the second-moment variable: r ← ρ2·r + (1 − ρ2)·g ⊙ g
step (6), correcting the bias of the moment estimates: ŝ ← s/(1 − ρ1^t), r̂ ← r/(1 − ρ2^t);
step (7), computing the parameter update: Δθ ← −LR · ŝ/(√r̂ + n);
step (8), updating the coefficients: θ ← θ + Δθ, and repeating steps (2) to (8).
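One iteration of the Adam update described above can be sketched in NumPy. The bias-correction and update steps between (5) and (8) are not reproduced in this text, so the standard Adam form is filled in as an assumption; default hyperparameters are illustrative.

```python
import numpy as np

def adam_step(theta, grad, s, r, t, lr=1e-3, rho1=0.9, rho2=0.999, n=1e-8):
    """One Adam update, following steps (3)-(8) of the text.

    Steps (6)-(7) use the standard Adam bias correction (an assumption
    here, since those formulas are missing from this text).
    """
    s = rho1 * s + (1 - rho1) * grad            # step (4): first moment
    r = rho2 * r + (1 - rho2) * grad * grad     # step (5): second moment
    s_hat = s / (1 - rho1**t)                   # step (6): bias-corrected s
    r_hat = r / (1 - rho2**t)                   # step (7): bias-corrected r
    delta = -lr * s_hat / (np.sqrt(r_hat) + n)  # update direction
    theta = theta + delta                       # step (8): apply update
    return theta, s, r
```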
Step seven, the dual condition limit of the iteration count and the change rate of the loss function E(z; x0) is selected as the stopping condition of training: once the iteration count t reaches a set value, or the change rate of the loss function meets the set condition, the iteration of the convolutional neural network stops and the reconstructed data is output.
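The dual stopping condition can be sketched as follows; the threshold values (maximum iteration count and relative change-rate tolerance) are illustrative assumptions, as the text does not fix them.

```python
def should_stop(t, losses, max_iter=5000, rate_tol=1e-6):
    """Dual stopping condition from step seven: stop once the iteration
    count reaches max_iter OR the relative change rate of the loss falls
    below rate_tol. Threshold values are illustrative only.
    """
    if t >= max_iter:
        return True                       # iteration-count condition
    if len(losses) >= 2 and losses[-2] > 0:
        rate = abs(losses[-1] - losses[-2]) / losses[-2]
        return rate < rate_tol            # change-rate condition
    return False
```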
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (5)
1. A one-dimensional signal data restoration method based on a convolutional neural network is characterized by comprising the following steps:
step one, obtaining a damaged data set x0 to be repaired;
Step two, for the damaged data set x0, limiting the upper and lower bounds of each undamaged data point to obtain a processed data set y:
wherein δ is a scaling margin and has a positive value greater than 1; bias is an offset to prevent the processed data value from being less than 0;
step three, establishing a convolutional neural network model based on a coding-decoding framework, and inputting the processed data set obtained in the step two into the convolutional neural network model for training; wherein an optimization goal of the convolutional neural network training is to minimize a loss function value;
the loss function is expressed as follows:
E(z; x0) = ||(z − x0) ⊙ m||² (2)
wherein z represents the reconstructed data output by the convolutional neural network; m is the mask of the lost data, whose elements are 0 or 1 and whose indices correspond one-to-one with the data indices in the damaged data set; when the data at an index position in the damaged data set is not lost, the element at the same index position of m is 1, otherwise it is 0;
and step four, outputting the reconstructed data after the training of the convolutional neural network model is finished.
2. The one-dimensional signal data restoration method based on the convolutional neural network as claimed in claim 1, wherein the loss function is weighted, and then the weighted loss function is used as an optimization target of the convolutional neural network for training to obtain reconstructed data; wherein, the loss function after weighting processing is:
E(z; x0) = ||(z − x0) ⊙ m||² ⊙ η (3)
wherein η is a weight vector whose length equals the length of the damaged data set to be repaired, that is, each index position of the damaged data set corresponds to one weight. The weight assignment principle is: the closer an index position is to damaged data, the larger its weight; the weight at a damaged position is 0; and if damaged data at several index positions simultaneously affect the weight of a given undamaged data point, a candidate weight is computed from the distance between each damaged data point and the undamaged data point, and the maximum candidate is taken as the weight of that undamaged data point.
3. The one-dimensional signal data restoration method based on the convolutional neural network as claimed in claim 2, wherein the weight is a rectangular window weight, specifically: for each undamaged data point, if the distance between its index position and the index position of damaged data exceeds a set threshold, its weight is 0; otherwise its weight is 1.
4. The convolutional neural network-based one-dimensional signal data recovery method as claimed in claim 2, wherein the weight is a Gaussian window weight, and the weight ηi of the undamaged data at index position i is:
in the formula, λ is used to control the maximum value of the weight, that is, the weight closest to the position of the lost data is λ; sigma is a parameter for controlling the attenuation speed of the weight, and the larger the sigma is, the slower the attenuation is; the index set for corrupted data is L, and k is the corrupted data index position.
5. The convolutional neural network-based one-dimensional signal data recovery method as claimed in claim 2, wherein in training the convolutional neural network:
firstly, an L2 regular penalty term L(θ) is added to the loss function of the convolutional neural network, and the sparsity of the network parameters is adjusted by controlling the regular-term coefficient υ; θ denotes the parameters of the neural network, and the optimal parameter θ* is obtained by the Adam optimization algorithm, as expressed in formula (5):
E(z; x0) = ||(z − x0) ⊙ m||² ⊙ η + L(θ) (5)
the calculation method of L (theta) is as shown in formula (6):
then, controlling network convergence by adopting an Adam optimization algorithm, wherein the method comprises the following specific steps:
setting the learning rate LR; the moment-estimate decay coefficients ρ1 and ρ2; a small constant term n; and initializing the neural network coefficients θ;
step (1), initializing the first-order moment variable s = 0, the second-order moment variable r = 0, and the iteration count t = 1;
step (2), training an input damaged data set to be repaired, and outputting a loss value e of a loss function;
step (3), computing the gradient of the coefficients θ and updating the iteration count: g ← ∇θ ft(θ), t ← t + 1; wherein ∇θ ft(θ) denotes the gradient of the loss ft with respect to θ;
step (4), updating the first-moment variable: s ← ρ1·s + (1 − ρ1)·g
step (5), updating the second-moment variable: r ← ρ2·r + (1 − ρ2)·g ⊙ g
step (6), correcting the bias of the moment estimates: ŝ ← s/(1 − ρ1^t), r̂ ← r/(1 − ρ2^t);
step (7), computing the parameter update: Δθ ← −LR · ŝ/(√r̂ + n);
step (8), updating the coefficients: θ ← θ + Δθ, and repeating steps (2) to (8);
and finally, after the iteration times t reach a set value or the change rate of the loss function meets a set condition, stopping the iteration of the convolutional neural network, and outputting reconstructed data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811445812.1A CN109543822B (en) | 2018-11-29 | 2018-11-29 | One-dimensional signal data restoration method based on convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109543822A CN109543822A (en) | 2019-03-29 |
CN109543822B true CN109543822B (en) | 2021-08-10 |
Family
ID=65851293
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811445812.1A Active CN109543822B (en) | 2018-11-29 | 2018-11-29 | One-dimensional signal data restoration method based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109543822B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110307969A (en) * | 2019-05-23 | 2019-10-08 | 哈尔滨理工大学 | A kind of planetary gear failure prediction method based on function type data fitting and convolutional neural networks |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3156943A1 (en) * | 2015-10-16 | 2017-04-19 | Thomson Licensing | Method and device for clustering patches of a degraded version of an image |
ZA201708035B (en) * | 2016-12-03 | 2019-01-30 | Zensar Tech Limited | A computer implemented system and method for steganography |
CN108876864B (en) * | 2017-11-03 | 2022-03-08 | 北京旷视科技有限公司 | Image encoding method, image decoding method, image encoding device, image decoding device, electronic equipment and computer readable medium |
CN108090871B (en) * | 2017-12-15 | 2020-05-08 | 厦门大学 | Multi-contrast magnetic resonance image reconstruction method based on convolutional neural network |
CN108765338A (en) * | 2018-05-28 | 2018-11-06 | 西华大学 | Spatial target images restored method based on convolution own coding convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||