CN109547961B - Large data volume compressed sensing coding and decoding method in wireless sensor network - Google Patents
- Publication number
- CN109547961B (application CN201811445814.0A)
- Authority
- CN
- China
- Prior art keywords
- signal
- signals
- sparse
- sparse representation
- coding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/38—Services specially adapted for particular environments, situations or purposes for collecting sensor information
-
- G—PHYSICS
- G08—SIGNALLING
- G08C—TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
- G08C17/00—Arrangements for transmitting signals characterised by the use of a wireless electrical link
- G08C17/02—Arrangements for transmitting signals characterised by the use of a wireless electrical link using a radio link
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/3059—Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
- H03M7/3062—Compressive sampling or sensing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/02—Traffic management, e.g. flow control or congestion control
- H04W28/06—Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W84/00—Network topologies
- H04W84/18—Self-organising networks, e.g. ad-hoc networks or sensor networks
Abstract
The invention discloses a large-data-volume compressed sensing coding and decoding method in a wireless sensor network. The signal is decomposed level by level, with the main signal contained in the first levels, so the effective signal is protected, noise is attenuated, and signal accuracy improves. The sparse signals are compression-coded level by level, greatly reducing the total data volume of the decomposed signals and improving the real-time performance and accuracy of reconstruction. A total measurement matrix of dimension M_max×N is designed, with the measurement matrix of each level signal taken as a sub-matrix of it, so no redundant measurement matrices need to be stored and hardware resource usage is reduced. After the coded data of a single sub-signal has been transmitted, the original signal can be reconstructed and output: the sub-signal with the largest sparse-vector coefficients is reconstructed first, and higher-level sub-signals are then accumulated continuously, correcting the details and improving the real-time performance of data transmission.
Description
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to a large-data-volume compressed sensing coding and decoding method in a wireless sensor network.
Background
Most data collected by wireless sensor networks, such as temperature, humidity, vibration, displacement, and sound, are one-dimensional in nature. Data compression coding in wireless sensor networks is therefore mainly performed on such one-dimensional signals. Compression coding achieves data compression by exploiting the data correlations of the wireless sensor network, which include temporal correlations and spatial correlations. The corresponding compression algorithms can accordingly be divided into data compression algorithms based on temporal correlation, on spatial correlation, and on spatio-temporal correlation.
Most typical data compression algorithms and design ideas consider only data redundancy and ignore the instability and packet loss of wireless transmission. Luo, Lee, and others therefore introduced compressed sensing into the wireless sensor network field, successfully eliminating spatial data redundancy and making the data insensitive to packet loss. Xiong L et al. eliminated the spatio-temporal redundancy of the data with a compressed sensing algorithm combined with wavelet transformation. However, these algorithms require higher-dimensional measurement signals to guarantee reconstruction accuracy, which hurts the data compression rate and the real-time performance of terminal display.
Disclosure of Invention
In view of this, the present invention provides a compressed sensing coding and decoding method for large data volumes in a wireless sensor network, which improves the real-time performance and accuracy of data reconstruction and saves network transmission bandwidth.
A data compressed sensing coding and decoding method comprises the following steps:
step one, carrying out sparse transformation and classification on an original signal:
step 1), perform a sparse transformation on the original signal and compute its sparse representation α;
step 2), initialize the sparse representation of the residual signal: α_r = α;
Step 3), compute the sparse representation of the 1st-level signal, specifically:
for each element of the residual sparse representation α_r, determine whether its value is greater than or equal to σ·max(α_r), where the component coefficient σ is chosen according to the actual situation as a decimal between 0 and 1:
if yes, assign the element value to the corresponding position of the current level signal, then set that element of the residual α_r to 0;
if not, set the corresponding position of the current level signal to 0;
after traversing all elements of the residual α_r, the sparse representation of the current level signal and the updated residual sparse representation α_r are obtained, together with the sparsity of the current level signal;
step 4), decompose the current residual by the method of step 3) to obtain the next-level signal; repeat in this way until the decomposition level reaches a set threshold or the p-norm of the residual signal falls below a set threshold;
step two, the signals are coded and compressed step by step:
randomly generate an M×N-dimensional Bernoulli random matrix as the total measurement matrix Φ, where N is the dimension of the original signal and the value of M is determined by the sparsity K_n of the last-level signal: M ≥ c·K_n·log(N/K_n), where the value of c is determined from prior knowledge of the signal;
the measurement matrix Φ_i corresponding to the ith-level signal is the first M_i rows of the total measurement matrix Φ;
encode and compress the sparse representation of each level signal with its corresponding measurement matrix to obtain the coded signals;
step three, transmit the coded level signals in order starting from the first level; on receiving the coded signal of a level, determine its dimension, i.e. the number of rows of that level's measurement matrix, extract the corresponding measurement matrix from the total measurement matrix Φ, input the coded data and the measurement matrix into the orthogonal matching pursuit (OMP) algorithm, and output the sparse representation α_i' of the restored signal;
Step four, assuming the coded data of levels 1 through j have been restored, the reconstructed original signal is X' = Ψ(α_1' + α_2' + ⋯ + α_j'), where Ψ is the sparse basis used for the sparse transformation in step one.
Preferably, M = c·K_n·log(N/K_n), where c = 3.
The invention has the following beneficial effects:
the method of decomposing the signal stage by using sparse representation of the signal under sparse basis is adopted, the main signal is contained in the signals of the previous stages, the secondary signal is more posterior, and noise signals such as white noise signals are generally decomposed into residual errors to be eliminated. The effective signals required by people are effectively protected, the noise signals are weakened, and the accuracy of the signals is improved.
The sparse signals are subjected to hierarchical compression coding, the total data amount of the decomposed signals is greatly reduced, and the real-time performance and the accuracy of reconstruction can be improved.
A total measurement matrix of dimension M_max×N is designed, and the measurement matrix of each level signal is a sub-matrix of it; no redundant measurement matrices need to be stored, which reduces hardware resource usage.
After the coded data of a single sub-signal has been transmitted, the original signal can be reconstructed and output. The more sub-signals are reconstructed and the higher the level, the more detail of the original signal is recovered and the smaller the distortion. This level-by-level reconstruction differs from conventional signal reconstruction: the reconstructed signal can be output without receiving all of the compressed coded information. Low-level sub-signal codes are transmitted in the network first, so the sub-signal with the largest sparse-vector coefficients is reconstructed first; higher-level sub-signals are then accumulated continuously, correcting the details and improving the real-time performance of data transmission.
Drawings
Fig. 1 is a flowchart of a large data volume compressed sensing coding and decoding method in a wireless sensor network according to the present invention.
FIG. 2 is a flow chart of the sparse vector coefficient-based decomposition process of the present invention.
Fig. 3 is a flow chart of the level-by-level signal reconstruction of the present invention.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention provides a large data volume compressed sensing coding and decoding method in a wireless sensor network, as shown in figure 1, the method comprises the following steps:
step one, carrying out sparse transformation and classification on an original signal:
the signal is decomposed in the sparse representation size of the sparse basis signal, and the range of the sparse representation size of each layered signal is controlled by controlling the size of the component coefficient sigma. Sparse representation refers to the values of a signal that have undergone a sparse transformation. The most dominant, sparsest signal is decomposed first from the total signal, the less dominant signal is decomposed later from the residual, and so on. The signals decomposed at the beginning are sparse, the sparsity of the signals is worse and worse along with the gradual progress of the decomposition, the energy is smaller and smaller, and the residual final residual signals are ignored;
Specifically, the signal is sparsified according to formula (1):
X = Ψα = Ψ(α_1 + α_2 + ⋯ + α_n + α_r) = X_1 + X_2 + ⋯ + X_n + r (1)
where X is the N×1 original signal, Ψ is the N×N sparse basis, α is the N×1 sparse representation of the original signal under the sparse basis Ψ, r is the N×1 residual, X_i is the ith-level signal obtained by the decomposition, α_i is the N×1 sparse representation of each level signal under Ψ, and α_r is the N×1 sparse representation of the residual signal under Ψ.
The invention adopts an adaptive threshold method to define the value range of the elements of α_i in each level signal:
σ·max(α_r) ≤ α_i(n) ≤ max(α_r) (2)
The component coefficient σ is chosen according to the actual situation as a decimal between 0 and 1. Formula (2) states that the elements of the ith-level signal α_i take their values from σ·max(α_r) to max(α_r) of the current residual α_r (i.e., the residual obtained after decomposing the (i-1)th-level signal).
As shown in fig. 2, the specific signal decomposition steps are as follows:
Step 1), first, perform a sparse transformation on the original signal and compute its sparse representation α.
Step 2), initialize the sparse representation of the residual signal, α_r = α; the decomposition level is i = 1.
Step 3), compute the sparse representation α_i of the ith-level signal, specifically:
for each element of the residual α_r, determine whether its value is greater than or equal to σ·max(α_r):
if yes, assign the element value to the corresponding position of the current level signal, α_i(j) = α_r(j), and set that element of the residual α_r to 0, i.e. α'_r(j) = 0;
if not, set the corresponding position of the current level signal to 0;
after traversing all elements of the residual α_r, the current level signal and the updated residual α_r are obtained, along with the sparsity K_i of the current level signal;
Step 4), decompose the current residual by the method of step 3) to obtain the next-level signal; repeat until the decomposition level reaches the set threshold n or the p-norm of the residual signal falls below a set threshold.
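The decomposition steps above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the function name, the default parameter values, the stopping norm, and the use of absolute values for signed elements are all assumptions.

```python
import numpy as np

def decompose(alpha, sigma=0.5, max_levels=5, eps=1e-6):
    """Level-by-level threshold decomposition of a sparse representation.

    Sketch of step one; sigma, max_levels, eps and the use of absolute
    values for signed elements are assumptions for illustration.
    """
    alpha_r = np.asarray(alpha, dtype=float).copy()
    levels = []
    for _ in range(max_levels):
        if np.linalg.norm(alpha_r) < eps:   # residual small enough: stop
            break
        thresh = sigma * np.max(np.abs(alpha_r))
        mask = np.abs(alpha_r) >= thresh
        levels.append(np.where(mask, alpha_r, 0.0))  # dominant elements -> this level
        alpha_r = np.where(mask, 0.0, alpha_r)       # zero them in the residual
    return levels, alpha_r

alpha = np.array([9.0, 0.2, 5.0, 0.0, 1.0, 0.1])
levels, residual = decompose(alpha, sigma=0.5, max_levels=3)
```

By construction, the level signals plus the final residual always sum back to the original sparse representation, which mirrors formula (1).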
After the original signal is sparsely transformed and graded, it is decomposed into the level-signal sparse representations α_1, α_2, α_3, …, α_n. In a signal decomposed this way, the main signal is contained in the first levels, the secondary signals come later, and noise such as white noise is generally decomposed into the residual and eliminated.
Step two, the signals are coded and compressed step by step:
The method selects a Bernoulli random matrix as the measurement matrix for signal coding; this matrix is uncorrelated with the sparse basis. Each level signal is encoded and compressed with the measurement matrix as shown in formula (3):
y_i = Φ_i α_i (3)
where α_i is the sparse representation of the ith-level signal and Φ_i is the measurement matrix corresponding to that level; y_1, y_2, …, y_n are the compressed signals of each level, of dimension M_i×1. The number of rows M_i depends on the sparsity of the level signal: the smaller the sparsity, the smaller M_i can be and the better the compression effect. The number of rows cannot, however, be reduced without limit, and generally must exceed a certain lower bound.
To restore the original signal well, the number of rows of the Bernoulli random matrix must satisfy
M ≥ c·K·log(N/K) (4)
where the value of c is determined from prior knowledge of the signal and is generally taken as 3, and K is the sparsity of the signal.
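As a quick numeric illustration of formula (4), the minimum row count M can be computed as follows. The values N = 1024 and K = 20 are assumed for illustration, and since the patent does not fix the logarithm base, the natural logarithm is used here.

```python
import math

def min_measurements(N, K, c=3):
    """Smallest integer M satisfying formula (4): M >= c * K * log(N / K)."""
    return math.ceil(c * K * math.log(N / K))

# Illustrative values (assumed): a 1024-sample signal with sparsity 20.
M = min_measurements(1024, 20)  # 3 * 20 * ln(51.2) rounded up
```

A sparser level (smaller K) yields a smaller M, which is why lower levels compress better.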
The sparsity K of the level signals differs from level to level: lower-level signals are sparser and require fewer measurement-matrix rows. Defining a separate measurement matrix for each level would waste hardware resources. To avoid storing redundant measurement matrices, the invention randomly generates an M×N-dimensional Bernoulli random matrix as the total measurement matrix Φ, where the value of M is determined by the sparsity K_n of the last-level (nth-level) signal.
The measurement matrix Φ is a sufficiently large Bernoulli random matrix. The measurement matrix of each level signal is a sub-matrix of it: the number of columns equals that of the total measurement matrix, and the rows are the first M_i rows of Φ, selected from row 1 through row M_i as required by each level signal.
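The shared total measurement matrix and per-level encoding of formula (3) can be sketched as below. The dimensions, per-level sparsity values, and the ±1 Bernoulli construction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                    # original signal dimension (assumed)
K_levels = [2, 4, 6]      # per-level sparsity, lowest level first (illustrative)
c = 3

# Rows needed per level from formula (4); the last (least sparse) level
# fixes M, the row count of the single shared total matrix.
M_levels = [int(np.ceil(c * K * np.log(N / K))) for K in K_levels]
M = M_levels[-1]

# Total Bernoulli random measurement matrix Phi: entries +/-1, equiprobable.
Phi = rng.choice([-1.0, 1.0], size=(M, N))

def encode(alpha_i, M_i):
    """Formula (3): encode one level with the first M_i rows of Phi."""
    return Phi[:M_i] @ alpha_i

alpha1 = np.zeros(N)
alpha1[[3, 17]] = [5.0, -2.0]      # a K=2 sparse level-1 representation
y1 = encode(alpha1, M_levels[0])   # compressed to M_levels[0] samples
```

Only the single M×N matrix is stored; every level reuses its leading rows, which is the resource saving the text describes.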
Step three, transmitting each level of partial signals after completing coding from the first level, and coding data y of single partial signaliAfter the transmission is completed, the original signal can be reconstructed and output, specifically:
encoded data y for received i-th level signali(MiX 1D data) determining the dimension of the fraction signal, i.e. the number M of rows of the measurement matrix corresponding to the fraction signali(ii) a Selecting front M from measurement matrix phiiRow data to obtain a measurement matrix phii. Will encode data yiAnd a measurement matrix phiiInputting OMP reconstruction algorithm, and outputting sparse representation α 'of reduction signal'i。
The more level signals are reconstructed and the higher the level, the more detail of the original signal is recovered and the smaller the distortion. This progressive reconstruction differs from conventional signal reconstruction: the reconstructed signal can be output before all compressed coded information has been received. The progressive reconstruction process is shown in fig. 3. In the network, the low-level signal codes are transmitted first, so the level signals with the largest sparse-vector coefficients are reconstructed first; higher-level signals are then accumulated continuously to correct the details. If reconstruction through the jth-level signal is complete, the original signal is reconstructed as formula (5):
X'_j = Ψ(α'_1 + α'_2 + ⋯ + α'_j) (5)
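The progressive decoding described above can be sketched with a minimal orthogonal matching pursuit routine. The identity sparse basis, the level contents, the shared row count M_i = 25, and the OMP implementation details are all assumptions for illustration, not taken from the patent.

```python
import numpy as np

def omp(Phi, y, K, tol=1e-9):
    """Minimal orthogonal matching pursuit: recover a K-sparse alpha from y = Phi @ alpha."""
    residual = y.astype(float).copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(K):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))  # best-matching column
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    alpha_hat = np.zeros(Phi.shape[1])
    alpha_hat[support] = coeffs
    return alpha_hat

rng = np.random.default_rng(1)
N = 64
Psi = np.eye(N)                             # identity sparse basis (demo assumption)
Phi = rng.choice([-1.0, 1.0], size=(40, N)) # shared Bernoulli measurement matrix

alpha1 = np.zeros(N); alpha1[[3, 17]] = [5.0, -2.0]  # level 1, K=2
alpha2 = np.zeros(N); alpha2[[8]] = [0.7]            # level 2, K=1

# Progressive reconstruction: decode each level as it arrives and
# accumulate the partial signals, as in formula (5).
x_hat = np.zeros(N)
for alpha_i, K_i, M_i in [(alpha1, 2, 25), (alpha2, 1, 25)]:
    y_i = Phi[:M_i] @ alpha_i              # received coded sub-signal
    alpha_rec = omp(Phi[:M_i], y_i, K_i)   # decode with the matching sub-matrix
    x_hat = x_hat + Psi @ alpha_rec        # accumulate: X' = Psi * sum(alpha_i')
```

After the first loop iteration x_hat already approximates the dominant part of the signal; the second iteration only adds detail, which is the real-time benefit the text claims.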
in summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (2)
1. A data compressed sensing coding and decoding method, characterized by comprising the following steps:
step one, carrying out sparse transformation and classification on an original signal:
step 1), perform a sparse transformation on the original signal and compute its sparse representation α;
step 2), initialize the sparse representation of the residual signal: α_r = α;
Step 3), compute the sparse representation of the 1st-level signal, specifically:
for each element of the residual signal's sparse representation α_r, determine whether its value is greater than or equal to σ·max(α_r), where the component coefficient σ is chosen according to the actual situation as a decimal between 0 and 1:
if yes, assign the element value to the corresponding position of the current level signal, and set that element of the residual signal's sparse representation α_r to 0;
if not, set the corresponding position of the current level signal to 0;
after traversing all elements of the sparse representation α_r of the residual signal, the sparse representation of the current level signal and the updated sparse representation α_r of the residual signal are obtained, together with the sparsity of the current level signal;
step 4), decompose the current residual signal's sparse representation α_r by the method of step 3) to obtain the next-level signal; repeat in this way until the decomposition level reaches a set threshold or the p-norm of the residual signal falls below a set threshold;
step two, the signals are coded and compressed step by step:
randomly generate an M×N-dimensional Bernoulli random matrix as the total measurement matrix Φ, where N is the dimension of the original signal and the value of M is determined by the sparsity K_n of the last-level signal: M ≥ c·K_n·log(N/K_n), where the value of c is determined from prior knowledge of the signal;
the measurement matrix Φ_i corresponding to the ith-level signal is the first M_i rows of the total measurement matrix Φ;
encode and compress the sparse representation of each level signal with its corresponding measurement matrix to obtain the coded signals;
step three, transmit the coded level signals in order starting from the first level; on receiving the coded signal of a level, determine its dimension, i.e. the number of rows of that level's measurement matrix, extract the corresponding measurement matrix from the total measurement matrix Φ, input the coded data and the measurement matrix into the orthogonal matching pursuit (OMP) algorithm, and output the sparse representation α_i' of the restored signal;
Step four, assuming the coded data of levels 1 through j have been restored, the reconstructed original signal is X' = Ψ(α_1' + α_2' + ⋯ + α_j'), where Ψ is the sparse basis used for the sparse transformation in step one.
2. The method as claimed in claim 1, wherein M = c·K_n·log(N/K_n), with c = 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811445814.0A CN109547961B (en) | 2018-11-29 | 2018-11-29 | Large data volume compressed sensing coding and decoding method in wireless sensor network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109547961A CN109547961A (en) | 2019-03-29 |
CN109547961B true CN109547961B (en) | 2020-06-09 |
Family
ID=65851204
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811445814.0A Active CN109547961B (en) | 2018-11-29 | 2018-11-29 | Large data volume compressed sensing coding and decoding method in wireless sensor network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109547961B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111093166B (en) * | 2019-12-06 | 2022-07-19 | 北京京航计算通讯研究所 | Compressed data collection system using sparse measurement matrix in internet of things |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102665221A (en) * | 2012-03-26 | 2012-09-12 | 南京邮电大学 | Cognitive radio frequency spectrum perception method based on compressed sensing and BP (back-propagation) neural network |
CN107113769A (en) * | 2014-11-28 | 2017-08-29 | 高通股份有限公司 | Interference mitigation for location reference signals |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8630341B2 (en) * | 2011-01-14 | 2014-01-14 | Mitsubishi Electric Research Laboratories, Inc. | Method for training and utilizing separable transforms for video coding |
CN103037212B (en) * | 2011-10-08 | 2016-02-10 | 太原科技大学 | The adaptive block compressed sensing method for encoding images of view-based access control model perception |
CN103944579B (en) * | 2014-04-10 | 2017-06-20 | 东华大学 | A kind of coding/decoding system of compressed sensing reconstruct |
US20170185900A1 (en) * | 2015-12-26 | 2017-06-29 | Intel Corporation | Reconstruction of signals using a Gramian Matrix |
CN108471531B (en) * | 2018-03-22 | 2020-02-07 | 南京邮电大学 | Quality gradable rapid coding method based on compressed sensing |
- 2018-11-29: application CN201811445814.0A filed; granted as CN109547961B (Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107832837B (en) | Convolutional neural network compression method and decompression method based on compressed sensing principle | |
CN110248190B (en) | Multilayer residual coefficient image coding method based on compressed sensing | |
WO2014138633A2 (en) | Systems and methods for digital media compression and recompression | |
CN105306956A (en) | Method for increasing discrete cosine transform processing speed of HEVC coder | |
Li et al. | Multiple description coding based on convolutional auto-encoder | |
CN109547961B (en) | Large data volume compressed sensing coding and decoding method in wireless sensor network | |
Karthikeyan et al. | An efficient image compression method by using optimized discrete wavelet transform and Huffman encoder | |
CN107920250B (en) | Compressed sensing image coding transmission method | |
CN113676187A (en) | Huffman correction coding method, system and related components | |
RU2419246C1 (en) | Method to compress and recover fixed halftone video images | |
Arora et al. | Review of Image Compression Techniques | |
CN109951711B (en) | EZW data compression method based on random threshold adjustment | |
CN106331719A (en) | K-L transformation error space dividing based image data compression method | |
Vaish et al. | A new Image compression technique using principal component analysis and Huffman coding | |
Nazar et al. | Implementation of JPEG-LS compression algorithm for real time applications | |
CN115314156B (en) | LDPC coding and decoding method and system based on self-coding network | |
CN107612556B (en) | Optimal entropy coding method for L loyd-Max quantizer | |
Hui et al. | An Image Compression Scheme Based on Block Truncation Coding Using Real-time Block Classification and Modified Threshold for Pixels Grouping | |
Kranthi et al. | Enhanced image compression algorithm for image processing applications | |
Chen | Side Information-based distributed source coding with Low-Density Parity-Check code | |
Gashnikov | Hierarchical Representation of Plain Areas of Post-Interpolation Residuals for Image Compression | |
Ma et al. | Embedded zerotrees wavelet image coding using source polar codes | |
KR100590184B1 (en) | A method for designing codebook in channel-optimized vector quantization | |
Waysi et al. | Enhanced Image Coding Scheme Based on Modified Embedded Zerotree Wavelet Transform (DMEZW) | |
Utkarsh et al. | Image Compression: A Comparative Study between ANN and Traditional Approach |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||