CN110070583A - Signal compression and restoration method and system based on tensor decomposition and deep learning - Google Patents

Signal compression and restoration method and system based on tensor decomposition and deep learning

Info

Publication number
CN110070583A
CN110070583A (application CN201910309593.2A / CN201910309593A; publication CN 110070583 A)
Authority
CN
China
Prior art keywords
signal
neural network
compression
tensor decomposition
tensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910309593.2A
Other languages
Chinese (zh)
Inventor
杨昉
邹琮
潘长勇
宋健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201910309593.2A priority Critical patent/CN110070583A/en
Publication of CN110070583A publication Critical patent/CN110070583A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 - Image coding
    • G06T9/002 - Image coding using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Algebra (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention discloses a signal compression and recovery method and system based on tensor decomposition and deep learning. The method includes the following steps: generating a measurement matrix according to a preset signal sampling rate; multiplying the original signal by the measurement matrix to generate a compressed signal; building a neural network for signal recovery based on a tensor decomposition method, decomposing the neural network by the tensor decomposition method so that its output tends toward the original signal, and obtaining a trained neural network using the compressed signal and the original signal; and multiplying a test signal by the measurement matrix to obtain a compressed test signal, which is input into the trained neural network to obtain the recovered test signal. This method can substantially reduce the time needed for signal recovery, greatly reduce the number of network parameters and the required computation space, and maintain high recovery precision even when the signal sampling rate is low.

Description

Signal compression and restoration method and system based on tensor decomposition and deep learning
Technical field
The present invention relates to the field of signal processing technology, and in particular to a signal compression and recovery method and system based on tensor decomposition and deep learning.
Background art
As people's resolution requirements for multimedia content such as images and videos grow higher, the data volume produced by sampling according to the Nyquist sampling theorem becomes excessive, which is unfavorable for storage and transmission. At the same time, the data itself contains much redundancy and can be compressed further; compressed sensing (CS) technology was therefore proposed. Compressed sensing is a method for acquiring and reconstructing sparse or compressible signals. It exploits the sparsity of the signal so that, compared with Nyquist theory, the complete original signal can be recovered from fewer measurements. However, traditional compressed sensing methods have two main problems: first, many real signals are not fully sparse on any basis, which limits the precision of signal recovery; second, traditional compressed sensing algorithms converge slowly, which limits the fields in which compressed sensing can be applied. In recent years, much research has applied deep learning frameworks to the compressed sensing field, which can effectively solve the above two problems.
However, when the length of the signal increases, the size of the neural network also increases, which leads to excessive computational complexity and huge storage requirements, and may even cause overfitting. The most common existing solution is to divide a large signal into several small signals and then compress and recover each separately. Although this solves the problem of excessive signal dimension, it often introduces blocking artifacts in the reassembled large signal, especially when the signal sampling rate is relatively low. It is therefore necessary to propose a signal recovery method that can greatly compress the storage space required by the neural network, reduce computational complexity, and reduce computation time.
Summary of the invention
The present invention aims to solve at least one of the technical problems in the related art.
To this end, one object of the present invention is to provide a signal compression and recovery method based on tensor decomposition and deep learning. This method can greatly compress the storage space required by the neural network, reduce computational complexity, and reduce computation time.
Another object of the present invention is to propose a signal compression and recovery system based on tensor decomposition and deep learning, which can compress and recover a complete large signal without segmentation, effectively solving the problems of excessive signal dimension and blocking artifacts.
To achieve the above objects, one aspect of the present invention proposes a signal compression and recovery method based on tensor decomposition and deep learning, comprising the following steps: step S1, generating a measurement matrix according to a preset signal sampling rate; step S2, multiplying the original signal by the measurement matrix to generate a compressed signal; step S3, building a neural network for signal recovery based on a tensor decomposition method, decomposing the neural network by the tensor decomposition method so that its output tends toward the original signal, and obtaining a trained neural network using the compressed signal and the original signal; step S4, multiplying a test signal by the measurement matrix to obtain a compressed test signal, and inputting the compressed test signal into the trained neural network to obtain the recovered test signal.
In the signal compression and recovery method based on tensor decomposition and deep learning of the embodiments of the present invention, the tensor decomposition method is applied to the fully connected layers of the neural network, decomposing their weight matrices. This reduces the computation and storage space of the neural network and lowers computational complexity, so that a large signal can be recovered at once, completely, without segmentation.
In addition, the signal compression and recovery method based on tensor decomposition and deep learning according to the above embodiments of the present invention may also have the following additional technical features:
Further, in one embodiment of the present invention, step S1 further comprises: step S101, generating a matrix obeying a Gaussian distribution; step S102, orthogonalizing the rows of the matrix according to the preset signal sampling rate to obtain the measurement matrix.
Further, in one embodiment of the present invention, the neural network comprises multiple fully connected layers, wherein each fully connected layer includes a weight matrix, and the tensor decomposition method is used to decompose the weight matrix to compress computation and storage space.
Further, in one embodiment of the present invention, the number of neurons in each fully connected layer of the neural network is determined by the dimension of the compressed signal, wherein the dimension grows gradually from that of the compressed signal to that of the original signal according to a preset growth pattern.
Further, in one embodiment of the present invention, the preset growth pattern includes the arithmetic progression principle, the geometric progression principle, and the minimum computational complexity principle.
Further, in one embodiment of the present invention, the tensor decomposition method includes CANDECOMP/PARAFAC (CP) decomposition, Tucker decomposition, and Tensor-Train decomposition.
Further, in one embodiment of the present invention, step S3 further comprises:
step S301, setting the compressed signal as the input data of the neural network and the original signal as the target data of the neural network, and building the neural network according to the preset growth pattern;
step S302, decomposing the multi-layer fully connected neural network by the tensor decomposition method, obtaining an output that tends toward the original signal, and completing the training process of the neural network.
Further, in one embodiment of the present invention, during training of the neural network, the multiplication of each fully connected layer's input data by its weight matrix is replaced by multiplication with the tensor sequence obtained after decomposition.
Further, in one embodiment of the present invention, in step S3, the neural network measures the degree of inconsistency between its output and the original signal by the mean squared error (MSE) to judge the training stopping condition, wherein the stopping condition is that the absolute difference between the MSE values of two successive cycles is less than a preset minimum, that the MSE of a single cycle is less than a preset minimum, or that the number of cycles reaches a maximum.
To achieve the above objects, another aspect of the present invention proposes a signal compression and recovery system based on tensor decomposition and deep learning, comprising: a measurement matrix generation module for generating a measurement matrix according to a preset signal sampling rate; a compressed signal generation module for multiplying the original signal by the measurement matrix to generate a compressed signal; a neural network decomposition module for building a neural network for signal recovery based on a tensor decomposition method, decomposing the neural network by the tensor decomposition method so that its output tends toward the original signal, and obtaining a trained neural network using the compressed signal and the original signal; and a signal compression and recovery module for multiplying a test signal by the measurement matrix to obtain a compressed test signal and inputting the compressed test signal into the trained neural network to obtain the recovered test signal.
In the signal compression and recovery system based on tensor decomposition and deep learning of the embodiments of the present invention, the tensor decomposition method is applied to the fully connected layers of the neural network, decomposing their weight matrices. This reduces the computation and storage space of the neural network and lowers computational complexity, so that a large signal can be recovered at once, completely, without segmentation.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become obvious from the description, or may be learned by practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become obvious and easily understood from the following description of embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of the signal compression and recovery method based on tensor decomposition and deep learning according to an embodiment of the present invention;
Fig. 2 is a flow chart of the signal compression and recovery method based on tensor decomposition and deep learning according to specific embodiments one to three of the present invention;
Fig. 3 is a structural diagram of the neural network built according to an embodiment of the present invention;
Fig. 4 is a comparison of the original signal and the recovered signal according to an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of the signal compression and recovery system based on tensor decomposition and deep learning according to an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain the present invention; they should not be construed as limiting it.
The signal compression and recovery method and system based on tensor decomposition and deep learning proposed according to embodiments of the present invention are described below with reference to the accompanying drawings, starting with the method.
Fig. 1 is a flow chart of the signal compression and recovery method based on tensor decomposition and deep learning of one embodiment of the present invention.
As shown in Fig. 1, the signal compression and recovery method based on tensor decomposition and deep learning comprises the following steps:
In step S1, a measurement matrix is generated according to a preset signal sampling rate.
Specifically, step S1 further comprises: step S101, generating a matrix obeying a Gaussian distribution; step S102, orthogonalizing the rows of the matrix according to the preset signal sampling rate to obtain the measurement matrix.
That is, the measurement matrix is generated by first randomly generating a matrix obeying a Gaussian distribution and then orthogonalizing its rows.
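As a concrete illustration of this generation procedure, a minimal NumPy sketch follows. The function name, the QR-based row orthogonalization, and the random seed are illustrative assumptions; the patent does not specify a particular orthogonalization routine.

```python
import numpy as np

def measurement_matrix(n, sampling_rate, std=0.2, seed=0):
    """Generate an m x n measurement matrix (m = sampling_rate * n):
    draw i.i.d. Gaussian entries, then orthogonalize the rows."""
    m = int(round(sampling_rate * n))
    rng = np.random.default_rng(seed)
    phi = rng.normal(0.0, std, size=(m, n))
    # Orthonormalize the rows: reduced QR on the transpose yields an
    # n x m factor with orthonormal columns; its transpose has
    # orthonormal rows.
    q, _ = np.linalg.qr(phi.T)   # q: (n, m)
    return q.T                   # (m, n), rows orthonormal

# dimensions from Embodiment 2: n = 1024, sampling rate 0.25
phi = measurement_matrix(1024, 0.25)
print(phi.shape)                              # (256, 1024)
print(np.allclose(phi @ phi.T, np.eye(256)))  # True
```

Compression (step S2) is then simply `y = phi @ x` for an original signal `x` of length 1024.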
In step S2, the original signal is multiplied by the measurement matrix to generate the compressed signal.
In step S3, a neural network for signal recovery based on a tensor decomposition method is built; the neural network is decomposed by the tensor decomposition method so that its output tends toward the original signal, and a trained neural network is obtained using the compressed signal and the original signal.
The neural network comprises multiple fully connected layers, each containing a weight matrix; preferably, it comprises three fully connected layers.
In addition, the number of neurons in each fully connected layer is determined by the dimension of the compressed signal: the dimension grows gradually from that of the compressed signal to that of the original signal according to a preset growth pattern. The preset growth pattern includes the arithmetic progression principle, the geometric progression principle, and the minimum computational complexity principle.
That is, the number of neurons in each fully connected layer of the neural network built in step S3 grows progressively from the compressed signal dimension M to the original signal dimension N, and the growth method follows the arithmetic progression principle, the geometric progression principle, or the minimum computational complexity principle.
For example, the neural network built in step S3 preferably includes three fully connected layers whose neuron numbers follow the arithmetic progression principle: the first layer has M inputs and 0.5N outputs; the second layer has 0.5N inputs and 0.75N outputs; the last layer has 0.75N inputs and N outputs.
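This sizing rule can be sketched as a small helper (the function name is an illustrative assumption); for the dimensions of Embodiment 1 below it reproduces the stated layer shapes:

```python
def fc_layer_shapes(m, n):
    """(input, output) shapes of the three fully connected layers:
    outputs 0.5N, 0.75N, N (arithmetic progression with step 0.25N),
    starting from the compressed-signal dimension m."""
    outs = [n // 2, 3 * n // 4, n]
    ins = [m] + outs[:-1]
    return list(zip(ins, outs))

print(fc_layer_shapes(576, 14400))
# [(576, 7200), (7200, 10800), (10800, 14400)]  -- matches Embodiment 1
```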
Specifically, step S3 further comprises: step S301, setting the compressed signal as the input data of the neural network and the original signal as its target data, and building the neural network according to the preset growth pattern; step S302, decomposing the fully connected layers of the neural network by the tensor decomposition method, obtaining an output that tends toward the original signal, and completing the training of the neural network.
It should be noted that the weight matrix of each fully connected layer of the neural network is decomposed by a tensor decomposition method, including but not limited to CP decomposition, Tucker decomposition, and Tensor-Train decomposition.
For example, the weight matrix W ∈ R^(M×N) can be decomposed by the Tensor-Train method: the weight matrix is rearranged into a tensor and decomposed into a sequence of third-order tensors {G_k}, k = 1, …, d, where d is both the order of the rearranged tensor and the number of third-order tensors. The decomposition proceeds as follows:
1) Factor M and N: M = m_1 × m_2 × … × m_d and N = n_1 × n_2 × … × n_d.
2) Rearrange the elements of the matrix W into a d-order tensor whose k-th mode has dimension m_k · n_k; below, n_k denotes this combined mode dimension, as in the embodiments.
3) Decompose the d-order tensor so that, for every order k = 1, …, d and every mode index j_k = 0, …, n_k − 1, an arbitrary element of the tensor is obtained as a product of d matrices:
W[j_1, …, j_d] = G_1[j_1] · G_2[j_2] · … · G_d[j_d].
The matrices {G_k[j_k] | j_k = 0, …, n_k − 1} corresponding to a single order k all have size r_(k−1) × r_k, and they can be merged into a third-order tensor G_k of size r_(k−1) × r_k × n_k. The r_k are the ranks of the tensor sequence (TT-ranks), where r_0 = r_d = 1 must hold so that the matrix product is a scalar.
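One way to realize the decomposition just described is the standard TT-SVD construction, sketched below in NumPy. The function names and the sequential-SVD approach are illustrative assumptions (the patent does not fix a particular TT algorithm); the sketch merges each row factor m_k with its column factor n_k into one mode, as in the text.

```python
import numpy as np

def tt_decompose_matrix(W, ms, ns, max_rank):
    """Tensor-Train decomposition of a weight matrix W of size M x N,
    where M = prod(ms) and N = prod(ns). Returns d cores, the k-th of
    shape (r_{k-1}, m_k * n_k, r_k), with r_0 = r_d = 1."""
    d = len(ms)
    # Reshape W to (m1..md, n1..nd), interleave to (m1, n1, m2, n2, ...),
    # then merge each (m_k, n_k) pair into one mode of size m_k * n_k.
    T = W.reshape(ms + ns)
    perm = [i // 2 + (i % 2) * d for i in range(2 * d)]
    T = T.transpose(perm).reshape([m * n for m, n in zip(ms, ns)])
    cores, r_prev = [], 1
    for k in range(d - 1):
        T = T.reshape(r_prev * ms[k] * ns[k], -1)
        U, S, Vt = np.linalg.svd(T, full_matrices=False)
        r = min(max_rank, len(S))          # truncate to the TT-rank budget
        cores.append(U[:, :r].reshape(r_prev, ms[k] * ns[k], r))
        T = S[:r, None] * Vt[:r]
        r_prev = r
    cores.append(T.reshape(r_prev, ms[-1] * ns[-1], 1))
    return cores

def tt_reconstruct(cores, ms, ns):
    """Contract the cores back into a dense M x N matrix (for checking)."""
    d = len(ms)
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    T = T.reshape([x for pair in zip(ms, ns) for x in pair])
    inv = [2 * i for i in range(d)] + [2 * i + 1 for i in range(d)]
    return T.transpose(inv).reshape(int(np.prod(ms)), int(np.prod(ns)))

# With an untruncated rank budget the decomposition is exact.
rng = np.random.default_rng(0)
W = rng.normal(size=(6, 6))
cores = tt_decompose_matrix(W, ms=[2, 3], ns=[3, 2], max_rank=36)
print(np.allclose(tt_reconstruct(cores, [2, 3], [3, 2]), W))  # True
```

Choosing `max_rank` smaller than needed for exactness gives the lossy compression trade-off the patent discusses (e.g., r_1 = 135 in Embodiment 1).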
During training of the neural network, the multiplication of each fully connected layer's input data by the weight matrix W is replaced by multiplication with the tensor sequence {G_k}.
It should be noted that in step S3, the neural network measures the degree of inconsistency between its output and the original signal by the mean squared error to judge the training stopping condition: the stopping condition is that the absolute difference between the MSE values of two successive cycles is less than a preset minimum, that the MSE of a single cycle is less than a preset minimum, or that the number of cycles reaches a maximum.
In step S4, the test signal is multiplied by the measurement matrix to obtain the compressed test signal, and the compressed test signal is input into the trained neural network to obtain the recovered test signal.
That is, the test signal is multiplied by the measurement matrix to obtain the compressed test signal; passing the compressed test signal through the trained neural network yields the recovered test signal, completing the compression and recovery of the test signal.
The embodiments of the present invention are described in further detail below with reference to specific embodiments; the following specific embodiments are for illustration only and are not intended as limitations.
Embodiment one
As shown in Fig. 2, this embodiment of the present invention is a signal compression and recovery method based on Tensor-Train decomposition and deep learning, comprising the following steps:
Step 1: generate the measurement matrix according to a preset signal sampling rate.
Specifically, the original signal dimension is 14400 and the sampling rate is 0.04, i.e., the ratio of the compressed signal dimension to the original signal dimension, so the compressed signal dimension is 576. A random Gaussian matrix of size 576 × 14400 with standard deviation 0.2 is generated, and its rows are then orthogonalized to obtain the measurement matrix.
Step 2: generate the compressed signal.
Specifically, the measurement matrix is multiplied by the original signal of dimension 14400 to obtain the compressed signal of dimension 576.
Step 3: build a neural network with three fully connected layers and decompose the weight matrix of each fully connected layer by the Tensor-Train decomposition method.
Specifically, the neural network is shown in Fig. 3: it comprises three fully connected layers whose neuron numbers grow progressively from the compressed signal dimension 576 to the original signal dimension 14400 according to the arithmetic progression principle. The first layer has 576 input nodes and 7200 output nodes; the second layer has 7200 input nodes and 10800 output nodes; the last layer has 10800 input nodes and 14400 output nodes.
Then the weight matrices of the three fully connected layers are decomposed by the Tensor-Train method. To balance computation space against computational accuracy, the number of tensors in the decomposed tensor sequence is set to d = 2. The decomposition is illustrated with the first layer:
1) Factor M and N: the input signal dimension is N = 576 = 24 × 24 and the output signal dimension is M = 7200 = 80 × 90;
2) Rearrange the elements of the weight matrix W ∈ R^(576×7200) into a 2-order tensor whose first mode has dimension n_1 = 24 × 80 = 1920 and whose second mode has dimension n_2 = 24 × 90 = 2160;
3) Decompose the 2-order tensor so that, for every order k = 1, 2 and every mode index j_k = 0, …, n_k − 1, an arbitrary element of the tensor is obtained as the product of two matrices: W[j_1, j_2] = G_1[j_1] · G_2[j_2].
The matrices {G_k[j_k] | j_k = 0, …, n_k − 1} corresponding to a single order k all have size r_(k−1) × r_k and can be merged into a third-order tensor of size r_(k−1) × r_k × n_k; the r_k are the ranks of the tensor sequence (TT-ranks), where r_0 = r_2 = 1 must hold so that the matrix product is a scalar. Limited by computer memory, r_1 is set to 135.
After decomposition by the above method, the tensor sequence sizes for each fully connected layer are as shown in Table 1.
Table 1. Sizes of the weight matrices and their corresponding tensor sequences
As shown in Fig. 3, the weight matrix of each fully connected layer is replaced by its tensor sequence, so the multiplication of each layer's input data by the weight matrix W is replaced by multiplication with the tensor sequence {G_k}. The algorithm is as follows, again taking the first layer as an example:
Input:
1) input dimension 576 = 24 × 24
2) output dimension 7200 = 80 × 90
3) input vector x ∈ R^576
4) tensor sequence {G_1, G_2}
5) ranks of the tensor sequence: r_0 = r_2 = 1, r_1 = 135
Procedure:
1) assign the value of the vector x to the vector y
2) for i from 1 to 2:
3)   rearrange the elements of y into a matrix of size (n_i × r_(i−1), m_1 × … × m_(i−1) × n_(i+1) × … × n_d)
4)   rearrange the elements of the third-order tensor G_i into a matrix of size (m_i × r_i, n_i × r_(i−1))
5)   assign the product of the above two matrices to y
6) end for
7) rearrange the elements of the matrix y into a vector of dimension 7200 to obtain the output vector.
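The loop above can be sketched in NumPy with the cores stored as four-way arrays G_k of shape (r_{k−1}, m_k, n_k, r_k). The tensordot-based formulation below is an illustrative, algebraically equivalent rewriting of the reshape-and-multiply steps, not the patent's exact procedure, and the tiny dimensions are chosen only so the result can be checked against the dense product.

```python
import numpy as np

def tt_matvec(cores, x, ms, ns):
    """y = W @ x computed directly from the TT cores, never forming the
    dense weight matrix W. cores[k] has shape (r_{k-1}, m_k, n_k, r_k)."""
    Z = x.reshape([1] + list(ns))            # (r_0 = 1, n_1, ..., n_d)
    for G in cores:
        # contract the current rank and input mode; keep output mode m_k
        Z = np.tensordot(Z, G, axes=([0, 1], [0, 2]))
        Z = np.moveaxis(Z, -1, 0)            # bring the new rank to the front
    return Z.reshape(-1)                     # length prod(ms)

# sanity check against the dense product on a tiny example
rng = np.random.default_rng(0)
ms, ns, r = [3, 4], [2, 5], 6                # output dim 12, input dim 10
cores = [rng.normal(size=(1, ms[0], ns[0], r)),
         rng.normal(size=(r, ms[1], ns[1], 1))]
# dense W from the cores: W[(i1,i2),(j1,j2)] = sum_b G1[0,i1,j1,b]*G2[b,i2,j2,0]
W = np.einsum('aijb,bklc->ikjl', cores[0], cores[1]).reshape(12, 10)
x = rng.normal(size=10)
print(np.allclose(tt_matvec(cores, x, ms, ns), W @ x))  # True
```

The point of the TT representation is that `tt_matvec` touches only the small cores, so storage and arithmetic scale with the ranks rather than with M × N.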
Step 4: train the neural network with the compressed signal and the original signal to recover the compressed signal.
Specifically, with the compressed signal as the input data of the neural network and the original signal as its target data, the network parameters are updated with the Adam optimization algorithm and the learning rate is decayed from 10^−3 to 10^−4, so that the error between the network output and the target data (the original signal) keeps decreasing. The degree of inconsistency between the neural network output and the original signal is measured by the mean squared error (MSE):
MSE = (1/l) · Σ_(i=1)^(l) (x̂_i − x_i)²,
where x̂ is the recovered signal, x is the original signal, and l is the signal dimension. When the absolute difference between the MSE values of two successive cycles is less than 10^−5, training ends, yielding the trained neural network; when a compressed signal is input, the network outputs the recovered signal. Taking an image signal as an example, Fig. 4 compares the recovered image with the original image.
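The MSE measure and the stopping conditions described in the embodiments might be sketched as follows; the helper names and the history-based interface are illustrative assumptions.

```python
import numpy as np

def mse(x_hat, x):
    """Mean squared error between the recovered and original signals:
    (1/l) * sum_i (x_hat_i - x_i)^2."""
    x_hat, x = np.asarray(x_hat, float), np.asarray(x, float)
    return float(np.mean((x_hat - x) ** 2))

def should_stop(mse_history, tol=1e-5, max_cycles=100000):
    """Stop when the MSE change between two successive cycles is below
    tol, or when the cycle count reaches its maximum."""
    if len(mse_history) >= max_cycles:
        return True
    return len(mse_history) >= 2 and abs(mse_history[-1] - mse_history[-2]) < tol

print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))  # 0.3333333333333333
print(should_stop([0.5, 0.4999999]))           # True (change < 1e-5)
```

Embodiment 2 instead stops on a cycle-count cap of 10^5, and Embodiment 3 on a single-cycle MSE below 10^−4; both fit the same pattern with different thresholds.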
Embodiment two
As shown in Fig. 2, this embodiment of the present invention is a signal compression and recovery method based on CP decomposition and deep learning, comprising the following steps:
Step 1: generate the measurement matrix according to a preset signal sampling rate.
Specifically, the original signal dimension is 1024 and the sampling rate is 0.25, so the compressed signal dimension is 256. A random Gaussian matrix of size 256 × 1024 with standard deviation 0.2 is generated, and its rows are then orthogonalized to obtain the measurement matrix.
Step 2: generate the compressed signal.
Specifically, the measurement matrix is multiplied by the original signal of dimension 1024 to obtain the compressed signal of dimension 256.
Step 3: build a neural network with three fully connected layers and decompose the weight matrix of each fully connected layer by the CP decomposition method.
Specifically, the neural network comprises three fully connected layers whose neuron numbers grow progressively from the compressed signal dimension 256 to the original signal dimension 1024 according to the arithmetic progression principle: the first layer has 256 input nodes and 512 output nodes; the second layer has 512 input nodes and 768 output nodes; the last layer has 768 input nodes and 1024 output nodes. The activation function of each layer is the Sigmoid function. The weight matrices of the three fully connected layers are then decomposed by the CP method to compress computation and storage space and reduce computational complexity.
Step 4: train the neural network with the compressed signal and the original signal to recover the compressed signal.
Specifically, with the compressed signal as the input data of the neural network and the original signal as its target data, the network parameters are updated with the RMSprop optimization algorithm and the learning rate is decayed from 10^−3 to 10^−4, so that the error between the network output and the target data (the original signal) keeps decreasing; the degree of inconsistency between the network output and the original signal is measured by the mean squared error. When the number of cycles reaches the maximum of 10^5, training ends, yielding the trained neural network; when a compressed signal is input, the network outputs the recovered signal.
Embodiment three
As shown in Fig. 2, this embodiment of the present invention is a signal compression and recovery method based on Tucker decomposition and deep learning, comprising the following steps:
Step 1: generate the measurement matrix according to a preset signal sampling rate.
Specifically, the original signal dimension is 14400 and the sampling rate is 0.08, so the compressed signal dimension is 1152. A random Gaussian matrix of size 1152 × 14400 with standard deviation 0.2 is generated, and its rows are then orthogonalized to obtain the measurement matrix.
Step 2: generate the compressed signal.
Specifically, the measurement matrix is multiplied by the original signal of dimension 14400 to obtain the compressed signal of dimension 1152.
Step 3: build a neural network with three fully connected layers and decompose the weight matrix of each fully connected layer by the Tucker decomposition method.
Specifically, the neural network comprises three fully connected layers whose neuron numbers grow progressively from the compressed signal dimension 1152 to the original signal dimension 14400 according to the geometric progression principle: the first layer has 1152 input nodes and 3600 output nodes; the second layer has 3600 input nodes and 7200 output nodes; the last layer has 7200 input nodes and 14400 output nodes. The activation function of each layer is the tanh function. The weight matrices of the three fully connected layers are then decomposed by the Tucker method to compress computation and storage space and reduce computational complexity.
Step 4: train the neural network with the compressed signal and the original signal to recover the compressed signal.
Specifically, with the compressed signal as the input data of the neural network and the original signal as its target data, the network parameters are updated with the Adam optimization algorithm and the learning rate is decayed from 10^−3 to 10^−4, so that the error between the network output and the target data (the original signal) keeps decreasing; the degree of inconsistency between the network output and the original signal is measured by the mean squared error. When the MSE of a single cycle is less than the minimum value 10^−4, training ends, yielding the trained neural network; when a compressed signal is input, the network outputs the recovered signal.
In the signal compression and recovery method based on tensor decomposition and deep learning proposed according to embodiments of the present invention, the tensor decomposition method is applied to the fully connected layers of the neural network to decompose their weight matrices. The network can thus be decomposed and compressed without loop iteration, reducing the computation and storage space of the neural network and lowering computational complexity, so that a large signal can be recovered at once, completely, without segmentation.
The signal compression and recovery system based on tensor decomposition and deep learning proposed in the embodiments of the present invention is described next with reference to the accompanying drawings.
Fig. 5 is a schematic structural diagram of a signal compression and recovery system based on tensor decomposition and deep learning according to an embodiment of the present invention.
As shown in Fig. 5, the signal compression and recovery system 10 based on tensor decomposition and deep learning comprises: a measurement matrix generation module 100, a compressed signal generation module 200, a neural network decomposition module 300, and a signal compression and recovery module 400.
The measurement matrix generation module 100 is configured to generate a measurement matrix according to a preset signal sampling rate. The compressed signal generation module 200 is configured to multiply an original signal by the measurement matrix to generate a compressed signal. The neural network decomposition module 300 is configured to build a neural network based on a tensor decomposition method for signal recovery, decompose the neural network by the tensor decomposition method to obtain a test signal approaching the original signal, and obtain a trained neural network using the compressed signal and the original signal. The signal compression and recovery module 400 is configured to multiply the test signal by the measurement matrix to obtain a compressed test signal, and input the compressed test signal into the trained neural network to obtain a recovered test signal.
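The first two modules can be sketched as follows. This is an illustrative numpy fragment under assumed values (signal length 144 and an 8% sampling rate are not from the patent); the row orthogonalization of the Gaussian matrix follows claim 2:

```python
import numpy as np

rng = np.random.default_rng(2)

def measurement_matrix(n, sampling_rate):
    """Module 100: Gaussian random matrix whose rows are then orthogonalized."""
    m = int(n * sampling_rate)          # rows kept per the preset sampling rate
    phi = rng.standard_normal((m, n))   # matrix obeying a Gaussian distribution
    q, _ = np.linalg.qr(phi.T)          # orthonormalize via QR of the transpose
    return q.T                          # m x n measurement matrix, orthonormal rows

n = 144
phi = measurement_matrix(n, 0.08)       # 8% sampling rate (assumed value)
x = rng.standard_normal(n)              # original signal
y = phi @ x                             # module 200: compressed signal
print(phi.shape, y.shape)
```

The compressed signal y is then what the neural network decomposition module 300 consumes as input data during training.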
It should be noted that the foregoing explanation of the embodiments of the signal compression and recovery method based on tensor decomposition and deep learning also applies to this system, and details are not repeated here.
According to the signal compression and recovery system based on tensor decomposition and deep learning proposed in the embodiments of the present invention, the tensor decomposition method is applied to the fully connected layers of the neural network to decompose their weight matrices, so that the network can be decomposed and compressed without loop iterations. This reduces the computation and storage space of the neural network and lowers the computational complexity, so that a large signal can be recovered completely in one pass without segmentation.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may expressly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, such as two, three, and so on, unless otherwise specifically defined.
In the present invention, unless otherwise expressly specified or limited, terms such as "installed", "connected", "coupled", and "fixed" shall be understood broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intermediary, an internal communication between two elements, or an interaction relationship between two elements, unless otherwise expressly limited. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In the present invention, unless otherwise expressly specified or limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that the first and second features are in indirect contact through an intermediary. Moreover, the first feature being "on", "over", or "above" the second feature may mean that the first feature is directly above or obliquely above the second feature, or merely that the horizontal level of the first feature is higher than that of the second feature. The first feature being "under", "below", or "beneath" the second feature may mean that the first feature is directly below or obliquely below the second feature, or merely that the horizontal level of the first feature is lower than that of the second feature.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples and the features of different embodiments or examples described in this specification, provided they do not contradict each other.
Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (10)

1. A signal compression and recovery method based on tensor decomposition and deep learning, characterized by comprising the following steps:
Step S1: generating a measurement matrix according to a preset signal sampling rate;
Step S2: multiplying an original signal by the measurement matrix to generate a compressed signal;
Step S3: building a neural network based on a tensor decomposition method for signal recovery, decomposing the neural network by the tensor decomposition method to obtain a test signal approaching the original signal, and obtaining a trained neural network by using the compressed signal and the original signal; and
Step S4: multiplying the test signal by the measurement matrix to obtain a compressed test signal, and inputting the compressed test signal into the trained neural network to obtain a recovered test signal.
2. The signal compression and recovery method based on tensor decomposition and deep learning according to claim 1, characterized in that the step S1 further comprises:
Step S101: generating a matrix obeying a Gaussian distribution;
Step S102: orthogonalizing rows of the matrix, selected according to the preset signal sampling rate, to obtain the measurement matrix.
3. The signal compression and recovery method based on tensor decomposition and deep learning according to claim 1, characterized in that the neural network comprises a plurality of fully connected layers, wherein each fully connected layer of the plurality of fully connected layers comprises a weight matrix, and the weight matrix is decomposed by the tensor decomposition method to compress the computation and storage space.
4. The signal compression and recovery method based on tensor decomposition and deep learning according to claim 1 or 3, characterized in that the number of neurons of each fully connected layer of the neural network is obtained according to the dimension of the compressed signal, wherein the dimension of the compressed signal is gradually increased to the dimension of the original signal according to a preset growth rule.
5. The signal compression and recovery method based on tensor decomposition and deep learning according to claim 4, characterized in that the preset growth rule comprises an arithmetic progression rule, a geometric progression rule, and a minimum computational complexity rule.
6. The signal compression and recovery method based on tensor decomposition and deep learning according to claim 3, characterized in that the tensor decomposition method comprises CANDECOMP/PARAFAC decomposition, Tucker decomposition, and Tensor-Train decomposition.
7. The signal compression and recovery method based on tensor decomposition and deep learning according to claim 1, characterized in that the step S3 further comprises:
Step S301: setting the compressed signal as the input data of the neural network and the original signal as the target data of the neural network, and building the neural network according to the preset growth rule;
Step S302: decomposing the plurality of fully connected layers of the neural network by the tensor decomposition method, obtaining the test signal approaching the original signal, and completing the training process of the neural network.
8. The signal compression and recovery method based on tensor decomposition and deep learning according to claim 3 or 7, characterized in that, in the training process of the neural network, the multiplication of the input data of each fully connected layer of the neural network by the weight matrix thereof is replaced by multiplication with the tensor train obtained after the neural network is decomposed.
9. The signal compression and recovery method based on tensor decomposition and deep learning according to claim 1, characterized in that, in the step S3, the neural network measures the degree of inconsistency between the output value of the neural network and the original signal by the mean square error to judge a training suspension condition, wherein the training suspension condition is that the absolute error between the mean square error values of two cycles is less than a preset minimum value, that the mean square error value of a single cycle is less than a preset minimum value, or that the number of cycles reaches a maximum value.
10. A signal compression and recovery system based on tensor decomposition and deep learning, characterized by comprising:
a measurement matrix generation module, wherein the measurement matrix generation module is configured to generate a measurement matrix according to a preset signal sampling rate;
a compressed signal generation module, wherein the compressed signal generation module is configured to multiply an original signal by the measurement matrix to generate a compressed signal;
a neural network decomposition module, wherein the neural network decomposition module is configured to build a neural network based on a tensor decomposition method for signal recovery, decompose the neural network by the tensor decomposition method to obtain a test signal approaching the original signal, and obtain a trained neural network by using the compressed signal and the original signal; and
a signal compression and recovery module, wherein the signal compression and recovery module is configured to multiply the test signal by the measurement matrix to obtain a compressed test signal, and input the compressed test signal into the trained neural network to obtain a recovered test signal.
CN201910309593.2A 2019-04-17 2019-04-17 Signal compression and restoration methods and system based on tensor resolution and deep learning Pending CN110070583A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910309593.2A CN110070583A (en) 2019-04-17 2019-04-17 Signal compression and restoration methods and system based on tensor resolution and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910309593.2A CN110070583A (en) 2019-04-17 2019-04-17 Signal compression and restoration methods and system based on tensor resolution and deep learning

Publications (1)

Publication Number Publication Date
CN110070583A true CN110070583A (en) 2019-07-30

Family

ID=67367911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910309593.2A Pending CN110070583A (en) 2019-04-17 2019-04-17 Signal compression and restoration methods and system based on tensor resolution and deep learning

Country Status (1)

Country Link
CN (1) CN110070583A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844653A (en) * 2016-04-18 2016-08-10 深圳先进技术研究院 Multilayer convolution neural network optimization system and method
CN106127297A (en) * 2016-06-02 2016-11-16 中国科学院自动化研究所 The acceleration of degree of depth convolutional neural networks based on resolution of tensor and compression method
CN108734191A (en) * 2017-05-25 2018-11-02 湖北工业大学 Deep learning is applied to the data training method that compressed sensing is rebuild
CN107516129A (en) * 2017-08-01 2017-12-26 北京大学 The depth Web compression method decomposed based on the adaptive Tucker of dimension
CN107730451A (en) * 2017-09-20 2018-02-23 中国科学院计算技术研究所 A kind of compressed sensing method for reconstructing and system based on depth residual error network
CN107944556A (en) * 2017-12-12 2018-04-20 电子科技大学 Deep neural network compression method based on block item tensor resolution
CN108171762A (en) * 2017-12-27 2018-06-15 河海大学常州校区 System and method for is reconfigured quickly in a kind of similar image of the compressed sensing of deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZOU C, YANG F.: "Deep Learning Approach Based on Tensor-Train for Sparse Signal Recovery", IEEE Access, 2019 *
LIU SICONG, YANG FANG, et al.: "Narrowband Interference Reconstruction and Cancellation Based on Compressed Sensing", Video Engineering *
WANG LEI, ZHAO YINGHAI, et al.: "A Survey of Deep Neural Network Model Compression Techniques for Embedded Applications", Journal of Beijing Jiaotong University *
CHEN GONGMENG, RUI MENG, XU QINGSHENG (eds.): "Financial Distress Prediction of Modern Enterprises", 31 July 2006 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537485A (en) * 2020-04-15 2021-10-22 北京金山数字娱乐科技有限公司 Neural network model compression method and device
WO2021234967A1 (en) * 2020-05-22 2021-11-25 日本電信電話株式会社 Speech waveform generation model training device, speech synthesis device, method for the same, and program
CN111697974A (en) * 2020-06-19 2020-09-22 广东工业大学 Compressed sensing reconstruction method and device
CN111697974B (en) * 2020-06-19 2021-04-16 广东工业大学 Compressed sensing reconstruction method and device
CN111984242A (en) * 2020-08-20 2020-11-24 中电科仪器仪表有限公司 Method and system for decomposing synthesized signal
CN112148891A (en) * 2020-09-25 2020-12-29 天津大学 Knowledge graph completion method based on graph perception tensor decomposition
CN112669861A (en) * 2020-12-09 2021-04-16 北京百度网讯科技有限公司 Audio data processing method, device, equipment and storage medium
CN112669861B (en) * 2020-12-09 2023-04-07 北京百度网讯科技有限公司 Audio data processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110070583A (en) Signal compression and restoration methods and system based on tensor resolution and deep learning
JP6574503B2 (en) Machine learning method and apparatus
CN105981050B (en) For extracting the method and system of face characteristic from the data of facial image
CN107992938B (en) Space-time big data prediction technique and system based on positive and negative convolutional neural networks
US20200302576A1 (en) Image processing device, image processing method, and image processing program
CN113111760B (en) Light-weight graph convolution human skeleton action recognition method based on channel attention
CN106203625A (en) A kind of deep-neural-network training method based on multiple pre-training
CN109766995A (en) The compression method and device of deep neural network
Lillekjendlie et al. Chaotic time series part II: System identification and prediction
CN109977989B (en) Image tensor data processing method
CN106780645A (en) Dynamic MRI images method for reconstructing and device
CN104408697B (en) Image Super-resolution Reconstruction method based on genetic algorithm and canonical prior model
CN109064460A (en) Wheat severe plant disease prevention method based on multiple timings property element depth characteristic
CN112418286A (en) Multi-view clustering method based on constrained non-negative matrix factorization
Henderson et al. Spike event based learning in neural networks
CN111125620B (en) Parallel random gradient descent method based on matrix decomposition in recommendation system
CN110610508B (en) Static video analysis method and system
CN110781968B (en) Extensible class image identification method based on plastic convolution neural network
CN116935128A (en) Zero sample abnormal image detection method based on learning prompt
CN110993121A (en) Drug association prediction method based on double-cooperation linear manifold
CN116304569A (en) Filling method for missing data of distributed optical fiber sensor
KR101963556B1 (en) Apparatus for posture analysis of time series using artificial inteligence
Liu et al. TT-PINN: a tensor-compressed neural PDE solver for edge computing
CN111542818A (en) Network model data access method and device and electronic equipment
CN110739030B (en) Soft measurement method for small sample in ethylene production process

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Yang Fang

Inventor after: Zou Cong

Inventor after: Wang Jintao

Inventor after: Pan Changyong

Inventor after: Song Jian

Inventor before: Yang Fang

Inventor before: Zou Cong

Inventor before: Pan Changyong

Inventor before: Song Jian

CB03 Change of inventor or designer information
RJ01 Rejection of invention patent application after publication

Application publication date: 20190730

RJ01 Rejection of invention patent application after publication