CN109919864A - Image compressed sensing method based on a sparse denoising autoencoder network - Google Patents

Image compressed sensing method based on a sparse denoising autoencoder network

Info

Publication number
CN109919864A
CN109919864A
Authority
CN
China
Prior art keywords
network
sub
indicate
denoising autoencoder
sparse denoising
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910126717.3A
Other languages
Chinese (zh)
Inventor
张祖凡
伍云锋
甘臣权
孙韶辉
于秀兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201910126717.3A priority Critical patent/CN109919864A/en
Publication of CN109919864A publication Critical patent/CN109919864A/en
Pending legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An image compressed sensing method based on a sparse denoising autoencoder network is claimed in the present invention, belonging to the technical fields of deep learning and image processing. The method comprises the following steps: 1) obtain an original image signal x as training data, preprocess the data and corrupt the signal to obtain x̃; 2) build the coding sub-network of the sparse denoising autoencoder network, through which the image signal x yields the measurement y; 3) build the decoding sub-network of the sparse denoising autoencoder network, through which the measurement y yields the reconstructed picture x̂; 4) introduce a sparsity constraint and generate the loss function J_SDAE(W, b); 5) jointly train the coding and decoding sub-networks with the back-propagation algorithm, updating the parameters to obtain the optimal sparse denoising autoencoder network. The present invention adds a sparsity constraint on top of a denoising autoencoder network and integrates image compression and reconstruction into a single unified autoencoder framework, which effectively improves the quality of the reconstructed image and greatly reduces the reconstruction time.

Description

Image compressed sensing method based on a sparse denoising autoencoder network
Technical field
The invention belongs to the technical fields of deep learning and image processing, and in particular relates to an image compressed sensing method based on a sparse denoising autoencoder network, which effectively improves the quality of the reconstructed image and greatly reduces the reconstruction time.
Background technique
With the development of social informatization, the amount of data to be acquired and processed has increased sharply, placing ever-higher demands on sensor sampling rates, storage devices and transmission bandwidth. The traditional signal processing mode is to sample at high speed first and then compress before storage or transmission, which wastes a large amount of sampled data. Compressed sensing theory therefore emerged: it acquires a signal at a rate far below the Nyquist sampling frequency, reconstructs the original signal with high accuracy, and completes compression while the signal is being acquired. The theory is widely applied in medical signal processing, array signal processing, wireless communication and other fields.
Compressed sensing theory mainly comprises three key technologies: sparse representation of the signal, design of the measurement matrix, and signal reconstruction, of which signal reconstruction is the core. Current reconstruction algorithms fall mainly into two classes: greedy matching pursuit algorithms and convex relaxation methods. A greedy matching pursuit algorithm iteratively updates the support set of the estimated signal until it approaches the target signal; it mainly comprises two basic steps, atom selection and estimated-signal update. Greedy matching pursuit reconstructs quickly but with limited accuracy. Convex relaxation methods convert the original l0-norm minimization objective into an l1-norm minimization, which is then cast as a constrained extreme-value problem and solved by linear programming. Convex relaxation has a solid theoretical basis for exact reconstruction and requires the fewest observations, but its algorithmic complexity is high and reconstruction of large-scale data takes a long time. Traditional reconstruction algorithms usually require a large amount of computation, making it difficult to meet real-time requirements, and they handle noisy pictures poorly.
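To make the contrast concrete, the following is a minimal Python sketch of the greedy matching-pursuit idea described above (in its orthogonal matching pursuit form), showing only the iterative atom-selection and estimated-signal-update loop that the proposed network replaces; the matrix A, measurement y and sparsity level k are illustrative inputs, not values taken from this patent.

import numpy as np

def omp(A, y, k):
    # Recover a k-sparse x from y = A x by greedy atom selection.
    support, residual = [], y.copy()
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Atom selection: pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Estimated-signal update: least squares on the current support set.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))          # measurement matrix (illustrative)
x_true = np.zeros(256)
x_true[[3, 70, 200]] = 1.0                  # a 3-sparse test signal
x_rec = omp(A, A @ x_true, k=3)             # greedy reconstruction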
Summary of the invention
The present invention seeks to address the problems that conventional compressed sensing image reconstruction takes a long time and handles noisy pictures poorly. It proposes an image compressed sensing method based on a sparse denoising autoencoder network which effectively improves the quality of the reconstructed image, greatly reduces the reconstruction time, and has good denoising ability. The technical scheme of the present invention is as follows:
An image compressed sensing method based on a sparse denoising autoencoder network, comprising the following steps:
Step 1): obtain an original image signal x as training data, apply grayscale preprocessing to the data and corrupt the signal to obtain x̃;
Step 2): build the coding sub-network of the sparse denoising autoencoder network; the coding sub-network is a three-layer fully connected neural network, and the image signal x passes through the coding sub-network to obtain the measurement y;
Step 3): build the decoding sub-network of the sparse denoising autoencoder network; the decoding sub-network is a three-layer fully connected neural network symmetric in structure to the coding sub-network, and the measurement y passes through the decoding sub-network to obtain the reconstructed picture x̂;
Step 4): introduce a sparsity constraint and generate the loss function J_SDAE(W, b);
Step 5): jointly train the coding and decoding sub-networks, optimize the loss function J_SDAE(W, b) with the back-propagation algorithm, and update the parameters to obtain the optimal sparse denoising autoencoder network.
Further, step 1) applies grayscale processing to the image signal x and adds white Gaussian noise of a given probability distribution to the signal to obtain the corrupted signal x̃ = x + λn, where n denotes additive Gaussian sampling noise with zero mean and unit variance, and λ denotes the signal corruption strength.
Further, step 2) establishes the coding sub-network T_e(·) of the sparse denoising autoencoder network and obtains the measurement y. The coding sub-network is a three-layer fully connected neural network consisting of an input layer, a hidden layer and an output layer. The corrupted signal x̃ is taken as the input data, and the hidden-layer feature vector is expressed as
a^(1) = f(W^(1)x̃ + b^(1))
The output of the output layer, the measurement y, is expressed as
y = f(W^(2)a^(1) + b^(2))
where W^(l), b^(l) denote the weight matrix and bias between layer l and layer l+1, and f(·) denotes the sigmoid activation function;
The three-layer network is regarded as a whole to obtain the coding sub-network T_e(·); the encoding process is as follows:
y = T_e(x̃)
where Ω_e = {W^(1), W^(2); b^(1), b^(2)} denotes the set of all parameters in the encoding process, and T_e denotes the coding sub-network.
Further, step 3) establishes the decoding sub-network T_d(·) of the sparse denoising autoencoder network and reconstructs the measurement y to obtain the reconstructed picture x̂. The decoding sub-network is a three-layer fully connected neural network symmetric in structure to the coding sub-network: an input layer, a hidden layer and an output layer. The measurement y is taken as the input data, and the hidden-layer feature vector is expressed as
a^(3) = f(W^(3)y + b^(3))
The output of the output layer, the reconstructed picture x̂, is expressed as
x̂ = f(W^(4)a^(3) + b^(4))
where W^(l), b^(l) denote the weight matrix and bias between layer l and layer l+1, and f(·) denotes the sigmoid activation function;
The three-layer network is regarded as a whole to obtain the decoding sub-network T_d(·); the decoding process is as follows:
x̂ = T_d(y)
Wherein Ωd={ W(3),W(4);b(3),b(4)Indicate all parameter sets in decoding process, TdIndicate decoding sub-network.
Further, step 4) uses the mean squared error between the reconstructed picture and the original picture as the loss function to reduce the reconstruction error, and introduces a sparsity constraint to improve network performance:
J_SDAE(W, b) = (1/N)·Σ_{i=1..N} ‖x̂_i − x_i‖² + β·Σ_j KL(ρ‖ρ̂_j)
where the first term is the mean squared error, N denotes the number of training samples, x̂_i denotes the i-th reconstructed picture and x_i the i-th original picture; the second term is the sparsity penalty, in which ρ̂_j denotes the average activity of hidden neuron j over the training set, ρ denotes the desired activity, and β denotes the weight of the sparsity penalty.
Further, step 5) jointly trains the coding and decoding sub-networks, optimizes the loss function J with the back-propagation algorithm, and updates the parameters so that the loss function is minimized, thereby obtaining the optimal sparse denoising autoencoder network. The specific training process is as follows:
6-1. Randomly initialize the parameters Ω_e and Ω_d, which denote the sets of all parameters in the encoding and decoding processes respectively.
6-2. Using the forward-propagation formulas, compute the activation value of each layer:
y = T_e(x̃), x̂ = T_d(y)
where y denotes the measurement, x̃ the corrupted picture, x̂ the reconstructed picture, T_e the coding sub-network and T_d the decoding sub-network.
6-3. Compute the residual term δ_i^(l) of the i-th neuron node in layer l:
δ_i^(l) = (Σ_j W_{ji}^(l) δ_j^(l+1))·f′(z_i^(l))
where f′(z_i^(l)) denotes the derivative of the hidden-layer activation value, and W_{ji}^(l) denotes the weight connecting neuron j and neuron i.
6-4. Compute the partial derivatives of the loss function:
∂J/∂W_{ij}^(l) = a_j^(l)·δ_i^(l+1),  ∂J/∂b_i^(l) = δ_i^(l+1)
6-5. Update the parameters:
W_{ij}^(l) ← W_{ij}^(l) − α·∂J/∂W_{ij}^(l),  b_i^(l) ← b_i^(l) − α·∂J/∂b_i^(l)
where α denotes the learning rate.
The advantages and beneficial effects of the present invention are as follows:
The present invention proposes an image compressed sensing method based on a sparse denoising autoencoder network. The specific innovations include: 1) the present invention realizes the signal perception of image compressed sensing with the coding sub-network and reduces the instability of the perception process through optimization training; 2) through training on a large amount of data, the optimal decoding sub-network is obtained, and compared with traditional iterative reconstruction algorithms the reconstruction time can be greatly reduced while the quality of the reconstructed picture is improved; 3) the coding and decoding sub-networks are jointly trained so that the two sub-networks combine seamlessly, improving the overall performance of the network.
Detailed description of the invention
Fig. 1 is a flow chart of the image compressed sensing method based on a sparse denoising autoencoder network provided by the preferred embodiment of the present invention;
Fig. 2 is an example diagram of the sparse denoising autoencoder network of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and in detail below with reference to the drawings in the embodiments of the present invention. The described embodiments are only a part of the embodiments of the present invention.
The technical solution by which the present invention solves the above technical problems is as follows:
Fig. 1 is the overall flow chart of the image compressed sensing method based on a sparse denoising autoencoder network of the present invention, and Fig. 2 is an example diagram of the sparse denoising autoencoder network of the present invention. Embodiments of the present invention are described in detail below with reference to the drawings and the example diagram, and include the following steps:
Step 1: obtain an original image signal x as training data, preprocess the data and corrupt the signal to obtain x̃. First, grayscale processing is applied to the image signal x, and white Gaussian noise of a given probability distribution is added to the signal to obtain the corrupted signal x̃ = x + λn, where n denotes additive Gaussian sampling noise with zero mean and unit variance, and λ denotes the signal corruption strength.
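A minimal Python sketch of the corruption step of Step 1, assuming x is a flattened grayscale image with pixel values in [0, 1]; the array size and the corruption strength value are illustrative, not specified by the patent.

import numpy as np

def corrupt(x, lam, rng):
    # Corrupted signal: x_tilde = x + lam * n, with n ~ N(0, 1) elementwise.
    n = rng.standard_normal(x.shape)
    return x + lam * n

rng = np.random.default_rng(0)
x = rng.random(1024)                      # e.g. a flattened 32x32 grayscale patch in [0, 1]
x_tilde = corrupt(x, lam=0.1, rng=rng)    # lam is the signal corruption strength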
Step 2: establish the coding sub-network T_e(·) of the sparse denoising autoencoder network and obtain the measurement y. The coding sub-network is a three-layer fully connected neural network: an input layer, a hidden layer and an output layer. The corrupted signal x̃ is taken as the input data, and the hidden-layer feature vector is expressed as
a^(1) = f(W^(1)x̃ + b^(1))
The output of the output layer, the measurement y, is expressed as
y = f(W^(2)a^(1) + b^(2))
where W^(l), b^(l) denote the weight matrix and bias between layer l and layer l+1, and f(·) denotes the sigmoid activation function.
The three-layer network is regarded as a whole to obtain the coding sub-network T_e(·); the encoding process is as follows:
y = T_e(x̃)
where Ω_e = {W^(1), W^(2); b^(1), b^(2)} denotes the set of all parameters in the encoding process.
Step 3: establish the decoding sub-network T_d(·) of the sparse denoising autoencoder network and reconstruct the measurement y to obtain the reconstructed picture x̂. The decoding sub-network is a three-layer fully connected neural network symmetric in structure to the coding sub-network: an input layer, a hidden layer and an output layer. The measurement y is taken as the input data, and the hidden-layer feature vector is expressed as
a^(3) = f(W^(3)y + b^(3))
The output of the output layer, the reconstructed picture x̂, is expressed as
x̂ = f(W^(4)a^(3) + b^(4))
where W^(l), b^(l) denote the weight matrix and bias between layer l and layer l+1, and f(·) denotes the sigmoid activation function.
The three-layer network is regarded as a whole to obtain the decoding sub-network T_d(·); the decoding process is as follows:
x̂ = T_d(y)
where Ω_d = {W^(3), W^(4); b^(3), b^(4)} denotes the set of all parameters in the decoding process.
Step 4: introduce the sparsity constraint and generate the loss function J_SDAE(W, b). The training set uses each sample as its own label, i.e. D_train = {(x_1, x_1), (x_2, x_2), …, (x_n, x_n)}. To reduce the error between the reconstructed picture and the original picture, the mean squared error is used as the loss function and a sparsity constraint is introduced to improve network performance:
J_SDAE(W, b) = (1/N)·Σ_{i=1..N} ‖x̂_i − x_i‖² + β·Σ_j KL(ρ‖ρ̂_j)
where the first term is the mean squared error and the second term is the sparsity penalty; ρ̂_j denotes the average activity of hidden neuron j over the training set, ρ denotes the desired activity, and β denotes the weight of the sparsity penalty.
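A sketch of the loss J_SDAE(W, b) of Step 4, assuming the sparsity penalty takes the standard KL-divergence form used in sparse autoencoders, consistent with the quantities ρ, ρ̂_j and β described above; the values of ρ and β are illustrative.

import numpy as np

def kl(rho, rho_hat, eps=1e-8):
    # Elementwise KL divergence between Bernoulli(rho) and Bernoulli(rho_hat).
    rho_hat = np.clip(rho_hat, eps, 1.0 - eps)
    return rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))

def sdae_loss(x_hat, x, hidden_acts, rho=0.05, beta=3.0):
    # First term: mean squared reconstruction error over the N training samples.
    mse = np.mean(np.sum((x_hat - x) ** 2, axis=1))
    # Second term: sparsity penalty on the average activity of each hidden unit.
    rho_hat = hidden_acts.mean(axis=0)          # average activity rho_hat_j over the set
    return mse + beta * np.sum(kl(rho, rho_hat))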
Step 5: jointly train the coding and decoding sub-networks, optimize the loss function J_SDAE(W, b) with the back-propagation algorithm, and update the parameters to obtain the optimal sparse denoising autoencoder network.
The specific training process is as follows:
1. Randomly initialize the parameters Ω_e and Ω_d, which denote the sets of all parameters in the encoding and decoding processes respectively.
2. Using the forward-propagation formulas, compute the activation value of each layer:
y = T_e(x̃), x̂ = T_d(y)
where y denotes the measurement, x̃ the corrupted picture, x̂ the reconstructed picture, T_e the coding sub-network and T_d the decoding sub-network.
3. Compute the residual term δ_i^(l) of the i-th neuron node in layer l:
δ_i^(l) = (Σ_j W_{ji}^(l) δ_j^(l+1))·f′(z_i^(l))
where f′(z_i^(l)) denotes the derivative of the hidden-layer activation value, and W_{ji}^(l) denotes the weight connecting neuron j and neuron i.
4. Compute the partial derivatives of the loss function:
∂J/∂W_{ij}^(l) = a_j^(l)·δ_i^(l+1),  ∂J/∂b_i^(l) = δ_i^(l+1)
5. Update the parameters:
W_{ij}^(l) ← W_{ij}^(l) − α·∂J/∂W_{ij}^(l),  b_i^(l) ← b_i^(l) − α·∂J/∂b_i^(l)
where α denotes the learning rate.
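A compact joint-training sketch for Step 5 using plain gradient descent with manually derived back-propagation through the four weight layers. For brevity only the mean-squared-error part of the gradient is propagated; the KL sparsity term would contribute an extra component β(−ρ/ρ̂ + (1−ρ)/(1−ρ̂)) to the delta of the encoder's hidden layer. All sizes, the learning rate, the corruption strength and the epoch count are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d, h, m = 256, 128, 64                        # input, hidden and measurement sizes
shapes = [(h, d), (m, h), (h, m), (d, h)]     # W1, W2 (encoder) and W3, W4 (decoder)
W = [0.01 * rng.standard_normal(s) for s in shapes]
b = [np.zeros(s[0]) for s in shapes]

def forward(Xc):
    acts = [Xc]                               # [x_tilde, a1, y, a3, x_hat]
    for Wl, bl in zip(W, b):
        acts.append(sigmoid(acts[-1] @ Wl.T + bl))
    return acts

X = rng.random((512, d))                      # toy training set of flattened images
alpha, lam = 0.5, 0.1                         # learning rate and corruption strength
for epoch in range(200):
    Xc = X + lam * rng.standard_normal(X.shape)        # re-corrupt each epoch
    acts = forward(Xc)
    X_hat = acts[-1]
    # Output-layer residual for the MSE term (sigmoid derivative = a * (1 - a)).
    delta = 2.0 * (X_hat - X) / len(X) * X_hat * (1.0 - X_hat)
    for l in range(3, -1, -1):                # back-propagate through W4 .. W1
        gW = delta.T @ acts[l]                # partial derivative w.r.t. W[l]
        gb = delta.sum(axis=0)                # partial derivative w.r.t. b[l]
        if l > 0:
            delta = (delta @ W[l]) * acts[l] * (1.0 - acts[l])
        W[l] -= alpha * gW                    # gradient-descent parameter update
        b[l] -= alpha * gb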
The above embodiments should be understood as merely illustrating the present invention rather than limiting the scope of the present invention. After reading the content recorded in the present invention, a person skilled in the art can make various changes or modifications to the present invention, and such equivalent changes and modifications likewise fall within the scope of the claims of the present invention.

Claims (6)

1. An image compressed sensing method based on a sparse denoising autoencoder network, characterized by comprising the following steps:
Step 1): obtain an original image signal x as training data, apply grayscale preprocessing to the data and corrupt the signal to obtain x̃;
Step 2): build the coding sub-network of the sparse denoising autoencoder network; the coding sub-network is a three-layer fully connected neural network, and the image signal x passes through the coding sub-network to obtain the measurement y;
Step 3): build the decoding sub-network of the sparse denoising autoencoder network; the decoding sub-network is a three-layer fully connected neural network symmetric in structure to the coding sub-network, and the measurement y passes through the decoding sub-network to obtain the reconstructed picture x̂;
Step 4): introduce a sparsity constraint and generate the loss function J_SDAE(W, b);
Step 5): jointly train the coding and decoding sub-networks, optimize the loss function J_SDAE(W, b) with the back-propagation algorithm, and update the parameters to obtain the optimal sparse denoising autoencoder network.
2. The image compressed sensing method based on a sparse denoising autoencoder network according to claim 1, characterized in that step 1) applies grayscale processing to the image signal x and adds white Gaussian noise of a given probability distribution to the signal to obtain the corrupted signal x̃ = x + λn, where n denotes additive Gaussian sampling noise with zero mean and unit variance, and λ denotes the signal corruption strength.
3. The image compressed sensing method based on a sparse denoising autoencoder network according to claim 2, characterized in that step 2) establishes the coding sub-network T_e(·) of the sparse denoising autoencoder network and obtains the measurement y; the coding sub-network is a three-layer fully connected neural network: an input layer, a hidden layer and an output layer; the corrupted signal x̃ is taken as the input data, and the hidden-layer feature vector is expressed as
a^(1) = f(W^(1)x̃ + b^(1))
the output of the output layer, the measurement y, is expressed as
y = f(W^(2)a^(1) + b^(2))
where W^(l), b^(l) denote the weight matrix and bias between layer l and layer l+1, and f(·) denotes the sigmoid activation function;
the three-layer network is regarded as a whole to obtain the coding sub-network T_e(·); the encoding process is as follows:
y = T_e(x̃)
where Ω_e = {W^(1), W^(2); b^(1), b^(2)} denotes the set of all parameters in the encoding process, and T_e denotes the coding sub-network.
4. The image compressed sensing method based on a sparse denoising autoencoder network according to claim 3, characterized in that step 3) establishes the decoding sub-network T_d(·) of the sparse denoising autoencoder network and reconstructs the measurement y to obtain the reconstructed picture x̂; the decoding sub-network is a three-layer fully connected neural network symmetric in structure to the coding sub-network: an input layer, a hidden layer and an output layer; the measurement y is taken as the input data, and the hidden-layer feature vector is expressed as
a^(3) = f(W^(3)y + b^(3))
the output of the output layer, the reconstructed picture x̂, is expressed as
x̂ = f(W^(4)a^(3) + b^(4))
where W^(l), b^(l) denote the weight matrix and bias between layer l and layer l+1, and f(·) denotes the sigmoid activation function;
the three-layer network is regarded as a whole to obtain the decoding sub-network T_d(·); the decoding process is as follows:
x̂ = T_d(y)
where Ω_d = {W^(3), W^(4); b^(3), b^(4)} denotes the set of all parameters in the decoding process, and T_d denotes the decoding sub-network.
5. The image compressed sensing method based on a sparse denoising autoencoder network according to claim 4, characterized in that step 4) uses the mean squared error between the reconstructed picture and the original picture as the loss function to reduce the reconstruction error, and introduces a sparsity constraint to improve network performance:
J_SDAE(W, b) = (1/N)·Σ_{i=1..N} ‖x̂_i − x_i‖² + β·Σ_j KL(ρ‖ρ̂_j)
where the first term is the mean squared error, N denotes the number of training samples, x̂_i denotes the i-th reconstructed picture and x_i the i-th original picture; the second term is the sparsity penalty, in which ρ̂_j denotes the average activity of hidden neuron j over the training set, ρ denotes the desired activity, and β denotes the weight of the sparsity penalty.
6. The image compressed sensing method based on a sparse denoising autoencoder network according to claim 5, characterized in that step 5) jointly trains the coding and decoding sub-networks, optimizes the loss function J with the back-propagation algorithm, and updates the parameters so that the loss function is minimized, thereby obtaining the optimal sparse denoising autoencoder network; the specific training process is as follows:
1. Randomly initialize the parameters Ω_e and Ω_d, which denote the sets of all parameters in the encoding and decoding processes respectively;
2. Using the forward-propagation formulas, compute the activation value of each layer:
y = T_e(x̃), x̂ = T_d(y)
where y denotes the measurement, x̃ the corrupted picture, x̂ the reconstructed picture, T_e the coding sub-network and T_d the decoding sub-network;
3. Compute the residual term δ_i^(l) of the i-th neuron node in layer l:
δ_i^(l) = (Σ_j W_{ji}^(l) δ_j^(l+1))·f′(z_i^(l))
where f′(z_i^(l)) denotes the derivative of the hidden-layer activation value, and W_{ji}^(l) denotes the weight connecting neuron j and neuron i;
4. Compute the partial derivatives of the loss function:
∂J/∂W_{ij}^(l) = a_j^(l)·δ_i^(l+1),  ∂J/∂b_i^(l) = δ_i^(l+1);
5. Update the parameters:
W_{ij}^(l) ← W_{ij}^(l) − α·∂J/∂W_{ij}^(l),  b_i^(l) ← b_i^(l) − α·∂J/∂b_i^(l)
where α denotes the learning rate.
CN201910126717.3A 2019-02-20 2019-02-20 Image compressed sensing method based on a sparse denoising autoencoder network Pending CN109919864A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910126717.3A CN109919864A (en) 2019-02-20 2019-02-20 Image compressed sensing method based on a sparse denoising autoencoder network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910126717.3A CN109919864A (en) 2019-02-20 2019-02-20 Image compressed sensing method based on a sparse denoising autoencoder network

Publications (1)

Publication Number Publication Date
CN109919864A true CN109919864A (en) 2019-06-21

Family

ID=66961836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910126717.3A Pending CN109919864A (en) 2019-02-20 2019-02-20 Image compressed sensing method based on a sparse denoising autoencoder network

Country Status (1)

Country Link
CN (1) CN109919864A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104361328A (en) * 2014-11-21 2015-02-18 中国科学院重庆绿色智能技术研究院 Facial image normalization method based on self-adaptive multi-column depth model
CN106503654A (en) * 2016-10-24 2017-03-15 中国地质大学(武汉) A kind of face emotion identification method based on the sparse autoencoder network of depth
CN108335349A (en) * 2017-01-18 2018-07-27 辉达公司 Utilize NN filtering image data
US20180240219A1 (en) * 2017-02-22 2018-08-23 Siemens Healthcare Gmbh Denoising medical images by learning sparse image representations with a deep unfolding approach
US20180285695A1 (en) * 2017-03-28 2018-10-04 Siemens Healthcare Gmbh Magnetic Resonance Image Reconstruction System and Method
CN107563294A (en) * 2017-08-03 2018-01-09 广州智慧城市发展研究院 A kind of finger vena characteristic extracting method and system based on self study
CN107480777A (en) * 2017-08-28 2017-12-15 北京师范大学 Sparse self-encoding encoder Fast Training method based on pseudo- reversal learning
CN107396124A (en) * 2017-08-29 2017-11-24 南京大学 Video-frequency compression method based on deep neural network
CN108009520A (en) * 2017-12-21 2018-05-08 东南大学 A kind of finger vein identification method and system based on convolution variation self-encoding encoder neutral net
CN108765338A (en) * 2018-05-28 2018-11-06 西华大学 Spatial target images restored method based on convolution own coding convolutional neural networks
CN108846323A (en) * 2018-05-28 2018-11-20 哈尔滨工程大学 A kind of convolutional neural networks optimization method towards Underwater Targets Recognition

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ALI MOUSAVI 等: "A Deep Learning Approach to Structured Signal Recovery", 《2015 53RD ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON)》 *
DUC MINH NGUYEN 等: "DEEP LEARNING SPARSE TERNARY PROJECTIONS FOR COMPRESSED SENSING OF IMAGES", 《2017 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP)》 *
LUKUN WANG 等: "Transformer fault diagnosis using continuous sparse autoencoder", 《SPRINGERPLUS》 *
PRIYA RANJAN MUDULI 等: "A Deep Learning Approach to Fetal-ECG Signal Reconstruction", 《2016 TWENTY SECOND NATIONAL CONFERENCE ON COMMUNICATION (NCC)》 *
ZUFAN ZHANG 等: "The optimally designed autoencoder network for compressed sensing", 《EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING》 *
伍云锋: "基于自编码网络的图像压缩感知研究", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》 *
步春宁: "基于自编码器的图像超分辨率算法研究", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110428462A (en) * 2019-07-17 2019-11-08 清华大学 Polyphaser solid matching method and device
CN110428462B (en) * 2019-07-17 2022-04-08 清华大学 Multi-camera stereo matching method and device
CN110569961A (en) * 2019-08-08 2019-12-13 合肥图鸭信息科技有限公司 neural network training method and device and terminal equipment
WO2021022686A1 (en) * 2019-08-08 2021-02-11 合肥图鸭信息科技有限公司 Video compression method and apparatus, and terminal device
CN110779724A (en) * 2019-11-20 2020-02-11 重庆邮电大学 Bearing fault diagnosis method based on frequency domain group sparse noise reduction
CN110779724B (en) * 2019-11-20 2022-03-11 重庆邮电大学 Bearing fault diagnosis method based on frequency domain group sparse noise reduction
CN111598786A (en) * 2019-11-28 2020-08-28 南京航空航天大学 Hyperspectral image unmixing method based on deep denoising self-coding network
CN111598786B (en) * 2019-11-28 2023-10-03 南京航空航天大学 Hyperspectral image unmixing method based on depth denoising self-coding network
CN111401236A (en) * 2020-03-16 2020-07-10 西北工业大学 Underwater sound signal denoising method based on self-coding neural network
CN111563423A (en) * 2020-04-17 2020-08-21 西北工业大学 Unmanned aerial vehicle image target detection method and system based on depth denoising automatic encoder
CN111652311A (en) * 2020-06-03 2020-09-11 苏州大学 Image sparse representation method based on sparse elliptic RBF neural network
CN111652311B (en) * 2020-06-03 2024-02-20 苏州大学 Sparse elliptic RBF neural network-based image sparse representation method
CN112270725A (en) * 2020-09-24 2021-01-26 南京晓庄学院 Image reconstruction and coding method in spectral tomography
CN112270650A (en) * 2020-10-12 2021-01-26 西南大学 Image processing method, system, medium, and apparatus based on sparse autoencoder
CN112437311A (en) * 2020-11-23 2021-03-02 黄晓红 Video sequence compression coding method and device
CN112465141A (en) * 2020-12-18 2021-03-09 平安科技(深圳)有限公司 Model compression method, model compression device, electronic device and medium
CN112688836B (en) * 2021-03-11 2021-07-06 南方电网数字电网研究院有限公司 Energy routing equipment online dynamic sensing method based on deep self-coding network
CN112688836A (en) * 2021-03-11 2021-04-20 南方电网数字电网研究院有限公司 Energy routing equipment online dynamic sensing method based on deep self-coding network
CN113328755A (en) * 2021-05-11 2021-08-31 内蒙古工业大学 Compressed data transmission method facing edge calculation
CN113328755B (en) * 2021-05-11 2022-09-16 内蒙古工业大学 Compressed data transmission method facing edge calculation
CN114202595A (en) * 2021-11-23 2022-03-18 北京理工大学 Calculation sensing method, system, equipment and storage medium
CN114186583A (en) * 2021-12-02 2022-03-15 国家石油天然气管网集团有限公司 Method and system for recovering abnormal signal of corrosion detection of tank wall of oil storage tank
CN114926679A (en) * 2022-05-12 2022-08-19 海南大学 Image classification system and method for performing countermeasure defense
CN114782565A (en) * 2022-06-22 2022-07-22 武汉搜优数字科技有限公司 Digital archive image compression, storage and recovery method based on neural network
CN115314156A (en) * 2022-07-15 2022-11-08 广东科学技术职业学院 LDPC coding and decoding method and system based on self-coding network
CN115314156B (en) * 2022-07-15 2023-04-25 广东科学技术职业学院 LDPC coding and decoding method and system based on self-coding network
CN114998457A (en) * 2022-08-01 2022-09-02 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Image compression method, image decompression method, related device and readable storage medium
CN114998457B (en) * 2022-08-01 2022-11-22 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Image compression method, image decompression method, related device and readable storage medium
CN115169499A (en) * 2022-08-03 2022-10-11 中国电子科技集团公司信息科学研究院 Asset data dimension reduction method and device, electronic equipment and computer storage medium
CN115169499B (en) * 2022-08-03 2024-04-05 中国电子科技集团公司信息科学研究院 Asset data dimension reduction method, device, electronic equipment and computer storage medium

Similar Documents

Publication Publication Date Title
CN109919864A (en) Image compressed sensing method based on a sparse denoising autoencoder network
CN110349230A (en) A method of the point cloud Geometric compression based on depth self-encoding encoder
CN103020935B (en) The image super-resolution method of the online dictionary learning of a kind of self-adaptation
CN108416755A (en) A kind of image de-noising method and system based on deep learning
CN110148081A (en) Training method, image processing method, device and the storage medium of image processing model
CN109346063B (en) Voice data enhancement method
CN111901829A (en) Wireless federal learning method based on compressed sensing and quantitative coding
CN108960333A (en) Lossless compression method for high spectrum image based on deep learning
CN110007347A (en) A kind of deep learning seismic data denoising method
CN112712488B (en) Remote sensing image super-resolution reconstruction method based on self-attention fusion
CN104199627B (en) Gradable video encoding system based on multiple dimensioned online dictionary learning
CN109523486A (en) Based on the multichannel brain electric signal reconfiguring method of robust compressed sensing under noise circumstance
CN115880762B (en) Human-machine hybrid vision-oriented scalable face image coding method and system
CN111797891A (en) Unpaired heterogeneous face image generation method and device based on generation countermeasure network
CN104301728A (en) Compressed video capture and reconstruction system based on structured sparse dictionary learning
CN114299185A (en) Magnetic resonance image generation method, magnetic resonance image generation device, computer equipment and storage medium
CN111259745B (en) 3D face decoupling representation learning method based on distribution independence
CN110728728A (en) Compressed sensing network image reconstruction method based on non-local regularization
CN112712855B (en) Joint training-based clustering method for gene microarray containing deletion value
CN108573512B (en) Complex visual image reconstruction method based on depth coding and decoding dual model
CN108769674A (en) A kind of video estimation method based on adaptive stratification motion modeling
CN105654119B (en) A kind of dictionary optimization method
CN114418854B (en) Unsupervised remote sensing image super-resolution reconstruction method based on image recursion
CN116777800A (en) Compressed sensing image reconstruction method based on gating recursion unit
Gao et al. Volumetric end-to-end optimized compression for brain images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190621