CN107240136A - Static image compression method based on deep learning model - Google Patents
Static image compression method based on deep learning model
- Publication number
- CN107240136A CN107240136A CN201710379743.8A CN201710379743A CN107240136A CN 107240136 A CN107240136 A CN 107240136A CN 201710379743 A CN201710379743 A CN 201710379743A CN 107240136 A CN107240136 A CN 107240136A
- Authority
- CN
- China
- Prior art keywords
- image
- model
- deep learning
- methods based
- compression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/002—Image coding using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
A static image compression method based on a deep learning model, in the field of data mining and machine learning, relates to lossy compression of images and mainly uses a multilayer sparse autoencoder and the K-means algorithm to realize this function. The main flow of lossy image data compression in the present invention consists of four steps: image blocking, image classification, PSO parameter optimization and model training, and model testing. The main innovation of the present invention is a multilayer sparse autoencoding (MSAE) neural network that hybridizes a sparse autoencoder with a BP neural network. The feature-extraction capability of the multilayer sparse autoencoding neural network exceeds that of traditional neural networks for lossy image compression. The method successfully introduces deep learning into the field of image compression and achieves results better than an artificial neural network.
Description
Technical field
The invention belongs to the field of data mining and machine learning, and relates to lossy compression of image data.
Background art
With the arrival of the big data era, data are growing and accumulating at an unprecedented rate, and data processing technology is undergoing a profound change. First, the volume of data keeps expanding: data sets have grown from GB and TB to PB, and network big data is even measured in units such as EB and ZB. Second, network big data comes in diverse types, including structured, semi-structured and unstructured data, with unstructured data growing especially rapidly in modern Internet applications. Third, network big data often exhibits bursty, nonlinear, emergent behaviour, which makes it difficult to assess and predict effectively. Moreover, network big data is usually produced dynamically and rapidly in the form of data streams and is therefore highly time-sensitive. Data compression technology is also evolving with the arrival of the big data era, because the space of storage devices is, after all, limited, while pictures, games, audio and video are increasingly common in computer applications and occupy a great deal of space; compression technology therefore has broad prospects and is constantly developing.
Data compression algorithms have long been studied, such as run-length coding, differential coding, the LZW algorithm, Huffman coding, JPEG, JPEG 2000 and ZIP. Although these algorithms can compress data and make it easier to store and transmit, they increasingly appear inadequate in the face of today's extremely large, complex and rapidly growing data. This research takes big data as its background and, starting from data compression algorithms, proposes a deep learning model for data compression, so as to improve the compression ratio, reduce data redundancy and facilitate data transmission and storage, which is of great significance.
Summary of the invention
The present invention first divides five original grayscale images into blocks, then clusters blocks with the same features to form training data sets, then inputs each training data set into the corresponding multilayer sparse autoencoder (MSAE) model for training, performing parameter optimization with the particle swarm optimization (PSO) algorithm during training, and finally tests the performance of the model with test samples and analyses the experimental results. The MSAE of the present invention is a hybrid neural network of a sparse autoencoder and a BP neural network: the sparse autoencoder maps the input layer to a sparse representation in the hidden layer, yielding sparse features that make it easy for a neural network to learn the features of the data. The output of the sparse autoencoder is fed into the BP network, and the error back-propagation of the BP network then further adjusts the network weights of the sparse autoencoder.
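The patent gives no code for the MSAE hybrid; the following is a minimal single-hidden-layer sketch of the idea in NumPy, assuming a sigmoid encoder, a linear decoder, and a simplified mean-activation sparsity penalty in place of the usual KL-divergence term. All data and hyperparameters here are illustrative, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 256 flattened 8*8 patches (64-dim), gray values scaled to [0, 1].
X = rng.random((256, 64))

n_in, n_hid = 64, 8        # a 64 -> 8 code gives compression ratio R = 64/8 = 8
W1 = rng.normal(0.0, 0.1, (n_in, n_hid))   # encoder weights (input -> hidden code)
W2 = rng.normal(0.0, 0.1, (n_hid, n_in))   # decoder weights (code -> reconstruction)
lr, beta, rho = 0.5, 0.01, 0.05            # learning rate, sparsity weight, target activation

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(200):
    H = sigmoid(X @ W1)    # hidden code: the "compressed" representation
    Y = H @ W2             # linear reconstruction of the input patches
    err = Y - X
    losses.append(float((err ** 2).mean()))
    # Gradient step on reconstruction error plus a crude sparsity penalty that
    # pulls the mean hidden activation toward rho (a simplification of the
    # KL-divergence penalty usually used in sparse autoencoders).
    dH = err @ W2.T + beta * (H.mean(axis=0) - rho)
    dZ = dH * H * (1.0 - H)
    W2 -= lr * (H.T @ err) / len(X)
    W1 -= lr * (X.T @ dZ) / len(X)

assert losses[-1] < losses[0]   # training reduces reconstruction error
```

A full MSAE would stack several such layers and fine-tune the encoder weights through a BP network, as the description above states.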
To realize the above objective, the present invention adopts the following technical scheme:
A static image compression method based on a deep learning model, comprising the following steps:
Step 1: Image blocking
Divide the image into blocks of 8*8 dimensions;
Step 2: Block clustering
Cluster the blocks with the same features obtained in Step 1 using the K-means clustering algorithm to form training data sets;
Step 3: PSO parameter optimization and model training
Using the multilayer sparse autoencoder (MSAE) model, determine two of the model's parameters, the number of hidden layers and the number of neurons per layer, with the particle swarm optimization (PSO) algorithm, and input the blocks with the same features into the model to complete the compression and decompression of the image;
Step 4: Model testing
Using Steps 1 to 3, carry out 8×, 16× and 32× compression experiments on the images, and, under different test and training samples, use the test sample image as the test set of the three MSAE networks to test the compression and decompression effect of the model.
The same features include edge-region features, flat-region features and texture-region features. An edge region is a region where the image gray level changes markedly, marking sharp transitions of image features; a flat region is a region where the gray level changes gently, corresponding to the background of the image or a single image feature; a texture region is a region where the gray level follows a certain pattern of distribution, marking the texture features of the image. The K-means clustering algorithm divides the blocks into three classes according to edge-region, flat-region and texture-region features. The compression ratio is defined as R = No/Nc, where No is the dimension of the original data and Nc is the dimension of the compressed data, i.e. the number of neurons in the last hidden layer. Five images, Lena, Baboon, Jet, Peppers and Sailboat, are used in turn as test sample and training samples for model testing.
The present invention is compared with the recognition results of a BP neural network in the experiments, with cross validation, and the PSNR value is chosen as the main performance metric. The results show that, on this metric, the hybrid compression model combining K-means with the MSAE network achieves better reconstruction quality in image compression and reconstruction than the ANN network.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 is the model training diagram of the present invention.
Fig. 3 is the model schematic of the present invention.
Embodiment
The present invention is further illustrated below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is the schematic diagram of the present invention. Referring to Fig. 1, five original grayscale images are first divided into blocks; blocks with the same features are then clustered to form training data sets; each training data set is then input into the corresponding multilayer sparse autoencoder (MSAE) model for training, with parameter optimization by the PSO algorithm during training; finally, the performance of the model is tested with test samples and the experimental results are analysed. The specific implementation steps are as follows:
Step 1: Image blocking
(1) Image blocking. In principle, the original training samples could be input directly into the multilayer sparse autoencoder model for training, because the model can automatically learn the internal features of the sample data. However, if the training samples are not pre-processed, the sparse autoencoder model needs a more complex structure (more hidden layers and neuron nodes) to fit the sample data, and more data to train the model, which also places higher demands on the hardware. Owing to the limitations of the hardware and the need to control time cost, pre-processing of the experimental data is necessary. Pre-processing not only reduces the complexity of the model and shortens the training time, but also lowers the requirements on the experimental equipment, which helps verify the feasibility of the deep learning model under limited experimental conditions. Before the experiments, the test images were therefore divided into blocks and clustered. Because the dimension of a single image is huge, a rather complex model would be needed to fully extract the features of a whole image; moreover, when a whole image is input into the model, convergence slows down and under-fitting easily occurs. The present invention therefore first divides the image into 8*8 blocks, so that a 512*512 grayscale image becomes a set of 8*8 grayscale blocks. This greatly simplifies the model, accelerates convergence, and makes feature extraction easier. The five images Baboon, Jet, Lena, Peppers and Sailboat are used as training and test samples for illustration.
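As an illustration of the blocking step (not part of the patent text), a 512*512 grayscale image can be split into the 4096 non-overlapping 8*8 blocks described above with plain NumPy reshapes; the synthetic ramp image below stands in for Lena, Baboon, etc.

```python
import numpy as np

def to_blocks(im, b=8):
    """Split an (H, W) image into non-overlapping b*b blocks."""
    h, w = im.shape
    return im.reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b, b)

def from_blocks(blocks, h, w, b=8):
    """Inverse of to_blocks: reassemble the blocks into the full image."""
    return blocks.reshape(h // b, w // b, b, b).swapaxes(1, 2).reshape(h, w)

# Synthetic 512*512 grayscale image standing in for the test images.
img = np.arange(512 * 512, dtype=np.float64).reshape(512, 512)
blocks = to_blocks(img)

assert blocks.shape == (4096, 8, 8)                       # 512*512 / (8*8) = 4096 blocks
assert np.array_equal(from_blocks(blocks, 512, 512), img)  # lossless round trip
```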
Step 2: Block clustering
In general, an image can be divided into edge regions, flat regions and texture regions. An edge region is a region where the image gray level changes markedly, often marking sharp transitions of image features; a flat region is a region where the gray level changes gently, often the background of the image or a single image feature; a texture region is a region where the gray level follows a certain pattern of distribution, typically marking the texture features of the image. Therefore, a clustering algorithm is used to gather blocks with the same features together, and a compression model then performs image compression and decompression on each class of blocks. Doing so helps the compression model extract the features of that class, avoids the disturbance that blocks with different features would cause to the compression model, greatly improves the learning efficiency of the model, and ensures better quality of the reconstructed image at high compression ratios.
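The patent leaves the clustering routine unspecified beyond naming K-means; a plain NumPy version, run here on synthetic two-dimensional block descriptors (standing in for, say, mean gray level and gradient energy of each block), might look as follows. The initial centres are seeded deterministically for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, init, iters=50):
    """Plain K-means: assign each point to its nearest centre, then recompute centres."""
    centers = X[init].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
        labels = d.argmin(1)                                      # nearest-centre assignment
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)               # centre update
    return labels, centers

# Synthetic 2-D block descriptors: three well-separated groups standing in for
# flat, edge and texture blocks.
X = np.vstack([rng.normal(m, 0.1, (100, 2)) for m in (0.0, 3.0, 6.0)])
labels, centers = kmeans(X, init=[0, 100, 200])  # one deterministic seed per group

assert np.bincount(labels).tolist() == [100, 100, 100]  # each group recovered intact
```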
Step 3: PSO parameter optimization and model training
Three important parameters directly affect the performance of the multilayer sparse autoencoder: the number of hidden layers, the number of neurons per hidden layer, and the node transfer function. Determining the number of hidden layers and neurons has always been an open problem in academia without a unified theory; most researchers choose these parameters from experience and a large number of repeated experiments. In general, the more neurons a hidden layer has, the more completely the network represents the features of the input data, but the less the data are compressed, so the structure should be as compact as possible while meeting the precision requirement, i.e. use as few hidden nodes as possible. The more hidden layers there are, the stronger the network's ability to abstract the original data and the better it extracts the data's core features, making compression easier; adding hidden layers can reduce the network error and improve precision, but it also complicates the network and slows convergence.
In this work, the number of hidden layers and the number of neurons are determined with the particle swarm optimization algorithm. The quality of a neural network's parameters affects the quality of its data compression, and particle swarm optimization can find, within a given parameter range, a set of parameters that works well for the network.
The particle swarm algorithm uses the concepts of "swarm" and "evolution" and operates according to the fitness of each particle. It is an iterative optimization tool: the system is initialized with a set of random solutions and searches for the optimum by iteration, with the particles following the optimal particle through the solution space. Each individual is treated as a particle without weight or volume in an n-dimensional search space, flying at a certain velocity, until the maximum number of iterations is reached and the optimal solution is obtained. Fig. 2 is the model training diagram, illustrating the steps of image blocking, image clustering and PSO parameter optimization.
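The particle swarm loop described above can be sketched as follows. The objective here is a hypothetical quadratic surrogate for "reconstruction error as a function of (hidden-layer count, neurons per layer)" with an assumed optimum at (3, 16); the patent's true objective, training an MSAE per candidate, is too expensive for a sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over the box [lo, hi] with a basic global-best particle swarm."""
    d = len(lo)
    x = rng.uniform(lo, hi, (n, d))          # particle positions
    v = np.zeros((n, d))                     # particle velocities
    pbest = x.copy()                         # personal best positions
    pval = np.array([f(p) for p in x])       # personal best values
    g = pbest[pval.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pval
        pbest[better], pval[better] = x[better], vals[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# Hypothetical surrogate for (number of hidden layers, neurons per layer).
best, best_val = pso(lambda p: (p[0] - 3.0) ** 2 + (p[1] - 16.0) ** 2,
                     lo=np.array([1.0, 2.0]), hi=np.array([5.0, 64.0]))

assert best_val < 1e-2   # the swarm locates the optimum of the surrogate
```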
Step 4: Model testing
The Lena image is used as the test set of the three MSAE networks. The Lena image is first divided into 4096 8*8 blocks, which the K-means clustering algorithm then gathers into three classes; finally, the blocks of each class are input into the corresponding model to complete the image compression and decompression experiment. Fig. 3 shows the MSAE network obtained after particle swarm optimization and training on the sample set. The part of the network from the input layer to the last hidden layer is the image compressor MSAE_compress, and the data it outputs are the data to be stored; the part from the last hidden layer to the output layer is the image decompressor MSAE_decompress, whose output is the reconstructed pixel values of the blocks. The compression ratio is defined as R, where No is the dimension of the original data and Nc is the dimension of the compressed data, i.e. the number of neurons in the last hidden layer. Using the experimental steps above, 8×, 16× and 32× compression experiments are carried out, and the compression and decompression effect of the model is tested under different training and test samples.
R=No/Nc
Table 1 records the PSNR values of the images reconstructed by the MSAE model and the ANN model at a compression ratio of 8. Each row of Table 1 is one compression and decompression experiment, with Lena as the test sample and the remaining four images as training samples. The experimental order of the ANN-model images is the same as that of the MSAE-model images.
From the rows of Table 1 we can clearly see that the PSNR of Lena is 1-3 dB lower than that of the other, training, images. Because Lena did not participate in training, it has features the model never learned, whereas the features of the training images were fully learned, so the reconstructed training images have higher PSNR values and better quality than the test image. From the columns, for the same test and training samples, the reconstructed test image of the BP model has a PSNR 1-4 dB lower than that of the MSAE model; even for reconstructed training samples, the PSNR of the ANN-model reconstruction is much worse than that of the MSAE network reconstruction. This shows that the MSAE model outperforms the BP network in image feature extraction and image restoration.
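PSNR, the metric used in all the tables, is standard; below is a sketch (not taken from the patent) for 8-bit grayscale images, together with the compressed-code sizes implied by R = No/Nc with No = 64 for an 8*8 block.

```python
import numpy as np

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit gray images."""
    mse = np.mean((np.asarray(orig, float) - np.asarray(recon, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Worked example: if every pixel of the reconstruction is off by 16 gray levels,
# MSE = 256 and PSNR = 10*log10(255^2 / 256), roughly 24.05 dB.
orig = np.zeros((8, 8))
recon = np.full((8, 8), 16.0)
assert 24.0 < psnr(orig, recon) < 24.1

# Compression ratios used in the experiments: R = 64/Nc, so the last hidden
# layer has Nc = 8, 4 and 2 neurons for R = 8, 16 and 32 respectively.
assert [64 // r for r in (8, 16, 32)] == [8, 4, 2]
```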
Table 1: Comparison of MSAE-model and ANN-model reconstructed images at a compression ratio of 8
To avoid over-learning and under-learning, and to prevent the features of a single image from affecting the performance of the whole model and the experimental results, the stability and reliability of the MSAE model are tested here by cross validation. The experimental samples are divided into five groups: each of the five images is used in turn as the test sample, with the other four as training samples, so five experiments are carried out at the same compression ratio and every image serves in turn as the test image, avoiding the influence of any particular image on the results. Following this idea, the model is tested by cross validation at compression ratios of 8, 16 and 32; the experimental results are given in Tables 2 to 7.
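The leave-one-image-out protocol just described can be enumerated explicitly; this sketch (not in the patent) merely builds the five folds:

```python
images = ["Lena", "Baboon", "Jet", "Peppers", "Sailboat"]

# Leave-one-image-out folds: each image serves once as the test sample,
# with the remaining four as training samples.
folds = [(test, [im for im in images if im != test]) for test in images]

assert len(folds) == 5
assert all(len(train) == 4 for _, train in folds)
assert folds[0] == ("Lena", ["Baboon", "Jet", "Peppers", "Sailboat"])
```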
Table 2: Comparison of MSAE-model reconstructed images at a compression ratio of 8
Table 3: Comparison of BP-model reconstructed images at a compression ratio of 8
Table 4: PSNR comparison of MSAE reconstructed images at a compression ratio of 16
Table 5: PSNR comparison of BP reconstructed images at a compression ratio of 16
Table 6: PSNR comparison of MSAE reconstructed images at a compression ratio of 32
Table 7: PSNR comparison of BP reconstructed images at a compression ratio of 32
Tables 2 to 7 show the PSNR values of the images reconstructed by the ANN and MSAE models at compression ratios of 8, 16 and 32. Each row of a table is one experiment: in the first row the second column is the test image, in the second row the first column, in the third row the third column, in the fourth row the fourth column, and in the fifth row the fifth column; the remaining columns of each row are training images. Tables 3 to 7 follow the same layout as Table 2.
The test results show that, apart from occasional experimental error, the general trend is that the reconstructed test sample has a lower PSNR, and worse quality, than the reconstructed training samples, while the PSNR values of the reconstructed images differ little from one another. At the same compression ratio, the PSNR of the MSAE reconstruction is 1-3 dB higher than that of the ANN reconstruction, and even across different compression ratios the MSAE reconstruction has better quality than the ANN reconstruction. This fully demonstrates that the MSAE model has stronger feature-extraction ability and adaptability and higher stability.
Summarizing the experimental results, the hybrid compression model combining K-means with the MSAE network achieves better reconstruction quality in image compression and reconstruction than the ANN network: at the same compression ratio the image retains more complete features with less distortion, and the model also performs very well at other compression ratios. This shows that deep learning models hold a clear advantage over shallow learning models in feature extraction and better match the multilayer structure of the human brain's neural networks. It also verifies that applying deep learning models to image compression is practicable. The MSAE model has strong fault tolerance: local damage does not affect the overall result, a property that helps in compressing noisy image data and in recovering incomplete images from compressed information. In addition, the massively parallel processing capability of the MSAE network creates the conditions for real-time implementation of image coding.
Claims (6)
1. A static image compression method based on a deep learning model, comprising the following steps:
Step 1: Image blocking
Divide the image into blocks of 8*8 dimensions;
Step 2: Block clustering
Cluster the blocks with the same features obtained in Step 1 using the K-means clustering algorithm to form training data sets;
Step 3: PSO parameter optimization and model training
Using the multilayer sparse autoencoder (MSAE) model, determine two of the model's parameters, the number of hidden layers and the number of neurons per layer, with the particle swarm optimization (PSO) algorithm, and input the blocks with the same features into the model to complete the compression and decompression of the image;
Step 4: Model testing
Using Steps 1 to 3, carry out 8×, 16× and 32× compression experiments on the images, and, under different test and training samples, use the test sample image as the test set of the three MSAE networks to test the compression and decompression effect of the model.
2. The static image compression method based on a deep learning model according to claim 1, characterized in that: the same features in Step 2 include edge-region features, flat-region features and texture-region features.
3. The static image compression method based on a deep learning model according to claim 2, characterized in that: the edge region is a region where the image gray level changes markedly, marking sharp transitions of image features; the flat region is a region where the gray level changes gently, corresponding to the background of the image or a single image feature; the texture region is a region where the gray level follows a certain pattern of distribution, marking the texture features of the image.
4. The static image compression method based on a deep learning model according to any one of claims 1-3, characterized in that: the K-means clustering algorithm divides the blocks into three classes according to edge-region, flat-region and texture-region features.
5. The static image compression method based on a deep learning model according to claim 1, characterized in that: in the above steps, the compression ratio is defined as R = No/Nc, where No is the dimension of the original data and Nc is the dimension of the compressed data, i.e. the number of neurons in the last hidden layer.
6. The static image compression method based on a deep learning model according to claim 1, characterized in that: five images, Lena, Baboon, Jet, Peppers and Sailboat, are used in turn as test sample and training samples for model testing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710379743.8A CN107240136B (en) | 2017-05-25 | 2017-05-25 | Static image compression method based on deep learning model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107240136A true CN107240136A (en) | 2017-10-10 |
CN107240136B CN107240136B (en) | 2020-07-10 |
Family
ID=59985621
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710379743.8A Expired - Fee Related CN107240136B (en) | 2017-05-25 | 2017-05-25 | Static image compression method based on deep learning model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107240136B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107749757A (en) * | 2017-10-18 | 2018-03-02 | 广东电网有限责任公司电力科学研究院 | A kind of data compression method and device based on stacking-type own coding and PSO algorithms |
CN108062780A (en) * | 2017-12-29 | 2018-05-22 | 百度在线网络技术(北京)有限公司 | Method for compressing image and device |
CN108111873A (en) * | 2017-12-29 | 2018-06-01 | 国网山东省电力公司泰安供电公司 | A kind of GIS image data transfer methods based on machine learning |
CN108876864A (en) * | 2017-11-03 | 2018-11-23 | 北京旷视科技有限公司 | Image coding, coding/decoding method, device, electronic equipment and computer-readable medium |
CN110119745A (en) * | 2019-04-03 | 2019-08-13 | 平安科技(深圳)有限公司 | Compression method, device, computer equipment and the storage medium of deep learning model |
CN110222717A (en) * | 2019-05-09 | 2019-09-10 | 华为技术有限公司 | Image processing method and device |
CN110930322A (en) * | 2019-11-06 | 2020-03-27 | 天津大学 | Defogging method for estimating transmission image by combining image blocking with convolution network |
WO2020232612A1 (en) * | 2019-05-20 | 2020-11-26 | 西门子股份公司 | Method and apparatus lowering data volume used for data visualization |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105163121A (en) * | 2015-08-24 | 2015-12-16 | 西安电子科技大学 | Large-compression-ratio satellite remote sensing image compression method based on deep self-encoding network |
CN105531725A (en) * | 2013-06-28 | 2016-04-27 | D-波系统公司 | Systems and methods for quantum processing of data |
US20160292589A1 (en) * | 2015-04-03 | 2016-10-06 | The Mitre Corporation | Ultra-high compression of images based on deep learning |
CN106251375A (en) * | 2016-08-03 | 2016-12-21 | 广东技术师范学院 | A kind of degree of depth study stacking-type automatic coding of general steganalysis |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107749757A (en) * | 2017-10-18 | 2018-03-02 | 广东电网有限责任公司电力科学研究院 | A kind of data compression method and device based on stacking-type own coding and PSO algorithms |
CN108876864B (en) * | 2017-11-03 | 2022-03-08 | 北京旷视科技有限公司 | Image encoding method, image decoding method, image encoding device, image decoding device, electronic equipment and computer readable medium |
CN108876864A (en) * | 2017-11-03 | 2018-11-23 | 北京旷视科技有限公司 | Image coding, coding/decoding method, device, electronic equipment and computer-readable medium |
CN108062780A (en) * | 2017-12-29 | 2018-05-22 | 百度在线网络技术(北京)有限公司 | Method for compressing image and device |
CN108111873A (en) * | 2017-12-29 | 2018-06-01 | 国网山东省电力公司泰安供电公司 | A kind of GIS image data transfer methods based on machine learning |
CN108062780B (en) * | 2017-12-29 | 2019-08-09 | 百度在线网络技术(北京)有限公司 | Method for compressing image and device |
CN108111873B (en) * | 2017-12-29 | 2020-04-14 | 国网山东省电力公司泰安供电公司 | GIS image data transmission method based on machine learning |
CN110119745A (en) * | 2019-04-03 | 2019-08-13 | 平安科技(深圳)有限公司 | Compression method, device, computer equipment and the storage medium of deep learning model |
CN110119745B (en) * | 2019-04-03 | 2024-05-10 | 平安科技(深圳)有限公司 | Compression method, compression device, computer equipment and storage medium of deep learning model |
CN110222717A (en) * | 2019-05-09 | 2019-09-10 | 华为技术有限公司 | Image processing method and device |
WO2020232612A1 (en) * | 2019-05-20 | 2020-11-26 | 西门子股份公司 | Method and apparatus lowering data volume used for data visualization |
CN110930322B (en) * | 2019-11-06 | 2021-11-30 | 天津大学 | Defogging method for estimating transmission image by combining image blocking with convolution network |
CN110930322A (en) * | 2019-11-06 | 2020-03-27 | 天津大学 | Defogging method for estimating transmission image by combining image blocking with convolution network |
Also Published As
Publication number | Publication date |
---|---|
CN107240136B (en) | 2020-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107240136A (en) | Static image compression method based on deep learning model | |
CN112149316B (en) | Aero-engine residual life prediction method based on improved CNN model | |
CN111626300B (en) | Image segmentation method and modeling method of image semantic segmentation model based on context perception | |
CN110533631B (en) | SAR image change detection method based on pyramid pooling twin network | |
CN110517329B (en) | Deep learning image compression method based on semantic analysis | |
CN111079795B (en) | Image classification method based on CNN (content-centric networking) fragment multi-scale feature fusion | |
CN110852227A (en) | Hyperspectral image deep learning classification method, device, equipment and storage medium | |
CN110334580A (en) | The equipment fault classification method of changeable weight combination based on integrated increment | |
CN109884419B (en) | Smart power grid power quality online fault diagnosis method | |
CN107679543A (en) | Sparse autocoder and extreme learning machine stereo image quality evaluation method | |
CN110223234A (en) | Depth residual error network image super resolution ratio reconstruction method based on cascade shrinkage expansion | |
CN110428045A (en) | Depth convolutional neural networks compression method based on Tucker algorithm | |
CN113962893A (en) | Face image restoration method based on multi-scale local self-attention generation countermeasure network | |
CN108537259A (en) | Train control on board equipment failure modes and recognition methods based on Rough Sets Neural Networks model | |
CN112541572A (en) | Residual oil distribution prediction method based on convolutional encoder-decoder network | |
CN109598676A (en) | A kind of single image super-resolution method based on Hadamard transform | |
CN107507253A (en) | Based on the approximate more attribute volume data compression methods of high order tensor | |
CN108734675A (en) | Image recovery method based on mixing sparse prior model | |
CN109920013A (en) | Image reconstructing method and device based on gradual convolution measurement network | |
Zhou et al. | Online filter clustering and pruning for efficient convnets | |
CN112309112A (en) | Traffic network data restoration method based on GraphSAGE-GAN | |
CN112634438A (en) | Single-frame depth image three-dimensional model reconstruction method and device based on countermeasure network | |
CN115546032A (en) | Single-frame image super-resolution method based on feature fusion and attention mechanism | |
CN105260736A (en) | Fast image feature representing method based on normalized nonnegative sparse encoder | |
CN113935240A (en) | Artificial seismic wave simulation method based on generative confrontation network algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20200710 |