CN113706641B - Hyperspectral image compression method based on space and spectral content importance - Google Patents

Hyperspectral image compression method based on space and spectral content importance

Info

Publication number
CN113706641B
CN113706641B (application CN202110916576.2A)
Authority
CN
China
Prior art keywords
image
network
importance
tensor
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110916576.2A
Other languages
Chinese (zh)
Other versions
CN113706641A (en)
Inventor
种衍文
顾晓林
潘少明
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202110916576.2A priority Critical patent/CN113706641B/en
Publication of CN113706641A publication Critical patent/CN113706641A/en
Application granted granted Critical
Publication of CN113706641B publication Critical patent/CN113706641B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T 9/002 Image coding using neural networks (G Physics; G06 Computing; G06T Image data processing or generation)
    • G06F 18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • Y02A 40/10 Adaptation technologies in agriculture


Abstract

The invention relates to a hyperspectral image compression method based on spatial and spectral content importance. A compression network model is first trained on a training data set to obtain the model parameters. The input image tensor is then split into two branches: one branch is compressed by an encoder network into a hidden characterization tensor at 1/16 the scale of the original image, which is fed to a quantizer network and, through pre-quantization and quantization, yields a binarized code stream; the other branch is fed to a multi-depth convolutional network that generates an importance map. The importance map is used to weight the binarized code stream, producing a content-based code stream, which is input to a decoder to obtain the reconstructed image. Because the importance map is generated from both the spatial and the spectral characteristics of the hyperspectral image, the encoder can, under its guidance, dynamically allocate the code rate according to the spatial and spectral complexity of the image content, improving the compression rate while guaranteeing the quality of image compression.

Description

Hyperspectral image compression method based on space and spectral content importance
Technical Field
The invention belongs to the technical field of hyperspectral image compression, and particularly relates to a hyperspectral image compression method based on spatial and spectral content importance.
Background
Compared with natural images, hyperspectral images exhibit not only spatial redundancy but also redundancy from the strong similarity between adjacent spectral bands. The rich spectral information of a hyperspectral image reflects differences in the physical structure and chemical composition of the observed sample, and provides important data support for geological exploration, precision agriculture, environmental monitoring, and other applications. However, as the resolution of remote sensors rapidly improves, the data volume of hyperspectral images grows geometrically, the correlation between bands becomes stronger, and the information redundancy becomes larger. This not only increases the computational burden, but may also aggravate the conflict between limited channel bandwidth and real-time data transmission requirements. Effectively compressing hyperspectral images while largely preserving their spectral characteristic information therefore places higher demands on their transmission, storage, and processing.
In recent years, deep convolutional networks have achieved great success in a variety of visual tasks, and some natural-image compression methods based on them have reached performance comparable to, or even better than, conventional methods. In 2016, researchers at Google achieved performance comparable to JPEG using recurrent neural networks (a hybrid of GRU and ResNet variants). In 2017, they improved the recurrent model by introducing Spatially Adaptive Bit Rates (SABR), dynamically adjusting the local code rate according to the target reconstruction quality and surpassing WebP. In the same year, Li et al. proposed a content-weighted image compression technique that learns an importance map of the image with a three-layer convolutional network, generates an importance mask by quantization, and applies the mask in the subsequent encoding process, performing on par with JPEG 2000. All of these are compression methods for natural images, not for hyperspectral images. For hyperspectral images, dynamically assigning the code rate according to content importance must account not only for spatial content importance but also for spectral importance.
In summary, a compression scheme designed for the specific spectral characteristics of hyperspectral images can effectively remove inter-band correlation, improve spectral fidelity, and achieve higher performance for hyperspectral image compression.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a hyperspectral image compression method based on spatial and spectral content importance. A compression network model is first trained on a training data set to obtain the model parameters. The input image tensor is then split into two branches: one branch is compressed by an encoder network into a hidden characterization tensor at 1/16 the scale of the original image, which is fed to a quantizer network and, through pre-quantization and quantization, yields a binarized code stream; the other branch is fed to a multi-depth convolutional network that generates an importance map. The importance map and the quantized binarized code stream are weighted together to obtain a content-based code stream, which is input to a decoder to obtain the reconstructed image.
In order to achieve the above purpose, the technical scheme provided by the invention is a hyperspectral image compression method based on space and spectrum content importance, comprising the following steps:
step 1, randomly cropping the multispectral images in the training set into image blocks of size 31 × 256 × 256;
step 2, converting the image blocks cropped in step 1 into tensors of size 16 × 31 × 256 × 256 (batch size 16) and inputting them into the compression network model for training, the model comprising an encoder, a quantizer, a multi-depth convolutional network and a decoder; iterating over all data 300 times to obtain the trained compression network;
step 3, inputting the hyperspectral image to be compressed into the compression network, and dividing the read image tensor into two branches;
step 4, compressing one branch of the image tensor through the encoder network to obtain a hidden characterization tensor at 1/16 the scale of the original image;
step 5, inputting the hidden characterization tensor obtained in step 4 into the quantizer network, and obtaining a binarized code stream through pre-quantization and quantization;
step 6, inputting the other branch of the image tensor into the multi-depth convolutional network to generate an importance map;
step 7, weighting the importance map generated in step 6 with the binarized code stream quantized in step 5 to obtain a content-based code stream, the weight coefficients taking empirical values from repeated experiments;
and step 8, inputting the content-based code stream obtained in step 7 into the decoder to obtain the reconstructed image.
Furthermore, the loss function ℓ used for training the compression network in step 2 is calculated as follows:
ℓ = ℓ_R(x_i, c) + λ · ℓ_D(x_i, x̂_i)
where c is the coding of the input image; ℓ_R(x_i, c) is the code-rate loss between the input image x_i and the code c, i.e. the deviation of the image code stream from the target code stream; ℓ_D(x_i, x̂_i) is the distortion loss between the input image x_i and the output image x̂_i, calculated by the mean square error (MSE); λ is a hyper-parameter balancing the code-rate loss against the distortion loss, set by the user during training.
In addition, in step 4 the encoder is formed by alternately stacking four convolution layers and three GDN layers. During downsampling, the encoder gradually transfers the spatial information of the original image tensor into the inter-spectral dimension: after passing through the encoder, the spatial dimensions of the image tensor (B, C, H, W) become (H/4, W/4) and the number of channels becomes M, where M is the number of convolution kernels in the last layer.
Moreover, the quantizer network in step 5 comprises two processing modules, pre-quantization and quantization. The pre-quantization module, based mainly on discrete neural-network learning, maps the B × C × H × W hidden characterization tensor output by the encoder into the hidden embedding space e ∈ R^(k×d) (k = B, d = C × H × W). The quantization module performs {-1, 1} binarization on the pre-quantized feature maps to obtain the code stream.
The binarization formula is B(x) = 1 if x ≥ 0 and B(x) = -1 if x < 0, where x is the value of a point on the feature map.
In step 6, the image tensor is input into the multi-depth convolutional network to generate three sub-importance maps, which are then combined by proportional weighted summation into the final importance map; the weight coefficients take empirical values from repeated experiments. The multi-depth convolutional network is composed of three single-depth importance-map networks, each containing two convolution layers. A single-depth importance-map network identifies the important regions of the image; SE-BLOCKs are introduced to model the interdependence among feature-map channels, strengthening important channel features and improving the feature directivity of the importance map, which is generated to guide bit allocation. To compensate for the excessive loss of non-important channels at low bit rates, a pyramid decomposition structure is adopted to reconstruct non-important channel information, forming the multi-depth convolutional network. In the second convolution layer of each single-depth importance-map network, dynamic receptive-field convolution (DRFc) replaces conventional convolution: all points in the first-order and second-order neighborhoods of each convolution point are ranked by their correlation with that point, and the eight top-ranked points are selected as its dynamic receptive field, greatly improving the CNN's ability to extract edge information.
In addition, the decoder in step 8 is symmetric to the encoder, formed by alternately stacking four convolution layers and three IGDN layers. During upsampling, the decoder gradually transfers the inter-spectral information of the code stream back into the spatial dimension, realizing image reconstruction.
Compared with the prior art, the invention has the following advantages: (1) An importance map is generated from both the spatial and the spectral characteristics of the hyperspectral image; under its guidance, the encoder can dynamically allocate the code rate according to the spatial and spectral complexity of the image content, improving the compression rate while guaranteeing the quality of image compression. (2) While the spectral redundancy of the hyperspectral image is effectively removed, its spectral characteristics are well preserved, which benefits downstream applications. (3) The method is well suited to remote-sensing image compression and transmission under low-bandwidth, low-code-rate conditions, and has excellent image reconstruction capability. (4) The scale and running speed of the deep neural network are optimized, facilitating deployment and popularization on Internet-of-Things devices.
Drawings
Fig. 1 is a schematic diagram of an encoder-quantization-multi-depth importance map-decoder block architecture in a compression network according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a specific structure of a multi-depth importance map module according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of the specific steps of dynamic receptive-field convolution in an embodiment of the present invention.
FIG. 4 shows the importance maps of a hyperspectral image at different bit rates in an embodiment of the present invention: FIG. 4(a) at 0.25 bpp, FIG. 4(b) at 0.4 bpp, FIG. 4(c) at 0.55 bpp, FIG. 4(d) at 0.7 bpp, FIG. 4(e) at 1.0 bpp, and FIG. 4(f) at 1.25 bpp.
Fig. 5 shows the reconstruction of two hyperspectral images at 0.504 bpp in an embodiment of the present invention: FIGS. 5(a) and 5(b) are the two original images, and FIGS. 5(c) and 5(d) are the corresponding reconstructions at 0.504 bpp.
Detailed Description
The invention provides a hyperspectral image compression method based on spatial and spectral content importance. A compression network model is first trained on a training data set to obtain the model parameters. The input image tensor is then split into two branches: one branch is compressed by an encoder network into a hidden characterization tensor at 1/16 the scale of the original image, which is fed to a quantizer network and, through pre-quantization and quantization, yields a binarized code stream; the other branch is fed to a multi-depth convolutional network that generates an importance map. The importance map and the quantized binarized code stream are weighted together to obtain a content-based code stream, which is input to a decoder to obtain the reconstructed image.
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
As shown in fig. 1, the flow of the embodiment of the present invention includes the following steps:
step 1, randomly cutting the multispectral image in the training set into image blocks with the sizes of 31 multiplied by 256.
Step 2, converting the image blocks cropped in step 1 into tensors of size 16 × 31 × 256 × 256 (batch size 16) and inputting them into the compression network model for training; the model comprises an encoder, a quantizer, a multi-depth convolutional network and a decoder, and all data are iterated 300 times to obtain the trained compression network. The loss function ℓ used for training is calculated as follows:
ℓ = ℓ_R(x_i, c) + λ · ℓ_D(x_i, x̂_i)
where c is the coding of the input image; ℓ_R(x_i, c) is the code-rate loss between the input image x_i and the code c, i.e. the deviation of the image code stream from the target code stream; ℓ_D(x_i, x̂_i) is the distortion loss between the input image x_i and the output image x̂_i, calculated by the mean square error (MSE); λ is a hyper-parameter balancing the code-rate loss against the distortion loss, set by the user during training.
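This rate-distortion objective can be sketched numerically. The sketch below is a minimal illustration, not the patent's implementation: the placement of λ on the distortion term and the concrete form of the rate term (deviation of the produced stream length from a target length) are assumptions, since the original formula is not preserved in the text.

```python
import numpy as np

def rd_loss(x, x_hat, code, target_bits, lam=0.01):
    """Rate-distortion loss sketch: code-rate loss plus a
    lambda-weighted MSE distortion. `target_bits` and `lam`
    are illustrative values, not taken from the patent."""
    distortion = np.mean((x - x_hat) ** 2)             # l_D: mean square error
    rate = abs(code.size - target_bits) / target_bits  # l_R: deviation from target stream length
    return rate + lam * distortion
```

In this formulation a larger λ weights reconstruction fidelity more heavily relative to the code rate.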
And 3, inputting the hyperspectral image to be compressed into a compression network, and dividing the read image tensor into two branches.
Step 4, compressing one branch of the image tensor through the encoder network to obtain a hidden characterization tensor at 1/16 the scale of the original image.
The encoder is formed by alternately stacking four convolution layers and three GDN layers. During downsampling, the encoder gradually transfers the spatial information of the original image tensor into the inter-spectral dimension: after passing through the encoder, the spatial dimensions of the image tensor (B, C, H, W) become (H/4, W/4) and the number of channels becomes M, where M is the number of convolution kernels in the last layer.
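The downsampling behaviour of such an encoder can be checked with a small shape-tracing helper. The stride pattern (2, 1, 2, 1), the channel-doubling schedule, and M = 64 are assumptions chosen for illustration; the patent states only that (B, C, H, W) ends up with spatial size (H/4, W/4) and M channels.

```python
def encoder_shapes(B, C, H, W, M=64, strides=(2, 1, 2, 1)):
    """Trace tensor shapes through the four-convolution encoder sketch."""
    shape = (B, C, H, W)
    for i, s in enumerate(strides):
        b, c, h, w = shape
        # channels grow until the last layer, which has M kernels (assumed schedule)
        c_out = M if i == len(strides) - 1 else c * 2
        shape = (b, c_out, h // s, w // s)  # a stride-s convolution divides H and W by s
    return shape

print(encoder_shapes(16, 31, 256, 256))  # → (16, 64, 64, 64): spatial size H/4 x W/4
```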
Step 5, inputting the hidden characterization tensor obtained in step 4 into the quantizer network, and obtaining a binarized code stream through pre-quantization and quantization.
The quantizer network comprises two processing modules, pre-quantization and quantization. The pre-quantization module, based mainly on discrete neural-network learning, maps the B × C × H × W hidden characterization tensor output by the encoder into the hidden embedding space e ∈ R^(k×d) (k = B, d = C × H × W). The quantization module performs {-1, 1} binarization on the pre-quantized feature maps to obtain the code stream.
The binarization formula is B(x) = 1 if x ≥ 0 and B(x) = -1 if x < 0, where x is the value of a point on the feature map.
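The {-1, 1} binarization can be sketched as thresholding at zero. The zero threshold is an assumption consistent with the description; during training, such a hard quantizer is commonly paired with a straight-through gradient estimator (general practice, not stated in the patent).

```python
import numpy as np

def binarize(x):
    """Map pre-quantized feature-map values to {-1, +1} (forward pass only)."""
    return np.where(x >= 0, 1.0, -1.0)
```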
And 6, inputting the other image tensor into a multi-depth convolution network to generate an importance map.
The multi-depth convolutional network is composed of three single-depth importance-map networks, each containing two convolution layers. A single-depth importance-map network identifies the important regions of the image; SE-BLOCKs are introduced to model the interdependence among feature-map channels, strengthening important channel features and improving the feature directivity of the importance map, which is generated to guide bit allocation. To compensate for the excessive loss of non-important channels at low bit rates, a pyramid decomposition structure is adopted to reconstruct non-important channel information, forming the multi-depth convolutional network. In the second convolution layer of each single-depth importance-map network, dynamic receptive-field convolution (DRFc) replaces conventional convolution: all points in the first-order and second-order neighborhoods of each convolution point are ranked by their correlation with that point, and the eight top-ranked points are selected as its dynamic receptive field, greatly improving the CNN's ability to extract edge information. As shown in fig. 2, the image tensor is input into the multi-depth convolutional network to generate three sub-importance maps, which are then combined by proportional weighted summation into the final importance map; the weight coefficients take empirical values from repeated experiments.
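The fusion of the three sub-importance maps into the final map is a proportional weighted sum. A minimal sketch follows, with illustrative weights standing in for the patent's empirically chosen coefficients.

```python
import numpy as np

def fuse_importance(sub_maps, weights=(0.5, 0.3, 0.2)):
    """Proportionally weighted sum of the three single-depth
    sub-importance maps; the weights are illustrative stand-ins
    for the empirical values mentioned in the patent."""
    fused = sum(w * m for w, m in zip(weights, sub_maps))
    return fused / sum(weights)  # keep the fused map in the input value range
```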
Step 7, weighting the importance map generated in step 6 with the binarized code stream quantized in step 5 to obtain a content-based code stream; the weight coefficients take empirical values from repeated experiments.
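One common realization of weighting a binarized code stream by an importance map is to keep, at each spatial location, only as many code channels as the local importance warrants. The sketch below follows that convention; it is an assumption, since the patent states only that the map and the stream are weighted together.

```python
import numpy as np

def importance_mask_bits(imp_map, n_channels):
    """Convert importance values in [0, 1] into a per-pixel
    channel budget (hypothetical mapping)."""
    return np.ceil(imp_map * n_channels).astype(int)

def apply_mask(code, imp_map):
    """Zero out code channels above the per-pixel budget.
    code: (M, H, W) binarized stream, imp_map: (H, W)."""
    M = code.shape[0]
    keep = importance_mask_bits(imp_map, M)
    ch = np.arange(M)[:, None, None]  # channel index grid, shape (M, 1, 1)
    return np.where(ch < keep[None], code, 0.0)
```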
And 8, inputting the content-based code stream obtained in the step 7 to a decoder to obtain a reconstructed image.
The decoder is symmetric to the encoder, formed by alternately stacking four convolution layers and three IGDN layers. During upsampling, the decoder gradually transfers the inter-spectral information of the code stream back into the spatial dimension, realizing image reconstruction.
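Symmetry with the encoder can again be checked by shape tracing. The stride pattern and channel schedule below mirror the encoder sketch and are assumptions; the patent states only that the decoder restores the original spatial size through four convolution layers interleaved with three IGDN layers.

```python
def decoder_shapes(B, M, h, w, C_out=31, strides=(1, 2, 1, 2)):
    """Trace tensor shapes through the decoder sketch (mirror of the encoder)."""
    shape = (B, M, h, w)
    for i, s in enumerate(strides):
        b, c, hh, ww = shape
        # channels shrink toward the band count of the output image (assumed schedule)
        c_out = C_out if i == len(strides) - 1 else max(c // 2, C_out)
        shape = (b, c_out, hh * s, ww * s)  # a stride-s transposed conv multiplies H and W by s
    return shape
```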
In specific implementation, the above process may be implemented by using a computer software technology.
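The whole pipeline of the embodiment (encode, binarize, importance-weight, decode) can be exercised end to end with crude stand-ins for every learned component. Everything below is a hypothetical substitute chosen only to show the data flow: average pooling plays the encoder, per-pixel band variance plays the importance network, and nearest-neighbour upsampling plays the decoder. None of it is the patent's network.

```python
import numpy as np

def toy_compress(x, M=8):
    """End-to-end toy of the data flow on a (C, H, W) cube.
    Assumes C >= M and H, W divisible by 4."""
    C, H, W = x.shape
    # encoder stand-in: 4x4 average pooling, then take the first M channels
    pooled = x.reshape(C, H // 4, 4, W // 4, 4).mean(axis=(2, 4))
    z = pooled[:M]
    # quantizer stand-in: {-1, 1} binarization around the mean
    bits = np.where(z >= z.mean(), 1.0, -1.0)
    # importance stand-in: normalized per-pixel variance across bands
    imp = pooled.var(axis=0)
    imp = imp / (imp.max() + 1e-9)
    keep = np.ceil(imp * M).astype(int)  # per-pixel channel budget
    ch = np.arange(M)[:, None, None]
    masked = np.where(ch < keep[None], bits, 0.0)
    # decoder stand-in: nearest-neighbour upsampling back to (M, H, W)
    return masked.repeat(4, axis=1).repeat(4, axis=2)
```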
The specific embodiments described herein are offered by way of example only to illustrate the spirit of the invention. Those skilled in the art may make various modifications, additions, or substitutions to the described embodiments without departing from the spirit of the invention or exceeding the scope defined in the accompanying claims.

Claims (4)

1. The hyperspectral image compression method based on the importance of the spatial and spectral contents is characterized by comprising the following steps of:
step 1, randomly cutting multispectral images in a training set into image blocks with the sizes of a multiplied by b multiplied by c;
step 2, converting the image block cut in the step 1 into tensors with the specification of lambda x a x b x c with the batch size as lambda, inputting a compression network model for training, wherein the compression network model comprises an encoder, a quantizer, a multi-depth convolution network and a decoder, and iterating all data for n times to obtain a trained compression network;
the loss function ℓ used for training the compression network is calculated as follows:
ℓ = ℓ_R(x_i, c) + λ · ℓ_D(x_i, x̂_i)
where c is the coding of the input image; ℓ_R(x_i, c) is the code-rate loss between the input image x_i and the code c, i.e. the deviation of the image code stream from the target code stream; ℓ_D(x_i, x̂_i) is the distortion loss between the input image x_i and the output image x̂_i, calculated by the mean square error (MSE); λ is a hyper-parameter balancing the code-rate loss and the distortion loss;
step 3, inputting the hyperspectral image to be compressed into a compression network, and dividing the read image tensor into two branches;
step 4, compressing one image tensor through an encoder network to obtain a hidden characterization tensor of 1/16 scale of the original image;
step 5, inputting the hidden characteristic tensor obtained in the step 4 into a quantizer network, and obtaining a binarized code stream through pre-quantization and quantization processing;
step 6, inputting the other image tensor into a multi-depth convolution network to generate an importance map;
the method comprises the steps of inputting image tensors into a multi-depth convolution network to respectively generate three sub-importance graphs, then carrying out proportional weighted summation to obtain a final importance graph, and taking experience values of multiple experiments by weight coefficients; the multi-depth convolution network is composed of three single-depth importance graph networks, each single-depth importance graph network comprises two convolution layers; the single-depth importance map network is used for identifying important areas of the image, introducing interdependence among SE-BLOCK modeling feature map channels, strengthening important channel features, improving feature directivity of the importance map, and generating an important map to guide bit allocation; in order to compensate excessive loss of a non-important channel under the condition of low bit rate, reconstructing non-important channel information by adopting a pyramid decomposition structure to form a multi-depth convolution network, adopting a dynamic acceptance domain convolution DRFc to replace conventional convolution in a second convolution layer of each single-depth importance graph network, firstly finding out first-order and second-order neighborhoods of convolution points, sorting all points in the neighborhoods according to the correlation degree of the first-order and second-order neighborhoods of the convolution points, and then selecting eight points with the top rank as dynamic acceptance domains of the convolution points, thereby greatly improving the capability of CNN for extracting edge information;
step 7, weighting the importance map generated in the step 6 and the binarized code stream quantized in the step 5 to obtain a code stream based on content;
and 8, inputting the content-based code stream obtained in the step 7 to a decoder to obtain a reconstructed image.
2. A method of compressing hyperspectral images based on spatial and spectral content importance as claimed in claim 1, wherein: in step 4, the encoder is formed by alternately stacking four convolution layers and three GDN layers; during downsampling, the encoder gradually transfers the spatial information of the original image tensor into the inter-spectral dimension, so that after passing through the encoder the spatial dimensions of the image tensor (B, C, H, W) become (H/4, W/4) and the number of channels becomes M, where B, C, H and W respectively denote the batch size, channels, height, and width of the feature map, and M is the number of convolution kernels in the last layer.
3. A method of compressing hyperspectral images based on spatial and spectral content importance as claimed in claim 2, wherein: the quantizer network in step 5 comprises a pre-quantization module and a quantization module; the pre-quantization module, based on discrete neural-network learning, maps the B × C × H × W hidden characterization tensor output by the encoder into the hidden embedding space e ∈ R^(k×d), where R is the set space, k = B, and d = C × H × W; the quantization module performs {-1, 1} binarization on the pre-quantized feature maps to obtain the code stream;
the binarization formula is B(x) = 1 if x ≥ 0 and B(x) = -1 if x < 0, where x is the value of a point on the feature map.
4. A method of compressing hyperspectral images based on spatial and spectral content importance as claimed in claim 1, wherein: in step 8, the decoder is symmetric to the encoder, formed by alternately stacking four convolution layers and three IGDN layers; during upsampling, the decoder gradually transfers the inter-spectral information of the code stream back into the spatial dimension, realizing image reconstruction.
CN202110916576.2A 2021-08-11 2021-08-11 Hyperspectral image compression method based on space and spectral content importance Active CN113706641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110916576.2A CN113706641B (en) 2021-08-11 2021-08-11 Hyperspectral image compression method based on space and spectral content importance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110916576.2A CN113706641B (en) 2021-08-11 2021-08-11 Hyperspectral image compression method based on space and spectral content importance

Publications (2)

Publication Number Publication Date
CN113706641A CN113706641A (en) 2021-11-26
CN113706641B (en) 2023-08-15

Family

ID=78652169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110916576.2A Active CN113706641B (en) 2021-08-11 2021-08-11 Hyperspectral image compression method based on space and spectral content importance

Country Status (1)

Country Link
CN (1) CN113706641B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463449B (en) * 2022-01-12 2024-07-12 武汉大学 Hyperspectral image compression method based on edge guidance
CN115766965B (en) * 2022-11-29 2024-07-02 广东职业技术学院 Test paper image file processing method and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108717717A (en) * 2018-04-23 2018-10-30 东南大学 The method rebuild based on the sparse MRI that convolutional neural networks and alternative manner are combined
CN110348487A (en) * 2019-06-13 2019-10-18 武汉大学 A kind of method for compressing high spectrum image and device based on deep learning
CN111683250A (en) * 2020-05-13 2020-09-18 武汉大学 Generation type remote sensing image compression method based on deep learning
CN112734867A (en) * 2020-12-17 2021-04-30 南京航空航天大学 Multispectral image compression method and system based on space spectrum feature separation and extraction
CN113011499A (en) * 2021-03-22 2021-06-22 安徽大学 Hyperspectral remote sensing image classification method based on double-attention machine system
CN113132727A (en) * 2019-12-30 2021-07-16 北京大学 Scalable machine vision coding method based on image generation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10861143B2 (en) * 2017-09-27 2020-12-08 Korea Advanced Institute Of Science And Technology Method and apparatus for reconstructing a hyperspectral image using artificial intelligence

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108717717A (en) * 2018-04-23 2018-10-30 Southeast University Sparse MRI reconstruction method combining convolutional neural networks and an iterative method
CN110348487A (en) * 2019-06-13 2019-10-18 Wuhan University Hyperspectral image compression method and device based on deep learning
CN113132727A (en) * 2019-12-30 2021-07-16 Peking University Scalable machine vision coding method based on image generation
CN111683250A (en) * 2020-05-13 2020-09-18 Wuhan University Generative remote sensing image compression method based on deep learning
CN112734867A (en) * 2020-12-17 2021-04-30 Nanjing University of Aeronautics and Astronautics Multispectral image compression method and system based on spatial-spectral feature separation and extraction
CN113011499A (en) * 2021-03-22 2021-06-22 Anhui University Hyperspectral remote sensing image classification method based on a dual-attention mechanism

Also Published As

Publication number Publication date
CN113706641A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN110348487B (en) Hyperspectral image compression method and device based on deep learning
CN109889839B (en) Region-of-interest image coding and decoding system and method based on deep learning
CN113706641B (en) Hyperspectral image compression method based on space and spectral content importance
CN110517329B (en) Deep learning image compression method based on semantic analysis
FR2724792A1 (en) Data compression method using reversible embedded wavelets
CN113450421B (en) Unmanned aerial vehicle reconnaissance image compression and decompression method based on enhanced deep learning
Perumal et al. A hybrid discrete wavelet transform with neural network back propagation approach for efficient medical image compression
CN112132158A (en) Visual image information embedding method based on an autoencoder network
CN114581341A (en) Image style migration method and system based on deep learning
Raja et al. Analysis of efficient wavelet based image compression techniques
Kathirvalavakumar et al. Self organizing map and wavelet based image compression
CN117522674A (en) Image reconstruction system and method combining local and global information
Alsayyh et al. A Novel Fused Image Compression Technique Using DFT, DWT, and DCT.
Hilles Sofm and vector quantization for image compression by component
CN108171325A (en) Time-sequential integrated network, encoding device and decoding device for multi-scale face restoration
CN115065817B (en) Hologram compression method, encoder and hologram reproduction module
CN113949880B (en) Extremely-low-bit-rate human-machine collaborative image coding training method and encoding/decoding method
WO2022194344A1 (en) Learnable augmentation space for dense generative adversarial networks
Radad et al. A Hybrid Discrete Wavelet Transform with Vector Quantization for Efficient Medical Image Compression
CN106028043B (en) Three-dimensional Self-organizing Maps image encoding method based on new neighborhood function
Hilles Spatial Frequency Filtering Using Sofm For Image Compression
Hu et al. Grayscale image coding using optimal pixel grouping and adaptive multi-grouping division block truncation coding
Sharma et al. A technique of image compression based on discrete wavelet image decomposition and self organizing map
Sawant et al. Hybrid Image Compression Method using ANN and DWT
Das et al. A Review on Deep Learning of Neural Network Based Image Compression Techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant