CN113435376B - Bidirectional feature fusion deep convolutional neural network construction method based on discrete wavelet transform


Info

Publication number: CN113435376B (application number CN202110760099.5A)
Authority: CN (China)
Prior art keywords: feature fusion module, wavelet transform, bidirectional, discrete wavelet
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN113435376A
Inventors: Li Yafeng (李亚峰), Sun Jieqi (孙洁琪), Zhang Wenbo (张文博), Liu Penghui (刘鹏辉)
Current assignee: Baoji University of Arts and Sciences
Original assignee: Baoji University of Arts and Sciences
Application filed by Baoji University of Arts and Sciences on 2021-07-05, with priority to CN202110760099.5A
Publication of CN113435376A: 2021-09-24; application granted and published as CN113435376B: 2023-04-18

Classifications

    • G06F 2218/08 Pattern recognition for signal processing: feature extraction
    • G06F 2218/10 Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks
    • G06F 18/214 Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/253 Pattern recognition: fusion techniques of extracted features
    • G06N 3/045 Neural networks: combinations of networks
    • G06N 3/08 Neural networks: learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform, belonging to the technical fields of image classification and artificial intelligence. The method comprises the following steps: S101, constructing a bidirectional feature fusion module consisting of a spatial-domain feature fusion module, a pooling operation module, and a channel-domain feature fusion module; S102, embedding the bidirectional feature fusion module into a mainstream network architecture to replace the original pooling method; S103, training and testing the network with the embedded bidirectional feature fusion module on classical image classification data sets. Using the discrete wavelet transform and the inverse discrete wavelet transform, the invention provides a novel spatial-domain feature fusion method that effectively suppresses the information loss caused by directly applying pooling operations and improves image classification accuracy.

Description

Bidirectional feature fusion deep convolutional neural network construction method based on discrete wavelet transform
Technical Field
The invention belongs to the technical fields of image classification and artificial intelligence, and particularly relates to a method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform.
Background
The deep convolutional neural network is one of the most important tools for computer vision and image processing tasks such as image classification, object detection, and image restoration. The pooling layer is a key component of the deep convolutional neural network: it enlarges the receptive field, reduces network complexity, increases nonlinearity, and improves the generalization ability of the model. Commonly used pooling methods include max pooling, average pooling, mixed pooling, and stochastic pooling. Among them, the classical max pooling and average pooling are widely applied in deep convolutional neural networks owing to their simple and efficient design. However, a main limitation of these two pooling operations is that some feature information in the image is lost or weakened as the resolution decreases. Mixed pooling and stochastic pooling establish a link between max pooling and average pooling by probabilistic means; although they inherit the advantages of both, the problems of information loss and weakening remain. The loss and weakening of feature information caused by pooling during feature extraction directly limit the expressive power of the network and reduce classification accuracy.
Researchers have continually sought to mitigate the loss of feature information caused by pooling. Strided convolution reduces the resolution of the feature map without discarding data, but significantly increases the computation and parameter count of the network. Researchers have also observed that pooling by direct downsampling ignores the difference in the spatial and channel distributions of low-frequency and high-frequency features, producing aliasing between frequency-domain features. It has therefore been proposed to remove the high-frequency components by low-pass filtering before downsampling, which effectively avoids aliasing. However, because the high-frequency part of a feature map contains a large amount of detail and edge information, directly discarding the high-frequency features still loses feature information and weakens the expressive power of the network.
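A minimal sketch of this prior-art low-pass-then-downsample scheme follows; the 3x3 binomial filter and the PyTorch framing are illustrative assumptions, not part of the invention:

```python
import torch
import torch.nn.functional as F

def blur_downsample(x: torch.Tensor) -> torch.Tensor:
    """Prior-art anti-aliased pooling: low-pass filter each channel with a
    fixed 3x3 binomial kernel, then subsample with stride 2. The high
    frequencies are discarded rather than preserved."""
    c = x.shape[1]
    k = torch.tensor([[1., 2., 1.],
                      [2., 4., 2.],
                      [1., 2., 1.]], device=x.device) / 16.0
    return F.conv2d(x, k.expand(c, 1, 3, 3), stride=2, padding=1, groups=c)
```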
In order to solve these problems, a method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform is provided.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform, which aims to solve the problems described in the background art.
To achieve this purpose, the invention provides the following technical scheme. A method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform comprises the following steps:
S101, constructing a bidirectional feature fusion module based on the discrete wavelet transform and the inverse discrete wavelet transform, wherein the bidirectional feature fusion module consists of a spatial-domain feature fusion module, a pooling operation module, and a channel-domain feature fusion module;
S102, embedding the bidirectional feature fusion module into a mainstream network architecture to replace the original pooling method;
S103, training and testing the deep convolutional neural network with the embedded bidirectional feature fusion module on classical image classification data sets.
As a further refinement of the technical scheme, the spatial-domain feature fusion module performs fusion between spatial-domain features in the following steps:
S201, decomposing the input features using the discrete wavelet transform to obtain low-frequency features and high-frequency features in the horizontal, vertical, and diagonal directions;
S202, reconstructing the grouped sub-bands using the inverse discrete wavelet transform to obtain four types of features with the same spatial dimensions as the original features, thereby realizing fusion between spatial-domain features.
As a further refinement, the grouped reconstruction refers to reconstruction from the low-frequency sub-band alone, from the low-frequency plus horizontal high-frequency sub-bands, from the low-frequency plus vertical high-frequency sub-bands, and from the low-frequency plus diagonal high-frequency sub-bands.
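A minimal sketch of this spatial-domain fusion step for the Haar wavelet is given below; the function name and the conv2d-based DWT/IDWT implementation are illustrative assumptions, and the orientation labels of the sub-bands follow the usual DWT convention:

```python
import torch
import torch.nn.functional as F

# Orthonormal 2x2 Haar analysis kernels: LL (low-pass) plus the three
# high-frequency sub-bands. Because the basis is orthonormal, strided
# convolution implements the DWT and the matching transposed convolution
# implements the inverse DWT, so the step adds no learnable parameters.
_HAAR = 0.5 * torch.tensor([
    [[1.,  1.], [ 1.,  1.]],   # LL
    [[1.,  1.], [-1., -1.]],   # LH: horizontal-direction high frequency
    [[1., -1.], [ 1., -1.]],   # HL: vertical-direction high frequency
    [[1., -1.], [-1.,  1.]],   # HH: diagonal-direction high frequency
])

def spatial_fusion(x: torch.Tensor):
    """Sketch of S201/S202: Haar DWT of x, then four grouped IDWT
    reconstructions (LL alone, LL+LH, LL+HL, LL+HH), each with the same
    spatial size as x. Assumes even height and width."""
    n, c, h, w = x.shape
    wa = _HAAR.to(x.device).repeat(c, 1, 1).unsqueeze(1)   # (4c, 1, 2, 2)
    sub = F.conv2d(x, wa, stride=2, groups=c)              # DWT per channel
    ll, lh, hl, hh = sub.view(n, c, 4, h // 2, w // 2).unbind(dim=2)

    def idwt(*bands):
        """Inverse DWT restricted to the given (sub-band, kernel) pairs."""
        out = 0
        for band, kern in bands:
            k = kern.to(x.device).expand(c, 1, 2, 2)
            out = out + F.conv_transpose2d(band, k, stride=2, groups=c)
        return out

    k_ll, k_lh, k_hl, k_hh = _HAAR
    return (idwt((ll, k_ll)),                 # low frequency only
            idwt((ll, k_ll), (lh, k_lh)),     # low + horizontal high
            idwt((ll, k_ll), (hl, k_hl)),     # low + vertical high
            idwt((ll, k_ll), (hh, k_hh)))     # low + diagonal high
```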
As a further refinement, the four groups of features obtained by the spatial-domain feature fusion module are all spatial-domain features with approximately the same distribution, so their influence on the loss function is comparable; the four groups of fused features obtained in the spatial domain are then pooled separately.
As a further refinement, because the spatial-domain feature fusion module uses only the discrete wavelet transform and the inverse discrete wavelet transform, the spatial-domain fusion process does not add any parameters.
As a further refinement, the channel-domain feature fusion module concatenates the pooled groups of features along the channel dimension and then reduces the dimensionality with a 1x1 convolution to realize information interaction, so that information from different frequency bands is mutually complemented and propagated in the channel domain.
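A minimal sketch of such a channel-domain fusion step, with illustrative names and PyTorch as an assumed framework:

```python
import torch
import torch.nn as nn

class ChannelFusion(nn.Module):
    """Channel-domain fusion sketch: concatenate the four pooled feature
    groups along the channel axis, then use a 1x1 convolution to reduce
    4*c channels back to c so the frequency bands exchange information."""
    def __init__(self, channels: int):
        super().__init__()
        self.reduce = nn.Conv2d(4 * channels, channels, kernel_size=1)

    def forward(self, ll, lh, hl, hh):
        return self.reduce(torch.cat([ll, lh, hl, hh], dim=1))
```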
As a further refinement, a feature map passes through the bidirectional feature fusion module by traversing, in order, the spatial-domain feature fusion module, the pooling operation, and the channel-domain feature fusion module, so the dual-domain feature fusion can be summarized as:

LL, LH, HL, HH = Pooling(f_s(x))
x' = f_c(concat[LL, LH, HL, HH])

where the feature map x is given as input and passes, in turn, through the spatial-domain feature fusion module f_s, the pooling operation Pooling, and the channel-domain feature fusion module f_c; x' denotes the result of the feature map after the bidirectional feature fusion module.
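Combining the two formulas, the whole module might be sketched as follows, reusing the spatial_fusion and ChannelFusion sketches above; max pooling is an assumed choice, since any pooling operation fits the scheme:

```python
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalFeatureFusion(nn.Module):
    """Sketch of x' = f_c(concat[Pooling(f_s(x))]): spatial-domain fusion,
    per-group pooling, then channel-domain fusion. Like the pooling layer
    it replaces, the module halves the spatial resolution."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel_fusion = ChannelFusion(channels)   # sketch above (f_c)

    def forward(self, x):
        groups = spatial_fusion(x)                      # sketch above (f_s)
        pooled = [F.max_pool2d(g, kernel_size=2) for g in groups]
        return self.channel_fusion(*pooled)
```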
As a further refinement, the bidirectional feature fusion module can be packaged as an independent module to replace the pooling operation in a deep convolutional neural network.
As a further refinement, the deep convolutional neural network with the embedded bidirectional feature fusion module is trained and tested on public image classification data, improving the classification accuracy of the model.
Compared with the prior art, the method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform provided by the invention has the following beneficial effects:
(1) The spatial-domain feature fusion module establishes information fusion among different features in the spatial domain and expands the channel dimension. Thanks to the grouped reconstruction, the different features have approximately the same distribution and a comparable influence on the loss function; the relatively important low-frequency wavelet features are reused in every group, so the features are redundant across the spatial and channel domains, which reduces the feature loss caused by pooling.
(2) The four types of features are pooled separately and the pooled results are passed to the channel-domain feature fusion module. Compared with classical networks, popular attention-based networks, and recent wavelet-based deep convolutional neural networks, the deep convolutional neural network with the embedded bidirectional feature fusion module improves classification accuracy with only a small increase in parameters, demonstrating a clear advantage.
(3) Using the discrete wavelet transform and the inverse discrete wavelet transform, the method provides a novel spatial-domain feature fusion approach that overcomes the shortcomings of existing wavelet-based methods, can be directly embedded into currently popular neural network architectures to replace the traditional pooling operation, and suppresses pooling-induced information loss in both the spatial and channel domains.
Drawings
FIG. 1 is a schematic flow chart of the method for constructing a bidirectional feature fusion deep convolutional neural network based on discrete wavelet transform according to the present invention;
FIG. 2 is a schematic flow diagram of the bidirectional feature fusion module in the method according to the present invention;
FIG. 3 is a schematic structural diagram of the bidirectional feature fusion module in the method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiment one:
Referring to FIGS. 1-3, a method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform comprises the following steps:
S101, constructing a bidirectional feature fusion module based on the discrete wavelet transform and the inverse discrete wavelet transform, wherein the bidirectional feature fusion module consists of a spatial-domain feature fusion module, a pooling operation module, and a channel-domain feature fusion module;
S102, embedding the bidirectional feature fusion module into a mainstream network architecture to replace the original pooling method;
S103, training and testing the deep convolutional neural network with the embedded bidirectional feature fusion module on classical image classification data sets.
Specifically, the spatial-domain feature fusion module performs fusion between spatial-domain features in the following steps:
S201, decomposing the input features using the discrete wavelet transform to obtain low-frequency features and high-frequency features in the horizontal, vertical, and diagonal directions;
S202, reconstructing the grouped sub-bands using the inverse discrete wavelet transform to obtain four types of features with the same spatial dimensions as the original features, thereby realizing fusion between spatial-domain features.
Specifically, the grouped reconstruction refers to reconstruction from the low-frequency sub-band alone, from the low-frequency plus horizontal high-frequency sub-bands, from the low-frequency plus vertical high-frequency sub-bands, and from the low-frequency plus diagonal high-frequency sub-bands.
Specifically, the four groups of features obtained by the spatial-domain feature fusion module are all spatial-domain features with approximately the same distribution, so their influence on the loss function is comparable; the four groups of fused features obtained in the spatial domain are then pooled separately.
Specifically, because the spatial-domain feature fusion module uses only the discrete wavelet transform and the inverse discrete wavelet transform, the spatial-domain fusion process does not add any parameters.
Specifically, the channel-domain feature fusion module concatenates the pooled groups of features along the channel dimension and then reduces the dimensionality with a 1x1 convolution to realize feature selection and information interaction, so that information from different frequency bands is mutually complemented and propagated in the channel domain.
Specifically, a feature map passes through the bidirectional feature fusion module by traversing, in order, the spatial-domain feature fusion module, the pooling operation, and the channel-domain feature fusion module, so the dual-domain feature fusion can be summarized as:

LL, LH, HL, HH = Pooling(f_s(x))
x' = f_c(concat[LL, LH, HL, HH])

where the feature map x is given as input and passes, in turn, through the spatial-domain feature fusion module f_s, the pooling operation Pooling, and the channel-domain feature fusion module f_c; x' denotes the result of the feature map after the bidirectional feature fusion module.
Specifically, the bidirectional feature fusion module may be packaged as an independent module to replace pooling operations in a deep convolutional neural network.
Specifically, the deep convolutional neural network with the embedded bidirectional feature fusion module is trained and tested on public image classification data to improve the classification accuracy of the model.
Embodiment two:
the method for constructing the bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform is adopted, and 3 kinds of classical image classification data sets CIFAR-10, CIFAR-100 and Mini-ImageNet are used for training and verifying a feature fusion image classification model obtained by image features extracted by the deep convolutional neural network based on the discrete wavelet transform. And (3) iteratively updating parameters of a convolution kernel and a neuron in the model by using an SGD optimizer, wherein the specific parameters of the optimizer are as follows: 100 times of iterative training, the size of each training batch is 32, the learning rate is 0.001, the weight penalty term is 0.0001, and the momentum is 0.8. When the loss of the training set and the verification set tends to converge, the representation model is stable, and a trained classification model is obtained.
In the method, the bidirectional feature fusion module is constructed first: the discrete wavelet transform decomposes the input features into low-frequency features and high-frequency features in the horizontal, vertical, and diagonal directions, and the inverse discrete wavelet transform then reconstructs the different frequency-domain features in groups. Reconstructing from the low-frequency sub-band alone, from the low-frequency plus horizontal high-frequency sub-bands, from the low-frequency plus vertical high-frequency sub-bands, and from the low-frequency plus diagonal high-frequency sub-bands yields four types of features with the same spatial dimensions as the original features. The second module performs the pooling operation: the four types of features are pooled separately, and the pooled results are passed to the third module, the channel-domain feature fusion module. This module realizes feature fusion among channels: it first concatenates along the channel dimension and then reduces the dimensionality with a 1x1 convolution to reduce information redundancy among channels. The bidirectional feature fusion module is embedded into mainstream deep neural network architectures such as VGG, ResNet, and DenseNet. Taking VGG16 as an example, the network comprises 13 convolutional layers, 5 pooling layers, and 3 fully connected layers; the 5 pooling layers in VGG16 are replaced by the bidirectional feature fusion module. The deep convolutional neural network with the embedded bidirectional feature fusion module is trained and tested on the public image classification data sets CIFAR-10, CIFAR-100, and Mini-ImageNet. The CIFAR-10 data set contains 50000 training images and 10000 test images in 10 different categories, each image of size 32 x 32. The CIFAR-100 data set contains 50000 training images and 10000 test images in 100 different categories, each image of size 32 x 32. The Mini-ImageNet data set contains 100 categories, each with 500 training images and 100 test images.
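As an illustration of the replacement step, the sketch below swaps every pooling layer of a torchvision VGG16 for the BidirectionalFeatureFusion sketch defined earlier; using torchvision as the source of the architecture is an assumption of this sketch, not part of the patent:

```python
import torch.nn as nn
from torchvision.models import vgg16

# Replace each of the five MaxPool2d layers in a torchvision VGG16
# (13 convolutional, 5 pooling, 3 fully connected layers) with the
# BidirectionalFeatureFusion sketch; the channel count for each module
# is read off the preceding convolution. The classifier geometry may
# still need adapting for 32x32 inputs such as CIFAR.
net = vgg16(num_classes=100)
channels, layers = 3, []
for layer in net.features:
    if isinstance(layer, nn.Conv2d):
        channels = layer.out_channels
    if isinstance(layer, nn.MaxPool2d):
        layers.append(BidirectionalFeatureFusion(channels))
    else:
        layers.append(layer)
net.features = nn.Sequential(*layers)
```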
In this embodiment, extensive experiments on the CIFAR-10, CIFAR-100, and Mini-ImageNet data sets show that replacing the pooling operations in a model with the bidirectional feature fusion module, i.e., constructing the bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform, improves image classification accuracy. As shown in Table 1, using different types of wavelets improves classification accuracy over the original method, and different wavelet types influence the classification results differently; among them, bidirectional feature fusion modules constructed with the Haar wavelet and the biorthogonal bior2.2 wavelet obtain higher classification accuracy in most experiments.
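The wavelet family is thus a free choice in the module. As a hedged illustration (the patent does not name an implementation library), the pytorch_wavelets package allows swapping between the Haar and bior2.2 wavelets mentioned above:

```python
import torch
from pytorch_wavelets import DWTForward, DWTInverse

# Single-level decomposition with a configurable wavelet family;
# 'haar' and 'bior2.2' are the families reported to work best here.
dwt = DWTForward(J=1, wave='bior2.2', mode='zero')
idwt = DWTInverse(wave='bior2.2', mode='zero')

x = torch.randn(1, 64, 32, 32)
ll, highs = dwt(x)         # ll: low-pass; highs[0]: the 3 high-frequency bands
recon = idwt((ll, highs))  # reconstruction with the matching synthesis filters
```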
TABLE 1. Classification accuracy comparison (%) of deep convolutional neural networks with the embedded bidirectional feature fusion module
[Table 1 appears only as an image in the original publication.]
In this specification, references to the above-described embodiments or examples do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features thereof, provided they are not mutually inconsistent.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (8)

1. A method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform, characterized by comprising the following specific steps:
S101, constructing a bidirectional feature fusion module based on the discrete wavelet transform and the inverse discrete wavelet transform, wherein the bidirectional feature fusion module consists of a spatial-domain feature fusion module, a pooling operation module, and a channel-domain feature fusion module;
S102, embedding the bidirectional feature fusion module into a mainstream network architecture to replace the original pooling method;
S103, training and testing the deep convolutional neural network with the embedded bidirectional feature fusion module on classical image classification data sets;
wherein a feature map passes through the bidirectional feature fusion module by traversing, in order, the spatial-domain feature fusion module, the pooling operation, and the channel-domain feature fusion module, so that the bidirectional feature fusion module is summarized as:

LL, LH, HL, HH = Pooling(f_s(x))
x' = f_c(concat[LL, LH, HL, HH])

where the feature map x is given as input and passes, in turn, through the spatial-domain feature fusion module f_s, the pooling operation Pooling, and the channel-domain feature fusion module f_c; x' denotes the result of the feature map after the bidirectional feature fusion module.
2. The method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform according to claim 1, wherein the spatial-domain feature fusion module performs fusion between spatial-domain features in the following steps:
S201, decomposing the input features using the discrete wavelet transform to obtain low-frequency features and high-frequency features in the horizontal, vertical, and diagonal directions;
S202, reconstructing the grouped sub-bands using the inverse discrete wavelet transform to obtain four types of features with the same spatial dimensions as the original features, thereby realizing fusion between the spatial-domain features.
3. The method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform according to claim 2, wherein the grouped reconstruction refers to reconstruction from the low-frequency sub-band alone, from the low-frequency plus horizontal high-frequency sub-bands, from the low-frequency plus vertical high-frequency sub-bands, and from the low-frequency plus diagonal high-frequency sub-bands.
4. The method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform according to claim 2, wherein the four groups of features obtained by the spatial-domain feature fusion module are all spatial-domain features with approximately the same distribution, so that their influence on the loss function is comparable, and the four groups of fused features obtained in the spatial domain are pooled separately.
5. The method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform according to claim 2, wherein the spatial-domain feature fusion module uses the discrete wavelet transform and the inverse discrete wavelet transform without adding any parameters in the spatial-domain feature fusion process.
6. The method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform according to claim 1, wherein the channel-domain feature fusion module concatenates the pooled groups of features along the channel dimension and then reduces the dimensionality with a 1x1 convolution to realize information interaction, so that information from different frequency bands is mutually complemented and propagated in the channel domain.
7. The method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform according to claim 1, wherein the bidirectional feature fusion module is packaged as an independent module to replace the pooling operation in the deep convolutional neural network.
8. The method for constructing a bidirectional feature fusion deep convolutional neural network based on the discrete wavelet transform according to claim 1, wherein the deep convolutional neural network with the embedded bidirectional feature fusion module is trained and tested on public image classification data to improve the classification accuracy of the model.
CN202110760099.5A, filed 2021-07-05 (priority date 2021-07-05): Bidirectional feature fusion deep convolutional neural network construction method based on discrete wavelet transform; status Active; granted as CN113435376B (en).

Priority Applications (1)

Application Number: CN202110760099.5A; Priority Date: 2021-07-05; Filing Date: 2021-07-05; Title: Bidirectional feature fusion deep convolutional neural network construction method based on discrete wavelet transform

Publications (2)

Publication Number Publication Date
CN113435376A (en) 2021-09-24
CN113435376B (en) 2023-04-18

Family

ID=77759171

Family Applications (1)

Application Number: CN202110760099.5A (Active, granted as CN113435376B); Priority Date: 2021-07-05; Filing Date: 2021-07-05; Title: Bidirectional feature fusion deep convolutional neural network construction method based on discrete wavelet transform

Country Status (1)

Country: CN; Publication: CN113435376B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506822A (en) * 2017-07-26 2017-12-22 天津大学 A kind of deep neural network method based on Space integration pond

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894477B (en) * 2016-06-03 2019-04-02 深圳市樊溪电子有限公司 Astronomical image noise elimination method
CN109643396A (en) * 2016-06-17 2019-04-16 诺基亚技术有限公司 Construct convolutional neural networks
CN109978057A (en) * 2019-03-28 2019-07-05 宝鸡文理学院 A kind of research method of the hardware image recognition algorithm based on deep learning
CN111179173B (en) * 2019-12-26 2022-10-14 福州大学 Image splicing method based on discrete wavelet transform and gradient fusion algorithm
CN111680176B (en) * 2020-04-20 2023-10-10 武汉大学 Remote sensing image retrieval method and system based on attention and bidirectional feature fusion
CN111967516B (en) * 2020-08-14 2024-02-06 西安电子科技大学 Pixel-by-pixel classification method, storage medium and classification equipment
CN112200161B (en) * 2020-12-03 2021-03-02 北京电信易通信息技术股份有限公司 Face recognition detection method based on mixed attention mechanism




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant