CN114742706A - Water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection - Google Patents


Info

Publication number
CN114742706A
CN114742706A
Authority
CN
China
Prior art keywords
remote sensing
image
module
feature
sensing image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210399165.5A
Other languages
Chinese (zh)
Other versions
CN114742706B (en)
Inventor
张林肖
周莉莎
曾丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inner Mongolia Zhiyuan Innovation Technology Co ltd
Original Assignee
Chongqing Niuzhizhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Niuzhizhi Technology Co ltd filed Critical Chongqing Niuzhizhi Technology Co ltd
Priority to CN202210399165.5A priority Critical patent/CN114742706B/en
Publication of CN114742706A publication Critical patent/CN114742706A/en
Application granted granted Critical
Publication of CN114742706B publication Critical patent/CN114742706B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A20/00 - Water conservation; Efficient water supply; Efficient water use
    • Y02A20/20 - Controlling water pollution; Waste water treatment

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection, comprising: acquiring a data set and training an image super-resolution reconstruction network; acquiring a remote sensing image to be processed; performing primary feature extraction on the image with a shallow feature mapping module to obtain a first feature map; passing the first feature map sequentially through a plurality of serially connected SL feature mapping modules to obtain a second feature map; and sampling and reconstructing the second feature map with an image output module to output an enlarged image. The method performs super-resolution reconstruction of the remote sensing image with a convolutional neural network, improving the image resolution by a relatively simple and practical means and thereby improving the accuracy of subsequent image processing, without any change to the hardware equipment; it has the advantages of low cost and good reconstruction quality.

Description

Water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection
Technical Field
The invention belongs to the technical field of environmental protection and artificial intelligence, and particularly relates to a water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection.
Background
As national regulation of environmental pollution tightens, the market related to environmental protection keeps growing, and many emerging technologies are being applied to the field. Intelligent environmental protection is a new concept proposed against the background of the current digital era: it integrates advanced technologies such as artificial intelligence, cloud computing and the Internet of Things, aiming at intelligent, accurate and efficient environmental management. Satellite remote sensing imaging is an important technology for realizing intelligent environmental protection; combined with image processing techniques such as image segmentation, image recognition and target detection, remote sensing images make it possible to rapidly monitor the type and position of a water pollution source and the distribution range of water pollution. However, all high-precision automatic processing depends on high-quality image input. Constrained by hardware performance, cost and other factors, the resolution of remote sensing images obtained in practical application scenarios is generally lower than required, which limits the precision of subsequent image segmentation and recognition.
Disclosure of Invention
In view of this problem, the invention provides a super-resolution reconstruction method for water pollution remote sensing images oriented to intelligent environmental protection, which performs super-resolution reconstruction of the remote sensing image with a computer algorithm, increasing the image resolution and thereby improving the accuracy of subsequent automatic water pollution identification.
In order to achieve the above purpose, the solution adopted by the invention is as follows: a water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection comprises the following steps:
s100, acquiring a training data set, and performing down-sampling on the training data set to obtain a low-resolution image corresponding to an image in the training data set;
s200, constructing an intelligent environment-friendly image super-resolution reconstruction network, and training the image super-resolution reconstruction network by using the training data set obtained in the step S100 and the corresponding low-resolution image thereof; the image super-resolution reconstruction network comprises a shallow feature mapping module, SL feature mapping modules and an image output module, wherein the shallow feature mapping module is arranged at the front end of the network, the SL feature mapping modules are arranged in the middle of the network, and the image output module is arranged at the tail of the network;
s300, obtaining a remote sensing image to be processed, inputting the remote sensing image to be processed into the image super-resolution reconstruction network trained in the step S200, performing primary feature extraction on the remote sensing image to be processed through the shallow feature mapping module, and outputting to obtain a first feature map;
s400, passing the first feature map sequentially through a plurality of serially connected SL feature mapping modules, and outputting a second feature map;
the SL feature mapping module is used for extracting deep feature information from the first feature map, and its mathematical model is as follows:
P0 = f₁¹(Tₙ)
P1 = ψ₁(f₁³(P0))
P2 = ψ₂(f₁⁵(P0))
AV1 = f_A1(P1 + P2)
AV2 = f_A2(P1 + P2)
P3 = Mul(P1, AV1)
P4 = Mul(P2, AV2)
P5 = ψ₄(f₃³(ψ₃(f₂³(Cat(P3, P4)))))
P6 = Mul(P5, (AV1 + AV2))
Tₙ₊₁ = Cat(P6, Tₙ)
wherein Tₙ is the feature map input to the SL feature mapping module and Tₙ₊₁ is the feature map it outputs; f₁¹() denotes a 1×1 convolution, f₁⁵() a 5×5 convolution, and f₁³(), f₂³() and f₃³() all denote 3×3 convolutions (the superscripts 1, 3 and 5 give the convolution kernel size); ψ₁(), ψ₂(), ψ₃() and ψ₄() all denote the ReLU activation function; f_A1() and f_A2() denote the first channel attention module and the second channel attention module respectively; AV1 denotes the first channel modulation map generated by the first channel attention module and AV2 the second channel modulation map generated by the second channel attention module; Mul() denotes multiplying a generated channel modulation map with a feature map, so that the modulation map re-weights the different channels of the feature map; Cat() denotes concatenating the feature maps along the channel dimension;
s500, inputting the second characteristic diagram into the image output module, wherein the image output module outputs an enlarged image after sampling and reconstructing the second characteristic diagram, and the resolution of the enlarged image is greater than that of the remote sensing image to be processed.
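The SL feature mapping module equations above can be sketched in PyTorch (the framework named in the embodiment). All class and variable names here are illustrative, Mul() is read as broadcasting a per-channel modulation vector over the spatial dimensions, and the stand-in attention module is a simplified placeholder for f_A1/f_A2 (the full dual-pooling design is given further below):

```python
import torch
import torch.nn as nn

class StubChannelAttention(nn.Module):
    """Simplified stand-in for the channel attention modules f_A1/f_A2:
    global average pooling + two-layer mapping + sigmoid."""
    def __init__(self, ch):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // 2), nn.ReLU(inplace=True),
            nn.Linear(ch // 2, ch), nn.Sigmoid())

    def forward(self, x):
        v = x.mean(dim=(2, 3))                  # (B, C) pooled descriptor
        return self.fc(v)[:, :, None, None]     # (B, C, 1, 1) modulation map

class SLBlock(nn.Module):
    """One SL feature mapping module: T_{n+1} = Cat(P6, T_n)."""
    def __init__(self, in_ch, width=48):
        super().__init__()
        self.f1_1 = nn.Conv2d(in_ch, width, 1)             # f1^1: 1x1 conv
        self.f1_3 = nn.Conv2d(width, width, 3, padding=1)  # f1^3: 3x3 conv
        self.f1_5 = nn.Conv2d(width, width, 5, padding=2)  # f1^5: 5x5 conv
        self.f2_3 = nn.Conv2d(2 * width, 2 * width, 3, padding=1)
        self.f3_3 = nn.Conv2d(2 * width, width, 3, padding=1)
        self.att1 = StubChannelAttention(width)            # f_A1
        self.att2 = StubChannelAttention(width)            # f_A2
        self.relu = nn.ReLU(inplace=True)

    def forward(self, t_n):
        p0 = self.f1_1(t_n)
        p1 = self.relu(self.f1_3(p0))           # psi_1
        p2 = self.relu(self.f1_5(p0))           # psi_2
        av1 = self.att1(p1 + p2)
        av2 = self.att2(p1 + p2)
        p3 = p1 * av1                           # Mul(P1, AV1)
        p4 = p2 * av2                           # Mul(P2, AV2)
        p5 = self.relu(self.f3_3(self.relu(self.f2_3(torch.cat([p3, p4], 1)))))
        p6 = p5 * (av1 + av2)                   # Mul(P5, AV1 + AV2)
        return torch.cat([p6, t_n], 1)          # channel count grows by `width`
```

With in_ch = width = 48, the output has 96 channels; stacking six such modules gives the 48×7-channel map described in the embodiment.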
Further, the training data set is DIV2K.
Further, the mathematical model of the first channel attention module is as follows:
A1M = δ₁(φ₁(MePl(P1 + P2)))
A1A = δ₂(φ₂(MaPl1(P1 + P2)))
AV1 = A1M + A1A
wherein P1 + P2 is the feature map input to the first channel attention module; MePl() denotes a global average pooling operation over each channel of the feature map (i.e., computing the mean of each channel); MaPl1() denotes a global maximum pooling operation over each channel (i.e., computing the maximum of each channel); δ₁ and δ₂ both denote the sigmoid activation function; φ₁ and φ₂ both denote the nonlinear mapping mechanism; and AV1 denotes the first channel modulation map output by the first channel attention module.
Further, the mathematical model of the second channel attention module is as follows:
A2V = δ₃(φ₃(VaPl(P1 + P2)))
A2A = δ₄(φ₄(MaPl2(P1 + P2)))
AV2 = A2A - A2V
wherein P1 + P2 is the feature map input to the second channel attention module; VaPl() denotes a global variance pooling operation over each channel of the feature map (i.e., computing the variance of each channel); MaPl2() denotes a global maximum pooling operation over each channel; δ₃ and δ₄ both denote the sigmoid activation function; φ₃ and φ₄ both denote the nonlinear mapping mechanism; and AV2 denotes the second channel modulation map output by the second channel attention module.
Furthermore, a jump connection is arranged between the first channel attention module and the second channel attention module, and a vector obtained after the global maximum pooling operation in the first channel attention module is input into the second channel attention module through the jump connection and is added to a vector obtained after the global maximum pooling operation in the second channel attention module.
Further, the nonlinear mapping mechanism includes a first fully-connected layer, a ReLU activation layer, and a second fully-connected layer connected in sequence.
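The two channel attention modules, the nonlinear mapping mechanism, and the jump connection described above can be sketched together in PyTorch. Class names are ours; the sketch assumes the jump connection simply adds the MaPl1 vector to the MaPl2 vector before the second module's nonlinear mapping, and uses the 48 → 24 → 48 layer sizes from the embodiment:

```python
import torch
import torch.nn as nn

class NonlinearMapping(nn.Module):
    """phi: fully-connected -> ReLU -> fully-connected (e.g. 48 -> 24 -> 48)."""
    def __init__(self, ch, reduction=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch))

    def forward(self, v):
        return self.net(v)

class DualChannelAttention(nn.Module):
    """f_A1 and f_A2 with the jump connection between them."""
    def __init__(self, ch):
        super().__init__()
        self.phi1, self.phi2 = NonlinearMapping(ch), NonlinearMapping(ch)
        self.phi3, self.phi4 = NonlinearMapping(ch), NonlinearMapping(ch)

    def forward(self, x):                        # x = P1 + P2, shape (B, C, H, W)
        flat = x.flatten(2)                      # (B, C, H*W)
        mean_v = flat.mean(dim=2)                # MePl: global average pooling
        max_v1 = flat.max(dim=2).values          # MaPl1 (first module)
        max_v2 = flat.max(dim=2).values          # MaPl2 (same input, so same values)
        var_v = flat.var(dim=2, unbiased=False)  # VaPl: global variance pooling
        a1m = torch.sigmoid(self.phi1(mean_v))   # delta_1(phi_1(...))
        a1a = torch.sigmoid(self.phi2(max_v1))   # delta_2(phi_2(...))
        av1 = a1m + a1a                          # first channel modulation map
        a2v = torch.sigmoid(self.phi3(var_v))
        a2a = torch.sigmoid(self.phi4(max_v2 + max_v1))  # jump connection adds MaPl1
        av2 = a2a - a2v                          # second channel modulation map
        return av1[:, :, None, None], av2[:, :, None, None]
```

Note that AV1 lies in (0, 2) and AV2 in (-1, 1) by construction, since each is a sum or difference of two sigmoid outputs.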
Experiments show that the best effect is obtained when the first channel attention module uses maximum pooling and average pooling, the second channel attention module uses maximum pooling and variance pooling, and the vector from the branch where the maximum pooling is located is differenced with the vector from the branch where the variance pooling is located. On top of this design, the jump connection between the first and second channel attention modules gives the second channel attention module more input information and a global receptive field, so the second channel modulation map it generates reflects the importance relations among the different channels more accurately and the modulation effect is more precise.
The invention has the following beneficial effects:
(1) a water body remote sensing image differs markedly from a common scene image: a considerable part of its area is essentially a single solid-color background, which becomes repeated low-frequency information after feature extraction, while the high-frequency water pollution features are mixed into this background;
(2) as feature maps propagate inside the network, the feature information becomes increasingly abstract; highly abstract information is critical for high-level tasks such as target detection, but for super-resolution reconstruction part of the overly abstract information is useless and even interferes with reconstruction.
Drawings
FIG. 1 is a schematic diagram of an image super-resolution reconstruction network for intelligent environmental protection according to the present invention;
FIG. 2 is a schematic diagram of the SL feature mapping module of FIG. 1;
FIG. 3 is a schematic structural view of the first channel attention module and the second channel attention module of FIG. 2;
FIG. 4 is a schematic diagram of the internal structure of the non-linear mapping mechanism of FIG. 3;
FIG. 5 is a schematic diagram of an internal structure of the image output module in FIG. 1;
FIG. 6 is a schematic structural view of the first channel attention module and the second channel attention module with the jump connection removed;
fig. 7 is a schematic diagram of the structure of the SL feature mapping module in comparative network 2;
in the drawings:
1-remote sensing image to be processed, 2-shallow feature mapping module, 3-SL feature mapping module, 31-first channel attention module, 32-second channel attention module, 33-nonlinear mapping mechanism, 331-first fully-connected layer, 332-ReLU activation layer, 333-second fully-connected layer, 34-jump connection, 4-image output module, 41-front-end 3×3 convolution layer, 42-PixelShuffle layer, 43-back-end 3×3 convolution layer, 5-enlarged image.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Example 1:
A training data set DIV2K and test data sets BSD100 and Manga109 are acquired, and bicubic down-sampling is used to reduce the resolution of the original images and obtain the corresponding low-resolution images. The network is coded in Python and implemented on the PyTorch framework. Fig. 1 shows the image super-resolution reconstruction network for intelligent environmental protection: the shallow feature mapping module 2 is a convolution layer with a 3×3 kernel, and the number of SL feature mapping modules 3 is 6. The structure of the SL feature mapping module 3 is shown in Fig. 2; the internal structures of the first channel attention module 31 and the second channel attention module 32 inside it are shown in Fig. 3; the nonlinear mapping mechanism 33, shown in Fig. 4, comprises a first fully-connected layer 331, a ReLU activation layer 332 and a second fully-connected layer 333 connected in sequence. The internal structure of the image output module 4 is shown in Fig. 5, comprising a front-end 3×3 convolution layer 41, a PixelShuffle layer 42 and a back-end 3×3 convolution layer 43 connected in sequence. In this embodiment, the parameters are optimized with an L2 loss function when training the super-resolution reconstruction network; the number of epochs is 1000 and the learning rate is fixed at 0.0002.
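The data preparation and training step described above can be sketched as follows. The text fixes the L2 loss, epoch count and learning rate but does not name an optimizer, so Adam is an assumption here; the stand-in network is a toy placeholder for the real network of Fig. 1:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_lr(hr, scale=2):
    """Bicubic down-sampling of an HR batch to build the LR training input."""
    return F.interpolate(hr, scale_factor=1 / scale, mode="bicubic",
                         align_corners=False)

def train_step(net, optimizer, hr, scale=2):
    """One optimization step with the L2 (MSE) loss used in the embodiment."""
    lr_img = make_lr(hr, scale)
    sr = net(lr_img)                   # reconstructed image at HR size
    loss = F.mse_loss(sr, hr)          # L2 loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a toy stand-in network (2x upscaling via PixelShuffle):
net = nn.Sequential(nn.Conv2d(3, 48, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(48, 3 * 4, 3, padding=1), nn.PixelShuffle(2))
opt = torch.optim.Adam(net.parameters(), lr=2e-4)  # optimizer is an assumption
hr = torch.randn(4, 3, 64, 64)                     # dummy HR batch
loss = train_step(net, opt, hr)
```

In the actual experiment this step would run over DIV2K batches for 1000 epochs with the learning rate fixed at 0.0002.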
After the image 1 to be processed is input into the network and passes through the shallow feature mapping module 2, the resulting first feature map has 48 channels. In each SL feature mapping module 3, the input feature map first undergoes a 1×1 convolution that adjusts the channel number to 48, giving the feature map P0. P0 is then fed into the 3×3 convolution and the 5×5 convolution, and the resulting feature maps P1 and P2 each have 48 channels. In the first channel attention module 31 and the second channel attention module 32, the global pooling operations produce vectors of length 48; after the nonlinear mapping mechanism 33 and the sigmoid activation function, the first channel modulation map AV1 and the second channel modulation map AV2 both have length 48. After the feature maps P3 and P4 are concatenated, the channel number becomes 96; the first 3×3 convolution keeps it at 96, and the second 3×3 convolution outputs the feature map P5 with 48 channels, so the channel number of P5 equals the lengths of AV1 and AV2.
Inside the nonlinear mapping mechanism 33, the first fully-connected layer 331 has 48 input elements and 24 output elements, and the second fully-connected layer 333 has 24 input elements and 48 output elements. For the image output module 4, the feature map input to the front-end 3×3 convolution layer 41 has 48×7 channels (48 from the first feature map plus 48 added by each of the 6 SL feature mapping modules); after the front-end 3×3 convolution layer 41, the channel number becomes 48K² (K denotes the image resolution magnification factor); the PixelShuffle layer 42 outputs a feature map with 48 channels whose height and width are K times the original. Finally, the back-end 3×3 convolution layer 43 outputs the super-resolution-reconstructed enlarged image 5 with 3 channels.
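The channel bookkeeping of the image output module can be checked with a short PyTorch sketch (class name ours; K = 2 assumed for the check):

```python
import torch
import torch.nn as nn

class OutputModule(nn.Module):
    """Front-end 3x3 conv -> PixelShuffle -> back-end 3x3 conv, following
    the channel counts of the embodiment: 48*7 in, 48*K^2 mid, 3 out."""
    def __init__(self, in_ch=48 * 7, width=48, scale=2):
        super().__init__()
        self.front = nn.Conv2d(in_ch, width * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)   # 48*K^2 -> 48 channels, HxW -> KH x KW
        self.back = nn.Conv2d(width, 3, 3, padding=1)

    def forward(self, x):
        return self.back(self.shuffle(self.front(x)))
```

With a 336-channel (48×7) input of size 16×16 and K = 2, the module emits a 3-channel 32×32 image, matching the arithmetic in the paragraph above.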
The image super-resolution reconstruction network provided by the embodiment is compared with the existing advanced models SRMDNF and RCAN, and the results are shown in the following table:
[Table: reconstruction quality comparison with SRMDNF and RCAN; reproduced as an image in the original document, numerical values not recoverable here]
From the above results, it can be seen that the super-resolution reconstruction network provided by this embodiment achieves a significantly better reconstruction effect on common data sets than the existing advanced models SRMDNF and RCAN.
Example 2:
To demonstrate the effect of the jump connection 34 and of modulating the P5 feature map with the attention mechanism, this example performs ablation experiments on the basis of Example 1. Removing the jump connection 34 from Example 1, i.e., no longer feeding the vector obtained by the global maximum pooling operation in the first channel attention module 31 into the second channel attention module 32, gives the modified attention module shown in Fig. 6; the rest of the network is unchanged, yielding comparative network 1. Cancelling the attention-based modulation of the P5 feature map in Example 1 gives the modified SL feature mapping module 3 shown in Fig. 7; the rest of the network is the same as in Example 1, yielding comparative network 2. The results of experiments on the same training and test data sets are shown in the following table:
[Table: ablation results comparing the full network with comparative networks 1 and 2; reproduced as an image in the original document, numerical values not recoverable here]
The experimental data show that the jump connection 34 and the attention-based modulation of the P5 feature map effectively improve the super-resolution reconstruction effect of the model, while the parameter count and computational cost of the network remain basically unchanged.
The above-mentioned embodiments only express the specific embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention.

Claims (6)

1. A water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection, characterized by comprising the following steps:
s100, acquiring a training data set, and performing down-sampling on the training data set to obtain a low-resolution image corresponding to an image in the training data set;
s200, constructing an intelligent environment-friendly image super-resolution reconstruction network, and training the image super-resolution reconstruction network by using the training data set obtained in the step S100 and the corresponding low-resolution image thereof; the image super-resolution reconstruction network comprises a shallow feature mapping module, SL feature mapping modules and an image output module, wherein the shallow feature mapping module is arranged at the front end of the network, the SL feature mapping modules are arranged in the middle of the network, and the image output module is arranged at the tail of the network;
s300, obtaining a remote sensing image to be processed, inputting the remote sensing image to be processed into the image super-resolution reconstruction network trained in the step S200, performing primary feature extraction on the remote sensing image to be processed through the shallow feature mapping module, and outputting to obtain a first feature map;
s400, passing the first feature map sequentially through a plurality of serially connected SL feature mapping modules, and outputting a second feature map;
the SL feature mapping module is used for extracting deep feature information from the first feature map, and its mathematical model is as follows:
P0 = f₁¹(Tₙ)
P1 = ψ₁(f₁³(P0))
P2 = ψ₂(f₁⁵(P0))
AV1 = f_A1(P1 + P2)
AV2 = f_A2(P1 + P2)
P3 = Mul(P1, AV1)
P4 = Mul(P2, AV2)
P5 = ψ₄(f₃³(ψ₃(f₂³(Cat(P3, P4)))))
P6 = Mul(P5, (AV1 + AV2))
Tₙ₊₁ = Cat(P6, Tₙ)
wherein Tₙ is the feature map input to the SL feature mapping module and Tₙ₊₁ is the feature map it outputs; f₁¹() denotes a 1×1 convolution and f₁⁵() a 5×5 convolution; f₁³(), f₂³() and f₃³() all denote 3×3 convolutions; ψ₁(), ψ₂(), ψ₃() and ψ₄() all denote the ReLU activation function; f_A1() and f_A2() denote the first channel attention module and the second channel attention module respectively; AV1 denotes the first channel modulation map generated by the first channel attention module and AV2 the second channel modulation map generated by the second channel attention module; Mul() denotes multiplying a generated channel modulation map with a feature map; Cat() denotes concatenating the feature maps;
s500, inputting the second characteristic diagram into the image output module, wherein the image output module outputs an enlarged image after sampling and reconstructing the second characteristic diagram, and the resolution of the enlarged image is greater than that of the remote sensing image to be processed.
2. The water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection according to claim 1, characterized in that: the training data set is DIV2K.
3. The water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection according to claim 1, characterized in that: the mathematical model of the first channel attention module is as follows:
A1M = δ₁(φ₁(MePl(P1 + P2)))
A1A = δ₂(φ₂(MaPl1(P1 + P2)))
AV1 = A1M + A1A
wherein P1 + P2 is the feature map input to the first channel attention module; MePl() denotes a global average pooling operation over each channel of the feature map; MaPl1() denotes a global maximum pooling operation over each channel; δ₁ and δ₂ both denote the sigmoid activation function; φ₁ and φ₂ both denote the nonlinear mapping mechanism; and AV1 denotes the first channel modulation map output by the first channel attention module.
4. The water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection according to claim 3, characterized in that: the mathematical model of the second channel attention module is as follows:
A2V = δ₃(φ₃(VaPl(P1 + P2)))
A2A = δ₄(φ₄(MaPl2(P1 + P2)))
AV2 = A2A - A2V
wherein P1 + P2 is the feature map input to the second channel attention module; VaPl() denotes a global variance pooling operation over each channel of the feature map; MaPl2() denotes a global maximum pooling operation over each channel; δ₃ and δ₄ both denote the sigmoid activation function; φ₃ and φ₄ both denote the nonlinear mapping mechanism; and AV2 denotes the second channel modulation map output by the second channel attention module.
5. The water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection according to claim 4, characterized in that: a jump connection is arranged between the first channel attention module and the second channel attention module, and the vector obtained after the global maximum pooling operation in the first channel attention module is input into the second channel attention module through the jump connection and added to the vector obtained after the global maximum pooling operation in the second channel attention module.
6. The water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection according to claim 4, characterized in that: the nonlinear mapping mechanism comprises a first fully-connected layer, a ReLU activation layer, and a second fully-connected layer connected in sequence.
CN202210399165.5A 2022-04-12 2022-04-12 Water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection Active CN114742706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210399165.5A CN114742706B (en) 2022-04-12 2022-04-12 Water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection


Publications (2)

Publication Number Publication Date
CN114742706A true CN114742706A (en) 2022-07-12
CN114742706B CN114742706B (en) 2023-11-28

Family

ID=82282270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210399165.5A Active CN114742706B (en) 2022-04-12 2022-04-12 Water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection

Country Status (1)

Country Link
CN (1) CN114742706B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115018711A (en) * 2022-07-15 2022-09-06 成都运荔枝科技有限公司 Image super-resolution reconstruction method for warehouse scheduling

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712488A (en) * 2020-12-25 2021-04-27 北京航空航天大学 Remote sensing image super-resolution reconstruction method based on self-attention fusion
CN112734646A (en) * 2021-01-19 2021-04-30 青岛大学 Image super-resolution reconstruction method based on characteristic channel division
US20210241470A1 (en) * 2019-04-30 2021-08-05 Beijing Sensetime Technology Development Co., Ltd. Image processing method and apparatus, electronic device, and storage medium
CN113781308A (en) * 2021-05-19 2021-12-10 马明才 Image super-resolution reconstruction method and device, storage medium and electronic equipment


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WEN MA 等: "Achieving Super-Resolution Remote Sensing Images via the Wavelet Transform Combined With the Recursive Res-Net", 《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING》, vol. 57, no. 6, pages 3512 - 3527, XP011725909, DOI: 10.1109/TGRS.2018.2885506 *
张艳 et al.: "Remote sensing image super-resolution reconstruction algorithm with multi-path feature fusion", Remote Sensing Information (《遥感信息》), vol. 36, no. 2, pages 46-53
曾茜: "Research on image super-resolution reconstruction methods based on deep learning", China Master's Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库信息科技辑》), no. 2, pages 138-1375


Also Published As

Publication number Publication date
CN114742706B (en) 2023-11-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231101

Address after: 024000 Room 301, 3rd Floor, Building 8, Information Valley Technology Innovation Base, Zhongguancun, Chifeng, Xing'an Street, Songshan District, Chifeng City, Inner Mongolia Autonomous Region

Applicant after: Inner Mongolia Zhiyuan Innovation Technology Co.,Ltd.

Address before: 400030 85-9-1, Xiaoxin street, Shapingba street, Shapingba District, Chongqing

Applicant before: Chongqing niuzhizhi Technology Co.,Ltd.

GR01 Patent grant