CN113034361A - Remote sensing image super-resolution reconstruction method based on improved ESRGAN - Google Patents

Remote sensing image super-resolution reconstruction method based on improved ESRGAN

Info

Publication number
CN113034361A
Authority
CN
China
Prior art keywords
network
image
convolution
remote sensing
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110236027.0A
Other languages
Chinese (zh)
Other versions
CN113034361B (en)
Inventor
张泽远
郭明强
陈学业
郑晓云
葛亮
曹威
吴亮
谢忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Center Of Digital City Engineering
Original Assignee
Shenzhen Research Center Of Digital City Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Center Of Digital City Engineering
Priority to CN202110236027.0A priority Critical patent/CN113034361B/en
Publication of CN113034361A publication Critical patent/CN113034361A/en
Application granted granted Critical
Publication of CN113034361B publication Critical patent/CN113034361B/en
Expired - Fee Related

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformation in the plane of the image
    • G06T 3/40 - Scaling the whole image or part thereof
    • G06T 3/4053 - Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T 3/4007 - Interpolation-based scaling, e.g. bilinear interpolation
    • G06T 3/4046 - Scaling the whole image or part thereof using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a remote sensing image super-resolution reconstruction method based on an improved ESRGAN, which comprises the following steps: constructing an improved remote sensing image super-resolution reconstruction network model comprising a generation network and a discrimination network; the generation network consists of 64 convolution kernels of size 3x3, a residual network formed by 23 RRDB modules, and a LeakyReLU activation function; the discrimination network comprises 6 layers and is built as a fully convolutional network with even-sized convolution kernels to which BN layers and LeakyReLU activation layers are added; the input to the first layer of the discrimination network is the channel-wise concatenation of the image obtained by bicubic-interpolation upscaling of the original low-resolution remote sensing image real_A with the fake_B image generated by the generation network; the generation network and the discrimination network are trained alternately and their parameters updated, finally yielding the improved remote sensing image super-resolution reconstruction network model. The beneficial effect of the invention is that it can generate high-definition images whose sharpness and texture characteristics are closer to those of a real high-resolution remote sensing image.

Description

Remote sensing image super-resolution reconstruction method based on improved ESRGAN
Technical Field
The invention relates to the field of remote sensing image processing, in particular to a remote sensing image super-resolution reconstruction method based on improved ESRGAN.
Background
Satellite remote sensing images can rapidly provide information about the earth's surface, but medium- and low-resolution satellite images have certain limitations for tasks such as extracting high-precision ground features, updating maps and identifying targets. High-resolution satellite remote sensing images make deeper applications of remote sensing possible, providing favorable conditions for updating GIS data and for GIS applications, and are likewise significant for map updating, image matching, target detection and the like.
In the remote sensing field, high-resolution remote sensing images are difficult to obtain because of limitations of imaging technology and acquisition equipment; unmanned aerial vehicles are often required for aerial photography, which consumes both manpower and material resources. Techniques that achieve image super-resolution reconstruction from the algorithmic side have therefore become a hot research topic in multiple fields such as image processing and computer vision.
In recent years, more and more researchers have carried out super-resolution reconstruction with deep learning methods and made good progress. Among these, the most notable is the ESRGAN super-resolution reconstruction network, which replaces the basic residual network with multi-level RRDB modules and removes the BN modules from the generation network, greatly advancing image super-resolution reconstruction technology. However, in the remote sensing field, because images captured by satellites undergo processing such as compression and fusion, the resulting low-resolution remote sensing images suffer severe loss of texture detail, and the original ESRGAN network is prone to artifacts, texture-detail distortion and similar problems when reconstructing low-resolution remote sensing images captured by satellites.
Disclosure of Invention
In view of the above, the technical problem to be solved by the present invention is how to generate high-resolution remote sensing images with more realistic detail texture. To this end, the invention provides an improved ESRGAN remote sensing image super-resolution reconstruction network. It retains the strong generation network architecture of the original ESRGAN; for the discrimination network, the VGG network adopted by the original ESRGAN is abandoned in favor of a fully convolutional network with even-sized convolution kernels, built with BN layers and LeakyReLU activation layers, which finally outputs an 8x8 prediction matrix that is averaged to complete the discrimination.
The invention provides a remote sensing image super-resolution reconstruction method based on improved ESRGAN, which comprises the following steps:
S101: constructing an improved remote sensing image super-resolution reconstruction network model; the improved remote sensing image super-resolution reconstruction network model is based on the ESRGAN architecture and comprises a generation network net_G and a discrimination network net_D; the generation network generates a false image fake_B intended to deceive the discrimination network;
the generation network consists of 64 convolution kernels of size 3x3, a residual network formed by 23 RRDB modules, and a LeakyReLU activation function; the discrimination network comprises 6 layers and is built as a fully convolutional network with even-sized convolution kernels to which BN layers and LeakyReLU activation layers are added; it finally outputs an 8x8 prediction matrix, which is averaged to complete the discrimination;
the input to the first layer of the discrimination network is the channel-wise concatenation of the image obtained by bicubic-interpolation upscaling of the original low-resolution remote sensing image real_A with the fake_B image generated by the generation network;
S102: alternately training the generation network and the discrimination network: the generation network generates a false image fake_B to deceive the discrimination network, the parameters of the generation network and the discrimination network are updated, and the improved remote sensing image super-resolution reconstruction network model is finally obtained.
Further, the specific structure of the discrimination network is as follows:
A first layer: a convolution layer with 64 convolution kernels of size 4x4 and a convolution stride of 2;
A second layer: a convolution layer with 128 convolution kernels of size 4x4 and a convolution stride of 2, followed by a BN layer and a LeakyReLU layer;
A third layer: a convolution layer with 256 convolution kernels of size 4x4 and a convolution stride of 2, followed by a BN layer and a LeakyReLU layer;
A fourth layer: a convolution layer with 512 convolution kernels of size 4x4 and a convolution stride of 2, followed by a BN layer and a LeakyReLU layer;
A fifth layer: a convolution layer with 512 convolution kernels of size 4x4 and a convolution stride of 1, followed by a BN layer and a LeakyReLU layer;
A sixth layer: a convolution layer with 1 convolution kernel of size 4x4 and a convolution stride of 1.
Further, the process by which the generation network generates the false image fake_B specifically includes:
S201: acquiring an original low-resolution remote sensing image real_A and an original high-resolution remote sensing image real_B;
S202: performing a convolution operation on the low-resolution remote sensing image real_A with 64 convolution kernels of size 3x3 to obtain an original feature image;
S203: inputting the original feature image into a residual network constructed from the 23 RRDB modules of the ESRGAN network for feature extraction and feature fusion, to obtain a processed feature image;
S204: performing a convolution operation again on the processed feature image with 64 convolution kernels of size 3x3 to obtain the convolved feature image;
S205: fusing the convolved feature image with the original feature image, performing nearest-neighbor interpolation, and activating with a LeakyReLU function to generate a super-resolution image, namely the fake image fake_B; the super-resolution image has the same size as the original high-resolution image real_B.
Further, in step S102, the loss calculation formula of the generation network is as follows:
loss_G=0.2*loss_G_pix+loss_G_feature+loss_G_gan (1)
In formula (1), loss_G_pix is the L1Loss between the pixel values of the fake_B image generated by the generation network and those of the original high-resolution real_B image; loss_G_feature is the L1Loss between the pixel values of the two feature images obtained by feeding the fake_B image and the real_B image, respectively, into a VGG16 network and taking the features before activation; loss_G_gan is the BCEWithLogitsLoss calculated from the output value of the discrimination network net_D;
when the loss of the generation network is calculated and its parameters are updated, the parameters of the discrimination network are kept unchanged; the fused loss_G is the objective function of the generation network part.
Further, in step S102, the loss calculation formula of the discrimination network is as follows:
loss_D=0.5*(loss_D_fake+loss_G_real) (2)
In formula (2), loss_D_fake is the BCEWithLogitsLoss of the output value of the discrimination network net_D when it is fed the channel-wise concatenation of the bicubic-interpolation-upscaled original low-resolution remote sensing image real_A and the fake_B image generated by the generation network; loss_G_real is the BCEWithLogitsLoss of the output value of the discrimination network net_D when it is fed the channel-wise concatenation of the bicubic-interpolation-upscaled image of real_A and the original high-resolution remote sensing image real_B.
The technical scheme provided by the invention has the following beneficial effects: the proposed network can extract the features of an input image more accurately and recover smoother, more realistic texture; at the same time, the input of the discriminator is changed: the low-resolution remote sensing image is upscaled and concatenated channel-wise with the image generated by the generation network before being fed into the discrimination network for discrimination.
Drawings
Fig. 1 is a schematic flow chart of a remote sensing image super-resolution reconstruction method based on improved ESRGAN according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
Referring to fig. 1, the present invention provides a remote sensing image super-resolution reconstruction method based on improved ESRGAN, which specifically includes the following steps:
S101: constructing an improved remote sensing image super-resolution reconstruction network model; the improved remote sensing image super-resolution reconstruction network model is based on the ESRGAN architecture and comprises a generation network net_G and a discrimination network net_D; the generation network generates a false image fake_B intended to deceive the discrimination network;
the generation network consists of 64 convolution kernels of size 3x3, a residual network formed by 23 RRDB modules, and a LeakyReLU activation function; the discrimination network comprises 6 layers, with the following specific structure:
A first layer: a convolution layer with 64 convolution kernels of size 4x4 and a convolution stride of 2;
the input to the first layer of the discrimination network is the channel-wise concatenation of the image obtained by bicubic-interpolation upscaling of the original low-resolution remote sensing image real_A with the fake_B image generated by the generation network;
Preferably, the first layer is implemented using the torch.nn.Sequential() function; the process is as follows:
torch.nn.Sequential(torch.nn.Conv2d(in_channels=6,out_channels=64,kernel_size=4,stride=2,padding=1),torch.nn.LeakyReLU())
where in_channels is the number of channels of the fused input image; out_channels is the number of channels of the output image, which also equals the number of convolution kernels; kernel_size is the size of the convolution kernel; stride is the step size of the convolution kernel's movement; padding is the amount of zero-padding added at the input image boundary; and torch.nn.LeakyReLU() is the LeakyReLU activation function.
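By way of illustration only, the six-channel fused input fed to this first layer could be formed as follows. This is a minimal sketch; the tensor names real_A, fake_B, real_A_up and d_input, the example tensor sizes, and the upscaling factor of 4 (taken from the four-fold reconstruction described in step S205) are assumptions for the example:

import torch
import torch.nn.functional as F

# Example tensors: real_A is a low-resolution input, fake_B a generator output four times its size
real_A = torch.randn(1, 3, 40, 40)
fake_B = torch.randn(1, 3, 160, 160)
# Bicubic-interpolation upscaling of real_A to the size of fake_B
real_A_up = F.interpolate(real_A, scale_factor=4, mode='bicubic', align_corners=False)
# Channel merging: concatenate along the channel dimension to obtain a 6-channel tensor,
# matching in_channels=6 of the first layer above
d_input = torch.cat([real_A_up, fake_B], dim=1)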
A second layer: a convolution layer with 128 convolution kernels of size 4x4 and a convolution stride of 2, followed by a BN layer and a LeakyReLU layer;
Preferably, the implementation of the second layer calls the functions as follows:
torch.nn.Sequential(torch.nn.Conv2d(in_channels=64,out_channels=128,kernel_size=4,stride=2,padding=1),torch.nn.BatchNorm2d(128),torch.nn.LeakyReLU(0.2,True))
where torch.nn.BatchNorm2d represents the batch normalization applied to the convolved feature maps.
A third layer: a convolution layer with 256 convolution kernels of size 4x4 and a convolution stride of 2, followed by a BN layer and a LeakyReLU layer;
Preferably, the implementation of the third layer calls the functions as follows:
torch.nn.Sequential(torch.nn.Conv2d(in_channels=128,out_channels=256,kernel_size=4,stride=2,padding=1),torch.nn.BatchNorm2d(256),torch.nn.LeakyReLU(0.2,True));
A fourth layer: a convolution layer with 512 convolution kernels of size 4x4 and a convolution stride of 2, followed by a BN layer and a LeakyReLU layer;
Preferably, the implementation of the fourth layer calls the functions as follows:
torch.nn.Sequential(torch.nn.Conv2d(in_channels=256,out_channels=512,kernel_size=4,stride=2,padding=1),torch.nn.BatchNorm2d(512),torch.nn.LeakyReLU(0.2,True));
A fifth layer: a convolution layer with 512 convolution kernels of size 4x4 and a convolution stride of 1, followed by a BN layer and a LeakyReLU layer;
Preferably, the implementation of the fifth layer calls the functions as follows:
torch.nn.Sequential(torch.nn.Conv2d(in_channels=512,out_channels=512,kernel_size=4,stride=1,padding=1),torch.nn.BatchNorm2d(512),torch.nn.LeakyReLU(0.2,True));
A sixth layer: a convolution layer with 1 convolution kernel of size 4x4 and a convolution stride of 1.
Preferably, the implementation of the sixth layer calls the function as follows:
torch.nn.Conv2d(in_channels=512,out_channels=1,kernel_size=4,stride=1,padding=1);
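For completeness, the six layers described above can be assembled into a single module and the 8x8 prediction matrix averaged roughly as follows. This is a minimal sketch under the layer parameters listed above; the class name ImprovedDiscriminator is illustrative and not part of the original description:

import torch
import torch.nn as nn

class ImprovedDiscriminator(nn.Module):
    # Six-layer fully convolutional discrimination network with even-sized (4x4) kernels
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, 64, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(),                                  # layer 1
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),  # layer 2
            nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True), # layer 3
            nn.Conv2d(256, 512, kernel_size=4, stride=2, padding=1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True), # layer 4
            nn.Conv2d(512, 512, kernel_size=4, stride=1, padding=1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True), # layer 5
            nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),                                                 # layer 6
        )

    def forward(self, x):
        pred = self.body(x)              # for a 160x160 input this is a 1-channel 8x8 prediction matrix
        return pred.mean(dim=(1, 2, 3))  # average the matrix to complete the discrimination

Whether the averaging is performed inside the network or at the loss stage is a design choice; the sketch above places it in the forward pass.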
S102: alternately training the generation network and the discrimination network: the generation network generates a false image fake_B to deceive the discrimination network, the parameters of the generation network and the discrimination network are updated, and the improved remote sensing image super-resolution reconstruction network model is finally obtained.
The process by which the generation network generates the false image fake_B specifically includes:
S201: acquiring an original low-resolution remote sensing image real_A and an original high-resolution remote sensing image real_B;
S202: performing a convolution operation on the low-resolution remote sensing image real_A with 64 convolution kernels of size 3x3 to obtain an original feature image;
This part is implemented with the torch.nn.Conv2d() function; the call is specifically as follows:
torch.nn.Conv2d(in_channels=3,out_channels=64,kernel_size=3,stride=1,padding=1);
S203: inputting the original feature image into a residual network constructed from the 23 RRDB modules of the ESRGAN network for feature extraction and feature fusion, to obtain a processed feature image;
This part is implemented by constructing the 23 RRDB modules using the make_layer method of the models module.
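A minimal sketch of such a make_layer helper is given below, under the assumption that an RRDB(nf, gc) module class as in the ESRGAN reference implementation is available; the helper itself simply stacks repeated blocks:

import functools
import torch.nn as nn

def make_layer(block, n_layers):
    # Stack n_layers instances produced by the block factory into one sequential trunk
    return nn.Sequential(*[block() for _ in range(n_layers)])

# Illustrative usage (RRDB is assumed to come from the ESRGAN reference code and is not reproduced here):
# rrdb_block = functools.partial(RRDB, nf=64, gc=32)
# trunk = make_layer(rrdb_block, 23)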
S204: performing a convolution operation again on the processed feature image with 64 convolution kernels of size 3x3 to obtain the convolved feature image;
This step is likewise implemented using the torch.nn.Conv2d() function;
S205: fusing the convolved feature image with the original feature image, performing nearest-neighbor interpolation, and activating with a LeakyReLU function to generate a super-resolution image, namely the fake image fake_B; the super-resolution image has the same size as the original high-resolution image real_B, i.e. four times the size of the original low-resolution image.
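A rough sketch of this reconstruction tail is shown below, assuming two successive nearest-neighbor upscalings of factor 2 (giving the overall factor of four), each followed by a convolution and LeakyReLU activation in the manner of the ESRGAN reference generator; the layer names and channel counts are assumptions for the example:

import torch
import torch.nn as nn
import torch.nn.functional as F

lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
upconv1 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)
upconv2 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)
hr_conv = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)
last_conv = nn.Conv2d(64, 3, kernel_size=3, stride=1, padding=1)

def reconstruct(feat):
    # feat: fused feature map (trunk output added to the original feature image), shape (N, 64, H, W)
    feat = lrelu(upconv1(F.interpolate(feat, scale_factor=2, mode='nearest')))  # x2
    feat = lrelu(upconv2(F.interpolate(feat, scale_factor=2, mode='nearest')))  # x4 overall
    return last_conv(lrelu(hr_conv(feat)))                                      # fake_B, same size as real_B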
In step S102, the loss of the generation network is calculated as shown in formula (1):
loss_G=0.2*loss_G_pix+loss_G_feature+loss_G_gan (1)
In formula (1), loss_G_pix is the L1Loss between the pixel values of the fake_B image generated by the generation network and those of the original high-resolution real_B image; loss_G_feature is the L1Loss between the pixel values of the two feature images obtained by feeding the fake_B image and the real_B image, respectively, into a VGG16 network and taking the features before activation; loss_G_gan is the BCEWithLogitsLoss calculated from the output value of the discrimination network net_D;
when the loss of the generation network is calculated and its parameters are updated, the parameters of the discrimination network are kept unchanged; the fused loss_G is the objective function of the generation network part.
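A hedged sketch of how formula (1) might be computed is given below; the helper vgg_features, assumed to extract pre-activation VGG16 feature maps, and the other names are illustrative rather than taken from the original description:

import torch
import torch.nn as nn

l1 = nn.L1Loss()
bce = nn.BCEWithLogitsLoss()

def generator_loss(fake_B, real_B, real_A_up, net_D, vgg_features):
    # Pixel loss: L1Loss between the pixel values of fake_B and real_B
    loss_G_pix = l1(fake_B, real_B)
    # Feature loss: L1Loss between the VGG16 pre-activation feature maps of fake_B and real_B
    loss_G_feature = l1(vgg_features(fake_B), vgg_features(real_B))
    # Adversarial loss: BCEWithLogitsLoss on the discriminator output for the fused input,
    # scored against "real" labels; the discrimination network's parameters stay frozen here
    pred_fake = net_D(torch.cat([real_A_up, fake_B], dim=1))
    loss_G_gan = bce(pred_fake, torch.ones_like(pred_fake))
    # Formula (1)
    return 0.2 * loss_G_pix + loss_G_feature + loss_G_gan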
In step S102, the loss of the discrimination network is calculated as shown in formula (2):
loss_D=0.5*(loss_D_fake+loss_G_real) (2)
In formula (2), loss_D_fake is the BCEWithLogitsLoss of the output value of the discrimination network net_D when it is fed the channel-wise concatenation of the bicubic-interpolation-upscaled original low-resolution remote sensing image real_A and the fake_B image generated by the generation network; loss_G_real is the BCEWithLogitsLoss of the output value of the discrimination network net_D when it is fed the channel-wise concatenation of the bicubic-interpolation-upscaled image of real_A and the original high-resolution remote sensing image real_B.
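Correspondingly, formula (2) and one alternating update step could be sketched as follows; opt_G and opt_D are illustrative optimizer handles, and fake_B.detach() is used so that the discriminator update does not back-propagate into the generation network:

import torch

bce = torch.nn.BCEWithLogitsLoss()

def discriminator_loss(fake_B, real_B, real_A_up, net_D):
    pred_fake = net_D(torch.cat([real_A_up, fake_B.detach()], dim=1))
    pred_real = net_D(torch.cat([real_A_up, real_B], dim=1))
    # loss_D_fake: output for (upscaled real_A, fake_B) scored against "fake" labels
    loss_D_fake = bce(pred_fake, torch.zeros_like(pred_fake))
    # loss_G_real: output for (upscaled real_A, real_B) scored against "real" labels
    loss_G_real = bce(pred_real, torch.ones_like(pred_real))
    # Formula (2)
    return 0.5 * (loss_D_fake + loss_G_real)

# One alternating training step (illustrative):
# opt_G.zero_grad(); generator_loss(...).backward(); opt_G.step()   # net_D parameters held fixed
# opt_D.zero_grad(); discriminator_loss(...).backward(); opt_D.step()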
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
the VGG network adopted by the original ESRGAN is abandoned, the full convolution network of convolution kernel with even number size is adopted, the BN layer and the LeakyReLU activation layer are added for construction, finally, a prediction matrix of 8x8 is output, and the matrix is averaged to complete the judgment. The network can extract more accurate characteristics of an input image and recover more smooth and vivid textural characteristics, simultaneously changes the input of a discriminator, amplifies a low-definition remote sensing image and adds an image channel generated by a generation network to input the amplified low-definition remote sensing image and the image channel into the discrimination network for discrimination.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (5)

1. A remote sensing image super-resolution reconstruction method based on an improved ESRGAN, characterized in that the method specifically comprises the following steps:
S101: constructing an improved remote sensing image super-resolution reconstruction network model; the improved remote sensing image super-resolution reconstruction network model is based on the ESRGAN architecture and comprises a generation network net_G and a discrimination network net_D; the generation network generates a false image fake_B intended to deceive the discrimination network;
the generation network consists of 64 convolution kernels of size 3x3, a residual network formed by 23 RRDB modules, and a LeakyReLU activation function; the discrimination network comprises 6 layers and is built as a fully convolutional network with even-sized convolution kernels to which BN layers and LeakyReLU activation layers are added; it finally outputs an 8x8 prediction matrix, which is averaged to complete the discrimination; the input to the first layer of the discrimination network is the channel-wise concatenation of the image obtained by bicubic-interpolation upscaling of the original low-resolution remote sensing image real_A with the fake_B image generated by the generation network;
S102: alternately training the generation network and the discrimination network: the generation network generates a false image fake_B to deceive the discrimination network, the parameters of the generation network and the discrimination network are updated, and the improved remote sensing image super-resolution reconstruction network model is finally obtained.
2. The remote sensing image super-resolution reconstruction method based on the improved ESRGAN as claimed in claim 1, wherein: the specific structure of the discrimination network is as follows:
A first layer: a convolution layer with 64 convolution kernels of size 4x4 and a convolution stride of 2;
A second layer: a convolution layer with 128 convolution kernels of size 4x4 and a convolution stride of 2, followed by a BN layer and a LeakyReLU layer;
A third layer: a convolution layer with 256 convolution kernels of size 4x4 and a convolution stride of 2, followed by a BN layer and a LeakyReLU layer;
A fourth layer: a convolution layer with 512 convolution kernels of size 4x4 and a convolution stride of 2, followed by a BN layer and a LeakyReLU layer;
A fifth layer: a convolution layer with 512 convolution kernels of size 4x4 and a convolution stride of 1, followed by a BN layer and a LeakyReLU layer;
A sixth layer: a convolution layer with 1 convolution kernel of size 4x4 and a convolution stride of 1.
3. The remote sensing image super-resolution reconstruction method based on the improved ESRGAN as claimed in claim 1, wherein the process by which the generation network generates the false image fake_B specifically includes:
S201: acquiring an original low-resolution remote sensing image real_A and an original high-resolution remote sensing image real_B;
S202: performing a convolution operation on the low-resolution remote sensing image real_A with 64 convolution kernels of size 3x3 to obtain an original feature image;
S203: inputting the original feature image into a residual network constructed from the 23 RRDB modules of the ESRGAN network for feature extraction and feature fusion, to obtain a processed feature image;
S204: performing a convolution operation again on the processed feature image with 64 convolution kernels of size 3x3 to obtain the convolved feature image;
S205: fusing the convolved feature image with the original feature image, performing nearest-neighbor interpolation, and activating with a LeakyReLU function to generate a super-resolution image, namely the fake image fake_B; the super-resolution image has the same size as the original high-resolution image real_B.
4. The remote sensing image super-resolution reconstruction method based on the improved ESRGAN as claimed in claim 1, wherein in step S102 the loss of the generation network is calculated as shown in formula (1):
loss_G=0.2*loss_G_pix+loss_G_feature+loss_G_gan (1)
In formula (1), loss_G_pix is the L1Loss between the pixel values of the fake_B image generated by the generation network and those of the original high-resolution real_B image; loss_G_feature is the L1Loss between the pixel values of the two feature images obtained by feeding the fake_B image and the real_B image, respectively, into a VGG16 network and taking the features before activation; loss_G_gan is the BCEWithLogitsLoss calculated from the output value of the discrimination network net_D;
when the loss of the generation network is calculated and its parameters are updated, the parameters of the discrimination network are kept unchanged; the fused loss_G is the objective function of the generation network part.
5. The remote sensing image super-resolution reconstruction method based on the improved ESRGAN as claimed in claim 1, wherein in step S102 the loss of the discrimination network is calculated as shown in formula (2):
loss_D=0.5*(loss_D_fake+loss_G_real) (2)
In formula (2), loss_D_fake is the BCEWithLogitsLoss of the output value of the discrimination network net_D when it is fed the channel-wise concatenation of the bicubic-interpolation-upscaled original low-resolution remote sensing image real_A and the fake_B image generated by the generation network; loss_G_real is the BCEWithLogitsLoss of the output value of the discrimination network net_D when it is fed the channel-wise concatenation of the bicubic-interpolation-upscaled image of real_A and the original high-resolution remote sensing image real_B.
CN202110236027.0A 2021-03-03 2021-03-03 Remote sensing image super-resolution reconstruction method based on improved ESRGAN Expired - Fee Related CN113034361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110236027.0A CN113034361B (en) 2021-03-03 2021-03-03 Remote sensing image super-resolution reconstruction method based on improved ESRGAN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110236027.0A CN113034361B (en) 2021-03-03 2021-03-03 Remote sensing image super-resolution reconstruction method based on improved ESRGAN

Publications (2)

Publication Number Publication Date
CN113034361A true CN113034361A (en) 2021-06-25
CN113034361B CN113034361B (en) 2022-10-14

Family

ID=76466139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110236027.0A Expired - Fee Related CN113034361B (en) 2021-03-03 2021-03-03 Remote sensing image super-resolution reconstruction method based on improved ESRGAN

Country Status (1)

Country Link
CN (1) CN113034361B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200387698A1 (en) * 2018-07-10 2020-12-10 Tencent Technology (Shenzhen) Company Limited Hand key point recognition model training method, hand key point recognition method and device
CN112396554A (en) * 2019-08-14 2021-02-23 天津大学青岛海洋技术研究院 Image super-resolution algorithm based on generation countermeasure network
CN111127316A (en) * 2019-10-29 2020-05-08 山东大学 Single face image super-resolution method and system based on SNGAN network
CN111080531A (en) * 2020-01-10 2020-04-28 北京农业信息技术研究中心 Super-resolution reconstruction method, system and device for underwater fish image
CN111626237A (en) * 2020-05-29 2020-09-04 中国民航大学 Crowd counting method and system based on enhanced multi-scale perception network
CN112288632A (en) * 2020-10-29 2021-01-29 福州大学 Single image super-resolution method and system based on simplified ESRGAN
CN112435164A (en) * 2020-11-23 2021-03-02 浙江工业大学 Method for simultaneously super-resolution and denoising of low-dose CT lung image based on multi-scale generation countermeasure network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ISOLA P ET AL: "Image-to-image translation with conditional adversarial networks", 《PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
N. C. RAKOTONIRINA ET AL: "ESRGAN+ : Further Improving Enhanced Super-Resolution Generative Adversarial Network", 《ICASSP 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS》 *
XU B ET AL: "Empirical evaluation of rectified activations in convolutional network", 《ARXIV:1505.00853》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516591A (en) * 2021-07-20 2021-10-19 海南长光卫星信息技术有限公司 Remote sensing image super-resolution reconstruction method, device, equipment and storage medium
WO2023000158A1 (en) * 2021-07-20 2023-01-26 海南长光卫星信息技术有限公司 Super-resolution reconstruction method, apparatus and device for remote sensing image, and storage medium
CN115982418A (en) * 2023-03-17 2023-04-18 亿铸科技(杭州)有限责任公司 Method for improving super-division operation performance of AI (Artificial Intelligence) computing chip
CN115982418B (en) * 2023-03-17 2023-05-30 亿铸科技(杭州)有限责任公司 Method for improving super-division operation performance of AI (advanced technology attachment) computing chip

Also Published As

Publication number Publication date
CN113034361B (en) 2022-10-14

Similar Documents

Publication Publication Date Title
CN113034361B (en) Remote sensing image super-resolution reconstruction method based on improved ESRGAN
CN110033410B (en) Image reconstruction model training method, image super-resolution reconstruction method and device
Engin et al. Cycle-dehaze: Enhanced cyclegan for single image dehazing
CN111598778B (en) Super-resolution reconstruction method for insulator image
CN108921783B (en) Satellite image super-resolution reconstruction method based on mixed loss function constraint
Xie et al. Deep coordinate attention network for single image super‐resolution
CN110363068B (en) High-resolution pedestrian image generation method based on multiscale circulation generation type countermeasure network
CN112699844B (en) Image super-resolution method based on multi-scale residual hierarchy close-coupled network
CN113096017A (en) Image super-resolution reconstruction method based on depth coordinate attention network model
CN111640060A (en) Single image super-resolution reconstruction method based on deep learning and multi-scale residual dense module
Guan et al. Srdgan: learning the noise prior for super resolution with dual generative adversarial networks
CN110634103A (en) Image demosaicing method based on generation of countermeasure network
CN112949636A (en) License plate super-resolution identification method and system and computer readable medium
Fang et al. High-resolution optical flow and frame-recurrent network for video super-resolution and deblurring
Yu et al. Split-attention multiframe alignment network for image restoration
CN110751271B (en) Image traceability feature characterization method based on deep neural network
Zhang et al. An effective decomposition-enhancement method to restore light field images captured in the dark
CN113379606B (en) Face super-resolution method based on pre-training generation model
Zheng et al. Double-branch dehazing network based on self-calibrated attentional convolution
CN111161156A (en) Deep learning-based underwater pier disease image resolution enhancement method
Wang et al. Underwater image super-resolution using multi-stage information distillation networks
Chen et al. Guided dual networks for single image super-resolution
Nie et al. Context and detail interaction network for stereo rain streak and raindrop removal
CN116823611A (en) Multi-focus image-based referenced super-resolution method
CN113674154B (en) Single image super-resolution reconstruction method and system based on generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20221014