CN112634135A - Remote sensing image super-resolution reconstruction method based on super-resolution style migration network - Google Patents

Remote sensing image super-resolution reconstruction method based on super-resolution style migration network Download PDF

Info

Publication number
CN112634135A
Authority
CN
China
Prior art keywords
resolution
super
image
remote sensing
sensing image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011537739.8A
Other languages
Chinese (zh)
Other versions
CN112634135B (en)
Inventor
张泽远
郭明强
黄颖
刘恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Space Time Technology Development Co ltd
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN202011537739.8A priority Critical patent/CN112634135B/en
Publication of CN112634135A publication Critical patent/CN112634135A/en
Application granted granted Critical
Publication of CN112634135B publication Critical patent/CN112634135B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention discloses a remote sensing image super-resolution reconstruction method based on a super-resolution style migration network, which comprises the following steps: constructing a super-resolution style migration network model, which comprises a generation part and a judgment part, the generation part being formed by connecting a super-resolution reconstruction network SSR and a U-net network; alternately training the generation part and the judgment part using a training set to obtain a trained super-resolution style migration network model; and inputting a low-resolution remote sensing image into the trained model to obtain a super-resolution remote sensing image. The method overcomes the defects of traditional super-resolution network models, such as poor generalization capability and loss of texture detail in the reconstructed image, and can generate reconstructed images whose definition and texture characteristics are closer to those of real high-resolution remote sensing images.

Description

Remote sensing image super-resolution reconstruction method based on super-resolution style migration network
Technical Field
The invention relates to the field of remote sensing image data processing, in particular to a remote sensing image super-resolution reconstruction method based on a super-resolution style migration network.
Background
Satellite remote sensing images can rapidly provide information about the earth's surface, but medium- and low-resolution images impose limitations on high-precision GIS information extraction, map updating, target identification and the like. The development of high-resolution satellite remote sensing imagery makes deeper applications of remote sensing possible, providing favorable conditions for GIS data updating and GIS applications; it is likewise significant for map updating, image matching, target detection and the like.
In the remote sensing field, high-resolution images are difficult to obtain because of the limitations of imaging technology and acquisition equipment; aerial photography by unmanned aerial vehicles is often required, consuming both manpower and material resources. Achieving image super-resolution reconstruction algorithmically has therefore become a hot research topic in image processing, computer vision and related fields.
In recent years, more and more researchers have applied deep learning methods to super-resolution reconstruction, with good progress. In the remote sensing field, however, images captured by satellites undergo compression, fusion and similar processing, so the resulting low-resolution remote sensing images suffer severe loss of texture detail; existing super-resolution reconstruction models cannot restore this detail, nor can they generate super-resolution reconstructions whose definition approaches that of real high-definition remote sensing images.
Disclosure of Invention
Aiming at the above technical problems, the invention provides a super-resolution style migration network: first, a series of small-size convolution kernels performs super-resolution reconstruction on the original low-resolution remote sensing image; then the strong image-to-image conversion capability of the style migration network converts the preliminarily reconstructed image toward a real high-resolution remote sensing image, learning the texture characteristics of the real high-resolution image.
The invention provides a remote sensing image super-resolution reconstruction method based on a super-resolution style migration network, which specifically comprises the following steps:
s101: constructing a super-resolution style migration network model; the super-resolution style migration network model comprises a generation part and a judgment part; the generation part is formed by connecting a super-resolution reconstruction network SSR and a U-net network;
s102: alternately training the generation part and the judgment part by using a training set to obtain a trained super-resolution style migration network model;
s103: and inputting the remote sensing image with low resolution into the trained super-resolution style migration network model to obtain the super-resolution remote sensing image.
Further, the construction process of the super-resolution reconstruction network SSR is as follows:
s201: performing a convolution operation on the low-resolution remote sensing image real_A using 64 convolution kernels of size 5 × 5, and activating with PReLU, to obtain the original feature image corresponding to real_A;
s202: performing channel shrinkage on the original feature image using 16 convolution kernels of size 1 × 1, and activating with PReLU, to obtain a shrunk feature image;
s203: performing nonlinear mapping on the shrunk feature image through 4 convolution layers (each layer comprising 16 convolution kernels of size 3 × 3 and a PReLU activation function) to obtain a nonlinearly mapped feature image;
s204: performing channel expansion on the nonlinearly mapped feature image using 64 convolution kernels of size 1 × 1, activating with PReLU, and finally performing deconvolution to obtain a preliminary super-resolution remote sensing image SSR_A magnified 4 times relative to the low-resolution image real_A; SSR_A has the same size as the original high-resolution image real_B.
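Steps S201-S204 can be sketched as a single PyTorch module. This is a reconstruction from the layer sizes stated in the patent; the deconvolution parameters (kernel_size=9, stride=4, padding=3, output_padding=1) are taken from the function modules cited in the embodiment below, and the module name SSRNet is our own:

```python
import torch
import torch.nn as nn

class SSRNet(nn.Module):
    """Sketch of the SSR sub-network described in steps S201-S204."""
    def __init__(self):
        super().__init__()
        # S201: feature extraction with 64 kernels of size 5x5
        self.extract = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, stride=1, padding=2), nn.PReLU())
        # S202: channel shrinkage with 16 kernels of size 1x1
        self.shrink = nn.Sequential(
            nn.Conv2d(64, 16, kernel_size=1), nn.PReLU())
        # S203: 4 nonlinear mapping layers, each 16 kernels of size 3x3 + PReLU
        self.map = nn.Sequential(*[
            m for _ in range(4)
            for m in (nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.PReLU())])
        # S204: channel expansion back to 64 channels, then 4x deconvolution
        self.expand = nn.Sequential(
            nn.Conv2d(16, 64, kernel_size=1), nn.PReLU())
        self.deconv = nn.ConvTranspose2d(
            64, 3, kernel_size=9, stride=4, padding=3, output_padding=1)

    def forward(self, real_A):
        x = self.extract(real_A)
        x = self.shrink(x)
        x = self.map(x)
        x = self.expand(x)
        return self.deconv(x)  # SSR_A, 4x the spatial size of real_A
```

With these parameters a 16 × 16 input patch comes out as 64 × 64, matching the stated 4× magnification.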
Further, in step S203, each of the 4 convolution layers specifically comprises 16 convolution kernels of size 3 × 3 and a PReLU activation function.
Further, the generation part is configured to generate a false image to deceive the judgment part, specifically as follows:
fake_B = model(SSR_A)
wherein model represents a U-net network, and fake_B represents the output image of the preliminary super-resolution remote sensing image SSR_A after style migration through the U-net network, i.e. the false image generated to deceive the judgment part.
Further, in step S102, the generation part and the judgment part are alternately trained using a training set, wherein the loss function of the generation part is calculated as follows:
loss_G = loss_G_GAN + loss_G_L1 + loss_SSR
wherein loss_G_GAN represents the loss on the discriminator network's output when the preliminary super-resolution reconstructed image SSR_A and fake_B are channel-merged and input to the discriminator; loss_G_L1 is the pixel-value loss between fake_B and the original high-resolution image real_B, which is the training data in the training set; loss_SSR is the loss between the preliminary super-resolution reconstructed image SSR_A and the original high-resolution image real_B; and loss_G is the total loss of the generation part.
The discriminating part is specifically a Markov discriminator.
The beneficial effects provided by the invention are as follows: the method overcomes the defects of traditional super-resolution network models, such as poor generalization capability and loss of texture detail in the reconstructed image, and can generate reconstructed images whose definition and texture characteristics are closer to those of real high-resolution remote sensing images.
Drawings
FIG. 1 is a flow chart of a remote sensing image super-resolution reconstruction method based on a super-resolution style migration network;
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a remote sensing image super-resolution reconstruction method based on a super-resolution style migration network, and specifically includes the following steps:
s101: constructing a super-resolution style migration network model; the super-resolution style migration network model comprises a generation part and a judgment part; the generation part is formed by connecting a super-resolution reconstruction network SSR and a U-net network;
the construction process of the super-resolution reconstruction network SSR is as follows:
s201: performing a convolution operation on the low-resolution remote sensing image real_A using 64 convolution kernels of size 5 × 5, and activating with PReLU, to obtain the original feature image corresponding to real_A;
s202: performing channel shrinkage on the original feature image using 16 convolution kernels of size 1 × 1, and activating with PReLU, to obtain a shrunk feature image;
s203: performing nonlinear mapping on the shrunk feature image through 4 convolution layers (each layer comprising 16 convolution kernels of size 3 × 3 and a PReLU activation function) to obtain a nonlinearly mapped feature image;
s204: performing channel expansion on the nonlinearly mapped feature image using 64 convolution kernels of size 1 × 1, activating with PReLU, and finally performing deconvolution to obtain a preliminary super-resolution remote sensing image SSR_A magnified 4 times relative to the low-resolution image real_A; SSR_A has the same size as the original high-resolution image real_B.
S102: alternately training the generation part and the judgment part by using a training set to obtain a trained super-resolution style migration network model;
implementation flow of the generation part (or called generator network):
the realization process of the super-resolution reconstruction part of the generator network is: SSR_A = SSR(real_A)
The realization process of the style migration part of the generator network is: fake_B = model(SSR_A)
wherein real_A is the original low-resolution remote sensing image, SSR is the super-resolution reconstruction network, and SSR_A is the preliminary super-resolution reconstructed image output by that network for the original low-resolution image; model represents a U-net network, and fake_B is the output image of the preliminary reconstruction after style migration through the U-net network.
The losses in the generator network are calculated as:
loss_G=loss_G_GAN+loss_G_L1+loss_SSR
wherein loss_G_GAN is the BCEWithLogitsLoss on the discriminator network's output when the preliminary super-resolution reconstructed image SSR_A and the style-migrated image fake_B are channel-merged and input to the discriminator; the discriminator's network parameters are kept unchanged while the generator's loss is calculated and its parameters are updated. loss_G_L1 is the L1Loss between the pixel values of the style-migrated image fake_B and the original high-resolution image real_B. loss_SSR is the SmoothL1Loss between the preliminary super-resolution reconstructed image SSR_A and the original high-resolution image real_B. The fused loss_G is the objective function of the generator network, whose parameters are updated in alternating training with the discriminator network. The judgment part (or discriminator network) is specifically a Markov discriminator.
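A minimal sketch of this loss computation, assuming unit weights on the three terms (the patent gives the sum without weights) and a generic discriminator callable D:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # loss_G_GAN
l1 = nn.L1Loss()               # loss_G_L1
smooth_l1 = nn.SmoothL1Loss()  # loss_SSR

def generator_loss(D, SSR_A, fake_B, real_B):
    """Combined generator objective: loss_G = loss_G_GAN + loss_G_L1 + loss_SSR."""
    # Channel-merge SSR_A and fake_B before feeding the discriminator, whose
    # parameters stay frozen while the generator's loss is computed.
    pred = D(torch.cat([SSR_A, fake_B], dim=1))
    loss_G_GAN = bce(pred, torch.ones_like(pred))  # fool D: target label is 1
    loss_G_L1 = l1(fake_B, real_B)
    loss_SSR = smooth_l1(SSR_A, real_B)
    return loss_G_GAN + loss_G_L1 + loss_SSR
```

During alternating training only the generator's parameters are updated against this objective; the discriminator is trained separately on its own real/fake loss.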
S103: and inputting the remote sensing image with low resolution into the trained super-resolution style migration network model to obtain the super-resolution remote sensing image.
Based on the above, the embodiments provided by the present invention are as follows:
s101: constructing a super-resolution style migration network model; the super-resolution style migration network model comprises a generation part and a judgment part; the generation part is formed by connecting a super-resolution reconstruction network SSR and a U-net network;
the construction process of the super-resolution reconstruction network SSR is as follows:
s201: performing convolution operation on the low-resolution remote sensing image real _ A by using 64 convolution kernels with the size of 5 × 5, and activating by using PRelu to obtain an original characteristic image corresponding to the low-resolution remote sensing image real _ A;
specifically, the following function modules are called:
torch.nn.Sequential(torch.nn.Conv2d(in_channels=3,out_channels=64,kernel_size=5,stride=1,padding=2),torch.nn.PReLU())
wherein in_channels represents the number of channels of the input low-resolution remote sensing image; out_channels represents the number of channels of the output image, which is also the number of convolution kernels; kernel_size represents the size of the convolution kernel; stride represents the step size of the convolution kernel's movement; padding denotes the amount of zero-padding added to the input image boundary; and torch.nn.PReLU() is the parametric ReLU activation.
S202: performing channel shrinkage on the original characteristic image by using 16 convolution kernels with the size of 1 x 1, and activating by using PRelu to obtain a shrunk characteristic image;
specifically, the following function modules are called:
torch.nn.Sequential(torch.nn.Conv2d(in_channels=64,out_channels=16,kernel_size=1,stride=1,padding=0),torch.nn.PReLU())
s203: carrying out nonlinear mapping on the shrunk feature image through 4 convolution layers (each layer specifically comprises 16 convolution kernels with the size of 3 x 3 and a PRelu activation function) to obtain a nonlinear mapped feature image;
the nonlinear mapping calls the following function modules:
torch.nn.Sequential(torch.nn.Conv2d(in_channels=16,out_channels=16,kernel_size=3,stride=1,padding=1),torch.nn.PReLU())
s204: performing channel expansion on the nonlinear mapped characteristic image by using 64 convolution kernels with the size of 1 x 1, activating by using PRelu, and finally performing deconvolution processing to obtain a preliminary super-resolution remote sensing image SSR _ A which is amplified by 4 times compared with a low-resolution remote sensing image real _ A; the size of the initial super-resolution remote sensing image SSR _ A is the same as that of the original high-resolution image real _ B.
The following function modules are called:
torch.nn.Sequential(torch.nn.Conv2d(in_channels=16,out_channels=64,kernel_size=1,stride=1,padding=0),torch.nn.PReLU())
torch.nn.ConvTranspose2d(in_channels=64,out_channels=3,kernel_size=9,stride=4,padding=3,output_padding=1)
where output_padding denotes additional zero-padding applied to one side of the output, ensuring the deconvolution output is exactly 4 times the size of the input.
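The 4× magnification can be verified with the standard transposed-convolution output-size formula:

```python
def conv_transpose_out(size, kernel=9, stride=4, padding=3, output_padding=1):
    """Output spatial size of torch.nn.ConvTranspose2d along one dimension:
    (size - 1) * stride - 2 * padding + kernel + output_padding."""
    return (size - 1) * stride - 2 * padding + kernel + output_padding

# With kernel_size=9, stride=4, padding=3, output_padding=1 the output is
# exactly 4x the input, e.g. a 64x64 low-resolution patch becomes 256x256.
print(conv_transpose_out(64))  # 256
```

Substituting: (64 - 1) × 4 - 6 + 9 + 1 = 256, i.e. 4 × 64, consistent with the stated magnification.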
S102: alternately training the generation part and the judgment part by using a training set to obtain a trained super-resolution style migration network model;
implementation flow of the generation part (or called generator network):
the realization process of the super-resolution reconstruction part of the generator network is: SSR_A = SSR(real_A); the realization process of the style migration part of the generator network is: fake_B = model(SSR_A)
wherein real_A is the original low-resolution remote sensing image, SSR is the super-resolution reconstruction network, and SSR_A is the preliminary super-resolution reconstructed image output by that network for the original low-resolution image; model represents a U-net network, and fake_B is the output image of the preliminary reconstruction after style migration through the U-net network.
The losses in the generator network are calculated as:
loss_G=loss_G_GAN+loss_G_L1+loss_SSR
the loss _ G _ GAN is a BCEWithLotsLoss which inputs a network output value of the discriminator after channel combination of an SSR _ A image of the super-resolution reconstructed image and a fake _ B image after the grid migration, a function calling module is torch, nn, BCEWithLotsLoss (), and network parameters of the discriminator are kept unchanged during network loss calculation and parameter updating of a generator; loss _ G _ L1 is L1Loss of pixel values of a fake _ B image and an original high-resolution real _ B image after style migration, and a calling function module is torch.nn.l1Loss (); the Loss _ SSR is the SmoothL1Loss of the preliminary super-resolution reconstructed image SSR _ a and the original high-resolution image real _ B, and the calling function module is torch. And finally, the fused loss _ G is an objective function of the generator network part, and the parameters of the generator network are updated in the alternate training with the arbiter network. The discriminant part (or referred to as a discriminant network) is specifically a markov discriminant.
S103: and inputting the remote sensing image with low resolution into the trained super-resolution style migration network model to obtain the super-resolution remote sensing image.
The beneficial effects provided by the invention are as follows: the defects that a traditional super-resolution network model is poor in generalization capability and the details of the texture of the reconstructed image are lost are overcome, and the reconstructed image with the definition and the texture characteristics closer to those of a real high-resolution remote sensing image can be generated.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

1. The remote sensing image super-resolution reconstruction method based on the super-resolution style migration network is characterized by comprising the following steps: the method specifically comprises the following steps:
s101: constructing a super-resolution style migration network model; the super-resolution style migration network model comprises a generation part and a judgment part; the generation part is formed by connecting a super-resolution reconstruction network SSR and a U-net network;
s102: alternately training the generation part and the judgment part by using a training set to obtain a trained super-resolution style migration network model;
s103: and inputting the remote sensing image with low resolution into the trained super-resolution style migration network model to obtain the super-resolution remote sensing image.
2. The remote sensing image super-resolution reconstruction method based on the super-resolution style migration network of claim 1, wherein: the construction process of the super-resolution reconstruction network SSR is as follows:
s201: performing a convolution operation on the low-resolution remote sensing image real_A using 64 convolution kernels of size 5 × 5, and activating with PReLU, to obtain the original feature image corresponding to real_A;
s202: performing channel shrinkage on the original feature image using 16 convolution kernels of size 1 × 1, and activating with PReLU, to obtain a shrunk feature image;
s203: performing nonlinear mapping on the shrunk feature image through 4 convolution layers (each layer comprising 16 convolution kernels of size 3 × 3 and a PReLU activation function) to obtain a nonlinearly mapped feature image;
s204: performing channel expansion on the nonlinearly mapped feature image using 64 convolution kernels of size 1 × 1, activating with PReLU, and finally performing deconvolution to obtain a preliminary super-resolution remote sensing image SSR_A magnified 4 times relative to the low-resolution image real_A; SSR_A has the same size as the original high-resolution image real_B.
3. The remote sensing image super-resolution reconstruction method based on the super-resolution style migration network of claim 2, wherein: in step S203, each of the 4 convolution layers specifically comprises 16 convolution kernels of size 3 × 3 and a PReLU activation function.
4. The remote sensing image super-resolution reconstruction method based on the super-resolution style migration network of claim 2, wherein: the generation part is used to generate a false image to deceive the judgment part, specifically as follows:
Fake_B=model(SSR_A)
wherein model represents a U-net network, and fake_B represents the output image of the preliminary super-resolution remote sensing image SSR_A after style migration through the U-net network, i.e. the false image generated to deceive the judgment part.
5. The remote sensing image super-resolution reconstruction method based on the super-resolution style migration network of claim 4, wherein: in step S102, the generation part and the discrimination part are alternately trained by using a training set, wherein a loss function of the generation part is calculated as follows:
loss_G=loss_G_GAN+loss_G_L1+loss_SSR
wherein loss_G_GAN represents the loss on the discriminator network's output when the preliminary super-resolution reconstructed image SSR_A and fake_B are channel-merged and input to the discriminator; loss_G_L1 is the pixel-value loss between fake_B and the original high-resolution image real_B, which is the training data in the training set; loss_SSR is the loss between the preliminary super-resolution reconstructed image SSR_A and the original high-resolution image real_B; and loss_G is the total loss of the generation part.
6. The remote sensing image super-resolution reconstruction method based on the super-resolution style migration network of claim 5, wherein: the discriminating part is specifically a Markov discriminator.
CN202011537739.8A 2020-12-23 2020-12-23 Remote sensing image super-resolution reconstruction method based on super-resolution style migration network Active CN112634135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011537739.8A CN112634135B (en) 2020-12-23 2020-12-23 Remote sensing image super-resolution reconstruction method based on super-resolution style migration network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011537739.8A CN112634135B (en) 2020-12-23 2020-12-23 Remote sensing image super-resolution reconstruction method based on super-resolution style migration network

Publications (2)

Publication Number Publication Date
CN112634135A true CN112634135A (en) 2021-04-09
CN112634135B CN112634135B (en) 2022-09-13

Family

ID=75321734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011537739.8A Active CN112634135B (en) 2020-12-23 2020-12-23 Remote sensing image super-resolution reconstruction method based on super-resolution style migration network

Country Status (1)

Country Link
CN (1) CN112634135B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075581A1 (en) * 2016-09-15 2018-03-15 Twitter, Inc. Super resolution using a generative adversarial network
CN109949222A (en) * 2019-01-30 2019-06-28 北京交通大学 Image super-resolution rebuilding method based on grapheme
CN110136063A (en) * 2019-05-13 2019-08-16 南京信息工程大学 A kind of single image super resolution ratio reconstruction method generating confrontation network based on condition
CN110599401A (en) * 2019-08-19 2019-12-20 中国科学院电子学研究所 Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
EP3611699A1 (en) * 2018-08-14 2020-02-19 Siemens Healthcare GmbH Image segmentation using deep learning techniques
CN110827213A (en) * 2019-10-11 2020-02-21 西安工程大学 Super-resolution image restoration method based on generation type countermeasure network
CN111583109A (en) * 2020-04-23 2020-08-25 华南理工大学 Image super-resolution method based on generation countermeasure network
CN111667407A (en) * 2020-05-18 2020-09-15 武汉大学 Image super-resolution method guided by depth information
CN111899168A (en) * 2020-07-02 2020-11-06 中国地质大学(武汉) Remote sensing image super-resolution reconstruction method and system based on feature enhancement

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075581A1 (en) * 2016-09-15 2018-03-15 Twitter, Inc. Super resolution using a generative adversarial network
EP3611699A1 (en) * 2018-08-14 2020-02-19 Siemens Healthcare GmbH Image segmentation using deep learning techniques
CN109949222A (en) * 2019-01-30 2019-06-28 北京交通大学 Image super-resolution rebuilding method based on grapheme
CN110136063A (en) * 2019-05-13 2019-08-16 南京信息工程大学 A kind of single image super resolution ratio reconstruction method generating confrontation network based on condition
CN110599401A (en) * 2019-08-19 2019-12-20 中国科学院电子学研究所 Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
CN110827213A (en) * 2019-10-11 2020-02-21 西安工程大学 Super-resolution image restoration method based on generation type countermeasure network
CN111583109A (en) * 2020-04-23 2020-08-25 华南理工大学 Image super-resolution method based on generation countermeasure network
CN111667407A (en) * 2020-05-18 2020-09-15 武汉大学 Image super-resolution method guided by depth information
CN111899168A (en) * 2020-07-02 2020-11-06 中国地质大学(武汉) Remote sensing image super-resolution reconstruction method and system based on feature enhancement

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
He Ping et al.: "Residential Area Segmentation of Remote Sensing Images Based on Generative Adversarial Networks", Transducer and Microsystem Technologies *
Shi Zhenwei et al.: "A Survey of Image Super-Resolution Reconstruction Algorithms", Journal of Data Acquisition and Processing *
Li Ying et al.: "A Robust Multi-Purpose Image Enhancement Algorithm Based on Generative Adversarial Networks", Computer Applications and Software *

Also Published As

Publication number Publication date
CN112634135B (en) 2022-09-13

Similar Documents

Publication Publication Date Title
Wang et al. Patchmatchnet: Learned multi-view patchmatch stereo
CN110135366B (en) Shielded pedestrian re-identification method based on multi-scale generation countermeasure network
CN110033410B (en) Image reconstruction model training method, image super-resolution reconstruction method and device
CN111524135B (en) Method and system for detecting defects of tiny hardware fittings of power transmission line based on image enhancement
CN109241972B (en) Image semantic segmentation method based on deep learning
CN110363068B (en) High-resolution pedestrian image generation method based on multiscale circulation generation type countermeasure network
CN113870422B (en) Point cloud reconstruction method, device, equipment and medium
CN113034361B (en) Remote sensing image super-resolution reconstruction method based on improved ESRGAN
CN111353938A (en) Image super-resolution learning method based on network feedback
CN113449612B (en) Three-dimensional target point cloud identification method based on sub-flow sparse convolution
CN116580184A (en) YOLOv 7-based lightweight model
CN115293986A (en) Multi-temporal remote sensing image cloud region reconstruction method
CN113111740A (en) Characteristic weaving method for remote sensing image target detection
CN112634135B (en) Remote sensing image super-resolution reconstruction method based on super-resolution style migration network
CN116758130A (en) Monocular depth prediction method based on multipath feature extraction and multi-scale feature fusion
CN115965968A (en) Small sample target detection and identification method based on knowledge guidance
CN116129234A (en) Attention-based 4D millimeter wave radar and vision fusion method
Li et al. Monocular 3-D Object Detection Based on Depth-Guided Local Convolution for Smart Payment in D2D Systems
CN112465836B (en) Thermal infrared semantic segmentation unsupervised field self-adaption method based on contour information
CN111047571B (en) Image salient target detection method with self-adaptive selection training process
CN115205527A (en) Remote sensing image bidirectional semantic segmentation method based on domain adaptation and super-resolution
CN113362240A (en) Image restoration method based on lightweight feature pyramid model
CN113313180A (en) Remote sensing image semantic segmentation method based on deep confrontation learning
Yang et al. Deep networks for image super-resolution using hierarchical features
CN113240589A (en) Image defogging method and system based on multi-scale feature fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230824

Address after: Room 415, 4th Floor, Building 35, Erlizhuang, Haidian District, Beijing, 100080

Patentee after: BEIJING SPACE-TIME TECHNOLOGY DEVELOPMENT CO.,LTD.

Address before: No. 388 Lumo Road, Hongshan District, Wuhan, Hubei Province, 430000

Patentee before: CHINA University OF GEOSCIENCES (WUHAN CITY)