CN112862809A - Spatial resolution enhancement method based on weak supervised deep learning, terminal equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112862809A
CN112862809A (application number CN202110254785.5A)
Authority
CN
China
Prior art keywords
image
spatial resolution
enhancement
data
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110254785.5A
Other languages
Chinese (zh)
Other versions
CN112862809B (en)
Inventor
Li Jun (李军)
Yu Qiutong (于秋童)
Ma Lingfei (马凌飞)
Li Haifeng (李海峰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central University of Finance and Economics
Original Assignee
Central University of Finance and Economics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central University of Finance and Economics
Priority to CN202110254785.5A
Publication of CN112862809A
Application granted
Publication of CN112862809B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06T5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion, by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The application provides a spatial resolution enhancement method based on weakly supervised deep learning, which comprises: preprocessing a first image to obtain a first fine image; preprocessing a second image to obtain a second fine image, and performing data enhancement on the second fine image to obtain second data-enhancement samples; updating labels of a land cover map based on weak supervision to obtain label-enhancement data; enhancing the spatial resolution of the first fine image to obtain first data-enhancement samples; inputting the first fine image, the second fine image, the first data-enhancement samples, the second data-enhancement samples and the label-enhancement data into a deep convolutional neural network for training to obtain a weakly supervised deep convolutional neural network; and inputting the land cover map into the weakly supervised deep convolutional neural network to obtain a land cover map with enhanced spatial resolution. Daily low-spatial-resolution land cover maps are thereby enhanced using a weakly supervised deep convolutional neural network (CNN).

Description

Spatial resolution enhancement method based on weak supervised deep learning, terminal equipment and computer readable storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular relates to a spatial resolution enhancement method based on weakly supervised deep learning, a terminal device, and a computer-readable storage medium.
Background
Spatial resolution refers to the minimum distance between two adjacent ground objects that can be distinguished in a remote sensing image. For photographic images, it is commonly expressed as the number of black-and-white "line pairs" per unit length (line pairs/mm); for scanned images, it is usually expressed as the instantaneous field of view (IFOV) in milliradians (mrad), i.e., the pixel, the smallest area that can be resolved in the scanned image. The actual size on the ground corresponding to the spatial resolution value is called the ground resolution: for photographic images, the ground coverage width of a line pair (in meters); for scanned images, the actual ground size corresponding to a pixel (in meters). For example, the spatial resolution, or ground resolution, of a Landsat multispectral scanner image is 79 m (pixel size 56 × 79 m²). However, even when the line-pair width and the pixel size have the same numerical value, their ground resolutions differ: for optical-mechanical scanned images, about 2.8 pixels are needed to carry the same information as one line pair in a photographic image. Spatial resolution is one of the important indexes for evaluating sensor performance and remote sensing information, and an important basis for identifying the shape and size of ground objects.
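As a worked illustration of these figures, the following minimal Python sketch reproduces the 79 m pixel size and the 2.8-pixels-per-line-pair equivalence; the IFOV and orbital altitude are assumed Landsat-MSS-like values, not quantities taken from this application:

    # Illustrative arithmetic only: IFOV and altitude are assumed values.
    ifov_mrad = 0.086                 # assumed instantaneous field of view, mrad
    altitude_m = 919_000              # assumed orbital altitude, metres

    ground_pixel_m = ifov_mrad * 1e-3 * altitude_m
    print(f"ground pixel size: {ground_pixel_m:.1f} m")          # ~79 m

    # About 2.8 scanner pixels carry the same information as one photographic
    # line pair, so the equivalent line-pair width on the ground is:
    print(f"line-pair equivalent: {2.8 * ground_pixel_m:.0f} m")  # ~221 m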
Multispectral satellite images are the primary data source for monitoring land use and land cover change worldwide. However, the consistency of land cover monitoring is limited by the spatial and temporal resolution of the acquired satellite images, and high-resolution satellite images that are publicly available on a consistent daily basis remain quite limited.
Disclosure of Invention
1. Technical problem to be solved
Multispectral satellite images are the primary data source for monitoring land use and land cover change worldwide. However, the consistency of land cover monitoring is limited by the spatial and temporal resolution of the acquired satellite images, and high-resolution satellite images that are publicly available on a consistent daily basis remain quite limited. To address this problem, the present application provides a spatial resolution enhancement method based on weakly supervised deep learning, a terminal device, and a computer-readable storage medium.
2. Technical scheme
In order to achieve the above object, the present application provides a spatial resolution enhancement method based on weakly supervised deep learning, the method comprising the following steps: 1) preprocessing a first image to obtain a first fine image; 2) preprocessing a second image to obtain a second fine image, and performing data enhancement on the second fine image to obtain second data-enhancement samples; 3) updating labels of a land cover map based on weak supervision to obtain label-enhancement data; 4) enhancing the spatial resolution of the first fine image to obtain first data-enhancement samples; 5) inputting the first fine image, the second fine image, the first data-enhancement samples, the second data-enhancement samples and the label-enhancement data into a deep convolutional neural network for training to obtain a weakly supervised deep convolutional neural network; 6) inputting the land cover map into the weakly supervised deep convolutional neural network to obtain a land cover map with enhanced spatial resolution.
Another embodiment provided by the present application is that the step 1) of preprocessing the first image to obtain the first fine image comprises: denoising and linearly stretching the first image, and performing edge-directed window filtering on the denoised first image.
Another embodiment provided by the present application is that the step 2) of performing data enhancement on the second fine image comprises: performing geometric transformation and linear transformation on the second fine image.
Another embodiment provided by the present application is that the geometric transformation includes flipping, rotation and bending, and the linear transformation includes a 2% to 98% contrast stretch.
Another embodiment provided by the present application is that the step 3) of updating the labels of the land cover map based on weak supervision comprises: refining the noisy labels, with the labels updated every 5 epochs over the whole training data set; the first 5 epochs train on the original labels, and after the 5th epoch the model outputs intermediate predictions on all training samples; updated labels are obtained by comparing the intermediate predictions with the original labels, and the intersection of the original labels and the intermediate predictions is used for the next 5 epochs.
Another embodiment provided by the present application is that, in the step 4), an improved and enhanced version of a deep-learning-based image semantic segmentation network is used to train a model, and the spatial resolution of the first fine image is enhanced by the trained model.
Another embodiment provided by the present application is that the first image is a Sentinel-1 synthetic aperture radar image, the second image is a Sentinel-2 multispectral image, and the land cover map is a Moderate Resolution Imaging Spectroradiometer (MODIS) land cover map.
In another embodiment provided by the present application, in the step 5), the deep convolutional neural network is trained in a semi-supervised mode.
The present application also provides a terminal device, which comprises a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the above method when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method.
3. Advantageous effects
Compared with the prior art, the spatial resolution enhancement method based on weakly supervised deep learning, the terminal device and the computer-readable storage medium provided by the present application have the following beneficial effects:
according to the spatial resolution enhancement method based on the weak supervised deep learning, provided by the application, for a large-scale land cover map, a more challenging land cover type can be classified by using a suggested method.
The spatial resolution enhancement method based on weakly supervised deep learning provided by the present application is a spatio-temporal fusion method in which daily low-spatial-resolution land cover maps are enhanced using a weakly supervised deep convolutional neural network (CNN).
In the spatial resolution enhancement method based on weakly supervised deep learning provided by the present application, a deep-learning-based method is developed to effectively fuse MODIS and Sentinel data.
In the spatial resolution enhancement method based on weakly supervised deep learning provided by the present application, because the ground-truth labels are noisy and unreliable, the deep learning semantic segmentation network DeepLabV3+ is comprehensively evaluated.
Drawings
FIG. 1 is a schematic flowchart of an embodiment of a spatial resolution enhancement method based on weakly supervised deep learning according to the present application;
FIG. 2 is a schematic view of the label refinement process of the present application (ignore masks shown in white);
FIG. 3 is a schematic representation of five excerpted classification results of the model presented in the present application;
fig. 4 is a schematic structural diagram of a terminal device of the present application.
Detailed Description
Hereinafter, specific embodiments of the present application will be described in detail with reference to the accompanying drawings, and it will be apparent to those skilled in the art from this detailed description that the present application can be practiced. Features from different embodiments may be combined to yield new embodiments, or certain features may be substituted for certain embodiments to yield yet further preferred embodiments, without departing from the principles of the present application.
MODIS is an important sensor carried on the Terra and Aqua satellites. It is the only satellite-borne instrument that directly broadcasts real-time observation data to the whole world through its X band, and the data can be received and used free of charge; MODIS data are received and used in many countries and regions around the world.
Referring to FIGS. 1 to 4, the present application provides a spatial resolution enhancement method based on weakly supervised deep learning, comprising the following steps: 1) preprocess the first image to obtain a first fine image, namely, preprocess the Sentinel-1 image.
S11, the satellite image is input, and the Sentinel-1 image is preprocessed with a speckle-noise removal (denoising) technique and linearly stretched. S12, edge-directed window filtering is performed on the denoised image using the Lee filtering method: local means and local variances are calculated using only the pixels within the edge-directed window. After speckle filtering, the image is enhanced with a 2% linear stretch, in which the lowest 2% and highest 2% of values are mapped to 0 and 255, respectively.
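As a rough illustration of S11 and S12, the following Python sketch applies a basic Lee filter followed by the 2% linear stretch. It is a minimal sketch, not the patent's implementation: it uses a global noise estimate and a fixed 7 × 7 square window, whereas the method described above selects edge-directed windows.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, size=7):
        """Basic Lee filter: blend each pixel with its local mean according to
        local statistics (edge-directed window selection omitted here)."""
        mean = uniform_filter(img, size)
        sq_mean = uniform_filter(img ** 2, size)
        var = np.maximum(sq_mean - mean ** 2, 0.0)
        noise_var = var.mean()                      # crude global noise estimate
        weight = var / (var + noise_var + 1e-12)
        return mean + weight * (img - mean)

    def linear_stretch_2pct(img):
        """Map the 2nd/98th percentiles to 0/255 and clip the tails."""
        lo, hi = np.percentile(img, (2, 98))
        out = (img - lo) / (hi - lo + 1e-12) * 255.0
        return np.clip(out, 0, 255).astype(np.uint8)

    sar = np.abs(np.random.randn(256, 256))         # stand-in Sentinel-1 tile
    fine1 = linear_stretch_2pct(lee_filter(sar))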
2) Preprocess the second image to obtain a second fine image, and perform data enhancement on the second fine image to obtain second data-enhancement samples; that is, image preprocessing and data enhancement are performed on the Sentinel-2 image. Several augmentation techniques are added to the data loader module of the model network to improve performance by expanding the training data set, including geometric transformations (e.g., flipping, rotation, bending) and linear transformations (e.g., a 2% to 98% contrast stretch). All geometric transformations are randomly selected and applied to the image, each with a probability of 0.5. Linear stretching is assumed to be suitable for low-contrast images (e.g., images taken at night).
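A minimal sketch of this augmentation scheme follows (numpy only; the "bending" transform is approximated by a small per-row shift, an assumption since the patent gives no formula for it):

    import numpy as np

    rng = np.random.default_rng(0)

    def augment(img):
        if rng.random() < 0.5:                      # horizontal flip
            img = np.flip(img, axis=1)
        if rng.random() < 0.5:                      # vertical flip
            img = np.flip(img, axis=0)
        if rng.random() < 0.5:                      # 90/180/270 degree rotation
            img = np.rot90(img, k=int(rng.integers(1, 4)))
        if rng.random() < 0.5:                      # crude "bending" stand-in
            shifts = (2 * np.sin(np.arange(img.shape[0]) / 16)).astype(int)
            img = np.stack([np.roll(row, s) for row, s in zip(img, shifts)])
        lo, hi = np.percentile(img, (2, 98))        # 2%-98% contrast stretch
        return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

    tile = rng.random((128, 128))                   # stand-in Sentinel-2 band
    augmented = augment(tile)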
3) Update the labels of the land cover map based on weak supervision to obtain label-enhancement data; that is, perform weakly supervised label updating on the MODIS labels.
S31, the noisy labels are refined, and the labels are updated every 5 epochs over the whole training data set. S32, the first 5 epochs train on the original labels; after the 5th epoch, the model outputs intermediate predictions on all training samples. S33, updated labels are obtained by comparing the intermediate predictions with the original MODIS labels, and the intersection of the two is used for the next 5 epochs.
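The intersection step can be expressed in a few lines. The sketch below is an illustration under assumptions, not the patent's code: the ignore value 255 stands in for the white ignore mask of FIG. 2.

    import numpy as np

    IGNORE = 255   # assumed ignore index for masked-out pixels

    def update_labels(original, prediction):
        """Keep a pixel's label only where the intermediate prediction agrees
        with the original MODIS label; mask out disagreeing pixels."""
        updated = original.copy()
        updated[prediction != original] = IGNORE
        return updated

    modis = np.random.randint(0, 10, (64, 64))      # stand-in MODIS labels
    pred = np.random.randint(0, 10, (64, 64))       # stand-in model prediction
    labels_for_next_5_epochs = update_labels(modis, pred)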
4) Enhance the spatial resolution of the first fine image to obtain first data-enhancement samples; that is, model training is performed using an improved DeepLabV3+ image segmentation network, and the spatial resolution of the multispectral satellite image is enhanced.
S41, the proposed model is implemented in PyTorch, and the weights of a model pre-trained on the ImageNet dataset are used to initialize it. S42, it is worth noting that the number of land cover classes in the training dataset differs from the number of categories in the ImageNet dataset, so the logit weights of the pre-trained model are excluded. S43, the preprocessing of the Sentinel-1 SAR image and the data augmentation are added to the data loader module of the network, and the network structure is altered during training to update the labels. The model is trained for 50 epochs.
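A sketch of this initialization, using torchvision's DeepLabV3 (ResNet-50 backbone) as a stand-in for the modified DeepLabV3+ (the patent's exact architecture changes are not reproduced here); the class count and the 15-channel input (2 Sentinel-1 bands + 13 Sentinel-2 bands) are assumptions, and the classifier head is re-created rather than loaded, matching S42:

    import torch
    from torchvision.models.segmentation import deeplabv3_resnet50

    NUM_CLASSES = 10        # assumed land-cover class count
    IN_CHANNELS = 15        # assumed: 2 SAR bands + 13 multispectral bands

    # ImageNet-pretrained backbone; the logit (classifier) weights are excluded
    # because num_classes differs from ImageNet (torchvision >= 0.13 API).
    model = deeplabv3_resnet50(weights=None,
                               weights_backbone="IMAGENET1K_V1",
                               num_classes=NUM_CLASSES)
    # Widen the first convolution to accept the stacked S1+S2 input.
    model.backbone.conv1 = torch.nn.Conv2d(IN_CHANNELS, 64, kernel_size=7,
                                           stride=2, padding=3, bias=False)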
5) Input the first fine image, the second fine image, the first data-enhancement samples, the second data-enhancement samples and the label-enhancement data into a deep convolutional neural network for training to obtain a weakly supervised deep convolutional neural network; that is, the preprocessed Sentinel-1 image and its data-enhancement samples, the Sentinel-2 data-enhancement samples obtained in the above steps, and the label-enhancement data derived from the MODIS labels are used as input to train a deep convolutional neural network in a semi-supervised mode.
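Continuing the sketch above, one weakly supervised training step might look as follows; the loss ignores the masked pixels via ignore_index, while the optimizer, learning rate and random stand-in batch are assumptions rather than values from the patent:

    import torch

    criterion = torch.nn.CrossEntropyLoss(ignore_index=255)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr assumed

    images = torch.randn(2, IN_CHANNELS, 128, 128)  # stand-in S1+S2 batch
    labels = torch.randint(0, NUM_CLASSES, (2, 128, 128))
    labels[:, :8, :] = 255                          # masked (ignored) pixels

    model.train()
    logits = model(images)["out"]                   # DeepLabV3 returns a dict
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()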
6) Input the land cover map into the weakly supervised deep convolutional neural network to obtain a land cover map with enhanced spatial resolution.
The method integrates multi-source satellite data to provide enhanced land cover mapping. It extends the currently most advanced semantic segmentation network, DeepLabV3+, and offers users a way to enhance the spatial resolution of the integrated map (original spatial resolution 500 m), improving the resolution of the MODIS-derived land cover mapping with synthetic aperture radar (SAR) images derived from Sentinel-1 and multispectral images derived from Sentinel-2.
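The multi-source input implied here is a channel-wise stack of the two Sentinel sources before they enter the network; a minimal sketch (the band counts follow the usual Sentinel conventions and are assumed rather than quoted from the patent):

    import numpy as np

    s1 = np.random.rand(2, 256, 256).astype(np.float32)    # Sentinel-1: VV, VH
    s2 = np.random.rand(13, 256, 256).astype(np.float32)   # Sentinel-2 bands
    fused = np.concatenate([s1, s2], axis=0)                # (15, 256, 256)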
The present application combines Sentinel-1&2 images with the MODIS land cover map against the application background that a large spatial-resolution gap exists between the massive MODIS remote sensing data and the Sentinel images. Training of the neural network is performed on the public dataset SEN12MS, while validation and testing are performed using the ground truth data of the 2020 IEEE GRSS Data Fusion Contest. The results show that the synthesized land cover map has a higher spatial resolution than the corresponding MODIS land cover map. By fusing the fine images from Sentinel-1&2 with the daily low-quality images from MODIS, a high-resolution time series of satellite images can be generated with the proposed ensemble method.
From top to bottom in FIG. 3: (a) detection of coastline and beach; (b) weakening of the effect of the MODIS labels; (c) classification errors between river and wetland; (d) classification errors between farmland and shrubland; (e) misclassification of barren land. For each example, FIG. 3 shows the S2 input, the S1 input, the MODIS label, the prediction and the DFC label, respectively.
The present application further provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps in any of the method embodiments described above are implemented.
The terminal device of this embodiment includes: at least one processor (only one is shown in fig. 4), a memory, and a computer program stored in the memory and executable on the at least one processor, wherein the processor, when executing the computer program, implements the steps in any of the method embodiments described above.
The spatial resolution enhancement method provided by the embodiments of the present application can be applied to terminal devices such as tablet computers, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA); the embodiments of the present application do not limit the specific type of the terminal device.
For example, the terminal device may be a Station (ST) in a WLAN, a Personal Digital Assistant (PDA) device, a handheld device with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, a computer, a laptop, a handheld communication device, a handheld computing device, a satellite radio, a wireless modem card.
The terminal device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The terminal device may include, but is not limited to, a processor, a memory. Those skilled in the art will appreciate that the terminal device is merely an example, and does not constitute a limitation of the terminal device, and may include more or less components than those shown, or combine some components, or different components, such as input and output devices, network access devices, etc.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory may, in some embodiments, be an internal storage unit of the terminal device, such as a hard disk or memory of the terminal device. In other embodiments, the memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal device. Further, the memory may include both an internal storage unit and an external storage device of the terminal device. The memory is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a terminal device, enables the terminal device to implement the steps in the above method embodiments when executed. The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

Claims (10)

1. A spatial resolution enhancement method based on weakly supervised deep learning, characterized by comprising the following steps:
1) preprocessing a first image to obtain a first fine image;
2) preprocessing a second image to obtain a second fine image, and performing data enhancement on the second fine image to obtain second data-enhancement samples;
3) updating labels of a land cover map based on weak supervision to obtain label-enhancement data;
4) enhancing the spatial resolution of the first fine image to obtain first data-enhancement samples;
5) inputting the first fine image, the second fine image, the first data-enhancement samples, the second data-enhancement samples and the label-enhancement data into a deep convolutional neural network for training to obtain a weakly supervised deep convolutional neural network;
6) inputting the land cover map into the weakly supervised deep convolutional neural network to obtain a land cover map with enhanced spatial resolution.
2. The spatial resolution enhancement method based on weakly supervised deep learning of claim 1, wherein the step 1) of preprocessing the first image to obtain the first fine image comprises:
denoising and linearly stretching the first image, and performing edge-directed window filtering on the denoised first image.
3. The spatial resolution enhancement method based on weakly supervised deep learning of claim 1, wherein the step 2) of performing data enhancement on the second fine image comprises:
performing geometric transformation and linear transformation on the second fine image.
4. The spatial resolution enhancement method based on weakly supervised deep learning of claim 3, wherein the geometric transformation includes flipping, rotation and bending, and the linear transformation includes a 2% to 98% contrast stretch.
5. The spatial resolution enhancement method based on weakly supervised deep learning of claim 1, wherein the step 3) of performing the weakly supervised label update on the land cover map comprises:
refining the noisy labels, with the labels updated every 5 epochs over the whole training data set; the first 5 epochs training on the original labels, and the model outputting intermediate predictions on all training samples after the 5th epoch; updated labels being obtained by comparing the intermediate predictions with the original labels, and the intersection of the original labels and the intermediate predictions being used for the next 5 epochs.
6. The spatial resolution enhancement method based on weakly supervised deep learning of claim 1, wherein in the step 4) an improved and enhanced version of a deep-learning-based image semantic segmentation network is used to train a model, and the spatial resolution of the first fine image is enhanced by the trained model.
7. The spatial resolution enhancement method based on weakly supervised deep learning according to any one of claims 1 to 6, wherein the first image is a Sentinel-1 synthetic aperture radar image, the second image is a Sentinel-2 multispectral image, and the land cover map is a Moderate Resolution Imaging Spectroradiometer (MODIS) land cover map.
8. The spatial resolution enhancement method based on weakly supervised deep learning of claim 7, wherein the step 5) trains the deep convolutional neural network in a semi-supervised mode.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium in which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 8.
CN202110254785.5A 2021-03-09 2021-03-09 Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium Active CN112862809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110254785.5A CN112862809B (en) 2021-03-09 2021-03-09 Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110254785.5A CN112862809B (en) 2021-03-09 2021-03-09 Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112862809A (en) 2021-05-28
CN112862809B (en) 2023-07-18

Family

ID=75993501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110254785.5A Active CN112862809B (en) 2021-03-09 2021-03-09 Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112862809B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017129940A1 (en) * 2016-01-29 2017-08-03 Global Surface Intelligence Limited System and method for earth observation and analysis
CN108416370A (en) * 2018-02-07 2018-08-17 深圳大学 Image classification method, device based on semi-supervised deep learning and storage medium
WO2019157348A1 (en) * 2018-02-09 2019-08-15 The Board Of Trustees Of The University Of Illinois A system and method to fuse multiple sources of optical data to generate a high-resolution, frequent and cloud-/gap-free surface reflectance product
US20190303703A1 (en) * 2018-03-30 2019-10-03 Regents Of The University Of Minnesota Predicting land covers from satellite images using temporal and spatial contexts
US20200125929A1 (en) * 2018-10-19 2020-04-23 X Development Llc Crop yield prediction at field-level and pixel-level
CN110046415A (en) * 2019-04-08 2019-07-23 中国科学院南京地理与湖泊研究所 A kind of soil organic matter content remote sensing dynamic playback method of space-time fining
CN111652193A (en) * 2020-07-08 2020-09-11 中南林业科技大学 Wetland classification method based on multi-source images
CN111932457A (en) * 2020-08-06 2020-11-13 北方工业大学 High-space-time fusion processing algorithm and device for remote sensing image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIUTONG YU et al.: "Spatial Resolution Enhancement for Large-Scale Land Cover Mapping via Weakly Supervised Deep Learning", Photogrammetric Engineering & Remote Sensing, vol. 87, no. 6, pages 405-412 *
YEONJU CHOI et al.: "A No-Reference Super Resolution for Satellite Image Quality Enhancement for KOMPSAT-3", IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, pages 220-223 *
邰建豪 (TAI Jianhao): "深度学习在遥感影像目标检测和地表覆盖分类中的应用研究" [Research on the Application of Deep Learning in Object Detection and Land Cover Classification of Remote Sensing Images], China Doctoral Dissertations Full-text Database (Basic Sciences), no. 2, pages 008-13 *

Also Published As

Publication number Publication date
CN112862809B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
Lei et al. Coupled adversarial training for remote sensing image super-resolution
Maggiori et al. Convolutional neural networks for large-scale remote-sensing image classification
Garg et al. Semantic segmentation of PolSAR image data using advanced deep learning model
Lee et al. Local similarity Siamese network for urban land change detection on remote sensing images
Shakya et al. CNN-based fusion and classification of SAR and Optical data
Aghdami-Nia et al. Automatic coastline extraction through enhanced sea-land segmentation by modifying Standard U-Net
Wang et al. A high-resolution feature difference attention network for the application of building change detection
Wang et al. Urban building extraction from high-resolution remote sensing imagery based on multi-scale recurrent conditional generative adversarial network
Civicioglu et al. Contrast stretching based pansharpening by using weighted differential evolution algorithm
Li et al. Progressive fusion learning: A multimodal joint segmentation framework for building extraction from optical and SAR images
Chen et al. Memory-oriented unpaired learning for single remote sensing image dehazing
Li et al. HS2P: Hierarchical spectral and structure-preserving fusion network for multimodal remote sensing image cloud and shadow removal
Qin et al. Deep ResNet based remote sensing image super-resolution reconstruction in discrete wavelet domain
Chen et al. Spatiotemporal fusion for spectral remote sensing: A statistical analysis and review
Lv et al. Spatial-contextual information utilization framework for land cover change detection with hyperspectral remote sensed images
Gao et al. SSC-SFN: spectral-spatial non-local segment federated network for hyperspectral image classification with limited labeled samples
Yang et al. Improving building rooftop segmentation accuracy through the optimization of UNet basic elements and image foreground-background balance
CN112862809B (en) Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium
Liu et al. A deep learning method for individual arable field (IAF) extraction with cross-domain adversarial capability
Song et al. HDTFF-Net: Hierarchical deep texture features fusion network for high-resolution remote sensing scene classification
Li et al. An effective multi-model fusion method for SAR and optical remote sensing images
Xiong et al. Mask Guided Local-Global Attentive Network for Change Detection in Remote Sensing Images
Mishra et al. Exploring single-frame super-resolution on real-world Hyperion and PRISMA datasets of an urban area in a developing nation
Khan et al. Crop Type Classification using Multi-temporal Sentinel-2 Satellite Imagery: A Deep Semantic Segmentation Approach
Huang et al. Dual-branche attention network for super-resolution of remote sensing images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant