CN112862809B - Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112862809B
CN112862809B
Authority
CN
China
Prior art keywords
image
spatial resolution
data
deep learning
fine image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110254785.5A
Other languages
Chinese (zh)
Other versions
CN112862809A (en)
Inventor
李军
于秋童
马凌飞
李海峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central University of Finance and Economics
Original Assignee
Central University of Finance and Economics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central University of Finance and Economics
Priority to CN202110254785.5A
Publication of CN112862809A
Application granted
Publication of CN112862809B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06N 3/045: Neural networks; architecture; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 5/90
    • G06V 10/267: Image preprocessing; segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06T 2207/10032: Satellite or aerial image; remote sensing
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]

Abstract

The application provides a spatial resolution enhancement method based on weakly supervised deep learning. A first image is preprocessed to obtain a first fine image; a second image is preprocessed to obtain a second fine image, and data enhancement is performed on the second fine image to obtain a second data enhancement sample; weakly supervised label updating is performed on the land cover map to obtain label enhancement data; the spatial resolution of the first fine image is enhanced to obtain a first data enhancement sample; the first fine image, the second fine image, the first data enhancement sample, the second data enhancement sample and the label enhancement data are input into a deep convolutional neural network for training to obtain a weakly supervised deep convolutional neural network; and the land cover map is input into the weakly supervised deep convolutional neural network to obtain a land cover map with enhanced spatial resolution. Daily low-spatial-resolution land cover maps are thus enhanced using a weakly supervised deep convolutional neural network (CNN).

Description

Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium
Technical Field
The application belongs to the technical field of image processing and, in particular, relates to a spatial resolution enhancement method based on weakly supervised deep learning, a terminal device, and a computer-readable storage medium.
Background
Spatial resolution refers to the minimum distance between two adjacent features that can be distinguished on a remote sensing image. For photographic images, it is commonly expressed as the number of distinguishable black-and-white "line pairs" per unit length (line pairs/mm); for scanned images, it is typically expressed as the instantaneous field of view (IFOV) in milliradians (mrad), i.e., the pixel, which is the smallest resolvable area in the scanned image. The actual ground size corresponding to the spatial resolution value is called the ground resolution. For photographic images, it is the ground coverage width of a line pair, in meters; for scanned images, it is the actual ground size of a pixel, in meters. For example, the spatial resolution, or ground resolution, of the Landsat multispectral scanner image is 79 meters (with a pixel size of 56×79 m²). However, a line-pair width and a pixel of the same nominal size differ in ground resolution: for an optically scanned image, about 2.8 pixels are required to represent the same information as one line pair on a photographic image. Spatial resolution is one of the important indicators for evaluating sensor performance and remote sensing information, and is also an important basis for identifying the shape and size of ground objects.
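For example, since about 2.8 scanner pixels are needed to carry the information of one photographic line pair, a 79-meter scanner pixel corresponds to an equivalent photographic ground resolution of roughly 2.8 × 79 ≈ 220 meters per line pair.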
Multispectral satellite images are the primary data source for monitoring land use and land cover changes worldwide. However, consistent coverage monitoring is limited by the spatial and temporal resolution of the acquired satellite images. Daily high-resolution satellite images that are consistently available to the public remain quite limited.
Disclosure of Invention
1. Technical problem to be solved
Multispectral satellite images are the primary data source for monitoring land use and land cover changes worldwide. However, consistent coverage monitoring is limited by the spatial and temporal resolution of the acquired satellite images. The application therefore provides a spatial resolution enhancement method based on weakly supervised deep learning, a terminal device and a computer-readable storage medium.
2. Technical solution
In order to achieve the above object, the present application provides a spatial resolution enhancement method based on weakly supervised deep learning, the method comprising the following steps: 1) preprocessing a first image to obtain a first fine image; 2) preprocessing a second image to obtain a second fine image, and performing data enhancement on the second fine image to obtain a second data enhancement sample; 3) performing weakly supervised label updating on a land cover map to obtain label enhancement data; 4) enhancing the spatial resolution of the first fine image to obtain a first data enhancement sample; 5) inputting the first fine image, the second fine image, the first data enhancement sample, the second data enhancement sample and the label enhancement data into a deep convolutional neural network for training to obtain a weakly supervised deep convolutional neural network; and 6) inputting the land cover map into the weakly supervised deep convolutional neural network to obtain a land cover map with enhanced spatial resolution.
In another embodiment, the step 1) of preprocessing the first image to obtain a first fine image includes: performing denoising and linear stretching on the first image, and performing edge-directed window filtering on the denoised first image.
In another embodiment, the step 2) of performing data enhancement on the second fine image includes: performing geometric transformation and linear transformation on the second fine image.
In another embodiment provided herein, the geometric transformation includes flipping, rotation and warping, and the linear transformation includes a 2%-98% contrast stretch.
In another embodiment, the step 3) of performing weakly supervised label updating on the land cover map includes: refining the noisy labels, with the labels updated every 5 epochs over the training dataset; the original labels are used for training during the first 5 epochs, and after the 5th epoch the model outputs intermediate predictions on all training samples; updated labels are obtained by comparing the intermediate predictions with the original labels, and at the same time the intersection of the original labels and the intermediate predictions is obtained, this intersection being used for the next 5 epochs.
In another embodiment, the step 4) trains a model using an improved, enhanced version of a deep-learning-based image semantic segmentation network, and the spatial resolution of the first fine image is enhanced by the trained model.
In another embodiment provided by the application, the first image is a Sentinel-1 synthetic aperture radar image, the second image is a Sentinel-2 multispectral image, and the land cover map is a MODIS (Moderate Resolution Imaging Spectroradiometer) land cover map.
In another embodiment, the step 5) trains the deep convolutional neural network in a semi-supervised manner.
The application also provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method when executing the computer program.
The present application also provides a computer readable storage medium storing a computer program which when executed by a processor implements the method.
3. Advantageous effects
Compared with the prior art, the spatial resolution enhancement method, terminal device and computer-readable storage medium based on weakly supervised deep learning have the following beneficial effects:
the spatial resolution enhancement method based on weak supervision deep learning provided by the application can classify more challenging land coverage types by using the suggested method aiming at a large-scale land coverage map.
The spatial resolution enhancement method based on weakly supervised deep learning provided by the application is a spatio-temporal fusion method in which daily low-spatial-resolution land cover maps are enhanced using a weakly supervised deep convolutional neural network (CNN).
The spatial resolution enhancement method based on weakly supervised deep learning provided by the application develops a deep-learning-based method to effectively fuse MODIS and Sentinel data.
In the spatial resolution enhancement method based on weakly supervised deep learning provided by the application, because the ground truth labels are noisy and unreliable, the deep learning semantic segmentation network DeepLabV3+ is comprehensively evaluated.
Drawings
FIG. 1 is a flow diagram of an embodiment of a spatial resolution enhancement method based on weakly supervised deep learning of the present application;
FIG. 2 is a schematic diagram of the label improvement process of the present application (the ignore mask is shown in white);
FIG. 3 shows excerpts of five classification results of the proposed model of the present application;
FIG. 4 is a schematic structural diagram of a terminal device of the present application.
Detailed Description
Hereinafter, specific embodiments of the present application will be described in detail with reference to the accompanying drawings, and according to these detailed descriptions, those skilled in the art can clearly understand the present application and can practice the present application. Features from various embodiments may be combined to obtain new implementations or to replace certain features from certain embodiments to obtain other preferred implementations without departing from the principles of the present application.
MODIS is an important sensor carried on the Terra and Aqua satellites. It is the only spaceborne instrument that directly broadcasts real-time observation data worldwide via the X-band, and its data can be received and used free of charge; many countries and regions around the world receive and use MODIS data.
Referring to FIGS. 1-4, the present application provides a spatial resolution enhancement method based on weakly supervised deep learning, the method comprising the following steps: 1) Preprocess the first image to obtain a first fine image, i.e., preprocess the Sentinel-1 image.
S11: input the satellite image, apply speckle-noise-removal denoising preprocessing to the Sentinel-1 image, and perform linear stretching. S12: perform edge-directed window filtering on the denoised image using the Lee filtering method, where the local mean and local variance are computed using only the pixels within the edge-directed window. After speckle filtering, the image is enhanced with a 2% linear stretch: the value at the lowest 2% is mapped to 0 and the value at the highest 2% is mapped to 255.
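A minimal Python sketch of this preprocessing step is given below, assuming NumPy/SciPy. It uses a plain square-window Lee filter with a crude global noise-variance estimate rather than the edge-directed windows described above, so the function names, window size and noise estimate are assumptions, not the exact implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img: np.ndarray, size: int = 7) -> np.ndarray:
    """Basic Lee speckle filter on a 2-D SAR intensity image (square window)."""
    mean = uniform_filter(img, size)              # local mean
    sq_mean = uniform_filter(img * img, size)     # local mean of squares
    var = np.maximum(sq_mean - mean * mean, 0.0)  # local variance
    noise_var = np.mean(var)                      # crude global noise estimate (assumption)
    weight = var / (var + noise_var + 1e-12)      # adaptive weight in [0, 1]
    return mean + weight * (img - mean)

def linear_stretch_2pct(img: np.ndarray) -> np.ndarray:
    """Map the 2nd/98th percentiles to 0/255, as in the 2% stretch above."""
    lo, hi = np.percentile(img, (2, 98))
    out = (img - lo) / max(hi - lo, 1e-12) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage: stretched = linear_stretch_2pct(lee_filter(s1_band))
```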
2) Preprocess the second image to obtain a second fine image, and perform data enhancement on the second fine image to obtain a second data enhancement sample, i.e., preprocess the Sentinel-2 image and perform data augmentation. Several augmentations are added to the data loader module of the model network to improve performance by expanding the training dataset, including geometric transformations (e.g., flipping, rotation, warping) and linear transformations (e.g., 2% to 98% contrast stretching). All geometric transformations are randomly selected and applied to the image, each with a probability of 0.5. The linear stretching is intended for low-contrast images (for example, images captured at night).
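A minimal sketch of this augmentation stage is shown below, assuming channel-first (C, H, W) NumPy arrays; warping is omitted for brevity, and all names are illustrative rather than the actual data loader code.

```python
import numpy as np

rng = np.random.default_rng()

def augment(img: np.ndarray, label: np.ndarray):
    """Apply random geometric transforms (each with p = 0.5) to an image and its label map."""
    if rng.random() < 0.5:                        # horizontal flip (last axis = W)
        img, label = img[..., ::-1], label[..., ::-1]
    if rng.random() < 0.5:                        # vertical flip (H axis)
        img, label = img[..., ::-1, :], label[..., ::-1, :]
    if rng.random() < 0.5:                        # rotate by a random multiple of 90 degrees
        k = int(rng.integers(1, 4))
        img = np.rot90(img, k, axes=(-2, -1))
        label = np.rot90(label, k, axes=(-2, -1))
    return np.ascontiguousarray(img), np.ascontiguousarray(label)

def contrast_stretch(img: np.ndarray) -> np.ndarray:
    """2%-98% per-channel linear contrast stretch to [0, 1], for low-contrast images."""
    out = np.empty_like(img, dtype=np.float32)
    for c in range(img.shape[0]):
        lo, hi = np.percentile(img[c], (2, 98))
        out[c] = np.clip((img[c] - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return out
```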
3) Perform weakly supervised label updating on the land cover map to obtain label enhancement data, i.e., perform weakly supervised label updating on the MODIS labels.
S31: refine the noisy labels, updating the labels every 5 epochs over the training dataset. S32: the original labels are used for training during the first 5 epochs; after the 5th epoch, the model outputs intermediate predictions on all training samples. S33: updated labels are obtained by comparing the intermediate predictions with the original MODIS labels, and the intersection of the two is used for the next 5 epochs.
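A minimal PyTorch sketch of this update cycle follows; the ignore value of 255, the output shape and all names are assumptions.

```python
import torch

IGNORE = 255  # ignore-index value for pixels dropped from the loss (assumption)

@torch.no_grad()
def update_labels(model, images: torch.Tensor, original_labels: torch.Tensor) -> torch.Tensor:
    """Keep only the pixels where the intermediate prediction agrees with the original label."""
    model.eval()
    logits = model(images)                  # (N, C, H, W) class scores
                                            # (for a torchvision segmentation model: model(images)["out"])
    preds = logits.argmax(dim=1)            # (N, H, W) intermediate predictions
    agree = preds == original_labels        # intersection of predictions and original labels
    updated = original_labels.clone()
    updated[~agree] = IGNORE                # disagreeing pixels are ignored for the next 5 epochs
    return updated

# In the training loop (sketch): refresh the labels every 5 epochs.
# if epoch >= 5 and epoch % 5 == 0:
#     labels = update_labels(model, images, modis_labels)
```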
4) Enhance the spatial resolution of the first fine image to obtain a first data enhancement sample, i.e., perform model training using an improved DeepLabV3+ (image semantic segmentation) network, and enhance the spatial resolution of the multispectral satellite image.
S41: the proposed model is implemented in PyTorch, and the weights of a model pre-trained on the ImageNet dataset are used for model initialization. S42: the number of land cover classes in the training dataset differs from the number of categories in the ImageNet dataset, so the logit weights of the pre-trained model are excluded. S43: the preprocessing and data augmentation of the Sentinel-1 SAR images are added to the data loader module of the network, and the network structure is modified during training to update the labels. The model is trained for 50 epochs.
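The initialization can be sketched as follows; torchvision's DeepLabV3 (not V3+) is used here as a stand-in, since DeepLabV3+ itself is not bundled with torchvision, torchvision >= 0.13 is assumed, and the class count is a placeholder.

```python
from torchvision.models import ResNet50_Weights
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 10  # number of land cover classes (placeholder)

model = deeplabv3_resnet50(
    weights=None,                                     # no segmentation-head weights
    weights_backbone=ResNet50_Weights.IMAGENET1K_V1,  # ImageNet-pretrained backbone
    num_classes=NUM_CLASSES,                          # fresh logits, since the class count
)                                                     # differs from ImageNet's categories
```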
5) Input the first fine image, the second fine image, the first data enhancement sample, the second data enhancement sample and the label enhancement data into a deep convolutional neural network for training to obtain a weakly supervised deep convolutional neural network; that is, the preprocessed Sentinel-1 images and their data enhancement samples, the Sentinel-2 data enhancement samples obtained in the above steps, and the label enhancement data derived from the MODIS labels are used as input, and the deep convolutional neural network is trained in a semi-supervised manner.
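One such training step can be sketched as follows; the channel-wise stacking of the two Sentinel sources, the dict-style model output and the hyperparameters are assumptions (a real model would also need its first convolution adapted to the fused channel count).

```python
import torch
import torch.nn as nn

def train_step(model: nn.Module,
               optimizer: torch.optim.Optimizer,
               s1: torch.Tensor,      # (N, C1, H, W) preprocessed Sentinel-1 batch
               s2: torch.Tensor,      # (N, C2, H, W) augmented Sentinel-2 batch
               labels: torch.Tensor   # (N, H, W) updated MODIS-derived labels
               ) -> float:
    """One optimization step on a fused Sentinel-1/Sentinel-2 batch."""
    criterion = nn.CrossEntropyLoss(ignore_index=255)  # skip pixels masked by the label update
    model.train()
    x = torch.cat([s2, s1], dim=1)        # fuse the two sources along the channel axis
    logits = model(x)["out"]              # torchvision segmentation models return a dict
    loss = criterion(logits, labels.long())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```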
6) Input the land cover map into the weakly supervised deep convolutional neural network to obtain a land cover map with enhanced spatial resolution.
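An illustrative inference sketch for this step, under the same assumptions as the training sketch above:

```python
import torch

@torch.no_grad()
def enhance_land_cover(model, s1: torch.Tensor, s2: torch.Tensor) -> torch.Tensor:
    """Map fused Sentinel inputs to a per-pixel land cover map at the finer Sentinel resolution."""
    model.eval()
    x = torch.cat([s2, s1], dim=1)        # same channel fusion as in training
    logits = model(x)["out"]
    return logits.argmax(dim=1)           # (N, H, W) enhanced land cover map
```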
Fusing multi-source satellite data provides enhanced land cover mapping. The current state-of-the-art semantic segmentation network DeepLabV3+ is extended so that the spatial resolution of the MODIS-derived land cover map (original spatial resolution 500 m) can be enhanced using synthetic aperture radar (SAR) images derived from Sentinel-1 and multispectral images derived from Sentinel-2.
Against the application background of massive remote sensing data and the large spatial resolution gap between MODIS data and Sentinel images, the Sentinel-1&2 images are combined with the MODIS land cover map. The neural network is trained on the public SEN12MS dataset, while validation and testing are performed using the ground truth data of the 2020 IEEE GRSS Data Fusion Contest. The results show that the synthesized land cover maps have higher spatial resolution than the corresponding MODIS land cover maps. By fusing the fine images from Sentinel-1&2 with the daily low-quality images from MODIS, the overall method can generate high-resolution satellite image time series.
From top to bottom, the rows of FIG. 3 show the S2 input, the S1 input, the MODIS label, the prediction result and the DFC label, respectively. The columns show: (a) detection of coastlines and beaches; (b) weakening of the effect of the MODIS label; (c) classification errors for rivers and wetlands; (d) misclassification of farmland and shrubs; (e) misclassification of barren land.
The application also provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of any of the various method embodiments described above when executing the computer program.
The terminal device of this embodiment includes: at least one processor (only one is shown in FIG. 4), a memory, and a computer program stored in the memory and executable on the at least one processor; when the processor executes the computer program, the steps of any of the various method embodiments described above are implemented.
The spatial resolution enhancement method provided by the embodiments of the application can be applied to terminal devices such as tablet computers, notebook computers, ultra-mobile personal computers (UMPC), netbooks and personal digital assistants (PDA); the embodiments of the application do not limit the specific type of the terminal device.
For example, the terminal device may be a station (ST) in a WLAN, a personal digital assistant (PDA) device, a handheld device with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite radio, or a wireless modem card.
The terminal equipment can be computing equipment such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The terminal device may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that the terminal device is merely an example and is not limiting of the terminal device, and may include more or fewer components than shown, or may combine certain components, or different components, for example, may also include input and output devices, network access devices, etc.
The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may, in some embodiments, be an internal storage unit of the terminal device, such as a hard disk or memory of the terminal device. In other embodiments, the memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the terminal device. Further, the memory may include both an internal storage unit and an external storage device of the terminal device. The memory is used to store the operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
The present embodiments provide a computer program product which, when run on a terminal device, causes the terminal device to perform steps implementing the respective method embodiments described above. The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present application implements all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, or a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunications signals.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical schemes described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be included in the scope of the present application. Each of the foregoing embodiments is described with its own emphasis; for parts not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

Claims (8)

1. A method for enhancing spatial resolution based on weakly supervised deep learning, the method comprising the steps of:
1) Preprocessing the first image to obtain a first fine image;
2) Preprocessing the second image to obtain a second fine image, and carrying out data enhancement on the second fine image to obtain a second data enhancement sample;
3) Performing weakly supervised label updating on the land cover map to obtain label enhancement data;
4) Enhancing the spatial resolution of the first fine image to obtain a first data enhancement sample;
5) Inputting the first fine image, the second fine image, the first data enhancement sample, the second data enhancement sample and the label enhancement data into a deep convolutional neural network for training to obtain a weakly supervised deep convolutional neural network;
6) Inputting the land cover map into the weakly supervised deep convolutional neural network to obtain a land cover map with enhanced spatial resolution; wherein the step 3) of performing weakly supervised label updating on the land cover map comprises:
refining the noisy labels, with the labels updated every 5 epochs over the training dataset; the original labels are used for training during the first 5 epochs, and after the 5th epoch the model outputs intermediate predictions on all training samples; updated labels are acquired by comparing the intermediate predictions with the original labels, and at the same time the intersection of the original labels and the intermediate predictions is acquired, the intersection being used for the next 5 epochs; the first image is a Sentinel-1 synthetic aperture radar image, the second image is a Sentinel-2 multispectral image, and the land cover map is a MODIS (Moderate Resolution Imaging Spectroradiometer) land cover map.
2. The method for enhancing spatial resolution based on weakly supervised deep learning as set forth in claim 1, wherein the step 1) of preprocessing the first image to obtain the first fine image comprises:
performing denoising and linear stretching on the first image, and performing edge-directed window filtering on the denoised first image.
3. The method for enhancing spatial resolution based on weakly supervised deep learning as set forth in claim 1, wherein the step 2) data enhancing the second fine image comprises:
and performing geometric transformation and linear transformation on the second fine image.
4. The weakly supervised deep learning based spatial resolution enhancement method according to claim 3, wherein the geometric transformation comprises flipping, rotation and warping, and the linear transformation comprises a 2%-98% contrast stretch.
5. The method for enhancing spatial resolution based on weakly supervised deep learning as set forth in claim 1, wherein the step 4) trains a model using an improved, enhanced version of a deep-learning-based image semantic segmentation network, and enhances the spatial resolution of the first fine image by the trained model.
6. The method for enhancing spatial resolution based on weakly supervised deep learning as set forth in claim 1, wherein the step 5) trains the deep convolutional neural network in a semi-supervised manner.
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 6 when executing the computer program.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 6.
CN202110254785.5A 2021-03-09 2021-03-09 Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium Active CN112862809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110254785.5A CN112862809B (en) 2021-03-09 2021-03-09 Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110254785.5A CN112862809B (en) 2021-03-09 2021-03-09 Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112862809A CN112862809A (en) 2021-05-28
CN112862809B (en) 2023-07-18

Family

ID=75993501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110254785.5A Active CN112862809B (en) 2021-03-09 2021-03-09 Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112862809B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017129940A1 (en) * 2016-01-29 2017-08-03 Global Surface Intelligence Limited System and method for earth observation and analysis
CN108416370A (en) * 2018-02-07 2018-08-17 深圳大学 Image classification method, device based on semi-supervised deep learning and storage medium
CN110046415A (en) * 2019-04-08 2019-07-23 中国科学院南京地理与湖泊研究所 A kind of soil organic matter content remote sensing dynamic playback method of space-time fining
WO2019157348A1 (en) * 2018-02-09 2019-08-15 The Board Of Trustees Of The University Of Illinois A system and method to fuse multiple sources of optical data to generate a high-resolution, frequent and cloud-/gap-free surface reflectance product
CN111652193A (en) * 2020-07-08 2020-09-11 中南林业科技大学 Wetland classification method based on multi-source images
CN111932457A (en) * 2020-08-06 2020-11-13 北方工业大学 High-space-time fusion processing algorithm and device for remote sensing image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11068737B2 (en) * 2018-03-30 2021-07-20 Regents Of The University Of Minnesota Predicting land covers from satellite images using temporal and spatial contexts
US11676244B2 (en) * 2018-10-19 2023-06-13 Mineral Earth Sciences Llc Crop yield prediction at field-level and pixel-level

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017129940A1 (en) * 2016-01-29 2017-08-03 Global Surface Intelligence Limited System and method for earth observation and analysis
CN108416370A (en) * 2018-02-07 2018-08-17 深圳大学 Image classification method, device based on semi-supervised deep learning and storage medium
WO2019157348A1 (en) * 2018-02-09 2019-08-15 The Board Of Trustees Of The University Of Illinois A system and method to fuse multiple sources of optical data to generate a high-resolution, frequent and cloud-/gap-free surface reflectance product
CN110046415A (en) * 2019-04-08 2019-07-23 中国科学院南京地理与湖泊研究所 A kind of soil organic matter content remote sensing dynamic playback method of space-time fining
CN111652193A (en) * 2020-07-08 2020-09-11 中南林业科技大学 Wetland classification method based on multi-source images
CN111932457A (en) * 2020-08-06 2020-11-13 北方工业大学 High-space-time fusion processing algorithm and device for remote sensing image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A No-Reference Super Resolution for Satellite Image Quality Enhancement for KOMPSAT-3;Yeonju Choi等;《IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium》;220-223 *
Spatial Resolution Enhancement for Large-Scale Land Cover Mapping via Weakly Supervised Deep Learning;Qiutong Yu等;《Photogrammetric Engineering & Remote Sensing》;第87卷(第6期);405-412 *
Research on the Application of Deep Learning in Remote Sensing Image Object Detection and Land Cover Classification; Tai Jianhao; China Doctoral Dissertations Full-text Database (Basic Sciences) (No. 2); A008-13 *

Also Published As

Publication number Publication date
CN112862809A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
Neupane et al. Deep learning-based semantic segmentation of urban features in satellite images: A review and meta-analysis
Cao et al. Operational flood detection using Sentinel-1 SAR data over large areas
Maggiori et al. Convolutional neural networks for large-scale remote-sensing image classification
Garg et al. Semantic segmentation of PolSAR image data using advanced deep learning model
Sameen et al. Classification of very high resolution aerial photos using spectral-spatial convolutional neural networks
Ajadi et al. Change detection in synthetic aperture radar images using a multiscale-driven approach
Shakya et al. CNN-based fusion and classification of SAR and Optical data
Yan et al. Multimodal image registration using histogram of oriented gradient distance and data-driven grey wolf optimizer
Spinosa et al. Remote sensing-based automatic detection of shoreline position: A case study in apulia region
Xing et al. Integrating change magnitude maps of spectrally enhanced multi-features for land cover change detection
Dibs et al. Automatic feature extraction and matching modelling for highly noise near-equatorial satellite images
Pham et al. Application of Sentinel-1 data in mapping land-use and land cover in a complex seasonal landscape: a case study in coastal area of Vietnamese Mekong Delta
Fuse et al. Development of shoreline extraction method based on spatial pattern analysis of satellite SAR images
Janse van Rensburg et al. The use of C-band and X-band SAR with machine learning for detecting small-scale mining
Oga et al. River state classification combining patch-based processing and CNN
Zhang et al. Learning adjustable reduced downsampling network for small object detection in urban Environments
CN112862809B (en) Spatial resolution enhancement method based on weak supervision deep learning, terminal equipment and computer readable storage medium
Phinzi et al. Understanding the role of training sample size in the uncertainty of high-resolution LULC mapping using random forest
Gokon et al. Detecting Urban Floods with Small and Large Scale Analysis of ALOS-2/PALSAR-2 Data
Yang et al. Improving building rooftop segmentation accuracy through the optimization of UNet basic elements and image foreground-background balance
Siddique et al. An empirical approach to monitor the flood-prone regions of North India using Sentinel-1 images
Jahanifar et al. Mitosis detection, fast and slow: Robust and efficient detection of mitotic figures
Mishra et al. Exploring single-frame super-resolution on real-world Hyperion and PRISMA datasets of an urban area in a developing nation
Sanderson et al. XFIMNet: an Explainable deep learning architecture for versatile flood inundation mapping with synthetic aperture radar and multi-spectral optical images
Valdiviezo-N et al. Morphological reconstruction algorithms for urban monitoring using satellite data: proper selection of the marker and mask images

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant