CN113762170A - Multi-source data fusion vegetation coverage space-time downscaling method - Google Patents

Multi-source data fusion vegetation coverage space-time downscaling method

Info

Publication number
CN113762170A
Authority
CN
China
Prior art keywords
data
ndvi
image
time
grade
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111056815.8A
Other languages
Chinese (zh)
Inventor
雷添杰
张平
徐瑞瑞
张保山
李小涵
鲁源
张亚珍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lei Tianjie
Original Assignee
Gansu Zhongxing Hongtu Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gansu Zhongxing Hongtu Technology Co., Ltd.
Priority to CN202111056815.8A
Publication of CN113762170A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-source data fusion space-time downscaling method for vegetation coverage, which comprises the steps of: S1, obtaining the Gaofen-1 (GF-1) data corresponding to the NDVI product; S2, performing supervised classification on the downsampled GF-1 data, dividing it into vegetation areas and non-vegetation areas; S3, filling NDVI values into vegetation areas and leaving non-vegetation areas without data; S4, training GF-1 image samples with deep learning to obtain a model that accurately identifies NDVI; and S5, based on the NDVI model, extracting an NDVI image from the downsampled GF-1 data and correcting the values in the downscaled NDVI image data. The resulting high spatio-temporal resolution NDVI image data plays an important role in monitoring vegetation coverage change, crop growth, soil moisture, and climate change, identifying land-cover types, and estimating crop yield, biomass, and surface evapotranspiration.

Description

Multi-source data fusion vegetation coverage space-time downscaling method
Technical Field
The invention belongs to the technical field of vegetation coverage downscaling, and particularly relates to a method for constructing vegetation coverage space-time downscaling through multi-source data fusion.
Background
To study vegetation NDVI downsampling, Jia Yan proposed downsampling a low-resolution NDVI vegetation index image and model-fitting the downsampled image against a high-resolution NDVI image, providing an effective way to construct high-spatial-resolution NDVI imagery: from the MODIS pixels of a known low-spatial-resolution MODIS NDVI image and the corresponding high-spatial-resolution TM pixels, high-spatial-resolution NDVI time-series data are predicted and constructed. Zhang Jinshui proposed constructing high-spatial-resolution images by pixel unmixing: a downscaling decomposition is applied to the low-resolution image, and the downscaled image then replaces the resampled low-resolution image in STARFM for data fusion, improving fusion accuracy. Because the STARFM fusion model cannot capture transient or abrupt surface-change information that is not recorded in a base-period Landsat image, Hilker proposed STAARCH, a new fusion algorithm based on STARFM that extracts spatial change from Landsat data and temporal change from MODIS data, and improves accuracy by selecting the optimal base-period Landsat image.
In the prior art, vegetation coverage downscaling data is constructed mainly by mixed-pixel decomposition, which converts large-scale, low-resolution image data into small-scale, high-resolution image data; this approach, however, loses accuracy during the downscaling conversion.
Disclosure of Invention
To address these deficiencies of the prior art, the invention aims to provide a multi-source data fusion space-time downscaling method for vegetation coverage that solves the problem of accuracy loss during image downscaling.
To achieve this purpose, the invention adopts the following technical scheme:
A multi-source data fusion vegetation coverage space-time downscaling method comprises the following steps:
S1, acquiring the Gaofen-1 (GF-1) data corresponding to the NDVI product;
S2, performing supervised classification on the downsampled GF-1 data, dividing it into vegetation areas and non-vegetation areas;
S3, filling NDVI values into vegetation areas and leaving non-vegetation areas without data;
S4, training GF-1 image samples with deep learning to obtain a model that accurately identifies NDVI;
S5, based on the NDVI model, extracting an NDVI image from the downsampled GF-1 data and correcting the values in the downscaled NDVI image data.
Further, in S4, deep learning is used to train the GF-1 image samples and obtain a model that accurately identifies NDVI, comprising the steps of:
S4.1, constructing a training data set of GF-1 image samples and storing it as a 224×224 data set;
S4.2, constructing a deep learning network comprising five convolutional layers and three fully connected layers;
S4.3, training on the 224×224 data set with the deep learning network;
S4.4, saving the trained NDVI recognition model.
Further, in S4.1, constructing the training data set of GF-1 image samples and storing the 224×224 data set comprises:
acquiring high-resolution NDVI image data from the GF-1 data and outlining masks on the high-resolution NDVI image with polygons; labeling the 10 mask classes corresponding to the NDVI intervals [0,0.1), [0.1,0.2), [0.2,0.3), [0.3,0.4), [0.4,0.5), [0.5,0.6), [0.6,0.7), [0.7,0.8), [0.8,0.9), and [0.9,1] as 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1, respectively, to serve as label data; and segmenting the GF-1 sample data together with the corresponding label image data, storing the result as a 224×224 data set.
Further, in S4.2, the initial convolution kernel size is 3×3×3, the stride is 1, and the padding is 1; each pooling layer uses a 2×2 max pooling function. The deep learning network comprises the steps:
A1, one convolution with 64 kernels, followed by one pooling layer;
A2, one convolution with 128 kernels, followed by one pooling layer;
A3, one convolution with 256 kernels, followed by one pooling layer;
A4, one convolution with 512 kernels, followed by one pooling layer;
A5, one convolution with 512 kernels, followed by one pooling layer;
A6, three fully connected layers (Fc_layer), followed by a softmax output layer.
Further, in S5, based on the NDVI recognition model from S4, extracting the NDVI image from the downsampled GF-1 data and correcting the values in the downscaled NDVI image data comprises:
[Correction formula, reproduced only as an image in the original: the corrected value a is computed from m and n.]
where m is the NDVI value extracted by the deep learning model from the downsampled GF-1 data for the same pixel grid; n is the value in the downscaled NDVI image data; and a is the corrected value in the downscaled NDVI image data.
The multi-source data fusion space-time downscaling method for vegetation coverage has the following beneficial effects:
The image is downscaled with a mixed-pixel decomposition method, decomposing a large-scale, low-spatial-resolution NDVI remote sensing image into a small-scale, high-spatial-resolution one, and the trained deep learning model then corrects the downscaled result, completing the downscaling of the NDVI image data. High spatio-temporal resolution NDVI image data plays an important role in monitoring vegetation coverage change, crop growth, soil moisture, and climate change, identifying land-cover types, and estimating crop yield, biomass, and surface evapotranspiration.
Drawings
FIG. 1 is a flow chart of the multi-source data fusion vegetation coverage space-time downscaling method.
FIG. 2 is a schematic diagram of mixed-pixel decomposition.
FIG. 3 is a diagram of the deep learning model.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but the invention is not limited to the scope of these embodiments. Various changes apparent to those of ordinary skill in the art, so long as they remain within the spirit and scope of the invention as defined by the appended claims, and everything produced using the inventive concept, are protected.
According to an embodiment of the application, referring to fig. 1, the multi-source data fusion vegetation coverage space-time downscaling method of this scheme includes the following steps:
s1, finding the NDVI product from the MODIS official website and downloading the NDVI product to obtain the high score number 1 data in the NDVI product.
S2, resampling the high-resolution image and performing supervised classification.
The spatial resolution of the NDVI downscaling decomposition is the target resolution for resampling the GF-1 image data. Supervised classification is applied to the downsampled GF-1 data, dividing it into vegetation areas and non-vegetation areas.
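A sketch of this step under stated assumptions: the patent names no tools, so rasterio (for resampling) and scikit-learn (a random forest as one possible supervised classifier) are illustrative choices, and the file names, target resolution, and training arrays are hypothetical:

```python
import numpy as np
import rasterio
from rasterio.enums import Resampling
from sklearn.ensemble import RandomForestClassifier

TARGET_RES = 16.0  # hypothetical target resolution (metres) of the downscaling

# Resample the GF-1 image to the target resolution of the NDVI downscaling.
with rasterio.open("gf1_scene.tif") as src:  # hypothetical file name
    scale = src.res[0] / TARGET_RES
    bands = src.read(
        out_shape=(src.count, int(src.height * scale), int(src.width * scale)),
        resampling=Resampling.bilinear,
    )

# Supervised classification into vegetation (1) and non-vegetation (0),
# trained on labelled pixels (features are the per-pixel band values).
X_train = np.load("train_pixels.npy")  # hypothetical training features
y_train = np.load("train_labels.npy")  # hypothetical 0/1 labels
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
pixels = bands.reshape(bands.shape[0], -1).T
veg_mask = clf.predict(pixels).reshape(bands.shape[1], bands.shape[2])
```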
S3, mixed-pixel decomposition.
Referring to fig. 2, for the remote sensing NDVI data, the image produced by mixed-pixel decomposition has its non-vegetation areas left unfilled (no data) and its vegetation areas filled with NDVI values; this constructs the NDVI image data after mixed-pixel decomposition.
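A minimal sketch of this filling rule, assuming a vegetation mask from S2 and a mixed-pixel-decomposed NDVI array on the same grid (both argument names are hypothetical):

```python
import numpy as np

def fill_vegetation_ndvi(veg_mask: np.ndarray, ndvi: np.ndarray) -> np.ndarray:
    """Keep NDVI only where classified as vegetation; NaN (no data) elsewhere."""
    return np.where(veg_mask == 1, ndvi, np.nan)
```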
S4, deep learning model training.
Deep learning is used to train the GF-1 image samples and obtain a model capable of accurately identifying NDVI; the specific steps are as follows:
s4.1, constructing a training data set:
for all high-resolution No. 1 data, high-resolution NDVI image data is obtained, a polygon is used for outlining a mask on the high-resolution NDVI image part, and the types of the sections of [0.1,0.2, [0.2,0.3), [0.3,0.4), [0.4,0.5), [0.5,0.6), [0.6,0.7), [0.7,0.8, [0.8,0.9 ], and [0.9,1] are respectively marked as 0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1, and 10 types of masks are outlined as tag data. And performing image segmentation on the sample data of the high score No. 1 and the corresponding label image data at the same time, and storing a data set with the size of 224x 224.
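A sketch of the labelling and tiling just described, assuming NDVI values in [0, 1]; mapping each 0.1-wide interval to its upper bound reproduces the 10 labels:

```python
import numpy as np

def ndvi_class_label(ndvi: np.ndarray) -> np.ndarray:
    """Map NDVI in [0,1] to its interval label: [0,0.1)->0.1, ..., [0.9,1]->1."""
    return np.minimum(np.floor(ndvi * 10) * 0.1 + 0.1, 1.0)

def tiles_224(image: np.ndarray):
    """Yield non-overlapping 224x224 tiles of a 2-D array (edges discarded)."""
    h, w = image.shape[:2]
    for i in range(0, h - 223, 224):
        for j in range(0, w - 223, 224):
            yield image[i:i + 224, j:j + 224]
```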
S4.2, referring to fig. 3, building a deep learning network comprising five convolutional layers and three fully connected layers.
Convolutional layer settings: the initial convolution kernel size is 3×3×3, the stride is 1, and the padding is 1; each pooling layer uses a 2×2 max pooling function. The network comprises the following steps (see the sketch after this list):
A1, one convolution with 64 kernels, followed by one pooling layer;
A2, one convolution with 128 kernels, followed by one pooling layer;
A3, one convolution with 256 kernels, followed by one pooling layer;
A4, one convolution with 512 kernels, followed by one pooling layer;
A5, one convolution with 512 kernels, followed by one pooling layer;
A6, three fully connected layers (Fc_layer), followed by a softmax output layer.
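A minimal PyTorch sketch of a network matching this description: five 3×3 convolutional layers with 64/128/256/512/512 kernels, each followed by 2×2 max pooling, then three fully connected layers and a softmax output. The ReLU activations and the 4096-unit widths of the first two fully connected layers are assumptions, since the patent does not state them:

```python
import torch
import torch.nn as nn

class NdviNet(nn.Module):
    def __init__(self, num_classes: int = 10):  # 10 NDVI interval classes
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in (64, 128, 256, 512, 512):          # A1..A5
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
                nn.ReLU(inplace=True),                   # assumed activation
                nn.MaxPool2d(2),                         # 2x2 max pooling
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        # Five 2x2 poolings reduce a 224x224 input to 7x7.
        self.classifier = nn.Sequential(                 # A6: three Fc_layers
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax output layer; for training, feed the pre-softmax logits
        # to nn.CrossEntropyLoss instead.
        return torch.softmax(self.classifier(self.features(x)), dim=1)
```

For example, NdviNet()(torch.randn(1, 3, 224, 224)) yields a (1, 10) vector of class probabilities over the 10 NDVI intervals.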
S4.3, training the built deep learning network on the constructed 224×224 data set.
S4.4, saving the trained NDVI recognition model.
S5, based on the NDVI model, extracting the NDVI image from the downsampled GF-1 data and correcting the values in the downscaled NDVI image data, with the correction formula:
[Correction formula, reproduced only as an image in the original: the corrected value a is computed from m and n.]
where m is the NDVI value extracted by the deep learning model from the downsampled GF-1 data for the same pixel grid; n is the value in the downscaled NDVI image data; and a is the corrected value in the downscaled NDVI image data.
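The correction formula itself appears only as an image in the source text and is not reproduced here; the sketch below therefore shows only the surrounding per-pixel pipeline, with the formula left as an explicit stub (a = f(m, n), following the variable definitions above):

```python
import numpy as np

def correct_downscaled_ndvi(m: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Correct the downscaled NDVI per pixel.

    m: NDVI extracted by the deep learning model from the downsampled GF-1
       data, on the same pixel grid.
    n: value in the downscaled NDVI image data.
    Returns a, the corrected value in the downscaled NDVI image data.
    """
    def f(m, n):
        # Stub: the patent's correction formula is given only as an image
        # and is not reproduced here; substitute it when implementing.
        raise NotImplementedError("insert the patent's correction formula")
    return f(m, n)
```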
The image is downscaled with a mixed-pixel decomposition method, decomposing a large-scale, low-spatial-resolution NDVI remote sensing image into a small-scale, high-spatial-resolution one, and the trained deep learning model then corrects the downscaled result, completing the downscaling of the NDVI image data. High spatio-temporal resolution NDVI image data plays an important role in monitoring vegetation coverage change, crop growth, soil moisture, and climate change, identifying land-cover types, and estimating crop yield, biomass, and surface evapotranspiration.
While the embodiments of the invention have been described in detail in connection with the accompanying drawings, it is not intended to limit the scope of the invention. Various modifications and changes may be made by those skilled in the art without inventive step within the scope of the appended claims.

Claims (5)

1. A multi-source data fusion vegetation coverage space-time downscaling method, characterized by comprising the following steps:
S1, acquiring the Gaofen-1 (GF-1) data corresponding to the NDVI product;
S2, performing supervised classification on the downsampled GF-1 data, dividing it into vegetation areas and non-vegetation areas;
S3, filling NDVI values into vegetation areas and leaving non-vegetation areas without data;
S4, training GF-1 image samples with deep learning to obtain a model that accurately identifies NDVI; and
S5, based on the NDVI model, extracting an NDVI image from the downsampled GF-1 data and correcting the values in the downscaled NDVI image data.
2. The multi-source data fusion vegetation coverage space-time downscaling method according to claim 1, wherein in S4 deep learning is used to train the GF-1 image samples and obtain a model that accurately identifies NDVI, comprising the steps of:
S4.1, constructing a training data set of GF-1 image samples and storing it as a 224×224 data set;
S4.2, constructing a deep learning network comprising five convolutional layers and three fully connected layers;
S4.3, training on the 224×224 data set with the deep learning network;
S4.4, saving the trained NDVI recognition model.
3. The multi-source data fusion vegetation coverage space-time downscaling method according to claim 2, wherein in S4.1, constructing the training data set of GF-1 image samples and storing the 224×224 data set comprises:
acquiring high-resolution NDVI image data from the GF-1 data and outlining masks on the high-resolution NDVI image with polygons; labeling the 10 mask classes corresponding to the NDVI intervals [0,0.1), [0.1,0.2), [0.2,0.3), [0.3,0.4), [0.4,0.5), [0.5,0.6), [0.6,0.7), [0.7,0.8), [0.8,0.9), and [0.9,1] as 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1, respectively, to serve as label data; and segmenting the GF-1 sample data together with the corresponding label image data, storing the result as a 224×224 data set.
4. The multi-source data fusion vegetation coverage space-time downscaling method according to claim 3, wherein in S4.2 the initial convolution kernel size is 3×3×3, the stride is 1, and the padding is 1, each pooling layer uses a 2×2 max pooling function, and the deep learning network comprises the steps:
A1, one convolution with 64 kernels, followed by one pooling layer;
A2, one convolution with 128 kernels, followed by one pooling layer;
A3, one convolution with 256 kernels, followed by one pooling layer;
A4, one convolution with 512 kernels, followed by one pooling layer;
A5, one convolution with 512 kernels, followed by one pooling layer;
A6, three fully connected layers (Fc_layer), followed by a softmax output layer.
5. The multi-source data fusion vegetation coverage space-time downscaling method according to claim 4, wherein in S5, based on the NDVI recognition model of S4, extracting the NDVI image from the downsampled GF-1 data and correcting the values in the downscaled NDVI image data comprises:
[Correction formula, reproduced only as an image in the original: the corrected value a is computed from m and n.]
where m is the NDVI value extracted by the deep learning model from the downsampled GF-1 data for the same pixel grid; n is the value in the downscaled NDVI image data; and a is the corrected value in the downscaled NDVI image data.
CN202111056815.8A (filed 2021-09-09, priority 2021-09-09): Multi-source data fusion vegetation coverage space-time downscaling method. Published as CN113762170A; status: Pending.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111056815.8A | 2021-09-09 | 2021-09-09 | Multi-source data fusion vegetation coverage space-time downscaling method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111056815.8A | 2021-09-09 | 2021-09-09 | Multi-source data fusion vegetation coverage space-time downscaling method

Publications (1)

Publication Number | Publication Date
CN113762170A | 2021-12-07

Family

ID=78794364

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111056815.8A (CN113762170A, pending) | Multi-source data fusion vegetation coverage space-time downscaling method | 2021-09-09 | 2021-09-09

Country Status (1)

Country Link
CN (1) CN113762170A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115830446A * | 2022-11-25 | 2023-03-21 | China Institute of Water Resources and Hydropower Research | Dynamic water product fusion method, device, equipment and readable storage medium
CN115830446B * | 2022-11-25 | 2023-06-13 | China Institute of Water Resources and Hydropower Research | Dynamic water product fusion method, device, equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN112308860B (en) Earth observation image semantic segmentation method based on self-supervision learning
CN109344883A (en) Fruit tree diseases and pests recognition methods under a kind of complex background based on empty convolution
CN112016400B (en) Single-class target detection method and device based on deep learning and storage medium
CN114841961B (en) Wheat scab detection method based on image enhancement and improved YOLOv5
CN111259900A (en) Semantic segmentation method for satellite remote sensing image
CN112766155A (en) Deep learning-based mariculture area extraction method
CN113657326A (en) Weed detection method based on multi-scale fusion module and feature enhancement
CN115272887A (en) Coastal zone garbage identification method, device and equipment based on unmanned aerial vehicle detection
CN112966548A (en) Soybean plot identification method and system
CN114677325A (en) Construction method of rice stem section segmentation model and detection method based on model
CN113537085A (en) Ship target detection method based on two-time transfer learning and data augmentation
CN113762170A (en) Multi-source data fusion vegetation coverage space-time downscaling method
CN117392627A (en) Corn row line extraction and plant missing position detection method
CN104021395B (en) Target tracing algorithm based on high-order partial least square method
CN116205893A (en) Rice leaf disease image detection method, device, equipment and storage medium
CN117558036B (en) Multi-variety cattle face recognition method based on image enhancement and residual error network
Hu et al. Semantic segmentation of tea geometrid in natural scene images using discriminative pyramid network
Shiu et al. Pineapples’ detection and segmentation based on faster and mask R-CNN in UAV imagery
Zhang et al. Pixel–scene–pixel–object sample transferring: a labor-free approach for high-resolution plastic greenhouse mapping
CN114022777A (en) Sample manufacturing method and device for ground feature elements of remote sensing images
CN116543165B (en) Remote sensing image fruit tree segmentation method based on dual-channel composite depth network
CN113221997A (en) High-resolution image rape extraction method based on deep learning algorithm
CN115797184B (en) Super-resolution extraction method for surface water body
CN116206210A (en) NAS-Swin-based remote sensing image agricultural greenhouse extraction method
CN116071653A (en) Automatic extraction method for multi-stage branch structure of tree based on natural image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230726

Address after: Water Science Institute, No. 20 Chegongzhuang West Road, Haidian District, Beijing, 100048

Applicant after: Lei Tianjie

Address before: Room 213, No. 281, Gaoxin Yannan South Road, Chengguan District, Lanzhou City, Gansu Province, 730010

Applicant before: Gansu Zhongxing Hongtu Technology Co., Ltd.