CN116091936A - Agricultural condition parameter inversion method for fusing point-land block-area scale data - Google Patents


Info

Publication number
CN116091936A
CN116091936A (application CN202211504536.8A)
Authority
CN
China
Prior art keywords
image
resolution
model
agricultural
texture features
Prior art date
Legal status
Pending
Application number
CN202211504536.8A
Other languages
Chinese (zh)
Inventor
刘哲
姚宇
陈一鸣
王恒斌
邢子瑶
刘筠羿
张晓东
赵圆圆
李绍明
Current Assignee
China Agricultural University
Original Assignee
China Agricultural University
Priority date
Filing date
Publication date
Application filed by China Agricultural University
Priority to CN202211504536.8A
Publication of CN116091936A
Legal status: Pending

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a quantitative analysis method for agricultural condition parameters that fuses point-plot-area scale data, which improves the utilization rate of remote sensing images and enables quantitative analysis of fine agricultural condition parameters with fewer samples. The method collects remote sensing images of different spatio-temporal resolutions together with field-measured point data of agricultural condition parameters, and preprocesses the data; texture features of the higher-resolution images are extracted with the gray-level co-occurrence matrix and superimposed onto the images; the images are segmented into 256×256 tiles according to a set strategy, and the lower-resolution images are upscaled to build the datasets. An SRGAN model is first trained with the texture-superimposed higher-resolution images and the upscaled lower-resolution image dataset, and then, starting from those training weights, trained again with the higher-resolution images without superimposed texture features and the upscaled lower-resolution images. Finally, random forest regression is applied to the fused high-resolution images and the agricultural condition point data to obtain area-scale agricultural information with high spatial resolution, followed by verification.

Description

Agricultural condition parameter inversion method for fusing point-land block-area scale data
Technical Field
The invention relates to the technical field of fine monitoring of agricultural condition parameters of large-scale crops, in particular to an agricultural condition parameter inversion method for fusing point-land block-area scale data.
Background
Long-time-series agricultural condition indices derived from remote sensing data are an important data source for crop growth monitoring and yield estimation. Agricultural condition parameters are key structural parameters reflecting crop growth status. Taking the leaf area index (LAI) as an example, changes in leaf area affect crop light interception and farmland microclimate, and LAI is one of the important indicators of the photosynthetic capacity of a plant community. The normalized difference vegetation index (NDVI) and similar indices are likewise good indicators of crop growth conditions.
There are two main ways to measure agricultural condition parameters: direct in-situ sampling, and indirect measurement with instruments, remote sensing, and other techniques that do not damage the vegetation. In remote sensing, multispectral and hyperspectral data are commonly used to extract agricultural condition parameters and study their spatio-temporal distribution. Remote sensing imagery allows synchronous observation over large areas, saving cost, eliminating human interference, and yielding comprehensive, comparable data; it is, however, easily affected by environmental factors, instrument errors, and changes in vegetation photochemistry. How to improve the accuracy of agricultural condition parameter monitoring has therefore become a research hotspot and challenge.
Remote sensing images at different scales have different resolutions, carry different amounts of information, and yield data of different accuracy. Low-resolution images often suffer from severe mixed pixels, while high-resolution images are hard to obtain and typically have low temporal resolution. To mitigate the effect of lower-resolution images on information extraction, data fusion is commonly used to obtain imagery with higher spatio-temporal resolution and thus more usable information. Traditional data fusion methods generally just enlarge or shrink the whole image, with little practical effect. With the development of machine learning and other artificial intelligence algorithms, image fusion methods have become richer and can recover more detailed information.
Image data reconstruction can effectively improve the accuracy of data derived from remote sensing imagery, but because remote sensing satellites are diverse, the imaging principles of their sensors differ, and technical conditions are limited, no single image source can comprehensively reflect the characteristics of a target. At the smaller field scale, for example, monitoring and estimating crop growth usually requires finer data, which a single remote sensing image can rarely provide. At present, satellites with high spatio-temporal resolution are few and their data are costly; for practical application, spatio-temporal data fusion offers a solution for continuous, time-series, field-scale crop growth monitoring. It uses mathematical models to combine multiple remote sensing images of the same area from different sensors into an image that meets specific application requirements, exploiting the complementary advantages of the sensors to improve the spatio-temporal resolution of the imagery and enable more accurate and reliable estimation of targets.
Patent document CN115170916A, published 2022.10.11, discloses an image reconstruction method and system with multi-scale feature fusion in the field of image reconstruction. An initial reconstructed image is built from the measurement vector generated from the original image, and features are extracted with several residual modules to obtain a residual feature set; the extracted features are fed into dense modules with convolution kernels of multiple scales to extract dense features, iteration is repeated to obtain a global residual feature fusion, and the residual is added to the initial reconstructed image after computing the global fusion features to obtain the final reconstructed image.
Patent document CN1677085A, published 2005.10.05, discloses an agricultural application integration system for earth observation technology and a method thereof. The invention uses advanced earth observation technology for precise, modern management of agriculture. First, hyperspectral data are acquired and preprocessed; data bands are selected from the hyperspectral data; feature extraction is performed on the band-selected hyperspectral data to obtain agricultural condition parameters; and the related agricultural information in the hyperspectral data is output using the acquired agricultural condition parameters.
However, current research has several problems:
High spatio-temporal resolution remote sensing images are difficult and costly to acquire, while easily acquired images have low spatio-temporal resolution, blurred ground-object contours, and poor information-recognition performance, so their data accuracy falls short of actual production needs; a method for inverting finer imagery from easily acquired remote sensing images remains to be explored;
Data fusion algorithms suffer from unstable model training, large parameter counts, and slow model convergence; their reconstruction quality struggles to meet accuracy requirements, and the intermediate data analysis is computationally heavy;
Data at different scales differ in resolution and information content, and in smaller study areas multi-scale data have not been effectively combined to support field-scale application of remote sensing data and to improve the utilization rate of remote sensing imagery.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an agricultural condition parameter inversion method fusing point-land parcel-area scale data to solve the above technical problems, with the following technical scheme:
an agricultural condition parameter inversion method for fusing point-land parcel-area scale data comprises the following steps:
acquiring two types of remote sensing images with different spatio-temporal resolutions and point data reflecting crop agricultural conditions, and preprocessing the data;
extracting texture features of the remote sensing image with higher spatial resolution, and superimposing them on the image;
carrying out strategic image segmentation on the images of each image set to manufacture a training set, a verification set and a test set for image fusion;
constructing a fusion model of remote sensing images with different spatial resolutions, training and fine-tuning the model step by step with a transfer learning strategy, obtaining the final model weights, and building the trained image fusion model;
constructing a point-surface fusion model: training the model with the generated high-resolution images and the agricultural condition point data to obtain a trained point-surface fusion model;
and carrying out multi-scale effect evaluation on the inverted agricultural condition information.
Optionally, acquiring the two remote sensing images with different spatio-temporal resolutions and the point data reflecting crop agricultural conditions, and preprocessing the data, includes:
collecting remote sensing images with different resolutions corresponding to the spatial position information;
preprocessing a multi-scale remote sensing dataset: geometric correction, atmospheric correction, radiation calibration, orthographic correction, spatial registration, and the like;
resampling the lower spatial resolution image to be consistent with the higher spatial resolution image.
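As an illustration only (not part of the claimed method), the resampling step can be sketched as nearest-neighbour upsampling in NumPy, assuming an integer resolution ratio between the two images:

```python
import numpy as np

def resample_nearest(img, factor):
    """Nearest-neighbour resampling: replicate each pixel of the
    lower-resolution image factor x factor times so that its grid
    matches the higher-resolution image."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

low = np.array([[10, 20],
                [30, 40]])
high_grid = resample_nearest(low, 2)  # 4x4 array on the finer grid
```

In practice a GIS or remote-sensing tool would perform this during spatial registration, often with bilinear or cubic interpolation; nearest-neighbour is shown only for brevity.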
Optionally, extracting texture features of the remote sensing image with higher spatial resolution and superimposing them on the image includes:
extracting texture features of the higher-spatial-resolution image with the gray-level co-occurrence matrix (GLCM) method and superimposing them onto the image to sharpen its contour information;
the gray-level image is processed to compute a co-occurrence matrix, and then selected eigen-statistics of the matrix are calculated, each characterizing a particular texture feature of the image. Common texture statistics include: Mean, Standard deviation (Std), Homogeneity/Inverse difference, Contrast, Dissimilarity, Entropy, Angular Second Moment/Energy, and Maximum probability. The extraction results of these statistics are compared, and the best ones are selected for extracting texture features from the higher-spatial-resolution image.
Optionally, the performing strategic image segmentation on the images of each image set to make a training set, a verification set and a test set for image fusion includes:
and respectively cutting the resampled lower resolution image set, the higher resolution image set with superimposed texture features and the original higher resolution image set to 256 x 256, setting a certain repetition rate at four edges of the image during cutting, wherein the upper and left repetition rates are 10%, the lower and right repetition rates are 5%, carrying out upscaling on the lower spatial resolution image after cutting by 4*4, namely taking an average value of every 16 pixels, converting the line number of 256 x 256 into the line number of 64 x 64, and finally dividing the training set, the verification set and the test set according to a ratio of 7:3:1.
Optionally, constructing a fusion model of remote sensing images with different spatial resolutions, training and fine-tuning the model step by step with a transfer learning strategy, obtaining the final model weights, and building the trained image fusion model includes:
the image fusion model is based on SRGAN, an image super-resolution reconstruction model built on a generative adversarial network. The model consists of a generator and a discriminator; the generator is composed of residual blocks, skip connections, and convolutional layers. Its goal is to optimize the loss function and enhance certain details in the image, so that the reconstructed image carries more high-frequency detail;
loss function of the generator:

$$\min_{\theta_G} \frac{1}{N}\sum_{n=1}^{N} l^{SR}\left(G_{\theta_G}\left(I_n^{LR}\right), I_n^{HR}\right)$$

loss function of the discriminator:

$$\max_{\theta_D}\; \mathbb{E}_{I^{HR}\sim p_{\mathrm{train}}(I^{HR})}\left[\log D_{\theta_D}\left(I^{HR}\right)\right] + \mathbb{E}_{I^{LR}\sim p_{G}(I^{LR})}\left[\log\left(1 - D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)\right)\right]$$

the loss function of the model is a weighted sum of the content loss and the adversarial loss. The formula is defined as:

$$l^{SR} = l_{X}^{SR} + 10^{-3}\, l_{Gen}^{SR}$$

where, within the content loss, the pixel-based MSE loss is defined as:

$$l_{MSE}^{SR} = \frac{1}{r^{2}WH}\sum_{x=1}^{rW}\sum_{y=1}^{rH}\left(I_{x,y}^{HR} - G_{\theta_G}\left(I^{LR}\right)_{x,y}\right)^{2}$$

the adversarial loss drives the generator to produce images that the discriminator cannot recognize as generated, and is defined as:

$$l_{Gen}^{SR} = \sum_{n=1}^{N} -\log D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)$$
Based on this, the SRGAN model is first trained with the texture-superimposed higher-resolution images and the upscaled lower-resolution image dataset, giving the model a stronger ability to generate texture features. Transfer training then continues from the pre-trained weights using the high-resolution images without superimposed texture features and the upscaled low-resolution images, so that the generated spectral reflectance more closely approaches the true values. Finally, the model accuracy is verified with the original higher-resolution images and the upscaled lower-resolution verification set, using metrics including but not limited to PSNR and SSIM. The upscaled lower-resolution test set is fed into the SRGAN model with the optimal training weights to generate high-resolution images whose spatial resolution matches that of the higher-resolution images.
Optionally, the building a point-surface fusion model, using the generated high-resolution image and the agricultural condition information point data training model, building a trained point-surface fusion model includes:
and carrying out point-surface data fusion on the fused high-resolution remote sensing image combined with the field agricultural condition information point data by adopting a random forest regression model to obtain the plot scale agricultural condition information with high spatial resolution, and realizing agricultural condition information inversion of multi-scale data fusion.
Optionally, the performing multi-scale effect evaluation on the inverted agricultural condition information includes:
using R² and RMSE to verify the point-scale inversion of the agricultural condition parameters, and using a linear trend method and a percentage-change method to analyze the trend and stability of the area-scale inversion results obtained from the higher-resolution image and from the generated high-resolution image respectively;
the formula of the percentage change method is as follows:
$$\mathrm{Change\%}_{Index} = \frac{Generating\_value_{Index} - High\_value_{Index}}{High\_value_{Index}} \times 100\%$$

where $\mathrm{Change\%}_{Index}$ is the percentage change of the monitored agricultural parameter at a given spatial position, $Index$ is the name of the agricultural parameter, $High\_value_{Index}$ is the value of the parameter at that position inverted from the original high-resolution image, and $Generating\_value_{Index}$ is the value of the parameter at the corresponding position inverted from the high-resolution image generated from the lower-resolution image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an agricultural condition parameter inversion method for fusing point-plot-area scale data;
FIG. 2 is a technical frame diagram of an agricultural condition parameter inversion method for fusing point-plot-area scale data;
FIG. 3 is a graph showing image texture features extracted using gray level co-occurrence matrix method;
FIG. 4 is a schematic diagram of strategic cropping and stitching of remote sensing images;
FIG. 5 is a high resolution image after reconstruction of a lower resolution image;
FIG. 6 is a view of the leaf area index of the fine inversion of the reconstructed high resolution image;
FIG. 7 is a spatial distribution diagram of the percent change in the inversion result of the reconstructed image and the original high resolution image.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely illustrative of the manner in which embodiments of the invention have been described in connection with the description of the objects having the same attributes. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The agricultural condition parameter inversion method for fusing point-land parcel-area scale data provided by the embodiment of the invention is described below with reference to fig. 1 to 7.
Fig. 1 is a schematic flow chart of an agricultural condition parameter inversion method for fusing point-plot-area scale data provided by the invention, and fig. 2 is a technical frame chart of an agricultural condition parameter inversion method for fusing point-plot-area scale data provided by the invention, according to the flow and specific steps of fig. 1 and 2, the invention comprises the following steps:
step 1: two remote sensing images with different time-space resolutions and point data acquisition and data preprocessing reflecting crop agriculture conditions.
To acquire satellite images of different spatio-temporal resolutions over the target area, Worldview-2 sub-meter satellite images (or other high-resolution images) and Sentinel-2 satellite images provided by the European Space Agency (ESA) can be used. To keep the time difference between remote sensing images from different sensors small, images of different spatio-temporal resolutions are selected for download whose revisit time differs by no more than 5 days and whose spatial positions correspond within the target study area.
The agricultural condition information point data can include, but is not limited to, leaf area index, leaf chlorophyll content, leaf nitrogen and the like, and the acquisition mode can be used for field acquisition or field experiments, and can also be acquired by a field agricultural condition sensor, but the data acquisition time is required to be consistent with a higher-resolution remote sensing image.
After the remote sensing images with different time-space resolutions and the agricultural point data are acquired, preprocessing work can be performed on the point data and the surface data respectively, and geometric correction, atmospheric correction, radiation calibration, orthographic correction and spatial registration can be performed on the remote sensing data. The point data is required to be subjected to missing value processing, and the sensor data is required to be subjected to thinning processing.
Step 2: and extracting texture features of the remote sensing image with higher spatial resolution and superposing the texture features.
Texture features of the Worldview-2 satellite image are extracted with the gray-level co-occurrence matrix (GLCM) method and superimposed onto the image to sharpen its contour information. The gray-level image is processed to compute a co-occurrence matrix, and then selected eigen-statistics of the matrix are calculated, each characterizing a particular texture feature of the image. Common texture statistics include: Mean, Standard deviation (Std), Homogeneity/Inverse difference, Contrast, Dissimilarity, Entropy, Angular Second Moment/Energy, and Maximum probability. As shown in FIG. 3, the extraction results of these statistics are compared, and the best ones are selected for extracting texture features from the higher-spatial-resolution image.
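To make the GLCM computation concrete, here is a minimal NumPy sketch. It is illustrative only: the toy 4-level image, the single horizontal pixel offset, and the choice of just three statistics are assumptions, and a real pipeline would use a remote-sensing or image-processing toolchain over quantized satellite bands.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Build a normalized gray-level co-occurrence matrix for one pixel offset."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_stats(p):
    """Contrast, energy (angular second moment) and homogeneity from a GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float((p * (i - j) ** 2).sum()),
        "energy": float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
    }

# Toy 4-level image; a real band would first be quantized to `levels` gray levels.
band = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [2, 2, 3, 3],
                 [2, 2, 3, 3]])
stats = texture_stats(glcm(band, levels=4))
```

In practice the chosen statistic is computed per pixel over a sliding window and written out as an extra band, which is what "superimposing the texture features onto the image" amounts to.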
The Worldview-2 texture features are superimposed onto the image to obtain a higher-resolution image with superimposed texture features.
Step 3: and (3) carrying out strategic image segmentation on the images of each image set to manufacture a training set, a verification set and a test set for image fusion.
The resampled Sentinel-2 image set, the texture-superimposed Worldview-2 image set, and the original Worldview-2 image set are each cut into 256×256 tiles; a certain repetition rate must be set at the four edges during cutting — as shown in FIG. 4, 10% on the top and left, 5% on the bottom and right;
After cutting, the lower-spatial-resolution tiles are upscaled by 4×4, i.e. every 16 pixels are averaged, converting each 256×256 tile into a 64×64 tile;
The corresponding Sentinel-2 image set, texture-superimposed Worldview-2 image set, and original Worldview-2 image set are divided into training, verification, and test sets in a 7:3:1 ratio.
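The tiling, block-mean upscaling, and dataset split described in this step can be sketched in NumPy. This is a simplified illustration: it approximates the 10%/5% edge-repetition scheme with a single fixed stride (an assumption), and the in-order 7:3:1 split stands in for whatever shuffling the authors actually use.

```python
import numpy as np

def cut_tiles(img, size=256, stride=217):
    """Cut overlapping size x size tiles; a stride below `size` reproduces the
    edge repetition (stride 217 ~ 15% overlap, approximating the 10%/5% scheme)."""
    h, w = img.shape
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]

def block_mean(tile, factor=4):
    """Upscale the pixel size by `factor`: average every factor x factor block,
    turning a 256x256 tile into 64x64."""
    h, w = tile.shape
    return tile.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def split(items, ratios=(7, 3, 1)):
    """Split a tile list into training/verification/test sets by the 7:3:1 ratio."""
    total = sum(ratios)
    n = len(items)
    a = n * ratios[0] // total
    b = a + n * ratios[1] // total
    return items[:a], items[a:b], items[b:]

img = np.arange(512 * 512, dtype=float).reshape(512, 512)
tiles = cut_tiles(img)          # 4 overlapping 256x256 tiles from a 512x512 image
small = block_mean(tiles[0])    # 64x64 upscaled (coarsened) tile
train, val, test = split(tiles)
```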
Step 4: constructing a fusion model of remote sensing images with different spatial resolutions, training and fine-tuning the model step by step with a transfer learning strategy, obtaining the final model weights, and building the trained image fusion model.
An SRGAN model comprising a generator and a discriminator is constructed, where the generator is composed of residual blocks, skip connections, and convolutional layers; its goal is to optimize the loss function and enhance certain details in the image, so that the reconstructed image carries more high-frequency detail.
Loss function of the generator:

$$\min_{\theta_G} \frac{1}{N}\sum_{n=1}^{N} l^{SR}\left(G_{\theta_G}\left(I_n^{LR}\right), I_n^{HR}\right)$$

Loss function of the discriminator:

$$\max_{\theta_D}\; \mathbb{E}_{I^{HR}\sim p_{\mathrm{train}}(I^{HR})}\left[\log D_{\theta_D}\left(I^{HR}\right)\right] + \mathbb{E}_{I^{LR}\sim p_{G}(I^{LR})}\left[\log\left(1 - D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)\right)\right]$$

The loss function of the model is a weighted sum of the content loss and the adversarial loss. The formula is defined as:

$$l^{SR} = l_{X}^{SR} + 10^{-3}\, l_{Gen}^{SR}$$

where, within the content loss, the pixel-based MSE loss is defined as:

$$l_{MSE}^{SR} = \frac{1}{r^{2}WH}\sum_{x=1}^{rW}\sum_{y=1}^{rH}\left(I_{x,y}^{HR} - G_{\theta_G}\left(I^{LR}\right)_{x,y}\right)^{2}$$

The adversarial loss drives the generator to produce images that the discriminator cannot recognize as generated, and is defined as:

$$l_{Gen}^{SR} = \sum_{n=1}^{N} -\log D_{\theta_D}\left(G_{\theta_G}\left(I^{LR}\right)\right)$$
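The weighted loss above can be illustrated with a minimal NumPy sketch. This only shows the arithmetic of the combination: it assumes the discriminator outputs probabilities, and it substitutes plain pixel MSE for the content term; in the actual model these quantities are computed on tensors inside the GAN training loop.

```python
import numpy as np

def content_loss_mse(hr, sr):
    """Pixel-wise MSE content loss between the real HR tile and the generated SR tile."""
    return float(np.mean((hr - sr) ** 2))

def adversarial_loss(d_on_sr):
    """Sum of -log D(G(I_LR)) over the batch; d_on_sr holds the
    discriminator's probabilities for the generated images."""
    return float(np.sum(-np.log(d_on_sr)))

def perceptual_loss(hr, sr, d_on_sr, adv_weight=1e-3):
    """SRGAN-style loss: content loss plus 1e-3 times the adversarial loss."""
    return content_loss_mse(hr, sr) + adv_weight * adversarial_loss(d_on_sr)

hr = np.zeros((8, 8))
sr = np.full((8, 8), 0.5)     # a poor generated tile
d_on_sr = np.array([0.5])     # discriminator is 50/50 on it
loss = perceptual_loss(hr, sr, d_on_sr)
```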
the SRGAN model is pre-trained by using the Worldview-2 image with the superimposed texture features and the image training set of the Sentinel-2 after upscaling, so that the model has stronger texture feature generating capability and the training weight of the model is saved.
The SRGAN model is continuously trained on the basis of the pre-training weight by using the original Worldview-2 image and the image training set of the Sentinel-2 after upscaling, so that the generated spectral reflectivity is more approximate to a true value.
The model accuracy is verified using the original Worldview-2 image and the upscaled Sentinel-2 image verification set, with metrics including but not limited to PSNR and SSIM.
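PSNR, one of the verification metrics named above, can be computed directly; a sketch follows (SSIM needs a windowed, statistics-based computation and is omitted here).

```python
import numpy as np

def psnr(reference, generated, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between the original higher-resolution
    tile and the generated tile; higher means a closer reconstruction."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(generated, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))
```

`max_val` must match the radiometric range of the tiles being compared (255 for 8-bit data; reflectance products need their own scale).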
And inputting an SRGAN model with optimal training weight by using the scaled Sentinel-2 image test set to generate a high-resolution image consistent with the spatial resolution of Worldview-2, wherein the high-resolution image is a high-resolution image reconstructed from the Sentinel-2 image test set as shown in FIG. 5.
Step 5: and constructing a point-surface fusion model, namely using the generated high-resolution image and the agricultural condition information point data training model to construct a trained point-surface fusion model.
And constructing a random forest regression model for fusing the remote sensing image and the agricultural condition information point data.
Training a random forest regression model by using an original Worldview-2 image training set and an agricultural condition point data (leaf area index) training set of a corresponding area;
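The point-surface fusion step can be sketched with scikit-learn's `RandomForestRegressor`. Everything below is a stand-in under stated assumptions: the synthetic 8-band spectra, the toy LAI relation, and the 150/50 split are invented for illustration — the real training pairs are the high-resolution pixel spectra at the field measurement points and the measured leaf area index.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: each row is the multi-band spectral vector of the
# high-resolution pixel at one field measurement point; y is the measured LAI.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 0.6, size=(200, 8))                       # 8 bands (assumption)
y = 3.0 * X[:, 6] - 1.5 * X[:, 2] + rng.normal(0, 0.05, 200)   # toy LAI signal

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:150], y[:150])       # train on the point-data training set
pred = model.predict(X[150:])     # predict on held-out points

# Once trained, applying model.predict to the spectrum of every pixel of the
# generated high-resolution image yields the plot-scale LAI map.
```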
and 6, performing multi-scale effect evaluation on the inverted agricultural condition information.
The accuracy of the model is verified using the original Worldview-2 image verification set and the corresponding-region agricultural point data (leaf area index) verification set, evaluating the point-scale agricultural condition parameter inversion with metrics including but not limited to R² and RMSE.
And inputting a trained random forest regression model by using an original Worldview-2 image test set to generate a fine leaf area index spatial distribution map of the image corresponding region.
And inputting the high-resolution image generated by the test set into a trained random forest regression model, and generating a fine leaf area index spatial distribution diagram of the corresponding area of the generated high-resolution image, wherein the result is shown in figure 6.
The linear trend method and the percentage change method are used for analyzing the change trend and the stability of the surface scale of the agricultural condition parameter inversion result which are respectively carried out by using the original Worldview-2 and the generated high-resolution image, and the result is shown in figure 7.
The formula of the percentage change method is as follows:
$$\mathrm{Change\%}_{Index} = \frac{Generating\_value_{Index} - High\_value_{Index}}{High\_value_{Index}} \times 100\%$$

where $\mathrm{Change\%}_{Index}$ is the percentage change of the monitored agricultural parameter at a given spatial position, $Index$ is the name of the agricultural parameter, $High\_value_{Index}$ is the value of the parameter at that position inverted from the original Worldview-2 image, and $Generating\_value_{Index}$ is the value of the parameter at the corresponding position inverted from the high-resolution image generated from the Sentinel-2 image.
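The three evaluation quantities used in this step — R², RMSE, and the per-pixel percentage change — reduce to a few lines of NumPy (the LAI arrays below are invented for illustration):

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination for the point-scale inversion check."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and inverted parameter values."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def percent_change(high_value, generated_value):
    """Per-pixel percentage change between the map inverted from the original
    HR image and the one inverted from the generated image."""
    return (generated_value - high_value) / high_value * 100.0

lai_true = np.array([2.1, 3.4, 4.0, 2.8])   # measured LAI (toy values)
lai_pred = np.array([2.0, 3.5, 3.8, 3.0])   # inverted LAI (toy values)
r2v, rmsev = r2(lai_true, lai_pred), rmse(lai_true, lai_pred)
```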
In summary, the above embodiments are only for illustrating the technical solution of the present invention, and are not limited thereto. Although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the above embodiments can be modified or some of the technical features can be replaced equivalently. Such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. An agricultural condition parameter inversion method for fusing point-land parcel-area scale data is characterized by comprising the following steps:
acquiring two types of remote sensing images with different spatio-temporal resolutions and point data reflecting crop agricultural conditions, and preprocessing the data;
extracting texture features of the remote sensing image with higher spatial resolution, and superimposing them on the image;
carrying out strategic image segmentation on the images of each image set to manufacture a training set, a verification set and a test set for image fusion;
constructing a fusion model of remote sensing images with different spatial resolutions, training and fine-tuning the model step by step with a transfer learning strategy, obtaining the final model weights, and building the trained image fusion model;
constructing a point-surface fusion model: training the model with the generated high-resolution images and the agricultural condition point data to obtain a trained point-surface fusion model;
and carrying out multi-scale effect evaluation on the inverted agricultural condition information.
2. The method of claim 1, wherein acquiring the two remote sensing images with different spatio-temporal resolutions and the point data reflecting crop agricultural conditions, and preprocessing the data, comprises:
collecting remote sensing images of different resolutions together with the corresponding spatial position information;
preprocessing the multi-scale remote sensing dataset: geometric correction, atmospheric correction, radiometric calibration, orthorectification, spatial registration, and the like;
resampling the lower-spatial-resolution image to be consistent with the higher-spatial-resolution image.
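The resampling step above can be sketched as follows. This is a minimal illustration on bare arrays using `scipy.ndimage.zoom`; the helper name `resample_to_match` and the 100 → 500 pixel example are hypothetical, and a production pipeline would resample georeferenced rasters with GDAL or rasterio instead:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_match(low_res, target_shape, order=1):
    """Resample a lower-resolution band onto a target (rows, cols) grid.

    order=1 is linear spline interpolation; order=0 is nearest neighbour.
    """
    factors = (target_shape[0] / low_res.shape[0],
               target_shape[1] / low_res.shape[1])
    return zoom(low_res, factors, order=order)

# Toy example: a 100 x 100 band brought up to a 500 x 500 grid (factor 5),
# e.g. a 10 m pixel grid matched to a 2 m pixel grid.
band = np.random.rand(100, 100).astype(np.float32)
resampled = resample_to_match(band, (500, 500))
```

After this step both image sets share one pixel grid, which is what the tiling in claim 4 assumes.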
3. The method of claim 1, wherein extracting texture features of the higher-spatial-resolution remote sensing image and superimposing them onto it comprises:
extracting texture features from the higher-spatial-resolution image by the gray-level co-occurrence matrix (GLCM) method and superimposing the texture features onto the image, thereby sharpening the contour information of the image;
computing the co-occurrence matrix from the gray-level image, and then computing statistical eigenvalues of the matrix that each characterize a certain texture feature of the image; the common texture statistics are: Mean, Standard deviation (Std), Homogeneity/inverse difference moment, Contrast, Dissimilarity, Entropy, Angular second moment/Energy, and Maximum probability; the extraction effects of these statistics are compared, and the best-performing ones are selected for extracting the texture features of the higher-spatial-resolution image.
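A minimal NumPy sketch of the gray-level co-occurrence matrix and a few of the texture statistics listed above. The helper names and the toy 8 × 8 image are hypothetical; in practice the matrix is computed per sliding window and averaged over several offsets/directions:

```python
import numpy as np

def glcm(quantised, levels, dx=1, dy=0):
    """Normalised gray-level co-occurrence matrix for the pixel offset (dy, dx)."""
    h, w = quantised.shape
    a = quantised[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = quantised[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    mat = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(mat, (a.ravel(), b.ravel()), 1.0)  # count co-occurring level pairs
    return mat / mat.sum()

def texture_stats(p):
    """A few of the GLCM statistics named in the claim."""
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),
        "homogeneity": float((p / (1.0 + (i - j) ** 2)).sum()),
        "energy": float((p ** 2).sum()),            # angular second moment
        "entropy": float(-(nz * np.log2(nz)).sum()),
        "max_probability": float(p.max()),
    }

# Toy 8 x 8 image already quantised to 4 gray levels
tile = np.tile(np.arange(4), (8, 2))
p = glcm(tile, levels=4)
stats = texture_stats(p)
```

`skimage.feature.graycomatrix` / `graycoprops` offer an equivalent, better-tested implementation of the same statistics.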
4. The method of claim 1, wherein segmenting the images of each image set according to a tiling strategy to produce the training, validation, and test sets for image fusion comprises:
cropping the resampled lower-resolution image set, the higher-resolution image set with superimposed texture features, and the original higher-resolution image set into 256 × 256 tiles, setting an overlap at the four edges of each tile during cropping: 10% on the top and left edges and 5% on the bottom and right edges; after cropping, upscaling the lower-spatial-resolution tiles by 4 × 4, i.e. averaging every block of 16 pixels, so that each 256 × 256 tile becomes 64 × 64; and finally dividing the data into training, validation, and test sets at a ratio of 7:3:1.
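The tiling and 4 × 4 block-average upscaling of this claim can be sketched as follows. The stride value 218 is an assumption derived from the stated 10% + 5% edge overlaps of a 256-pixel tile; the exact cropping scheme of the patent may differ:

```python
import numpy as np

def tiles(image, size=256, stride=218):
    """Yield size x size patches; a stride below `size` makes tiles overlap."""
    h, w = image.shape[:2]
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            yield image[r:r + size, c:c + size]

def block_average(tile, factor=4):
    """Average each factor x factor block (16 pixels for factor=4): 256x256 -> 64x64."""
    h, w = tile.shape
    return tile.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

img = np.random.rand(512, 512).astype(np.float32)
patches = list(tiles(img))          # overlapping 256 x 256 tiles
low = block_average(patches[0])     # the upscaled (64 x 64) counterpart
```

Block averaging preserves the tile mean exactly, so the low-resolution tiles remain radiometrically consistent with their high-resolution counterparts.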
5. The method of claim 1, wherein constructing the fusion model for remote sensing images of different spatial resolutions, performing stepwise training and fine-tuning of the model with a transfer learning strategy, obtaining the final model weights, and building the trained image fusion model comprises:
using the image super-resolution reconstruction model SRGAN, which is based on a generative adversarial network and consists of a generator and a discriminator; the generator consists of residual blocks, skip connections, and convolutional layers, and aims to optimize the loss function and enhance details in the image so that the reconstructed image carries more high-frequency detail;
the loss function of the generator:

L_G = \mathbb{E}_{I^{LR}} \left[ \log\left(1 - D_{\theta_D}(G_{\theta_G}(I^{LR}))\right) \right]

the loss function of the discriminator:

L_D = -\mathbb{E}_{I^{HR}} \left[ \log D_{\theta_D}(I^{HR}) \right] - \mathbb{E}_{I^{LR}} \left[ \log\left(1 - D_{\theta_D}(G_{\theta_G}(I^{LR}))\right) \right]
the overall loss of the model is a weighted sum of a content loss and an adversarial loss, defined as:

l^{SR} = l^{SR}_{MSE} + 10^{-3} \, l^{SR}_{Gen}
wherein, within the content loss, the pixel-based MSE loss is defined as:

l^{SR}_{MSE} = \frac{1}{r^2 W H} \sum_{x=1}^{rW} \sum_{y=1}^{rH} \left( I^{HR}_{x,y} - G_{\theta_G}(I^{LR})_{x,y} \right)^2
and the adversarial loss, which drives the generator to produce images the discriminator cannot recognise as generated, is defined as:

l^{SR}_{Gen} = \sum_{n=1}^{N} -\log D_{\theta_D}\left( G_{\theta_G}(I^{LR}) \right)
based on the above, the SRGAN model is first trained with the image dataset pairing the higher-resolution images with superimposed texture features against the upscaled lower-resolution images, so that the model acquires a stronger ability to generate texture features; the model is then further trained, by transfer learning from the pre-trained weights, on the high-resolution images without superimposed texture features and the upscaled low-resolution images, so that the generated spectral reflectance more closely approaches the true value; the model accuracy is then verified with the validation set of original higher-resolution images and upscaled lower-resolution images, using indexes including but not limited to PSNR and SSIM; finally, the upscaled lower-resolution test set is fed into the SRGAN model with the optimal training weights to generate high-resolution images whose spatial resolution matches that of the higher-resolution images.
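A toy NumPy illustration of how the weighted loss above combines the pixel MSE content term with the adversarial term (weight 10⁻³, as in the standard SRGAN formulation). The discriminator scores are passed in here as plain probabilities, whereas a real implementation would backpropagate through actual generator and discriminator networks:

```python
import numpy as np

def srgan_generator_loss(sr, hr, d_sr, adv_weight=1e-3, eps=1e-8):
    """Content (pixel MSE) loss plus the weighted adversarial term -log D(G(I_LR)).

    d_sr is the discriminator's probability that the generated image is real.
    """
    l_mse = float(np.mean((hr - sr) ** 2))
    l_adv = -np.log(d_sr + eps)
    return l_mse + adv_weight * l_adv

def discriminator_loss(d_hr, d_sr, eps=1e-8):
    """Binary cross-entropy: real images should score 1, generated images 0."""
    return -np.log(d_hr + eps) - np.log(1.0 - d_sr + eps)

hr = np.ones((8, 8))
g_loss = srgan_generator_loss(hr, hr, d_sr=0.5)   # perfect pixels, unsure discriminator
d_loss = discriminator_loss(d_hr=0.9, d_sr=0.1)   # a fairly confident discriminator
```

Note that a more confident discriminator score on the generated image lowers the generator's adversarial term, which is exactly what pushes the generator toward images the discriminator cannot tell apart from real ones.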
6. The method of claim 1, wherein constructing the point-surface fusion model by training a model with the generated high-resolution image and the agricultural condition point data comprises:
combining the fused high-resolution remote sensing image with the field agricultural condition point data and performing point-surface data fusion using a random forest regression model, thereby obtaining plot-scale agricultural condition information at high spatial resolution and realizing agricultural condition inversion based on multi-scale data fusion.
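A sketch of the point-surface fusion step with scikit-learn's RandomForestRegressor. The synthetic spectra and measured values below are stand-ins for the fused high-resolution pixel reflectances and the field-measured agricultural parameter (e.g. LAI or biomass):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_points, n_bands = 200, 8

# Stand-ins: reflectances of the fused high-resolution pixels at the sample
# points, and the agricultural parameter measured in the field at those points.
pixel_spectra = rng.random((n_points, n_bands))
measured = pixel_spectra @ rng.random(n_bands) + 0.05 * rng.standard_normal(n_points)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(pixel_spectra, measured)

# Invert the parameter over a whole scene: one prediction per pixel.
scene = rng.random((64 * 64, n_bands))
inverted_map = model.predict(scene).reshape(64, 64)
```

Predicting on every pixel of the generated high-resolution image turns the scattered point measurements into a continuous plot-scale parameter map.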
7. The method of claim 1, wherein the multi-scale effect evaluation of the inverted agricultural condition information comprises:
verifying the point-scale inversion of the agricultural condition parameters with R^2 and RMSE, and analyzing the trend and stability of the inversion results at the surface scale with a linear trend method and a percentage change method, applied to the higher-resolution image and the generated high-resolution image respectively;
the formula of the percentage change method is:

Change_{Index} = \frac{Generating\_Value_{Index} - High\_Value_{Index}}{High\_Value_{Index}} \times 100\%

wherein Change_{Index} is the percentage change of a monitored agricultural parameter at a given spatial position, Index is the name of the agricultural parameter, High\_Value_{Index} is the parameter value at that position inverted from the original high-resolution image, and Generating\_Value_{Index} is the parameter value at the corresponding position inverted from the high-resolution image generated from the lower-resolution image.
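The percentage change method of claim 7 reduces to a one-line per-pixel computation; the sample values below are illustrative only:

```python
import numpy as np

def percentage_change(high_value, generating_value):
    """Change = (Generating_Value - High_Value) / High_Value * 100 (per pixel)."""
    return (generating_value - high_value) / high_value * 100.0

high = np.array([2.0, 4.0])   # inverted from the original high-resolution image
gen = np.array([2.2, 3.8])    # inverted from the generated high-resolution image
change = percentage_change(high, gen)   # -> [10., -5.]
```

Values near zero indicate that inversion from the generated image is stable relative to inversion from the original high-resolution image.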
CN202211504536.8A 2022-11-28 2022-11-28 Agricultural condition parameter inversion method for fusing point-land block-area scale data Pending CN116091936A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211504536.8A CN116091936A (en) 2022-11-28 2022-11-28 Agricultural condition parameter inversion method for fusing point-land block-area scale data

Publications (1)

Publication Number Publication Date
CN116091936A true CN116091936A (en) 2023-05-09

Family

ID=86212723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211504536.8A Pending CN116091936A (en) 2022-11-28 2022-11-28 Agricultural condition parameter inversion method for fusing point-land block-area scale data

Country Status (1)

Country Link
CN (1) CN116091936A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036987A (en) * 2023-10-10 2023-11-10 Wuhan University Remote sensing image space-time fusion method and system based on wavelet domain cross pairing
CN117036987B (en) * 2023-10-10 2023-12-08 Wuhan University Remote sensing image space-time fusion method and system based on wavelet domain cross pairing


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination