CN116310883A - Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment - Google Patents

Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment

Publication number
CN116310883A
Authority
CN
China
Prior art keywords
remote sensing
sensing image
sample
model
sequence
Prior art date
Legal status
Granted
Application number
CN202310552862.4A
Other languages
Chinese (zh)
Other versions
CN116310883B (en)
Inventor
陈飞勇
李政道
宋杨
刘汝鹏
肖冰
Current Assignee
Shandong Jianzhu University
Original Assignee
Shandong Jianzhu University
Priority date
Filing date
Publication date
Application filed by Shandong Jianzhu University filed Critical Shandong Jianzhu University
Priority to CN202310552862.4A priority Critical patent/CN116310883B/en
Publication of CN116310883A publication Critical patent/CN116310883A/en
Application granted granted Critical
Publication of CN116310883B publication Critical patent/CN116310883B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level


Abstract

The invention provides an agricultural disaster prediction method and related equipment based on remote sensing image space-time fusion, and relates to the technical field of image processing, wherein the method comprises the following steps: acquiring a first remote sensing image sequence and a second remote sensing image sequence generated by remote sensing imaging of a field to be predicted, wherein the first remote sensing image sequence comprises a plurality of first remote sensing images, the second remote sensing image sequence comprises a plurality of second remote sensing images, the spatial resolution of the first remote sensing images is higher than that of the second remote sensing images, and the temporal resolution of the second remote sensing images is higher than that of the first remote sensing images; inputting the second remote sensing image into the first sub-model, and performing high-resolution reconstruction on the second remote sensing image through the first sub-model to obtain a reconstructed remote sensing image output by the first sub-model; and adding the reconstructed remote sensing image into the first remote sensing image sequence to obtain a fusion sequence, inputting the fusion sequence into the second sub-model, and obtaining a disaster prediction result of the field to be predicted, which is output by the second sub-model. The method and the device can improve the prediction precision of agricultural disasters.

Description

Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment
Technical Field
The invention relates to the technical field of image processing, in particular to an agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment.
Background
With the development of remote sensing technology, change detection methods based on remote sensing images have been applied in many industries and fields.
In the prior art, remote sensing images with high spatial resolution are used for agricultural disaster prediction. However, owing to inherent limitations of the remote sensing satellite imaging principle, high-spatial-resolution sensors have a low repeat-observation frequency and produce few images, which severely constrains change detection accuracy; as a result, existing agricultural disaster prediction results based on remote sensing images are inaccurate.
Disclosure of Invention
The invention provides an agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment, and aims to solve the problem that in the prior art, the accuracy of an agricultural disaster prediction result based on a remote sensing image is low and improve the accuracy of agricultural disaster prediction.
The invention provides an agricultural disaster prediction method based on remote sensing image space-time fusion, which comprises the following steps:
acquiring a first remote sensing image sequence and a second remote sensing image sequence generated by remote sensing imaging of a field to be predicted, wherein the first remote sensing image sequence comprises a plurality of first remote sensing images, the second remote sensing image sequence comprises a plurality of second remote sensing images, the spatial resolution of the first remote sensing images is higher than that of the second remote sensing images, and the temporal resolution of the second remote sensing images is higher than that of the first remote sensing images;
Acquiring disaster prediction results of the field to be predicted through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence;
the prediction model comprises a first sub-model and a second sub-model, and the disaster prediction result of the field to be predicted is obtained through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence, and the method comprises the following steps:
inputting the second remote sensing image into the first sub-model, and performing high-resolution reconstruction on the second remote sensing image through the first sub-model to obtain a reconstructed remote sensing image output by the first sub-model;
and adding the reconstructed remote sensing image to the first remote sensing image sequence to obtain a fusion sequence, inputting the fusion sequence to the second sub-model, and obtaining a disaster prediction result of the field to be predicted, which is output by the second sub-model.
According to the agricultural disaster prediction method based on remote sensing image space-time fusion, the training process of the prediction model comprises the following steps:
performing preliminary training on the first sub-model based on multiple groups of first training data;
combining the second sub-model with the first sub-model after preliminary training to obtain the prediction model, and training the prediction model based on a plurality of groups of second training data;
Each group of first training data comprises a sample second remote sensing image and a high-resolution image corresponding to the sample second remote sensing image;
each set of second training data comprises a sample sequence and a disaster label corresponding to the sample sequence, wherein the sample sequence comprises a sample first remote sensing image sequence and a sample second remote sensing image sequence.
According to the agricultural disaster prediction method based on remote sensing image space-time fusion provided by the invention, the high-resolution image corresponding to a target sample second remote sensing image in the plurality of groups of first training data is obtained as follows:
when a sample first remote sensing image generated at a target moment exists, taking the sample first remote sensing image generated at the target moment as the high-resolution image corresponding to the target sample second remote sensing image, wherein the target moment is the moment at which the target sample second remote sensing image was generated;
and when no sample first remote sensing image generated at the target moment exists, determining the high-resolution image corresponding to the target sample second remote sensing image based on the sample first remote sensing images generated at a first moment and a second moment, wherein the first moment is earlier than the target moment and the second moment is later than the target moment.
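A minimal sketch of this label-selection rule. The patent only says the label is "determined based on" the two bracketing sample first remote sensing images; the linear interpolation in time used here is an assumption, and the images are flattened pixel lists for illustration:

```python
def label_image(first_images, target_time):
    """first_images maps imaging moment -> sample first remote sensing image
    (a list of pixel values). Returns the high-resolution label for a sample
    second remote sensing image generated at target_time."""
    if target_time in first_images:           # a same-moment first image exists
        return first_images[target_time]
    # Otherwise combine the nearest earlier and later first images;
    # linear interpolation in time is an assumed choice.
    before = max(t for t in first_images if t < target_time)
    after = min(t for t in first_images if t > target_time)
    w = (target_time - before) / (after - before)
    a, b = first_images[before], first_images[after]
    return [(1 - w) * x + w * y for x, y in zip(a, b)]
```

The exact-match branch corresponds to the first case in the text, the interpolation branch to the second.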
According to the agricultural disaster prediction method based on remote sensing image space-time fusion, which is provided by the invention, the prediction model is trained based on a plurality of groups of second training data, and the method comprises the following steps:
inputting an image in the sample second remote sensing image sequence in the sample sequence to the first sub-model after preliminary training, and obtaining a sample reconstructed remote sensing image output by the first sub-model;
adding the sample reconstructed remote sensing image to the sample first remote sensing image sequence in the sample sequence to obtain a sample fusion sequence, inputting the sample fusion sequence to the second sub-model, and obtaining a first sample disaster prediction result output by the second sub-model;
up-sampling the images in the sample second remote sensing image sequence to obtain an up-sampled image sequence, wherein the resolution of the images in the up-sampled image sequence is the same as that of the images in the sample first remote sensing image sequence, and inputting the up-sampled image sequence into the second sub-model to obtain a second sample disaster prediction result output by the second sub-model;
acquiring training loss based on the first sample disaster prediction result, the second sample disaster prediction result and the disaster label corresponding to the sample sequence;
And updating parameters of the prediction model according to the training loss to complete one training of the prediction model.
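The training steps above can be sketched as one function. Every callable here (reconstruct, upsample, classify, loss_fn, update) is a hypothetical stand-in for the corresponding sub-model or optimizer step, not part of the patent:

```python
def train_step(sample_first_seq, sample_second_seq, label_idx,
               reconstruct, upsample, classify, loss_fn, update):
    """One training iteration of the prediction model over one sample sequence."""
    # Branch 1: reconstruct the low-res frames, fuse with the high-res sequence
    fused = sample_first_seq + [reconstruct(img) for img in sample_second_seq]
    first_pred = classify(fused)            # first sample disaster prediction
    # Branch 2: plain up-sampling of the low-res sequence, no learned reconstruction
    up_seq = [upsample(img) for img in sample_second_seq]
    second_pred = classify(up_seq)          # second sample disaster prediction
    loss = loss_fn(first_pred, second_pred, label_idx)
    update(loss)                            # update the prediction model parameters
    return loss
```

The two classifier calls share the same second sub-model, which is what lets the loss compare the two predictions.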
According to the agricultural disaster prediction method based on remote sensing image space-time fusion provided by the invention, the training loss is obtained based on the first sample disaster prediction result, the second sample disaster prediction result and the disaster label corresponding to the sample sequence, and the method comprises the following steps:
acquiring a first loss based on the first sample disaster prediction result and the second sample disaster prediction result;
acquiring a second loss based on the first sample disaster prediction result and the disaster label corresponding to the sample sequence;
the training loss is obtained based on the first loss and the second loss.
According to the agricultural disaster prediction method based on remote sensing image space-time fusion provided by the invention, the up-sampling is carried out on the images in the second remote sensing image sequence of the sample to obtain an up-sampling image sequence, and the method comprises the following steps:
taking an image in the second remote sensing image sequence of the sample as an image to be sampled, and upsampling the image to be sampled to obtain an upsampled image corresponding to the image to be sampled;
Combining the up-sampling images respectively corresponding to a plurality of images in the sample second remote sensing image sequence to obtain an up-sampling image sequence;
the pixel value of the target pixel point in the up-sampled image corresponding to the image to be sampled is determined based on the following steps:
determining n reference pixel points in the image to be sampled based on the coordinates of the target pixel point, wherein the distance between the coordinates of each reference pixel point and the coordinates of the target pixel point is smaller than the distance between the coordinates of any non-reference pixel point and the coordinates of the target pixel point, the non-reference pixel points being the pixel points in the image to be sampled other than the reference pixel points and other than any pixel point whose coordinates are the same as those of the target pixel point, and n being a positive integer greater than 1;
and obtaining the distance between the coordinates of each reference pixel point and the coordinates of the target pixel point, and carrying out weighted summation on the pixel value of each reference pixel point according to the distance corresponding to each reference pixel point to obtain the pixel value of the target pixel point.
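The per-pixel rule above can be implemented as a weighted sum over the n nearest reference pixels. The patent only says the weights depend on the distances; the inverse-distance weighting used here is an assumption:

```python
import math

def idw_pixel(image, target, n=4):
    """image: {(x, y): value} mapping low-res pixel coordinates (already placed
    in the up-sampled frame) to pixel values; target: coordinates of the pixel
    being filled in. Inverse-distance weighting over the n nearest pixels."""
    if target in image:                       # a same-coordinate pixel: copy it
        return image[target]
    def dist(p):
        return math.hypot(p[0] - target[0], p[1] - target[1])
    refs = sorted(image, key=dist)[:n]        # the n nearest reference pixels
    weights = [1.0 / dist(p) for p in refs]   # closer pixels get larger weights
    total = sum(weights)
    return sum(w / total * image[p] for w, p in zip(weights, refs))
```

With four equidistant neighbors the result reduces to their plain average, which is a convenient sanity check.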
According to the agricultural disaster prediction method based on remote sensing image space-time fusion, the method for obtaining a first remote sensing image sequence and a second remote sensing image sequence generated by remote sensing imaging of a field to be predicted comprises the following steps:
Receiving an initial first remote sensing image sequence and an initial second remote sensing image sequence which are sent by remote sensing equipment;
performing first preprocessing on the initial first remote sensing image sequence to obtain the first remote sensing image sequence, and performing second preprocessing on the initial second remote sensing image sequence to obtain the second remote sensing image sequence;
the first preprocessing comprises at least one of radiometric calibration, atmospheric correction and geometric correction, and the second preprocessing comprises at least one of re-projection and geometric correction.
The invention also provides an agricultural disaster prediction device based on remote sensing image space-time fusion, which comprises:
the image acquisition module is used for acquiring a first remote sensing image sequence and a second remote sensing image sequence generated by remote sensing imaging of a field to be predicted, wherein the first remote sensing image sequence comprises a plurality of first remote sensing images, the second remote sensing image sequence comprises a plurality of second remote sensing images, the spatial resolution of the first remote sensing images is higher than that of the second remote sensing images, and the temporal resolution of the second remote sensing images is higher than that of the first remote sensing images;
the disaster prediction module is used for acquiring a disaster prediction result of the field to be predicted through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence;
Wherein the predictive model includes a first sub-model and a second sub-model; the disaster prediction module is specifically used for:
inputting the second remote sensing image into the first sub-model, and performing high-resolution reconstruction on the second remote sensing image through the first sub-model to obtain a reconstructed remote sensing image output by the first sub-model;
and adding the reconstructed remote sensing image to the first remote sensing image sequence to obtain a fusion sequence, inputting the fusion sequence to the second sub-model, and obtaining a disaster prediction result of the field to be predicted, which is output by the second sub-model.
The invention also provides electronic equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the remote sensing image space-time fusion-based agricultural disaster prediction method is realized when the processor executes the program.
The invention also provides a non-transitory computer readable storage medium, on which a computer program is stored which, when executed by a processor, implements the agricultural disaster prediction method based on remote sensing image space-time fusion as described in any one of the above.
According to the agricultural disaster prediction method and the related equipment based on the space-time fusion of the remote sensing images, the high-resolution reconstruction is carried out on the second remote sensing image with low spatial resolution but high time resolution, and then the second remote sensing image is added into the sequence of the first remote sensing image with high spatial resolution, so that a fusion sequence with high spatial resolution and high time resolution is formed, and the fusion sequence with high space-time resolution is input into the model for disaster prediction, so that the agricultural disaster prediction precision based on the remote sensing images can be improved.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is one of the flow diagrams of the agricultural disaster prediction method based on remote sensing image space-time fusion provided by the invention;
FIG. 2 is a second flow chart of the agricultural disaster prediction method based on remote sensing image space-time fusion provided by the invention;
FIG. 3 is a schematic flow chart of preprocessing remote sensing images in the agricultural disaster prediction method based on remote sensing image space-time fusion;
fig. 4 is a schematic structural diagram of an agricultural disaster prediction device based on remote sensing image space-time fusion;
fig. 5 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
A single high-spatial-resolution remote sensing image can provide rich information, but accurate disaster prediction also requires the change information carried by multiple images over a period of time. Limited by the remote sensing satellite imaging principle, high-resolution sensors have a low repeat-observation frequency and produce few images, so a high-spatial-resolution remote sensing image sequence can provide only limited change information; in the prior art, disaster prediction for a field relies solely on such a sequence, and the result is inaccurate. To overcome this defect, the invention provides an agricultural disaster prediction method based on remote sensing image space-time fusion: a second remote sensing image with low spatial resolution but high temporal resolution is reconstructed at high resolution and then added to the sequence of first remote sensing images with high spatial resolution, forming a fusion sequence with both high spatial and high temporal resolution; semantic features for predicting disasters are extracted from the fusion sequence by the second sub-model and used for disaster prediction, so the accuracy of agricultural disaster prediction based on remote sensing images can be improved.
The agricultural disaster prediction method based on remote sensing image space-time fusion provided by the invention is described below with reference to the accompanying drawings. As shown in fig. 1, the agricultural disaster prediction method based on remote sensing image space-time fusion provided by the invention comprises the following steps:
s100, acquiring a first remote sensing image sequence and a second remote sensing image sequence generated by remote sensing imaging of a field to be predicted, wherein the first remote sensing image sequence comprises a plurality of first remote sensing images, the second remote sensing image sequence comprises a plurality of second remote sensing images, the spatial resolution of the first remote sensing images is higher than that of the second remote sensing images, and the temporal resolution of the second remote sensing images is higher than that of the first remote sensing images;
and S200, acquiring disaster prediction results of the field to be predicted through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence.
The first remote sensing images in the first remote sensing image sequence are ordered by imaging moment, as are the second remote sensing images in the second remote sensing image sequence. The spatial resolution of the first remote sensing images being higher than that of the second means the first images have higher image resolution; the temporal resolution of the second remote sensing images being higher than that of the first means the interval between imaging moments in the second sequence is smaller than the interval in the first sequence, i.e., within the same time period more second remote sensing images are generated than first remote sensing images. The two sequences correspond to the same time period: the images in both sequences are generated by imaging the same field during the same period, differing only in their spatio-temporal resolution. The first remote sensing images may be Landsat (a US satellite series) data, such as Landsat8-OLI data, and the second remote sensing images may be MODIS (Moderate-resolution Imaging Spectroradiometer) data.
Specifically, the prediction model includes a first sub-model and a second sub-model, and the disaster prediction result of the field to be predicted is obtained through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence, and the method includes the steps of:
inputting the second remote sensing image into the first sub-model, and performing high-resolution reconstruction on the second remote sensing image through the first sub-model to obtain a reconstructed remote sensing image output by the first sub-model;
and adding the reconstructed remote sensing image to the first remote sensing image sequence to obtain a fusion sequence, inputting the fusion sequence to the second sub-model, and obtaining a disaster prediction result of the field to be predicted, which is output by the second sub-model.
Remote sensing imaging is performed by sensors on satellites. Limited by the remote sensing satellite imaging principle, a high-spatial-resolution sensor has a low repeat-observation frequency, so remote sensing images with high spatial resolution have low temporal resolution, while images with high temporal resolution have low spatial resolution. In the method provided by the invention, the fusion sequence, formed from the second remote sensing images after high-resolution reconstruction together with the high-spatial-resolution first remote sensing images, is input into the prediction model for disaster prediction of the field to be predicted. This exploits both the image detail in the high-spatial-resolution first remote sensing images and the temporal change information of the field reflected in the high-temporal-resolution second remote sensing images, and is therefore more accurate than prediction using high-spatial-resolution remote sensing images alone.
As can be seen, the first and second remote sensing images are generated by imaging the same object, namely the field to be predicted. To save computing resources, when a first remote sensing image and a second remote sensing image were generated at the same moment, no reconstructed remote sensing image needs to be generated from that second remote sensing image. That is, the inputting of the second remote sensing image into the first sub-model comprises:
inputting a target second remote sensing image in the second remote sensing image sequence to the first sub-model;
and no image with the same generation time as the generation time of the target second remote sensing image exists in the first remote sensing image sequence.
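The inference pipeline described above (reconstruct only the second images that lack a same-moment first image, fuse by imaging moment, then predict) can be sketched as follows; the Frame container and the reconstruct/predict callables are illustrative stand-ins for the two sub-models, not part of the patent:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    time: int        # imaging moment (e.g. day of year)
    pixels: list     # image data (placeholder)

def fuse_and_predict(first_seq: List[Frame],
                     second_seq: List[Frame],
                     reconstruct: Callable[[Frame], Frame],
                     predict: Callable[[List[Frame]], str]) -> str:
    """Reconstruct low-res frames with the first sub-model, merge them into the
    high-res sequence, and run the second sub-model on the fused sequence."""
    have = {f.time for f in first_seq}
    # Reconstruct only second-sequence frames with no same-moment first image
    recon = [reconstruct(f) for f in second_seq if f.time not in have]
    fused = sorted(first_seq + recon, key=lambda f: f.time)
    return predict(fused)
```

The `time not in have` filter is exactly the resource-saving rule stated above.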
Specifically, obtaining a first remote sensing image sequence and a second remote sensing image sequence generated by remote sensing imaging of a field to be predicted includes:
receiving an initial first remote sensing image sequence and an initial second remote sensing image sequence which are sent by remote sensing equipment;
performing first preprocessing on the initial first remote sensing image sequence to obtain the first remote sensing image sequence, and performing second preprocessing on the initial second remote sensing image sequence to obtain the second remote sensing image sequence;
The first preprocessing comprises at least one of radiometric calibration, atmospheric correction and geometric correction, and the second preprocessing comprises at least one of re-projection and geometric correction.
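The two preprocessing pipelines can be sketched as ordered lists of per-image steps; the step functions below are placeholders for the real corrections, with illustrative coefficients only:

```python
def make_pipeline(steps):
    """Compose per-image preprocessing steps, applied in order, into one callable."""
    def run(sequence):
        for step in steps:
            sequence = [step(image) for image in sequence]
        return sequence
    return run

# Placeholder step functions standing in for the real corrections:
def calibrate(img):   return [v * 0.01 for v in img]   # radiometric calibration
def atm_correct(img): return [v - 0.05 for v in img]   # atmospheric correction
def geo_correct(img): return img                       # geometric correction (no-op here)

first_preprocessing = make_pipeline([calibrate, atm_correct, geo_correct])
second_preprocessing = make_pipeline([geo_correct])    # plus re-projection in practice
```

Keeping each correction as an independent step matches the "at least one of" wording: pipelines are assembled from whichever corrections a given sequence needs.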
The initial first remote sensing image sequence may contain atmospheric errors, radiometric errors and geometric distortions, and the initial second remote sensing image sequence may contain geometric distortions and use a coordinate system inconsistent with the first sequence. As shown in fig. 2, after the initial first and second remote sensing image sequences are acquired from the satellite system, the received data are preprocessed to obtain the first remote sensing image sequence and the second remote sensing image sequence; the images in the second remote sensing image sequence are then reconstructed at high resolution and added to the first remote sensing image sequence to obtain the fusion sequence with high spatio-temporal resolution.
Specifically, as shown in fig. 3, for the images in the initial first remote sensing image sequence with high spatial resolution, radiometric calibration may first be performed to convert the raw gray level (DN) values into physically meaningful radiance values. The radiometric calibration is implemented with an empirical linear model:

L_λ = gain_λ · DN_λ + offset_λ

where λ denotes the band, L_λ is the radiance value, gain_λ is the gain coefficient and offset_λ is the offset coefficient; both coefficients can be obtained from the remote sensing image header file.
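Applied per band, the empirical linear model is a one-line transform; the gain and offset values in the test below are illustrative, not from any real header file:

```python
def radiometric_calibration(dn_band, gain, offset):
    """Convert one band's DN values to radiance: L = gain * DN + offset.
    In real data, gain and offset are read from the image header file."""
    return [gain * dn + offset for dn in dn_band]
```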
To eliminate the radiance error caused by atmospheric absorption, the images in the initial first remote sensing image sequence can be subjected to atmospheric correction, implemented with the FLAASH model:

L = A·ρ / (1 − ρ_e·S) + B·ρ_e / (1 − ρ_e·S) + L_a
L_e = (A + B)·ρ_e / (1 − ρ_e·S) + L_a

where L is the total radiance received by the satellite sensor, ρ is the actual surface reflectance, ρ_e is the mean reflectance of the area surrounding the pixel, L_e is the corresponding radiance term obtained from the mean of the DN values of the center pixel and its adjacent pixels, L_a is the path radiance, S is the hemispherical albedo of the atmosphere for downwelling radiation, and A and B are parameters determined by the atmospheric and geometric conditions.
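In practice FLAASH correction is run with dedicated software, but solving the sensor-radiance equation above for the pixel reflectance ρ is simple algebra; the parameter values in the test are illustrative only:

```python
def surface_reflectance(L, L_a, A, B, S, rho_e):
    """Solve L = A*rho/(1 - rho_e*S) + B*rho_e/(1 - rho_e*S) + L_a
    for rho, the actual surface reflectance of the pixel."""
    return ((L - L_a) * (1.0 - rho_e * S) - B * rho_e) / A
```

Forward-computing L from a known ρ and inverting it back is a quick consistency check on the algebra.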
Remote sensing images may suffer geometric distortions such as uneven rows and columns or an inaccurate ratio of pixel size to ground-feature size. To eliminate the influence of geometric distortion on disaster prediction, after the initial first and second remote sensing image sequences are acquired, geometric correction is performed on their images. The correction can use a topographic map as a reference, for example a 1:10000 topographic map, with the correction error controlled within 0.5 pixel.
Further, since different satellites may use different geographic coordinate systems, the images in the initial second remote sensing image sequence can be batch re-projected into a geographic coordinate system consistent with the images in the initial first remote sensing image sequence. For example, MOD09A1 data can be batch re-projected with the MODIS processing tool MRTSwath (MODIS Reprojection Tool Swath) into the UTM/WGS84 coordinate system, i.e., made consistent with the geographic coordinate system of the Landsat data.
Further, as shown in fig. 3, the preprocessing of the images in the initial first remote sensing image sequence and the initial second remote sensing image sequence may further include processing commonly performed on remote sensing images in the art, such as band matching, spatial registration and region clipping.
As described above, the prediction model includes the first sub-model for performing high-resolution reconstruction on the second remote sensing image; that is, the role of the first sub-model is relatively independent. In the method provided by the invention, for training efficiency, the first sub-model adopts the structure of an existing high-resolution reconstruction model and, after preliminary training, is combined with the second sub-model to obtain the prediction model for further training. Specifically, the training process of the prediction model includes the following steps:
Performing preliminary training on the first sub-model based on multiple groups of first training data;
combining the second sub-model with the first sub-model after preliminary training to obtain the prediction model, and training the prediction model based on a plurality of groups of second training data;
each group of first training data comprises a sample second remote sensing image and a high-resolution image corresponding to the sample second remote sensing image;
each set of second training data comprises a sample sequence and a disaster label corresponding to the sample sequence, wherein the sample sequence comprises a sample first remote sensing image sequence and a sample second remote sensing image sequence.
The relation between the sample first remote sensing image and the sample second remote sensing image corresponds to the relation between the first remote sensing image and the second remote sensing image; that is, the spatial resolution of the sample first remote sensing image is greater than that of the sample second remote sensing image, and the temporal resolution of the sample second remote sensing image is greater than that of the sample first remote sensing image. The sample first remote sensing image sequence and the sample second remote sensing image sequence in the same set of training data are generated by remote sensing imaging of the same field; the difference is that the spatial resolution of the images in the sample first remote sensing image sequence is higher, while the temporal resolution of the images in the sample second remote sensing image sequence is higher.
Because the function of the first sub-model is relatively independent, compared with directly combining the first sub-model and the second sub-model after initialization and training them together to obtain the prediction model, the number of parameters to be updated when the first sub-model is trained alone is small, so that the parameters of the first sub-model can approach an optimal solution more quickly and the training efficiency is higher.
Specifically, multiple sets of first training data are used to train the first sub-model. Each time, one set of first training data is input into the first sub-model and the parameters of the first sub-model are updated based on the output of the first sub-model; then the next set of first training data is input. This is iterated until the parameters of the first sub-model converge, or the number of iterations reaches a preset number.
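The iterate-until-convergence-or-budget loop described above can be sketched as follows; the `update` callable and the convergence measure are placeholders for whatever optimizer and metric an implementation actually uses, and the sub-model is reduced to a parameter dictionary for illustration:

```python
import itertools

def preliminary_train(model, first_training_data, update, max_iters, tol):
    """Feed one group of first training data per step until the parameter
    change falls below tol (convergence) or the iteration budget runs out."""
    groups = itertools.cycle(first_training_data)
    step = 0
    for step in range(1, max_iters + 1):
        # update() applies one gradient step and reports the parameter change
        delta = update(model, next(groups))
        if delta < tol:
            break
    return model, step
```

With a toy update that halves a single parameter each step, the loop stops as soon as the per-step change drops below the tolerance, demonstrating the early-exit path.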
For target first training data among the multiple groups of first training data, the sample second remote sensing image included in the target first training data is referred to as the target sample second remote sensing image, and the high-resolution image corresponding to the target sample second remote sensing image is obtained in the following manner:
when a sample first remote sensing image generated at the target moment exists, taking the sample first remote sensing image generated at the target moment as the high-resolution image corresponding to the target sample second remote sensing image, wherein the target moment is the moment at which the target sample second remote sensing image is generated;
and when no sample first remote sensing image generated at the target moment exists, determining the high-resolution image corresponding to the target sample second remote sensing image based on the sample first remote sensing images generated at a first moment and a second moment, wherein the first moment is earlier than the target moment, and the second moment is later than the target moment.
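The two cases above amount to a lookup with a fusion fallback over the neighboring fine-image moments. A hedged sketch, in which the `fuse` callback stands in for the space-time fusion algorithm and is an assumption of this example rather than part of the patent:

```python
def high_res_label(target_time, fine_by_time, fuse):
    """Return the high-resolution image for a sample second remote sensing
    image generated at target_time.

    fine_by_time : dict mapping generation time -> sample first remote sensing image
    fuse         : callable(img_t1, t1, img_t2, t2, target_time) -> fused image
    """
    if target_time in fine_by_time:
        # A sample first remote sensing image exists at the target moment.
        return fine_by_time[target_time]
    t1 = max(t for t in fine_by_time if t < target_time)  # first moment
    t2 = min(t for t in fine_by_time if t > target_time)  # second moment
    return fuse(fine_by_time[t1], t1, fine_by_time[t2], t2, target_time)
```

A trivial linear interpolation can serve as `fuse` when unit-testing the selection logic before plugging in a real fusion model.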
Because the generation frequencies of the sample first remote sensing image and the sample second remote sensing image are different, for a given sample second remote sensing image, a sample first remote sensing image generated at the same moment may or may not exist. The moment at which the target sample second remote sensing image is generated is taken as the target moment; when a sample first remote sensing image generated at the target moment exists, it is directly taken as the high-resolution image corresponding to the target sample second remote sensing image.
When no sample first remote sensing image generated at the target moment exists, the high-resolution image corresponding to the target sample second remote sensing image is generated by a space-time fusion algorithm from the sample first remote sensing images generated earlier than and later than the target moment.
Specifically, determining the high-resolution image corresponding to the target sample second remote sensing image based on the sample first remote sensing images and sample second remote sensing images generated at the first moment and the second moment may be implemented by an existing space-time fusion remote sensing image reconstruction algorithm, for example the ESTARFM (Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model) algorithm.
After the first sub-model is primarily trained through the first training data, combining the second sub-model and the primarily trained first sub-model to obtain the prediction model, and training the prediction model based on multiple groups of second training data, wherein the method specifically comprises the following steps:
inputting images in the sample second remote sensing image sequence in the sample sequence to the first sub-model after preliminary training, and obtaining sample reconstructed remote sensing images output by the first sub-model;
adding the sample reconstructed remote sensing images to the sample first remote sensing image sequence in the sample sequence to obtain a sample fusion sequence, inputting the sample fusion sequence to the second sub-model, and obtaining a first sample disaster prediction result output by the second sub-model;
up-sampling the images in the sample second remote sensing image sequence to obtain an up-sampled image sequence, wherein the resolution of the images in the up-sampled image sequence is the same as that of the images in the sample first remote sensing image sequence, and inputting the up-sampled image sequence into the second sub-model to obtain a second sample disaster prediction result output by the second sub-model;
acquiring training loss based on the first sample disaster prediction result, the second sample disaster prediction result and the disaster label corresponding to the sample sequence;
and updating parameters of the prediction model according to the training loss to complete one training of the prediction model.
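One training iteration of the combined model, following the steps enumerated above and the inference-time pipeline, can be sketched with the sub-models treated as opaque callables. This is a scalar toy rather than a real network, and the unweighted sum of the two losses is an assumption:

```python
def prediction_model_step(first_sub, second_sub, upsample,
                          fine_seq, coarse_seq, label):
    """Return (pred1, pred2, loss) for one sample sequence."""
    recon = [first_sub(img) for img in coarse_seq]   # high-resolution reconstructions
    fusion_seq = list(fine_seq) + recon              # sample fusion sequence
    pred1 = second_sub(fusion_seq)                   # first sample disaster prediction
    up_seq = [upsample(img) for img in coarse_seq]   # up-sampled image sequence
    pred2 = second_sub(up_seq)                       # second sample disaster prediction
    loss = (pred1 - pred2) ** 2 + (pred1 - label) ** 2
    return pred1, pred2, loss
```

In a real implementation `first_sub` and `second_sub` would be neural networks and the loss would feed an optimizer step; the structure of the two forward passes is the point here.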
In the method provided by the invention, in the process of training the prediction model, the sample second remote sensing image sequence, which has low spatial resolution but high temporal resolution, is also input into the second sub-model. Although the spatial resolution of a single image in the sample second remote sensing image sequence is lower, its temporal resolution is higher than that of the sample first remote sensing image sequence, so it can provide more information about the change process of the field to be predicted; the trained model can thus learn the relation between the change information of remote sensing images and disasters and output more accurate disaster prediction results. The second sub-model extracts the change process information in the sample second remote sensing image sequence, and the resulting prediction is used as a reference when calculating the loss of the prediction model, so that the parameters of the prediction model can be optimized faster, the ability of the second sub-model to extract change process features of remote sensing images becomes stronger, and a more accurate disaster prediction result is output.
Specifically, the acquiring of the training loss based on the first sample disaster prediction result, the second sample disaster prediction result and the disaster label corresponding to the sample sequence includes:
acquiring a first loss based on the first sample disaster prediction result and the second sample disaster prediction result;
acquiring a second loss based on the first sample disaster prediction result and the disaster label corresponding to the sample sequence;
the training loss is obtained based on the first loss and the second loss.
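How the two losses are combined is not specified further; a common choice, assumed here for illustration (the weighting factor alpha and the squared-error form are not taken from the patent), is a fixed weighted sum:

```python
def training_loss(pred1, pred2, label, alpha=0.5):
    """First loss: consistency between the two sample disaster predictions.
    Second loss: error of the first sample prediction against the disaster label.
    alpha is an assumed weighting hyperparameter."""
    first_loss = (pred1 - pred2) ** 2
    second_loss = (pred1 - label) ** 2
    return alpha * first_loss + (1.0 - alpha) * second_loss
```

Tuning alpha trades off the consistency signal against the supervised signal during training.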
The first sample remote sensing image sequence and the second sample remote sensing image sequence are images generated by remote sensing of the same object, and the corresponding time periods of the first sample remote sensing image sequence and the second sample remote sensing image sequence are the same.
In the method provided by the invention, in addition to the second loss, which is obtained from the difference between the real disaster label and the first sample disaster prediction result generated from the fusion sequence of high spatial resolution and high temporal resolution, the first loss is obtained from the difference between the first sample disaster prediction result and the second sample disaster prediction result generated from the features extracted from the image sequence of low spatial resolution. As described above, the sample second remote sensing image sequence and the sample first remote sensing image sequence are both generated by imaging the same field in the same period, so the change information of the field reflected by the images in the two sequences should be consistent, and therefore the first sample disaster prediction result and the second sample disaster prediction result should be consistent. Updating the parameters of the prediction model based on both the first loss and the second loss enables the parameters to be optimized in the correct direction more quickly than updating them with the second loss alone, so that the finally trained prediction model has better prediction performance, the accuracy of the disaster prediction result output by the prediction model is improved, and the model training efficiency is improved.
The second sub-model is used for extracting features from images with high spatial resolution, so the image sequence input to the second sub-model is required to have a preset high resolution. Since the resolution of the images in the sample second remote sensing image sequence is lower, in order that the second sub-model can process the sample second remote sensing image sequence, the images in the sample second remote sensing image sequence are first up-sampled so as to reach the input image resolution set for the second sub-model. Specifically, the up-sampling of the images in the sample second remote sensing image sequence to obtain an up-sampled image sequence includes:
taking one image in the second remote sensing image sequence of the sample as an image to be sampled, and upsampling the image to be sampled to obtain an upsampled image corresponding to the image to be sampled;
combining the up-sampling images respectively corresponding to a plurality of images in the sample second remote sensing image sequence to obtain an up-sampling image sequence;
the pixel value of the target pixel point in the up-sampled image corresponding to the image to be sampled is determined based on the following steps:
determining n reference pixel points in the image to be sampled based on the coordinates of the target pixel points, wherein the distance between the coordinates of each reference pixel point and the coordinates of the target pixel point is smaller than the distance between the coordinates of any non-reference pixel point and the coordinates of the target pixel point, the non-reference pixel points are pixel points in the image to be sampled except the reference pixel points and the pixel points which are the same as the coordinates of the target pixel point, and n is a positive integer larger than 1;
And obtaining the distance between the coordinates of each reference pixel point and the coordinates of the target pixel point, and carrying out weighted summation on the pixel value of each reference pixel point according to the distance corresponding to each reference pixel point to obtain the pixel value of the target pixel point.
An image to be sampled in the sample second remote sensing image sequence is up-sampled through the following process to obtain the up-sampled image corresponding to the image to be sampled:
The resolution of the up-sampled image is determined based on the input setting of the second sub-model. After the resolution of the up-sampled image is determined, the coordinates of each pixel point in the up-sampled image can be determined, and the pixel value of each pixel point of the up-sampled image is determined based on the pixel values of the pixel points at the several coordinates in the image to be sampled closest to that pixel point's coordinates. Specifically, when determining the pixel value of the target pixel point in the up-sampled image, n pixel points are determined in the image to be sampled based on the coordinates of the target pixel point, where n may be preset, for example 3, 5 or 6. Taking n as 4 as an example, assuming that the coordinates of the target pixel point in the up-sampled image are (x, y), the 4 pixel points in the image to be sampled whose coordinate values are closest to (x, y) are determined as the reference pixel points. After the reference pixel points are determined, the pixel values of the reference pixel points are weighted and summed based on the distances between the coordinates of the reference pixel points and the coordinates of the target pixel point, so as to obtain the pixel value of the target pixel point.
And performing weighted summation on pixel values of the reference pixel points according to the distance corresponding to each reference pixel point to obtain a pixel value of the target pixel point, wherein the weighted summation comprises the following steps:
and carrying out normalization processing on the distance corresponding to each reference pixel point, taking the distance corresponding to each reference pixel point after normalization processing as the weight corresponding to the reference pixel point, and carrying out weighted summation on the pixel value of each reference pixel point based on the weight corresponding to each reference pixel point to obtain the pixel value of the target pixel point.
For example, the coordinates of the target pixel point are (x, y), and there are 3 reference pixel points, whose coordinates are (x1, y1), (x2, y2) and (x3, y3) and whose pixel values are a, b and c respectively. The distances between (x1, y1), (x2, y2), (x3, y3) and (x, y) are calculated respectively to obtain 3 distances L1, L2 and L3, and normalization processing is performed on L1, L2 and L3 to obtain three normalized values l1, l2 and l3; the pixel value of the target pixel point is then: l1·a + l2·b + l3·c.
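The weighting scheme of the worked example can be sketched directly. Note that, as described, the normalized distances themselves serve as the weights (so farther reference pixels weigh more), whereas conventional inverse-distance weighting would invert them; the sketch follows the text as written:

```python
import math

def upsample_pixel(target_xy, references):
    """references: list of ((x, y), pixel_value) reference pixels.
    Weights are the distances normalized to sum to 1, per the text."""
    dists = [math.dist(target_xy, xy) for xy, _ in references]
    total = sum(dists)
    weights = [d / total for d in dists]
    return sum(w * v for w, (_, v) in zip(weights, references))
```

Running this per pixel of the output grid, with the n nearest source pixels as references, yields the up-sampled image described above.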
Because the images in the sample second remote sensing image sequence are all up-sampled in the same manner based on their pixel values, the original change information in the sample second remote sensing image sequence can be retained in the up-sampled sequence as far as possible. When the up-sampled sequence is input into the second sub-model, the output disaster prediction result provides a better reference for training the second sub-model, so that the first loss better reflects the performance of the second sub-model and the accuracy of the disaster prediction result output by the trained prediction model is improved.
The agricultural disaster prediction device based on remote sensing image space-time fusion provided by the invention is described below, and the agricultural disaster prediction device based on remote sensing image space-time fusion described below and the agricultural disaster prediction method based on remote sensing image space-time fusion described above can be correspondingly referred to each other. As shown in fig. 4, the agricultural disaster prediction device based on remote sensing image space-time fusion provided by the invention comprises an image acquisition module 410 and a disaster prediction module 420.
The image acquisition module 410 is configured to acquire a first remote sensing image sequence and a second remote sensing image sequence generated by performing remote sensing imaging on a field to be predicted, where the first remote sensing image sequence includes a plurality of first remote sensing images, the second remote sensing image sequence includes a plurality of second remote sensing images, a spatial resolution of the first remote sensing image is higher than that of the second remote sensing image, and a temporal resolution of the second remote sensing image is higher than that of the first remote sensing image.
The disaster prediction module 420 is configured to obtain a disaster prediction result of the field to be predicted through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence.
Wherein the predictive model includes a first sub-model and a second sub-model; the disaster prediction module 420 is specifically configured to:
Inputting the second remote sensing image into the first sub-model, and performing high-resolution reconstruction on the second remote sensing image through the first sub-model to obtain a reconstructed remote sensing image output by the first sub-model;
and adding the reconstructed remote sensing image to the first remote sensing image sequence to obtain a fusion sequence, inputting the fusion sequence to the second sub-model, extracting semantic features for predicting disasters from the fusion sequence through the second sub-model, and obtaining a disaster prediction result of the field to be predicted, which is output by the second sub-model.
Fig. 5 illustrates a physical schematic diagram of an electronic device. As shown in fig. 5, the electronic device may include: a processor 510, a communication interface (Communications Interface) 520, a memory 530 and a communication bus 540, wherein the processor 510, the communication interface 520 and the memory 530 communicate with each other through the communication bus 540. The processor 510 may invoke logic instructions in the memory 530 to perform the agricultural disaster prediction method based on remote sensing image space-time fusion, the method comprising:
acquiring a first remote sensing image sequence and a second remote sensing image sequence generated by remote sensing imaging of a field to be predicted, wherein the first remote sensing image sequence comprises a plurality of first remote sensing images, the second remote sensing image sequence comprises a plurality of second remote sensing images, the spatial resolution of the first remote sensing images is higher than that of the second remote sensing images, and the temporal resolution of the second remote sensing images is higher than that of the first remote sensing images;
Acquiring disaster prediction results of the field to be predicted through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence;
the prediction model comprises a first sub-model and a second sub-model, and the disaster prediction result of the field to be predicted is obtained through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence, and the method comprises the following steps:
inputting the second remote sensing image into the first sub-model, and performing high-resolution reconstruction on the second remote sensing image through the first sub-model to obtain a reconstructed remote sensing image output by the first sub-model;
and adding the reconstructed remote sensing image to the first remote sensing image sequence to obtain a fusion sequence, inputting the fusion sequence to the second sub-model, and obtaining a disaster prediction result of the field to be predicted, which is output by the second sub-model.
Further, the logic instructions in the memory 530 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer readable storage medium. Based on this understanding, the technical solution of the present invention, essentially or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In still another aspect, the present invention further provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the method for predicting agricultural disasters based on remote sensing image spatiotemporal fusion provided by the above methods, the method comprising:
acquiring a first remote sensing image sequence and a second remote sensing image sequence generated by remote sensing imaging of a field to be predicted, wherein the first remote sensing image sequence comprises a plurality of first remote sensing images, the second remote sensing image sequence comprises a plurality of second remote sensing images, the spatial resolution of the first remote sensing images is higher than that of the second remote sensing images, and the temporal resolution of the second remote sensing images is higher than that of the first remote sensing images;
acquiring disaster prediction results of the field to be predicted through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence;
the prediction model comprises a first sub-model and a second sub-model, and the disaster prediction result of the field to be predicted is obtained through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence, and the method comprises the following steps:
Inputting the second remote sensing image into the first sub-model, and performing high-resolution reconstruction on the second remote sensing image through the first sub-model to obtain a reconstructed remote sensing image output by the first sub-model;
and adding the reconstructed remote sensing image to the first remote sensing image sequence to obtain a fusion sequence, inputting the fusion sequence to the second sub-model, and obtaining a disaster prediction result of the field to be predicted, which is output by the second sub-model.
The above described embodiments of the apparatus are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An agricultural disaster prediction method based on remote sensing image space-time fusion is characterized by comprising the following steps:
acquiring a first remote sensing image sequence and a second remote sensing image sequence generated by remote sensing imaging of a field to be predicted, wherein the first remote sensing image sequence comprises a plurality of first remote sensing images, the second remote sensing image sequence comprises a plurality of second remote sensing images, the spatial resolution of the first remote sensing images is higher than that of the second remote sensing images, and the temporal resolution of the second remote sensing images is higher than that of the first remote sensing images;
acquiring disaster prediction results of the field to be predicted through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence;
The prediction model comprises a first sub-model and a second sub-model, and the disaster prediction result of the field to be predicted is obtained through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence, and the method comprises the following steps:
inputting the second remote sensing image into the first sub-model, and performing high-resolution reconstruction on the second remote sensing image through the first sub-model to obtain a reconstructed remote sensing image output by the first sub-model;
and adding the reconstructed remote sensing image to the first remote sensing image sequence to obtain a fusion sequence, inputting the fusion sequence to the second sub-model, and obtaining a disaster prediction result of the field to be predicted, which is output by the second sub-model.
2. The agricultural disaster prediction method based on remote sensing image space-time fusion according to claim 1, wherein the training process of the prediction model comprises:
performing preliminary training on the first sub-model based on multiple groups of first training data;
combining the second sub-model with the first sub-model after preliminary training to obtain the prediction model, and training the prediction model based on a plurality of groups of second training data;
Each group of first training data comprises a sample second remote sensing image and a high-resolution image corresponding to the sample second remote sensing image;
each set of second training data comprises a sample sequence and a disaster label corresponding to the sample sequence, wherein the sample sequence comprises a sample first remote sensing image sequence and a sample second remote sensing image sequence.
3. The agricultural disaster prediction method based on remote sensing image space-time fusion according to claim 2, wherein the high-resolution image corresponding to the second remote sensing image of the target sample in the plurality of sets of first training data is obtained by adopting the following method:
when a first sample remote sensing image generated at a target moment exists, taking the first sample remote sensing image generated at the target moment as a high-resolution image corresponding to a second target sample remote sensing image, wherein the target moment is the moment of generating the second target sample remote sensing image;
and when the sample first remote sensing image generated at the target moment does not exist, determining a high-resolution image corresponding to the target sample second remote sensing image based on the sample first remote sensing image generated at the first moment and the second moment, wherein the first moment is earlier than the target moment, and the second moment is later than the target moment.
4. The agricultural disaster prediction method based on remote sensing image space-time fusion according to claim 2, wherein said training said prediction model based on a plurality of sets of second training data comprises:
inputting images in the sample second remote sensing image sequence in the sample sequence to the first sub-model after preliminary training, and obtaining sample reconstructed remote sensing images output by the first sub-model;
adding the sample reconstructed remote sensing images to the sample first remote sensing image sequence in the sample sequence to obtain a sample fusion sequence, inputting the sample fusion sequence to the second sub-model, and obtaining a first sample disaster prediction result output by the second sub-model;
up-sampling the images in the sample second remote sensing image sequence to obtain an up-sampled image sequence, wherein the resolution of the images in the up-sampled image sequence is the same as that of the images in the sample first remote sensing image sequence, and inputting the up-sampled image sequence into the second sub-model to obtain a second sample disaster prediction result output by the second sub-model;
acquiring training loss based on the first sample disaster prediction result, the second sample disaster prediction result and the disaster label corresponding to the sample sequence;
And updating parameters of the prediction model according to the training loss to complete one training of the prediction model.
5. The agricultural disaster prediction method based on remote sensing image space-time fusion according to claim 4, wherein said obtaining training loss based on said first sample disaster prediction result, said second sample disaster prediction result and said disaster labels corresponding to said sample sequences comprises:
acquiring a first loss based on the first sample disaster prediction result and the second sample disaster prediction result;
acquiring a second loss based on the first sample disaster prediction result and the disaster label corresponding to the sample sequence;
the training loss is obtained based on the first loss and the second loss.
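Claim 5 does not specify how the two losses are combined; a weighted sum with an assumed hyper-parameter `alpha` is one minimal sketch.

```python
def combine_losses(first_loss: float, second_loss: float,
                   alpha: float = 0.5) -> float:
    """Combine the consistency loss (between the two sample predictions)
    and the supervised loss (against the disaster label). alpha is an
    assumed hyper-parameter; the claim only says the training loss is
    obtained 'based on' both losses."""
    return alpha * first_loss + (1.0 - alpha) * second_loss
```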
6. The agricultural disaster prediction method based on remote sensing image space-time fusion according to claim 4, wherein said up-sampling the images in the sample second remote sensing image sequence to obtain an up-sampled image sequence comprises:
taking an image in the sample second remote sensing image sequence as an image to be sampled, and up-sampling the image to be sampled to obtain an up-sampled image corresponding to the image to be sampled;
combining the up-sampled images respectively corresponding to the plurality of images in the sample second remote sensing image sequence to obtain the up-sampled image sequence;
wherein the pixel value of a target pixel point in the up-sampled image corresponding to the image to be sampled is determined based on the following steps:
determining n reference pixel points in the image to be sampled based on the coordinates of the target pixel point, wherein the distance between the coordinates of each reference pixel point and the coordinates of the target pixel point is smaller than the distance between the coordinates of any non-reference pixel point and the coordinates of the target pixel point, the non-reference pixel points being the pixel points in the image to be sampled other than the reference pixel points and any pixel point whose coordinates coincide with the coordinates of the target pixel point, and n being a positive integer greater than 1;
and obtaining the distance between the coordinates of each reference pixel point and the coordinates of the target pixel point, and performing a weighted summation of the pixel values of the reference pixel points according to their respective distances to obtain the pixel value of the target pixel point.
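The nearest-neighbour weighting of claim 6 can be sketched as follows. The claim only says the pixel values are weighted "according to the distance"; inverse-distance weighting is an assumed concrete choice.

```python
import numpy as np

def idw_pixel(src: np.ndarray, y: float, x: float, n: int = 4) -> float:
    """Value at fractional coordinate (y, x) of the up-sampled grid from
    the n nearest reference pixels, weighted by inverse distance. A source
    pixel coinciding exactly with the target coordinates is used directly,
    matching the claim's exclusion of same-coordinate pixels."""
    ys, xs = np.meshgrid(np.arange(src.shape[0]),
                         np.arange(src.shape[1]), indexing="ij")
    d = np.hypot(ys - y, xs - x).ravel()   # distance of every source pixel
    vals = src.ravel()
    exact = d < 1e-9
    if exact.any():                        # a source pixel sits exactly here
        return float(vals[exact][0])
    idx = np.argsort(d)[:n]                # the n nearest reference pixels
    w = 1.0 / d[idx]                       # assumed inverse-distance weights
    return float(np.sum(w * vals[idx]) / np.sum(w))

# target at the centre of a 2x2 block: all four neighbours weigh equally
src = np.array([[0.0, 2.0], [2.0, 4.0]])
center = idw_pixel(src, 0.5, 0.5, n=4)
```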
7. The method for predicting agricultural disasters based on remote sensing image space-time fusion according to claim 1, wherein the obtaining a first remote sensing image sequence and a second remote sensing image sequence generated by performing remote sensing imaging on a field to be predicted comprises:
receiving an initial first remote sensing image sequence and an initial second remote sensing image sequence sent by a remote sensing device;
performing first preprocessing on the initial first remote sensing image sequence to obtain the first remote sensing image sequence, and performing second preprocessing on the initial second remote sensing image sequence to obtain the second remote sensing image sequence;
wherein the first preprocessing comprises at least one of radiometric calibration, atmospheric correction and geometric correction, and the second preprocessing comprises at least one of re-projection and geometric correction.
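Of the preprocessing steps named in claim 7, radiometric calibration is the simplest to show concretely (digital numbers to at-sensor radiance via per-band gain and offset); atmospheric correction, geometric correction and re-projection depend on sensor metadata and are omitted here. The gain/offset values below are illustrative only.

```python
import numpy as np

def radiometric_calibration(dn: np.ndarray, gain: float,
                            offset: float) -> np.ndarray:
    """Convert raw digital numbers (DN) of one band to at-sensor radiance
    with a linear gain/offset model, one step of the 'first preprocessing'."""
    return gain * dn.astype(np.float64) + offset

# hypothetical gain/offset for a single band
rad = radiometric_calibration(np.full((2, 2), 100, dtype=np.uint16),
                              gain=0.01, offset=0.5)
```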
8. An agricultural disaster prediction device based on remote sensing image space-time fusion, characterized by comprising:
the image acquisition module is used for acquiring a first remote sensing image sequence and a second remote sensing image sequence generated by remote sensing imaging of a field to be predicted, wherein the first remote sensing image sequence comprises a plurality of first remote sensing images, the second remote sensing image sequence comprises a plurality of second remote sensing images, the spatial resolution of the first remote sensing images is higher than that of the second remote sensing images, and the temporal resolution of the second remote sensing images is higher than that of the first remote sensing images;
the disaster prediction module is used for acquiring a disaster prediction result of the field to be predicted through a trained prediction model based on the first remote sensing image sequence and the second remote sensing image sequence;
wherein the prediction model comprises a first sub-model and a second sub-model; the disaster prediction module is specifically configured to:
inputting the second remote sensing image into the first sub-model, and performing high-resolution reconstruction on the second remote sensing image through the first sub-model to obtain a reconstructed remote sensing image output by the first sub-model;
and adding the reconstructed remote sensing image to the first remote sensing image sequence to obtain a fusion sequence, inputting the fusion sequence to the second sub-model, and obtaining a disaster prediction result of the field to be predicted, which is output by the second sub-model.
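The inference path of the disaster prediction module can be sketched in a few lines; `sub1` and `sub2` are hypothetical callables standing in for the trained sub-models.

```python
def predict_disaster(first_seq, second_seq, sub1, sub2):
    """Reconstruct each low-resolution image at high resolution with the
    first sub-model, append the reconstructions to the high-resolution
    sequence, and let the second sub-model predict from the fused sequence."""
    recon = [sub1(img) for img in second_seq]
    fused = list(first_seq) + recon
    return sub2(fused)

# dummy sub-models: identity reconstruction, sequence-length "predictor"
result = predict_disaster([1, 2], [3, 4, 5],
                          sub1=lambda x: x, sub2=lambda seq: len(seq))
```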
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the agricultural disaster prediction method based on remote sensing image space-time fusion of any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the agricultural disaster prediction method based on remote sensing image space-time fusion of any one of claims 1 to 7.
CN202310552862.4A 2023-05-17 2023-05-17 Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment Active CN116310883B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310552862.4A CN116310883B (en) 2023-05-17 2023-05-17 Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment

Publications (2)

Publication Number Publication Date
CN116310883A (en) 2023-06-23
CN116310883B (en) 2023-10-20

Family

ID=86790918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310552862.4A Active CN116310883B (en) 2023-05-17 2023-05-17 Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment

Country Status (1)

Country Link
CN (1) CN116310883B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117057606A (en) * 2023-08-15 2023-11-14 广州地铁设计研究院股份有限公司 Risk prediction model training method, risk prediction method and related equipment
CN117874498A (en) * 2024-03-12 2024-04-12 航天广通科技(深圳)有限公司 Intelligent forestry big data system, method, equipment and medium based on data lake

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105741252A (en) * 2015-11-17 2016-07-06 西安电子科技大学 Sparse representation and dictionary learning-based video image layered reconstruction method
CN110826429A (en) * 2019-10-22 2020-02-21 北京邮电大学 Scenic spot video-based method and system for automatically monitoring travel emergency
CN111932457A (en) * 2020-08-06 2020-11-13 北方工业大学 High-space-time fusion processing algorithm and device for remote sensing image
US20210118097A1 (en) * 2018-02-09 2021-04-22 The Board Of Trustees Of The University Of Illinois A system and method to fuse multiple sources of optical data to generate a high-resolution, frequent and cloud-/gap-free surface reflectance product
CN112712488A (en) * 2020-12-25 2021-04-27 北京航空航天大学 Remote sensing image super-resolution reconstruction method based on self-attention fusion
CN113128586A (en) * 2021-04-16 2021-07-16 重庆邮电大学 Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image
CN114529830A (en) * 2022-01-19 2022-05-24 重庆邮电大学 Remote sensing image space-time fusion method based on mixed convolution network
CN114565843A (en) * 2022-02-22 2022-05-31 中国电子科技集团公司第五十四研究所 Time series remote sensing image fusion method
CN114972759A (en) * 2022-06-15 2022-08-30 西安电子科技大学 Remote sensing image semantic segmentation method based on hierarchical contour cost function
WO2023000158A1 (en) * 2021-07-20 2023-01-26 海南长光卫星信息技术有限公司 Super-resolution reconstruction method, apparatus and device for remote sensing image, and storage medium
CN115731141A (en) * 2021-08-27 2023-03-03 中国科学院国家空间科学中心 Space-based remote sensing image space-time fusion method for dynamic monitoring of maneuvering target
CN116071644A (en) * 2022-12-20 2023-05-05 中化现代农业有限公司 Method, device, equipment and storage medium for inversion of sun leaf area index data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHENXIAO ZHANG et al.: "A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images", ISPRS Journal of Photogrammetry and Remote Sensing, vol. 166, pp. 183-200, XP086215795, DOI: 10.1016/j.isprsjprs.2020.06.003 *
TANG Rugang: "Research on multi-source remote sensing data fusion and synergy technology for dynamic changes of estuarine and coastal water environments", China Doctoral Dissertations Full-text Database, Basic Sciences, vol. 2022, no. 11, pp. 010-1 *

Also Published As

Publication number Publication date
CN116310883B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN116310883B (en) Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment
CN111259898A (en) Crop segmentation method based on unmanned aerial vehicle aerial image
CN108921035B (en) Sub-pixel positioning method and system based on spatial gravitation and pixel aggregation
CN108229274B (en) Method and device for training multilayer neural network model and recognizing road characteristics
CN111784560A (en) SAR and optical image bidirectional translation method for generating countermeasure network based on cascade residual errors
CN111681171B (en) Full-color and multispectral image high-fidelity fusion method and device based on block matching
CN112419380B (en) Cloud mask-based high-precision registration method for stationary orbit satellite sequence images
CN111583330B (en) Multi-scale space-time Markov remote sensing image sub-pixel positioning method and system
CN110728706A (en) SAR image fine registration method based on deep learning
CN112966548A (en) Soybean plot identification method and system
CN115601281A (en) Remote sensing image space-time fusion method and system based on deep learning and electronic equipment
CN112529827A (en) Training method and device for remote sensing image fusion model
CN116245757B (en) Multi-scene universal remote sensing image cloud restoration method and system for multi-mode data
CN116778354A (en) Deep learning-based visible light synthetic cloud image marine strong convection cloud cluster identification method
CN117788296B (en) Infrared remote sensing image super-resolution reconstruction method based on heterogeneous combined depth network
CN114092803A (en) Cloud detection method and device based on remote sensing image, electronic device and medium
CN117422619A (en) Training method of image reconstruction model, image reconstruction method, device and equipment
CN111368716B (en) Geological disaster damage cultivated land extraction method based on multi-source space-time data
CN111047525A (en) Method for translating SAR remote sensing image into optical remote sensing image
CN116758388A (en) Remote sensing image space-time fusion method and device based on multi-scale model and residual error
CN108985154B (en) Small-size ground object sub-pixel positioning method and system based on image concentration
CN116402693A (en) Municipal engineering image processing method and device based on remote sensing technology
CN116758419A (en) Multi-scale target detection method, device and equipment for remote sensing image
CN109993104A (en) A kind of change detecting method of remote sensing images object hierarchy
CN113962897B (en) Modulation transfer function compensation method and device based on sequence remote sensing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant