CN116994144A - High-resolution winter wheat yield estimation method, system, storage medium and electronic equipment based on synthetic aperture radar image - Google Patents

High-resolution winter wheat yield estimation method, system, storage medium and electronic equipment based on synthetic aperture radar image

Info

Publication number
CN116994144A
CN116994144A CN202311100613.8A
Authority
CN
China
Prior art keywords
winter wheat
image
yield
sample
synthetic aperture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311100613.8A
Other languages
Chinese (zh)
Inventor
李宁
李倩
赵建辉
杨会巾
毋琳
黄亚博
舒高峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University
Original Assignee
Henan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University filed Critical Henan University
Priority to CN202311100613.8A priority Critical patent/CN116994144A/en
Publication of CN116994144A publication Critical patent/CN116994144A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application discloses a high-resolution winter wheat yield estimation method, system, storage medium and electronic device based on synthetic aperture radar (SAR) images, comprising the following steps: preprocessing time-series SAR images; dividing the time-series backscatter images, from which non-winter-wheat pixels have been removed, into blocks according to the administrative boundaries within the area to be estimated, and then forming neural network samples from time-series vector samples and image samples; introducing a Gaussian process to explicitly model the spatial structure of the data; and training the network model and saving the optimal network model. The application can rapidly and accurately obtain the winter wheat yield of the area to be estimated, provide knowledge of yield and production capacity, help decision makers, planners and stakeholders make informed decisions, optimize resource utilization, and promote sustainable economic and social development. The application also performs well and is easy to implement in engineering practice.

Description

High-resolution winter wheat yield estimation method, system, storage medium and electronic equipment based on synthetic aperture radar image
Technical Field
The application relates to the technical field of remote sensing image processing, and in particular to a high-resolution winter wheat yield estimation method, system, storage medium and electronic device based on synthetic aperture radar images.
Background
Winter wheat is one of the main crops in China, and how to estimate crop yield effectively and in time to obtain accurate grain yield information has long been a focus of attention in the agricultural field. Rapid and accurate yield estimation from remote sensing images can provide knowledge of yield and production capacity, help decision makers, planners and stakeholders make informed decisions, optimize resource utilization and promote sustainable economic and social development. Yield estimation of winter wheat is therefore very important.
In related research on crop yield estimation, traditional methods mainly rely on statistical surveys to acquire crop yield data, but data acquisition is difficult and wastes considerable manpower and material resources. Remote sensing technology is widely used for agricultural yield estimation because of its wide monitoring range and stable revisit period. With the development of artificial intelligence (AI), artificial neural networks (ANNs) such as CNNs and RNNs have been used to estimate the yield of several different crops, and the results indicate that machine learning methods can outperform traditional regression methods. However, current methods cannot fully utilize the spatio-temporal characteristics of remote sensing data, and the accuracy and precision of yield estimation still need to be further improved. In addition, most applications use only optical data; however, optical images are easily affected by cloud and rain, making it difficult to obtain complete time-series remote sensing images. Synthetic aperture radar (Synthetic Aperture Radar, SAR) is a microwave remote sensing sensor with all-day, all-weather imaging capability; it can achieve effective Earth observation even on overcast and rainy days and can well compensate for the shortcomings of optical remote sensing.
Therefore, fully extracting the features of time-series SAR data to realize winter wheat yield estimation is of great significance.
Disclosure of Invention
The application aims to provide a high-resolution winter wheat yield estimation method, system, storage medium and electronic device based on synthetic aperture radar images, which fully utilize the strong feature extraction capability of deep learning together with a Gaussian process to build a winter wheat yield estimation model with strong generalization and robustness, overcoming the shortcomings of current yield estimation methods.
The application adopts the technical scheme that:
a high-resolution winter wheat yield estimation method based on a synthetic aperture radar image comprises the following steps:
Step S101, preprocessing time-series synthetic aperture radar images to obtain time-series backscatter images with different polarization modes;
Step S102, masking the time-series backscatter images of different polarization modes to remove non-winter-wheat pixels;
Step S103, partitioning the time-series backscatter maps from which the non-winter-wheat pixels have been removed according to the administrative boundaries of the area to be estimated;
Step S104, generating neural network samples using a histogram dimension-reduction method, each administrative region generating a time-series vector sample and an image sample;
Step S105, constructing an LSTM and a CNN to extract features from the time-series vector samples and the image samples respectively, so as to effectively extract the spatial and temporal features of the remote sensing images and fully exploit their spatio-temporal characteristics;
Step S106, introducing a Gaussian process to model the spatial structure of the features extracted by the LSTM and the CNN, further improving the accuracy of yield estimation;
Step S107, obtaining ground-measured winter wheat yield data for the area to be estimated, dividing the data into a training set and a validation set, and training the model;
Step S108, selecting a network model whose coefficient of determination $R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$ is greater than 0.65 to perform yield estimation, where $y_i$ is the measured value, $\hat{y}_i$ is the predicted value, $\bar{y}$ is the mean of the measured data set, and $n$ is the total number of data points; the winter wheat yield result is thereby obtained.
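As an illustrative, non-limiting sketch of the selection criterion in step S108, the following Python snippet computes the coefficient of determination defined above and applies the 0.65 threshold; the sample values are hypothetical.

```python
import numpy as np

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination R^2 as defined in step S108."""
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return float(1.0 - ss_res / ss_tot)

# Hypothetical validation-set yields (jin/mu); real values come from step S107.
y_true = np.array([820.0, 905.0, 760.0, 880.0, 650.0])
y_pred = np.array([800.0, 930.0, 775.0, 860.0, 700.0])

r2 = r_squared(y_true, y_pred)
print("model accepted" if r2 > 0.65 else "model rejected", f"(R2 = {r2:.3f})")
```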
In step S101, the preprocessing includes orbit correction, thermal noise removal, radiometric calibration, deburst, multilooking, filtering and terrain correction, after which each pixel of the image represents a real radar backscatter coefficient, thereby forming backscatter images with different polarization modes.
In step S102, winter wheat is extracted using the winter wheat planting distribution data provided by the National Science and Technology Basic Condition Platform – National Ecological Science Data Center.
In step S103, the backscatter images are segmented according to the administrative boundaries of the area to be estimated; specifically, the administrative-region vector map is used to segment the time-series backscatter coefficient images of the extracted winter wheat pixels.
The step S104 specifically includes the following steps:
firstly, performing histogram dimension reduction on the winter wheat time-series backscatter maps within an administrative region;
then, normalizing the reduced histogram according to Equation 2, $H_i = h_i / \sum h_i$ (2),
where $h_i$ is the pixel histogram generated after dimension reduction and $H_i$ is the normalized pixel histogram vector;
finally, time series vector samples and image samples are generated, wherein each neural network sample generated by each administrative area comprises one time series vector sample and one image sample.
In step S106, a Gaussian process model is introduced,
in which the mean function is linear in the deep features and the covariance kernel depends on the spatial structure; the kernel function is shown in Equation 3:
$$k(x, x') = \sigma^2 \exp\left(-\frac{\lVert g_{loc} - g'_{loc} \rVert_2^2}{r_{loc}^2}\right) + \sigma_e^2 I \quad (3)$$
where $g_{loc} - g'_{loc}$ represents the distance between the training data and the test data, $\lVert \cdot \rVert_2$ denotes the L2 norm, $\sigma$ and $r_{loc}$ are hyperparameters, $\sigma_e^2$ is an additional Gaussian noise term, and $I$ is the identity matrix.
The linear Gaussian process model is expressed as:
$$y(x) = f(x) + h(x)^{T}\beta \quad (4)$$
where $f(x) \sim \mathcal{GP}(0, k(x, x'))$; $h(x)$ denotes the feature vector extracted from the original data by the deep model; $\beta$ follows the Gaussian prior $\beta \sim \mathcal{N}(b, B)$, in which $b$ is the weight vector connecting the feature vector extracted by the deep model to the output layer, $B = \sigma_b I$, $\sigma_b$ is a hyperparameter, and $I$ is the identity matrix.
In step S107, training the network model specifically includes the following steps:
randomly dividing the obtained ground-measured yield data into a training set and a validation set at a ratio of 8:2;
inputting the constructed training set into the built network model for training, judging the training effect of the network model through accuracy evaluation indices, and adjusting the parameters of the network model to obtain the optimal yield estimation model.
A high-resolution winter wheat yield estimation system based on synthetic aperture radar images, comprising: a preprocessing unit configured to perform orbit correction, thermal noise removal, radiometric calibration, deburst, multilooking, filtering and terrain correction on the SAR images to obtain backscatter images with different polarization modes, then mask the backscatter images to remove non-winter-wheat pixels, and segment the resulting images according to the administrative boundaries of the area to be estimated;
a sample generation unit configured to generate neural network samples, including time-series vector samples and image samples, using a histogram dimensionality-reduction technique;
a model construction unit configured to construct an LSTM network and a CNN network to extract features from the time-series vector samples and the image samples respectively, and then introduce a Gaussian process to model the spatial structure of the data;
and a testing unit configured to estimate the winter wheat yield of the input time-series synthetic aperture radar images based on the optimal network model and parameters, thereby obtaining the winter wheat yield estimation result based on time-series synthetic aperture radar.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes a device in which the computer readable storage medium is located to perform the method of estimating yield of winter wheat based on a synthetic aperture radar image.
An electronic device, comprising: the system comprises a memory, a processor and a program stored in the memory and capable of running on the processor, wherein the processor realizes the high-resolution winter wheat yield estimation method based on the synthetic aperture radar image when executing the program.
In order to achieve the above object, the method of the present application is implemented as follows:
1. Preprocess the time-series SAR images (orbit correction, thermal noise removal, radiometric calibration, deburst, multilooking, filtering and terrain correction) to obtain backscatter images with different polarization modes;
2. Mask the time-series backscatter images of different polarization modes to remove non-winter-wheat pixels;
3. Segment the time-series backscatter coefficient images of the extracted winter wheat pixels using the administrative boundary vector map of the study area;
4. Perform histogram dimension reduction on the winter wheat time-series backscatter maps, normalize the reduced histograms, and finally generate time-series vector samples and image samples, where the neural network sample generated for each administrative region includes one time-series vector sample and one image sample;
5. Extract image sample features and time-series vector sample features using a CNN and an LSTM respectively;
6. Introduce a Gaussian process component to model the spatial structure of the extracted features, further improving the accuracy of yield estimation;
7. Randomly divide the obtained ground-measured yield data into a training set and a validation set at a ratio of 8:2; input the constructed training set into the built network model for training, judge the training effect of the model through accuracy evaluation indices, and adjust the network parameters to obtain the optimal yield estimation model;
8. Load the optimal model parameters, input the time-series SAR images into the network model, and obtain the yield estimation result.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of the present application;
fig. 2 is a SAR image provided in an embodiment of the present application;
fig. 3 is a SAR image after removing non-winter wheat pixel points provided in an embodiment of the present application;
FIG. 4 is a neural network sample provided by an embodiment of the present application;
FIG. 5 is a graph showing the result of yield estimation according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without any inventive effort, are intended to be within the scope of the application.
Rapid and accurate yield estimation from remote sensing images can provide knowledge of yield and production capacity, help decision makers, planners and stakeholders make informed decisions, optimize resource utilization and promote sustainable economic and social development.
Through research on the prior art, the applicant finds that current remote sensing yield estimation methods can be mainly divided into statistical-model-based methods, physical-model-based methods and machine-learning-based methods.
The statistical-model-based methods establish regression models or other statistical models relating crop yield to remote sensing data, based on the statistical relationship between remote sensing data and ground observations. Common statistical models include linear regression models, multiple regression models, and the like. However, these methods tend to be less accurate.
The physical-model-based methods simulate crop growth and yield with physical models. Based on factors such as crop physiological characteristics, hydrological processes and radiative transfer, these models couple remote sensing data with the physical model to estimate crop growth conditions and yield. Common physical models include biophysical models, crop models, and the like. However, these models involve numerous parameters that are difficult to obtain.
The machine-learning-based methods build models from remote sensing data and measured crop yield data, and predict and estimate crop yield by learning the relationship between them. Machine learning algorithms automatically adjust model parameters through training and establish a mapping between remote sensing data and crop yield in an optimized manner. However, current machine learning methods cannot fully mine the spatio-temporal characteristics of remote sensing information.
For this reason, the applicant proposes a SAR-image winter wheat yield estimation method based on deep learning and a Gaussian process. The time-series SAR images are preprocessed (orbit correction, thermal noise removal, radiometric calibration, deburst, multilooking, filtering and terrain correction) to obtain backscatter images with different polarization modes; the time-series backscatter images of different polarization modes are masked to remove non-winter-wheat pixels; the time-series backscatter maps with non-winter-wheat pixels removed are segmented according to the administrative boundaries of the area to be estimated; neural network samples are generated using a histogram dimensionality-reduction technique, each administrative region producing a time-series vector sample and an image sample; an LSTM and a CNN are constructed to extract features from the time-series vector samples and the image samples respectively; a Gaussian process component is introduced to explicitly model the spatial structure of the data and further improve accuracy; ground-measured winter wheat data are obtained and randomly divided into a training set and a validation set at a ratio of 8:2, and the model is trained; a model with higher accuracy is then selected to estimate the yield, producing the winter wheat yield result map. The method effectively extracts the spatial and temporal features of the remote sensing images: the CNN fully extracts the spatial information in the time-series remote sensing images, and the LSTM fully extracts the temporal information. A Gaussian process component is also introduced to explicitly model the spatial structure of the data, further improving the estimation accuracy. The method uses SAR data to achieve high-precision estimation of winter wheat yield.
Exemplary method
FIG. 1 is a flowchart of the SAR-image winter wheat yield estimation method of the present application. As shown in FIG. 1, the method comprises the following steps:
Step S101: the SAR images are preprocessed (orbit correction, thermal noise removal, radiometric calibration, deburst, multilooking, filtering, terrain correction).
Orbit correction supplies a more accurate orbit file for the SAR image, making subsequent processing more precise; thermal noise removal suppresses the additive thermal noise in the data; radiometric calibration makes each pixel value represent a real radar backscatter value, giving an accurate backscatter image; deburst effectively removes the signal-free portions of the Sentinel-1 IW SLC image; multilooking effectively suppresses speckle noise in the SAR image and greatly reduces the subsequent data volume; filtering further suppresses speckle noise to reduce its influence on subsequent processing; in addition to geocoding, terrain correction makes the preprocessed image better reflect the real ground objects. After preprocessing, backscatter images with different polarization modes are obtained.
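As a non-limiting illustration of this preprocessing chain, the following Python sketch assembles the listed steps with ESA SNAP's Python interface (snappy); the choice of software, the file names and the use of default operator parameters are assumptions for illustration only.

```python
import snappy
from snappy import GPF, ProductIO

HashMap = snappy.jpy.get_type('java.util.HashMap')

# SNAP GPF operators corresponding to the preprocessing steps listed above
STEPS = ['Apply-Orbit-File',      # orbit correction
         'ThermalNoiseRemoval',   # thermal noise removal
         'Calibration',           # radiometric calibration (sigma0 by default)
         'TOPSAR-Deburst',        # deburst
         'Multilook',             # multilooking
         'Speckle-Filter',        # speckle filtering
         'Terrain-Correction']    # geocoding + terrain correction

def preprocess(slc_path: str, out_path: str) -> None:
    product = ProductIO.readProduct(slc_path)                 # Sentinel-1 IW SLC scene
    for op in STEPS:
        product = GPF.createProduct(op, HashMap(), product)   # default parameters assumed
    ProductIO.writeProduct(product, out_path, 'GeoTIFF')      # VV/VH backscatter images

preprocess('S1A_IW_SLC_20230401.zip', 'sigma0_20230401.tif')  # hypothetical paths
```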
Step S102: the time-series backscatter images of different polarization modes are masked to remove non-winter-wheat pixels.
The backscatter images are masked in step S102 in order to remove the non-winter-wheat pixels in the images. The method extracts winter wheat using the winter wheat planting distribution data provided by the National Science and Technology Basic Condition Platform – National Ecological Science Data Center (http://www.nesdc.org.cn).
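A minimal Python sketch of this masking step is given below, assuming the backscatter image and the winter wheat distribution map have already been co-registered to the same grid; the file names are hypothetical.

```python
import numpy as np
import rasterio

with rasterio.open("sigma0_vv_20230401.tif") as src, \
     rasterio.open("winter_wheat_map.tif") as msk:
    sigma0 = src.read(1).astype("float32")        # one polarization, one date
    wheat = msk.read(1)                           # 1 = winter wheat, 0 = other classes
    profile = src.profile

masked = np.where(wheat == 1, sigma0, np.nan)     # non-winter-wheat pixels -> NaN

profile.update(count=1, dtype="float32", nodata=np.nan)
with rasterio.open("sigma0_vv_20230401_wheat.tif", "w", **profile) as dst:
    dst.write(masked, 1)
```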
Step S103: the time-series backscatter maps with non-winter-wheat pixels removed are partitioned according to the administrative boundaries of the area to be estimated.
The method estimates the average winter wheat yield of each administrative region, so the remote sensing images of each administrative region need to be processed separately. The time-series backscatter maps with non-winter-wheat pixels removed, obtained in step S102, are segmented using the administrative boundary vector map of the area to be estimated.
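The segmentation by administrative boundary can be sketched as follows in Python, assuming the boundary vector map is available as a shapefile; the file name and the attribute field are hypothetical.

```python
import geopandas as gpd
import rasterio
from rasterio.mask import mask

regions = gpd.read_file("admin_boundaries.shp")               # one polygon per region

with rasterio.open("sigma0_vv_20230401_wheat.tif") as src:
    for _, row in regions.iterrows():
        clipped, _ = mask(src, [row.geometry], crop=True, nodata=src.nodata)
        # `clipped` holds the winter-wheat backscatter of a single administrative region
        print(row["NAME"], clipped.shape)                     # "NAME" is a hypothetical attribute
```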
Step S104: neural network samples are generated using a histogram dimensionality-reduction technique, and each administrative region generates a time-series vector sample and an image sample.
After the dimension-reduction range of the histogram is determined, the value range is divided into a reasonable number of intervals, the pixel counts are discretized bin by bin to generate a pixel histogram, and the generated pixel histogram is normalized according to Equation 2.
Here $h_i$ is the pixel histogram vector generated after dimension reduction and $H_i$ is the normalized pixel histogram vector. The group of time-series remote sensing images corresponding to each administrative region generates a group of time-series pixel histogram vectors, which can be fused over time into a matrix. Each administrative region thus generates a time-series vector sample and an image sample.
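A minimal Python sketch of this histogram dimensionality reduction is given below; the backscatter value range, the number of bins, the unit-sum reading of Equation 2 and the acquisition dates are assumptions for illustration.

```python
import numpy as np

def histogram_feature(backscatter_db: np.ndarray,
                      value_range=(-25.0, 0.0), n_bins=32) -> np.ndarray:
    """Reduce one date's winter-wheat backscatter map to a normalised histogram."""
    pixels = backscatter_db[np.isfinite(backscatter_db)]      # winter-wheat pixels only
    h, _ = np.histogram(pixels, bins=n_bins, range=value_range)
    return h / max(h.sum(), 1)                                # assumed unit-sum normalisation (Eq. 2)

# One normalised histogram per acquisition date -> time-series vector sample;
# stacking the vectors over time gives the 2-D "image sample" for the region.
dates = ["20221101", "20230101", "20230301", "20230501"]      # hypothetical acquisitions
vectors = [histogram_feature(np.random.uniform(-25, 0, 10_000)) for _ in dates]
image_sample = np.stack(vectors, axis=0)                      # shape (T, n_bins)
```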
Step S105: an LSTM and a CNN are constructed to extract features from the time-series vector samples and the image samples respectively.
the method uses LSTM to extract the characteristics of time sequence vector samples, and mainly comprises an input layer, an LSTM unit layer and a full connection layer. Each LSTM cell receives as inputs the output of the previous time and the vector of the current time. Finally, a full connection layer is added. L2 loss is used for regression tasks. To prevent overfitting, we regularize the network by adding a discard layer with a discard rate of 0.75 after each state transition.
The method also uses a CNN to extract features from the image samples; it mainly consists of an input layer, 7 convolution layers, 7 activation layers, 7 batch normalization layers, 3 Dropout layers and 1 fully connected layer. The numbers of convolution kernels in convolution layers C1–C7 increase in sequence through 64, 128, 256 and 512, the kernels are 3×3, the strides are 1, 2, 1, 2 and 1 respectively, and each convolution layer uses zero padding of 1. Batch normalization and ReLU activation are applied after each convolution layer, and Dropout layers are added to prevent model overfitting.
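A corresponding PyTorch sketch of the CNN branch is given below; the exact per-layer channel widths, stride pattern, Dropout rate and Dropout placement are one plausible reading of the description, not a confirmed configuration.

```python
import torch
import torch.nn as nn

class CNNBranch(nn.Module):
    def __init__(self, in_ch: int = 1, feat_dim: int = 64):
        super().__init__()
        chans   = [64, 64, 128, 128, 256, 256, 512]   # assumed widths of C1..C7
        strides = [1, 2, 1, 2, 1, 2, 1]               # assumed stride pattern
        layers, prev = [], in_ch
        for i, (c, s) in enumerate(zip(chans, strides)):
            layers += [nn.Conv2d(prev, c, kernel_size=3, stride=s, padding=1),
                       nn.BatchNorm2d(c), nn.ReLU(inplace=True)]
            if i in (1, 3, 5):                        # 3 Dropout layers, placement and rate assumed
                layers.append(nn.Dropout(0.5))
            prev = c
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(prev, feat_dim)           # fully connected output feature

    def forward(self, x):                              # x: (batch, 1, T, n_bins) image sample
        f = self.pool(self.features(x)).flatten(1)
        return self.fc(f)
```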
Step S106: a Gaussian process component is introduced to explicitly model the spatial structure of the data and further improve the accuracy;
the method designs a linear gaussian process model in which the average function is linear with respect to the deep features and the covariance kernel depends on the spatial structure, the kernel function being shown in equation 3.
Wherein g loc -g′ loc Representing the distance between the training data and the test data, I.I 2 Representing L2 norms, σ and r loc Is the parameter of the ultrasonic wave to be used as the ultrasonic wave,is an additional gaussian noise term and I is the identity matrix.
The linear gaussian process model expression of the experimental design is as follows:
y(x)=f(x)+h(x) T β (4)
wherein f (x) to gp (0, k (x, x')). h (x) represents a feature vector extracted from the depth model based on the original data. Beta follows Gaussian priorB is a weight vector obtained by connecting the feature vector extracted from the depth model with the output layer, b=σ b I, wherein sigma b Is a hyper-parameter, and I is an identity matrix.
Step S107: the ground-measured winter wheat yield data are obtained, divided into a training set and a validation set, and used to train the model.
The ground-measured winter wheat yield data are obtained and divided into a training set and a validation set at a ratio of 8:2. The training set and the neural network samples are fed into the model for training, and the model accuracy is verified with the validation set. First, the batch size is set to 20, the maximum number of iterations to 1000 and the learning rate to 0.001; then the training data set and sample set are input into the network model built in steps S105–S106 to train the network, the network performance is tested with the validation data set every 10 epochs, and the model parameters obtained by the current training are saved.
Step S108: a model with higher accuracy is selected to estimate the yield and obtain the winter wheat yield result map. The optimal model parameters saved in step S107 are loaded, the samples are input into the trained network model, and the estimated yield of each administrative region is obtained as output.
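An inference and evaluation sketch for step S108 is given below; the checkpoint path is hypothetical, it reuses the tensors and model from the sketches above, and the RMSE/MAE computation mirrors the accuracy metrics reported in the following paragraph.

```python
import numpy as np
import torch

model = YieldNet()                                        # architecture from the sketch above
model.load_state_dict(torch.load("best_yield_model.pt"))  # hypothetical checkpoint path
model.eval()

with torch.no_grad():
    y_hat = model(seq, img).squeeze(1).numpy()            # one yield estimate per region

y_true = y.squeeze(1).numpy()
rmse = float(np.sqrt(np.mean((y_true - y_hat) ** 2)))
mae = float(np.mean(np.abs(y_true - y_hat)))
print(f"RMSE = {rmse:.1f} jin/mu, MAE = {mae:.1f} jin/mu")
```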
FIG. 1 is the technical flow chart of the high-resolution winter wheat yield estimation method based on synthetic aperture radar images. FIG. 2 shows one scene of the time-series SAR images used, a Sentinel-1 dual-polarization spaceborne SAR image. FIG. 3 shows one backscatter coefficient map after preprocessing and removal of non-winter-wheat pixels; it can be seen that the preprocessed image has less noise and that all non-winter-wheat pixels have been set to white. FIG. 4 shows the time-series vector sample and the image sample formed after histogram dimension reduction and normalization; the time-series vectors are fused over time to form the image sample. FIG. 5 shows the estimated winter wheat yield, i.e. the average winter wheat yield of each village in Yu County. The method of the application can effectively estimate the yield of winter wheat; comparison with the validation set shows that the winter wheat yield estimated by the method has high accuracy, with a root mean square error (RMSE) of 63.6 jin/mu, a mean absolute error (MAE) of 53.9 jin/mu and a coefficient of determination (R²) of 0.698 (1 jin/mu = 7.5 kg/ha), fully demonstrating the effectiveness of the method. The foregoing description covers only the preferred embodiments of the present application and is not intended to limit the scope of the present application.
In the description of the present application, it should be noted that orientation or positional terms such as "center", "lateral", "longitudinal", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise" and "counterclockwise" indicate orientations or positional relationships based on those shown in the drawings, are used merely for convenience and simplicity of description, do not indicate or imply that the devices or elements referred to must have a particular orientation or be constructed and operated in a particular orientation, and are not to be construed as limiting the scope of protection of the present application.
It should be noted that the terms "comprises" and "comprising," along with any variations thereof, in the description and claims of the present application are intended to cover a non-exclusive inclusion, such as a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
Note that the above is only a preferred embodiment of the present application and the technical principles applied. Those skilled in the art will understand that the present application is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the application. Therefore, although the present application has been described in connection with the above embodiments, it is not limited to them, and many other equally effective embodiments may be devised without departing from the concept of the application; the scope of the application is determined by the scope of the appended claims.

Claims (10)

1. A high-resolution winter wheat yield estimation method based on synthetic aperture radar images, characterized by comprising the following steps:
Step S101, preprocessing time-series synthetic aperture radar images to obtain time-series backscatter images with different polarization modes;
Step S102, masking the time-series backscatter images of different polarization modes to remove non-winter-wheat pixels;
Step S103, partitioning the time-series backscatter maps from which the non-winter-wheat pixels have been removed according to the administrative boundaries of the area to be estimated;
Step S104, generating neural network samples using a histogram dimension-reduction method, each administrative region generating a time-series vector sample and an image sample;
Step S105, constructing an LSTM and a CNN to extract features from the time-series vector samples and the image samples respectively, so as to effectively extract the spatial and temporal features of the remote sensing images and fully exploit their spatio-temporal characteristics;
Step S106, introducing a Gaussian process to model the spatial structure of the features extracted by the LSTM and the CNN, further improving the accuracy of yield estimation;
Step S107, obtaining ground-measured winter wheat yield data for the area to be estimated, dividing the data into a training set and a validation set, and training the model;
Step S108, selecting a network model whose coefficient of determination $R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$ is greater than 0.65 to perform yield estimation, where $y_i$ is the measured value, $\hat{y}_i$ is the predicted value, $\bar{y}$ is the mean of the measured data set, and $n$ is the total number of data points; the winter wheat yield result is thereby obtained.
2. The method for estimating the yield of winter wheat with high resolution based on the synthetic aperture radar image according to claim 1, wherein in step S101 the preprocessing includes orbit correction, thermal noise removal, radiometric calibration, deburst, multilooking, filtering and terrain correction, and through preprocessing each pixel of the image represents a true radar backscatter coefficient, thereby forming backscatter images with different polarization modes.
3. The method for estimating the yield of winter wheat with high resolution based on the synthetic aperture radar image as claimed in claim 1, wherein in step S102, winter wheat is extracted by using winter wheat planting distribution data provided by national science and technology basic condition platform-national ecological science data center.
4. The method for estimating the yield of winter wheat with high resolution based on the synthetic aperture radar image according to claim 1, wherein in step S103, the backscattering image is segmented according to the administrative boundaries of the area to be estimated, and particularly, the administrative area vector diagram is used to segment the time-series backscattering coefficient image from which the winter wheat pixels are extracted.
5. The method for estimating the yield of winter wheat with high resolution based on the synthetic aperture radar image as claimed in claim 1, wherein the step S104 comprises the steps of:
firstly, performing histogram dimension reduction on the winter wheat time-series backscatter maps within an administrative region;
then, normalizing the reduced histogram according to Equation 2, $H_i = h_i / \sum h_i$ (2),
where $h_i$ is the pixel histogram generated after dimension reduction and $H_i$ is the normalized pixel histogram vector;
finally, time series vector samples and image samples are generated, wherein each neural network sample generated by each administrative area comprises one time series vector sample and one image sample.
6. The method for estimating the yield of winter wheat with high resolution based on the synthetic aperture radar image according to claim 1, wherein in step S106 a Gaussian process model is introduced,
in which the mean function is linear in the deep features and the covariance kernel depends on the spatial structure; the kernel function is shown in Equation 3:
$$k(x, x') = \sigma^2 \exp\left(-\frac{\lVert g_{loc} - g'_{loc} \rVert_2^2}{r_{loc}^2}\right) + \sigma_e^2 I \quad (3)$$
where $g_{loc} - g'_{loc}$ represents the distance between the training data and the test data, $\lVert \cdot \rVert_2$ denotes the L2 norm, $\sigma$ and $r_{loc}$ are hyperparameters, $\sigma_e^2$ is an additional Gaussian noise term, and $I$ is the identity matrix;
the linear Gaussian process model is expressed as:
$$y(x) = f(x) + h(x)^{T}\beta \quad (4)$$
where $f(x) \sim \mathcal{GP}(0, k(x, x'))$; $h(x)$ denotes the feature vector extracted from the original data by the deep model; $\beta$ follows the Gaussian prior $\beta \sim \mathcal{N}(b, B)$, in which $b$ is the weight vector connecting the feature vector extracted by the deep model to the output layer, $B = \sigma_b I$, $\sigma_b$ is a hyperparameter, and $I$ is the identity matrix.
7. The method for estimating the yield of high-resolution winter wheat based on the synthetic aperture radar image as recited in claim 1, wherein the training of the network model in step S107 comprises the steps of:
randomly dividing the obtained ground-measured yield data into a training set and a validation set at a ratio of 8:2;
inputting the constructed training set into the built network model for training, judging the training effect of the network model through accuracy evaluation indices, and adjusting the parameters of the network model to obtain the optimal yield estimation model.
8. A high-resolution winter wheat yield estimation system based on synthetic aperture radar images, comprising: a preprocessing unit configured to perform orbit correction, thermal noise removal, radiometric calibration, deburst, multilooking, filtering and terrain correction on the SAR images to obtain backscatter images with different polarization modes, then mask the backscatter images to remove non-winter-wheat pixels, and segment the resulting images according to the administrative boundaries of the area to be estimated;
a sample generation unit configured to generate neural network samples, including time-series vector samples and image samples, using a histogram dimensionality-reduction technique;
a model construction unit configured to construct an LSTM network and a CNN network to extract features from the time-series vector samples and the image samples respectively, and then introduce a Gaussian process to model the spatial structure of the data;
and a testing unit configured to estimate the winter wheat yield of the input time-series synthetic aperture radar images based on the optimal network model and parameters, thereby obtaining the winter wheat yield estimation result based on time-series synthetic aperture radar.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the device in which the computer-readable storage medium is located to perform the high-resolution winter wheat yield estimation method based on synthetic aperture radar images according to any one of claims 1 to 7.
10. An electronic device, comprising: a memory, a processor, and a program stored in the memory and executable on the processor, which when executed implements the high resolution winter wheat yield estimation method based on synthetic aperture radar images as claimed in any one of claims 1 to 7.
CN202311100613.8A 2023-08-30 2023-08-30 High-resolution winter wheat yield estimation method, system, storage medium and electronic equipment based on synthetic aperture radar image Pending CN116994144A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311100613.8A CN116994144A (en) 2023-08-30 2023-08-30 High-resolution winter wheat yield estimation method, system, storage medium and electronic equipment based on synthetic aperture radar image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311100613.8A CN116994144A (en) 2023-08-30 2023-08-30 High-resolution winter wheat yield estimation method, system, storage medium and electronic equipment based on synthetic aperture radar image

Publications (1)

Publication Number Publication Date
CN116994144A true CN116994144A (en) 2023-11-03

Family

ID=88533903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311100613.8A Pending CN116994144A (en) 2023-08-30 2023-08-30 High-resolution winter wheat yield estimation method, system, storage medium and electronic equipment based on synthetic aperture radar image

Country Status (1)

Country Link
CN (1) CN116994144A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117556956A (en) * 2023-11-24 2024-02-13 中国科学院空天信息创新研究院 Rice unit area yield estimation method and device
CN117556956B (en) * 2023-11-24 2024-06-11 中国科学院空天信息创新研究院 Rice unit area yield estimation method and device

Similar Documents

Publication Publication Date Title
CN108038445B (en) SAR automatic target identification method based on multi-view deep learning framework
Zhang et al. Deep learning-based automatic recognition network of agricultural machinery images
Wang et al. SSRNet: In-field counting wheat ears using multi-stage convolutional neural network
CN105549009B (en) A kind of SAR image CFAR object detection methods based on super-pixel
CN104794730B (en) SAR image segmentation method based on super-pixel
CN111325381A (en) Multi-source heterogeneous farmland big data yield prediction method, system and device
CN112906300B (en) Polarization SAR soil humidity inversion method based on double-channel convolutional neural network
Wang et al. A deep learning framework combining CNN and GRU for improving wheat yield estimates using time series remotely sensed multi-variables
CN116994144A (en) High-resolution winter wheat yield estimation method, system, storage medium and electronic equipment based on synthetic aperture radar image
CN114529097B (en) Multi-scale crop phenological period remote sensing dimensionality reduction prediction method
Zhu et al. A deep learning crop model for adaptive yield estimation in large areas
CN104751183B (en) Classification of Polarimetric SAR Image method based on tensor MPCA
CN113963262A (en) Mining area land coverage classification method based on depth feature fusion model
CN109146925A (en) Conspicuousness object detection method under a kind of dynamic scene
Chaudhari et al. Performance analysis of CNN, Alexnet and vggnet models for drought prediction using satellite images
CN115049925A (en) Method for extracting field ridge, electronic device and storage medium
CN112883915A (en) Automatic wheat ear identification method and system based on transfer learning
Wahbi et al. A deep learning classification approach using high spatial satellite images for detection of built-up areas in rural zones: Case study of Souss-Massa region-Morocco
CN111738278A (en) Underwater multi-source acoustic image feature extraction method and system
CN114265062B (en) InSAR phase unwrapping method based on phase gradient estimation network
Meng et al. Physical knowledge-enhanced deep neural network for sea surface temperature prediction
CN109461127B (en) SAR image sparse regularization feature enhancement method with interpretation as purpose
Venkatanaresh et al. A new approach for crop type mapping in satellite images using hybrid deep capsule auto encoder
Zhang et al. Simulation model of vegetation dynamics by combining static and dynamic data using the gated recurrent unit neural network-based method
CN114782825B (en) Crop identification method and device based on incomplete remote sensing data and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination