CN112464172A - Growth parameter active and passive remote sensing inversion method and device - Google Patents


Info

Publication number
CN112464172A
CN112464172A (application CN202011457497.1A)
Authority
CN
China
Prior art keywords
remote sensing
data
optical
semi-supervised
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011457497.1A
Other languages
Chinese (zh)
Other versions
CN112464172B (en)
Inventor
雒培磊
黄文江
叶回春
廖静娟
张弼尧
任淯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS filed Critical Aerospace Information Research Institute of CAS
Priority to CN202011457497.1A priority Critical patent/CN112464172B/en
Publication of CN112464172A publication Critical patent/CN112464172A/en
Application granted granted Critical
Publication of CN112464172B publication Critical patent/CN112464172B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Optimization (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Pure & Applied Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Medical Informatics (AREA)
  • Operations Research (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a growth parameter active and passive remote sensing inversion method and a device, comprising the following steps: acquiring optical and radar remote sensing images of an area to be predicted; inputting the optical and radar remote sensing images of the area to be predicted into a trained semi-supervised deep learning model, and outputting the growth parameters of the area to be predicted; the trained semi-supervised deep learning model is obtained by training a pre-constructed semi-supervised deep learning model based on unmarked data samples and marked data samples. According to the growth parameter active and passive remote sensing inversion method and device, deep extraction and effective fusion of optical and radar remote sensing characteristics are achieved through the semi-supervised deep learning model, the problem that common optical and radar remote sensing data are insufficiently combined is solved, information is mined from unmarked remote sensing data, the problem that a data augmentation method cannot introduce new information is effectively solved, and therefore the inversion accuracy of growth parameters is improved.

Description

Growth parameter active and passive remote sensing inversion method and device
Technical Field
The invention relates to the technical field of quantitative remote sensing inversion, in particular to a growth parameter active and passive remote sensing inversion method and device.
Background
Leaf Area Index (LAI) and biomass are important growth parameters of plants such as corn and the like, can provide important information for growth condition evaluation, temperature stress, water stress, pest control, early yield evaluation and the like, and are widely used for field management decision making and early yield evaluation at present.
The commonly used methods for combining optical and radar data mainly comprise: vegetation index fusion, segmented inversion and data assimilation. The vegetation index fusion method is the simplest and easiest to use, but it only uses the information of the few wave bands needed to calculate the vegetation index, so full utilization of the optical and radar data is not achieved. In order to avoid saturation in inversion, researchers have divided the inversion process of growth parameters into two sections, LAI < 3 and LAI ≥ 3; researchers have also input partial features extracted from optical and radar data into a crop growth model through a data assimilation method to realize combined inversion of growth parameters from optical and radar data.
It can be seen that although the above methods all implement active and passive inversion of growth parameters, efficient and deep fusion of optical and radar data is still not implemented, resulting in still low inversion accuracy of growth parameters.
Disclosure of Invention
The invention provides a growth parameter active and passive remote sensing inversion method and device, which are used for solving the technical problems in the prior art.
The invention provides a growth parameter active and passive remote sensing inversion method, which comprises the following steps:
acquiring optical and radar remote sensing images of an area to be predicted;
inputting the optical and radar remote sensing images of the area to be predicted into a trained semi-supervised deep learning model, and outputting the growth parameters of the area to be predicted;
the trained semi-supervised deep learning model is obtained by training a pre-constructed semi-supervised deep learning model based on unmarked data samples and marked data samples.
According to the growth parameter active and passive remote sensing inversion method provided by the invention, the pre-constructed semi-supervised deep learning model is trained by the following steps:
carrying out dicing processing on the optical and radar remote sensing image samples according to a 3 x 3 window;
taking a central pixel of each cut block to form a central pixel data sample set, and taking a neighborhood pixel of the central pixel of each cut block to form a neighborhood pixel data sample set;
and training a pre-constructed semi-supervised deep learning model by using the central pixel data sample set and the neighborhood pixel data sample set.
According to the growth parameter active and passive remote sensing inversion method provided by the invention, before taking a central pixel of each cut block to form a central pixel data sample set, and taking a neighborhood pixel of the central pixel of each cut block to form a neighborhood pixel data sample set, the method further comprises the following steps:
and filtering each cut block to filter abnormal cut blocks.
According to the growth parameter active and passive remote sensing inversion method provided by the invention, loss functions in training of a pre-constructed semi-supervised deep learning model comprise a neighborhood pixel or central pixel identification loss function based on negative sampling, a semi-supervised loss function based on reconstruction and a regression loss function based on minimum mean square error.
According to the growth parameter active and passive remote sensing inversion method provided by the invention, the semi-supervised deep learning model comprises a fusion layer and a regression layer;
the fusion layer is used for fusing optical and radar remote sensing data in the central pixel and optical and radar remote sensing data in the neighborhood pixels;
and the regression layer is used for performing cascade processing on the data output by the fusion layer and the time vector and inputting the data after the cascade processing into the multilayer perceptron.
According to the growth parameter active and passive remote sensing inversion method provided by the invention, the multilayer perceptron is composed of multiple layers of full-connection layers, and the activation function of the multilayer perceptron is a linear rectification function.
According to the growth parameter active and passive remote sensing inversion method provided by the invention, before the optical and radar remote sensing images of the area to be predicted are input into the trained semi-supervised deep learning model, the method further comprises the following steps:
and preprocessing and normalizing the optical and radar remote sensing images of the area to be predicted.
The invention also provides a growth parameter active and passive remote sensing inversion device, which comprises:
the acquisition module is used for acquiring optical and radar remote sensing images of the area to be predicted;
the inversion module is used for inputting the optical and radar remote sensing images of the area to be predicted into a trained semi-supervised deep learning model and outputting the growth parameters of the area to be predicted;
the trained semi-supervised deep learning model is obtained by training a pre-constructed semi-supervised deep learning model based on unmarked data samples and marked data samples.
The invention also provides electronic equipment which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the program to realize the steps of any one of the growth parameter active and passive remote sensing inversion methods.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for active and passive remote sensing inversion of growth parameters as described in any of the above.
According to the growth parameter active and passive remote sensing inversion method and device, deep extraction and effective fusion of optical and radar remote sensing characteristics are achieved through the semi-supervised deep learning model, the problem that common optical and radar remote sensing data are insufficiently combined is solved, information is mined from unmarked remote sensing data, the problem that a data augmentation method cannot introduce new information is effectively solved, and therefore the inversion accuracy of growth parameters is improved.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a growth parameter active and passive remote sensing inversion method provided by the invention;
FIG. 2 is a schematic diagram of a neighborhood pixel and center pixel construction model provided by the invention;
FIG. 3 is a block diagram of a semi-supervised LAI and biomass inversion based framework model provided by the present invention;
FIG. 4 is a logic flow diagram for training a semi-supervised LAI and biomass inversion based framework model provided by the present invention;
FIG. 5 is a schematic flow chart of a growth parameter active and passive remote sensing inversion device provided by the invention;
fig. 6 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
Leaf Area Index (LAI) and biomass are important growth parameters for plants such as corn, soybean, etc. Taking corn as an example, LAI can provide important information for corn growth condition evaluation, temperature stress, water stress, pest control, early yield assessment and the like, and is widely used for field management decision making and early yield assessment at present. Traditional methods of obtaining LAI and biomass rely primarily on field sampling and manual measurements, which are time consuming and labor intensive. The remote sensing inversion method can realize large-area measurement by constructing an inversion model by means of a small amount of measured data and remote sensing data. However, the inversion method based on optical remote sensing is susceptible to cloud and rain weather, and saturation phenomenon is easy to occur, so that inversion accuracy is limited. The radar remote sensing has certain penetrability, can reflect the three-dimensional structure characteristics of the corn canopy, is beneficial to overcoming the saturation phenomenon in the optical remote sensing, but is easily influenced by soil background, topographic factors and the like, so that errors still exist in estimation of LAI and biomass. Therefore, the realization of active and passive inversion of maize LAI and biomass based on optical and radar data is currently a research focus.
The commonly used methods for combining optical and radar data mainly comprise: vegetation index fusion, segmented inversion and data assimilation. The vegetation index fusion method is the simplest and easiest to use, but it only uses the information of the few wave bands needed to calculate the vegetation index, so full utilization of the optical and radar data is not achieved. In order to avoid the saturation phenomenon in inversion, one proposal divides the inversion process of growth parameters into two sections, LAI < 3 and LAI ≥ 3; another proposal inputs partial features extracted from optical and radar data into a crop growth model through a data assimilation method to realize combined inversion of growth parameters from optical and radar data. It is easy to see that, although the above methods all implement active and passive inversion of growth parameters, efficient and deep fusion of optical and radar data is still not achieved, so the inversion accuracy of growth parameters remains low. In addition, current growth parameter inversion research extracts features based on empirical formulas; implicit, high-dimensional and complex expressive features are difficult to discover with manually designed features, the extracted feature types are limited, the multiband deep-level information contained in remote sensing data is not fully mined and utilized, the canopy structure is insufficiently described, and the inversion accuracy of growth parameters is therefore limited.
In active and passive inversion models of corn growth parameters, statistical learning (empirical models) occupies an important position, as physical models and semi-empirical models are inconvenient for combining optical and radar data. Statistical learning methods mainly comprise linear and nonlinear methods; in growth parameter inversion combining optical and radar data, inversion models constructed from traditional machine learning algorithms (such as support vector regression and random forest), exponential regression and the like achieve better inversion effects than models constructed from multiple linear regression, and such complex nonlinear models are more suitable for inversion of vegetation growth parameters. Relying on its powerful autonomous learning capability, deep learning has been proved to have great advantages in multi-scale, multi-level remote sensing feature extraction and in fusing remote sensing features from low level to high level; it can automatically learn and mine the information contained in remote sensing data, greatly expands the dimensionality and depth of the features, implicitly accounts for the influence of factors such as soil and weather on the inversion model, and is of great significance for quantitative remote sensing inversion. However, deep learning, as a data-driven method, depends strongly on the amount of data; that is, large-scale data is required to complete the training of a deep model. When the data volume is insufficient, the extremely strong fitting capacity of a deep model can lead to overfitting, which greatly harms the generalization capability of the model and makes it difficult to obtain good results.
Because the acquisition cost of measured LAI and biomass data is high, and the data volume falls far short of the requirements of deep learning, deep learning is difficult to apply in the field of remote sensing inversion. Data augmentation methods can expand the samples, alleviate the contradiction between the high cost of obtaining measured data in maize LAI and biomass inversion and the dependence of deep learning on large numbers of samples, mitigate the tendency of deep neural network models to overfit when the data volume is insufficient, and improve the generalization capability of deep models. However, from the perspective of information theory, data augmentation introduces no new information and therefore cannot solve the problem of poor model performance caused by an insufficient amount of information in the data; thus, how to weaken the model's dependence on labeled data and increase the amount of information by using easily acquired unlabeled data is a key problem to be solved urgently.
The invention aims to provide a Semi-supervised LAI and Biomass Inversion Framework (SLBIF) model (also called a semi-supervised deep learning model) for active and passive remote sensing inversion of LAI and biomass based on semi-supervised deep learning. It solves the problem of poor model performance caused by the insufficient amount of information in traditional data augmentation methods by introducing a large amount of unlabeled remote sensing data; it proposes 2 objective functions based on the pixel feature space continuity assumption to assist the neural network model in training and in extracting deep features from remote sensing data, overcoming the insufficient expressive capability of features extracted by empirical formulas; and it applies the deep learning method, with its superior deep feature fusion and nonlinear fitting capabilities, to the active and passive inversion of LAI and biomass to improve inversion accuracy.
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a growth parameter active and passive remote sensing inversion method provided by the present invention, and as shown in fig. 1, the present invention provides a growth parameter active and passive remote sensing inversion method, which includes:
101, acquiring optical and radar remote sensing images of an area to be predicted;
102, inputting the optical and radar remote sensing images of the area to be predicted into a trained semi-supervised deep learning model, and outputting the growth parameters of the area to be predicted; the trained semi-supervised deep learning model is obtained by training a pre-constructed semi-supervised deep learning model based on unmarked data samples and marked data samples.
Optionally, the training step of the pre-constructed semi-supervised deep learning model is as follows:
carrying out dicing processing on the optical and radar remote sensing image samples according to a 3 x 3 window;
taking a central pixel of each cut block to form a central pixel data sample set, and taking a neighborhood pixel of the central pixel of each cut block to form a neighborhood pixel data sample set;
and training a pre-constructed semi-supervised deep learning model by using the central pixel data sample set and the neighborhood pixel data sample set.
Optionally, before taking a central pixel of each slice to form a central pixel data sample set, taking a neighborhood pixel of the central pixel of each slice to form a neighborhood pixel data sample set, the method further includes:
and filtering each cut block to filter abnormal cut blocks.
Optionally, the loss function in training of the pre-constructed semi-supervised deep learning model includes a neighborhood pixel or central pixel identification loss function based on negative sampling, a semi-supervised loss function based on reconstruction, and a regression loss function based on minimum mean square error.
Optionally, the semi-supervised deep learning model comprises a fusion layer and a regression layer;
the fusion layer is used for fusing optical and radar remote sensing data in the central pixel and optical and radar remote sensing data in the neighborhood pixels;
and the regression layer is used for performing cascade processing on the data output by the fusion layer and the time vector and inputting the data after the cascade processing into the multilayer perceptron.
Optionally, the multilayer perceptron is composed of multiple fully-connected layers, and the activation function of the multilayer perceptron is a linear rectification function.
Optionally, before inputting the optical and radar remote sensing images of the area to be predicted into the trained semi-supervised deep learning model, the method further includes:
and preprocessing and normalizing the optical and radar remote sensing images of the area to be predicted.
Specifically, aiming at the problem of low inversion accuracy in research on inversion of LAI and other growth parameters combining optical and radar remote sensing data, the invention provides a method for performing active and passive inversion of LAI and biomass with the SLBIF model. The method mainly comprises four parts: the pixel feature space continuity assumption, the construction of neighborhood pixels and center pixels, the construction and training of SLBIF, and the LAI and biomass inversion results.
1. And (5) pixel feature space continuity assumption.
The spatial distribution of surface features is correlated, so the continuity and similarity of adjacent pixels is one of the characteristics of remote sensing images; that is, two adjacent pixels show continuous change with high probability. The similarity of spatial ground objects corresponds exactly to the continuity assumption in semi-supervised learning, so a semi-supervised model can be constructed by utilizing the similarity between adjacent pixels. Specifically, the continuity of adjacent pixels can be formally expressed as: the pixels P(i,j-1), P(i,j+1), P(i-1,j), P(i+1,j), etc. have continuity with the pixel P(i,j), i.e.:

D(P(i,j), P(u,v)) < D(P(i,j), P(m,n)),  (u,v) ∈ N(i,j), (m,n) ∉ N(i,j)   (1)

wherein P(i,j) is the pixel vector (or pixel value, brightness value, gray value, etc.) at position (i,j), which in the present invention consists of the multiple bands of the optical and radar images (for example, 8 bands are usually used); D is a distance measurement function, such as the Euclidean distance or the vector cosine distance, used for measuring the similarity of two given pixels; N(i,j) denotes the 8-neighborhood of position (i,j); and P(m,n) are pixels that are not within the neighborhood.
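The continuity assumption can be illustrated with a small numerical sketch; the pixel values below are invented for illustration, and the Euclidean distance stands in for the distance measurement function D:

```python
import numpy as np

def pixel_distance(p, q):
    """Euclidean distance D between two pixel vectors, one possible choice
    for the distance measurement function in formula (1)."""
    return float(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))

# Toy 8-band pixel vectors (6 optical bands + 2 radar bands; values invented).
center   = np.array([0.30, 0.28, 0.25, 0.40, 0.35, 0.20, -12.0, -18.0])
neighbor = np.array([0.31, 0.27, 0.26, 0.41, 0.34, 0.21, -11.8, -18.2])
distant  = np.array([0.05, 0.06, 0.04, 0.10, 0.08, 0.03, -20.0, -25.0])

# Under the continuity assumption, a neighborhood pixel is closer to the
# center pixel than a pixel outside the neighborhood.
assert pixel_distance(center, neighbor) < pixel_distance(center, distant)
```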
2. And constructing a neighborhood pixel and a center pixel.
FIG. 2 is a schematic diagram of the construction of neighborhood pixels and the center pixel. As shown in FIG. 2, in a 3×3 block the middle pixel is called the center pixel, and the 8 pixels surrounding it are the neighborhood pixels or context pixels.
Firstly, the optical and radar remote sensing images, after preprocessing such as radiometric calibration and geometric correction, are respectively normalized and diced according to 3×3 windows; in this way there is no need to distinguish the study area or the ground-object types.
Each block is then filtered to prevent interference from clouds or image-free areas. Specifically, cloud-contaminated blocks are filtered out according to abnormal DN values (also called pixel values, brightness values, gray values, etc.) of the optical images, and the radar data blocks at the corresponding positions are deleted; since cloud pixels are bright, a threshold value, for example 240, may be set to filter out pixels with values greater than 240 so as to prevent cloud interference.
Meanwhile, the blocks of image-free areas with a DN value of 0 are removed according to the DN value.
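A minimal sketch of the dicing and filtering step described above, assuming numpy arrays of shape (height, width, bands) and the example cloud threshold of 240; the function name and the toy scene are illustrative, not the patent's implementation:

```python
import numpy as np

def dice_and_filter(optical, radar, cloud_threshold=240):
    """Cut co-registered optical/radar images into 3x3 blocks and drop blocks
    that contain cloud pixels (bright optical DN values above the threshold)
    or image-free pixels (DN value of 0), deleting the radar block whenever
    the optical block at the same position is deleted."""
    h, w, _ = optical.shape
    blocks = []
    for i in range(0, h - 2, 3):
        for j in range(0, w - 2, 3):
            opt = optical[i:i + 3, j:j + 3, :]
            sar = radar[i:i + 3, j:j + 3, :]
            if (opt > cloud_threshold).any():  # cloud pixels are bright
                continue
            if (opt == 0).any():               # image-free area
                continue
            blocks.append((opt, sar))
    return blocks

# Toy 6x6 scene with 6 optical and 2 radar bands; one block is "cloudy".
optical = np.full((6, 6, 6), 100.0)
radar = np.full((6, 6, 2), -15.0)
optical[0:3, 0:3, :] = 255.0
blocks = dice_and_filter(optical, radar)
```

Of the four 3×3 blocks in the toy scene, the cloudy one is dropped and three remain.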
Finally, the center pixel of the k-th block is taken to form a center pixel data set: the optical data has 6 wave bands in total and is recorded as x_opt_c(k) ∈ R^6, wherein R^6 represents the set of six-dimensional real vectors; the radar data has 2 wave bands in total and is recorded as x_sar_c(k) ∈ R^2. At the same time, the 8 pixels in the neighborhood are taken to form a neighborhood pixel data set: the optical neighborhood pixels are recorded as X_opt_t(k) ∈ R^(8×6), and the radar neighborhood pixels as X_sar_t(k) ∈ R^(8×2), thereby completing the construction of the neighborhood pixels and the center pixel.
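The construction of the center pixel and neighborhood pixel data from one 3×3 block can be sketched as follows (shapes follow the 6 optical and 2 radar bands described above; the function name is hypothetical):

```python
import numpy as np

def split_center_context(block_opt, block_sar):
    """From one 3x3 block, take the center pixel and its 8 neighborhood
    (context) pixels, separately for 6-band optical and 2-band radar data."""
    x_opt_c = block_opt[1, 1, :]          # center pixel, optical: shape (6,)
    x_sar_c = block_sar[1, 1, :]          # center pixel, radar: shape (2,)
    mask = np.ones((3, 3), dtype=bool)
    mask[1, 1] = False                    # keep the 8 surrounding positions
    x_opt_t = block_opt[mask]             # neighborhood pixels, optical: (8, 6)
    x_sar_t = block_sar[mask]             # neighborhood pixels, radar: (8, 2)
    return x_opt_c, x_sar_c, x_opt_t, x_sar_t

block_opt = np.arange(3 * 3 * 6, dtype=float).reshape(3, 3, 6)
block_sar = np.arange(3 * 3 * 2, dtype=float).reshape(3, 3, 2)
x_opt_c, x_sar_c, x_opt_t, x_sar_t = split_center_context(block_opt, block_sar)
```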
3. Construction and training of LAI and biomass active and passive inversion model SLBIF
According to the pixel feature space continuity assumption, the invention converts the training target of the model into the problem of identifying the center pixel and its neighborhood pixels, designs objective functions (loss functions) accordingly, and thus provides the SLBIF model. By utilizing the idea of semi-supervised learning and introducing a large amount of unlabeled data to enhance the representation capability of the model, the problem of weak model performance caused by insufficient information content is solved, achieving the purpose of improving LAI and biomass inversion accuracy.
(1) Structure of SLBIF model
Fig. 3 is a structural diagram of a semi-supervised LAI and biomass inversion framework model according to the present invention, and as shown in fig. 3, the model mainly includes two parts, namely a fusion layer and a regression layer.
1) Fusion layer
The fusion layer is an important component of a semi-supervised LAI and biomass inversion framework, and is mainly used for completing fusion of optical and radar data. In a semi-supervised LAI and biomass inversion framework, fusion processing of neighborhood pixels and center pixels is synchronously completed. The fusion layer can be any deep neural network model for fusing optical and radar remote sensing data, for example, a twin neural network model that can employ a gating mechanism. The SLBIF model fuses optical and radar remote sensing data in a central pixel and a neighborhood pixel at the same time, and specifically, a fusion layer of the SLBIF model is represented as follows:
y_(i,c) = F_fuse(x_opt_c(i), x_sar_c(i)),  y_(i,t) = F_fuse(X_opt_t(i), X_sar_t(i))   (2)

wherein y_(i,c) and y_(i,t) are the vectors output by the SLBIF fusion layer after fusing the optical and radar remote sensing data of the i-th center pixel and of its neighborhood pixels, respectively, and F_fuse(.) denotes the fusion network.
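Since the fusion layer may be any deep network that fuses optical and radar data, the sketch below shows one possible gated fusion of a single pixel's optical and radar features in plain numpy; the gating design and all weights are illustrative assumptions, not the patent's fusion layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_fusion(x_opt, x_sar, W_o, W_s, W_g):
    """Fuse one pixel's optical (6-band) and radar (2-band) features into a
    single vector, with a sigmoid gate weighing the two modality embeddings."""
    h_o = np.tanh(x_opt @ W_o)                                     # optical embedding
    h_s = np.tanh(x_sar @ W_s)                                     # radar embedding
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([h_o, h_s]) @ W_g)))  # modality gate
    return g * h_o + (1.0 - g) * h_s                               # fused vector

d = 16  # illustrative fused-feature dimension
W_o = rng.normal(size=(6, d))
W_s = rng.normal(size=(2, d))
W_g = rng.normal(size=(2 * d, d))

# Fuse a (random, stand-in) center pixel; neighborhood pixels would be fused
# through the same network in the same way.
y_c = gated_fusion(rng.normal(size=6), rng.normal(size=2), W_o, W_s, W_g)
```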
2) Regression layer
The main tasks of the regression layer are to calculate a semi-supervised objective function and realize regression mapping from the remote sensing characteristics to the LAI and biomass measured values. In view of the differences in LAI and biomass, the present invention performs regression using different parameters for LAI and biomass, respectively. Meanwhile, a multi-task learning mechanism is utilized to carry out LAI and biomass inversion so as to enhance the generalization capability of the model and improve the inversion accuracy of the model.
The regression layer first cascades the vectors output by the fusion layer (the fused optical and radar remote sensing data of the center pixel and of the neighborhood pixels) with the time vector to obtain the input data of the regression layer:

x_reg(i) = [y_(i,c); y_(i,t); x_timestep]   (3)

wherein x_reg(i) represents the input vector of the regression layer formed from the i-th center pixel and its neighborhood pixels, and x_timestep represents the time vector. A Multi-layer Perceptron is then used to complete the regression mapping to the final LAI and biomass:

[LAI_i, Biomass_i] = f_MLP(x_reg(i))   (4)

wherein f_MLP(.) is a multilayer perceptron composed of multiple fully-connected layers, with ReLU as the activation function. The multilayer perceptron applies a nonlinear transformation to the cascaded vector; moreover, fully-connected layers are commonly used as the last layers of deep neural networks and are capable of completing the final mapping.
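A minimal numpy sketch of this regression mapping: the cascaded vector is passed through a ReLU fully-connected layer and mapped to the two tasks with separate output weights. The dimensions, the 4-element time vector, and all weights are illustrative assumptions:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def regression_layer(y_c, y_t, x_time, params):
    """Cascade the fused center/context vectors with the time vector and map
    through a ReLU fully-connected layer to the two tasks, with separate
    output weights for LAI and biomass (a multi-task sketch)."""
    x = np.concatenate([y_c, y_t, x_time])
    h = relu(x @ params["W1"] + params["b1"])
    return float(h @ params["w_lai"]), float(h @ params["w_bio"])

rng = np.random.default_rng(0)
d = 16  # fused-feature dimension; the 4-element time vector is also illustrative
params = {
    "W1": rng.normal(size=(2 * d + 4, 32)),
    "b1": np.zeros(32),
    "w_lai": rng.normal(size=32),
    "w_bio": rng.normal(size=32),
}
lai, biomass = regression_layer(rng.normal(size=d), rng.normal(size=d),
                                rng.normal(size=4), params)
```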
(2) Objective function
The invention proposes 3 objective functions: first, a neighborhood pixel (or center pixel) identification loss function based on negative sampling; second, a semi-supervised loss function based on reconstruction; and third, a regression loss function based on the minimum Mean Square Error (MSE). Together these realize the construction of an LAI and biomass semi-supervised inversion model under the condition of introducing unlabeled data, thereby solving the problem of weak inversion model performance caused by an insufficient amount of information in the data. The details are as follows:
1) Neighborhood pixel (or center pixel) identification loss function based on negative sampling
According to the pixel feature space continuity assumption, under the distance function measure (formula (1)), the distance between a center pixel and its corresponding neighborhood pixels is smaller than the distance between the center pixel and non-corresponding neighborhood pixels. Utilizing this property, the invention proposes a neighborhood pixel (or center pixel) identification loss function based on negative sampling to assist the training of the deep neural network model with unlabeled data; that is, a task is established of identifying, for a given center pixel, its corresponding neighborhood pixels from a given set of neighborhood pixels (which contains the neighborhood pixels corresponding to the center pixel, the rest being negative samples), and vice versa. In this way, the model can be trained on a large amount of unlabeled data to accurately identify the neighborhood pixels (or center pixels) of each sample, which is of positive significance for preventing overfitting and for training a deep neural network model with stronger representation capability.
Given a set of training samples (which may be unlabeled or labeled; the measured LAI and biomass values are not used here), traverse the group of data, alternately using each center pixel as the given pixel and selecting its corresponding neighborhood pixel from all neighborhood pixels in the group. Denote the output vectors of the neighborhood pixels and the center pixels by O_t and O_c respectively,
where n is the number of samples in the group of data and d is the feature dimension of the output vector. Under the cosine-distance measure, the prediction probability of a center pixel identifying a neighborhood pixel (and of a neighborhood pixel identifying a center pixel) can be formally expressed as:
Prediction_{t→c} = softmax(O_t · O_c^T),  Prediction_{c→t} = softmax(O_c · O_t^T)   (5)
where T denotes matrix transpose and softmax(·) normalizes by rows (which can be viewed as converting scores to prediction probabilities); it can be formalized as:
s_{i,j} = exp(y_{i,j}) / Σ_k exp(y_{i,k})   (6)
where y_{i,j} is the element in row i, column j of the softmax input matrix, and s_{i,j} is the output at position (i, j). Finally, the negative-sampling identification loss for neighborhood pixels (or center pixels) is defined as:
L_p = min Σ [ ||Prediction_{t→c} − E||² + ||Prediction_{c→t} − E||² ]   (7)
where E is the identity matrix (diagonal elements equal to 1), with the same size as the prediction matrices. Note that the loss minimizes the distance between the predictions and the identity matrix because each center pixel (or neighborhood pixel) lies on the diagonal with its corresponding neighborhood pixel (or center pixel): the center pixel (or neighborhood pixel) represented by row i corresponds to the neighborhood pixel (or center pixel) represented by column i.
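The identification loss of equation (7) can be sketched as follows, under stated assumptions: features are unit-normalized so the dot product equals cosine similarity, the prediction matrices are row-softmaxed similarity matrices, and the loss is the squared distance to the identity matrix. This is a NumPy illustration, not the patent's PyTorch code; the toy data at the end only demonstrate that matched center/neighborhood pairs score a lower loss than mismatched ones.

```python
import numpy as np

def softmax_rows(s):
    # row-wise softmax, numerically stabilized
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def identification_loss(o_t, o_c):
    """Negative-sampling identification loss L_p (sketch of eq. (7)):
    cosine similarity between every center/neighborhood pair, softmax
    over rows, then squared distance to the identity matrix, since the
    i-th center corresponds to the i-th neighborhood pixel."""
    o_t = o_t / np.linalg.norm(o_t, axis=1, keepdims=True)  # unit vectors: dot = cosine
    o_c = o_c / np.linalg.norm(o_c, axis=1, keepdims=True)
    pred_t_to_c = softmax_rows(o_t @ o_c.T)   # neighborhood identifies center
    pred_c_to_t = softmax_rows(o_c @ o_t.T)   # and vice versa
    e = np.eye(len(o_t))
    return np.sum((pred_t_to_c - e) ** 2) + np.sum((pred_c_to_t - e) ** 2)

rng = np.random.default_rng(1)
o = rng.normal(size=(8, 16))
# matched pairs (slightly noisy copies) should score lower than shuffled pairs
loss_matched = identification_loss(o, o + 0.01 * rng.normal(size=o.shape))
loss_shuffled = identification_loss(o, np.roll(o, 3, axis=0))
```

With matched pairs, the similarity matrix concentrates on the diagonal, so the softmaxed predictions lie closer to the identity matrix and the loss is smaller.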
2) Reconstruction-based semi-supervised loss function
The autoencoder is an important unsupervised training method for deep neural networks, with applications in generation, feature extraction and optimization, clustering, and other fields. Because an autoencoder must retain enough information to reconstruct the original data with low error, it plays an important role in information compression and encoding. The invention integrates an autoencoder into the inversion of LAI and biomass: the output vector of the fusion layer is fed into a decoder (a multilayer perceptron) to reconstruct the original data, so that a complete hidden representation is learned from unlabeled data, which helps improve the inversion accuracy of LAI and biomass. Specifically, from equation (3), the two reconstructions can be obtained:
P̂_t = f_dec(O_t),  P̂_c = f_dec(O_c)   (8)
where P̂_t and P̂_c are the model reconstructions of the center pixel and the neighborhood pixel, respectively. Combining them with MSE gives the loss based on reconstruction error:
L_r = MSE(P̂_t, P_t) + MSE(P̂_c, P_c)   (9)
where P_t and P_c are the original data of the center pixel and the neighborhood pixel, respectively.
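A minimal sketch of the reconstruction loss of equation (9), assuming a linear map as a stand-in for the patent's multilayer-perceptron decoder (the name decoder and the 300-to-8 dimensions are illustrative assumptions):

```python
import numpy as np

def mse(a, b):
    # mean square error between two arrays
    return float(np.mean((a - b) ** 2))

def reconstruction_loss(z_t, z_c, decoder, p_t, p_c):
    """Reconstruction loss L_r (sketch of eq. (9)): the fused hidden
    vectors are decoded back toward the original pixel data and the
    two reconstruction errors are summed."""
    p_t_hat = decoder(z_t)
    p_c_hat = decoder(z_c)
    return mse(p_t_hat, p_t) + mse(p_c_hat, p_c)

rng = np.random.default_rng(2)
W = rng.normal(size=(300, 8)) * 0.05            # 300-d fused vector -> 8-d raw input
decoder = lambda z: z @ W                        # linear stand-in for the MLP decoder
z_t, z_c = rng.normal(size=(4, 300)), rng.normal(size=(4, 300))
p_t, p_c = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
loss = reconstruction_loss(z_t, z_c, decoder, p_t, p_c)
```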
Combining the negative-sampling identification loss of neighborhood pixels (or center pixels) with the reconstruction error loss yields a loss function suitable for unlabeled data:
L_unlabeled = L_p + L_r   (10)
3) Loss function based on minimum mean square error (MSE)
By adopting multi-task learning that predicts LAI and biomass simultaneously, the invention increases the diversity of the data and mitigates the weak generalization and overfitting caused by scarce training data, thereby enhancing the generalization capability of the model and improving its inversion accuracy.
Given a set of training samples (X, y_LAI, y_Biomass), where y_LAI and y_Biomass are the measured LAI and biomass values (biomass including both fresh and dry measurements) and X represents the optical and radar remote sensing data of the corresponding center pixel and neighborhood pixels. Combining with the minimum mean square error loss, the multi-task learning objective is defined as:
L_labeled = MSE(LAI, y_LAI) + MSE(Biomass_wet, y_Biomass_wet) + MSE(Biomass_dry, y_Biomass_dry)   (11)
where LAI, Biomass_wet and Biomass_dry are the model estimates obtained from equation (4), and MSE is the mean square error.
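The multi-task objective of equation (11) reduces to summing three mean square errors, one per prediction head. A small sketch follows; the dictionary keys (lai, biomass_wet, biomass_dry) are chosen for illustration:

```python
import numpy as np

def mse(pred, target):
    # mean square error over a batch of scalar predictions
    return float(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))

def labeled_loss(est, meas):
    """Multi-task regression objective (sketch of eq. (11)): the sum of
    the MSEs of the three heads, so LAI, fresh (wet) biomass and dry
    biomass are optimized jointly."""
    return (mse(est["lai"], meas["lai"])
            + mse(est["biomass_wet"], meas["biomass_wet"])
            + mse(est["biomass_dry"], meas["biomass_dry"]))

est = {"lai": [2.1, 3.0], "biomass_wet": [0.5, 0.7], "biomass_dry": [0.4, 0.6]}
meas = {"lai": [2.0, 3.2], "biomass_wet": [0.5, 0.6], "biomass_dry": [0.5, 0.6]}
loss = labeled_loss(est, meas)   # 0.025 + 0.005 + 0.005 = 0.035
```

Because all three errors are summed, an improvement on any one head reduces the joint objective, which is what lets the tasks regularize each other.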
(3) Training of SLBIF models
Fig. 4 is a logic flow diagram of the training of the semi-supervised LAI and biomass inversion framework (SLBIF) model of the present invention. As shown in fig. 4:
First, the optical and radar remote sensing features are organically fused by the fusion layer according to formula (2); the fused features are then input to the regression layer according to formula (4) to obtain preliminary predictions of LAI and biomass. The predictions are then refined against the objective functions (10) and (11) to complete model training.
4. LAI and biomass inversion results
According to formulas (2) and (4) of the fusion and regression layers of the SLBIF model, the pixel values of the region to be predicted on the optical and radar remote sensing images are input to obtain fused optical-radar remote sensing features; feeding these fused features through the trained SLBIF model then yields the corresponding LAI and biomass inversion values.
The method is suitable for active and passive remote sensing inversion of LAI and biomass. Through the SLBIF model it realizes deep extraction and effective fusion of optical and radar remote sensing features, which overcomes the limited inversion accuracy obtained from optical or radar data alone, the insufficient combination of optical and radar data in common practice, and the inadequate description of canopy structure by features extracted with empirical formulas, thereby improving the inversion accuracy of LAI and biomass.
In addition, to address the overfitting and poor robustness of deep learning models caused by insufficient field measurements, as well as the inability of traditional data augmentation to introduce new information, the negative-sampling identification loss for neighborhood pixels (or center pixels) and the reconstruction-based semi-supervised loss are proposed; they effectively alleviate the weak model performance caused by insufficient information and improve the robustness and inversion performance of the SLBIF model.
The above scheme is further illustrated below with a specific example:
This example uses data from a test field in Wuqing District, Tianjin, as the basic data, and applies the proposed semi-supervised deep learning method for active and passive inversion of maize LAI and biomass. It is described in detail in connection with fig. 4.
1. Data preprocessing
(1) Remote sensing data preprocessing
The optical remote sensing data used in this example are Sentinel-2 and Landsat-8 data, with 6 bands in total; the radar data are Sentinel-1 data with 2 different bands. The acquired optical and radar data cover 4 maize growth stages (jointing, big-trumpet, flowering and grain-filling), with 158 sample points in total. Reflectance values are obtained from the optical images, 5 vegetation indices (NDVI, RVI, EVI, SAVI and MSAVI) are calculated, and the backscattering coefficients and polarization parameters of the radar images are extracted. These multidimensional features serve as the input features. Specifically, 59-dimensional features are extracted from the optical data, comprising 11-dimensional spectral features and 48-dimensional texture features; 22-dimensional features are extracted from the radar data, comprising 6-dimensional spectral features and 12-dimensional texture features.
(2) Normalization
Considering that the ranges of LAI and biomass in the experimental data differ greatly (LAI ranges over 0.0-6.0, biomass over 16.48-1559.08 g/m²), the remote sensing features and the measured LAI and biomass values are each normalized with standard deep-learning preprocessing, avoiding the optimization problems caused by widely different scales across dimensions. After the model produces maize LAI and biomass inversion values, the inverse operation restores the original scale before the evaluation indices R² and RMSE are calculated.
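The normalization and inverse operation described above can be sketched with a simple z-score scaler; the class name Standardizer is illustrative, and the sample biomass values are chosen only to span the stated 16.48-1559.08 g/m² range:

```python
import numpy as np

class Standardizer:
    """Z-score preprocessing, the standard deep-learning normalization;
    inverse() restores the original units before computing R² and RMSE."""
    def fit(self, x):
        self.mean, self.std = float(np.mean(x)), float(np.std(x))
        return self
    def transform(self, x):
        return (np.asarray(x) - self.mean) / self.std
    def inverse(self, z):
        # undo the normalization (the "inverse operation" in the text)
        return np.asarray(z) * self.std + self.mean

biomass = np.array([16.48, 250.0, 800.0, 1559.08])   # g/m², spanning the stated range
scaler = Standardizer().fit(biomass)
z = scaler.transform(biomass)          # zero mean, unit variance
restored = scaler.inverse(z)           # back to g/m²
```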
2. Central pixel and neighborhood pixel construction
(1) The optical and radar remote sensing data are partitioned into blocks with a 3 × 3 window, without distinguishing between study area and ground-object type.
(2) Blocks containing clouds (abnormally large pixel values) or background values of 0 are screened out and deleted.
(3) The center pixel and neighborhood pixels are extracted from each block; for the i-th block this yields a center-pixel data set and a neighborhood-pixel data set.
In this example, 100,000 neighborhood-pixel and center-pixel pairs are constructed from unlabeled data, each pair containing both optical and radar remote sensing image data, together with 158 neighborhood/center pixel pairs that have measured data values.
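The blocking step above can be sketched for a single-band image as follows; extending to the 6 optical and 2 radar bands multiplies the neighborhood dimensions accordingly (8 neighbors × 6 bands = 48, matching the fusion-layer input size quoted below). The function name and index convention are illustrative assumptions:

```python
import numpy as np

def center_and_neighbors(image, i, j):
    """Cut a 3x3 block centered at (i, j) and split it into the center
    pixel and its 8 neighborhood pixels (single-band sketch of the
    blocking step)."""
    block = image[i-1:i+2, j-1:j+2]
    center = block[1, 1]
    neighbors = np.delete(block.reshape(-1), 4)   # drop the center, keep 8 neighbors
    return center, neighbors

img = np.arange(25).reshape(5, 5)
c, nb = center_and_neighbors(img, 2, 2)   # center value 12, 8 surrounding values
```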
3. LAI and Biomass inversion model (SLBIF) construction
(1) The model parameters are initialized randomly.
Fig. 3 shows the structure of the proposed SLBIF network. As can be seen, the SLBIF fusion module has 3 fusion layers. The neighborhood-pixel input dimensions of the fusion layer are: 48 dimensions for the optical data channel; 16 dimensions for the radar data channel; 64 dimensions for the combined optical-radar channel. The center-pixel input dimensions are: 6 dimensions for the optical channel and 2 for the radar channel, hence 8 for the combined optical-radar channel. The remaining network size parameters of each layer are defined as follows:
1) In each fusion layer, the gate hidden size is 300 dimensions, the fully connected hidden size is 300 dimensions and the output feature dimension is 300 dimensions; the internal parameter sizes of the 3 fusion layers are kept consistent;
2) The fusion layers for neighborhood pixels and center pixels have the same sizes but learn separate parameters;
3) the time vector dimension n is set to 10;
4) The input of the regression layer is the concatenation of the neighborhood-pixel and center-pixel processing channels of the last fusion layer with the time vector, giving an input size of 1210 dimensions. The hidden layers of the independent fully connected heads for LAI, fresh biomass and dry biomass are all 300-dimensional, and each outputs the corresponding model prediction as a scalar, i.e. 1 dimension;
5) This example optimizes the entire SLBIF model with stochastic gradient descent. The learning rate is set to 0.00001 for unlabeled data and 0.0001 for labeled data, and the two Adam hyper-parameters are set to b1 = 0.5 and b2 = 0.999. The batch size is 500, i.e. 500 samples are processed simultaneously. Unlabeled and labeled data are used at a ratio of 50:1, i.e. unlabeled data are chosen for a training step with probability 98.03% and labeled data with probability 1.97%. All code is written in Python with the PyTorch deep learning framework using GPU programming techniques. The server environment is Ubuntu 18.04 with Python 3.6.8, and 8 GeForce RTX 2080Ti GPUs are used.
6) To reduce the effect of data randomness, 5-fold cross-validation runs were performed on this example and the mean of the results of the 25 runs was calculated.
(2) A batch of data is randomly drawn from the unlabeled or labeled data with the stated probabilities; forward propagation through the model's fusion layers then yields the output vector via fusion-layer formula (2).
(3) The negative-sampling identification loss of neighborhood pixels (or center pixels) and the reconstruction error loss are calculated with formula (10).
(4) If the input data include measured LAI and biomass values, the regression loss is calculated with formulas (3), (4) and (11); otherwise this step is skipped.
(5) Model convergence is checked on the validation set: the model is considered converged when its validation loss no longer decreases. If converged, training ends; otherwise proceed to the next step.
(6) The gradients of each layer's parameters are computed by back-propagation, and the model parameters are updated by stochastic gradient descent with the learning rate. Return to step (3).
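The 50:1 mixing of unlabeled and labeled batches in the loop above can be sketched as a simple probabilistic source picker; the function name pick_source is an assumption, and the patent's actual loop is implemented in PyTorch:

```python
import random

def pick_source(rng, ratio=50):
    """Choose whether the next training batch comes from unlabeled or
    labeled data; with a 50:1 usage ratio the unlabeled probability is
    50/51 ≈ 98.03%, matching the example above."""
    return "unlabeled" if rng.random() < ratio / (ratio + 1) else "labeled"

rng = random.Random(0)
draws = [pick_source(rng) for _ in range(5000)]
share_unlabeled = draws.count("unlabeled") / len(draws)   # close to 0.98
```

Drawing unlabeled batches far more often lets the large unlabeled pool dominate representation learning while the scarce labeled batches still steer the regression heads.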
4. LAI and biomass inversion results
Table 1 shows the accuracy evaluation of the maize LAI and biomass inversion models based on the semi-supervised deep learning framework SLBIF. Because fresh biomass is susceptible to weather and drought conditions, dry biomass is taken as the reference index in practical applications. The model obtained by the method is compared in accuracy with the following models:
(1) MLP / MLP + Beta-mixup: MLP is a classic multilayer perceptron; MLP + Beta-mixup is an inversion model based on Beta data augmentation and the multilayer perceptron. Both are supervised learning methods.
(2) GSDNN / GSDNN + Beta-mixup: GSDNN is a siamese (twin) neural network based on a gating mechanism; GSDNN + Beta-mixup is an inversion model based on Beta data augmentation and GSDNN. Both are supervised learning methods.
(3) LBIF: the proposed semi-supervised LAI and biomass inversion framework (SLBIF) with the semi-supervised training loss functions removed, leaving only the regression loss function.
First, comparing SLBIF with LBIF on the maize LAI and biomass inversion models shows that SLBIF's R² increases and its RMSE decreases. In terms of R², the LAI value rises from 0.69 to 0.79, and fresh and dry biomass rise from 0.70 and 0.75 to 0.80 and 0.89, respectively. Compared with LBIF, SLBIF mines richer and more effective remote sensing features and better reveals the internal structure of the remote sensing data, which helps improve the accuracy of the inversion model.
TABLE 1. Accuracy comparison (R²) of SLBIF with other models
Comparing the two inversion models LBIF and GSDNN on un-augmented data shows that the accuracy of the LAI and biomass models obtained by LBIF is clearly higher than that of GSDNN; likewise, after the training data are expanded, the maize LAI and biomass accuracy of SLBIF is much higher than that of GSDNN + Beta-mixup. LBIF and SLBIF both consider the spatial continuity of pixels, whereas neither GSDNN nor GSDNN + Beta-mixup accounts for the influence of surrounding pixels on the center pixel. This indicates that surrounding pixels do influence the center pixel, and that remote sensing features extracted with spatial-context correlation better reveal the variation of growth parameters and improve the accuracy of the maize LAI and biomass inversion models.
Comparing the accuracy of the LAI and biomass inversion models obtained by SLBIF and GSDNN + Beta-mixup: in terms of R², the LAI value rises from 0.71 to 0.79, and fresh and dry biomass rise from 0.78 and 0.86 to 0.80 and 0.89, respectively. The RMSE indices also improve markedly: the root mean square error of LAI drops to 0.54, and that of dry biomass drops further to 141.15 g/m². Expanding the training samples with unlabeled data thus works better for SLBIF than data augmentation, which, from an information-theoretic perspective, introduces no new information; by mining information from unlabeled remote sensing data, SLBIF effectively overcomes this limitation and improves the inversion accuracy of maize LAI and biomass.
Fig. 5 is a schematic diagram of the growth parameter active and passive remote sensing inversion apparatus provided by the present invention. As shown in fig. 5, an embodiment of the present invention provides a growth parameter active and passive remote sensing inversion apparatus, which can serve as the execution body of the growth parameter active and passive remote sensing inversion method in the foregoing embodiments, and specifically includes an obtaining module 501 and an inversion module 502, wherein:
the obtaining module 501 is configured to obtain optical and radar remote sensing images of an area to be predicted; the inversion module 502 is used for inputting the optical and radar remote sensing images of the area to be predicted into the trained semi-supervised deep learning model and outputting the growth parameters of the area to be predicted; the trained semi-supervised deep learning model is obtained by training a pre-constructed semi-supervised deep learning model based on unmarked data samples and marked data samples.
The growth parameter active and passive remote sensing inversion apparatus provided in this embodiment may be used to execute the method described in the corresponding embodiment above; the specific steps it performs are the same as those of that embodiment, and the same technical effects can be achieved.
Fig. 6 is a schematic structural diagram of an electronic device provided by the present invention. As shown in fig. 6, the electronic device may include: a processor 610, a communications interface 620, a memory 630 and a communication bus 640, wherein the processor 610, the communications interface 620 and the memory 630 communicate with each other via the communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform a growth parameter active and passive remote sensing inversion method comprising:
acquiring optical and radar remote sensing images of an area to be predicted;
inputting the optical and radar remote sensing images of the area to be predicted into a trained semi-supervised deep learning model, and outputting the growth parameters of the area to be predicted;
the trained semi-supervised deep learning model is obtained by training a pre-constructed semi-supervised deep learning model based on unmarked data samples and marked data samples.
In addition, the logic instructions in the memory 630 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions, which when executed by a computer, enable the computer to perform a method for active and passive remote sensing inversion of growth parameters provided by the above methods, the method comprising:
acquiring optical and radar remote sensing images of an area to be predicted;
inputting the optical and radar remote sensing images of the area to be predicted into a trained semi-supervised deep learning model, and outputting the growth parameters of the area to be predicted;
the trained semi-supervised deep learning model is obtained by training a pre-constructed semi-supervised deep learning model based on unmarked data samples and marked data samples.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program, which when executed by a processor is implemented to perform the growth parameter active and passive remote sensing inversion method provided above, the method comprising:
acquiring optical and radar remote sensing images of an area to be predicted;
inputting the optical and radar remote sensing images of the area to be predicted into a trained semi-supervised deep learning model, and outputting the growth parameters of the area to be predicted;
the trained semi-supervised deep learning model is obtained by training a pre-constructed semi-supervised deep learning model based on unmarked data samples and marked data samples.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A growth parameter active and passive remote sensing inversion method is characterized by comprising the following steps:
acquiring optical and radar remote sensing images of an area to be predicted;
inputting the optical and radar remote sensing images of the area to be predicted into a trained semi-supervised deep learning model, and outputting the growth parameters of the area to be predicted;
the trained semi-supervised deep learning model is obtained by training a pre-constructed semi-supervised deep learning model based on unmarked data samples and marked data samples.
2. The growth parameter active and passive remote sensing inversion method according to claim 1, wherein the training of the pre-constructed semi-supervised deep learning model comprises the following steps:
carrying out dicing processing on the optical and radar remote sensing image samples according to a 3 x 3 window;
taking a central pixel of each cut block to form a central pixel data sample set, and taking a neighborhood pixel of the central pixel of each cut block to form a neighborhood pixel data sample set;
and training a pre-constructed semi-supervised deep learning model by using the central pixel data sample set and the neighborhood pixel data sample set.
3. The growth parameter active and passive remote sensing inversion method according to claim 2, wherein before taking a central pixel of each slice to form a central pixel data sample set and taking a neighborhood pixel of the central pixel of each slice to form a neighborhood pixel data sample set, the method further comprises:
and filtering each cut block to filter abnormal cut blocks.
4. The growth parameter active-passive remote sensing inversion method according to claim 2, wherein the loss function in training of the pre-constructed semi-supervised deep learning model comprises a neighborhood pixel or center pixel identification loss function based on negative sampling, a semi-supervised loss function based on reconstruction, and a regression loss function based on minimum mean square error.
5. The growth parameter active and passive remote sensing inversion method according to claim 1, wherein the semi-supervised deep learning model comprises a fusion layer and a regression layer;
the fusion layer is used for fusing optical and radar remote sensing data in the central pixel and optical and radar remote sensing data in the neighborhood pixels;
and the regression layer is used for performing cascade processing on the data output by the fusion layer and the time vector and inputting the data after the cascade processing into the multilayer perceptron.
6. The growth parameter active and passive remote sensing inversion method according to claim 5, wherein the multilayer perceptron is composed of multiple fully-connected layers, and an activation function of the multilayer perceptron is a linear rectification function.
7. The growth parameter active-passive remote sensing inversion method according to any one of claims 1-6, wherein before inputting the optical and radar remote sensing images of the area to be predicted into the trained semi-supervised deep learning model, the method further comprises:
and preprocessing and normalizing the optical and radar remote sensing images of the area to be predicted.
8. A growth parameter active and passive remote sensing inversion device is characterized by comprising:
the acquisition module is used for acquiring optical and radar remote sensing images of the area to be predicted;
the inversion module is used for inputting the optical and radar remote sensing images of the area to be predicted into a trained semi-supervised deep learning model and outputting the growth parameters of the area to be predicted;
the trained semi-supervised deep learning model is obtained by training a pre-constructed semi-supervised deep learning model based on unmarked data samples and marked data samples.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program performs the steps of the method for active and passive remote sensing inversion of growth parameters according to any one of claims 1 to 7.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the growth parameter active and passive remote sensing inversion method according to any one of claims 1 to 7.
CN202011457497.1A 2020-12-10 2020-12-10 Active and passive remote sensing inversion method and device for growth parameters Active CN112464172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011457497.1A CN112464172B (en) 2020-12-10 2020-12-10 Active and passive remote sensing inversion method and device for growth parameters

Publications (2)

Publication Number Publication Date
CN112464172A true CN112464172A (en) 2021-03-09
CN112464172B CN112464172B (en) 2024-03-29

Family

ID=74801510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011457497.1A Active CN112464172B (en) 2020-12-10 2020-12-10 Active and passive remote sensing inversion method and device for growth parameters

Country Status (1)

Country Link
CN (1) CN112464172B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414891A (en) * 2020-04-07 2020-07-14 云南电网有限责任公司昆明供电局 Power transmission line channel tree height inversion method based on laser radar and optical remote sensing
CN111814707A (en) * 2020-07-14 2020-10-23 中国科学院空天信息创新研究院 Crop leaf area index inversion method and device



Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505486A (en) * 2021-07-14 2021-10-15 中国科学院空天信息创新研究院 Leaf area index inversion method and system for three-dimensional complex earth surface
CN113505486B (en) * 2021-07-14 2023-12-29 中国科学院空天信息创新研究院 Three-dimensional complex earth surface leaf area index inversion method and system
CN117315466A (en) * 2023-09-20 2023-12-29 北京佳格天地科技有限公司 Growth monitoring management method and system
CN117315466B (en) * 2023-09-20 2024-04-09 北京佳格天地科技有限公司 Growth monitoring management method and system

Also Published As

Publication number Publication date
CN112464172B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
Xiong et al. Identification of cash crop diseases using automatic image segmentation algorithm and deep learning with expanded dataset
Kong et al. Multi-stream hybrid architecture based on cross-level fusion strategy for fine-grained crop species recognition in precision agriculture
Nguyen et al. Monitoring agriculture areas with satellite images and deep learning
CN112861722B (en) Remote sensing land utilization semantic segmentation method based on semi-supervised depth map convolution
CN111914611B (en) Urban green space high-resolution remote sensing monitoring method and system
Wang et al. A deep learning framework combining CNN and GRU for improving wheat yield estimates using time series remotely sensed multi-variables
Lee et al. Applying machine learning methods to detect convection using Geostationary Operational Environmental Satellite-16 (GOES-16) advanced baseline imager (ABI) data
CN112464172B (en) Active and passive remote sensing inversion method and device for growth parameters
Liu et al. Cross-resolution national-scale land-cover mapping based on noisy label learning: A case study of China
CN115222100A (en) Crop yield prediction method based on three-dimensional cyclic convolution neural network and multi-temporal remote sensing image
CN115439754A (en) Large-range trans-climatic region crop mapping method based on time sequence remote sensing image
Guo et al. Multitemporal hyperspectral images change detection based on joint unmixing and information coguidance strategy
Li et al. Improving maize yield prediction at the county level from 2002 to 2015 in China using a novel deep learning approach
Szlobodnyik et al. Data augmentation by guided deep interpolation
CN112163020A (en) Multi-dimensional time series anomaly detection method and system
CN116844041A (en) Cultivated land extraction method based on bidirectional convolution time self-attention mechanism
Meng et al. Physical knowledge-enhanced deep neural network for sea surface temperature prediction
CN111242028A (en) Remote sensing image ground object segmentation method based on U-Net
Kumawat et al. Time-variant satellite vegetation classification enabled by hybrid metaheuristic-based adaptive time-weighted dynamic time warping
Yang et al. Towards Scalable Within-Season Crop Mapping With Phenology Normalization and Deep Learning
CN112487879A (en) Corn growth parameter active and passive remote sensing inversion method based on data augmentation and deep learning
Ayub et al. Wheat Crop Field and Yield Prediction using Remote Sensing and Machine Learning
Adams et al. Phenotypic trait extraction of soybean plants using deep convolutional neural networks with transfer learning.
Zhang et al. Vegetation Coverage Monitoring Model Design Based on Deep Learning
Zhou et al. Extracting tobacco planting areas using LSTM from time series Sentinel-1 SAR data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant