CN112668420A - Hyperspectral and LiDAR fusion intrusion tree species detection method based on non-negative risk estimation - Google Patents


Info

Publication number
CN112668420A
CN112668420A (application CN202011507877.1A)
Authority
CN
China
Prior art keywords
image
tree
tree species
hyperspectral
pixels
Prior art date
Legal status
Granted
Application number
CN202011507877.1A
Other languages
Chinese (zh)
Other versions
CN112668420B
Inventor
王心宇 (Wang Xinyu)
赵恒伟 (Zhao Hengwei)
钟燕飞 (Zhong Yanfei)
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202011507877.1A
Publication of CN112668420A
Application granted
Publication of CN112668420B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for detecting invasive tree species by fusing hyperspectral and LiDAR data based on non-negative risk estimation. The method recasts the weakly supervised one-class network optimization problem for remote sensing imagery as an empirical risk minimization problem via non-negative risk estimation. It jointly exploits the spatial-spectral fusion features provided by the hyperspectral image and the geometric features provided by the LiDAR data, automatically extracts deep spatial-spectral fusion features of the image in a data-driven manner with a convolutional neural network, and optimizes the parameters of the network end to end through non-negative risk estimation, thereby avoiding post-hoc threshold tuning. The invention can be used to map the distribution of invasive tree species and still obtains reliable detection results in tropical regions with high species diversity.

Description

Hyperspectral and LiDAR fusion intrusion tree species detection method based on non-negative risk estimation
Technical Field
The invention belongs to the technical field of hyperspectral remote sensing image and LiDAR data processing, and particularly relates to a method for detecting invasive tree species by fusing hyperspectral and LiDAR data based on non-negative risk estimation.
Background
With trade taking place worldwide, many tree species move from one territory to another by various deliberate or accidental means, and some of them, lacking control by natural enemies, can become invasive tree species that cause irreversible damage to local species diversity and the global ecological environment. Obtaining the distribution of invasive tree species is key to controlling and monitoring their spread. Traditional field surveys are accurate but time- and labor-consuming and can hardly achieve continuous observation over large areas; hyperspectral remote sensing Earth observation, through non-contact imaging, makes fine-grained identification of invasive tree species over large areas possible.
Meanwhile, hyperspectral invasive tree species distribution detection is also a difficult task. First, conventional classification algorithms require samples of all possible ground objects in the target area, but collecting such samples when only the invasive tree species is of interest is costly; detecting the locations of an invasive tree species using only its own samples is, in essence, a hard weakly supervised classification problem for remote sensing imagery. Second, traditional invasive tree species detection methods rely on hand-crafted features, which are shallow features limited by expert prior knowledge and have limited discriminative power for invasive tree species in complex scenes. Third, traditional tree species detection algorithms depend on threshold selection, and the quality of the threshold strongly affects the final detection result, making such algorithms difficult to use for forestry personnel unfamiliar with the underlying methods. These problems limit the application of detection algorithms to invasive tree species detection.
Disclosure of Invention
The invention aims to provide a method for detecting invasive tree species by fusing hyperspectral and LiDAR data based on non-negative risk estimation.
The invasive tree species detection method based on deep non-negative risk estimation jointly uses the spatial-spectral fusion features provided by the hyperspectral image and the geometric features provided by the LiDAR data, automatically extracts deep spatial-spectral fusion features of the image in a data-driven manner with a convolutional neural network, and optimizes the parameters of the network end to end under the empirical risk minimization principle using non-negative risk estimation, avoiding post-hoc threshold tuning. Once training is finished, the network can directly judge whether the input data belong to the target tree species.
The invasive tree species detection algorithm based on non-negative risk estimation and hyperspectral-LiDAR data fusion proposed by the invention has three notable characteristics. First, it makes comprehensive use of the advantages of different data modalities and avoids interference from non-tree pixels in invasive tree species identification through a hierarchical classification scheme. Second, it extracts deep spatial-spectral fusion features with stronger representational capability in a data-driven manner via a convolutional neural network, so that it still achieves good detection results in difficult scenes such as tropical Africa, where species diversity is high. Third, it designs a loss function based on non-negative risk estimation under the empirical risk minimization criterion, optimizing the network parameters end to end and avoiding the threshold tuning process.
The invention provides a method for detecting invasive tree species by fusing hyperspectral and LiDAR data based on non-negative risk estimation, comprising the following steps:
step 1, obtaining a digital surface model DSM and a digital elevation model DEM from different echoes of the LiDAR data, and differencing the two to obtain a canopy height model CHM;
step 2, constructing a mask TreeMask based on the hyperspectral image and the CHM, the mask being used to extract the regions of all trees in the image;
step 3, transforming the hyperspectral image with the minimum noise fraction (MNF) transform and applying normalization preprocessing to the transformed image;
step 4, based on the preprocessed image and the TreeMask, building an invasive tree species data set for model training from the ground truth of the invasive tree species collected by field sampling; randomly sampling within the TreeMask to obtain indices of tree pixels, and using the indices to extract an unlabeled data set for model training from the normalized hyperspectral image;
step 5, training a convolutional neural network with the training data set and a loss function based on non-negative risk estimation;
and step 6, performing inference on the normalized MNF image with the convolutional neural network trained in step 5, and obtaining the distribution of the invasive tree species after eliminating non-tree pixels with the TreeMask.
Further, step 2 is implemented as follows:
step 2.1, extracting the vegetation area in the hyperspectral image based on the near-infrared reflectance of the hyperspectral image and the normalized difference vegetation index,

NDVI = (ρ_NIR − ρ_R) / (ρ_NIR + ρ_R)

where ρ_NIR and ρ_R respectively represent the reflectance of the near-infrared band and the red band of the hyperspectral image;
and step 2.2, extracting the tree region within the vegetation area using the CHM, wherein pixels whose values in the CHM image are greater than a certain threshold are regarded as tree pixels.
Further, step 3 is implemented as follows:
step 3.1, first separating a noise image n from the original hyperspectral image z by low-pass filtering, then respectively computing the covariance matrices Σ_Z and Σ_N of Z and N, where Z = (z_1, z_2, …, z_L) and N = (n_1, n_2, …, n_L); computing the eigenvalues λ_i and eigenvectors u_i of Σ_N⁻¹ Σ_Z, satisfying λ_1 ≥ λ_2 ≥ … ≥ λ_L; letting U = (u_1, …, u_L), the final result of the MNF transform is Y = Uᵀ Z;
step 3.2, computing the mean mean[i] and standard deviation std[i] of the hyperspectral image band by band, where i is the band index; for any band band[i], the pixel at any position (x, y) is normalized as

n_band[i](x, y) = (band[i](x, y) − mean[i]) / std[i]

where n_band[i] is the normalized result of band i and (x, y) is the pixel position.
Further, step 4 is implemented as follows:
step 4.1, based on the positions of the invasive tree species obtained by field sampling, and using the TreeMask as an auxiliary image, labeling training samples of the invasive tree species on the hyperspectral image, and extracting them from the normalized image as blocks consisting of the pixel to be classified and its neighborhood pixels;
and step 4.2, randomly selecting unlabeled samples from the tree region of the hyperspectral image, numbering a certain multiple of the invasive tree species samples, and extracting them from the normalized hyperspectral image in the same manner of the pixel to be classified and its neighborhood pixels.
Further, step 5 is implemented as follows:
step 5.1, constructing a basic feature extraction module consisting of a convolutional layer, a batch normalization layer, and an activation function, where the convolution operation is computed as

v_ij^{xy} = b_ij + Σ_k Σ_{p=0}^{P_i−1} Σ_{q=0}^{Q_i−1} w_ijk^{pq} · v_{(i−1)k}^{(x+p)(y+q)}

where v_ij^{xy} represents the value at position (x, y) of the j-th feature map of the i-th layer, the weights w_ijk^{pq} and bias b_ij connect the k-th feature map of the preceding module, and P_i and Q_i are the spatial sizes of the weights; batch normalization is computed as

BN(v) = γ · (v − μ) / sqrt(σ² + ε) + β

where β and γ are learnable parameters, μ and σ² are the batch mean and variance, and ε is a constant used to keep the computation stable; the ReLU activation function is computed as ReLU(v_ij) = max{0, v_ij};
step 5.2, forming a deep convolutional neural network by repeatedly stacking the basic feature extraction module and a spatial down-sampling module, where the spatial down-sampling module replaces the max pooling layer with a stride-2 convolutional layer followed by an activation function for the spatial down-sampling operation; the final output of the network is fed through a global average pooling operation into a fully connected layer, whose output is used for risk estimation and prediction;
step 5.3, letting (X_P, X_U) be the mini-batch of invasive tree species data and unlabeled data input for training, with joint labels y after passing through the deep convolutional neural network, performing risk calculation with a loss function based on non-negative risk estimation, computed as

R̂(f) = π_p · R̂_P⁺(f) + max{0, R̂_U⁻(f) − π_p · R̂_P⁻(f)}

where the risk of the invasive tree species is estimated as R̂_P⁺(f) = E_P[l(f(x), +1)], f is the convolutional neural network, E_P denotes averaging the loss over the positive, i.e. labeled, samples, the loss function is l(f(x), y) = 1/(1 + exp(y·f(x))), and π_p is the class prior; the risk of the unlabeled samples is estimated as R̂_U⁻(f) = E_U[l(f(x), −1)], where E_U denotes averaging the loss over the unlabeled samples; the negative-class risk of the invasive tree species samples is estimated as R̂_P⁻(f) = E_P[l(f(x), −1)]; +1 indicates that a sample is an invasive tree species and −1 that it is not;
and step 5.4, after risk estimation, updating the network parameters with a stochastic gradient descent based algorithm, and completing network training after iterating to a stopping condition (e.g. 300 epochs). Let r_i = R̂_U⁻(f) − π_p · R̂_P⁻(f), where i is the mini-batch index; when r_i ≥ 0, the network parameters are updated with the gradient ∇_θ[π_p · R̂_P⁺(f) + r_i]; when r_i < 0, the gradient update is performed with ∇_θ[−r_i], where ∇_θ denotes computing the gradient of the risk estimation function with respect to θ, the network parameters to be updated, over the mini-batch of invasive tree species data and unlabeled training data.
Further, step 6 is implemented as follows:
step 6.1, after block-wise, pixel-by-pixel extraction of the whole normalized MNF-transformed hyperspectral image, inputting the blocks into the trained convolutional neural network to predict whether each is an invasive tree; if the network output is greater than 0, the pixel is regarded as an invasive tree pixel, otherwise as a non-invasive tree pixel;
and step 6.2, post-processing the prediction output by the network with the TreeMask to avoid the influence of non-tree pixels, according to the formula

final_map = result ⊙ TreeMask

where final_map is the final prediction result, result is the output of the network, and ⊙ denotes element-wise multiplication of the pixels at corresponding positions of the two images; since TreeMask is a binary image with pixel values of 0 or 1, this element-wise multiplication at corresponding positions alleviates the false alarm problem caused by non-tree pixels.
The invention has the following advantages and beneficial effects:
(1) combining the advantages of hyperspectral and LiDAR data greatly reduces false alarms in non-tree areas;
(2) the data-driven deep spatial-spectral fusion features significantly improve the F1 score of remote sensing weakly supervised one-class classification in the invasive tree species detection task, so the method maintains strong detection performance even in tropical regions with high species diversity;
(3) the loss function based on non-negative risk estimation enables end-to-end optimization of the network parameters, avoiding the post-classification threshold tuning process and facilitating practical use by forestry personnel.
drawings
FIG. 1 is an overall flow chart of the method.
FIG. 2 is a canopy height model CHM extracted using LiDAR data.
Fig. 3 is a schematic diagram of the TreeMask extraction process. Wherein a is an NDVI image; b is a reflectance image at a near infrared wavelength band; c is TreeMask finally extracted, white is the area where the tree exists in the image, and black represents the non-tree area.
Fig. 4 shows the result of MNF transformation of raw spectral data.
Fig. 5 is a possible deep spatial-spectral fusion feature extraction architecture.
FIG. 6 shows example invasive tree species detection results. Wherein a is an exemplary eucalyptus detection result; b is an exemplary black wattle detection result.
Detailed Description
For a better understanding of the technical solutions of the present invention, the present invention will be further described in detail with reference to the accompanying drawings and examples.
As shown in FIG. 1, the invention provides a hyperspectral and LiDAR fusion intrusion tree species detection method based on non-negative risk estimation, which specifically comprises the following steps:
Step 1, a Digital Surface Model (DSM) and a Digital Elevation Model (DEM) are obtained from the different echoes of the LiDAR data, and the two are differenced to obtain the canopy height model CHM shown in FIG. 2.
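Step 1 can be sketched in NumPy as a per-pixel difference of the two elevation rasters. The sketch below is not from the patent; the DSM/DEM values are hypothetical toy grids, and the clipping at zero is a common convention (negative canopy heights are physically meaningless), assumed here rather than stated in the text.

```python
import numpy as np

# Hypothetical elevation grids in metres; in practice the DSM and DEM
# would be rasterized from the first and last LiDAR returns.
dsm = np.array([[310.0, 312.5, 309.0],
                [311.0, 318.0, 310.5],
                [309.5, 311.0, 308.0]])
dem = np.array([[308.0, 308.5, 308.5],
                [308.5, 309.0, 309.0],
                [309.0, 309.5, 308.0]])

# Canopy height model: per-pixel difference of the two surfaces,
# clipped at zero (assumption: negative heights are treated as ground).
chm = np.clip(dsm - dem, 0.0, None)
```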
Step 2, constructing a mask TreeMask based on the hyperspectral image and the CHM, and extracting the regions where all trees exist in the image, wherein the step further comprises the following steps:
Step 2.1, extracting the vegetation area in the hyperspectral image based on the near-infrared reflectance of the hyperspectral image and the normalized difference vegetation index (NDVI),

NDVI = (ρ_NIR − ρ_R) / (ρ_NIR + ρ_R)

where ρ_NIR and ρ_R are respectively the reflectance of the near-infrared band and the red band of the hyperspectral image. In this step, the threshold on the near-infrared reflectance is 20% and the threshold on NDVI is 0.5;
step 2.2, extracting the tree region within the vegetation area using the CHM, wherein pixels with values greater than 3 in the CHM image are regarded as tree pixels;
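Steps 2.1 and 2.2 can be sketched as simple threshold masks. This is a minimal NumPy sketch, not the patent's implementation: the reflectance bands and CHM are random placeholders, while the thresholds (NIR > 20%, NDVI > 0.5, CHM > 3) are taken from the embodiment above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reflectance bands in [0, 1] and a CHM in metres.
nir = rng.uniform(0.0, 0.6, size=(50, 50))   # near-infrared reflectance
red = rng.uniform(0.0, 0.4, size=(50, 50))   # red reflectance
chm = rng.uniform(0.0, 10.0, size=(50, 50))  # canopy height model

# Step 2.1: vegetation mask from NDVI and NIR reflectance
# (thresholds from the embodiment: NDVI > 0.5, NIR reflectance > 20%).
ndvi = (nir - red) / (nir + red + 1e-12)
vegetation = (ndvi > 0.5) & (nir > 0.20)

# Step 2.2: TreeMask keeps vegetation pixels taller than 3 m in the CHM.
tree_mask = (vegetation & (chm > 3.0)).astype(np.uint8)
```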
Step 3, the hyperspectral image is transformed with the MNF, the first 10 bands of the MNF-transformed image are kept for subsequent processing, and normalization preprocessing is applied to the transformed image. This step further comprises:
Step 3.1, first separate a noise image n from the original hyperspectral image z by low-pass filtering, then respectively compute the covariance matrices Σ_Z and Σ_N of Z and N, where Z = (z_1, z_2, …, z_L) and N = (n_1, n_2, …, n_L). Compute the eigenvalues λ_i and eigenvectors u_i of Σ_N⁻¹ Σ_Z, satisfying λ_1 ≥ λ_2 ≥ … ≥ λ_L; let U = (u_1, …, u_L); the final result of the MNF transform is Y = Uᵀ Z.
Step 3.2, compute the mean mean[i] and standard deviation std[i] of the hyperspectral image band by band, where i is the band index; for any band band[i], the pixel at any position (x, y) is normalized as

n_band[i](x, y) = (band[i](x, y) − mean[i]) / std[i]

where n_band[i] is the normalized result of band i and (x, y) is the position of the pixel.
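Steps 3.1 and 3.2 can be sketched as follows. This is a NumPy sketch under stated assumptions, not the patent's code: the hyperspectral cube is synthetic, the "low-pass" noise estimate uses a one-pixel-shift residual (a common MNF heuristic, assumed here), and the generalized eigenproblem for Σ_N⁻¹ Σ_Z is solved via symmetric whitening with `eigh`.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hyperspectral cube: H x W pixels, L bands, flattened to Z (L x n).
H, W, L = 20, 20, 8
cube = rng.normal(0.0, 1.0, size=(H, W, L)).cumsum(axis=2)  # correlated bands
Z = cube.reshape(-1, L).T                                   # shape (L, n)

# Step 3.1: estimate the noise image with a simple low-pass residual
# (difference from a 1-pixel horizontal shift -- an assumed heuristic).
noise = (cube - np.roll(cube, 1, axis=1)).reshape(-1, L).T
cov_z = np.cov(Z)
cov_n = np.cov(noise)

# Eigendecomposition of cov_n^{-1} cov_z via symmetric whitening:
# eigh(cov_n) gives cov_n^{-1/2}; then eigh of the whitened signal covariance.
w_n, V_n = np.linalg.eigh(cov_n)
inv_sqrt_n = V_n @ np.diag(1.0 / np.sqrt(w_n)) @ V_n.T
w, V = np.linalg.eigh(inv_sqrt_n @ cov_z @ inv_sqrt_n)
order = np.argsort(w)[::-1]                  # lambda_1 >= ... >= lambda_L
U = inv_sqrt_n @ V[:, order]                 # generalized eigenvectors
Y = U.T @ Z                                  # MNF-transformed bands

# Step 3.2: band-wise normalization of the transformed image.
mean = Y.mean(axis=1, keepdims=True)
std = Y.std(axis=1, keepdims=True)
Y_norm = (Y - mean) / std
```

In practice only the leading components (the first 10 bands in this embodiment) would be retained.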
Step 4, based on the preprocessed image and the TreeMask, an invasive tree species data set for model training is built from the ground truth of the invasive tree species collected by field sampling. Random sampling is performed within the TreeMask to obtain indices of tree pixels, and these indices are then used to extract an unlabeled data set for model training from the normalized hyperspectral image. The training data are extracted as blocks consisting of the pixel to be classified and its neighborhood pixels. This step further comprises:
Step 4.1, based on the positions of the invasive tree species obtained by field sampling, and using the TreeMask as an auxiliary image, label training samples of the invasive tree species on the hyperspectral image and extract them from the normalized image as blocks of the pixel to be classified and its neighborhood pixels, e.g. hyperspectral image blocks of 11 × 11 × 129, where 11 is the height and width of the block and 129 is the number of channels;
and step 4.2, randomly select unlabeled samples from the tree region of the hyperspectral image, numbering a certain multiple of the invasive tree species samples (40 times here), and extract them from the normalized hyperspectral image in the same manner of the pixel to be classified and its neighborhood pixels.
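The block extraction in steps 4.1 and 4.2 can be sketched as below. This is a hypothetical NumPy sketch: the image, mask, and patch count are placeholders, and reflect-padding at the borders is an assumption (the patent does not specify border handling).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical normalized image (H x W x C) and tree mask; patch size 11.
H, W, C, P = 40, 40, 10, 11
image = rng.normal(size=(H, W, C)).astype(np.float32)
tree_mask = rng.integers(0, 2, size=(H, W)).astype(np.uint8)
half = P // 2

# Reflect-pad so border pixels also get a full 11 x 11 neighbourhood.
padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")

def extract_patch(row, col):
    """Block of the pixel to be classified and its neighbourhood pixels."""
    return padded[row:row + P, col:col + P, :]

# Unlabeled set: random tree-pixel indices drawn inside the TreeMask.
tree_rows, tree_cols = np.nonzero(tree_mask)
pick = rng.choice(len(tree_rows), size=32, replace=False)
unlabeled = np.stack([extract_patch(r, c)
                      for r, c in zip(tree_rows[pick], tree_cols[pick])])
```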
Step 5, training the convolutional neural network by using the training data set and a loss function based on non-negative risk estimation, wherein the step further comprises the following steps:
Step 5.1, construct a basic feature extraction module consisting of a convolutional layer (Conv 3 × 3), a batch normalization layer (BN), and an activation function (ReLU), and replace the max pooling layer with a stride-2 Conv 3 × 3-ReLU module for the spatial down-sampling operation. The convolution operation is computed as

v_ij^{xy} = b_ij + Σ_k Σ_{p=0}^{P_i−1} Σ_{q=0}^{Q_i−1} w_ijk^{pq} · v_{(i−1)k}^{(x+p)(y+q)}

where v_ij^{xy} represents the value at position (x, y) of the j-th feature map of the i-th layer, the weights w_ijk^{pq} and bias b_ij connect the k-th feature map of the preceding module, and P_i and Q_i are respectively the spatial sizes of the weights. Batch normalization is computed as

BN(v) = γ · (v − μ) / sqrt(σ² + ε) + β

where β and γ are learnable parameters, μ and σ² are the batch mean and variance, and ε is added to keep the computation stable. The ReLU activation function is computed as ReLU(v_ij) = max{0, v_ij}.
Step 5.2, form a deep convolutional neural network by repeatedly stacking the basic feature extraction module (Conv 3 × 3-BN-ReLU) and the spatial down-sampling module (stride-2 Conv 3 × 3-ReLU); feed the output of the network through global average pooling into a fully connected layer, whose output is used for risk estimation and prediction. An optional network configuration is shown in FIG. 5.
Step 5.3, input the training data (X_P, X_U); after the forward pass through the deep convolutional neural network with joint labels y, perform risk calculation with a loss function based on non-negative risk estimation, computed as

R̂(f) = π_p · R̂_P⁺(f) + max{0, R̂_U⁻(f) − π_p · R̂_P⁻(f)}

where the risk of the invasive tree species is estimated as R̂_P⁺(f) = E_P[l(f(x), +1)], f is the convolutional neural network, E_P denotes averaging the loss over the positive (i.e. labeled) samples, π_p is the class prior, which can be obtained by a class-prior estimation algorithm such as KMPE, and l(f(x), y) = 1/(1 + exp(y·f(x))) is the loss function. The risk of the unlabeled samples is estimated as R̂_U⁻(f) = E_U[l(f(x), −1)], where E_U denotes averaging the loss over the unlabeled samples; the negative-class risk of the invasive tree species samples is estimated as R̂_P⁻(f) = E_P[l(f(x), −1)]; +1 indicates that a sample is an invasive tree species, −1 that it is a non-invasive one.
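The non-negative risk of step 5.3 can be sketched as below. This is a NumPy sketch with toy network scores, not the patent's code; the score values and the class prior π_p = 0.3 are illustrative assumptions.

```python
import numpy as np

def sigmoid_loss(scores, y):
    """l(f(x), y) = 1 / (1 + exp(y * f(x))) from the description."""
    return 1.0 / (1.0 + np.exp(y * scores))

def nn_pu_risk(scores_p, scores_u, pi_p):
    """Non-negative PU risk:
    R = pi_p * E_P[l(f, +1)] + max(0, E_U[l(f, -1)] - pi_p * E_P[l(f, -1)])."""
    risk_p_pos = sigmoid_loss(scores_p, +1).mean()   # positive-class risk
    risk_p_neg = sigmoid_loss(scores_p, -1).mean()
    risk_u_neg = sigmoid_loss(scores_u, -1).mean()
    neg_risk = risk_u_neg - pi_p * risk_p_neg        # this is r_i in step 5.4
    return pi_p * risk_p_pos + max(0.0, neg_risk), neg_risk

# Toy network scores: positives score high, unlabeled samples are mixed.
scores_p = np.array([2.0, 1.5, 3.0])
scores_u = np.array([-1.0, 0.5, -2.0, 2.5])
risk, r = nn_pu_risk(scores_p, scores_u, pi_p=0.3)
```

The `max(0, ·)` clamp is what keeps the estimated negative-class risk from going negative due to overfitting on the positive samples.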
Step 5.4, after risk estimation, update the network parameters with a stochastic gradient descent based algorithm, and finish network training after iterating to a stopping condition (e.g. 300 epochs). Let r_i = R̂_U⁻(f) − π_p · R̂_P⁻(f), where i is the mini-batch index; when r_i ≥ 0, update the network parameters with the gradient ∇_θ[π_p · R̂_P⁺(f) + r_i]; when r_i < 0, perform the gradient update with ∇_θ[−r_i], where ∇_θ denotes computing the gradient of the risk estimation function with respect to θ, the network parameters to be updated, over the mini-batch of invasive tree species data and unlabeled training data.
Step 6, perform inference on the normalized MNF image with the convolutional neural network trained in step 5, and obtain the distribution of the invasive tree species after eliminating non-tree pixels with the TreeMask. This step further comprises:
Step 6.1, take blocks of the whole normalized hyperspectral image pixel by pixel and input them into the trained convolutional neural network to predict whether each is an invasive tree; if the network output is greater than 0, the pixel is regarded as an invasive tree pixel, otherwise as a non-invasive tree pixel.
Step 6.2, post-process the prediction output by the network with the TreeMask to avoid the influence of non-tree pixels, according to the formula

final_map = result ⊙ TreeMask

where final_map is the final prediction result, result is the output of the network, and ⊙ denotes element-wise multiplication of the pixels at corresponding positions of the two images; since TreeMask is a binary image with pixel values of 0 or 1, this element-wise multiplication at corresponding positions resolves the false alarm problem caused by non-tree pixels.
The invasive tree species distribution detection pipeline is implemented in the Python language, with PyTorch as the deep learning framework, and the detection process can run automatically.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (6)

1. A method for detecting invasive tree species by fusing hyperspectral and LiDAR data based on non-negative risk estimation, characterized by comprising the following steps:
step 1, obtaining a digital surface model DSM and a digital elevation model DEM from different echoes of the LiDAR data, and differencing the two to obtain a canopy height model CHM;
step 2, constructing a mask TreeMask based on the hyperspectral image and the CHM, the mask being used to extract the regions of all trees in the image;
step 3, transforming the hyperspectral image with the minimum noise fraction (MNF) transform and applying normalization preprocessing to the transformed image;
step 4, based on the preprocessed image and the TreeMask, building an invasive tree species data set for model training from the ground truth of the invasive tree species collected by field sampling; randomly sampling within the TreeMask to obtain indices of tree pixels, and using the indices to extract an unlabeled data set for model training from the normalized hyperspectral image;
step 5, training a convolutional neural network with the training data set and a loss function based on non-negative risk estimation;
and step 6, performing inference on the normalized MNF image with the convolutional neural network trained in step 5, and obtaining the distribution of the invasive tree species after eliminating non-tree pixels with the TreeMask.
2. The method of claim 1, wherein said step 2 is implemented as follows:
step 2.1, extracting the vegetation area in the hyperspectral image based on the near-infrared reflectance of the hyperspectral image and the normalized difference vegetation index,

NDVI = (ρ_NIR − ρ_R) / (ρ_NIR + ρ_R)

where ρ_NIR and ρ_R respectively represent the reflectance of the near-infrared band and the red band of the hyperspectral image;
and step 2.2, extracting the tree region within the vegetation area using the CHM, wherein pixels whose values in the CHM image are greater than a certain threshold are regarded as tree pixels.
3. The method of claim 1, wherein said step 3 is implemented as follows:
step 3.1, first separating a noise image n from the original hyperspectral image z by low-pass filtering, then respectively computing the covariance matrices Σ_Z and Σ_N of Z and N, where Z = (z_1, z_2, …, z_L) and N = (n_1, n_2, …, n_L); computing the eigenvalues λ_i and eigenvectors u_i of Σ_N⁻¹ Σ_Z, satisfying λ_1 ≥ λ_2 ≥ … ≥ λ_L; letting U = (u_1, …, u_L), the final result of the MNF transform is Y = Uᵀ Z;
and step 3.2, computing the mean mean[i] and standard deviation std[i] of the hyperspectral image band by band, where i is the band index; for any band band[i], the pixel at any position (x, y) is normalized as

n_band[i](x, y) = (band[i](x, y) − mean[i]) / std[i]

where n_band[i] is the normalized result of band i and (x, y) is the pixel position.
4. The method of claim 1, wherein said step 4 is implemented as follows:
step 4.1, based on the positions of the invasive tree species obtained by field sampling, and using the TreeMask as an auxiliary image, labeling training samples of the invasive tree species on the hyperspectral image, and extracting them from the normalized image as blocks consisting of the pixel to be classified and its neighborhood pixels;
and step 4.2, randomly selecting unlabeled samples from the tree region of the hyperspectral image, numbering a certain multiple of the invasive tree species samples, and extracting them from the normalized hyperspectral image in the same manner of the pixel to be classified and its neighborhood pixels.
5. The method of claim 1, wherein the method comprises the steps of: the implementation of said step 5 is as follows,
step 5.1, constructing a basic feature extraction module, wherein the basic feature extraction module consists of a convolution layer, a batch normalization layer and an activation function, and a calculation formula of convolution operation is as follows:
Figure FDA0002845426700000022
wherein
Figure FDA0002845426700000023
Represents the value at the jth profile position (x, y) of the ith layer, the weight
Figure FDA0002845426700000024
And bias bijThe kth profile, P, connecting the preceding module oiAnd QiRespectively the size of the weight; the batch normalization is calculated as follows:
Figure FDA0002845426700000025
wherein beta and gamma are parameters which can be learned, and epsilon is a constant and is used for keeping the stability of the parameters; the calculation formula of the ReLU activation function is as follows: ReLU (v)ij)=max{0,vij};
Step 5.2, forming a deep convolutional neural network by repeatedly stacking a basic feature extraction module and a spatial down-sampling module, wherein the spatial down-sampling module adopts a convolutional layer-activation function to replace a maximum pooling layer for spatial down-sampling operation, the final output of the network inputs features into a full connection layer through global average pooling operation, and the output of the full connection layer is used for risk estimation and prediction;
step 5.3, inputting the training batch, consisting of invasive tree species data and unlabeled data, together with the joint labels $y$ into the deep convolutional neural network, and performing risk calculation with a loss function based on non-negative risk estimation; the loss function based on non-negative risk estimation is calculated as follows:

$$\hat{R}(f) = \pi_p \hat{E}_P\!\left[\ell(f(x), +1)\right] + \max\left\{0,\ \hat{E}_U\!\left[\ell(f(x), -1)\right] - \pi_p \hat{E}_P\!\left[\ell(f(x), -1)\right]\right\}$$

wherein the risk of the invasive tree species class is estimated as $\pi_p \hat{E}_P[\ell(f(x), +1)]$, $f$ is the convolutional neural network, $\hat{E}_P$ averages the loss over the positive samples, i.e. the labeled samples, the loss function is $\ell(f(x), y) = 1/(1 + \exp(y f(x)))$, and $\pi_p$ is the class prior; the risk of the unlabeled samples is estimated as $\hat{E}_U[\ell(f(x), -1)]$, wherein $\hat{E}_U$ averages the loss over the unlabeled samples, and the negative-class risk of the invasive tree species samples is estimated as $\pi_p \hat{E}_P[\ell(f(x), -1)]$; the label $+1$ denotes an invasive tree species sample, and $-1$ denotes a non-invasive tree species sample;
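The non-negative risk estimate of step 5.3 can be sketched as follows (a minimal NumPy illustration, not part of the claims; `f_p` and `f_u` stand for network scores on the labeled and unlabeled samples respectively):

```python
import numpy as np

def sigmoid_loss(scores, y):
    """l(f(x), y) = 1 / (1 + exp(y * f(x))) as in step 5.3."""
    return 1.0 / (1.0 + np.exp(y * scores))

def nn_pu_risk(f_p, f_u, pi_p):
    """Non-negative risk estimate: the positive risk pi_p * E_P[l(f, +1)] plus
    the unlabeled-minus-positive negative-class term, clipped at zero.
    Returns (risk, r_n) so callers can inspect the pre-clipping term."""
    r_p = pi_p * sigmoid_loss(f_p, +1).mean()
    r_n = sigmoid_loss(f_u, -1).mean() - pi_p * sigmoid_loss(f_p, -1).mean()
    return r_p + max(0.0, r_n), r_n
```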
step 5.4, after risk calculation, updating the network parameters with an algorithm based on stochastic gradient descent, and completing network training once iteration reaches the stopping condition; let

$$r_i = \hat{E}_U\!\left[\ell(f(x), -1)\right] - \pi_p \hat{E}_P\!\left[\ell(f(x), -1)\right]$$

for batch $i$; when $r_i \geq 0$, the network parameters are updated with $\nabla_\theta \hat{R}(f)$, the gradient of the risk estimation function, wherein $\theta$ denotes the parameters to be updated in the network; when $r_i < 0$, the gradient update is performed with $-\nabla_\theta r_i$.
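The two-branch update rule of step 5.4 can be illustrated on a one-parameter linear scorer (a toy sketch using numerical gradients, not part of the claims; the actual method applies stochastic gradient descent to a convolutional network):

```python
import numpy as np

def nnpu_step(theta, x_p, x_u, pi_p, lr=0.1):
    """One update step for a linear scorer f(x) = theta * x. When the
    negative-class estimate r_n >= 0 we descend the gradient of the full
    risk r_p + r_n; when r_n < 0 we descend the gradient of -r_n instead,
    pushing the overfitted negative risk back toward zero."""
    def risk_parts(t):
        l = lambda s, y: 1.0 / (1.0 + np.exp(y * s))
        r_p = pi_p * l(t * x_p, +1).mean()
        r_n = l(t * x_u, -1).mean() - pi_p * l(t * x_p, -1).mean()
        return r_p, r_n
    eps = 1e-6
    _, r_n = risk_parts(theta)
    if r_n >= 0:
        # numerical gradient of the full risk r_p + r_n
        g = (sum(risk_parts(theta + eps)) - sum(risk_parts(theta - eps))) / (2 * eps)
    else:
        # numerical gradient of -r_n (non-negative correction branch)
        g = -(risk_parts(theta + eps)[1] - risk_parts(theta - eps)[1]) / (2 * eps)
    return theta - lr * g
```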
6. The method of claim 1, characterized in that said step 6 is implemented as follows:
step 6.1, taking blocks pixel by pixel from the whole normalized, MNF-transformed hyperspectral image and inputting them into the trained convolutional neural network to predict whether each pixel belongs to an invasive tree species; if the result output by the network is greater than 0, the pixel is regarded as an invasive tree pixel, otherwise as a non-invasive tree pixel;
step 6.2, post-processing the prediction result output by the network with TreeMask to avoid the influence of non-tree pixels, according to the following formula:

$$\mathrm{final\_map} = \mathrm{result} \otimes \mathrm{TreeMask}$$

wherein final_map is the final prediction result, result is the output result of the network, and $\otimes$ denotes pixel-by-pixel multiplication of the pixels at corresponding positions of the two images; since TreeMask is a binary image with pixel values 0 or 1, this element-by-element multiplication at corresponding positions alleviates the false-alarm problem caused by non-tree pixels.
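The prediction and masking of steps 6.1–6.2 can be sketched as follows (a minimal NumPy illustration, not part of the claims):

```python
import numpy as np

def postprocess(result, tree_mask):
    """Step 6.2: threshold the network output at 0 (positive -> invasive
    tree pixel), then multiply element-wise by the binary TreeMask so that
    non-tree pixels cannot raise false alarms."""
    binary = (result > 0).astype(np.int32)
    return binary * tree_mask
```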
CN202011507877.1A 2020-12-18 2020-12-18 Hyperspectral and LiDAR fusion intrusion tree species detection method based on non-negative risk estimation Active CN112668420B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011507877.1A CN112668420B (en) 2020-12-18 2020-12-18 Hyperspectral and LiDAR fusion intrusion tree species detection method based on non-negative risk estimation


Publications (2)

Publication Number Publication Date
CN112668420A true CN112668420A (en) 2021-04-16
CN112668420B CN112668420B (en) 2022-06-07

Family

ID=75406825

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011507877.1A Active CN112668420B (en) 2020-12-18 2020-12-18 Hyperspectral and LiDAR fusion intrusion tree species detection method based on non-negative risk estimation

Country Status (1)

Country Link
CN (1) CN112668420B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070291994A1 (en) * 2002-05-03 2007-12-20 Imagetree Corp. Remote sensing and probabilistic sampling based forest inventory method
CN109031344A (en) * 2018-08-01 2018-12-18 南京林业大学 A kind of method of Full wave shape laser radar and high-spectral data joint inversion forest structural variable
CN109492563A (en) * 2018-10-30 2019-03-19 深圳大学 A kind of tree species classification method based on unmanned plane Hyperspectral imaging and LiDAR point cloud
US20200065968A1 (en) * 2018-08-24 2020-02-27 Ordnance Survey Limited Joint Deep Learning for Land Cover and Land Use Classification


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Gong Xiaowen et al., "Survival analysis models and their application in traffic engineering", Technology Innovation and Application, no. 14, 15 May 2020 (2020-05-15) *
Li Fangfang et al., "Application of the object-oriented random forest method to wetland vegetation classification", Remote Sensing Information, no. 01, 15 February 2018 (2018-02-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117809193A (en) * 2024-03-01 2024-04-02 江西省林业科学院 Unmanned aerial vehicle hyperspectral image and ground object hyperspectral data fusion method
CN117809193B (en) * 2024-03-01 2024-05-17 江西省林业科学院 Unmanned aerial vehicle hyperspectral image and ground object hyperspectral data fusion method

Also Published As

Publication number Publication date
CN112668420B (en) 2022-06-07

Similar Documents

Publication Publication Date Title
Avenash et al. Semantic Segmentation of Satellite Images using a Modified CNN with Hard-Swish Activation Function.
CN107346434A (en) A kind of plant pest detection method based on multiple features and SVMs
CN112183209A (en) Regional crop classification method and system based on multi-dimensional feature fusion
US20230165235A1 (en) Image monitoring for control of invasive grasses
Rangarajan et al. Detection of fusarium head blight in wheat using hyperspectral data and deep learning
Mamatov et al. Methods for improving contrast of agricultural images
Azizi et al. Semantic segmentation: A modern approach for identifying soil clods in precision farming
Kumar et al. Delineation of field boundary from multispectral satellite images through U-Net segmentation and template matching
Mendigoria et al. Varietal classification of Lactuca Sativa seeds using an adaptive neuro-fuzzy inference system based on morphological phenes
CN112668420B (en) Hyperspectral and LiDAR fusion intrusion tree species detection method based on non-negative risk estimation
CN117496356A (en) Agricultural artificial intelligent crop detection method and system
CN115861686A (en) Litchi key growth period identification and detection method and system based on edge deep learning
CN116721389A (en) Crop planting management method
Kumawat et al. Time-Variant Satellite Vegetation Classification Enabled by Hybrid Metaheuristic-Based Adaptive Time-Weighted Dynamic Time Warping
CN117743975A (en) Hillside cultivated land soil environment improvement method
CN116523352B (en) Forest resource information management method and system
AHM et al. A deep convolutional neural network based image processing framework for monitoring the growth of soybean crops
Moshou et al. Multisensor fusion of remote sensing data for crop disease detection
Sharmila et al. A Systematic Literature Review on Image Preprocessing and Feature Extraction Techniques in Precision Agriculture
Tahraoui et al. Land change detection in sentinel-2 images using ir-mad and deep neural network
Yenugudhati Identification of plant health using machine learning and image techniques
CN115311678A (en) Background suppression and DCNN combined infrared video airport flying bird detection method
Alam et al. Drone-Based Crop Product Quality Monitoring System: An Application of Smart Agriculture
CN113095145A (en) Hyperspectral anomaly detection deep learning method based on pixel pair matching and double-window discrimination
Kempegowda et al. Hybrid features and ensembles of convolution neural networks for weed detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant