CN114821321A - Leaf hyperspectral image classification and regression method based on multi-scale cascade convolution neural network - Google Patents

Leaf hyperspectral image classification and regression method based on multi-scale cascade convolution neural network

Info

Publication number
CN114821321A
Authority
CN
China
Prior art keywords
cnn
scale
network
convolution
hyperspectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210450076.9A
Other languages
Chinese (zh)
Inventor
王健
朱逢乐
赵章风
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202210450076.9A priority Critical patent/CN114821321A/en
Publication of CN114821321A publication Critical patent/CN114821321A/en
Pending legal-status Critical Current


Classifications

    • G06F 18/2148 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the process organisation or structure, e.g. boosting cascade
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/253 — Fusion techniques of extracted features
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of plant science research and discloses a leaf hyperspectral image classification and regression method based on a multi-scale cascade convolution neural network. First, dilated convolution is embedded in a 3D-CNN to construct spectral-spatial feature extraction structures of different scales and realize the fusion of multi-scale features; second, a 1D-CNN is cascaded after the 3D-CNN to further extract high-level abstract spectral features, and an optimal framework is explored for the proposed multi-scale 3D-1D-CNN network; finally, the proposed multi-scale 3D-1D-CNN network model is compared with reference models and a multi-scale 3D-CNN model on two facility-crop leaf datasets with limited samples to verify the effectiveness of the proposed method. The method benefits the classification and regression of leaf hyperspectral images, and can also provide new ideas and technical support for other image classification and regression methods in the field of agricultural informatization.

Description

Leaf hyperspectral image classification and regression method based on multi-scale cascade convolution neural network
Technical Field
The invention belongs to the field of plant science research, and particularly relates to a leaf hyperspectral image classification and regression method based on a multi-scale cascade convolution neural network.
Background
Biological processes of plant leaves such as photosynthesis and transpiration are closely related to leaf biochemical parameters such as chlorophyll content and water content. Taking basil leaves and pepper leaves as examples: basil is a herbaceous aromatic cash crop cultivated in greenhouses in southern China and consumed fresh, sun-dried or used medicinally, and the chlorophyll content of its leaves directly affects its growth and development, nutritional status and economic yield. Pepper is a shallow-rooted plant with a highly suberized root system and poor drought tolerance, and severe water stress easily causes great damage to its physiological mechanisms. Plant leaves affected by the above biochemical parameters also exhibit different response characteristics in the corresponding visible-near infrared spectrum. Hyperspectral imaging is an integrated imaging-spectroscopy technology that can simultaneously acquire the continuous spectral information of each pixel and the continuous image information of each spectral band; its spectral dimension consists of hundreds of continuous bands over the visible to infrared wavelength range. With the advantages of high resolution, rich band information, speed and non-destructiveness, it is widely applied in plant phenotyping research.
Because a hyperspectral image is a three-dimensional data block with high spectral dimensionality, redundant information in adjacent bands and limited labeled samples, the traditional hyperspectral image analysis process is complicated and its results depend on expert experience. An efficient method for classifying and regressing leaf hyperspectral images therefore needs to be established, laying a foundation for the subsequent prediction of plant biochemical parameter contents and the establishment of stress diagnosis models.
In the field of plant science research, deep learning models represented by Convolutional Neural Networks (CNNs) are increasingly applied to analysis of hyperspectral images. CNN learns and extracts local and global features of data autonomously through multi-layer convolution and pooling operations. According to the convolution kernel structure of CNN, it can be divided into one-dimensional CNN (1D-CNN), two-dimensional CNN (2D-CNN) and three-dimensional CNN (3D-CNN). The 1D-CNN is a hyperspectral image analysis model which is most applied at present, extraction and modeling of depth spectral features are carried out based on one-dimensional spectral data, an average spectrum or a pixel point spectrum of a sample region of interest (ROI) is usually extracted manually in advance, and the method is applied to classification of cotton seeds, identification of low-temperature stress of corn plants, prediction of water content of the corn plants, quantification of total anthocyanin of lycium ruthenicum, detection of pesticide residue on leek leaves and the like. The 2D-CNN mainly extracts depth space characteristics, can analyze and model original or dimensionality-reduced hyperspectral images, and is applied to apple leaf state classification, identification of disease-resistant rice seeds, identification of canopy infected areas of potato plants, evaluation of corn seed vitality and the like. Compared with feature learning of 1D-CNN and 2D-CNN on spectrum and space dimensions respectively, the 3D-CNN can extract depth spectrum-space combined features end to end, meanwhile, information continuity of adjacent spectrum wave band features is kept, and the method is particularly suitable for analysis of hyperspectral image three-dimensional data blocks with spatial spectrum continuity.
At present, 3D-CNN has seen little application in the field of plant science, with reports only on the detection of soybean stem diseases and cotton leaf aphids. This is because labeled samples in plant research are difficult and time-consuming to acquire, while 3D-CNN networks have many parameters and high computational complexity, so they easily overfit the limited labeled training samples available in plant research, reducing generalization performance. Therefore, in plant science research, an improved 3D-CNN model for leaf hyperspectral image classification and regression needs to be investigated that reduces model computational complexity under limited training samples while improving model generalization performance.
Disclosure of Invention
The invention aims to provide a leaf hyperspectral image classification and regression method based on a multi-scale cascade convolution neural network to solve the above technical problems.
In order to solve the above technical problems, the specific technical scheme of the leaf hyperspectral image classification and regression method based on the multi-scale cascade convolution neural network is as follows:
a leaf hyperspectral image classification and regression method based on a multi-scale cascade convolution neural network comprises the following steps:
s1: constructing a hyperspectral imaging system;
s2: collecting hyperspectral images of basil leaf samples and pepper leaf samples, and preprocessing the images;
s3: the expansion convolution is introduced into the 3D-CNN, the receptive field of the convolution kernel is expanded while the network parameters and the calculation complexity are not increased, spectral-spatial feature extraction structures of different scales are constructed, the fusion of multi-scale features is realized, and the optimal feature is explored
A multi-scale 3D-CNN network structure;
s4: cascading the 1D-CNN network behind the optimal multi-scale 3D-CNN network described in S3, learning more abstract spectral features on the basis of spectrum-space combined feature extraction by taking three-dimensional data as input, and performing optimal frame exploration on the proposed multi-scale 3D-1D-CNN network;
s5: and S4, comparing the optimal framework of the multi-scale 3D-1D-CNN network with the reference 1D-CNN, the reference 2D-CNN, the reference 3D-CNN and the multi-scale 3D-CNN network model, and verifying the effectiveness of the method.
Further, S1 comprises building a hyperspectral imaging system, obtaining a hyperspectral image of the sample, setting the spectral band range and spatial resolution, and correcting the original hyperspectral image.
Further, the hyperspectral imaging system comprises an SNAPSCAN VNIR spectrum camera, a 35mm lens, a 150W annular tungsten halogen light source, image acquisition software and an image acquisition platform; the spectral band range is 470-900nm, and 140 bands are total; the SNAPSCAN VNIR spectral camera adopts a built-in scanning imaging mode to rapidly acquire a hyperspectral image of a sample without relative displacement of the camera or the sample, the maximum spatial resolution of the SNAPSCAN VNIR spectral camera is 3650 x 2048, and the actual spatial resolution in data acquisition is adjusted according to the size of a blade sample; the vertical distance between the 35mm lens and the sample is 34.5cm, and the exposure time is set to be 30 ms; the method comprises the following steps of performing black-and-white correction on an original hyperspectral image acquired by the hyperspectral imaging system to calculate the reflectivity of a sample, and simultaneously reducing the influence of illuminance and camera dark current on acquisition of spectrograms of different samples:
R = (R0 - D) / (W - D)

where R0 and R are the hyperspectral images before and after correction, respectively, and W and D are the white reference image and the dark current reference image, respectively.
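The black-and-white correction above can be sketched in a few lines of NumPy; the array shapes and the division-by-zero guard are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def black_white_correction(raw, white, dark):
    """Convert a raw hyperspectral cube to reflectance: R = (R0 - D) / (W - D).

    raw:   (H, W, bands) raw image R0
    white: (H, W, bands) white reference image W (diffuse reflectance ~100%)
    dark:  (H, W, bands) dark current reference image D
    """
    raw = np.asarray(raw, dtype=np.float64)
    dark = np.asarray(dark, dtype=np.float64)
    denom = np.asarray(white, dtype=np.float64) - dark
    denom[denom == 0] = np.finfo(np.float64).eps  # guard against division by zero
    return (raw - dark) / denom

# Toy 2x2 image with 1 band: raw halfway between dark and white -> reflectance 0.5
raw = np.full((2, 2, 1), 55.0)
white = np.full((2, 2, 1), 100.0)
dark = np.full((2, 2, 1), 10.0)
refl = black_white_correction(raw, white, dark)
```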
Further, in S2 sweet basil leaves cultivated under artificial LED illumination are subjected to 3 different light intensity treatments; data are acquired when the basil has grown for 40 days, the relative chlorophyll value is measured, and 540 hyperspectral images are acquired. Also in S2, two water treatments are applied to healthy, uniformly grown pepper plants cultivated for 50 days: control group samples are irrigated normally, experimental group samples undergo five days of continuous drought, and 600 hyperspectral images are collected;
the hyperspectral images and corresponding label data pairs are read by a custom Python function.
Further, the image preprocessing of S2 includes background segmentation, noise point removal, and size adjustment;
the background segmentation mode comprises the following steps: adopting an 800nm near-infrared band with the largest difference between the spectral reflectances of the blade and the background as a threshold band for background segmentation, wherein the threshold is set to be 0.15;
the method for removing the noise point comprises the following steps: removing noise points by using morphological transformation in opencv-python;
the size adjustment is as follows: the spatial dimension is uniformly reduced to 120 x 120.
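The background segmentation step can be sketched with NumPy alone; the toy cube, the band index standing in for the 800 nm band, and the masking convention are illustrative assumptions, and the opencv-python morphological noise removal named above is omitted to keep the sketch dependency-free.

```python
import numpy as np

def segment_leaf(cube, band_index, threshold=0.15):
    """Mask out background pixels using a single near-infrared band.

    cube: (H, W, bands) reflectance image; band_index selects the ~800 nm band,
    where leaf/background contrast is largest. Pixels whose reflectance at that
    band is <= threshold are zeroed out as background.
    """
    mask = cube[:, :, band_index] > threshold
    return cube * mask[:, :, np.newaxis], mask

# Toy 2x2 cube with 3 bands; band 2 plays the role of the 800 nm threshold band.
cube = np.array([[[0.1, 0.2, 0.9], [0.1, 0.1, 0.05]],
                 [[0.2, 0.3, 0.8], [0.0, 0.0, 0.10]]])
segmented, mask = segment_leaf(cube, band_index=2)
```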
Further, S2 includes dividing the preprocessed images into a training set, a validation set and a test set by random extraction: 380 basil leaf samples are randomly extracted as the training set, 80 as the validation set and 80 as the test set; 400 pepper leaf samples are randomly extracted as the training set, 100 as the validation set and 100 as the test set; the dataset is divided into batches, with the batch size set to 8.
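The random split and batching above can be sketched as follows; the fixed seed is an assumption added for reproducibility.

```python
import numpy as np

def split_and_batch(n_samples, n_train, n_val, n_test, batch_size=8, seed=0):
    """Randomly partition sample indices into train/validation/test sets and
    batch the training set, mirroring the random-extraction split described
    above (e.g. 380/80/80 for basil, 400/100/100 for pepper)."""
    assert n_train + n_val + n_test == n_samples
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    batches = [train[i:i + batch_size] for i in range(0, len(train), batch_size)]
    return train, val, test, batches

# Basil dataset: 540 images split 380/80/80, batch size 8
train, val, test, batches = split_and_batch(540, 380, 80, 80)
```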
Further, in S3 the construction of spectral-spatial feature extraction structures of different scales embeds a standard 3D convolution and a dilated 3D convolution in parallel in the same network layer, with each of the two convolution kernels occupying 50% of the channels; the feature maps output by the different convolution kernels are then spliced along the channel dimension after batch normalization, followed by ReLU nonlinear activation.
Further, the dilated 3D convolution described in S3 is implemented by inserting d-1 zero-valued weights between adjacent weights along all rows and columns of a standard 3D convolution kernel, where d is the dilation factor; this enlarges the receptive field of the convolution kernel without increasing network parameters or losing information. The standard 3D convolution kernel is 3 × 3 × 3 (d = 1), with a receptive field of 3 × 3 × 3;
the dilated 3D convolution kernel is 3 × 3 × 3; dilated kernel structures with d = 2, 3 and 4 are tested, corresponding to receptive fields of 5 × 5 × 5, 7 × 7 × 7 and 9 × 9 × 9, respectively; convolution kernels with different receptive fields extract spectral-spatial features of different scales from the feature map;
the optimal multi-scale 3D-CNN network structure is the model incorporating the 3 × 3 × 3 dilated 3D convolution kernel with d = 2.
Further, cascading the 1D-CNN network behind the optimal multi-scale 3D-CNN network in S4 means testing model performance with the 3D-CNN-to-1D-CNN conversion placed at different positions of the multi-scale 3D ResNet-18 network to obtain the optimal multi-scale 3D-1D-CNN model;
the basic composition unit of the optimal multi-scale 3D-CNN network cascade 1D-CNN network comprises a convolution layer, a batch normalization layer, a ReLU nonlinear activation layer, a mean pooling layer and a full connection layer, wherein the whole network is divided into stages of 3D-CNN, 3D-CNN-to-1D-CNN and 1D-CNN from an input layer to an output layer;
in the 3D-CNN stage, a preprocessed hyperspectral image cube with dimensions 120 × 120 × 140 is used as input; first, a 3 × 3 × 3 convolution kernel with stride (2, 2, 1) extracts local spectral-spatial features to generate a 64-channel three-dimensional spectral image, which is then fed into the successive optimal multi-scale 3D-CNN network to extract and fuse spectral-spatial features of different scales;
in the 3D-CNN-to-1D-CNN conversion stage, testing showed that the conversion performs best at position (c) of the multi-scale 3D ResNet-18 network; the conversion is as follows: the 256 × 15 × 15 × 35 three-dimensional feature map is converted into a one-dimensional feature of length 35 using a 15 × 15 × 1 3D convolution kernel;
in the 1D-CNN stage, depth spectral features are extracted with successive one-dimensional convolution kernels of size 3 to obtain 512 × 17 one-dimensional features; the extracted high-level abstract features are compressed and aggregated by mean pooling into features with higher expressive power, of dimension 512 × 1, and finally the model prediction is output through a fully connected layer. Compared with the optimal multi-scale 3D-CNN network without the cascaded 1D-CNN, the parameter count of the model is reduced by about 35.83%.
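The shape bookkeeping of the 3D-to-1D conversion step can be sketched with NumPy; the einsum below stands in for a single 15 × 15 × 1 3D convolution with 'valid' padding and is an illustrative assumption, not the patent's exact layer.

```python
import numpy as np

# Sketch of the 3D-CNN -> 1D-CNN conversion: a (channels, H, W, bands) feature
# map of shape 256 x 15 x 15 x 35 is collapsed to a one-dimensional spectral
# feature of length 35 by a kernel spanning the full 15 x 15 spatial extent.
rng = np.random.default_rng(0)
feat3d = rng.standard_normal((256, 15, 15, 35))  # (channels, H, W, spectral)
kernel = rng.standard_normal((256, 15, 15))      # 15x15x1 kernel over all channels

# Summing over channels and the 15x15 spatial extent leaves only the spectral axis.
feat1d = np.einsum('chwb,chw->b', feat3d, kernel)
```

After this step only the spectral axis remains, so the subsequent layers can use cheap one-dimensional convolutions.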
Further, in S5 the reference 1D-CNN, reference 2D-CNN, reference 3D-CNN and multi-scale 3D-CNN each comprise convolutional layers, batch normalization layers, ReLU nonlinear activation layers, mean pooling layers and fully connected layers;
the reference 1D-CNN is a one-dimensional convolution neural network based on a residual error network framework ResNet, and feature extraction is carried out on spectral dimensions;
the reference 2D-CNN is a two-dimensional convolution neural network based on ResNet, and feature extraction is carried out on the spatial dimension; the reference 3D-CNN is a three-dimensional convolution neural network based on ResNet, and performs combined feature extraction on the dimensionality of the spectrum + space;
the multi-scale 3D-CNN is the optimal multi-scale 3D-CNN network in S3, and can acquire spectrum space characteristics of different scales;
the models are trained from scratch, with the initial network weights randomly initialized;
the loss function of the regression model is mean square error, and the loss function of the classification model is cross entropy;
the models are optimized by gradient descent with momentum set to 0.9; the initial learning rate is varied over {1×10⁻², 1×10⁻³, 1×10⁻⁴, 1×10⁻⁵}, the learning rate is reduced by one order of magnitude every 30 training epochs, and there are 80 training epochs in total;
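The step-decay schedule above (divide by 10 every 30 epochs, 80 epochs total) can be sketched as:

```python
def step_decay_lr(initial_lr, epoch, drop_every=30, factor=0.1):
    """Learning rate schedule described above: reduce the rate by one order
    of magnitude every `drop_every` training epochs."""
    return initial_lr * factor ** (epoch // drop_every)

# 80 epochs at initial rate 1e-2: epochs 0-29 at 1e-2, 30-59 at 1e-3, 60-79 at 1e-4
lrs = [step_decay_lr(1e-2, e) for e in range(80)]
```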
the evaluation indices of regression model performance are the coefficient of determination R² and the root mean square error (RMSE);
the classification model was evaluated for F1-score and accuracy.
The leaf hyperspectral image classification and regression method based on the multi-scale cascade convolution neural network has the following advantages:
(1) Aiming at the problem that 1D-CNN and 2D-CNN network models cannot extract spectral-spatial joint features from hyperspectral images of plant leaves, the invention provides a multi-scale 3D-CNN network framework incorporating dilated convolution, which helps extract the multi-scale spectral-spatial joint features of plant leaves and further improves the performance of the 3D-CNN model without increasing network parameters or computational complexity;
(2) Aiming at the problems that a 3D-CNN network has high computational complexity and, under limited samples, is prone to overfitting and poor generalization, the invention finds the optimal network structure that cascades a 1D-CNN after the multi-scale 3D-CNN, further reducing the parameter count, computational complexity and overfitting of pure three-dimensional convolution while improving model generalization;
(3) The multi-scale cascade convolution neural network model method for classifying and regressing leaf hyperspectral images benefits the classification and regression of leaf hyperspectral images, and can also provide new ideas and technical support for other image classification and regression methods in the field of agricultural informatization.
Drawings
FIG. 1 is a flow chart of a leaf hyperspectral image classification and regression method based on a multi-scale cascade convolution neural network;
FIG. 2 is a schematic diagram of the multi-scale spectral-spatial joint feature extraction module fusing standard and dilated 3D convolutions;
FIG. 3 is an overall framework diagram of an optimal multi-scale 3D-1D-CNN network for end-to-end modeling of a hyperspectral image;
FIG. 4 is a diagram showing the performance comparison of a multi-scale 3D-1D-CNN model for quantifying the SPAD value of a basil leaf sample;
FIG. 5 is a diagram illustrating performance comparison of a multi-scale 3D-1D-CNN model for identifying drought stress of pepper leaf samples;
FIG. 6 is an overall framework diagram of the multi-scale 3D-CNN network without the cascaded 1D-CNN;
FIG. 7 is a comparison graph of model prediction effects of a reference 1D-CNN, a reference 2D-CNN, a reference 3D-CNN, a multi-scale 3D-CNN and a multi-scale 3D-1D-CNN for a basil leaf sample SPAD value quantification;
FIG. 8 is a comparison graph of model prediction effects of a benchmark 1D-CNN, a benchmark 2D-CNN, a benchmark 3D-CNN, a multi-scale 3D-CNN and a multi-scale 3D-1D-CNN for identifying drought stress of pepper leaf samples.
Detailed Description
In order to better understand the purpose, structure and function of the present invention, the following describes a leaf hyperspectral image classification and regression method based on a multi-scale cascade convolutional neural network in detail with reference to the attached drawings.
Firstly, spectral-spatial feature extraction structures of different scales are constructed in the 3D-CNN by embedding dilated convolution, realizing the fusion of multi-scale features and further improving 3D-CNN model performance without increasing network parameters or computational complexity;
secondly, designing an efficient multi-scale 3D-1D-CNN network structure based on the multi-scale 3D-CNN, namely cascading 1D-CNN after the 3D-CNN to further extract high-level abstract spectrum characteristics, and performing optimal frame exploration on the proposed multi-scale 3D-1D-CNN network to reduce the computational complexity and overfitting degree of pure three-dimensional convolution;
and finally, comparing the proposed multi-scale 3D-1D-CNN network model with a reference 1D-CNN model, a reference 2D-CNN model, a reference 3D-CNN model and a multi-scale 3D-CNN model on two facility crop leaf data sets with limited samples to verify the effectiveness of the proposed method.
As shown in FIG. 1, the method for classifying and regressing the hyperspectral image of the leaf based on the multi-scale cascade convolution neural network specifically comprises the following steps:
s1: a visible-near infrared hyperspectral imaging system is built, a hyperspectral image of a sample is obtained, a spectral band range and spatial resolution are set, and an original hyperspectral image is corrected.
The visible-near infrared hyperspectral imaging system comprises an SNAPSCAN VNIR spectrum camera (IMEC, Leuven, Belgium), a 35mm lens, a 150W annular halogen tungsten lamp light source, image acquisition software (HSImager), an image acquisition platform and the like; the spectral band range of S1 is 470-900nm, and 140 bands are total; s1, the spectral camera adopts a built-in scanning imaging mode to quickly acquire the hyperspectral image of the sample, does not need the relative displacement of the camera or the sample, and avoids the problem of spatial deformation of the spectral image; s1, the maximum spatial resolution of the spectrum camera is 3650 x 2048, and the actual spatial resolution in data acquisition is adjusted according to the size of the blade sample; s1, setting the vertical distance between the 35mm lens and the sample to be 34.5cm, and setting the exposure time to be 30 ms; s1, performing the following black and white correction on the original hyperspectral image acquired by the hyperspectral imaging system to calculate the reflectivity of the sample, and simultaneously reducing the influence of illuminance and camera dark current on acquisition of different sample spectrograms:
R = (R0 - D) / (W - D)

where R0 and R are the hyperspectral images before and after correction, respectively, and W and D are the white reference image (diffuse reflectance close to 100%) and the dark current reference image, respectively.
S2: collecting hyperspectral images of basil leaf samples and pepper leaf samples, preprocessing the images, and dividing all images into a training set, a validation set and a test set; the image preprocessing comprises background segmentation, noise removal and size adjustment.
The sweet basil was cultivated under artificial LED illumination with 3 different light intensity treatments of 200±5, 135±4 and 70±5 μmol·m⁻²·s⁻¹, a red-to-blue light ratio of 3:1 and a photoperiod of 16 h/day; 40 pots were cultivated per experimental group with 1 plant per pot, 120 basil plants in total. Apart from the different light intensities, all experimental groups received normal water, fertilizer, air, heat, disease, insect and weed management. Data were acquired when the basil had grown for 40 days: the SPAD value of each leaf was measured with a SPAD-502 chlorophyll meter (Minolta Camera Co., Osaka, Japan), 3 times along each side of the midrib, and the average of the 6 measurements was taken as the relative chlorophyll value of the sample; 540 hyperspectral images were acquired, each leaf sample image having dimensions 600 × 800 × 140;
for the pepper leaves, two water treatments were applied to healthy, uniformly grown pepper plants cultivated for 50 days: control group samples were irrigated normally (drip irrigation, 600 ml/day), while experimental group samples underwent 5 days of continuous drought (drip irrigation, 50 ml/day), with drip irrigation at 17:00. Each group had 10 pots with 2 plants per pot, 40 pepper plants in total. Apart from the different water application, all experimental groups received normal fertilizer, air, heat, light, disease, insect and weed management. 300 leaf samples were collected from each group, 600 leaf hyperspectral images in total, each leaf sample image having dimensions 120 × 200 × 140;
the hyperspectral images and corresponding label data pairs are read by a custom Python function;
the image background segmentation mode comprises the following steps: adopting an 800nm near-infrared band with the largest difference between the spectral reflectances of the blade and the background as a threshold band for background segmentation, wherein the threshold is set to be 0.15;
the noise point removing method comprises the following steps: removing noise points by using morphological transformation in opencv-python;
and (3) size adjustment: uniformly reducing the space dimension to 120 multiplied by 120;
the images are divided into a training set, a validation set and a test set by random extraction: 380 basil leaf samples are randomly extracted as the training set, 80 as the validation set and 80 as the test set; 400 pepper leaf samples are randomly extracted as the training set, 100 as the validation set and 100 as the test set; the dataset is divided into batches, with the batch size set to 8.
S3: introducing dilated convolution into the 3D-CNN, enlarging the receptive field of the convolution kernel without increasing network parameters or computational complexity, constructing spectral-spatial feature extraction structures of different scales, realizing the fusion of multi-scale features, and exploring the optimal multi-scale 3D-CNN network structure to improve 3D-CNN model performance;
the 3D-CNN can extract continuous local spectrum-space combined features end to end, and the characteristic value of the jth feature of the ith layer at coordinates x, y and z
Figure BDA0003618221780000071
Can be expressed as:
Figure BDA0003618221780000081
in the formula: p i 、Q i And R i Is the size of the i-th layer of three-dimensional convolution kernels,
Figure BDA0003618221780000082
the characteristic value of the kth characteristic of the (i-1) th layer at the (x + p) (y + q) (z + r) position,
Figure BDA0003618221780000083
a convolution kernel for performing a convolution operation on the kth feature of the (i-1) layer; m is the number of (i-1) layer features, b ij For bias, δ represents the activation function.
The construction of spectral-spatial feature extraction structures of different scales embeds a standard 3D convolution and a dilated 3D convolution in parallel in the same network layer, with each of the two convolution kernels occupying 50% of the channels; the feature maps output by the different convolution kernels are then spliced along the channel dimension after batch normalization, followed by ReLU nonlinear activation, realizing the extraction and fusion of spectral-spatial features of different scales; the framework is shown in FIG. 2;
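A minimal NumPy sketch of the parallel standard/dilated branches and the channel concatenation follows. Batch normalization and ReLU are omitted for brevity, the naive loop convolution uses 'valid' padding (so the standard branch is cropped to match the dilated branch, an assumption; the patent presumably pads so both branches share a spatial size), and the tiny shapes and kernels are illustrative.

```python
import numpy as np

def conv3d_single(volume, kernel, d=1):
    """Naive 'valid' 3D convolution of one input channel with one kernel,
    with dilation factor d (d=1 reduces to a standard 3D convolution)."""
    K = kernel.shape[0]
    span = (K - 1) * d + 1                              # receptive field per axis
    out_shape = tuple(s - span + 1 for s in volume.shape)
    out = np.zeros(out_shape)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            for z in range(out_shape[2]):
                # dilated sampling: take every d-th voxel inside the receptive field
                patch = volume[x:x + span:d, y:y + span:d, z:z + span:d]
                out[x, y, z] = np.sum(patch * kernel)
    return out

def multiscale_block(volume, kernels_std, kernels_dil, d=2):
    """Multi-scale module sketch: half the output channels from standard 3x3x3
    convolutions, half from dilated ones (d=2), concatenated along channels."""
    dil = np.stack([conv3d_single(volume, k, d) for k in kernels_dil])
    std = np.stack([conv3d_single(volume, k, 1) for k in kernels_std])
    crop = tuple(slice(0, s) for s in dil.shape[1:])    # align spatial sizes
    return np.concatenate([std[(slice(None),) + crop], dil], axis=0)

rng = np.random.default_rng(0)
vol = rng.standard_normal((9, 9, 9))
ks = [rng.standard_normal((3, 3, 3)) for _ in range(2)]  # standard-branch kernels
kd = [rng.standard_normal((3, 3, 3)) for _ in range(2)]  # dilated-branch kernels
fused = multiscale_block(vol, ks, kd, d=2)
```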
the dilated 3D convolution is realized by inserting D−1 zero-weight positions between adjacent weights in every row and column of the standard 3D convolution kernel, where D is the dilation factor; this expands the receptive field of the convolution kernel without increasing network parameters or losing information;
the convolution kernel of the standard 3D convolution is 3 × 3 × 3 (D = 1), with a receptive field of 3 × 3 × 3;
the convolution kernel of the dilated 3D convolution is also 3 × 3 × 3; dilated kernel structures with D = 2, 3 and 4 were tested, corresponding to receptive fields of 5 × 5 × 5, 7 × 7 × 7 and 9 × 9 × 9 respectively, and convolution kernels with different receptive fields extract spectral-spatial features of different scales from the feature map;
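The per-axis receptive field of a dilated kernel follows the standard relation r = d·(k − 1) + 1, which reproduces the D = 1…4 values above (the helper name is ours):

```python
def dilated_receptive_field(kernel_size, dilation):
    """Per-axis receptive field of a dilated convolution kernel:
    inserting (dilation - 1) zeros between weights stretches a k-tap
    kernel to span d*(k - 1) + 1 input positions."""
    return dilation * (kernel_size - 1) + 1

for d in (1, 2, 3, 4):
    print(d, dilated_receptive_field(3, d))  # 3, 5, 7, 9
```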
the optimal multi-scale 3D-CNN network structure takes a preprocessed hyperspectral image cube data block of dimension 120 × 120 × 140 as input; first, a 3 × 3 × 3 convolution kernel with strides (2, 2, 1) extracts local spectral-spatial features and generates a 64-channel three-dimensional spectral image, which is then fed into the multi-scale spectral-spatial joint feature extraction module;
the results in tables 1 and 2 show that when the 3 × 3 × 3 dilated 3D convolution kernel with D = 2 is embedded, the multi-scale 3D-CNN outperforms the other tested variants.
TABLE 1 Multi-scale 3D-CNN model Performance comparison of Ocimum leaf sample SPAD values quantification
[Table 1 appears as an image in the original publication; its numerical values are not reproduced here.]
TABLE 2 Multi-scale 3D-CNN model Performance comparison for Pepper leaf sample drought stress identification
[Table 2 appears as an image in the original publication; its numerical values are not reproduced here.]
S4: the 1D-CNN is cascaded after the optimal multi-scale 3D-CNN. Taking three-dimensional data as input, more abstract spectral features are learned on top of the spectral-spatial joint feature extraction, and an optimal-framework search is performed on the proposed multi-scale 3D-1D-CNN network, so as to reduce the number of network parameters, the computational complexity of the model and the degree of overfitting, and to improve the model's generalization performance.
The multi-scale 3D-1D-CNN network is the optimal multi-scale 3D-1D-CNN model obtained by testing model performance with the 3D-CNN-to-1D-CNN conversion placed at different candidate positions of the multi-scale 3D ResNet-18 network, as shown in FIG. 3;
the basic composition units of the multi-scale 3D-CNN cascade 1D-CNN network comprise a convolution layer, a batch normalization layer, a ReLU nonlinear activation layer, a mean value pooling layer and a full connection layer. From the input layer to the output layer, the whole network is divided into stages of 3D-CNN, 3D-CNN-to-1D-CNN and 1D-CNN;
in the 3D-CNN stage, a preprocessed hyperspectral image cube data block of dimension 120 × 120 × 140 is taken as input. First, a 3 × 3 × 3 convolution kernel with strides (2, 2, 1) extracts local spectral-spatial features and generates a 64-channel three-dimensional spectral image, which is then fed into the successive multi-scale 3D-CNN modules to extract and fuse spectral-spatial features of different scales. As shown in fig. 3, the successive multi-scale 3D-CNN modules comprise 10 layers with output channel counts of 64, 128 and 256 respectively, the number of output channels of each layer being the number of input channels of the next layer;
in the 3D-CNN-to-1D-CNN conversion stage, the conversion position within the multi-scale 3D ResNet-18 network was varied and the position giving the best test-set performance was selected, as shown in FIGS. 4 and 5. At first, as multi-scale 3D convolution layers are added, the prediction performance on both data sets improves significantly even though training complexity increases, indicating that multi-scale spectral-spatial feature extraction on the input three-dimensional data block helps the model learn richer information. Beyond the selected position, however, model complexity keeps growing while prediction generalization begins to decline and overfitting appears, mainly because the model has too many parameters relative to the limited sample data set. Therefore, at the selected position, the 256 × 15 × 15 × 35 (channels × spatial height × spatial width × bands) three-dimensional feature map output by the 3D-CNN is converted into a one-dimensional feature of size 35 using a 15 × 15 × 1 3D convolution kernel.
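The 3D-to-1D conversion can be sanity-checked with the standard convolution output-size formula, out = ⌊(in + 2p − k)/s⌋ + 1. Assuming no padding and stride 1 (the patent does not state these), a 15 × 15 × 1 kernel collapses the 15 × 15 spatial axes to 1 × 1 while leaving the 35 bands untouched, so each output channel contributes one value per band:

```python
def conv_out(n, kernel, stride=1, padding=0):
    """Standard convolution output-size formula (floor division)."""
    return (n + 2 * padding - kernel) // stride + 1

# 256 x 15 x 15 x 35 feature map convolved with a 15 x 15 x 1 kernel:
h, w, bands = conv_out(15, 15), conv_out(15, 15), conv_out(35, 1)
print(h, w, bands)  # 1 1 35
```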
In the 1D-CNN stage, depth spectral features are extracted with successive one-dimensional convolution kernels of size 3, giving 512 × 17 one-dimensional features; the extracted high-level abstract features are then compressed and aggregated by mean pooling into more expressive features of dimension 512 × 1, and finally the model prediction is output through a fully connected layer;
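The mean pooling step here is a global average over the length-17 axis of each channel; a minimal sketch (names and toy values are ours):

```python
def global_mean_pool(features):
    """Collapse each channel's length-L vector to its mean
    (512 x 17 -> 512 x 1 in the network described above)."""
    return [sum(channel) / len(channel) for channel in features]

feats = [[float(i)] * 17 for i in range(512)]  # 512 channels x 17 steps
pooled = global_mean_pool(feats)
print(len(pooled), pooled[3])  # 512 3.0
```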
compared with the multi-scale 3D-CNN network without the cascaded 1D-CNN (figure 6), the multi-scale 3D-CNN cascaded 1D-CNN network structure (figure 3) reduces the model parameter count by about 35.83%.
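The saving comes from swapping 3D convolution layers for 1D ones. Per-layer parameter counts follow the standard formula params = C_out · (C_in · ∏k + 1); the 256-to-512-channel shapes below are illustrative only, not the patent's exact layer list, so the per-layer ratio differs from the whole-model 35.83% figure:

```python
def conv_params(c_in, c_out, *kernel_dims):
    """Convolution layer parameters: weights plus one bias per output channel."""
    n = 1
    for k in kernel_dims:
        n *= k
    return c_out * (c_in * n + 1)

p3d = conv_params(256, 512, 3, 3, 3)  # a 3D layer with a 3x3x3 kernel
p1d = conv_params(256, 512, 3)        # the 1D counterpart with a size-3 kernel
print(p3d, p1d, round(100 * (1 - p1d / p3d), 1))
```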
S5: and comparing the optimal network framework of the multi-scale 3D-1D-CNN in the S4 with the reference 1D-CNN, the reference 2D-CNN, the reference 3D-CNN and the multi-scale 3D-CNN network model to verify the effectiveness of the method.
Preferably, the reference 1D-CNN, the reference 2D-CNN, the reference 3D-CNN and the multi-scale 3D-CNN in S5 mainly comprise a convolution layer, a batch normalization layer, a ReLU nonlinear activation layer, a mean pooling layer and a full connection layer;
the reference 1D-CNN is a one-dimensional convolution neural network based on a residual error network framework (ResNet), and is used for performing feature extraction on spectral dimensions;
the reference 2D-CNN is a two-dimensional convolution neural network based on ResNet, and feature extraction is carried out on the spatial dimension;
the reference 3D-CNN is a three-dimensional convolution neural network based on ResNet, and performs combined feature extraction on the dimensionality of the spectrum + space;
the multi-scale 3D-CNN is the optimal multi-scale 3D-CNN network in S3, and can acquire spectrum space characteristics of different scales and avoid incomplete expression of characteristic information under a single scale;
the model training adopts a from scratch training mode, and the initial network weight is derived from a random initialization value;
the loss function of the regression model is mean square error, and the loss function of the classification model is cross entropy;
the model is optimized by a gradient descent method with momentum set to 0.9; the initial learning rate is varied over {1×10⁻², 1×10⁻³, 1×10⁻⁴, 1×10⁻⁵}, the learning rate is reduced by one order of magnitude every 30 training epochs, and the total number of training epochs is 80;
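A step schedule matching that description (order-of-magnitude decay every 30 epochs) can be sketched as follows; the function name is ours:

```python
def learning_rate(initial_lr, epoch, step=30, factor=0.1):
    """Decay the learning rate by one order of magnitude every `step` epochs."""
    return initial_lr * factor ** (epoch // step)

for e in (0, 29, 30, 60, 79):
    print(e, learning_rate(1e-2, e))
```

Over the 80-epoch run described above, an initial rate of 1×10⁻² thus decays to 1×10⁻³ at epoch 30 and 1×10⁻⁴ at epoch 60.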
the evaluation indices of regression model performance are the coefficient of determination (R²) and root mean square error (RMSE);
the classification model is evaluated with F1-score and accuracy.
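For reference, plain-Python definitions of the four evaluation indices (standard formulas, not code from the patent; binary labels are assumed for F1 and accuracy):

```python
def r2_rmse(y_true, y_pred):
    """Coefficient of determination and root mean square error."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot, (ss_res / n) ** 0.5

def accuracy_f1(y_true, y_pred):
    """Accuracy and binary F1-score from 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return acc, f1

print(r2_rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # perfect fit: (1.0, 0.0)
print(accuracy_f1([1, 0, 1, 1], [1, 0, 0, 1]))
```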
Compared with the multi-scale 3D-CNN, the multi-scale 3D-1D-CNN optimal framework proposed here not only improves model performance under small-sample conditions but also markedly reduces computational complexity, as shown in tables 3 and 4:
TABLE 3 Multi-Scale 3D-1D-CNN model Performance comparison of Ocimum leaf sample SPAD values quantification
[Table 3 appears as an image in the original publication; its numerical values are not reproduced here.]
TABLE 4 Multi-Scale 3D-1D-CNN model Performance comparison for Pepper leaf sample drought stress identification
[Table 4 appears as an image in the original publication; its numerical values are not reproduced here.]
FIGS. 7 and 8 visually show the comparison of the prediction effects of the reference 1D-CNN, the reference 2D-CNN, the reference 3D-CNN, the multi-scale 3D-CNN and the multi-scale 3D-1D-CNN models on two blade data sets:
a large amount of spectral-spatial information is lost by the reference 1D-CNN model at the data input layer, so the model learns insufficiently, i.e. it underfits;
the reference 2D-CNN model simplifies the data preprocessing process and improves the automation degree of the hyperspectral image analysis process, but the 2D-CNN model only focuses on the extraction of spatial features;
the reference 3D-CNN model's joint feature extraction in the spectral + spatial dimensions obtains more effective and detailed local abstract features while preserving the continuity of the spectral information, making it well suited to the data characteristics of three-dimensional hyperspectral image blocks; however, the 3D-CNN model has a large parameter count and high computational complexity;
the multi-scale 3D-CNN model improves the overall prediction without increasing the parameter count or computational complexity, but on these data sets its test-set results are clearly lower than its verification-set results, i.e. it overfits noticeably;
it can be seen that, compared with the reference 1D-CNN, reference 2D-CNN, reference 3D-CNN and multi-scale 3D-CNN, the multi-scale 3D-1D-CNN model proposed here improves regression test-set R² by 22.46%, 8.15%, 4.30% and 2%, respectively, and classification test-set accuracy by 28.56%, 16.59%, 8.49% and 4.13%, respectively. This demonstrates the effectiveness of the designed network model under small-sample conditions: it reduces the network parameter count, computational complexity and overfitting of plain three-dimensional convolution while improving model generalization.
In conclusion, the method achieves a clear technical effect, contributes to the development of leaf hyperspectral image classification and regression methods, has broad application prospects in plant science research, and offers considerable economic benefit.
It is to be understood that the present invention has been described with reference to certain embodiments, and that various changes in the features and embodiments, or equivalent substitutions may be made therein by those skilled in the art without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (10)

1. A leaf hyperspectral image classification and regression method based on a multi-scale cascade convolution neural network is characterized by comprising the following steps:
s1: constructing a hyperspectral imaging system;
s2: collecting hyperspectral images of basil leaf samples and pepper leaf samples, and preprocessing the images;
s3: introducing the expanded convolution into the 3D-CNN, expanding the receptive field of a convolution kernel without increasing network parameters and calculation complexity, constructing spectrum space feature extraction structures with different scales, realizing the fusion of multi-scale features, and exploring the optimal multi-scale 3D-CNN network structure;
s4: cascading the 1D-CNN network behind the optimal multi-scale 3D-CNN network described in S3, learning more abstract spectral features on the basis of spectrum-space combined feature extraction by taking three-dimensional data as input, and performing optimal frame exploration on the proposed multi-scale 3D-1D-CNN network;
s5: and S4, comparing the optimal framework of the multi-scale 3D-1D-CNN network with the reference 1D-CNN, the reference 2D-CNN, the reference 3D-CNN and the multi-scale 3D-CNN network model, and verifying the effectiveness of the method.
2. The method for classifying and regressing vane hyperspectral images based on the multi-scale cascade convolutional neural network of claim 1, wherein the step S1: the method comprises the steps of building a hyperspectral imaging system, obtaining a hyperspectral image of a sample, setting a spectral band range and spatial resolution, and correcting an original hyperspectral image.
3. The blade hyperspectral image classification and regression method based on the multiscale cascade convolution neural network is characterized in that the hyperspectral imaging system comprises an SNAPSCAN VNIR spectrum camera, a 35mm lens, a 150W annular halogen tungsten lamp light source, image acquisition software and an image acquisition platform; the spectral band range is 470-900nm, and 140 bands are total; the SNAPSCAN VNIR spectral camera adopts a built-in scanning imaging mode to rapidly acquire a hyperspectral image of a sample without relative displacement of the camera or the sample, the maximum spatial resolution of the SNAPSCAN VNIR spectral camera is 3650 x 2048, and the actual spatial resolution in data acquisition is adjusted according to the size of a blade sample; the vertical distance between the 35mm lens and the sample is 34.5cm, and the exposure time is set to be 30 ms; the method comprises the following steps of performing black-and-white correction on an original hyperspectral image acquired by the hyperspectral imaging system to calculate the reflectivity of a sample, and simultaneously reducing the influence of illuminance and camera dark current on acquisition of spectrograms of different samples:
$$R = \frac{R_0 - D}{W - D}$$
wherein R₀ and R are the hyperspectral images before and after correction respectively, and W and D are the white board reference image and the dark current reference image respectively.
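The black-and-white correction in the formula above can be sketched per pixel in plain Python (array and function names are ours; a real pipeline would apply this over the full hyperspectral cube, typically with an array library):

```python
def black_white_correct(raw, white, dark):
    """Reflectance correction R = (R0 - D) / (W - D), applied pixel-wise."""
    return [
        [(r - d) / (w - d) for r, w, d in zip(row_r, row_w, row_d)]
        for row_r, row_w, row_d in zip(raw, white, dark)
    ]

raw = [[0.55, 0.30]]    # raw image pixels (toy values)
white = [[1.00, 1.00]]  # white board reference
dark = [[0.05, 0.05]]   # dark current reference
print(black_white_correct(raw, white, dark))
```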
4. The blade hyperspectral image classification and regression method based on the multiscale cascade convolution neural network is characterized in that sweet basil blades cultured under artificial LED illumination are subjected to 3 different light intensity treatments in S2, data acquisition is carried out when the basil grows for 40 days, the relative value of chlorophyll is measured, and 540 hyperspectral images are acquired; s2, carrying out two kinds of water treatment on healthy pepper plants which grow uniformly for 50 days, normally irrigating control group samples, continuously drying experimental group samples for five days, and collecting 600 hyperspectral images;
the hyperspectral image and corresponding tag data pair are read by a custom python function.
5. The method for classifying and regressing vane hyperspectral images based on the multi-scale cascade convolutional neural network as claimed in claim 1, wherein the image preprocessing of S2 comprises background segmentation, noise point removal and size adjustment;
the background segmentation mode comprises the following steps: adopting an 800nm near-infrared band with the largest difference between the spectral reflectances of the blade and the background as a threshold band for background segmentation, wherein the threshold is set to be 0.15;
the method for removing the noise point comprises the following steps: removing noise points by using morphological transformation in opencv-python;
the size adjustment is as follows: the spatial dimension is uniformly reduced to 120 x 120.
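The background segmentation described above keys on the 800 nm band with a 0.15 reflectance threshold; a sketch treating that band as a 2D reflectance array (names and toy values are ours):

```python
def leaf_mask(band_800nm, threshold=0.15):
    """Pixels whose 800 nm reflectance exceeds the threshold are kept as leaf."""
    return [[1 if v > threshold else 0 for v in row] for row in band_800nm]

band = [[0.60, 0.10],
        [0.45, 0.05]]
print(leaf_mask(band))  # [[1, 0], [1, 0]]
```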
6. The method for classifying and regressing hyperspectral images of leaves based on a multi-scale cascade convolutional neural network as claimed in claim 1, wherein S2 comprises dividing the preprocessed images into a training set, a validation set and a test set, wherein the method for dividing the training set, the validation set and the test set is a random extraction method; 380 samples are randomly extracted from the basil leaves to serve as a training set, 80 samples are taken as a verification set, and 80 samples are taken as a test set; 400 samples in pepper leaves are randomly extracted as a training set, 100 samples are taken as a verification set, 100 samples are taken as a test set, the data set is divided into a plurality of batches, and the number of batch processing samples is set to be 8.
7. The blade hyperspectral image classification and regression method based on the multi-scale cascaded convolutional neural network as claimed in claim 1, wherein the constructing of the spectral-spatial feature extraction structures of different scales in S3 is to embed a standard 3D convolution and an expanded 3D convolution in parallel in the same layer of the network, the two convolution kernels occupy 50% of the number of channels respectively, and then splice feature maps output by the different convolution kernels in channel dimensions after batch normalization, and perform ReLU nonlinear activation.
8. The blade hyperspectral image classification and regression method based on the multiscale cascade convolution neural network, characterized in that the expansion 3D convolution is realized by inserting D−1 zero-weight positions between adjacent weights in every row and column of the standard 3D convolution kernel, where D is the dilation factor, so that the receptive field of the convolution kernel is expanded without increasing network parameters or losing information; the convolution kernel of the standard 3D convolution is 3 × 3 × 3 (D = 1), with a receptive field of 3 × 3 × 3; the convolution kernel of the expansion 3D convolution is also 3 × 3 × 3; dilated kernel structures with D = 2, 3 and 4 were tested, corresponding to receptive fields of 5 × 5 × 5, 7 × 7 × 7 and 9 × 9 × 9 respectively, and convolution kernels with different receptive fields extract spectral-spatial features of different scales from the feature map;
the optimal multi-scale 3D-CNN network structure is the model embedding the 3 × 3 × 3 expansion 3D convolution kernel with D = 2.
9. The method for classifying and regressing vane hyperspectral images based on the multi-scale cascaded convolutional neural network as claimed in claim 1, wherein the S4 cascades the 1D-CNN network after the optimal multi-scale 3D-CNN network, and is an optimal multi-scale 3D-1D-CNN model obtained by performing model performance test of converting 3D-CNN into 1D-CNN at different positions of the multi-scale 3D ResNet-18 network;
the basic composition unit of the optimal multi-scale 3D-CNN network cascade 1D-CNN network comprises a convolution layer, a batch normalization layer, a ReLU nonlinear activation layer, a mean pooling layer and a full connection layer, wherein the whole network is divided into stages of 3D-CNN, 3D-CNN-to-1D-CNN and 1D-CNN from an input layer to an output layer;
in the 3D-CNN stage, a preprocessed hyperspectral image cube data block of dimension 120 × 120 × 140 is taken as input; first, a 3 × 3 × 3 convolution kernel with strides (2, 2, 1) extracts local spectral-spatial features and generates a 64-channel three-dimensional spectral image, which is then fed into the successive optimal multi-scale 3D-CNN network to realize extraction and fusion of spectral-spatial features of different scales;
in the 3D-CNN-to-1D-CNN conversion stage, the conversion is performed at the position in the multi-scale 3D ResNet-18 network for which testing showed the best conversion effect; the three-dimensional feature map output by the 3D-CNN, of size 256 × 15 × 15 × 35 (number of channels × spatial height × spatial width × number of bands), is converted into a one-dimensional feature of size 35 by using a 15 × 15 × 1 3D convolution kernel;
in the 1D-CNN stage, the depth spectral features are extracted by adopting continuous one-dimensional convolution kernels with the size of 3 to obtain 512 x 17 one-dimensional features, the extracted high-level abstract features are compressed and aggregated based on mean pooling to obtain features with higher expression capacity, the dimensionality is 512 x 1, and finally, a model predicted value is output through a full-connection layer, and compared with an optimal multi-scale 3D-CNN network which is not cascaded with 1D-CNN, the parameter quantity of the model is reduced by about 35.83%.
10. The method for blade hyperspectral image classification and regression based on the multi-scale cascaded convolutional neural network as claimed in claim 1, wherein S5 the benchmark 1D-CNN, the benchmark 2D-CNN, the benchmark 3D-CNN, and the multi-scale 3D-CNN comprise a convolutional layer, a batch normalization layer, a ReLU nonlinear activation layer, a mean pooling layer, and a full connection layer;
the reference 1D-CNN is a one-dimensional convolution neural network based on a residual error network framework ResNet, and feature extraction is carried out on spectral dimensions;
the reference 2D-CNN is a two-dimensional convolution neural network based on ResNet, and feature extraction is carried out on the spatial dimension;
the reference 3D-CNN is a three-dimensional convolution neural network based on ResNet, and performs combined feature extraction on the dimensionality of the spectrum + space;
the multi-scale 3D-CNN is the optimal multi-scale 3D-CNN network in S3, and can acquire spectrum space characteristics of different scales;
the model training adopts a mode of training from scratch from the beginning, and the initial network weight comes from a random initialization value;
the loss function of the regression model is mean square error, and the loss function of the classification model is cross entropy;
the model is optimized by a gradient descent method with momentum set to 0.9; the initial learning rate is varied over {1×10⁻², 1×10⁻³, 1×10⁻⁴, 1×10⁻⁵}, the learning rate is reduced by one order of magnitude every 30 training epochs, and the total number of training epochs is 80;
the evaluation indices of regression model performance are the coefficient of determination R² and the root mean square error RMSE;
the classification model was evaluated for F1-score and accuracy.
CN202210450076.9A 2022-04-27 2022-04-27 Blade hyperspectral image classification and regression method based on multi-scale cascade convolution neural network Pending CN114821321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210450076.9A CN114821321A (en) 2022-04-27 2022-04-27 Blade hyperspectral image classification and regression method based on multi-scale cascade convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210450076.9A CN114821321A (en) 2022-04-27 2022-04-27 Blade hyperspectral image classification and regression method based on multi-scale cascade convolution neural network

Publications (1)

Publication Number Publication Date
CN114821321A true CN114821321A (en) 2022-07-29

Family

ID=82506704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210450076.9A Pending CN114821321A (en) 2022-04-27 2022-04-27 Blade hyperspectral image classification and regression method based on multi-scale cascade convolution neural network

Country Status (1)

Country Link
CN (1) CN114821321A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187870A (en) * 2022-09-13 2022-10-14 浙江蓝景科技有限公司杭州分公司 Marine plastic waste material identification method and system, electronic equipment and storage medium
CN115187870B (en) * 2022-09-13 2023-01-03 浙江蓝景科技有限公司杭州分公司 Marine plastic waste material identification method and system, electronic equipment and storage medium
CN116026787A (en) * 2023-03-29 2023-04-28 湖南汇湘轩生物科技股份有限公司 Essence grade detection method and system
CN116561590A (en) * 2023-07-10 2023-08-08 之江实验室 Deep learning-based micro-nano optical fiber load size and position prediction method and device
CN116563649A (en) * 2023-07-10 2023-08-08 西南交通大学 Tensor mapping network-based hyperspectral image lightweight classification method and device
CN116563649B (en) * 2023-07-10 2023-09-08 西南交通大学 Tensor mapping network-based hyperspectral image lightweight classification method and device
CN116561590B (en) * 2023-07-10 2023-10-03 之江实验室 Deep learning-based micro-nano optical fiber load size and position prediction method and device
CN117079060A (en) * 2023-10-13 2023-11-17 之江实验室 Intelligent blade classification method and system based on photosynthetic signals
CN117079060B (en) * 2023-10-13 2024-03-12 之江实验室 Intelligent blade classification method and system based on photosynthetic signals

Similar Documents

Publication Publication Date Title
CN114821321A (en) Blade hyperspectral image classification and regression method based on multi-scale cascade convolution neural network
CN107576618B (en) Rice panicle blast detection method and system based on deep convolutional neural network
Yang et al. Diagnosis of plant cold damage based on hyperspectral imaging and convolutional neural network
CN110287944B (en) Crop pest monitoring method based on multispectral remote sensing image of deep learning
CN111666815B (en) Automatic garlic planting information extraction method based on Sentinel-2 remote sensing image
CN106372592B (en) A kind of winter wheat planting area calculation method based on winter wheat area index
CN102072885B (en) Machine vision-based paddy neck blast infection degree grading method
CN108458978B (en) Sensitive waveband and waveband combination optimal tree species multispectral remote sensing identification method
CN112931150B (en) Irrigation system and method based on spectral response of citrus canopy
Wenting et al. Detecting maize leaf water status by using digital RGB images
CN112084977B (en) Image and time characteristic fused apple phenological period automatic identification method
CN116543316B (en) Method for identifying turf in paddy field by utilizing multi-time-phase high-resolution satellite image
Zhang et al. High-throughput corn ear screening method based on two-pathway convolutional neural network
Xuefeng et al. Estimation of carbon and nitrogen contents in citrus canopy by low-altitude remote sensing
Ilniyaz et al. Leaf area index estimation of pergola-trained vineyards in arid regions using classical and deep learning methods based on UAV-based RGB images
Long et al. Recognition of drought stress state of tomato seedling based on chlorophyll fluorescence imaging
Nabwire et al. Estimation of cold stress, plant age, and number of leaves in watermelon plants using image analysis
Thapa et al. Assessment of water stress in vineyards using on-the-go hyperspectral imaging and machine learning algorithms
Zhu et al. Channel and band attention embedded 3D CNN for model development of hyperspectral image in object-scale analysis
Lin et al. Data-driven modeling for crop growth in plant factories
De Silva et al. Plant disease detection using deep learning on natural environment images
CN116151454A (en) Method and system for predicting yield of short-forest linalool essential oil by multispectral unmanned aerial vehicle
Calma et al. Cassava Disease Detection using MobileNetV3 Algorithm through Augmented Stem and Leaf Images
CN114494689A (en) Identification method of tomato drought stress
Ye et al. Inter-relationships between canopy features and fruit yield in citrus as detected by airborne multispectral imagery

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination