CN115965953B - Grain Variety Classification Method Based on Hyperspectral Imaging and Deep Learning - Google Patents

Grain Variety Classification Method Based on Hyperspectral Imaging and Deep Learning

Info

Publication number
CN115965953B
CN115965953B
Authority
CN
China
Prior art keywords
grain variety
variety classification
hyperspectral
grain
image
Prior art date
Legal status
Active
Application number
CN202310009003.0A
Other languages
Chinese (zh)
Other versions
CN115965953A (en)
Inventor
于爽
战永泽
王忠杰
王泽宇
刘明义
胡睿晗
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN202310009003.0A
Publication of CN115965953A
Application granted
Publication of CN115965953B
Active legal status
Anticipated expiration


Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a grain variety classification method based on hyperspectral imaging and deep learning, which belongs to the field of grain variety classification and comprises the following steps: acquiring a multi-channel hyperspectral image of grain seeds, preprocessing the multi-channel hyperspectral image to obtain a preprocessed hyperspectral image dataset, and obtaining a first feature map based on the preprocessed hyperspectral image dataset, wherein the preprocessed hyperspectral image dataset comprises a plurality of stitched images; constructing a grain variety classification network model that comprises a grain variety classification module; training the grain variety classification network model under the guidance of the grain variety classification module to obtain an optimized grain variety classification network model; and inputting the stitched images into the optimized grain variety classification network model to obtain grain variety classification results. The application provides a simple, efficient, non-destructive, economical and automatic grain variety classification method using hyperspectral imaging and deep learning technologies.

Description

Grain variety classification method based on hyperspectral imaging and deep learning
Technical Field
The application belongs to the field of grain variety classification, and particularly relates to a grain variety classification method based on hyperspectral imaging and deep learning.
Background
Seed variety classification or identification originated in the middle of the 19th century. Since then, a series of conventional rice seed variety classification methods has appeared, mainly including morphological identification, physiological and biochemical identification, and molecular biological identification; these methods each have advantages and differ greatly in practical application. With the development of hybridization technology, more and more overlapping characteristics appear among different crop varieties, which poses great challenges to traditional seed variety identification methods.
Compared with traditional methods, emerging machine learning methods can learn from large amounts of data quickly and reliably, and offer the advantages of being non-destructive and low-cost. Although most machine learning methods perform well, they rely heavily on hand-crafted features designed for a particular task, which limits their applicability in complex or difficult situations. Furthermore, such features may not be discriminative enough to capture subtle variations between different varieties or large variations within the same variety. Therefore, automatic extraction of more discriminative features is considered critical for hyperspectral image classification. Compared with hand-crafted approaches, deep learning methods can effectively and automatically extract discriminative features from hyperspectral images. Currently, hyperspectral image classification methods based on deep networks can generally be divided into three types: spectral-feature networks, spatial-feature networks and spectral-spatial-feature networks. Overall, models that jointly consider spatial and spectral information achieve better classification performance. Recently, many researchers have attempted to introduce attention mechanisms into deep neural networks, greatly improving their performance. Therefore, introducing an attention mechanism into the deep learning model is of great importance.
Since the acquisition of hyperspectral data is expensive and time-consuming, the number of images finally acquired is often insufficient to support network training. Data augmentation, typically realized by creating new samples from existing ones, is considered a viable way to address this problem.
To address these problems, the application provides a grain variety classification method based on hyperspectral imaging and deep learning technologies.
Disclosure of Invention
The application aims to provide a grain variety classification method based on hyperspectral imaging and deep learning, so as to solve the problems in the prior art.
In order to achieve the above purpose, the application provides a grain variety classification method based on hyperspectral imaging and deep learning, which comprises the following steps:
acquiring a multi-channel hyperspectral image of grain seeds, preprocessing the multi-channel hyperspectral image to obtain a preprocessed hyperspectral image dataset, and obtaining a first feature map based on the preprocessed hyperspectral image dataset; wherein the pre-processed hyperspectral image dataset comprises a plurality of stitched images;
constructing a grain variety classification network model, wherein the grain variety classification network model comprises a grain variety classification module; training the grain variety classification network model based on the grain variety classification module to obtain a trained grain variety classification model;
and inputting the stitched images into the trained grain variety classification model to obtain grain variety classification results.
Optionally, before preprocessing the multi-channel hyperspectral image,
spectral calibration and baseline correction of the spectral data are performed on the multi-channel hyperspectral image based on a white polytetrafluoroethylene reference plate and a multiplicative scatter correction algorithm.
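As a non-limiting illustration, these two corrections can be sketched as follows; the dark-current reference and the per-spectrum regression used for multiplicative scatter correction are standard practice assumed here, not details given in the application.

```python
import numpy as np

def calibrate(raw: np.ndarray, white: np.ndarray, dark: np.ndarray) -> np.ndarray:
    """Convert raw intensities to relative reflectance using white/dark reference images."""
    return (raw - dark) / (white - dark + 1e-8)

def msc(spectra: np.ndarray) -> np.ndarray:
    """Multiplicative scatter correction: regress each spectrum on the mean spectrum."""
    mean = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        a, b = np.polyfit(mean, s, deg=1)   # fit s ≈ a * mean + b
        corrected[i] = (s - b) / a
    return corrected
```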
Optionally, the process of preprocessing the multi-channel hyperspectral image includes:
decomposing the multi-channel hyperspectral image to obtain a plurality of single-channel hyperspectral images, and discarding images corresponding to unclear spectrum wavebands based on the plurality of single-channel hyperspectral images to obtain a preprocessed hyperspectral image;
dividing the preprocessed hyperspectral image into a plurality of image subsets according to category labels, wherein each image subset comprises a plurality of single-channel hyperspectral images;
and cutting and stitching a plurality of the single-channel hyperspectral images in each image subset to obtain a plurality of stitched images, wherein the sub-images of each stitched image come from the same waveband.
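A minimal sketch of the band selection and per-variety grouping described above is given below, assuming the hyperspectral cube is stored as an H x W x B numpy array; the band indices to keep are an assumption chosen by inspection, not values fixed by the application.

```python
import numpy as np

def split_bands(cube: np.ndarray, keep_bands: list) -> list:
    """Decompose an H x W x B hyperspectral cube into single-channel images,
    discarding the bands whose spectral response is judged unclear."""
    return [cube[:, :, b] for b in keep_bands]

def group_by_label(images: list, labels: list) -> dict:
    """Group single-channel images into per-variety subsets by their category label."""
    subsets = {}
    for img, lab in zip(images, labels):
        subsets.setdefault(lab, []).append(img)
    return subsets
```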
Optionally, the process of inputting the first feature map into the grain variety classification module to obtain the weighted feature map includes:
extracting texture information of the first feature map based on a plurality of edge detection operators to obtain gradient maps corresponding to the edge detection operators;
obtaining a second feature map based on a plurality of gradient maps;
obtaining a weight graph based on the second feature graph;
and obtaining a weighted feature map based on the first feature map and the weight map.
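The four steps above can be summarised by the following minimal functional sketch; the sigmoid used to turn the encoded map into weights, the function names, and the pyramid encoder (described in the following paragraphs) are assumptions rather than details fixed by the application.

```python
import torch

def gradient_domain_attention(first_feat, edge_ops, fuse_conv, pyramid):
    """first_feat: B x C x H x W feature map taken from a backbone layer."""
    grads = [op(first_feat) for op in edge_ops]        # one gradient map per edge operator
    second_feat = fuse_conv(torch.cat(grads, dim=1))   # concatenate, then conv back to C channels
    weight_map = torch.sigmoid(pyramid(second_feat))   # encode with a pyramid module -> weight map
    return first_feat * weight_map                     # weighted feature map
```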
Optionally, the grain variety classification module adopts a hybrid gradient domain attention module with a pyramid structure;
wherein the pyramid structured hybrid gradient domain attention module comprises a pyramid module.
Optionally, extracting texture information of the first feature map based on a plurality of edge detection operators to obtain gradient maps corresponding to the edge detection operators, cascading a plurality of gradient maps, and then inputting the gradient maps into a convolution layer to perform feature extraction and channel recovery to obtain the second feature map;
wherein the plurality of edge detection operators includes Sobel, Scharr, Laplace, Roberts and Prewitt.
Optionally, encoding the second feature map based on the pyramid module to obtain a weight map; wherein, the pyramid module includes: seven convolution layers, three max pooling layers, and three deconvolution layers; the number of convolution kernels in each convolution layer is equal to the number of channels of the second feature map, and the kernel size is 3×3; the kernel size of the max pooling layer is set to 2×2, and the padding is set to 1; the kernel size of the deconvolution layer is set to 2×2 and the stride is set to 2.
An important feature of the pyramid module is that it effectively extracts detail features by encoding the gradient maps extracted by the five classical image operators and reassigning appropriate weights to the feature maps, while reducing noise and interference information.
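A PyTorch sketch consistent with the layer counts and kernel sizes stated above is shown below; the exact ordering of layers, the activation functions, and the final resize back to the input resolution are assumptions, since the application fixes only the counts and kernel sizes.

```python
import torch.nn as nn
import torch.nn.functional as F

class PyramidModule(nn.Module):
    """Seven 3x3 conv layers, three 2x2 max-pooling layers (padding 1),
    and three 2x2 stride-2 deconvolution layers."""
    def __init__(self, channels: int):
        super().__init__()
        conv = lambda: nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.enc = nn.ModuleList([conv() for _ in range(4)])   # 4 of the 7 conv layers
        self.dec = nn.ModuleList([conv() for _ in range(3)])   # remaining 3 conv layers
        self.pool = nn.MaxPool2d(kernel_size=2, padding=1)
        self.up = nn.ModuleList([nn.ConvTranspose2d(channels, channels, 2, stride=2) for _ in range(3)])

    def forward(self, x):
        h, w = x.shape[-2:]
        x = self.enc[0](x)
        for i in range(3):                     # encoder: pool, then conv, three times
            x = self.enc[i + 1](self.pool(x))
        for i in range(3):                     # decoder: deconv, then conv, three times
            x = self.dec[i](self.up[i](x))
        return F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
```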
The application has the technical effects that:
the application adopts the design of introducing external priori knowledge to the attention mechanism, so that the model learning process is greatly influenced towards the beneficial direction. According to the application, the learning weight can be redistributed to the feature map in the network learning process, so that the network is more concerned with the edge and texture information of grain seeds in the training process.
The application realizes a simple, efficient, non-destructive and automatic grain variety identification method based on hyperspectral imaging and deep learning technology, enriches the current grain variety identification technology system, helps to further strengthen supervision of the seed market, and safeguards the legitimate rights and interests of breeding enterprises and rice farmers.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application. In the drawings:
FIG. 1 is a flow chart of a method in an embodiment of the application;
FIG. 2 is a schematic diagram of the data enhancement strategy of step two in an embodiment of the present application;
FIG. 3 is a network architecture diagram of a step four hybrid gradient domain attention module in an embodiment of the present application;
FIG. 4 is a flow chart of a verification experiment in an embodiment of the application.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that shown herein.
Example 1
As shown in fig. 1-3, the present embodiment provides a grain variety classification method based on hyperspectral imaging and deep learning, specifically provides a method for classifying rice seeds based on hyperspectral imaging and deep learning technology, comprising the following steps:
step one, acquiring a hyperspectral image dataset of rice seeds, preprocessing data, converting a multi-channel hyperspectral image into a series of single-channel images, and discarding partial unclear images according to priori knowledge.
Step two, as shown in fig. 2, according to the data enhancement strategy, each hyperspectral image containing 200 seeds is randomly cropped into 260 x 260 image blocks, and every four image blocks are then randomly stitched into one image. This strategy greatly alleviates the problem of an insufficient training set caused by the characteristics of hyperspectral imaging.
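A sketch of this data enhancement strategy is given below, assuming each single-channel seed image is a numpy array; the crop positions and the 2 x 2 stitching layout are assumptions consistent with the 260 x 260 and 520 x 520 sizes mentioned in the text.

```python
import numpy as np

def random_block(img: np.ndarray, size: int = 260) -> np.ndarray:
    """Randomly crop a size x size block from a single-channel seed image."""
    h, w = img.shape[:2]
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    return img[top:top + size, left:left + size]

def stitch_four(blocks: list) -> np.ndarray:
    """Stitch four 260 x 260 blocks into one 520 x 520 training image."""
    top_row = np.concatenate(blocks[:2], axis=1)
    bottom_row = np.concatenate(blocks[2:], axis=1)
    return np.concatenate([top_row, bottom_row], axis=0)
```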
Step three, introducing edge detection operators. A feature map of size B×C×H×W is received as input from a layer of the network; edges are detected and retained using the Sobel, Scharr, Laplace, Roberts and Prewitt edge detection operators, and texture information is extracted.
(1) Sobel operator. It is mainly used as a discrete differential operator for edge detection. The approximate calculation rules for the gradient magnitude and direction are as follows:
G = sqrt(Gx^2 + Gy^2), Θ = arctan(Gy / Gx)
where Gx and Gy represent images with horizontal and vertical edge detection, respectively. The Sobel operator is more suitable for images with larger gray gradients and larger noise.
(2) Scharr operator. By increasing the spacing between pixel values, weak edge information is effectively extracted, which is one implementation of enhancing the Sobel operator difference. The only difference between the Scharr operator and the Sobel operator is their different convolution kernels.
(3) Laplace operator. The operator is isotropic and can sharpen boundaries and lines in any direction. It is typically used to determine whether an edge pixel lies in a bright or dark region of the image. It is a second-derivative operator that produces steep zero crossings at edges, defined as follows:
∇²f = ∂²f/∂x² + ∂²f/∂y²
where f represents a digital image, x represents a horizontal direction, and y represents a vertical direction.
(4) Roberts operator. Its purpose is to detect edges using a local difference operator, defined as follows:
g(x, y) = |f(x, y) - f(x+1, y+1)| + |f(x+1, y) - f(x, y+1)|
the operator has better detection effect on the vertical edge than the inclined edge.
(5) The Prewitt operator. The operator performs a neighborhood convolution on the image in image space using two direction templates. One direction template is used to detect horizontal edges and the other is used to detect vertical edges.
The form is defined as follows:
where d refers to the first derivative operation.
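For illustration, the five operators described above can be applied channel-wise to a B×C×H×W feature map as fixed convolution kernels, as sketched below; only the horizontal kernel of each directional pair is listed, and the textbook kernel values shown here are assumptions, since the application does not spell them out.

```python
import torch
import torch.nn.functional as F

KERNELS_3x3 = {
    "sobel":   [[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
    "scharr":  [[-3., 0., 3.], [-10., 0., 10.], [-3., 0., 3.]],
    "laplace": [[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
    "prewitt": [[-1., 0., 1.], [-1., 0., 1.], [-1., 0., 1.]],
}
ROBERTS_2x2 = [[1., 0.], [0., -1.]]  # 2x2 diagonal difference; needs asymmetric padding to keep size

def gradient_map(x: torch.Tensor, kernel) -> torch.Tensor:
    """Apply one fixed 3x3 kernel to every channel of x (B x C x H x W)."""
    c = x.shape[1]
    k = torch.tensor(kernel, dtype=x.dtype, device=x.device)
    weight = k.expand(c, 1, *k.shape).clone()   # depthwise: one copy of the kernel per channel
    return F.conv2d(x, weight, padding=k.shape[-1] // 2, groups=c)
```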
Step four, a pyramid-structured hybrid gradient domain attention module is proposed. The five gradient maps G1, G2, …, G5 obtained in step three, each with C channels, are concatenated to obtain an output with 5C channels, which is input to a convolution layer for feature extraction and channel recovery, generating a feature map with C channels, denoted G. Inspired by multi-resolution techniques, a pyramid module is used to encode the feature map G. The module consists of seven convolution layers, three max pooling layers and three deconvolution layers. The network architecture of the module is shown in fig. 3.
An important feature of the attention module is that it reassigns appropriate weights to the feature maps by encoding the gradient maps extracted by the five classical image operators, effectively extracting detail features while reducing noise and interference information. Since the input and weighted feature maps have the same size and an element-wise multiplication is used, the module can be regarded as an attention module based on a hybrid gradient domain design, and it can be combined with multiple networks at low cost to achieve better classification performance.
Step five, after the edge detection operators and the hybrid gradient domain attention module have extracted the edge and texture features, the weight map output by the module is multiplied with the input feature map, and the result is further trained by the backbone network to obtain the final classification result.
Example two
The embodiment provides a verification test of a rice variety classification method based on hyperspectral imaging and deep learning technology, as shown in fig. 4, comprising:
step one, selecting a data set. The rice seeds selected are of six major classes. Each type of rice seed contained 50 hyperspectral images, of which 35 images were randomly selected for training and the remaining 15 images were used for testing. The data enhancement method provided by the application is respectively adopted on a training set and a test set, and 7000 training images and 3000 test images with the size of 520 multiplied by 520 of each type of rice seeds are finally obtained.
Step two, experimental setup. The experiments are implemented in the PyTorch 1.8.0 framework on a Windows 10 machine, and trained and tested on a platform equipped with an 11th-generation Intel(R) Core(TM) i7-11700K @ 3.60 GHz CPU, an NVIDIA GeForce RTX 3060 Ti (12 GB) GPU and 32 GB RAM. CUDA and CUDNN are also used for acceleration. Furthermore, the SGD optimizer is used with an initial learning rate of 0.04, and cross entropy is used as the loss function.
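A minimal training-loop sketch matching the reported setup (SGD, initial learning rate 0.04, cross-entropy loss) is given below; the model, data loader, epoch count and device are placeholders, not values taken from the application.

```python
import torch
import torch.nn as nn

def train(model, train_loader, epochs: int = 100, device: str = "cuda"):
    model = model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.04)   # initial learning rate 0.04
    criterion = nn.CrossEntropyLoss()                          # cross-entropy loss
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```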
Step three, evaluation indices.
Classification accuracy of each class of rice seed; overall classification accuracy; Kappa coefficient; Macro-F1.
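These four indices can be computed as in the following sketch, assuming integer class labels and predictions collected over the whole test set; the use of scikit-learn is an assumption made for illustration.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix, f1_score

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    cm = confusion_matrix(y_true, y_pred)
    per_class_acc = cm.diagonal() / cm.sum(axis=1)              # accuracy of each rice seed class
    return {
        "per_class_accuracy": per_class_acc,
        "overall_accuracy": accuracy_score(y_true, y_pred),     # OA
        "kappa": cohen_kappa_score(y_true, y_pred),             # Kappa coefficient
        "macro_f1": f1_score(y_true, y_pred, average="macro"),  # Macro-F1
    }
```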
Step four, evaluating the classification results. Based on the augmented rice seed hyperspectral image set generated by the proposed data enhancement method, seven classical network structures are used to quantitatively evaluate the effectiveness of the proposed hybrid gradient domain attention module and the edge detection operators for rice seed classification. Comparing the classification results of the seven network structures with and without the attention module and the edge detection operators, including the classification accuracy (AA) of each class of rice seed, the overall classification accuracy (OA), Macro-F1 and the Kappa coefficient (Kappa), each baseline model is found to improve remarkably in overall accuracy, Macro-F1 and Kappa coefficient on the rice seed hyperspectral image set obtained herein after the edge detection operators are introduced and the hybrid gradient domain attention module is added.
The present application is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present application are intended to be included in the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.

Claims (4)

1. A grain variety classification method based on hyperspectral imaging and deep learning is characterized by comprising the following steps:
acquiring a multi-channel hyperspectral image of a grain seed, preprocessing the multi-channel hyperspectral image to obtain a preprocessed hyperspectral image dataset, and obtaining a first characteristic image based on the preprocessed hyperspectral image dataset; wherein the pre-processed hyperspectral image dataset comprises a plurality of stitched images;
constructing a grain variety classification network model, wherein the grain variety classification network model comprises a grain variety classification module; training the grain variety classification network model based on the grain variety classification module to obtain a trained grain variety classification model;
inputting the spliced images into the trained grain variety classification model to obtain grain variety classification results;
before preprocessing the multi-channel hyperspectral image,
performing spectral calibration and baseline correction of the spectral data on the multi-channel hyperspectral image based on a white polytetrafluoroethylene reference plate and a multiplicative scatter correction algorithm;
the process of preprocessing the multi-channel hyperspectral image comprises the following steps:
decomposing the multi-channel hyperspectral image to obtain a plurality of single-channel hyperspectral images, and discarding images corresponding to unclear spectrum wavebands based on the plurality of single-channel hyperspectral images to obtain a preprocessed hyperspectral image;
dividing the preprocessed hyperspectral image into a plurality of image subsets according to category labels, wherein each image subset comprises a plurality of single-channel hyperspectral images, each grain seed comprises a plurality of varieties, and the category labels comprise identification labels corresponding to the varieties;
cutting and splicing a plurality of single-channel hyperspectral images in each image subset to obtain a plurality of spliced images, wherein sub-images of the spliced images come from the same wave band;
the process of acquiring the spliced image comprises the following steps: randomly dividing each hyperspectral image with 200 seeds into 260 x 260 image blocks, and randomly splicing every four image blocks into a spliced image;
the process of guiding training the grain variety classification network model based on the grain variety classification module comprises the following steps:
extracting texture information of the first feature map based on a plurality of edge detection operators to obtain gradient maps corresponding to the edge detection operators;
obtaining a second feature map based on a plurality of gradient maps;
the grain variety classification module redistributes learning weights to the second feature images, and extracts detail features to obtain weight images;
obtaining a weighted feature map based on the first feature map and the weight map;
obtaining a prediction result based on the weighted feature map;
and obtaining a loss value based on the predicted result and the real result, and training a grain variety classification network model based on the loss value to obtain a trained grain variety classification model.
2. The grain variety classification method based on hyperspectral imaging and deep learning of claim 1, wherein the grain variety classification module adopts a hybrid gradient domain attention module with a pyramid structure;
wherein the pyramid structured hybrid gradient domain attention module comprises a pyramid module.
3. The grain variety classification method based on hyperspectral imaging and deep learning according to claim 1, wherein texture information of the first feature map is extracted based on a plurality of edge detection operators to obtain gradient maps corresponding to the edge detection operators, and a plurality of gradient maps are input into a convolution layer for feature extraction and channel recovery after being cascaded to obtain the second feature map;
wherein the plurality of edge detection operators includes Sobel, Scharr, Laplace, Roberts and Prewitt.
4. The grain variety classification method based on hyperspectral imaging and deep learning as claimed in claim 2, wherein the second feature map is encoded based on the pyramid module to obtain a weight map;
wherein, the pyramid module includes: seven convolution layers, three max pooling layers, and three deconvolution layers; the number of convolution kernels in each convolution layer is equal to the number of channels of the second feature map, and the kernel size is 3×3; the kernel size of the max pooling layer is set to 2×2, and the padding is set to 1; the kernel size of the deconvolution layer is set to 2×2 and the stride is set to 2.
CN202310009003.0A 2023-01-04 2023-01-04 Grain Variety Classification Method Based on Hyperspectral Imaging and Deep Learning Active CN115965953B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310009003.0A CN115965953B (en) 2023-01-04 2023-01-04 Grain Variety Classification Method Based on Hyperspectral Imaging and Deep Learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310009003.0A CN115965953B (en) 2023-01-04 2023-01-04 Grain Variety Classification Method Based on Hyperspectral Imaging and Deep Learning

Publications (2)

Publication Number Publication Date
CN115965953A CN115965953A (en) 2023-04-14
CN115965953B (en) 2023-08-22

Family

ID=87357774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310009003.0A Active CN115965953B (en) 2023-01-04 2023-01-04 Grain Variety Classification Method Based on Hyperspectral Imaging and Deep Learning

Country Status (1)

Country Link
CN (1) CN115965953B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852227A (en) * 2019-11-04 2020-02-28 中国科学院遥感与数字地球研究所 Hyperspectral image deep learning classification method, device, equipment and storage medium
CN111914907A (en) * 2020-07-13 2020-11-10 河海大学 Hyperspectral image classification method based on deep learning space-spectrum combined network
WO2022073452A1 (en) * 2020-10-07 2022-04-14 武汉大学 Hyperspectral remote sensing image classification method based on self-attention context network
CN113705526A (en) * 2021-09-07 2021-11-26 安徽大学 Hyperspectral remote sensing image classification method
CN114882249A (en) * 2022-03-28 2022-08-09 吉林工程技术师范学院 Hyperspectrum-based seed variety quality detection method and system
CN115564996A (en) * 2022-09-29 2023-01-03 安徽大学 Hyperspectral remote sensing image classification method based on attention union network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SSCDenseNet: A Hyperspectral Image Classification Algorithm Based on a Spatial-Spectral Convolutional Dense Network; Liu Qichao; Xiao Liang; Liu Fang; Xu Jinhuan; Acta Electronica Sinica (04); full text *

Also Published As

Publication number Publication date
CN115965953A (en) 2023-04-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant