CN113158980A - Tea leaf classification method based on hyperspectral image and deep learning - Google Patents


Info

Publication number
CN113158980A
Authority
CN
China
Prior art keywords
tea
image
network
classification
accuracy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110534790.1A
Other languages
Chinese (zh)
Inventor
Kang Zhiliang (康志亮)
Hu Yan (胡妍)
Wang Peng (王鹏)
Sun Jie (孙杰)
Geng Jinping (耿金平)
Luo Xiong (罗雄)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Agricultural University
Original Assignee
Sichuan Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Agricultural University filed Critical Sichuan Agricultural University
Priority to CN202110534790.1A
Publication of CN113158980A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a tea leaf classification method based on hyperspectral images and deep learning, which comprises the following steps: step 1, acquiring hyperspectral images of tea to build a data set; step 2, expanding the data set through data augmentation; step 3, inputting the pictures into a bilinear neural network for local-region detection and feature extraction in fine-grained image classification; step 4, combining the bilinear neural network with an attention mechanism, and adding a fully connected layer for training; and step 5, computing accuracy, precision and recall as evaluation metrics. The invention solves problems of existing detection processes, such as long measurement times, reliance on large amounts of chemical reagents, waste and pollution, and achieves nondestructive detection of tea quality.

Description

Tea leaf classification method based on hyperspectral image and deep learning
Technical Field
The invention belongs to the technical field of tea detection, and particularly relates to a tea classification method based on hyperspectral images and deep learning.
Background
China, the homeland of tea, has a long-standing tea culture; tea drinking dates back to the era of Shennong. Tea is a green, healthy beverage rich in amino acids, minerals, vitamins and other components beneficial to human health, and research has confirmed that it has certain health-care and pharmacological effects, so long-term tea drinking benefits health and tea is increasingly favored. As the tea-drinking population grows and living standards improve, the demand for famous teas keeps rising. In practice, the economic value of a famous tea can be several times that of an ordinary tea, so a common way for unscrupulous vendors to seek exorbitant profits is to pass off cheap tea of similar appearance as expensive tea, which ordinary consumers can hardly distinguish with the naked eye. Moreover, when cheap tea of similar appearance is blended into tea of high economic value, even experienced tea drinkers cannot reliably identify the adulterated product. At present such adulteration exists to varying degrees in both online and offline tea sales, so rapid, nondestructive detection of tea adulteration not only protects consumers' interests but also helps to regulate the tea market.
Among the many teas on the market, Tieguanyin is similar in appearance to several oolong teas such as Benshan, Jinguanyin, Huangjingui, Maoxie (hairy crab) tea, Dayewulong (big-leaf oolong) and Meizhan, yet the economic value of these teas differs greatly from that of Tieguanyin. Since other teas are often passed off as Tieguanyin on the market, research on distinguishing such similar teas is of great significance.
Technical solution of prior art I
Current approaches to judging tea quality include the electronic tongue and the electronic nose, which analyze, identify and evaluate the samples to be tested by simulating human taste and smell respectively. An electronic nose is composed of functional components such as a gas sensor array, signal processing and a pattern recognition system; it comprehensively evaluates the overall information of a gas through odor. In tea detection it acquires the response patterns of the tea's aroma components, and different varieties and grades are discriminated by analyzing and identifying these patterns with pattern recognition methods. An electronic tongue uses lipid membranes as taste sensors, detects taste substances in a manner similar to human taste perception, outputs a signal pattern related to the sample, and obtains an overall evaluation of the sample's taste characteristics through pattern recognition. For distinguishing tea categories, the electronic nose collects odor information to identify tea from the aspect of smell, while the electronic tongue evaluates the basic taste and aftertaste of the tea. Combined electronic-nose and electronic-tongue detection is now also used: aftertaste information is used to identify tea quality, PCA (principal component analysis) and LDA (linear discriminant analysis) dimensionality reduction are compared on the introduced aftertaste information, and an SVM (support vector machine) prediction model is built to grade the tea. The electronic nose is likewise used to predict tea flavor rapidly from odor information, since tea flavor is affected by flavor-producing substances, which in turn affect the volatile gas content of the tea.
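The grading pipeline sketched above, PCA dimensionality reduction followed by an SVM prediction model, can be illustrated with a short scikit-learn snippet. This is a minimal sketch on toy data, not the actual sensor pipeline: the sample count, the 16 sensor channels and the three grades are all assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((120, 16))      # toy data: 120 samples x 16 e-nose/e-tongue channels
y = rng.integers(0, 3, 120)    # toy labels: three hypothetical quality grades

# Reduce the sensor readings with PCA, then grade with an RBF-kernel SVM.
clf = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))         # accuracy on the toy training data
```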
However, the electronic nose and electronic tongue detect only macroscopic information of the tea and neglect the influence of the many chemical components in tea on its quality. Tea detection with these devices requires brewing the tea to obtain an infusion, which destroys the original structure of the sample, so the approach is unsuitable for practical application. In addition, problems such as the low sensitivity and selectivity of the sensors, the lack of complete detection databases and the lack of matching data analysis methods further limit the practical application of electronic noses and tongues. In tea classification, adulterated tea is formed by mixing the genuine tea with other teas, so it is difficult to distinguish genuine from adulterated tea by analyzing adulteration with an electronic nose or electronic tongue.
Technical solution of prior art II
At present, quality grading of tea is carried out mainly by combining sensory evaluation with the measurement of physicochemical indexes. In sensory evaluation, experts comprehensively assess the tea according to five indexes: appearance, liquor color, aroma, taste and infused leaves. In physicochemical measurement, indexes such as moisture, theanine and tea polyphenol content are measured against national standards to check compliance. The main measurement methods include headspace solid-phase microextraction, gas chromatography-mass spectrometry, liquid chromatography-mass spectrometry and high-performance liquid chromatography.
Both sensory evaluation and physicochemical measurement have inevitable drawbacks. Sensory evaluation is highly subjective, while physicochemical measurement destroys many samples and wastes resources. The measurement procedures are also complicated and time-consuming, which introduces human error and lowers the reliability of the results. Because measurement is slow, real-time monitoring of the tea-making process is impossible, so the practical guidance for production is limited. Physicochemical measurement is moreover expensive, which further restricts its application to determining tea constituents. Although these methods can achieve high detection precision, they destroy the test samples and require trained professionals, making them difficult to popularize.
In recent years, the development of spectroscopic technology has brought great breakthroughs in nondestructive tea detection; methods include near-infrared spectroscopy, hyperspectral imaging, Raman spectroscopy and Fourier-transform spectroscopy. Most of these methods analyze samples using spectral data alone, and few classification methods exploit computer-based deep learning.
Spectroscopic techniques also have limitations. Because most data are acquired by point-light-source sampling, the collected information is relatively limited, which affects the final detection result. In tea detection, selected spectral bands are often used for analysis, so some important bands are ignored and the detection performance is hard to guarantee. On the deep-learning side, the quality of the data set is often not guaranteed: most data sets consist of pictures collected from the Internet, without expert verification and with insufficient image clarity. Seeking a reliable means of data acquisition is therefore critical.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing a tea leaf classification method based on hyperspectral images and deep learning, which solves problems of existing detection processes such as long measurement times, reliance on large amounts of chemical reagents, waste and pollution, and achieves nondestructive detection of tea quality.
The invention adopts the following technical scheme:
a tea leaf classification method based on hyperspectral images and deep learning comprises the following steps:
step 1, acquiring hyperspectral images of tea to build a data set;
step 2, expanding the data set through data augmentation;
step 3, inputting the pictures into a bilinear neural network for local-region detection and feature extraction in fine-grained image classification;
step 4, combining the bilinear neural network with an attention mechanism, and adding a fully connected layer for training;
and step 5, computing accuracy, precision and recall as evaluation metrics.
A further technical scheme is that step 1 comprises: obtaining a plurality of tea varieties, the tea samples being held in 100 mm × 20 mm glass Petri dishes; each sample of about 10 g is spread randomly and evenly over the bottom of the dish to a depth of about 1 cm, and 50 samples are collected for each variety.
A further technical scheme is that step 2 comprises: the original picture set contains 350 pictures of 7 different tea varieties, and training a convolutional neural network requires a large amount of image data to prevent overfitting; therefore the ImageDataGenerator interface built into TensorFlow 2.0 is used to augment the input image data. Random rotation, translation and shearing operations are applied to the pictures, and the results are merged to obtain a data set enlarged about 6-fold. Feature layers are extracted from the augmented pictures with the EfficientNetB4 network architecture; the model is fine-tuned, the fully connected layer of the original model is modified, and the Softmax layer is changed to 7 outputs according to the required number of classes. Depending on the model, different layers are frozen, and pre-training uses ImageNet weights by way of transfer learning.
Further, step 3 comprises: the EfficientNet network jointly optimizes network width, network depth and input image resolution to improve its metrics, greatly reducing model parameters and computation while matching the accuracy of existing classification networks;
the existing 8 EfficientNet-B0-B7 models are selected by comprehensively considering parameters and accuracy of comparison models, and pictures are input into a bilinear neural network to perform detection and feature extraction tasks of local regions in the fine-grained image classification process; a Bilinear convolutional neural network (Bilinear CNN) is characterized in that A, B two convolutional neural networks are used for extracting two features of each position of an image, then outer product multiplication is carried out, finally classification is carried out through a classification layer, a model is coordinated with a CNNA network and a CNNB network, the CNNA is used for positioning the feature part of the image, the CNNB is used for carrying out feature extraction on the feature region detected by the CNNA, and the task of detecting and extracting the features of the local region in the fine-grained image classification process is completed.
Further, step 4 comprises: combining the result of the bilinear convolutional neural network with a CBAM attention module; a weighted result is first obtained through the channel attention module, an extracted result is then obtained through the spatial attention module, and the final weighting is applied.
Further, step 5 comprises: the quality of the model is evaluated using accuracy, precision and recall, where precision is the proportion of correctly detected objects among all detected objects and recall is the proportion of correctly detected objects among all actual positive samples, as follows:
Accuracy = (TP + TN) / (TP + FP + TN + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)
where true positives (TP) and true negatives (TN) denote correctly classified images, and false positives (FP) and false negatives (FN) denote misclassified images.
Accuracy: Accuracy = (TP + TN)/(TP + FP + TN + FN), the probability that a sample is correctly classified; different thresholds T can be used.
Precision: Precision = TP/(TP + FP), the fraction of samples recalled as positive that are truly positive.
Recall: Recall = TP/(TP + FN), the fraction of truly positive samples that are recalled.
F1 is an index that considers precision and recall together; it is the harmonic mean of the two.
The invention has the beneficial effects that:
On the one hand, the method goes beyond traditional tea classification based on hyperspectral imaging alone: hyperspectral images are acquired and processed from the viewpoints of feature selection and feature extraction, and a true and reliable data set is built from the hyperspectral images, so the acquired data are more comprehensive. On the other hand, the advantages of computer deep learning are exploited to select the optimal network model and parameters: the invention adopts a bilinear convolutional neural network combined with an attention mechanism, and EfficientNetB4 outperforms other models such as VGG16 and ResNet50 in accuracy, parameter count and model size, so the network model is better optimized.
The two aspects complement each other. Since no publicly available tea picture data set currently exists, the pictures obtained by hyperspectral imaging offer higher reliability and safety; the resulting data set is used to train the network model and is enlarged by data augmentation, which better guarantees its quality. Various network model structures are compared, and the optimal model is selected for data processing and analysis to obtain the classification result. The method is thus more reliable in its choice of data set, greatly improves model accuracy, shortens training time, and verifies the reliability of the method for tea classification.
Drawings
FIG. 1 is a flow chart of the steps of the present invention;
FIG. 2 is the network structure of EfficientNet-B4;
FIG. 3 is a schematic diagram of a bilinear convolutional neural network structure;
FIG. 4 is a diagram of the bilinear model combined with attention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described below clearly and completely, and it is obvious that the described embodiments are some, not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the tea leaf classification method based on hyperspectral image and deep learning of the invention comprises the following steps:
Hyperspectral images of Tieguanyin, Jinguanyin, Benshan, Huangjingui, Maoxie (hairy crab) tea, Dayewulong (big-leaf oolong) and Meizhan tea are collected. The acquired data set is expanded by fusing pictures obtained through operations such as rotation and translation. From the expanded pictures, feature layers are extracted with the EfficientNetB4 network architecture, and the output of the convolutional layers is combined with CBAM: a weighted result is obtained through the channel attention module, an extracted result is then obtained through the spatial attention module, and the final weighting is applied. The extraction result produced by CBAM is multiplied element-wise with the feature layer, a fully connected layer is added for classification, and the final classification result is obtained.
Example 1
Tieguanyin, Jinguanyin, Benshan, Huangjingui, Maoxie (hairy crab) tea, Dayewulong and Meizhan teas of similar appearance are classified by deep learning.
Acquiring hyperspectral image of tea
Tieguanyin, Jinguanyin, Benshan, Huangjingui, Maoxie (hairy crab) tea, Dayewulong and Meizhan tea are obtained on the market. The tea samples are held in 100 mm × 20 mm glass Petri dishes; each sample of about 10 g is spread randomly and evenly over the bottom of the dish to a depth of about 1 cm, and 50 samples are collected for each variety;
the hyperspectral imaging system used in the experiment adopts a series of hyperspectral cameras of Image-lambda spectrum Image of Touhenhan optical company, mainly comprises an imaging spectrometer and a CCD, the spectrum acquisition range of the cameras is 400nm-1000nm, the minimum resolution of the spectrum is 2.8nm, the spectrum range has 520 wave bands in total, and the determination speed is less than 1min for each sample. The hyperspectral camera has pixels of 1344 multiplied by 1024, the pixel size is 6.45 multiplied by 6.45 mu m, a DC12V power supply is adopted for supplying power, and the normal operation temperature is 0-40 ℃.
Data enhancement
The original picture set contains 350 pictures of 7 different tea varieties. Training a convolutional neural network requires a large amount of image data to prevent overfitting.
In the invention, the ImageDataGenerator interface built into TensorFlow 2.0 is used to augment the input image data: operations such as random rotation, translation and shearing are applied to the pictures, and the results are merged to obtain a data set enlarged about 6-fold. Feature layers are extracted from the augmented pictures with the EfficientNetB4 network architecture. The model is fine-tuned, the fully connected layer of the original model is modified, and the Softmax layer is changed to 7 outputs according to the required number of classes. Depending on the model, different layers are frozen, and pre-training uses ImageNet weights by way of transfer learning.
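A minimal sketch of this augmentation and transfer-learning step follows, assuming TensorFlow 2.x (EfficientNetB4 is available in tf.keras.applications from TF 2.3 onward). The directory name, augmentation ranges, batch size and epoch count are illustrative assumptions, not values taken from the invention.

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random rotation, translation and shear, as in the augmentation step above.
datagen = ImageDataGenerator(
    rotation_range=30,         # random rotation
    width_shift_range=0.1,     # horizontal translation
    height_shift_range=0.1,    # vertical translation
    shear_range=0.2,           # shear
)
train = datagen.flow_from_directory(
    "tea_hsi_images/",         # hypothetical folder with one subfolder per variety
    target_size=(380, 380),    # EfficientNet-B4's native input resolution
    batch_size=16,
)

# EfficientNet-B4 backbone pre-trained on ImageNet, with a new 7-way Softmax head.
base = tf.keras.applications.EfficientNetB4(
    include_top=False, weights="imagenet", input_shape=(380, 380, 3))
base.trainable = False         # freeze the backbone for the pre-training stage

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 tea varieties
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train, epochs=10)
```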
EfficientNet network
EfficientNet jointly optimizes network width, network depth and input image resolution to improve its metrics, greatly reducing model parameters and computation while matching the accuracy of existing classification networks.
The network structure used by the invention is shown in FIG. 2. Among the eight models EfficientNet-B0 to B7 (alternatives include VGG16, ResNet50, AlexNet, VGGNet and transfer-learning variants), EfficientNet-B4 is selected after comparing the models' parameters and accuracy.
The pictures are then input into a bilinear neural network to carry out the local-region detection and feature extraction tasks of fine-grained image classification.
The idea of the Bilinear CNN is to extract features from a picture with a two-stream convolutional neural network and then combine the two extracted features bilinearly, making the feature information richer. The network structure is shown in FIG. 3: two convolutional neural networks, A and B, extract two features at each position of the image, the features are multiplied by outer product, and classification is finally performed by a classification layer. In the invention the model coordinates the CNN A and CNN B networks: CNN A localizes the characteristic parts of the image, and CNN B extracts features from the regions detected by CNN A, completing the local-region detection and feature extraction tasks of fine-grained image classification.
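A minimal sketch of the bilinear (outer-product) pooling described above, assuming TensorFlow 2.x: two small stand-in CNNs play the roles of CNN A and CNN B, and the signed square root plus L2 normalization, common in Bilinear CNN implementations though not spelled out here, is an added assumption.

```python
import tensorflow as tf

def bilinear_pooling(feat_a, feat_b):
    # Per-location outer product of the two feature maps, summed over all
    # spatial positions: (B,H,W,C1) x (B,H,W,C2) -> (B, C1*C2).
    phi = tf.einsum("bhwi,bhwj->bij", feat_a, feat_b)
    phi = tf.reshape(phi, (-1, feat_a.shape[-1] * feat_b.shape[-1]))
    phi = tf.sign(phi) * tf.sqrt(tf.abs(phi) + 1e-12)  # signed square root
    return tf.math.l2_normalize(phi, axis=-1)          # L2 normalization

def small_cnn(name):
    # Stand-in feature extractor; any backbone could take this role.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(128, 3, strides=2, activation="relu"),
    ], name=name)

inputs = tf.keras.Input(shape=(224, 224, 3))
feat_a = small_cnn("cnn_a")(inputs)  # stream A: localizes discriminative parts
feat_b = small_cnn("cnn_b")(inputs)  # stream B: describes the located parts
phi = tf.keras.layers.Lambda(lambda t: bilinear_pooling(t[0], t[1]))([feat_a, feat_b])
outputs = tf.keras.layers.Dense(7, activation="softmax")(phi)  # 7 tea varieties
model = tf.keras.Model(inputs, outputs)
model.summary()
```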
The result obtained by the bilinear convolutional neural network is combined with the CBAM attention module: a weighted result is first obtained through the channel attention module, an extracted result is then obtained through the spatial attention module, and the final weighting is applied, as shown in FIG. 4.
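A compact sketch of a CBAM block, channel attention followed by spatial attention, is given below, assuming TensorFlow 2.x; the reduction ratio r = 16 and the 7 × 7 spatial kernel follow common CBAM defaults, which are not specified above.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cbam(x, ratio=16, kernel_size=7):
    c = x.shape[-1]
    # Channel attention: a shared MLP scores average- and max-pooled descriptors.
    mlp = tf.keras.Sequential([layers.Dense(c // ratio, activation="relu"),
                               layers.Dense(c)])
    avg = mlp(layers.GlobalAveragePooling2D()(x))
    mx = mlp(layers.GlobalMaxPooling2D()(x))
    scale = layers.Reshape((1, 1, c))(
        layers.Activation("sigmoid")(layers.Add()([avg, mx])))
    x = layers.Multiply()([x, scale])                  # channel-weighted features
    # Spatial attention: a 7x7 conv scores channel-wise average and max maps.
    avg_sp = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_sp = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    mask = layers.Conv2D(1, kernel_size, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg_sp, max_sp]))
    return layers.Multiply()([x, mask])                # spatially-weighted features

# Usage on a backbone feature map, e.g. 12x12x1792 from EfficientNet-B4 at 380x380 input:
feats = tf.keras.Input(shape=(12, 12, 1792))
refined = cbam(feats)
model = tf.keras.Model(feats, refined)
```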
Model evaluation
Finally, the classification results are evaluated with the model evaluation metrics.
The simplest and most common metric for evaluating a model is accuracy, but using accuracy as the sole evaluation index, without any qualification, often fails to reflect the performance of the model, so precision and recall are also needed to evaluate model quality. Precision is the proportion of correctly detected objects among all detected objects, and recall is the proportion of correctly detected objects among all actual positive samples, as follows:
Accuracy = (TP + TN) / (TP + FP + TN + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)
where true positives (TP) and true negatives (TN) denote correctly classified images, and false positives (FP) and false negatives (FN) denote misclassified images.
Accuracy: Accuracy = (TP + TN)/(TP + FP + TN + FN), the probability that a sample is correctly classified; different thresholds T can be used.
Precision: Precision = TP/(TP + FP), the fraction of samples recalled as positive that are truly positive.
Recall: Recall = TP/(TP + FN), the fraction of truly positive samples that are recalled.
F1 is an index that considers precision and recall together; it is the harmonic mean of the two.
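The four metrics above can be computed directly from the TP/FP/TN/FN counts; the counts in the sketch below are illustrative, not experimental results.

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall and F1 from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)   # harmonic mean of the two
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

print(classification_metrics(tp=45, fp=3, tn=290, fn=5))  # illustrative counts
```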
Example 2
Embodiment 2 differs from embodiment 1 in that hyperspectral imaging is used to acquire spectral features in addition to image information.
With hyperspectral imaging, spectral features can be acquired at the same time as image information and used to classify the tea. In terms of model building and methodology, richer and more representative features are extracted, and different feature-variable screening methods are used to select the feature variables, including: the IRIV (iteratively retains informative variables) algorithm, with which the influence of scattering on the spectral data is eliminated; the CARS (competitive adaptive reweighted sampling) method, which uses adaptive reweighted sampling to select the wavelength points with large absolute regression coefficients in a PLSR model and remove those with small weights, effectively finding the optimal variable combination; and the UVE (uninformative variable elimination) algorithm, a method for removing useless information and extracting characteristic wavelengths. Feature variables with better interpretability are obtained with an algorithm that analyzes variable stability by the partial least squares regression coefficient method, and modeling methods such as the extreme learning machine (ELM) are built.
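Of the modeling methods listed, the extreme learning machine is simple enough to sketch in full. The NumPy version below is a generic ELM under assumed dimensions, not the trained model of the invention: random input weights and biases, a tanh hidden layer, and output weights solved in closed form by least squares.

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                # random biases
        H = np.tanh(X @ self.W + self.b)                            # hidden activations
        self.beta, *_ = np.linalg.lstsq(H, Y, rcond=None)           # solve H @ beta = Y
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# e.g. X holds spectra restricted to wavelengths chosen by CARS/UVE/IRIV,
# and Y holds one-hot class labels; both are illustrative placeholders.
```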
1. The invention collects samples with the hyperspectral imaging system without damaging the structure of the tea samples; that is, a reliable data set is obtained using hyperspectral imaging.
2. The method obtains a data set from hyperspectral images and studies and applies a bilinear convolutional neural network combined with an attention mechanism, achieving improvements in both time and accuracy.
3. The key point of protection of the invention is the deep-learning method of building a data set by hyperspectral acquisition of tea leaves and then classifying it.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (6)

1. A tea leaf classification method based on hyperspectral images and deep learning is characterized by comprising the following steps:
step 1, acquiring hyperspectral images of tea to build a data set;
step 2, expanding the data set through data augmentation;
step 3, inputting the pictures into a bilinear neural network for local-region detection and feature extraction in fine-grained image classification;
step 4, combining the bilinear neural network with an attention mechanism, and adding a fully connected layer for training;
and step 5, computing accuracy, precision and recall as evaluation metrics.
2. The tea leaf classification method based on hyperspectral image and deep learning according to claim 1, wherein step 1 comprises: obtaining a plurality of tea varieties, the tea samples being held in 100 mm × 20 mm glass Petri dishes, each sample of about 10 g being spread randomly and evenly over the bottom of the dish to a depth of about 1 cm, with 50 samples collected for each variety.
3. The tea leaf classification method based on hyperspectral image and deep learning according to claim 1, wherein step 2 comprises: the original picture set contains 350 pictures of 7 different tea varieties, since training a convolutional neural network requires a large amount of image data to prevent overfitting;
therefore the ImageDataGenerator interface built into TensorFlow 2.0 is used to augment the input image data; random rotation, translation and shearing operations are applied to the pictures and the results are merged to obtain a data set enlarged about 6-fold; feature layers are extracted from the augmented pictures with the EfficientNetB4 network architecture; the model is fine-tuned and the fully connected layer of the original model is modified, the Softmax layer being changed to 7 outputs according to the required number of classes; depending on the model, different layers are frozen, and pre-training uses ImageNet weights by way of transfer learning.
4. The tea leaf classification method based on hyperspectral image and deep learning according to claim 1, wherein step 3 comprises: jointly optimizing network width, network depth and input image resolution to improve the metrics, greatly reducing model parameters and computation while matching the accuracy of existing classification networks;
from the existing eight models EfficientNet-B0 to B7, one is selected by comparing the models' parameters and accuracy, and the pictures are input into a bilinear neural network for the local-region detection and feature extraction tasks of fine-grained image classification; the bilinear convolutional neural network uses two convolutional neural networks, A and B, to extract two features at each position of the image, multiplies them by outer product, and finally classifies through a classification layer; the model coordinates the CNN A and CNN B networks, CNN A localizing the characteristic parts of the image and CNN B extracting features from the regions detected by CNN A, completing the local-region detection and feature extraction tasks of fine-grained image classification.
5. The tea leaf classification method based on hyperspectral image and deep learning according to claim 1, wherein step 4 comprises: combining the result of the bilinear convolutional neural network with a CBAM attention module; a weighted result is first obtained through the channel attention module, an extracted result is then obtained through the spatial attention module, and the final weighting is applied.
6. The tea leaf classification method based on hyperspectral image and deep learning according to claim 1, wherein step 5 comprises: evaluating the quality of the model using accuracy, precision and recall, where precision is the proportion of correctly detected objects among all detected objects and recall is the proportion of correctly detected objects among all actual positive samples, as follows:
Accuracy = (TP + TN) / (TP + FP + TN + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)
wherein true positives (TP) and true negatives (TN) denote correctly classified images, and false positives (FP) and false negatives (FN) denote misclassified images;
Accuracy denotes the probability that a sample is correctly classified, and different thresholds T may be used;
Precision denotes the fraction of samples recalled as positive that are truly positive;
Recall denotes the fraction of truly positive samples that are recalled;
F1 is an index that considers precision and recall together and is the harmonic mean of the two.
CN202110534790.1A 2021-05-17 2021-05-17 Tea leaf classification method based on hyperspectral image and deep learning Pending CN113158980A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110534790.1A CN113158980A (en) 2021-05-17 2021-05-17 Tea leaf classification method based on hyperspectral image and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110534790.1A CN113158980A (en) 2021-05-17 2021-05-17 Tea leaf classification method based on hyperspectral image and deep learning

Publications (1)

Publication Number Publication Date
CN113158980A true CN113158980A (en) 2021-07-23

Family

ID=76876152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110534790.1A Pending CN113158980A (en) 2021-05-17 2021-05-17 Tea leaf classification method based on hyperspectral image and deep learning

Country Status (1)

Country Link
CN (1) CN113158980A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845418A (en) * 2017-01-24 2017-06-13 北京航空航天大学 A kind of hyperspectral image classification method based on deep learning
CN110852227A (en) * 2019-11-04 2020-02-28 中国科学院遥感与数字地球研究所 Hyperspectral image deep learning classification method, device, equipment and storage medium
CN111079851A (en) * 2019-12-27 2020-04-28 常熟理工学院 Vehicle type identification method based on reinforcement learning and bilinear convolution network

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Ye Yulong et al.: "Research on Technology Forecasting of the Refined Sichuan Tea Industry", Journal of Anhui Agricultural Sciences *
Meng Shulin: "Tea Disease Recognition Based on Multi-Feature Optimization and an Improved Relation Network", China Master's Theses Full-text Database (electronic journal) *
Zhang Zizhen et al.: "Recognition and Classification of Diabetic Retinopathy Combining an Attention Mechanism and EfficientNet", Journal of Image and Graphics *
Xu Taiyan: "Quality Analysis of Tea Based on Neural Networks", Fujian Tea *
Li Yao: "Research on Tea Quality Discrimination Methods Based on Hyperspectral Imaging", China Master's Theses Full-text Database *
Yang Dan et al.: "Fine-Grained Image Classification Algorithm Based on an Attention Mechanism", Journal of Southwest University of Science and Technology *
Wang Wenming et al.: "Research Progress in Intelligent Tea Recognition and Detection Technology Based on Image Processing", Journal of Chinese Agricultural Mechanization *
Chen Quansheng et al.: "Application of Support Vector Machines in Machine-Vision Recognition of Tea", Chinese Journal of Scientific Instrument *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114112984A (en) * 2021-10-25 2022-03-01 上海布眼人工智能科技有限公司 Fabric fiber component qualitative method based on self-attention
CN114112984B (en) * 2021-10-25 2022-09-20 上海布眼人工智能科技有限公司 Fabric fiber component qualitative method based on self-attention
CN114283303A (en) * 2021-12-14 2022-04-05 贵州大学 Tea leaf classification method
CN114283303B (en) * 2021-12-14 2022-07-12 贵州大学 Tea leaf classification method
CN114818985A (en) * 2022-05-31 2022-07-29 安徽农业大学 Tea quality evaluation method based on center anchor point triple optimization pseudo-twin network
CN114818985B (en) * 2022-05-31 2024-04-16 安徽农业大学 Tea quality evaluation method based on central anchor point triplet optimization pseudo-twin network
CN115062656A (en) * 2022-06-10 2022-09-16 安徽农业大学 Method and device for predicting tea polyphenol content based on electronic nose signal space domain
CN115062656B (en) * 2022-06-10 2023-08-11 安徽农业大学 Tea polyphenol content prediction method and device based on electronic nose signal space domain
CN115754107A (en) * 2022-11-08 2023-03-07 福建省龙德新能源有限公司 Automatic sampling analysis system and method for preparing lithium hexafluorophosphate

Similar Documents

Publication Publication Date Title
CN113158980A (en) Tea leaf classification method based on hyperspectral image and deep learning
Li et al. Evaluating green tea quality based on multisensor data fusion combining hyperspectral imaging and olfactory visualization systems
CN102435713B (en) Automatic detection system for quality of traditional Chinese medicine
Gamboa et al. Wine quality rapid detection using a compact electronic nose system: Application focused on spoilage thresholds by acetic acid
Kiani et al. Integration of computer vision and electronic nose as non-destructive systems for saffron adulteration detection
CN110133049B (en) Electronic nose and machine vision-based rapid nondestructive testing method for tea grade
CN108875913B (en) Tricholoma matsutake rapid nondestructive testing system and method based on convolutional neural network
CN103278609A (en) Meat product freshness detection method based on multisource perceptual information fusion
Yuan et al. Selecting key wavelengths of hyperspectral imagine for nondestructive classification of moldy peanuts using ensemble classifier
Zhang et al. Channel attention convolutional neural network for Chinese baijiu detection with E-nose
CN102589470A (en) Fuzzy-neural-network-based tea leaf appearance quality quantification method
CN106290224A (en) The detection method of bacon quality
CN106568907A (en) Chinese mitten crab freshness damage-free detection method based on semi-supervised identification projection
CN112699756A (en) Hyperspectral image-based tea origin identification method and system
CN114663821B (en) Real-time nondestructive detection method for product quality based on video hyperspectral imaging technology
CN110133050A (en) A method of based on multisensor Qualitative fingerprint quantitative detection tea leaf quality
CN105181650A (en) Method for quickly identifying tea varieties through near-infrared spectroscopy technology
CN111896495A (en) Method and system for discriminating Taiping Houkui production places based on deep learning and near infrared spectrum
CN106874929A (en) A kind of pearl sorting technique based on deep learning
CN109295159A (en) Sausage quality Intelligent detecting method
Pu et al. Distinguishing fresh and frozen-thawed beef using hyperspectral imaging technology combined with convolutional neural networks
Hu et al. Determination of Tibetan tea quality by hyperspectral imaging technology and multivariate analysis
Li et al. Qualitative and quantitative analysis of the pile fermentation degree of Pu-erh tea
CN106940292A (en) Bar denier wood raw material quick nondestructive discrimination method of damaging by worms based on multi-optical spectrum imaging technology
CN105606544A (en) Nondestructive detection method of insect bodies of Cordyceps sinensis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210723