CN110084194B - Seed cotton mulching film online identification method based on hyperspectral imaging and deep learning - Google Patents

Seed cotton mulching film online identification method based on hyperspectral imaging and deep learning

Info

Publication number
CN110084194B
CN110084194B (application number CN201910345604.2A)
Authority
CN
China
Prior art keywords
learning machine
encoder
extreme learning
mulching film
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910345604.2A
Other languages
Chinese (zh)
Other versions
CN110084194A (en)
Inventor
倪超
张雄
李振业
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Forestry University
Original Assignee
Nanjing Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Forestry University filed Critical Nanjing Forestry University
Priority to CN201910345604.2A priority Critical patent/CN110084194B/en
Publication of CN110084194A publication Critical patent/CN110084194A/en
Application granted granted Critical
Publication of CN110084194B publication Critical patent/CN110084194B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a seed cotton mulching film online identification method based on hyperspectral imaging and deep learning. A hyperspectral imager acquires reflection spectrum images of the seed cotton and mulching film, and a deep learning network consisting of a stacked weighted self-encoder and a particle-swarm-optimized extreme learning machine is constructed to identify the hyperspectral images online. The hyperspectral images of the seed cotton and mulching film are classified by the network formed by the stacked weighted self-encoder and the extreme learning machine; a weighting mechanism is introduced into each self-encoder, reducing the influence of noise while retaining the advantage of multi-channel input. Because the weights and biases of the extreme learning machine are determined randomly and are therefore prone to overfitting, they are optimized with the particle swarm algorithm, which improves classification accuracy while preserving recognition speed. The deep learning network formed by the stacked weighted self-encoder and the extreme learning machine can thus be used for online identification of mulching film in seed cotton.

Description

Seed cotton mulching film online identification method based on hyperspectral imaging and deep learning
Technical Field
The invention belongs to the technical field of seed cotton foreign fiber identification, and particularly relates to a seed cotton mulching film online identification method based on hyperspectral imaging and deep learning.
Background
China is a major producer and consumer of cotton, and cotton processing and spinning play an important role in the national economy. Xinjiang, the main cotton-producing region of China, widely applies plastic mulching film in cotton planting and has a high degree of mechanization in cotton picking. During mechanical picking a large amount of mulching film is mixed into the seed cotton; if it is not thoroughly removed, it passes into the lint during processing and inevitably degrades the quality and dyeability of the resulting textiles. At present, mulching film residue in machine-harvested cotton is the fundamental quality gap between domestic machine-harvested cotton and imported cotton, and is one of the main reasons domestic machine-harvested cotton meets resistance in processing, storage and sales. Textile enterprises therefore prefer imported cotton or hand-picked cotton and select domestic machine-harvested cotton only with caution, so removing the mulching film is an urgent technical problem for the Xinjiang cotton industry.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the defects of the prior art, the invention aims to provide a seed cotton mulching film online identification method based on hyperspectral imaging and deep learning.
The technical scheme is as follows: to achieve the above purpose, the invention adopts the following technical scheme:
A seed cotton mulching film online identification method based on hyperspectral imaging and deep learning, in which a hyperspectral imager is used to acquire reflection spectrum images of the seed cotton and mulching film and a deep learning network consisting of a stacked weighted self-encoder and a particle-swarm-optimized extreme learning machine is constructed for online identification of the hyperspectral images, comprising the following steps:
(1) acquiring a reflection spectrum image of the seed cotton mulching film by using a hyperspectral imager;
(2) extracting, layer by layer, high-order features related to the output using a stacked weighted self-encoder: the 288-dimensional vector formed by the reflection spectrum of each pixel in the hyperspectral image over the 1000 nm-2500 nm band is taken as the input of the whole network, and the stacked weighted self-encoder reduces the dimensionality of this 288-dimensional vector;
(3) performing supervised adjustment of the network weights of the stacked weighted self-encoder using a two-layer artificial neural network combined with the BP algorithm;
(4) after training, taking the dimension-reduced high-order features as the input of the extreme learning machine, and optimizing the weights and biases of the extreme learning machine with an optimization algorithm;
(5) processing the dimension-reduced 36-dimensional high-order features with the optimized extreme learning machine to classify the hyperspectral image, thereby identifying the mulching film in the seed cotton.
Preferably, in step (1), the hyperspectral imager acquires reflection spectrum images of the seed cotton and mulching film over 1000 nm-2500 nm at a spectral interval of 5.6 nm, giving 288 spectral bands of data.
Preferably, in step (2), the stacked weighted self-encoder is a deep neural network formed by three layers of weighted self-encoders; the network parameters are set and high-order features related to the output are extracted layer by layer. The input is the 288-dimensional vector formed by the reflection spectrum of each pixel in the hyperspectral image over the 1000 nm-2500 nm band; after dimensionality reduction by the three-layer weighted self-encoder, a 36-dimensional high-order feature is output.
Preferably, the weights and biases of each weighted self-encoder layer are updated by layer-by-layer pre-training and a gradient descent algorithm; the three weighted self-encoder layers have 144, 72 and 36 neurons respectively and use sigmoid transfer functions.
Preferably, in step (3), the two artificial neural network layers have 18 and 4 neurons respectively and use sigmoid and softmax transfer functions; together with the three-layer weighted self-encoder they form a deep neural network, which is trained with pre-labeled data, and the network weights of the stacked weighted self-encoder are adjusted under supervision in combination with the BP algorithm. A code sketch of this architecture follows.
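A minimal Keras sketch of this architecture is given below, assuming plain stacked self-encoder layers; the variable-wise weighting mechanism of each self-encoder and the layer-by-layer pre-training are omitted for brevity, and the four-class softmax output corresponds to the 4-neuron layer described above.

```python
# Hypothetical sketch of the 288 -> 144 -> 72 -> 36 feature network with the
# 18-neuron sigmoid / 4-neuron softmax fine-tuning head (weighting omitted).
from tensorflow import keras
from tensorflow.keras import layers

def build_network(n_bands=288, n_classes=4):
    inputs = keras.Input(shape=(n_bands,))
    x = layers.Dense(144, activation="sigmoid")(inputs)         # self-encoder layer 1
    x = layers.Dense(72, activation="sigmoid")(x)                # self-encoder layer 2
    features = layers.Dense(36, activation="sigmoid")(x)         # self-encoder layer 3: 36-dim high-order features
    x = layers.Dense(18, activation="sigmoid")(features)         # ANN layer 1
    outputs = layers.Dense(n_classes, activation="softmax")(x)   # ANN layer 2
    model = keras.Model(inputs, outputs)
    encoder = keras.Model(inputs, features)   # later feeds the extreme learning machine
    return model, encoder

model, encoder = build_network()
model.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, ...)  # X_train: (N, 288) pixel spectra, y_train: one-hot labels
```

After supervised fine-tuning, encoder.predict would return the 36-dimensional features that serve as the input of the extreme learning machine.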
Preferably, in the step (4), the optimization algorithm is a particle swarm optimization algorithm.
Preferably, the weights and biases of the extreme learning machine are taken as the particles of the particle swarm optimization algorithm, and the particle length is D = k(n + 1), where k is the number of hidden-layer nodes, k = 20, and n is the input dimension, n = 36;
the particle swarm optimization of the extreme learning machine comprises the following specific steps (a code sketch of the loop is given after the steps):
(1) particle swarm optimization (PSO) initialization: randomly generate m groups of particles, where θ_i is the i-th particle, θ_i = [w_11^i, w_12^i, ..., w_1k^i, w_21^i, w_22^i, ..., w_2k^i, ..., w_n1^i, w_n2^i, ..., w_nk^i, b_1^i, b_2^i, ..., b_k^i], and each w and b is a random number in [-1, 1];
(2) particle swarm optimization (PSO) parameter selection: the population size m = 20, the number of iterations t_max = 100, and the acceleration coefficients c1 = c2 = 2;
the inertia weight is updated dynamically as ω_t = (ω_ini - ω_end)(t_max - t)/t_max + ω_end,
where: ω_ini is the initial inertia weight, taken as ω_ini = 0.9;
ω_end is the inertia weight at the maximum number of iterations, ω_end = 0.4;
t_max is the maximum number of iterations;
t is the current iteration number;
(3) the high-order features output by the stacked weighted self-encoder are taken as the input of the extreme learning machine, the classification accuracy of the extreme learning machine is used as the fitness function, the fitness value of each particle is calculated, and the individual best and global best of the particles are obtained;
(4) the velocity and position of each particle are updated;
(5) when the maximum number of iterations is reached, the optimization terminates and the best position is saved as the parameters of the extreme learning machine.
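A minimal sketch of this loop is given below, assuming a standard global-best PSO with the parameters listed above (m = 20, t_max = 100, c1 = c2 = 2, ω decreasing from 0.9 to 0.4) and a user-supplied elm_accuracy function, labeled hypothetical here, that builds an extreme learning machine from a candidate weight/bias vector and returns its classification accuracy.

```python
# Hypothetical PSO sketch for optimizing the ELM input weights and biases.
import numpy as np

def pso_optimize(elm_accuracy, dim=20 * (36 + 1), m=20, t_max=100,
                 c1=2.0, c2=2.0, w_ini=0.9, w_end=0.4, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(m, dim))        # particles: candidate ELM weights and biases
    vel = np.zeros((m, dim))
    p_best = pos.copy()                                # individual best positions
    p_best_fit = np.array([elm_accuracy(p) for p in pos])
    g_best = p_best[np.argmax(p_best_fit)].copy()      # global best position
    g_best_fit = p_best_fit.max()
    for t in range(t_max):
        w = (w_ini - w_end) * (t_max - t) / t_max + w_end   # dynamic inertia weight
        r1, r2 = rng.random((m, dim)), rng.random((m, dim))
        vel = w * vel + c1 * r1 * (p_best - pos) + c2 * r2 * (g_best - pos)
        pos = np.clip(pos + vel, -1.0, 1.0)            # keep w, b inside [-1, 1]
        fit = np.array([elm_accuracy(p) for p in pos])
        improved = fit > p_best_fit
        p_best[improved], p_best_fit[improved] = pos[improved], fit[improved]
        if fit.max() > g_best_fit:
            g_best, g_best_fit = pos[np.argmax(fit)].copy(), fit.max()
    return g_best, g_best_fit      # best (weights, biases) vector and its accuracy
```

The returned vector of length D = k(n + 1) = 740 is then reshaped into the input weights and hidden biases of the extreme learning machine.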
Preferably, in step (4), the extreme learning machine takes the 36-dimensional high-order features produced by the stacked weighted self-encoder as input and comprises a single hidden layer with 20 neurons and a sigmoid transfer function.
Preferably, the hyperspectral imager is a SWIR series hyperspectral imager manufactured by SPECIM of Finland.
Preferably, in step (5), the optimized extreme learning machine is used as the final classifier to process the 36-dimensional high-order features and classify the hyperspectral image.
Beneficial effects: compared with the prior art, the invention has the following advantages:
1) The method applies hyperspectral technology to the field of seed cotton mulching film identification: a deep learning network formed by a stacked weighted self-encoder and an extreme learning machine classifies the hyperspectral images and identifies the mulching film in the seed cotton online. For residual transparent mulching film that cannot be identified by color or black-and-white cameras, the invention collects the 1000-2500 nm reflection spectra of the seed cotton flow with a hyperspectral imager and then identifies and separates the residual mulching film, whose spectral characteristics differ from those of the seed cotton.
2) The stacked weighted self-encoder reduces the 288-dimensional vector of each pixel in the hyperspectral image to a 36-dimensional high-order feature; the weighting mechanism introduced into each self-encoder layer reduces the influence of noise while retaining the advantage of multi-channel input.
3) The extreme learning machine is used as the classifier, which increases the recognition speed of the algorithm.
4) The weights and biases of the extreme learning machine are determined randomly and are therefore prone to overfitting; optimizing them with the particle swarm algorithm improves the recognition accuracy of the algorithm while preserving the recognition speed.
Drawings
FIG. 1 is a flow chart of an on-line seed cotton mulching film identification algorithm of the present invention;
FIG. 2 is a flow chart of the particle swarm optimization extreme learning machine of the present invention;
FIG. 3 is an original false-color image of the seed cotton generated by the acquisition software of the present invention;
FIG. 4 is a diagram of the seed cotton classification effect obtained with the method of the present invention.
Detailed Description
The present invention will be further illustrated by the following specific examples, which are carried out on the premise of the technical scheme of the present invention. It should be understood that these examples are only intended to illustrate the present invention and not to limit its scope.
The invention discloses a seed cotton mulching film online identification method based on hyperspectral imaging and deep learning, in which a hyperspectral imager is used to acquire reflection spectrum images of the seed cotton and mulching film and a deep learning network consisting of a stacked weighted self-encoder and a particle-swarm-optimized extreme learning machine is constructed to identify the hyperspectral images online, comprising the following steps:
(1) a SWIR series hyperspectral imager from SPECIM of Finland acquires reflection spectrum images of the seed cotton and mulching film over 1000 nm-2500 nm at a spectral interval of 5.6 nm, giving 288 spectral bands of data;
(2) extracting, layer by layer, high-order features related to the output using a stacked weighted self-encoder: the 288-dimensional vector formed by the reflection spectrum of each pixel in the hyperspectral image over the 1000 nm-2500 nm band is taken as the input of the whole network, and the stacked weighted self-encoder reduces the dimensionality of this 288-dimensional vector;
the stacking Weighted self-encoder (VW-SAE) is a deep neural network formed by three layers of Weighted self-encoders, network parameters are set, and high-order features related to output are extracted layer by layer; the weight and the offset value of each layer of weighted self-encoder are updated through a layer-by-layer pre-training technology and a gradient descent algorithm, the number of neurons of the three layers of weighted self-encoders is 144, 72 and 36 respectively, and sigmoid transfer functions are adopted.
The input is 288-dimensional vectors formed by reflection spectrums of each pixel point in the hyperspectral image in a wave band of 1000 nm-2500 nm, and the output is a 36-dimensional high-order feature.
(3) The network weights of the stacked weighted self-encoder are adjusted under supervision using a two-layer artificial neural network combined with the BP algorithm.
The two-layer artificial neural network (ANN) is structured as follows: the two layers have 18 and 4 neurons respectively and use sigmoid and softmax transfer functions; together with the three-layer weighted self-encoder they form a deep neural network, which is trained with pre-labeled data, and the network weights of the stacked weighted self-encoder are adjusted under supervision in combination with the BP algorithm.
(4) After training, the 36-dimensional high-order features are taken as the input of the extreme learning machine, and the weights and biases of the extreme learning machine are optimized with the particle swarm optimization algorithm.
The structure of the extreme learning machine (ELM) is as follows: the extreme learning machine takes the 36-dimensional high-order features produced by the stacked weighted self-encoder as input and comprises a single hidden layer with 20 neurons and a sigmoid transfer function. A numerical sketch of such an ELM is given below.
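A minimal numpy sketch of a single-hidden-layer ELM of this size is given below, assuming the standard ELM training scheme in which the hidden-layer weights W and biases b are fixed (here they come from the particle swarm optimization) and the output weights are obtained by a least-squares solution; the function names are illustrative.

```python
# Hypothetical ELM sketch: 36-dimensional input, 20 sigmoid hidden neurons.
# W (36, 20) and b (20,) are the parameters searched by the PSO; the output
# weights beta are solved in closed form with the Moore-Penrose pseudo-inverse.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_fit(X, Y, W, b):
    """X: (N, 36) features, Y: (N, n_classes) one-hot labels."""
    H = sigmoid(X @ W + b)          # hidden-layer output matrix, shape (N, 20)
    beta = np.linalg.pinv(H) @ Y    # output weights, shape (20, n_classes)
    return beta

def elm_predict(X, W, b, beta):
    return np.argmax(sigmoid(X @ W + b) @ beta, axis=1)   # predicted class indices
```

The classification accuracy obtained from elm_predict on held-out pixels is what the particle swarm optimization below uses as the fitness value of a candidate (W, b).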
The Particle Swarm Optimization algorithm (PSO) is described in detail as follows:
The weights and biases of the extreme learning machine are taken as the particles of the particle swarm optimization algorithm, and the particle length is D = k(n + 1) = 740, where k is the number of hidden-layer nodes, k = 20, and n is the input dimension, n = 36;
the particle swarm optimization of the extreme learning machine comprises the following specific steps (a sketch of the fitness evaluation follows the steps):
1) particle swarm optimization (PSO) initialization: randomly generate m groups of particles, where θ_i is the i-th particle, θ_i = [w_11^i, w_12^i, ..., w_1k^i, w_21^i, w_22^i, ..., w_2k^i, ..., w_n1^i, w_n2^i, ..., w_nk^i, b_1^i, b_2^i, ..., b_k^i], and each w and b is a random number in [-1, 1];
2) particle swarm optimization (PSO) parameter selection: the population size m = 20, the number of iterations t_max = 100, and the acceleration coefficients c1 = c2 = 2;
the inertia weight is updated dynamically as ω_t = (ω_ini - ω_end)(t_max - t)/t_max + ω_end,
where: ω_ini is the initial inertia weight, taken as ω_ini = 0.9;
ω_end is the inertia weight at the maximum number of iterations, ω_end = 0.4;
t_max is the maximum number of iterations;
t is the current iteration number;
3) the high-order features output by the stacked weighted self-encoder are taken as the input of the extreme learning machine, the classification accuracy of the extreme learning machine is used as the fitness function, the fitness value of each particle is calculated, and the individual best and global best of the particles are obtained;
4) the velocity and position of each particle are updated;
5) when the maximum number of iterations is reached, the optimization terminates and the best position is saved as the parameters of the extreme learning machine.
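For illustration, a hypothetical helper that decodes one 740-dimensional particle into the ELM weight matrix and bias vector and scores it (the fitness of step 3), reusing elm_fit and elm_predict from the sketch above, could look as follows:

```python
# Hypothetical fitness helper: decode a particle into (W, b), solve the ELM
# output weights on the training split and return accuracy on a validation split.
import numpy as np

def particle_fitness(theta, X_tr, Y_tr, X_val, y_val, n=36, k=20):
    W = theta[: n * k].reshape(n, k)    # first n*k entries: input weights
    b = theta[n * k:]                   # last k entries: hidden-layer biases
    beta = elm_fit(X_tr, Y_tr, W, b)
    y_pred = elm_predict(X_val, W, b, beta)
    return np.mean(y_pred == y_val)     # classification accuracy used as fitness

# e.g. the fitness function handed to a PSO loop:
# elm_accuracy = lambda theta: particle_fitness(theta, X_tr, Y_tr, X_val, y_val)
```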
(5) The optimized extreme learning machine is used as the final classifier to process the dimension-reduced 36-dimensional high-order features and classify the hyperspectral image, thereby identifying the mulching film in the seed cotton.
Experiments were carried out with the seed cotton mulching film online identification method based on hyperspectral imaging and deep learning; the experimental results are shown in Figs. 3 and 4, and the effect of the method is further explained below in combination with simulation experiments.
1. Simulation experiment conditions:
The deep learning network is written in Python using the Vim editor, with Python libraries such as numpy, Keras and TensorFlow. The computer running the deep learning network has an Intel i7 CPU with a main frequency of 2.6 GHz, 8 GB of memory, and an Nvidia GTX 1050 graphics card as the GPU; to speed up the computation, the data are processed on the Ubuntu 16.04 platform together with CUDA and cuDNN.
2. Simulation experiment contents:
spectral data of the seed cotton mulching film are obtained by using a SWIR series hyperspectral imager of Finland SPECIM company, and the hyperspectral data are processed by using the algorithm provided by the invention.
As shown in fig. 3, it is the original false color image generated by the collecting software, and fig. 4 is the classification effect image generated by the method of the present invention. Experiments show that the method can identify the mulching film with a rate of 95.5%, and the identification time of each row of pixels is 2.47ms, so that the conclusion can be drawn that the method can identify the mulching film in the cotton seeds with high efficiency and has good real-time performance.

Claims (2)

1. A seed cotton mulching film online identification method based on hyperspectral imaging and deep learning, characterized in that: a hyperspectral imager is used to acquire reflection spectrum images of the seed cotton and mulching film, a deep learning network consisting of a stacked weighted self-encoder and a particle-swarm-optimized extreme learning machine is constructed, and the hyperspectral images are identified online, the method comprising the following steps:
(1) acquiring a reflection spectrum image of the seed cotton mulching film by using a hyperspectral imager;
(2) extracting, layer by layer, high-order features related to the output using a stacked weighted self-encoder: the 288-dimensional vector formed by the reflection spectrum of each pixel in the hyperspectral image over the 1000 nm-2500 nm band is taken as the input of the whole network, and the stacked weighted self-encoder reduces the dimensionality of this 288-dimensional vector; the weights and biases of each weighted self-encoder layer are updated by layer-by-layer pre-training and a gradient descent algorithm, and the three weighted self-encoder layers have 144, 72 and 36 neurons respectively and use sigmoid transfer functions;
(3) performing supervised adjustment of the network weights of the stacked weighted self-encoder using a two-layer artificial neural network (ANN) combined with the BP algorithm; the two artificial neural network layers have 18 and 4 neurons respectively and use sigmoid and softmax transfer functions, and together with the three-layer weighted self-encoder they form a deep neural network; the network weights of the stacked weighted self-encoder are adjusted under supervision by training with pre-labeled data in combination with the BP algorithm;
(4) after training, taking the dimension-reduced high-order features as the input of an extreme learning machine and optimizing the weights and biases of the extreme learning machine with a particle swarm optimization algorithm; the extreme learning machine takes the 36-dimensional high-order features produced by the stacked weighted self-encoder as input and comprises a single hidden layer with 20 neurons and a sigmoid transfer function; the weights and biases of the extreme learning machine are taken as the particles of the particle swarm optimization algorithm, and the particle length is D = k(n + 1), where k is the number of hidden-layer nodes, k = 20, and n is the input dimension, n = 36;
the particle swarm optimization extreme learning machine comprises the following specific steps:
1) particle swarm optimization (PSO) initialization: randomly generate m groups of particles, where θ_i is the i-th particle, θ_i = [w_11^i, w_12^i, ..., w_1k^i, w_21^i, w_22^i, ..., w_2k^i, ..., w_n1^i, w_n2^i, ..., w_nk^i, b_1^i, b_2^i, ..., b_k^i], and each w and b is a random number in [-1, 1];
2) particle swarm optimization (PSO) parameter selection: the population size m = 20, the number of iterations t_max = 100, and the acceleration coefficients c1 = c2 = 2;
the inertia weight is updated dynamically as ω_t = (ω_ini - ω_end)(t_max - t)/t_max + ω_end,
where: ω_ini is the initial inertia weight, taken as ω_ini = 0.9;
ω_end is the inertia weight at the maximum number of iterations, ω_end = 0.4;
t_max is the maximum number of iterations;
t is the current iteration number;
3) the high-order features output by the stacked weighted self-encoder are taken as the input of the extreme learning machine, the classification accuracy of the extreme learning machine is used as the fitness function, the fitness value of each particle is calculated, and the individual best and global best of the particles are obtained;
4) the velocity and position of each particle are updated;
5) when the maximum number of iterations is reached, the optimization terminates and the best position is saved as the parameters of the extreme learning machine;
(5) processing the dimension-reduced 36-dimensional high-order features with the optimized extreme learning machine to classify the hyperspectral image, thereby identifying the mulching film in the seed cotton.
2. The seed cotton mulching film online identification method based on hyperspectral imaging and deep learning according to claim 1, characterized in that: in step (1), the hyperspectral imager acquires reflection spectrum images of the seed cotton and mulching film over 1000 nm-2500 nm at a spectral interval of 5.6 nm, giving 288 spectral bands of data.
CN201910345604.2A 2019-04-26 2019-04-26 Seed cotton mulching film online identification method based on hyperspectral imaging and deep learning Active CN110084194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910345604.2A CN110084194B (en) 2019-04-26 2019-04-26 Seed cotton mulching film online identification method based on hyperspectral imaging and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910345604.2A CN110084194B (en) 2019-04-26 2019-04-26 Seed cotton mulching film online identification method based on hyperspectral imaging and deep learning

Publications (2)

Publication Number Publication Date
CN110084194A CN110084194A (en) 2019-08-02
CN110084194B (en) 2020-07-28

Family

ID=67417082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910345604.2A Active CN110084194B (en) 2019-04-26 2019-04-26 Seed cotton mulching film online identification method based on hyperspectral imaging and deep learning

Country Status (1)

Country Link
CN (1) CN110084194B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110534009A (en) * 2019-09-05 2019-12-03 北京青橙创客教育科技有限公司 A kind of unmanned course teaching aid of artificial intelligence
CN111767943B (en) * 2020-05-20 2024-06-11 北京简巨科技有限公司 Mulch film identification method and device, electronic equipment and storage medium
CN112001066B (en) * 2020-07-30 2022-11-04 四川大学 Deep learning-based method for calculating limit transmission capacity
CN114882291B (en) * 2022-05-31 2023-06-06 南京林业大学 Seed cotton mulching film identification and classification method based on hyperspectral image pixel block machine learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203688447U (en) * 2014-01-15 2014-07-02 塔里木大学 Hyperspectral-technology-based cottonseed grade instrument
WO2017197626A1 (en) * 2016-05-19 2017-11-23 江南大学 Extreme learning machine method for improving artificial bee colony optimization

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203688447U (en) * 2014-01-15 2014-07-02 塔里木大学 Hyperspectral-technology-based cottonseed grade instrument
WO2017197626A1 (en) * 2016-05-19 2017-11-23 江南大学 Extreme learning machine method for improving artificial bee colony optimization

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
:"基于高光谱成像技术结合SAE-ELM方法的苹果硬度";饶利波 等;《激光与光电子学进展》;20190108;第56卷(第11期);摘要、第2-3节 *
"Deep Learning-Based Feature Representation and Its Application for Soft Sensor Modeling With Variable-Wise Weighted SAE";Xiaofeng Yuan等;《IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS》;20180731;第14卷(第7期);摘要 *
"基于粒子群优化的核极限学习机模型的风电功率区间预测方法";杨锡运 等;《中国电机工程学报》;20150930;第35卷;摘要、第1.2-1.3节 *

Also Published As

Publication number Publication date
CN110084194A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN110084194B (en) Seed cotton mulching film online identification method based on hyperspectral imaging and deep learning
Altaheri et al. Date fruit classification for robotic harvesting in a natural environment using deep learning
Patil et al. Rice-fusion: A multimodality data fusion framework for rice disease diagnosis
Ni et al. Online sorting of the film on cotton based on deep learning and hyperspectral imaging
CN109784204B (en) Method for identifying and extracting main fruit stalks of stacked cluster fruits for parallel robot
Baser et al. Tomconv: An improved cnn model for diagnosis of diseases in tomato plant leaves
Singh et al. Classification of wheat seeds using image processing and fuzzy clustered random forest
Pathak et al. Classification of fruits using convolutional neural network and transfer learning models
CN113221913A (en) Agriculture and forestry disease and pest fine-grained identification method and device based on Gaussian probability decision-level fusion
Luttrell et al. Facial recognition via transfer learning: fine-tuning Keras_vggface
Lee et al. Prediction of defect coffee beans using CNN
Nihar et al. Plant disease detection through the implementation of diversified and modified neural network algorithms
Karthiga et al. A Deep Learning Approach to classify the Honeybee Species and health identification
Wakhare et al. Using image processing and deep learning techniques detect and identify pomegranate leaf diseases
Singh et al. Feature selection and classification improvement of Kinnow using SVM classifier
Qiao et al. Rotation is all you need: Cross dimensional residual interaction for hyperspectral image classification
Chougui et al. Plant-leaf diseases classification using cnn, cbam and vision transformer
Suwarningsih et al. Ide-cabe: chili varieties identification and classification system based leaf
Sood et al. A comparative study of grape crop disease classification using various transfer learning techniques
Rahman et al. A CNN Model-based ensemble approach for Fruit identification using seed
CN111401442A (en) Fruit identification method based on deep learning
Sumari et al. A Precision Agricultural Application: Manggis Fruit Classification Using Hybrid Deep Learning.
Fisher et al. Tentnet: Deep learning tent detection algorithm using a synthetic training approach
CN112560824B (en) Facial expression recognition method based on multi-feature adaptive fusion
Darapaneni et al. Banana Sub-Family Classification and Quality Prediction using Computer Vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190802

Assignee: Nanjing Ruiyan Intelligent Technology Co.,Ltd.

Assignor: NANJING FORESTRY University

Contract record no.: X2020980004057

Denomination of invention: Seed cotton mulching film online identification method based on hyperspectral imaging and deep learning

License type: Common License

Record date: 20200715
