CN114781585A - VEACNet network model for classifying pest images and classification method - Google Patents

VEACNet network model for classifying pest images and classification method

Info

Publication number
CN114781585A
CN114781585A (application CN202210291675.0A)
Authority
CN
China
Prior art keywords
layer
veacnet
branch
network model
pest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210291675.0A
Other languages
Chinese (zh)
Inventor
李亚楠 (Li Yanan)
孙明 (Sun Ming)
祁洋 (Qi Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Institute of Technology
Original Assignee
Wuhan Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Institute of Technology filed Critical Wuhan Institute of Technology
Priority to CN202210291675.0A
Publication of CN114781585A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a VEACNet network model and a classification method for pest image classification. Taking pest images as the processing object, a lightweight feature-fusion neural network model based on asymmetric convolution is constructed; the features of the pest images are extracted automatically by a dual branch of standard convolution and asymmetric convolution, and the category labels of the pest images in a pest image data set are obtained through training and iterative feedback, thereby improving the efficiency and accuracy of pest identification and classification. The model classifies pest images acquired in real time, automatically extracts image features and judges the pest category from the extracted features; the detection results are highly accurate, the hardware cost is low, and pests can be classified effectively under limited hardware conditions.

Description

VEACNet network model for classifying pest images and classification method
Technical Field
The invention belongs to the technical field of agricultural automatic detection and classification, and particularly relates to a VEACNet network model and a classification method for pest image classification.
Background
Pests are among the main obstacles that harm crop growth, retard crop maturation and reduce grain yield. Under the influence of global warming and changes in the ecological environment, pest outbreaks are becoming more frequent and spreading over wider areas, so pest control is crucial throughout the crop growth cycle. Pest identification is an important prerequisite for pest control: fast and accurate pest classification and identification lays a solid foundation for control measures. In summary, the identification and classification of pests is an important aspect of agricultural automatic detection and classification.
With the continuous development of computer vision and machine learning, the level of automation in pest control has improved greatly. Xie et al. published the paper "Agricultural pest image recognition based on a sparse coding pyramid model" in 2016, using a spatial pyramid with sparse coding to recognize agricultural pest images; compared with earlier support vector machine and neural network methods, it improves the recognition accuracy on pest images with background. However, such conventional pest image recognition methods require complicated preprocessing of the pest image, or their performance depends heavily on the hand-selected features and incurs a high computational cost. Cheng et al. published the paper "Pest identification via deep residual learning in complex background" in 2017, using deep residual learning to identify 10 agricultural pests against a complex farmland background with an accuracy far higher than that of a traditional support vector machine or BP neural network. However, such deep neural network algorithms generally require a higher computational cost, which in turn raises the cost of the hardware. Therefore, it is necessary to find a lightweight neural network with a smaller amount of computation and fewer parameters but higher classification accuracy to solve the pest classification problem.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: a VEACNet network model and a classification method for pest image classification are provided for improving efficiency and accuracy of pest identification and classification.
The technical scheme adopted by the invention for solving the technical problem is as follows: the VEACNet network model for pest image classification comprises a first part, a second part and a third part connected in series in sequence. The first part is used for rapidly reducing the feature-map size and comprises a standard convolutional layer and a maximum pooling layer connected in series in sequence. The second part is used for extracting the main features and comprises a first branch and a second branch in parallel; the first branch comprises two standard convolutional layers, the second branch comprises two asymmetric convolutional layers, and the number of channels of the asymmetric convolutional layers of the second branch is the same as that of the standard convolutional layers of the first branch; the matrix sum of the outputs of the first standard convolutional layer of the first branch and the first asymmetric convolutional layer of the second branch is fed to the second standard convolutional layer of the first branch and to the second asymmetric convolutional layer of the second branch respectively, and the matrix sum of the outputs of the second standard convolutional layer of the first branch and the second asymmetric convolutional layer of the second branch is the output of the second part. The third part is used for obtaining doubled image features by enlarging the number of channels and comprises a standard convolutional layer together with a maximum pooling layer and an average pooling layer each connected in series to the output of the standard convolutional layer; the standard convolutional layer of the third part further enlarges the number of channels; the maximum pooling layer and the average pooling layer of the third part respectively pool the output of the standard convolutional layer of the third part to obtain different image features; the results output by the maximum pooling layer and the average pooling layer of the third part are concatenated to obtain the doubled image features, which are the output of the third part. The VEACNet network model further comprises a fully connected layer connected to the output of the third part and used for making predictions from the flattened image features.
According to the scheme, the convolution kernels of the standard convolution layer and the maximum pooling layer of the first part are both 3 × 3, and the strides are both 2.
According to the scheme, each asymmetric convolutional layer of the second part comprises a 3 × 3 convolutional layer, a 3 × 1 convolutional layer and a 1 × 3 convolutional layer connected in series with stride 1, the input of each convolutional layer being the output of the previous convolutional layer; the asymmetric convolutional layer further comprises a 1 × 1 convolutional layer and a 3 × 3 maximum pooling layer, the stride of the 3 × 3 maximum pooling layer being 2; the output of the asymmetric convolutional layer is the matrix sum of the outputs of the 1 × 1, 3 × 3, 3 × 1 and 1 × 3 convolutional layers and the output of the 3 × 3 maximum pooling layer.
According to the scheme, the convolution kernel of the standard convolution layer of the third part is 3 with stride 1; the convolution kernel of the maximum pooling layer of the third part is 3 with stride 2; the convolution kernel of the average pooling layer of the third part is 3 with stride 2.
According to the scheme, the padding of all convolution and pooling operations in the VEACNet network model is set to 0; the number of channels of the first part is (3, 64); the number of channels of the second part is (128, 256, 256); the number of channels of the third part is (256, 512, 1024).
A classification method based on a VEACNet network model for image classification comprises the following steps:
s1: acquiring and screening pest images, and constructing a pest image data set comprising a training set and a testing set;
s2: constructing a VEACNet network model based on asymmetric convolution, wherein the VEACNet network model comprises a first part, a second part and a third part which are sequentially connected in series;
the first part comprises a standard convolution layer and a maximum pooling layer which are sequentially connected in series;
the second part comprises a first branch and a second branch in parallel; the first branch comprises two standard convolution layers, the second branch comprises two asymmetric convolution layers, and the number of channels of the asymmetric convolution layers of the second branch is the same as that of the standard convolution layers of the first branch; the matrix sum of the outputs of the first standard convolutional layer of the first branch and the first asymmetric convolutional layer of the second branch is fed to the second standard convolutional layer of the first branch and to the second asymmetric convolutional layer of the second branch respectively, and the matrix sum of the outputs of the second standard convolutional layer of the first branch and the second asymmetric convolutional layer of the second branch is the output of the second part;
the third part comprises a standard convolutional layer together with a maximum pooling layer and an average pooling layer each connected in series to the output of the standard convolutional layer; the results output by the maximum pooling layer and the average pooling layer of the third part are concatenated to obtain doubled image features, which are the output of the third part;
the VEACNet network model further comprises a full connection layer connected to the output end of the third part;
s3: training the VEACNet network model; initializing model parameters, and inputting the training set into the VEACNet network model for parameter adjustment until the output stabilizes at an accurate classification result;
s4: verifying the VEACNet network model; after training is finished, the pest images in the test set are input into the VEACNet lightweight neural network model, and the accuracy of the VEACNet lightweight neural network model is verified;
s5: and inputting the pest images acquired in real time into the trained VEACNet lightweight neural network model, and outputting a result, namely a classification result.
Further, in step S3, the specific steps include:
the input size of the VEACNet network model is fixed at x; if the size of a pest image is smaller than x, zeros are padded around the D0 pest image so that its size becomes x, and the image is then input into the VEACNet network model;
and if the size of a pest image is larger than x, an image of size x is cut out of the pest image by random cropping and used as the input of the VEACNet network model.
Further, in step S3, the specific steps include:
when training the VEACNet network model, the learning rate is set to 0.0002, the batch size (number of samples learned in parallel) is set to 16, and the number of training epochs is set to 100.
A computer storage medium having stored therein a computer program executable by a computer processor, the computer program performing a classification method.
The invention has the beneficial effects that:
1. The invention discloses a VEACNet network model and a classification method for classifying pest images. Taking pest images as the processing object, a lightweight feature-fusion neural network model based on asymmetric convolution is constructed; the features of the pest images are extracted automatically by a dual branch of standard convolution and asymmetric convolution, and the category labels of the pest images in the pest image data set are obtained through training and iterative feedback, thereby improving the efficiency and accuracy of pest identification and classification.
2. The model of the invention classifies pest images acquired in real time, automatically extracts image features and judges the pest category from the extracted features; the detection results are highly accurate, the hardware cost is low, and pests can be classified effectively under limited hardware conditions.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Fig. 2 is a schematic diagram of a VEACNet network structure according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and the detailed description.
Referring to fig. 1, a pest image classification method based on asymmetric convolution according to an embodiment of the present invention includes the following steps:
s1: a pest image dataset is constructed.
Acquiring pest images, and constructing a pest image data set after screening, wherein the pest image data set specifically comprises a training set and a testing set;
The public data set D0 is used as the constructed pest image data set. D0 contains 4508 images of 40 classes of agricultural pests taken at the experimental area of the Anhui Academy of Agricultural Sciences, China, all of size 200 × 200. Of these, 3156 images (70%) were randomly selected as the training set and the remaining 1352 images (30%) as the test set.
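As a minimal sketch (not part of the patent) of how such a 70/30 split might be prepared, the snippet below assumes the D0 images are stored one folder per pest class under a hypothetical directory D0/ and uses PyTorch/torchvision; the fixed random seed and the batch size of 16 are illustrative choices.

```python
# Hypothetical sketch: building the 70% / 30% split described above.
# Assumes D0 is stored as one sub-folder per pest class under "D0/".
import torch
from torchvision import datasets, transforms

dataset = datasets.ImageFolder("D0", transform=transforms.ToTensor())

n_train = int(0.7 * len(dataset))          # 3156 images in the text
n_test = len(dataset) - n_train            # 1352 images in the text
train_set, test_set = torch.utils.data.random_split(
    dataset, [n_train, n_test], generator=torch.Generator().manual_seed(0)
)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=16, shuffle=False)
```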
S2: constructing a VEACNet lightweight neural network model based on asymmetric convolution;
referring to fig. 2, the VEACNet lightweight neural network model is composed of a standard convolutional layer, a pooling layer, an asymmetric convolutional layer, and a full-link layer, and mainly includes three parts:
the first part of the VEACNet consists of a standard convolution layer and a maximum pooling layer for rapidly reducing the feature map size, the convolution kernels of both the standard convolution and the maximum pooling layer being 3 × 3 and steps being 2;
the second part of the VEACNet is used to extract the main features. It contains two branches, one branch consisting of two standard convolutional layers and the other consisting of two asymmetric convolutional layers, and the output of the second part is the matrix sum of the two branches. For matrix and operation, the number of channels in the asymmetric convolutional layer should be the same as the number of channels in the standard convolutional layer. The asymmetric convolution layer mainly comprises a 3 x 3 convolution layer, a 3 x 1 convolution layer and a 1 x 3 convolution layer in a series-parallel connection mode, namely the input of each convolution layer is the output of the last convolution layer, and the output of the asymmetric convolution layer is the matrix sum of the three convolution layers.
The third part of VEACNet obtains doubled image features by enlarging the number of channels. It consists of one standard convolution layer, one maximum pooling layer and one average pooling layer. The convolution kernel size of the standard convolution is 3 with stride 1, and it is used to further expand the number of channels. The maximum pooling layer and the average pooling layer respectively pool the output of the standard convolutional layer to obtain different image features, and the results of the two pooling layers are then concatenated to obtain doubled image features. Finally, the feature map is flattened and the fully connected layer is used for prediction.
Further, the padding of all convolution and pooling operations in VEACNet is set to 0, and the channel numbers are (3, 64) for the first part, (128, 256, 256) for the second part and (256, 512, 1024) for the third part.
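Under the same assumptions as the earlier sketches, the third part and the assembled model could look roughly as follows: the 3 × 3 convolution expands 256 to 512 channels, the 3 × 3 max-pooled and average-pooled results (stride 2) are concatenated along the channel dimension to give 1024 channels, and the flattened features go to a fully connected layer. nn.LazyLinear (which infers the flattened feature size on the first forward pass) and the default of 40 classes are conveniences for illustration; VEACNetPart1 and VEACNetPart2 are the modules sketched above.

```python
import torch
import torch.nn as nn

class VEACNetPart3(nn.Module):
    """Third part (sketch): expand channels, then concatenate max- and average-pooled features."""
    def __init__(self, in_ch=256, out_ch=512):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=0)
        self.max_pool = nn.MaxPool2d(kernel_size=3, stride=2)
        self.avg_pool = nn.AvgPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        y = self.conv(x)
        # Concatenating the two pooling results doubles the channel count (512 -> 1024).
        return torch.cat([self.max_pool(y), self.avg_pool(y)], dim=1)

class VEACNet(nn.Module):
    """Assembled sketch of the three parts plus the fully connected prediction layer."""
    def __init__(self, num_classes=40):
        super().__init__()
        self.part1 = VEACNetPart1(3, 64)       # sketched after the first part above
        self.part2 = VEACNetPart2(64, 128, 256)  # sketched after the second part above
        self.part3 = VEACNetPart3(256, 512)
        # LazyLinear infers the flattened feature size on the first forward pass.
        self.fc = nn.LazyLinear(num_classes)

    def forward(self, x):
        x = self.part3(self.part2(self.part1(x)))
        return self.fc(torch.flatten(x, 1))
```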
S3: and (5) training the model. Initializing model parameters, inputting the training set into a VEACNet lightweight neural network model for parameter adjustment until the output result is stable as an accurate classification result;
In this embodiment, since the input of the VEACNet network is fixed at 224 × 224, zeros are padded around each D0 pest image to bring it to 224 × 224 before it is input to the VEACNet lightweight neural network model; if the size of a pest image is larger than 224 × 224, a 224 × 224 image is cut out of it by random cropping and used as the input of the network model.
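A possible torchvision implementation of this rule is sketched below; transforms.RandomCrop with pad_if_needed=True zero-pads images smaller than 224 × 224 and randomly crops larger ones, which matches the described behaviour, although the patent does not specify the exact mechanism used.

```python
from torchvision import transforms

# Zero-pad images smaller than 224x224 and randomly crop larger ones to 224x224.
input_transform = transforms.Compose([
    transforms.RandomCrop(224, pad_if_needed=True, fill=0),
    transforms.ToTensor(),
])
```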
During training, the learning rate is set to 0.0002, the batch size (number of samples learned in parallel) is set to 16, and the number of training epochs is set to 100.
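A minimal training-loop sketch with the stated hyper-parameters follows; the Adam optimizer and the cross-entropy loss are assumptions (the patent does not name them), and VEACNet and train_loader refer to the sketches above.

```python
import torch
import torch.nn as nn

model = VEACNet(num_classes=40)
criterion = nn.CrossEntropyLoss()                       # loss choice is an assumption
optimizer = torch.optim.Adam(model.parameters(), lr=0.0002)

for epoch in range(100):                                # 100 epochs as stated
    for images, labels in train_loader:                 # batch size 16 as stated
        # train_loader should yield 224x224 tensors (see the transform sketched in S3).
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```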
S4: and (5) verifying the model. And inputting the pest images concentrated by the test into the VEACNet lightweight neural network model after the training is finished, and verifying the accuracy of the VEACNet lightweight neural network model.
S5: and inputting the pest image acquired in real time into the trained VEACNet lightweight neural network model, and outputting the result, namely the classification result.
The above embodiments are only used for illustrating the design idea and features of the present invention, and the purpose of the present invention is to enable those skilled in the art to understand the content of the present invention and implement it accordingly, and the protection scope of the present invention is not limited to the above embodiments. Therefore, all equivalent changes or modifications based on the principles and design concepts disclosed herein are intended to be included within the scope of the present invention.

Claims (9)

1. A VEACNet network model for pest image classification, characterized by: comprises a first part, a second part and a third part which are connected in series in sequence;
the first part is used for rapidly reducing the size of a feature map and comprises a standard convolution layer and a maximum pooling layer which are connected in series in sequence;
the second part is used for extracting main features and comprises a first branch and a second branch which are parallel; the first branch comprises two standard convolutional layers, the second branch comprises two asymmetric convolutional layers, and the number of channels of the asymmetric convolutional layers of the second branch is the same as that of the standard convolutional layers of the first branch; the matrix sum of the outputs of the first standard convolutional layer of the first branch and the first asymmetric convolutional layer of the second branch is fed to the second standard convolutional layer of the first branch and to the second asymmetric convolutional layer of the second branch respectively, and the matrix sum of the outputs of the second standard convolutional layer of the first branch and the second asymmetric convolutional layer of the second branch is the output of the second part;
the third part is used for obtaining doubled image features by enlarging the number of channels and comprises a standard convolution layer together with a maximum pooling layer and an average pooling layer each connected in series to the output of the standard convolution layer; the standard convolution layer of the third part is used for further enlarging the number of channels; the maximum pooling layer and the average pooling layer of the third part are respectively used for pooling the output of the standard convolution layer of the third part to obtain different image features; the results output by the maximum pooling layer and the average pooling layer of the third part are concatenated to obtain the doubled image features, which are the output of the third part;
the VEACNet network model further comprises a full connection layer connected to the output end of the third part and used for predicting and outputting the flattened image characteristics.
2. The VEACNet network model for pest image classification as claimed in claim 1, wherein: the convolution kernels of the standard convolution layer and the maximum pooling layer of the first part are both 3 × 3, and the strides are both 2.
3. The VEACNet network model for pest image classification as claimed in claim 1, wherein: each asymmetric convolutional layer of the second part comprises a 3 × 3 convolutional layer, a 3 × 1 convolutional layer and a 1 × 3 convolutional layer connected in series with stride 1, the input of each convolutional layer being the output of the previous convolutional layer; the asymmetric convolutional layer further comprises a 1 × 1 convolutional layer and a 3 × 3 maximum pooling layer, the stride of the 3 × 3 maximum pooling layer being 2; the output of the asymmetric convolutional layer is the matrix sum of the outputs of the 1 × 1, 3 × 3, 3 × 1 and 1 × 3 convolutional layers and the output of the 3 × 3 maximum pooling layer.
4. The VEACNet network model for pest image classification as claimed in claim 1, wherein: the convolution kernel of the standard convolution layer of the third part is 3 with stride 1;
the convolution kernel of the maximum pooling layer of the third part is 3 with stride 2;
the convolution kernel of the average pooling layer of the third part is 3 with stride 2.
5. The VEACNet network model for pest image classification as claimed in claim 1, wherein: the padding of all convolution and pooling operations in the VEACNet network model is set to 0;
the number of channels of the first part is (3, 64);
the number of channels of the second part is (128, 256, 256);
the number of channels in the third portion is (256, 512, 1024).
6. A classification method based on the VEACNet network model for image classification of any of claims 1 to 5, characterized in that: the method comprises the following steps:
s1: acquiring and screening pest images, and constructing a pest image data set comprising a training set and a testing set;
s2: constructing a VEACNet network model based on asymmetric convolution, wherein the VEACNet network model comprises a first part, a second part and a third part which are sequentially connected in series;
the first part comprises a standard convolution layer and a maximum pooling layer which are sequentially connected in series;
the second part comprises a first branch and a second branch in parallel; the first branch comprises two standard convolution layers, the second branch comprises two asymmetric convolution layers, and the number of channels of the asymmetric convolution layers of the second branch is the same as that of the standard convolution layers of the first branch; the matrix sum of the outputs of the first standard convolutional layer of the first branch and the first asymmetric convolutional layer of the second branch is fed to the second standard convolutional layer of the first branch and to the second asymmetric convolutional layer of the second branch respectively, and the matrix sum of the outputs of the second standard convolutional layer of the first branch and the second asymmetric convolutional layer of the second branch is the output of the second part;
the third part comprises a standard convolution layer together with a maximum pooling layer and an average pooling layer each connected in series to the output of the standard convolution layer; the results output by the maximum pooling layer and the average pooling layer of the third part are concatenated to obtain doubled image features, which are the output of the third part;
the VEACNet network model further comprises a full connection layer connected to the output end of the third part;
s3: training the VEACNet network model; initializing model parameters, and inputting the training set into the VEACNet network model for parameter adjustment until the output stabilizes at an accurate classification result;
s4: verifying a VEACNet network model; after training is finished, pest images in the test set are input into the VEACNet lightweight neural network model, and the accuracy of the VEACNet lightweight neural network model is verified;
s5: and inputting the pest images acquired in real time into the trained VEACNet lightweight neural network model, and outputting a result, namely a classification result.
7. The classification method according to claim 6, characterized in that: in the step S3, the specific steps are as follows:
the input size of the VEACNet network model is fixed at x; if the size of a pest image is smaller than x, zeros are padded around the D0 pest image so that its size becomes x, and the image is then input into the VEACNet network model;
and if the size of a pest image is larger than x, an image of size x is cut out of the pest image by random cropping and used as the input of the VEACNet network model.
8. The classification method according to claim 6, characterized in that: in the step S3, the specific steps are as follows:
when training the VEACNet network model, the learning rate is set to 0.0002, the batch size (number of samples learned in parallel) is set to 16, and the number of training epochs is set to 100.
9. A computer storage medium, characterized in that: stored therein is a computer program executable by a computer processor, the computer program performing the classification method as claimed in any one of claims 6 to 8.
CN202210291675.0A 2022-03-23 2022-03-23 VEACNet network model for classifying pest images and classification method Pending CN114781585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210291675.0A CN114781585A (en) 2022-03-23 2022-03-23 VEACNet network model for classifying pest images and classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210291675.0A CN114781585A (en) 2022-03-23 2022-03-23 VEACNet network model for classifying pest images and classification method

Publications (1)

Publication Number Publication Date
CN114781585A (en) 2022-07-22

Family

ID=82424841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210291675.0A Pending CN114781585A (en) 2022-03-23 2022-03-23 VEACNet network model for classifying pest images and classification method

Country Status (1)

Country Link
CN (1) CN114781585A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination