CN112288022A - Grain insect identification method and identification system based on SSD-algorithm feature fusion - Google Patents

Grain insect identification method and identification system based on SSD-algorithm feature fusion

Info

Publication number
CN112288022A
CN112288022A
Authority
CN
China
Prior art keywords
grain
neural network
feature
network model
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011205968.XA
Other languages
Chinese (zh)
Other versions
CN112288022B (en)
Inventor
吕宗旺
金会芳
孙福艳
甄彤
陈丽瑛
邱帅欣
桂崇文
唐浩然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University of Technology filed Critical Henan University of Technology
Priority to CN202011205968.XA priority Critical patent/CN112288022B/en
Publication of CN112288022A publication Critical patent/CN112288022A/en
Application granted granted Critical
Publication of CN112288022B publication Critical patent/CN112288022B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a grain insect identification method and system based on SSD-algorithm feature fusion. The identification method comprises the following steps: establishing a data set; establishing a neural network model and training it with the data set to obtain a trained neural network model; and collecting a grain insect image to be identified, inputting it into the trained neural network model, and detecting the species and positions of the grain insects. In the technical scheme provided by the invention, the output feature maps of convolution layers conv4_3 and conv5_3 in the neural network model are fused, block11, which is unfavorable for small-target detection, is deleted, and prior boxes suited to grain insects are obtained with a K-means clustering algorithm, improving on the default prior boxes of the original SSD. This facilitates the identification and localization of grain insects and can solve the problem of poor accuracy in grain insect identification in the prior art.

Description

Grain insect identification method and identification system based on SSD-algorithm feature fusion
Technical Field
The invention relates to the technical field of grain insect identification, in particular to a grain insect identification method and system based on SSD-algorithm feature fusion.
Background
Grain and oil foodstuffs are damaged by storage pests throughout production, processing and storage. Grain insects not only eat the grain and cause quantitative losses; their metabolic activity also heats the grain, intensifies the activity of grain microorganisms, causes the grain to rot and deteriorate, and may induce the production of microbial toxins. In addition, pest corpses and excrement contaminate the grain, lowering its sanitary quality and endangering the health of consumers.
With the development of computer technology, the informatization requirements of the grain industry keep rising. The introduction of intelligent storage technology has made grain quality monitoring in the storage link increasingly efficient. Grain insect detection is an important part of grain quality monitoring, and image-based grain insect detection has gradually become a research hotspot in recent years. Grain insect image detection falls into two categories: traditional digital image processing and deep-learning image processing. To meet the online monitoring requirements of intelligent storage, the precision and real-time performance of grain insect detection must be further improved. Traditional image processing techniques have gradually been phased out because of accuracy and real-time limitations. Target detection algorithms from deep-learning image processing can locate detection targets accurately, and single-stage detectors, through continuous optimization of processing speed, can also meet real-time requirements.
Single-stage target detection algorithms, represented by YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector), are known for their high detection speed and are currently applied in many fields, such as traffic sign detection, unmanned aerial vehicle target detection, remote sensing target detection and pedestrian video monitoring. However, because grain insects are small, prior-art detection methods produce inaccurate identification results for them.
Disclosure of Invention
The invention aims to provide a grain insect identification method and system based on SSD-algorithm feature fusion, so as to solve the problem of inaccurate grain insect identification in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
a characteristic fusion grain insect identification method based on an SSD algorithm comprises the following steps:
the method comprises the following steps: establishing a data set;
step two: establishing a neural network model, and training the neural network model by adopting training data of a data set to obtain a trained neural network model;
step three: collecting grain insect images to be identified, inputting the grain insect images into the trained neural network model, and detecting the types and positions of the grain insects;
the training data in the data set comprises a plurality of images of the grain insects, each image only has an image of one grain insect, and the resolution of each image is 640 multiplied by 480;
the characteristic diagrams of the neural network model comprise block12, block7, block8, block9 and block10, wherein block12 is formed by fusing characteristic diagrams output by convolution layers conv4_3 and conv5_3 through TOP-DOWN modules, block7, block8, block9 and block10 are characteristic diagrams output by convolution layers conv7, conv8_2, conv9_2 and conv10_2 respectively, and the prior frames of the characteristic diagrams are obtained by clustering through a K-means algorithm.
Further, the method for fusing the feature maps output by convolution layers conv4_3 and conv5_3 with the TOP-DOWN module is as follows:
performing one convolution and one deconvolution on the feature map output by convolution layer conv5_3, and then applying a BN module;
performing two convolution operations on the feature map output by the shallow convolution layer conv4_3, and then applying a BN module;
and fusing the feature map derived from convolution layer conv5_3 with the feature map derived from convolution layer conv4_3, and outputting the final fused feature map through a ReLU activation function.
Furthermore, the feature map derived from convolution layer conv5_3 and the feature map derived from convolution layer conv4_3 are fused by element-wise (point-wise) multiplication.
Further, the method for computing the prior box sizes of each feature map with the K-means algorithm comprises the following steps:
labeling the grain insects in the data set images, and initializing m cluster centres;
computing the distance between every ground-truth bounding box (GT) in the data set and each cluster centre, taking the nearest cluster centre as that GT's index, and grouping the boxes with the same index into the same cluster;
when two consecutive clustering passes give the same result, judging that clustering is finished;
and taking the minimum bounding box of each cluster as a prior box.
Further, the grain insects include, but are not limited to, the maize weevil, red flour beetle, lesser grain borer, flat grain beetle and Indian meal moth.
A grain insect identification system based on SSD-algorithm feature fusion comprises a processor and a memory, the memory storing a computer program for execution on the processor; when the processor executes the computer program, the above grain insect identification method based on SSD-algorithm feature fusion is realized, comprising the following steps:
the method comprises the following steps: establishing a data set;
step two: establishing a neural network model, and training the neural network model by adopting training data of a data set to obtain a trained neural network model;
step three: collecting grain insect images to be identified, inputting the grain insect images into the trained neural network model, and detecting the types and positions of the grain insects;
the training data in the data set comprises a plurality of images of the grain insects, each image only has an image of one grain insect, and the resolution of each image is 640 multiplied by 480;
the characteristic diagrams of the neural network model comprise block12, block7, block8, block9 and block10, wherein block12 is formed by fusing characteristic diagrams output by convolution layers conv4_3 and conv5_3 through TOP-DOWN modules, block7, block8, block9 and block10 are characteristic diagrams output by convolution layers conv7, conv8_2, conv9_2 and conv10_2 respectively, and the prior frames of the characteristic diagrams are obtained by clustering through a K-means algorithm.
Further, the method for fusing the feature maps output by convolution layers conv4_3 and conv5_3 with the TOP-DOWN module is as follows:
performing one convolution and one deconvolution on the feature map output by convolution layer conv5_3, and then applying a BN module;
performing two convolution operations on the feature map output by the shallow convolution layer conv4_3, and then applying a BN module;
and fusing the feature map derived from convolution layer conv5_3 with the feature map derived from convolution layer conv4_3, and outputting the final fused feature map through a ReLU activation function.
Furthermore, the feature map derived from convolution layer conv5_3 and the feature map derived from convolution layer conv4_3 are fused by element-wise (point-wise) multiplication.
Further, the method for computing the prior box sizes of each feature map with the K-means algorithm comprises the following steps:
labeling the grain insects in the data set images, and initializing m cluster centres;
computing the distance between every ground-truth bounding box (GT) in the data set and each cluster centre, taking the nearest cluster centre as that GT's index, and grouping the boxes with the same index into the same cluster;
when two consecutive clustering passes give the same result, judging that clustering is finished;
and taking the minimum bounding box of each cluster as a prior box.
Further, the grain insects include, but are not limited to, the maize weevil, red flour beetle, lesser grain borer, flat grain beetle and Indian meal moth.
The beneficial effects of the invention are as follows: in the technical scheme provided by the invention, the feature maps output by convolution layers conv4_3 and conv5_3 in the neural network model are fused, block11, which is unfavorable for small-target detection, is deleted, and prior boxes suited to grain insects are obtained with a K-means clustering algorithm, improving on the default prior boxes of the original SSD algorithm and facilitating the identification and localization of grain insects. The technical scheme provided by the invention can therefore solve the problem of poor accuracy in grain insect identification in the prior art.
Drawings
FIG. 1 is a flow chart of a method for identifying grain insects based on feature fusion of an SSD algorithm in an embodiment of the method of the present invention;
FIG. 2 is a schematic diagram of a neural network model according to an embodiment of the method of the present invention;
FIG. 3 is a schematic diagram illustrating a process of fusing a shallow feature map and a deep feature map according to an embodiment of the method of the present invention.
Detailed Description
The invention aims to provide a grain insect identification method and system based on SSD-algorithm feature fusion.
The method embodiment is as follows:
the embodiment provides a method for identifying grain insects based on feature fusion of an SSD algorithm, the flow of which is shown in FIG. 1, and the method comprises the following steps:
the method comprises the following steps: a data set is established.
The training data in the data set includes a variety of images of grain insects including, but not limited to, corn weevils, tribolium castaneum, ips typographus, and Indian meal moth. When the image is collected, the live body imago is used for shooting, and the diversity of collected samples can be ensured because the live body imago is more active. When the images are collected, the videos of the grain insects are shot firstly, then screenshot is carried out on the videos, and the data set images are manufactured.
The training data in the data set of this example contained 1998 images each having a size of 640X 480, 3-10 grain insects per image, and only one grain insect per image. 1438 images in the data set were used to train the neural network model, 360 images were used to validate the neural network model, and 200 were used to test the neural network model.
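As a minimal illustration of this split, the following sketch assumes the data set is held as a list of image file paths; the helper name, the shuffling and the seed are illustrative and not part of the patent:

    import random

    def split_dataset(image_paths, seed=0):
        # Reproduce the 1438/360/200 train/validation/test split of the
        # 1998-image data set described above (shuffling is an assumption).
        paths = list(image_paths)
        random.Random(seed).shuffle(paths)
        return paths[:1438], paths[1438:1798], paths[1798:]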
Step two: a neural network model is established.
The neural network model established in this embodiment uses the VGG16 network as its feature extraction network; its structure is shown in FIG. 2, where the circular mark denotes the TOP-DOWN module used to fuse the feature maps output by conv4_3 and conv5_3.
The neural network model in this embodiment comprises block12, block7, block8, block9 and block10, wherein block12 is formed by fusing the feature maps output by convolution layers conv4_3 and conv5_3 through the TOP-DOWN module, and block7, block8, block9 and block10 are the feature maps output by convolution layers conv7, conv8_2, conv9_2 and conv10_2, respectively. The sizes of the prior boxes of the feature maps are obtained by clustering with a K-means algorithm.
Step three: the established neural network model is trained with the training data in the data set to obtain the trained neural network model.
When the established neural network model is trained with the training data in the data set, the images of the training data serve as the input and the species and positions of the grain insects in the images serve as the output, yielding the trained neural network model.
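A generic PyTorch-style training loop consistent with this description is sketched below; the MultiBox-style criterion, the SGD settings and the epoch count are assumptions, since the patent does not disclose training hyper-parameters:

    import torch

    def train(model, loader, criterion, epochs=120, lr=1e-3):
        # Supervised training: images in, species labels plus box targets out.
        # criterion is assumed to be an SSD-style MultiBox loss combining
        # localization and classification terms.
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        model.train()
        for _ in range(epochs):
            for images, targets in loader:
                loss = criterion(model(images), targets)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()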
Step four: images of the grain insects to be identified are collected and input into the trained neural network model, which identifies the species and number of grain insects in the images.
In this embodiment, a TOP-DOWN module is used to fuse the shallow feature map output by conv4_3 with the deep feature map output by conv5_3. The fusion process is shown in FIG. 3 and comprises the following steps:
Step 1.1: one convolution and one deconvolution are performed on the deep feature map to double its spatial size, and a BN module is then applied; convolution and deconvolution are standard operations in deep learning;
Step 1.2: two convolution operations are performed on the shallow feature map, and a BN (Batch Normalization) module is then applied;
Step 1.3: the deep feature map and the shallow feature map are fused; in this embodiment, they are fused by element-wise multiplication of the shallow and deep feature maps in each channel;
Step 1.4: the element-wise product is activated with a ReLU activation function to obtain the final fused feature map; the ReLU activation function introduces non-linearity into the convolutional neural network.
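A minimal PyTorch sketch of steps 1.1-1.4 follows. The channel counts, kernel sizes and padding are assumptions chosen so that the conv5_3 map (19 × 19 for a standard 300 × 300 SSD input) is upsampled to the conv4_3 size (38 × 38); the patent fixes only the sequence of operations:

    import torch
    import torch.nn as nn

    class TopDownFusion(nn.Module):
        # Sketch of the TOP-DOWN module (steps 1.1-1.4); sizes are assumptions.
        def __init__(self, shallow_ch=512, deep_ch=512, out_ch=512):
            super().__init__()
            # Step 1.1: one convolution and one deconvolution (doubling the
            # spatial size, e.g. 19x19 -> 38x38), then Batch Normalization.
            self.deep_branch = nn.Sequential(
                nn.Conv2d(deep_ch, out_ch, kernel_size=3, padding=1),
                nn.ConvTranspose2d(out_ch, out_ch, kernel_size=2, stride=2),
                nn.BatchNorm2d(out_ch),
            )
            # Step 1.2: two convolutions on the shallow map, then BN.
            self.shallow_branch = nn.Sequential(
                nn.Conv2d(shallow_ch, out_ch, kernel_size=3, padding=1),
                nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
            )
            self.relu = nn.ReLU(inplace=True)

        def forward(self, shallow, deep):
            # Step 1.3: element-wise product of the two maps in each channel.
            fused = self.shallow_branch(shallow) * self.deep_branch(deep)
            # Step 1.4: ReLU yields the final fused feature map (block12).
            return self.relu(fused)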
In this embodiment, 3 × 3 × C1 convolution kernels are used for the convolution operations in the fusion of the deep and shallow feature maps.
In this embodiment, K-means clustering is performed over the data set to determine the aspect ratio of each feature map's prior boxes, so that the established neural network model can locate the grain insects more easily and accurately.
The process of obtaining the prior boxes with K-means comprises the following steps:
Step 2.1: the grain insects in the data set images are labeled, and m cluster centres are initialized, i.e. m bounding boxes are randomly selected from all GT (the manually labeled ground-truth bounding boxes), where m is a positive integer greater than 1;
Step 2.2: the distance between every GT in the data set and each cluster centre is computed, the nearest cluster centre is selected and its index stored; when two consecutive clustering passes give the same result, clustering is judged to be finished;
Step 2.3: the labeled boxes sharing a cluster centre are treated as one class, and the minimum bounding box of each class is taken as the prior box.
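A minimal NumPy sketch of steps 2.1-2.3 follows, clustering the (width, height) of the labeled GT boxes; the 1 - IoU distance and the mean-based centre update are assumptions, since the patent does not name the distance metric:

    import numpy as np

    def kmeans_prior_boxes(gt_wh, m=6, seed=0):
        # gt_wh: (N, 2) widths/heights of the manually labeled GT boxes.
        gt_wh = np.asarray(gt_wh, dtype=float)
        rng = np.random.default_rng(seed)
        # Step 2.1: m centres drawn at random from the GT boxes.
        centers = gt_wh[rng.choice(len(gt_wh), size=m, replace=False)].copy()
        prev = None
        while True:
            # Step 2.2: distance of every GT box to every centre (1 - IoU,
            # with boxes anchored at a common corner).
            inter = (np.minimum(gt_wh[:, None, 0], centers[None, :, 0])
                     * np.minimum(gt_wh[:, None, 1], centers[None, :, 1]))
            union = (gt_wh[:, 0] * gt_wh[:, 1])[:, None] \
                    + centers[:, 0] * centers[:, 1] - inter
            idx = (1.0 - inter / union).argmin(axis=1)  # nearest centre index
            if prev is not None and np.array_equal(idx, prev):
                break                  # two identical passes: clustering done
            prev = idx
            for k in range(m):         # update each centre to its cluster mean
                if np.any(idx == k):
                    centers[k] = gt_wh[idx == k].mean(axis=0)
        # Step 2.3: the smallest box enclosing each cluster is its prior box.
        return np.array([gt_wh[idx == k].max(axis=0)
                         for k in range(m) if np.any(idx == k)])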
According to the clustering results, the aspect ratios of the prior boxes corresponding to the feature map output by each layer are shown in Table 1.
TABLE 1

Feature map   Prior box aspect ratios
Block12       [1, 1′, 2, 1/2, 1/4, 1/3]
Block7        [1, 1′, 2, 1/2, 1/4, 1/3]
Block8        [1, 1′, 2, 1/2, 1/4, 1/3]
Block9        [1, 1′, 2, 1/2, 1/4, 1/3]
Block10       [1, 1′, 2, 1/2]
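The entry 1′ in Table 1 follows the usual SSD convention: an extra unit-ratio box whose scale is the geometric mean of the current layer's scale and the next layer's. The following sketch shows how a layer's ratio list expands into prior-box (width, height) pairs; the scale values in the usage comment are assumptions:

    import math

    def prior_wh(s_k, s_k1, ratios):
        # s_k: this layer's prior-box scale; s_k1: the next layer's scale.
        boxes = []
        for ar in ratios:
            if ar == "1'":
                s = math.sqrt(s_k * s_k1)     # the extra 1' box
                boxes.append((s, s))
            else:
                boxes.append((s_k * math.sqrt(ar), s_k / math.sqrt(ar)))
        return boxes

    # e.g. Block12 with assumed scales 0.1 and 0.2:
    # prior_wh(0.1, 0.2, [1, "1'", 2, 1/2, 1/4, 1/3])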
Precision, recall, AP, mAP (mean average precision) and FPS (frames per second) are used in this embodiment to measure the quality of the established neural network model. For all of these metrics, a larger value indicates better detection performance; FPS in particular measures detection speed, a larger value meaning faster detection.
The precision and recall are calculated as follows:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
all detections = TP + FP
all ground truths = FN + TP
where TP (true positives) is the number of positive samples correctly classified as positive, FP (false positives) is the number of negative samples incorrectly classified as positive, and FN (false negatives) is the number of positive samples incorrectly classified as negative.
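Expressed directly in code, the two formulas above are:

    def precision_recall(tp, fp, fn):
        # Precision: the fraction of all detections that are correct.
        # Recall: the fraction of all ground truths that are detected.
        return tp / (tp + fp), tp / (tp + fn)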
Compared with the original SSD algorithm, the optimized SSD model raises the mAP from 88.56% to 96.74%, a substantial improvement. Although the FPS drops from 25 to 21, the requirement of real-time detection is still met. The comparison of the detection results, with per-species AP values in percent, is shown in Table 2; the optimized neural network model greatly improves the precision of grain insect target detection.
TABLE 2

Model                     mAP/%   Red flour beetle   Lesser grain borer   Maize weevil   Flat grain beetle   Indian meal moth   FPS
SSD before optimization   88.56   91.27              90.37                93.43          76.06               91.66              25
Optimized SSD             96.74   95.40              98.63                96.95          95.67               97.06              21
The system embodiment is as follows:
the embodiment provides a characteristic fusion type grain insect recognition system based on an SDD algorithm, which comprises a processor and a memory, wherein the memory is stored with a computer program for being executed on the processor; when the processor executes the computer program, the characteristic fusion grain insect identification method based on the SDD algorithm provided by the embodiment of the method is realized.
The embodiments of the present invention disclosed above are intended merely to help clarify the technical solutions of the present invention, and it is not intended to describe all the details of the invention nor to limit the invention to the specific embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.
Those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A grain insect identification method based on SSD-algorithm feature fusion, characterized by comprising the following steps:
Step one: establishing a data set;
Step two: establishing a neural network model, and training the neural network model with the training data of the data set to obtain a trained neural network model;
Step three: collecting grain insect images to be identified, inputting them into the trained neural network model, and detecting the species and positions of the grain insects;
the training data in the data set comprising a plurality of grain insect images, each image containing only one species of grain insect, and the resolution of each image being 640 × 480;
the feature maps of the neural network model comprising block12, block7, block8, block9 and block10, wherein block12 is formed by fusing the feature maps output by convolution layers conv4_3 and conv5_3 through a TOP-DOWN module; block7, block8, block9 and block10 are the feature maps output by convolution layers conv7, conv8_2, conv9_2 and conv10_2, respectively; and the sizes of the prior boxes of the feature maps are obtained by clustering with a K-means algorithm.
2. The grain insect identification method based on SSD-algorithm feature fusion as claimed in claim 1, characterized in that the method for fusing the feature maps output by convolution layers conv4_3 and conv5_3 with the TOP-DOWN module is as follows:
performing one convolution and one deconvolution on the feature map output by convolution layer conv5_3, and then applying a BN module;
performing two convolution operations on the feature map output by the shallow convolution layer conv4_3, and then applying a BN module;
and fusing the feature map derived from convolution layer conv5_3 with the feature map derived from convolution layer conv4_3, and outputting the final fused feature map through a ReLU activation function.
3. The grain insect identification method based on SSD-algorithm feature fusion as claimed in claim 2, characterized in that the feature map derived from convolution layer conv5_3 and the feature map derived from convolution layer conv4_3 are fused by element-wise multiplication.
4. The grain insect identification method based on SSD-algorithm feature fusion as claimed in claim 1, characterized in that the method for computing the prior box sizes of each feature map with the K-means algorithm comprises the following steps:
labeling the grain insects in the data set images, and initializing m cluster centres;
computing the distance between every ground-truth bounding box (GT) in the data set and each cluster centre, taking the nearest cluster centre as that GT's index, and grouping the boxes with the same index into the same cluster;
when two consecutive clustering passes give the same result, judging that clustering is finished;
and taking the minimum bounding box of each cluster as a prior box.
5. The grain insect identification method based on SSD-algorithm feature fusion as claimed in claim 1, characterized in that the grain insects include, but are not limited to, the maize weevil, red flour beetle, lesser grain borer, flat grain beetle and Indian meal moth.
6. A grain insect identification system based on SSD-algorithm feature fusion, comprising a processor and a memory, the memory storing a computer program for execution on the processor; characterized in that when the processor executes the computer program, a grain insect identification method based on SSD-algorithm feature fusion is realized, comprising the following steps:
Step one: establishing a data set;
Step two: establishing a neural network model, and training the neural network model with the training data of the data set to obtain a trained neural network model;
Step three: collecting grain insect images to be identified, inputting them into the trained neural network model, and detecting the species and positions of the grain insects;
the training data in the data set comprising a plurality of grain insect images, each image containing only one species of grain insect, and the resolution of each image being 640 × 480;
the feature maps of the neural network model comprising block12, block7, block8, block9 and block10, wherein block12 is formed by fusing the feature maps output by convolution layers conv4_3 and conv5_3 through a TOP-DOWN module; block7, block8, block9 and block10 are the feature maps output by convolution layers conv7, conv8_2, conv9_2 and conv10_2, respectively; and the sizes of the prior boxes of the feature maps are obtained by clustering with a K-means algorithm.
7. The grain insect identification system based on SSD-algorithm feature fusion as claimed in claim 6, characterized in that the method for fusing the feature maps output by convolution layers conv4_3 and conv5_3 with the TOP-DOWN module is as follows:
performing one convolution and one deconvolution on the feature map output by convolution layer conv5_3, and then applying a BN module;
performing two convolution operations on the feature map output by the shallow convolution layer conv4_3, and then applying a BN module;
and fusing the feature map derived from convolution layer conv5_3 with the feature map derived from convolution layer conv4_3, and outputting the final fused feature map through a ReLU activation function.
8. The grain insect identification system based on SSD-algorithm feature fusion as claimed in claim 7, characterized in that the feature map derived from convolution layer conv5_3 and the feature map derived from convolution layer conv4_3 are fused by element-wise multiplication.
9. The grain insect identification system based on SSD-algorithm feature fusion as claimed in claim 6, characterized in that the method for computing the prior box sizes of each feature map with the K-means algorithm comprises the following steps:
labeling the grain insects in the data set images, and initializing m cluster centres;
computing the distance between every ground-truth bounding box (GT) in the data set and each cluster centre, taking the nearest cluster centre as that GT's index, and grouping the boxes with the same index into the same cluster;
when two consecutive clustering passes give the same result, judging that clustering is finished;
and taking the minimum bounding box of each cluster as a prior box.
10. The grain insect identification system based on SSD-algorithm feature fusion as claimed in claim 6, characterized in that the grain insects include, but are not limited to, the maize weevil, red flour beetle, lesser grain borer, flat grain beetle and Indian meal moth.
CN202011205968.XA 2020-11-02 2020-11-02 Grain insect identification method and identification system based on SSD-algorithm feature fusion Active CN112288022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011205968.XA CN112288022B (en) 2020-11-02 2020-11-02 Grain insect identification method and identification system based on SSD-algorithm feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011205968.XA CN112288022B (en) 2020-11-02 2020-11-02 Grain insect identification method and identification system based on SSD-algorithm feature fusion

Publications (2)

Publication Number Publication Date
CN112288022A true CN112288022A (en) 2021-01-29
CN112288022B CN112288022B (en) 2022-09-20

Family

ID=74354104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011205968.XA Active CN112288022B (en) Grain insect identification method and identification system based on SSD-algorithm feature fusion

Country Status (1)

Country Link
CN (1) CN112288022B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067105A (en) * 2022-01-12 2022-02-18 安徽高哲信息技术有限公司 Grain density estimation method, storage medium, and grain density estimation apparatus
CN116310658A (en) * 2023-05-17 2023-06-23 中储粮成都储藏研究院有限公司 Method for establishing grain storage pest image data set based on spherical camera
CN116797774A (en) * 2023-05-24 2023-09-22 国网江苏省电力有限公司淮安供电分公司 Substation signboard identification method based on YOLOv5 and CNOCR

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976350A (en) * 2010-10-20 2011-02-16 中国农业大学 Grain storage pest detection and identification method based on video analytics and system thereof
US20180300880A1 (en) * 2017-04-12 2018-10-18 Here Global B.V. Small object detection from a large image
CN109886359A (en) * 2019-03-25 2019-06-14 西安电子科技大学 Small target detecting method and detection model based on convolutional neural networks
CN110660052A (en) * 2019-09-23 2020-01-07 武汉科技大学 Hot-rolled strip steel surface defect detection method based on deep learning
US10524461B1 (en) * 2018-09-25 2020-01-07 Jace W. Files Pest detector to identify a type of pest using machine learning
CN110766041A (en) * 2019-09-04 2020-02-07 江苏大学 Deep learning-based pest detection method
CN111091122A (en) * 2019-11-22 2020-05-01 国网山西省电力公司大同供电公司 Training and detecting method and device for multi-scale feature convolutional neural network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976350A (en) * 2010-10-20 2011-02-16 中国农业大学 Grain storage pest detection and identification method based on video analytics and system thereof
US20180300880A1 (en) * 2017-04-12 2018-10-18 Here Global B.V. Small object detection from a large image
US10524461B1 (en) * 2018-09-25 2020-01-07 Jace W. Files Pest detector to identify a type of pest using machine learning
CN109886359A (en) * 2019-03-25 2019-06-14 西安电子科技大学 Small target detecting method and detection model based on convolutional neural networks
CN110766041A (en) * 2019-09-04 2020-02-07 江苏大学 Deep learning-based pest detection method
CN110660052A (en) * 2019-09-23 2020-01-07 武汉科技大学 Hot-rolled strip steel surface defect detection method based on deep learning
CN111091122A (en) * 2019-11-22 2020-05-01 国网山西省电力公司大同供电公司 Training and detecting method and device for multi-scale feature convolutional neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cheng-Yang Fu et al.: "DSSD: Deconvolutional Single Shot Detector", arXiv *
Deng Zhuanglai et al.: "Research on Granary Pest Detection Based on SSD", Computer Engineering and Applications *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067105A (en) * 2022-01-12 2022-02-18 安徽高哲信息技术有限公司 Grain density estimation method, storage medium, and grain density estimation apparatus
CN116310658A (en) * 2023-05-17 2023-06-23 中储粮成都储藏研究院有限公司 Method for establishing grain storage pest image data set based on spherical camera
CN116310658B (en) * 2023-05-17 2023-08-01 中储粮成都储藏研究院有限公司 Method for establishing grain storage pest image data set based on spherical camera
CN116797774A (en) * 2023-05-24 2023-09-22 国网江苏省电力有限公司淮安供电分公司 Substation signboard identification method based on YOLOv5 and CNOCR

Also Published As

Publication number Publication date
CN112288022B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
Wang et al. Plant disease detection and classification method based on the optimized lightweight YOLOv5 model
Jackulin et al. A comprehensive review on detection of plant disease using machine learning and deep learning approaches
CN112288022B (en) Grain insect identification method and identification system based on SSD-algorithm feature fusion
Yang et al. Rapid detection and counting of wheat ears in the field using YOLOv4 with attention module
Mochida et al. Computer vision-based phenotyping for improvement of plant productivity: a machine learning perspective
Ahmad et al. Deep learning based detector YOLOv5 for identifying insect pests
WO2020253416A1 (en) Object detection method and device, and computer storage medium
CN107665355B (en) Agricultural pest detection method based on regional convolutional neural network
Palacios et al. A non-invasive method based on computer vision for grapevine cluster compactness assessment using a mobile sensing platform under field conditions
Fujita et al. A practical plant diagnosis system for field leaf images and feature visualization
CN111241939A (en) Rice yield estimation method based on unmanned aerial vehicle digital image
Li et al. High-performance plant pest and disease detection based on model ensemble with inception module and cluster algorithm
Wang et al. Tomato young fruits detection method under near color background based on improved faster R-CNN with attention mechanism
Wang et al. ShuffleNet-Triplet: A lightweight RE-identification network for dairy cows in natural scenes
Xu et al. Detection and counting of maize leaves based on two-stage deep learning with UAV-based RGB image
Liu et al. Tomato pest recognition algorithm based on improved YOLOv4
CN116563205A (en) Wheat spike counting detection method based on small target detection and improved YOLOv5
Yu et al. TasselLFANet: a novel lightweight multi-branch feature aggregation neural network for high-throughput image-based maize tassels detection and counting
Qiang et al. Detection of citrus pests in double backbone network based on single shot multibox detector
Cong et al. MYOLO: a lightweight fresh shiitake mushroom detection model based on YOLOv3
Lu et al. Citrus green fruit detection via improved feature network extraction
Mitra et al. A smart agriculture framework to automatically track the spread of plant diseases using mask region-based convolutional neural network
Wu et al. Multi-class weed recognition using hybrid CNN-SVM classifier
Wang et al. Research on tea trees germination density detection based on improved YOLOv5
Lin et al. A novel approach for estimating the flowering rate of litchi based on deep learning and UAV images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant