CN113553948A - Automatic recognition and counting method for tobacco insects and computer readable medium - Google Patents
- Publication number: CN113553948A (application CN202110835216.XA)
- Authority: CN (China)
- Prior art keywords
- tobacco
- neural network
- worm
- network model
- information
- Prior art date
- Legal status: Pending (assumed; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/214 (Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting)
- G06F18/241 (Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches)
- G06N3/045 (Neural network architectures; combinations of networks)
- G06N3/08 (Neural network learning methods)
- G06T7/0002 (Image analysis; inspection of images, e.g. flaw detection)
- G06T2207/20081 (Image analysis indexing scheme; training or learning)
- G06T2207/20084 (Artificial neural networks [ANN])
- G06T2207/30242 (Counting objects in image)
Abstract
An automatic tobacco insect recognition and counting method identifies the species of tobacco insects captured by a tobacco insect trap and counts their number, and comprises the following steps: S1: acquiring historical tobacco insect trap photo data; S2: processing the historical trap photo data to form batch annotated data set information; S3: training a neural network model with the obtained annotated data set information to obtain a tobacco insect recognition neural network model; S4: recognizing the trap photos to be counted with the obtained model, automatically obtaining species and quantity information of the tobacco insects in the photos. By processing historical trap photo data and training a deep-learning neural network model, a tobacco insect recognition model is obtained that accurately recognizes and predicts the species and number of tobacco insects in trap photos; with this model, the insects caught by the traps can be recognized and counted automatically and quickly, yielding species and quantity information.
Description
Technical Field
The application belongs to the technical field of automatic measurement, and particularly relates to an automatic tobacco insect recognition and counting method and a computer readable medium.
Background
Large quantities of high-value tobacco leaves stored in the warehouses and workshops of tobacco enterprises are vulnerable to insect pests, which seriously affect the usability of the leaves and the quality of the resulting cigarettes.
In the prior art, tobacco enterprises controlling tobacco pests generally use tobacco insect traps to monitor raw-material storage warehouses and cigarette production and processing sites, and judge the pest problem and its severity from the monitoring data; the monitoring data is therefore an important basis for deciding which pest control measures to take.
In the prior art, after traps are hung in tobacco warehouses and workshops, the pests in the traps are mainly counted by manual inspection and manual recording, and pest-situation information is fed back by hand. With the progress of information technology, traps are increasingly monitored with information systems, including photographic recording, but the identification and counting of tobacco insect species is still performed manually before the pest situation is analyzed. Because tobacco warehouses and workshops contain large numbers of traps, all of which require manual species identification, counting and data entry into an information system, this process consumes substantial manpower and time; the pest situation cannot be fed back in real time, so tobacco leaf pests cannot be monitored promptly and effectively.
Disclosure of Invention
In view of the above, in one aspect, some embodiments disclose an automatic tobacco insect recognition and counting method for identifying the species of tobacco insects captured by a tobacco insect trap and counting their number, the method comprising the steps of:
S1: acquiring historical tobacco insect trap photo data;
S2: processing the historical trap photo data to form batch annotated data set information;
S3: training a neural network model with the obtained annotated data set information to obtain a tobacco insect recognition neural network model;
S4: recognizing the trap photos to be counted with the obtained tobacco insect recognition neural network model, automatically obtaining species and quantity information of the tobacco insects in the photos.
Further, in some embodiments, step S2 of the automatic tobacco insect recognition and counting method specifically comprises:
S201: annotating the historical trap photo data by framing the position of each tobacco insect and further labeling its species, forming annotated data set information that contains the historical trap photo information together with the position and species information;
S202: dividing the annotated data set information into a training set (80%), a verification set (10%) and a test set (10%).
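The 80/10/10 split of step S202 can be sketched as follows. The `records` list, the shuffle and the fixed seed are illustrative assumptions; the patent does not specify how the annotation entries are structured or partitioned.

```python
import random

def split_dataset(records, seed=0):
    """Split annotated trap-photo records into training, verification and
    test sets in the 80/10/10 ratio of step S202. `records` may be any list
    of annotation entries (hypothetical structure)."""
    rng = random.Random(seed)       # fixed seed for a reproducible split
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```

Shuffling before splitting avoids putting all photos from one trap position into a single subset.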
In some embodiments, in step S3 of the automatic tobacco insect recognition and counting method, the neural network model is a YOLO-V3 model, and training it specifically comprises:
S301: generating a series of candidate regions on the trap photo according to a set rule, and labeling the candidate regions according to their positional relationship to the ground-truth boxes of the objects;
S302: extracting features from the trap photo with a convolutional neural network and predicting the position and category of each candidate region to obtain prediction boxes, which serve as samples;
S303: labeling each sample according to the position and category of the ground-truth box relative to it, obtaining label values;
S304: predicting the position and category of each sample with the YOLO-V3 model to obtain predicted values, and comparing predicted values with label values to establish an evaluation index function;
S305: training the parameters of the YOLO-V3 network with the training set, selecting its parameters with the verification set, and simulating the post-training real-world performance with the test set; the training process runs as two nested loops:
the inner loop traverses the whole data set once, batch by batch, performing data preparation, forward calculation, evaluation of the index function and back propagation, then updating the model parameters;
the outer loop repeatedly traverses the data set by executing the inner loop;
when the calculated evaluation index function reaches a preset error value, the outer loop finishes, the training process ends, and the tobacco insect recognition neural network model is produced.
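The two-level nested loop of step S305 can be sketched as below. The `model_step` callable, the list-based batching and the stopping test are illustrative assumptions; the patent does not fix an interface for the forward/backward pass.

```python
def train(model_step, dataset, batch_size, target_error, max_epochs=100):
    """Nested training loop sketched from step S305. `model_step` is a
    hypothetical callable that performs data preparation, forward
    calculation, loss evaluation, back propagation and a parameter update
    for one batch, and returns the current loss value."""
    loss = float("inf")
    for epoch in range(max_epochs):                        # outer loop: repeat traversals
        for start in range(0, len(dataset), batch_size):   # inner loop: one traversal
            batch = dataset[start:start + batch_size]      # data preparation
            loss = model_step(batch)                       # forward + loss + backward + update
        if loss <= target_error:    # stop when the index reaches the preset error
            break
    return loss
```

With 1000 samples and batches of 10, the inner loop body runs 100 times per outer iteration, matching the example given later in the description.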
Further, in some embodiments of the disclosed automatic tobacco insect recognition and counting method, in step S301 a candidate region close enough to a ground-truth box is labeled as a positive sample, with the position of the ground-truth box as its position target; a candidate region deviating substantially from any ground-truth box is labeled as a negative sample, which needs to predict neither position nor category.
In some embodiments of the disclosed automatic tobacco insect recognition and counting method, step S4 specifically comprises:
S401: loading the trained tobacco insect recognition neural network model into a model instance and feeding it the photo to be detected and recognized;
S402: performing a forward pass of the tobacco insect recognition network to obtain the position and category of each tobacco insect prediction box;
S403: counting all tobacco insect prediction boxes with their position and category information to obtain the species present and the number of each species in the input photo.
In some embodiments of the disclosed automatic tobacco insect recognition and counting method, if two prediction boxes have the same tobacco insect species and a large positional overlap, they are determined to predict the same tobacco insect target. The determination proceeds as follows: select the first prediction box, the one with the highest score in a given category; calculate the intersection-over-union ratio of every other prediction box with the first; if the ratio exceeds a set threshold, the other box and the first box are determined to predict the same tobacco insect target. The intersection-over-union ratio is expressed as:

IoU = |A ∩ B| / |A ∪ B|

where A denotes the first prediction box, B denotes another prediction box, and IoU is their intersection-over-union ratio.
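The intersection-over-union ratio defined above can be computed as follows for axis-aligned boxes. The (x1, y1, x2, y2) corner representation is an assumed convention; for axis-aligned boxes this area-based form matches the pixel-count definition given later in the description.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # corners of the overlap rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes give 1.0, disjoint boxes give 0.0, and partial overlap falls strictly between.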
In some embodiments of the disclosed automatic tobacco insect recognition and counting method, in step S1 the historical trap photos include photos taken by tobacco insect traps at different positions in a tobacco warehouse or workshop; segmenting, flipping, scaling and rotating these historical photos produces additional image data containing trapped tobacco insects.
In another aspect, some embodiments disclose a computer-readable medium containing computer-executable instructions which, when processed by a data processing device, perform the automatic tobacco insect recognition and counting method disclosed in the above embodiments.
In the automatic tobacco insect recognition and counting method above, historical trap photo data from tobacco warehouses, workshops and similar places is processed and a deep-learning neural network model is trained, yielding a tobacco insect recognition model that accurately recognizes and predicts the species and number of tobacco insects in trap photos. With this model, the insects caught by the traps can be recognized and counted automatically and quickly to obtain species and quantity information; trap photos can be analyzed in real time with visual display of results, pest-situation information is obtained rapidly, and the manpower needed for tobacco insect monitoring is greatly reduced while monitoring efficiency improves.
Drawings
FIG. 1 is a flow chart of the automatic tobacco insect recognition and counting method;
FIG. 2 is a photograph of the tobacco insect trap of Example 1.
Detailed Description
Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Unless otherwise indicated, performance index tests in the examples of this application use routine experimentation in the art. It is to be understood that the terminology used herein is for describing particular embodiments only and is not intended to limit the disclosure.
Unless defined otherwise, technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; other test methods and techniques not specifically mentioned in the present application are those commonly employed by those of ordinary skill in the art.
The terms "substantially" and "about" are used herein to describe small fluctuations. For example, they may mean less than or equal to ± 5%, such as less than or equal to ± 2%, such as less than or equal to ± 1%, such as less than or equal to ± 0.5%, such as less than or equal to ± 0.2%, such as less than or equal to ± 0.1%, such as less than or equal to ± 0.05%. Numerical data represented or presented herein in a range format is used merely for convenience and brevity and thus should be interpreted flexibly to include not only the numerical values explicitly recited as the limits of the range, but also to include all the individual numerical values or sub-ranges encompassed within that range as if each numerical value and sub-range is explicitly recited. For example, a numerical range of "1 to 5%" should be interpreted to include not only the explicitly recited values of 1% to 5%, but also include individual values and sub-ranges within the indicated range. Thus, included in this numerical range are individual values, such as 2%, 3.5%, and 4%, and sub-ranges, such as 1% to 3%, 2% to 4%, and 3% to 5%, etc. This principle applies equally to ranges reciting only one numerical value. Moreover, such an interpretation applies regardless of the breadth of the range or the characteristics being described.
In this document, including the claims, conjunctions such as "comprising," "including," "carrying," "having," "containing," and "involving" are to be understood as open-ended, i.e., as meaning "including but not limited to." Only conjunctions of the form "consisting of" are closed.
In the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details.
On the premise of no conflict, the technical features disclosed in the embodiments of the present application may be combined arbitrarily, and the obtained technical solution belongs to the content disclosed in the embodiments of the present application.
In some embodiments, as shown in FIG. 1, the automatic tobacco insect recognition and counting method comprises the steps of:
S1: acquiring historical tobacco insect trap photo data. Generally, the historical photos of a tobacco warehouse or workshop include photos taken by traps at different positions; segmenting, flipping, scaling and rotating these historical photos produces additional image data containing trapped tobacco insects.
S2: processing the historical trap photo data to form batch annotated data set information. Generally, when annotating a historical trap photo, the position of each tobacco insect is framed individually and the species of each framed box is then labeled; the image data of all trapped insects, together with the annotated position and species information, is integrated into annotated data set information containing the historical trap photo information, position information and species information. The annotated data set information is then divided: 80% forms the training set, used to train the parameters of the neural network model; 10% forms the verification set, used to select the parameters of the model; and 10% forms the test set, used to simulate the real-world performance of the model after deployment.
S3: training a neural network model with the obtained batch annotated data set information to obtain the tobacco insect recognition neural network model. Usually the three parts of the data set information (training, verification and test sets) are used for repeated, multi-round training of a deep-learning neural network model with object-detection capability, to determine the model parameters that best express tobacco insect detection and recognition. Training usually takes minimization of the fitting error as the optimization target and finishes when the evaluation index function is optimal and reaches a preset error value. As an optional embodiment, the YOLO-V3 neural network model is selected as the model for tobacco insect recognition and detection and trained to obtain the tobacco insect recognition neural network model. As an optional embodiment, the evaluation index is a loss function, and minimizing the loss function is the minimum-error optimization target.
Generally, a deep-learning neural network model is a multi-layer mapping function from input to output; a sufficiently deep network can in theory fit any complex function, is well suited to learning the internal rules and representation levels of sample data, and applies well to image recognition. The deep-learning neural network can be formulated as:

Y = f3(f2(f1(w1·x1 + w2·x2 + w3·x3 + b) + …) + …)

where w1, w2, w3, … are weights, b is a bias, and x1, x2, x3, … are the input picture pixel data.
S4: recognizing the trap photos to be counted with the obtained tobacco insect recognition neural network model, automatically obtaining the species and quantity information of the tobacco insects in the photos. Generally, the trained model is loaded into a model instance and the photo to be detected and recognized is fed in; a forward pass of the network calculates the positions of all tobacco insect prediction boxes and the score of each insect category. The score is the probability of the category to which the target insect belongs multiplied by the probability that the prediction box contains a target insect; results with scores larger than 0 are retained. All prediction boxes with their position and category information are then counted to obtain the species present and the number of each species in the input photo. Generally, the calculation produces many prediction boxes for one photo, and many of the output boxes overlap heavily, so redundant boxes with large overlap must be eliminated: when several prediction boxes correspond to the same tobacco insect, only the box with the highest score is kept and the rest are discarded.
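The counting stage of step S4, suppression of overlapping same-species boxes followed by a species tally, can be sketched as below. The (box, species, score) tuple layout and the 0.5 overlap threshold are illustrative assumptions; the patent does not fix a data format or threshold value.

```python
from collections import Counter

def count_insects(predictions, iou_threshold=0.5):
    """Given post-inference predictions as (box, species, score) tuples
    (boxes as x1, y1, x2, y2 corners), keep only the highest-scoring box
    among same-species boxes that overlap heavily, then tally species."""
    def iou(a, b):
        # intersection-over-union of two axis-aligned boxes
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    kept = []
    # highest-scoring boxes are considered first, per the selection rule above
    for box, species, score in sorted(predictions, key=lambda p: -p[2]):
        if all(sp != species or iou(box, kb) <= iou_threshold
               for kb, sp, _ in kept):
            kept.append((box, species, score))
    return Counter(sp for _, sp, _ in kept)
```

Two boxes of different species are never merged, matching the rule that only same-species overlaps count as one insect.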
As an optional embodiment, the process of training the YOLO-V3 neural network model in the automatic tobacco insect recognition and counting method specifically comprises:
S301: generating a series of candidate regions on the trap photo according to a set rule, and labeling the candidate regions according to their positional relationship to the ground-truth boxes of the objects. Generally, a candidate region close enough to a ground-truth box is labeled as a positive sample, with the position of the ground-truth box as its position target; a candidate region deviating greatly from any ground-truth box is labeled as a negative sample, which needs to predict neither position nor category. The positional relationship between a ground-truth box and a candidate region is determined by their intersection-over-union ratio, and a threshold on that ratio decides whether a sample is positive; the threshold is usually chosen from training experience and settled by optimization during training.
S302: extracting features from the trap photo with a convolutional neural network and predicting the position and category of each candidate region to obtain prediction boxes, which serve as samples;
S303: labeling each sample according to the position and category of the ground-truth box relative to it, obtaining label values;
S304: predicting the position and category of each sample with the YOLO-V3 model to obtain predicted values, and comparing predicted values with label values to establish an evaluation index function;
S305: training the parameters of the YOLO-V3 network with the training set, selecting its parameters with the verification set, and simulating the post-training real-world performance with the test set; the training process runs as two nested loops. The inner loop traverses the entire data set once, batch by batch; for example, if the data set has 1000 samples and a batch holds 10, one traversal covers 1000/10 = 100 batches, i.e. the inner loop body executes 100 times. Each inner-loop iteration performs four steps (data preparation, forward calculation, evaluation index calculation and back propagation) and then updates the model parameters. After each full inner loop the model parameters have been updated; the outer loop repeatedly traverses the data set by executing the inner loop, determining how many times the inner loop runs. When the calculated evaluation index function reaches the preset error value, the outer loop finishes, the training process ends, the tobacco insect recognition neural network model is produced, and its parameters are saved.
As an optional embodiment, the method of eliminating redundant prediction boxes in the automatic tobacco insect recognition and counting method is as follows: if two prediction boxes have the same tobacco insect species and a large positional overlap, they are determined to predict the same tobacco insect target. Select the first prediction box, the one with the highest score in a given category, and calculate the intersection-over-union ratio of each other prediction box with it; if the ratio exceeds a set threshold, the other box and the first box predict the same target, the other box is redundant, and excluding it prevents double counting. The intersection-over-union ratio is expressed as:

IoU = |A ∩ B| / |A ∪ B|

where A denotes the first prediction box, B denotes another prediction box, and IoU, their intersection-over-union ratio, equals the number of pixels contained in the intersection of the two boxes divided by the number of pixels contained in their union.
A threshold on the ratio is usually preset to judge whether a box is redundant; the preset threshold usually comes from practical experience with neural network training and has proved effective and reasonable in simulated training for tobacco insect recognition and counting.
As an optional embodiment, in the automatic tobacco insect recognition and counting method the evaluation index function is a loss function, established as follows:
a trap photo is input and, through feature extraction, output feature maps of three levels are obtained: the P0-level, P1-level and P2-level feature maps. Grids of small squares of different sizes generate corresponding anchor boxes and prediction boxes, and the anchor boxes are labeled. The P0-level feature map uses a grid of 32 × 32 squares and generates three anchor boxes, of sizes [116, 90], [156, 198] and [373, 326], at the center of each region; the P1-level feature map uses a grid of 16 × 16 squares and generates three anchor boxes of sizes [30, 61], [62, 45] and [59, 119] at the center of each region; the P2-level feature map uses a grid of 8 × 8 squares and generates three anchor boxes of sizes [10, 13], [16, 30] and [33, 23] at the center of each region.
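The per-level anchor scheme above can be sketched as follows. The grid size passed in is illustrative (it depends on the input resolution, which the patent does not state); the cell sizes (32, 16, 8 pixels) and anchor dimensions come from the description.

```python
def generate_anchors(grid_size, cell_px, anchor_sizes):
    """Anchor-box centres for one feature-map level: each cell of a
    grid_size x grid_size map (cell_px pixels per cell) gets one anchor of
    each (w, h) in anchor_sizes, centred on the cell.
    Returns (cx, cy, w, h) tuples in pixel coordinates."""
    anchors = []
    for row in range(grid_size):
        for col in range(grid_size):
            cx = (col + 0.5) * cell_px   # cell centre, x
            cy = (row + 0.5) * cell_px   # cell centre, y
            for w, h in anchor_sizes:
                anchors.append((cx, cy, w, h))
    return anchors
```

For the P0 level one would pass `cell_px=32` with the three largest anchor sizes; P1 and P2 use 16- and 8-pixel cells with their own size triples.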
The feature maps of the three levels are associated with their anchor-box labels, a level loss function is established for each of the three levels, and the three level losses are added to obtain the total loss function, where x, y, w and h denote the coordinates, width and height of a labeled box, C the confidence and p the classification probability. Written in a standard YOLO form consistent with the symbols defined below, the total loss can be expressed as:

L = Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} λ_{ij} [ (x_i − x̂_i)² + (y_i − ŷ_i)² + (w_i − ŵ_i)² + (h_i − ĥ_i)² ]
  − Σ_{i=0}^{S²} Σ_{j=0}^{B} (1_{ij}^{obj} + λ_{noobj} 1_{ij}^{noobj}) [ Ĉ_i log C_i + (1 − Ĉ_i) log(1 − C_i) ]
  − Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} Σ_{c ∈ classes} [ p̂_i(c) log p_i(c) + (1 − p̂_i(c)) log(1 − p_i(c)) ]

In the formula, x, y, w, h, C and p are predicted values, and x̂, ŷ, ŵ, ĥ, Ĉ and p̂ are label values; 1_{ij}^{obj} indicates whether the j-th box of the i-th grid cell matches an object (1 if it does, 0 if not), and 1_{ij}^{noobj} is the opposite; B denotes the number of anchor boxes, S the size of the output grid and classes the number of categories; λ_{ij} is the coordination coefficient that balances the inconsistent contributions of rectangular boxes of different sizes to the loss; λ_{noobj} is the loss-function weight on the confidence error when a prediction box predicts no target, and generally takes the value 0.5.
During training, the labeled box of a target in a picture is represented by its coordinates and width and height x, y, w, h, a confidence of 1 or 0, and a one-hot classification probability p(i) (e.g. [1, 0, 0, 0, ...] or [0, 1, 0, 0, ...]). When the j-th prediction box of the i-th grid cell matches a labeled target, its center-coordinate error, width-height error, confidence error and classification error are all computed; for the remaining prediction boxes that match no labeled target, only the confidence error is computed.
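The matching rule above can be sketched per prediction box. This is a simplified squared-error illustration, not the patent's implementation: the dictionary layout and function name are hypothetical, and the size term is left as a plain squared error rather than the transformed width-height term a full YOLO-V3 loss would use.

```python
def box_loss(pred, label, matched, lambda_noobj=0.5):
    """Per-box loss following the rule above: a matched box is charged
    coordinate, width-height, confidence and classification errors; an
    unmatched box only the down-weighted confidence error."""
    conf_err = (pred["C"] - label["C"]) ** 2
    if not matched:
        return lambda_noobj * conf_err          # lambda_noobj ~ 0.5
    coord_err = (pred["x"] - label["x"]) ** 2 + (pred["y"] - label["y"]) ** 2
    size_err = (pred["w"] - label["w"]) ** 2 + (pred["h"] - label["h"]) ** 2
    cls_err = sum((pc - lc) ** 2 for pc, lc in zip(pred["p"], label["p"]))
    return coord_err + size_err + conf_err + cls_err
```

A perfectly matched prediction contributes zero loss; an unmatched box with confidence 1 against a label of 0 contributes only 0.5 × 1² = 0.5.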
In another aspect, some embodiments disclose a computer-readable medium containing computer-executable instructions which, when processed by a data processing device, perform the automatic tobacco insect recognition and counting method disclosed in the above embodiments. In general, computer program instructions or code for carrying out operations of some embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, C++ and Python, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Further details of the technique are illustrated below with reference to specific embodiments.
Example 1
The automatic tobacco insect recognition and counting method disclosed in Embodiment 1 comprises the following steps:
collecting historical photos of tobacco insect traps, including pictures taken of traps placed at different positions in a tobacco warehouse or workshop; processing the trap photo data by segmentation, flipping, scaling, rotation and similar operations to form more picture data containing trapped tobacco insects. Because a trap captures large numbers of tobacco insects in different forms, each target, particularly each distinctive target, is cropped out separately during processing to form individual picture data, thereby increasing the number of distinctive training samples;
labeling all the picture data, framing the position of each tobacco insect and marking its type; dividing the historical picture data carrying the position and category annotations into a training set, a verification set and a test set;
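The split into training, verification and test sets (80%/10%/10% according to claim 2) can be sketched as below; the shuffle seed and function name are illustrative choices, not from the source.

```python
import random

def split_dataset(items, seed=0):
    """Shuffle annotated samples and split them 80/10/10 into
    training, verification and test sets."""
    items = list(items)
    random.Random(seed).shuffle(items)   # fixed seed for a reproducible split
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

Shuffling before splitting matters here because trap photos arrive grouped by location and date; without it, the verification set would not be representative of the training distribution.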
using the three data sets to train a deep learning neural network model with target detection capability over multiple rounds, generating the tobacco insect recognition neural network model;
loading the trained tobacco insect recognition neural network model into the user's pest situation monitoring information system. A camera installed on site photographs the tobacco insect trap at regular intervals and transmits the latest trap picture back to the monitoring system over the network, as shown in the original image on the left of Fig. 2. The monitoring system processes the data in real time with the recognition model, identifying key pest situation information such as the position coordinates and category of every tobacco insect in the transmitted picture. The system stores this information, alerts the relevant personnel, and visualizes the results on the input picture, as shown in the annotated image on the right of Fig. 2: a recognition box is drawn on the original picture at the position and size of each identified target, and the categories and counts are marked on the picture so that monitoring personnel can conveniently check and verify the pest situation. This embodiment realizes real-time, digital, online pest monitoring for a tobacco warehouse or workshop.
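The counting step of the monitoring flow can be sketched as follows. The `Detection` record, species names and `summarize` function are hypothetical placeholders; the actual system would obtain the detections from the trained recognition model.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One recognized target: species label, box geometry, and score."""
    species: str
    x: int
    y: int
    w: int
    h: int
    score: float

def summarize(detections):
    """Count detections per species, as the monitoring system would report."""
    counts = {}
    for d in detections:
        counts[d.species] = counts.get(d.species, 0) + 1
    return counts

# Illustrative trap photo with three beetles and one moth (made-up data).
dets = [Detection("cigarette_beetle", 10, 12, 20, 18, 0.91),
        Detection("cigarette_beetle", 40, 55, 19, 17, 0.88),
        Detection("cigarette_beetle", 80, 30, 21, 20, 0.95),
        Detection("tobacco_moth", 120, 60, 35, 30, 0.84)]
```

With these detections, `summarize(dets)` yields `{'cigarette_beetle': 3, 'tobacco_moth': 1}`, which is the "type and quantity" information the system stores and overlays on the picture.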
According to the automatic tobacco insect recognition and counting method disclosed in this embodiment, historical trap photo data from tobacco leaf warehouses, workshops and similar sites is processed and used to train a deep learning neural network model, yielding a tobacco insect recognition neural network model that accurately identifies and predicts the types and quantities of tobacco insects on trap photos. With this model, the insects captured by a trap can be recognized and counted automatically and quickly, producing type and quantity information together with real-time photo analysis and visual display of the results. Obtaining pest situation information this quickly greatly reduces the manpower needed for tobacco insect monitoring and improves monitoring efficiency.
The technical solutions and technical details disclosed in the embodiments of the present application are only examples illustrating the inventive concept of the present application and do not limit its technical solutions. All non-creative changes, substitutions or combinations of the technical details disclosed herein share the same inventive concept and fall within the protection scope of the claims of the present application.
Claims (8)
1. An automatic tobacco insect recognition and counting method for identifying the types of tobacco insects captured by a tobacco insect trap and counting their number, characterized by comprising the following steps:
S1: acquiring historical tobacco insect trap photo data;
S2: processing the historical tobacco insect trap photo data to form batch-annotated data set information;
S3: training a neural network model with the obtained batch-annotated data set information to obtain a tobacco worm recognition neural network model;
S4: identifying the tobacco worm trap pictures to be counted with the obtained tobacco worm recognition neural network model, automatically obtaining the type and quantity information of the tobacco worms on the trap pictures.
2. The method for automatically identifying and counting tobacco worms according to claim 1, wherein the step S2 specifically comprises:
S201: marking the historical tobacco insect trap picture data, framing the position information of each tobacco insect and further marking its type information, forming annotated data set information containing the historical trap picture information, the position information and the type information;
S202: dividing the annotated data set information 80% into a training set, 10% into a verification set and 10% into a test set.
3. The method according to claim 2, wherein in the step S3, the neural network model is a YOLO-V3 neural network model, and the training of the YOLO-V3 neural network model specifically comprises:
S301: generating a series of candidate regions on the tobacco insect trap photo according to a set rule, and labeling the candidate regions according to their position relation to the real boxes of the objects;
S302: extracting features of the tobacco insect trap photo with a convolutional neural network, predicting the position and category of each candidate region to obtain prediction boxes, and taking the prediction boxes as samples;
S303: labeling each sample according to the position and category of the real box relative to it, obtaining label values;
S304: predicting the position and category of each sample through the YOLO-V3 neural network model to obtain predicted values, and comparing the predicted values with the label values to establish an evaluation index function;
S305: training the parameters of the YOLO-V3 neural network with the training set, selecting parameters with the verification set, and simulating the real post-training effect with the test set, the training process adopting a two-layer nested loop:
the inner-layer cycle adopts a batch mode to be responsible for one-time traversal of the whole data set, performs data preparation, forward calculation, calculation of an evaluation index function and back propagation, and then updates the model parameters;
the outer loop repeatedly traverses the data set to execute the inner loop;
and if the calculated evaluation index function reaches a preset error value, finishing outer circulation, ending the training process and generating the tobacco worm recognition neural network model.
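The two-layer nested loop of S305 can be sketched as below. The callables `model_step` and `evaluate` are placeholders for the real forward/backward pass and the evaluation index computation; the epoch cap and error threshold are illustrative.

```python
def train(model_step, evaluate, batches, max_epochs=100, target_error=0.01):
    """Two-layer training loop: the inner loop traverses the whole data set
    once in batches (data preparation, forward calculation, evaluation index,
    back-propagation, parameter update); the outer loop repeats epochs until
    the evaluation index reaches the preset error. Returns epochs run."""
    for epoch in range(max_epochs):
        for batch in batches:        # inner loop: one full pass over the data
            model_step(batch)        # forward + loss + backprop + update
        error = evaluate()           # outer loop: evaluation index function
        if error <= target_error:    # preset error reached: stop training
            return epoch + 1
    return max_epochs
```

In practice the stopping check is run on the verification set so the outer loop terminates on generalization error rather than training error, matching the parameter-selection role S305 assigns to the verification set.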
4. The method according to claim 3, wherein in step S301, a candidate region sufficiently close to a real box is labeled as a positive sample and the position of the real box is used as the position target of that positive sample, while a candidate region deviating too far from the real boxes is labeled as a negative sample, for which neither position nor category needs to be predicted.
5. The method for automatically identifying and counting tobacco worms according to claim 1, wherein the step S4 specifically comprises:
S401: loading the trained tobacco worm recognition neural network model into a model instance, and feeding in the photo to be detected and recognized;
S402: performing a forward calculation of the tobacco worm recognition neural network to obtain the position and category of each tobacco worm prediction box;
S403: counting all the tobacco worm prediction boxes with their position and category information to obtain the types of tobacco worms and the quantity of each type in the input picture.
6. The method according to claim 5, wherein if the two prediction boxes have the same type of tobacco worm and the coincidence of the positions of the two prediction boxes is relatively high, the two prediction boxes are determined to predict the same tobacco worm target, and the determination method comprises:
selecting a first prediction box with the highest score in a certain category;
calculating the intersection ratio of the other prediction frames and the first prediction frame;
if the intersection-over-union ratio is greater than a set threshold, determining that the other prediction box and the first prediction box predict the same tobacco worm target;
wherein the intersection-over-union ratio is expressed as:

IoU = |A ∩ B| / |A ∪ B|

where A denotes the first prediction box, B denotes another prediction box, and IoU is the intersection-over-union ratio, i.e. the area of the intersection of the two boxes divided by the area of their union.
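The duplicate-suppression procedure of claim 6 (standard non-maximum suppression) can be sketched as follows; the corner-coordinate box format and the 0.5 threshold are illustrative assumptions.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Within one category: keep the highest-scoring box, drop every other
    box whose IoU with it exceeds the threshold (they predict the same
    target), then repeat on the remainder. Returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep
```

For example, two heavily overlapping boxes on one beetle collapse to the single higher-scoring box, while a distant box on another beetle survives, so the per-category count in S403 is not inflated by duplicates.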
7. The method for automatically identifying and counting tobacco insects according to claim 1, wherein in step S1 the historical tobacco insect trap photos include pictures taken of traps at different positions in a tobacco warehouse or workshop, and the historical trap photos are subjected to segmentation, flipping, scaling and rotation processing to form picture data containing trapped tobacco insects.
8. A computer readable medium containing computer executable instructions, wherein the computer executable instructions when processed by a data processing apparatus perform the method of automatic identification and counting of tobacco insects according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110835216.XA CN113553948A (en) | 2021-07-23 | 2021-07-23 | Automatic recognition and counting method for tobacco insects and computer readable medium |
ZA2021/09627A ZA202109627B (en) | 2021-07-23 | 2021-11-26 | Method for automatically identifying and counting cigarette beetles and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110835216.XA CN113553948A (en) | 2021-07-23 | 2021-07-23 | Automatic recognition and counting method for tobacco insects and computer readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113553948A true CN113553948A (en) | 2021-10-26 |
Family
ID=78104155
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110835216.XA Pending CN113553948A (en) | 2021-07-23 | 2021-07-23 | Automatic recognition and counting method for tobacco insects and computer readable medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113553948A (en) |
ZA (1) | ZA202109627B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109919930A (en) * | 2019-03-07 | 2019-06-21 | 浙江大学 | The statistical method of fruit number on tree based on convolutional neural networks YOLO V3 |
CN110782435A (en) * | 2019-10-17 | 2020-02-11 | 浙江中烟工业有限责任公司 | Tobacco worm detection method based on deep learning model |
CN110991435A (en) * | 2019-11-27 | 2020-04-10 | 南京邮电大学 | Express waybill key information positioning method and device based on deep learning |
CN111062413A (en) * | 2019-11-08 | 2020-04-24 | 深兰科技(上海)有限公司 | Road target detection method and device, electronic equipment and storage medium |
WO2020164282A1 (en) * | 2019-02-14 | 2020-08-20 | 平安科技(深圳)有限公司 | Yolo-based image target recognition method and apparatus, electronic device, and storage medium |
CN112132090A (en) * | 2020-09-28 | 2020-12-25 | 天地伟业技术有限公司 | Smoke and fire automatic detection and early warning method based on YOLOV3 |
CN112581443A (en) * | 2020-12-14 | 2021-03-30 | 北京华能新锐控制技术有限公司 | Light-weight identification method for surface damage of wind driven generator blade |
CN112597995A (en) * | 2020-12-02 | 2021-04-02 | 浙江大华技术股份有限公司 | License plate detection model training method, device, equipment and medium |
CN113139437A (en) * | 2021-03-31 | 2021-07-20 | 成都飞机工业(集团)有限责任公司 | Helmet wearing inspection method based on YOLOv3 algorithm |
- 2021-07-23 CN CN202110835216.XA patent/CN113553948A/en active Pending
- 2021-11-26 ZA ZA2021/09627A patent/ZA202109627B/en unknown
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020164282A1 (en) * | 2019-02-14 | 2020-08-20 | 平安科技(深圳)有限公司 | Yolo-based image target recognition method and apparatus, electronic device, and storage medium |
CN109919930A (en) * | 2019-03-07 | 2019-06-21 | 浙江大学 | The statistical method of fruit number on tree based on convolutional neural networks YOLO V3 |
CN110782435A (en) * | 2019-10-17 | 2020-02-11 | 浙江中烟工业有限责任公司 | Tobacco worm detection method based on deep learning model |
CN111062413A (en) * | 2019-11-08 | 2020-04-24 | 深兰科技(上海)有限公司 | Road target detection method and device, electronic equipment and storage medium |
CN110991435A (en) * | 2019-11-27 | 2020-04-10 | 南京邮电大学 | Express waybill key information positioning method and device based on deep learning |
CN112132090A (en) * | 2020-09-28 | 2020-12-25 | 天地伟业技术有限公司 | Smoke and fire automatic detection and early warning method based on YOLOV3 |
CN112597995A (en) * | 2020-12-02 | 2021-04-02 | 浙江大华技术股份有限公司 | License plate detection model training method, device, equipment and medium |
CN112581443A (en) * | 2020-12-14 | 2021-03-30 | 北京华能新锐控制技术有限公司 | Light-weight identification method for surface damage of wind driven generator blade |
CN113139437A (en) * | 2021-03-31 | 2021-07-20 | 成都飞机工业(集团)有限责任公司 | Helmet wearing inspection method based on YOLOv3 algorithm |
Non-Patent Citations (1)
Title |
---|
A20181006: "Detailed Explanation of the YOLOv3 Loss Function" ("YOLOv3损失函数详解"), Retrieved from the Internet <URL:https://www.docin.com/p-2475530640.html> *
Also Published As
Publication number | Publication date |
---|---|
ZA202109627B (en) | 2022-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021203505A1 (en) | Method for constructing pest detection model | |
CN109214280B (en) | Shop identification method and device based on street view, electronic equipment and storage medium | |
CN106844614A (en) | A kind of floor plan functional area system for rapidly identifying | |
TW201947463A (en) | Model test method and device | |
CN114565826B (en) | Agricultural pest and disease identification and diagnosis method, system and device | |
CN112200011B (en) | Aeration tank state detection method, system, electronic equipment and storage medium | |
CN113222913B (en) | Circuit board defect detection positioning method, device and storage medium | |
CN108133235A (en) | A kind of pedestrian detection method based on neural network Analysis On Multi-scale Features figure | |
CN111310826A (en) | Method and device for detecting labeling abnormity of sample set and electronic equipment | |
CN114140463A (en) | Welding defect identification method, device, equipment and storage medium | |
CN116543347A (en) | Intelligent insect condition on-line monitoring system, method, device and medium | |
CN112633354A (en) | Pavement crack detection method and device, computer equipment and storage medium | |
CN111429431B (en) | Element positioning and identifying method based on convolutional neural network | |
CN114550017B (en) | Pine wilt disease integrated early warning and detecting method and device based on mobile terminal | |
CN113421192A (en) | Training method of object statistical model, and statistical method and device of target object | |
CN115526852A (en) | Molten pool and splash monitoring method in selective laser melting process based on target detection and application | |
CN109615610B (en) | Medical band-aid flaw detection method based on YOLO v2-tiny | |
CN112004063B (en) | Method for monitoring connection correctness of oil discharge pipe based on multi-camera linkage | |
CN112131354A (en) | Answer screening method and device, terminal equipment and computer readable storage medium | |
CN117351472A (en) | Tobacco leaf information detection method and device and electronic equipment | |
CN113553948A (en) | Automatic recognition and counting method for tobacco insects and computer readable medium | |
CN117152528A (en) | Insulator state recognition method, insulator state recognition device, insulator state recognition apparatus, insulator state recognition program, and insulator state recognition program | |
CN115359412B (en) | Hydrochloric acid neutralization experiment scoring method, device, equipment and readable storage medium | |
CN116486231A (en) | Concrete crack detection method based on improved YOLOv5 | |
CN111401370B (en) | Garbage image recognition and task assignment management method, model and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||