CN112686862A - Pest identification and counting method, system and device and readable storage medium


Info

Publication number: CN112686862A
Application number: CN202011611861.5A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: pest, data set, result, processing, training
Inventors: 朱旭华, 陈渝阳, 冯晋, 吴弘洋, 刘志敏, 申智慧, 姚波
Current assignee: Zhejiang Top Cloud Agri Technology Co ltd
Original assignee: Zhejiang Top Cloud Agri Technology Co ltd
Application filed by Zhejiang Top Cloud Agri Technology Co ltd
Priority to CN202011611861.5A

Abstract

The invention discloses a pest identification and counting method comprising the steps of: obtaining pictures of multiple pest species to form an original data set and calibrating the original data set; performing first processing on the calibrated original data set, and training and verifying a target detection model based on the first processing result; selecting pictures of visually similar pest species for second processing to train and verify a classification model; performing object detection on a pest picture to be detected with the target detection model to obtain a pest detection result; cropping the pest picture to be detected based on the position information of the similar categories, and classifying the cropped results with the classification model to obtain a pest classification result; and obtaining the species and quantity of the pests from the position information, the preliminary classification result and the pest classification result. The invention completes remote image acquisition and counting of trapped insects by AI image recognition, monitors target insects in real time, and reduces the inconvenience and inaccuracy of sorting insects and re-verifying data by hand.

Description

Pest identification and counting method, system and device and readable storage medium
Technical Field
The invention relates to the technical field of identification, in particular to a pest identification and counting method, a pest identification and counting system, a pest identification and counting device and a readable storage medium.
Background
Insect pheromones (sex attractants) offer strong specificity, no drug-resistance problem, environmental friendliness, full compatibility with other control technologies and a marked improvement in agricultural product quality, and sex trapping has become one of the green control technologies advocated by the state. Combining the traditional insect sex attractant with Internet-of-Things technology allows potentially harmful insects in agricultural and forestry production to be monitored and biologically controlled, guides scientific chemical control in the field and reduces pesticide use, thereby lowering production costs, improving agricultural product quality and value, and responding to the plant-protection policy of prevention first and comprehensive control.
In the prior art, the traditional trap is simple and crude: it only traps insects without counting them, cannot monitor, and cannot guide prevention and control. Existing sex-trapping forecast products on the market are heavy, hard to move and expensive, commonly miscount and misreport, and ordinary users who need them cannot afford them. Big data research is now widespread; the question is whether big data can be combined with the traditional trap to develop a low-cost intelligent sex-trapping forecast product. The present application solves this problem.
Disclosure of Invention
In view of the defects of the prior art, the invention provides a pest identification and counting method, system and device, and a readable storage medium.
To solve the above technical problem, the invention adopts the following technical scheme:
a pest identification and counting method comprising the steps of:
obtaining pictures of various insect pest species to form an original data set, and calibrating the original data set;
performing first processing on the calibrated original data set, and training and verifying a target detection model based on a first processing result; selecting insect pest pictures with similar species for second processing based on the calibrated original data set, and training and verifying a classification model based on a second processing result;
performing object detection on the pest picture to be detected by adopting a target detection model to obtain a pest detection result, wherein the detection result comprises position information and a score result of each pest in the picture to be detected, and obtaining a primary category and a similar category based on the position information and the score result;
cutting the pest picture to be detected based on the position information of the similar categories, and classifying the pest types of the cut result by adopting a classification model to obtain a pest classification result;
and obtaining the type and the quantity of the pests based on the position information, the preliminary classification result and the pest classification result.
As an implementation manner, the performing the first processing on the raw data set, and training and verifying the target detection model based on the first processing result specifically includes:
carrying out sample enhancement processing on the calibrated original data set and dividing the calibrated original data set into a training data set, a testing data set and a verification data set, wherein the enhancement processing comprises one or more of rotation processing, brightness and tone adjustment processing, denoising processing and image mirroring processing;
and inputting the training data set into the target detection model for training, taking the test data set and the verification data set as input after the training is finished, and verifying the training result to further obtain the target detection model.
As an implementation manner, selecting pest pictures with similar species for second processing based on the calibrated original data set, and training and verifying the classification model based on the second processing result specifically include:
screening the calibrated original data set by the degree of similarity between pest categories, selecting a similar-pest-category data set, and cropping it;
performing sample enhancement processing on the cropping result and dividing the result into a classification model training data set, a classification model testing data set and a classification model verification set, wherein the enhancement processing comprises one or more of rotation processing, brightness and tone adjustment processing, denoising processing and image mirroring processing;
and inputting the classification model training data set into a classification model for training, taking the classification model test data set and the classification model verification data set as input after the training is finished, and verifying the training result to further obtain the classification model.
As an implementation manner, before inputting the training data set into the target detection model for the training step, the method further includes:
and modifying the anchor frame parameter value of the target detection model based on a k-means clustering algorithm.
As an implementation manner, before inputting the classification model training data set into the classification model for the training step, the method further includes:
and calculating a loss function in the classification model based on a forward network propagation algorithm, and updating parameters of the classification model through a backward propagation algorithm to obtain a corrected classification model.
As an implementation manner, the method further comprises the following step: completing the pest picture to be detected, which specifically comprises:
classifying the images in the original data set according to the body structure of the pests, and marking each class based on its structural features;
if a pest image in the picture to be detected is incomplete, extracting the structural features of that pest image and comparing the extracted features with the marked structural features of each class in the original data set to obtain comparison results, at least three groups of comparison results being recommended;
and completing the remainder of the incomplete pest image based on the comparison results to obtain a pest picture to be detected with a complete pest image.
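The comparison step above can be sketched as a nearest-neighbour search over structural feature vectors. The text does not specify the feature extractor or the distance metric, so the plain number vectors and squared-Euclidean distance below are illustrative assumptions; only the "at least three groups of comparison results" requirement comes from the description.

```python
# Hypothetical sketch: rank marked reference classes by structural-feature
# distance and return the three closest, as the comparison step recommends.
def top_matches(features, reference, n=3):
    """features: feature vector of the incomplete pest image;
    reference: {class_name: marked feature vector}; returns n closest class names."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))  # squared Euclidean (assumed)
    ranked = sorted(reference.items(), key=lambda kv: dist(features, kv[1]))
    return [name for name, _ in ranked[:n]]
```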
A pest identification and counting system comprises a data acquisition module, a processing and training module, a result detection module, a processing and classifying module and a comprehensive result module;
the data acquisition module is used for acquiring pictures of various insect pest species to form an original data set and calibrating the original data set;
the processing training module is used for carrying out first processing on the calibrated original data set, training and verifying the target detection model based on a first processing result; selecting insect pest pictures with similar species for second processing based on the calibrated original data set, and training and verifying a classification model based on a second processing result;
the result detection module is configured to: performing object detection on the pest picture to be detected by adopting a target detection model to obtain a pest detection result, wherein the detection result comprises position information and a score result of each pest in the picture to be detected, and obtaining a primary category and a similar category based on the position information and the score result;
the processing and classifying module is used for cutting the pest picture to be detected based on the position information of similar categories, and classifying the pest types of the cutting result by adopting a classification model so as to obtain a pest classification result;
and the comprehensive result module is used for obtaining the types and the quantity of the pests based on the position information, the preliminary classification result and the pest classification result.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the following method steps:
obtaining pictures of various insect pest species to form an original data set, and calibrating the original data set;
performing first processing on the calibrated original data set, and training and verifying a target detection model based on a first processing result; selecting insect pest pictures with similar species for second processing based on the calibrated original data set, and training and verifying a classification model based on a second processing result;
performing object detection on the pest picture to be detected by adopting a target detection model to obtain a pest detection result, wherein the detection result comprises position information and a score result of each pest in the picture to be detected, and obtaining a primary category and a similar category based on the position information and the score result;
cutting the pest picture to be detected based on the position information of the similar categories, and classifying the pest types of the cut result by adopting a classification model to obtain a pest classification result;
and obtaining the type and the quantity of the pests based on the position information, the preliminary classification result and the pest classification result.
An apparatus for pest identification counting, comprising a memory, a processor and a computer program stored in said memory and executable on said processor, said processor implementing the following method steps when executing said computer program:
obtaining pictures of various insect pest species to form an original data set, and calibrating the original data set;
performing first processing on the calibrated original data set, and training and verifying a target detection model based on a first processing result; selecting insect pest pictures with similar species for second processing based on the calibrated original data set, and training and verifying a classification model based on a second processing result;
performing object detection on the pest picture to be detected by adopting a target detection model to obtain a pest detection result, wherein the detection result comprises position information and a score result of each pest in the picture to be detected, and obtaining a primary category and a similar category based on the position information and the score result;
cutting the pest picture to be detected based on the position information of the similar categories, and classifying the pest types of the cut result by adopting a classification model to obtain a pest classification result;
and obtaining the type and the quantity of the pests based on the position information, the preliminary classification result and the pest classification result.
An insect sex-trapping device comprises a body and the pest identification and counting device described above, the body serving as a carrier, and the pest identification and counting device being arranged on or in the body.
Due to the adoption of the technical scheme, the invention has the remarkable technical effects that:
the intelligent trapping and forecasting algorithm developed based on the method, the device and the system of the invention finishes the functions of remotely acquiring and counting the images of the trapped insects by an AI image recognition technology, the method monitors the target insects in real time, dynamically and visually presents the field growth and elimination of the target insects in front of a user through clear images and accurate data, thereby not only solving the problem that the existing sex-attraction detecting and reporting products in the market adopt infrared counting to cause pain spots with misreporting and misinformation of data and reducing the inconvenience and inaccuracy of manually classifying and rechecking the data, the device can increase the functions of identification and technology for the existing sex-inducing device, is light, small, cheap and high in quality, ensures that the view of ecological pest comprehensive treatment is better implemented in the treatment of agricultural and forestry pests in production, and actively responds to the green prevention and control and environment-friendly policy advocated by the state.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic overall flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of the overall architecture of the system of the present invention;
FIG. 3 is a flow chart of an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of the DenseNet of the present invention;
FIGS. 5-6 are graphs illustrating test results for one embodiment;
fig. 7-8 are specific identification diagrams in another embodiment.
Detailed Description
The present invention will be described in further detail with reference to examples, which illustrate the invention and are not to be construed as limiting it.
At present, insect pheromones (sex pheromones) offer strong specificity, no drug-resistance problem, environmental friendliness, full compatibility with other control technologies and a marked improvement in agricultural product quality, and sex trapping has become one of the green control technologies advocated by the state. Combining the traditional insect sex attractant with Internet-of-Things technology allows potentially harmful insects in agricultural and forestry production to be monitored and biologically controlled, guides scientific chemical control in the field and reduces pesticide use, thereby lowering production costs, improving agricultural product quality and value, and responding to the plant-protection policy of prevention first and comprehensive control.
In the prior art, the traditional trap is simple and crude: it only traps insects without counting them, cannot monitor, and cannot guide prevention and control. Existing sex-trapping forecast products on the market are heavy, hard to move and expensive, commonly miscount and misreport, and ordinary users who need them cannot afford them. Big data research is now widespread; the question is whether big data can be combined with the traditional trap to develop a low-cost intelligent sex-trapping forecast product. The present application solves this problem.
Example 1:
a pest identification and counting method, as shown in fig. 1, comprising the steps of:
s100, obtaining pictures of various insect pest species to form an original data set, and calibrating the original data set;
s200, carrying out first processing on the calibrated original data set, and training and verifying a target detection model based on a first processing result; selecting insect pest pictures with similar species for second processing based on the calibrated original data set, and training and verifying a classification model based on a second processing result;
s300, performing object detection on the pest picture to be detected by adopting a target detection model to obtain a pest detection result, wherein the detection result comprises position information and a score result of each pest in the picture to be detected, and obtaining a primary category and a similar category based on the position information and the score result;
s400, cutting the pest picture to be detected based on the position information of the similar categories, and classifying the pest types of the cutting result by adopting a classification model to obtain a pest classification result;
and S500, obtaining the types and the quantity of the pests based on the position information, the preliminary classification result and the pest classification result.
In the invention, a CSPDarknet53 target detection model in the Darknet framework is paired with a DenseNet classification model; picture models are trained on the specific pests captured by the sex-trapping device, and the species and quantity of the pests are finally obtained by fusing the results of the two models. The general process is as follows: (1) deploy the self-developed sex-trapping device in the field, insert a lure to trap specific pests, and photograph them with the built-in camera to build a sex-trapping pest database; (2) construct a YOLOv4 target detection algorithm: use the CSPDarknet53 network as the feature extractor, detect three times with 3 yolo layers on feature maps downsampled 32x, 16x and 8x, add an SPP module, and aggregate parameters from different backbone levels with PANet in place of the original FPN method; (3) to separate similar samples and eliminate false detections of irrelevant targets in the detection task, construct a DenseNet classification model, crop the hard samples out of the training data set as a classification training data set, and train this second model; (4) crop the pest picture to be detected based on the position information of the similar categories, and classify the crops with the classification model to obtain a pest classification result; (5) obtain the species and quantity of the pests from the position information, the preliminary classification result and the pest classification result, i.e. pictures collected in a real scene can be analyzed to produce the final detection result.
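The two-stage fusion described above can be sketched as follows. The class names, the `classify_crop` callback and the choice of which categories count as "similar" are hypothetical stand-ins; in the patent the first stage is the YOLOv4 detector and the second stage the DenseNet classifier.

```python
# Sketch of the detect -> crop -> re-classify -> count fusion.
# Detections in the "similar" set are re-decided by the second-stage
# classifier; everything else keeps its first-stage label.
SIMILAR_CLASSES = {"armyworm", "cutworm", "borer"}  # illustrative class names

def fuse(detections, classify_crop):
    """detections: list of (class_name, score, box);
    classify_crop: callback box -> class_name (the second-stage classifier)."""
    results = []
    for cls, score, box in detections:
        if cls in SIMILAR_CLASSES:
            cls = classify_crop(box)          # second stage decides among similar species
        results.append((cls, box))
    counts = {}                               # species and quantity, as in step (5)
    for cls, _ in results:
        counts[cls] = counts.get(cls, 0) + 1
    return results, counts
```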
By the method, the field population dynamics of the target insects can be presented to the user dynamically and visually through clear images and accurate data, which solves the pain point of miscounting and misreporting caused by the infrared counting used in existing sex-trapping forecast products and reduces the inconvenience and inaccuracy of sorting insects and re-verifying data by hand.
In addition, the method, system or device can be applied to an insect sex-trapping device so that the species and quantity of captured pests are obtained directly. Existing sex-trapping equipment thus gains identification capability while remaining light, small and cheap, so that the concept of integrated ecological pest management is better implemented in agricultural and forestry pest control, actively responding to the green prevention-and-control and environmental-protection policies advocated by the state.
The company's self-developed sex-trapping device is deployed in the field to collect pest images, which form the original data set; the more images the better. After collection, the images in the original data set are calibrated with the labelImg tool, generally one xml calibration file per image. Calibration specifically comprises: marking pest category information and pest position information on the original data set, followed by sample enhancement.
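labelImg writes one Pascal VOC xml file per image, as the paragraph above notes; a minimal reader for the category and position annotations might look like this (the field names `object`, `name` and `bndbox` are standard VOC fields, not specific to this patent).

```python
import xml.etree.ElementTree as ET

def read_voc_annotations(xml_text):
    """Return [(species, (xmin, ymin, xmax, ymax)), ...] from a labelImg XML string."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")                # pest category information
        bb = obj.find("bndbox")                    # pest position information
        box = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes
```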
The training data set of the invention comprises 5,000 pictures containing about 40,000 pest targets in 8 categories. Of the 8 categories in the original data set, 3 are relatively hard to distinguish, so these 3 difficult categories are taken as the pest category data set from which the classification training data set is built: 3 categories and about 9,000 small pictures in total.
In step S200, the first processing is performed on the original data set, and the training and verifying of the target detection model based on the first processing result specifically includes:
carrying out sample enhancement processing on the calibrated original data set and dividing the calibrated original data set into a training data set, a testing data set and a verification data set, wherein the enhancement processing comprises one or more of rotation processing, brightness and tone adjustment processing, denoising processing and image mirroring processing;
and inputting the training data set into the target detection model for training, taking the test data set and the verification data set as input after the training is finished, and verifying the training result to further obtain the target detection model.
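The enhancement-and-split step above can be sketched as below. The split ratios and the string-based augmentation stand-ins are illustrative assumptions; the patent only lists the augmentation types and the three partitions.

```python
import random

def augment(image):
    # stand-ins for rotation, brightness/tone adjustment, denoising, mirroring
    ops = [lambda im: im + "+rot", lambda im: im + "+bright",
           lambda im: im + "+denoise", lambda im: im + "+mirror"]
    return random.choice(ops)(image)

def split_dataset(samples, train=0.8, test=0.1, seed=0):
    """Shuffle and divide into training / testing / verification partitions
    (the 80/10/10 ratio is an illustrative assumption)."""
    rng = random.Random(seed)
    s = samples[:]
    rng.shuffle(s)
    n_train, n_test = int(len(s) * train), int(len(s) * test)
    return s[:n_train], s[n_train:n_train + n_test], s[n_train + n_test:]
```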
In step S200, selecting pest pictures with similar species for second processing based on the calibrated original data set, training and verifying a classification model based on a second processing result, specifically:
screening the calibrated original data set by the degree of similarity between pest categories, selecting a similar-pest-category data set, and cropping it;
performing sample enhancement processing on the cropping result and dividing the result into a classification model training data set, a classification model testing data set and a classification model verification set, wherein the enhancement processing comprises one or more of rotation processing, brightness and tone adjustment processing, denoising processing and image mirroring processing;
and inputting the classification model training data set into a classification model for training, taking the classification model test data set and the classification model verification data set as input after the training is finished, and verifying the training result to further obtain the classification model.
Because some categories are highly similar, a pure detection network cannot separate them accurately, so pictures the detection model cannot identify must additionally pass through a classification model. When training the classification model, a similar-pest-category database is therefore selected, and the target of each such pest class is cropped and saved as a new small picture to form a new pest category data set; these data sets undergo sample enhancement before training.
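Building the classification data set from the hard categories amounts to cropping each annotated target out of its full image, as described above; a sketch with images as nested pixel lists (the data layout is an assumption for illustration):

```python
def crop(image, box):
    """image: 2-D list of pixel values; box: (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = box
    return [row[xmin:xmax] for row in image[ymin:ymax]]

def build_classification_set(annotations, images, hard_classes):
    """Keep only targets of the hard (similar) categories as new small pictures."""
    crops = []
    for img_id, (cls, box) in annotations:
        if cls in hard_classes:
            crops.append((cls, crop(images[img_id], box)))
    return crops
```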
In one embodiment, the input image size of the classification network is 256x256, and a deep classification model is trained on the pest category data set with a DenseNet network in the Darknet framework; the specific structure of the DenseNet classification network is shown in FIG. 4. The DenseNet network uses the DenseBlock + Transition structure and directly connects feature maps from different layers, achieving feature reuse and improving efficiency.
In one embodiment, before inputting the training data set into the target detection model for the training step, the method further comprises:
and modifying the anchor frame parameter value of the target detection model based on a k-means clustering algorithm.
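The anchor-modification step can be sketched with plain Lloyd iterations over the (width, height) pairs of the calibrated boxes. YOLO practice typically uses 1 - IoU as the clustering distance; squared Euclidean distance and first-k initialization are used here for brevity.

```python
def kmeans_anchors(whs, k, iters=20):
    """whs: list of (width, height) of calibrated boxes; returns k anchor sizes."""
    centers = whs[:k]                         # naive init: first k boxes
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in whs:                      # assign each box to nearest center
            i = min(range(k), key=lambda c: (w - centers[c][0]) ** 2
                                            + (h - centers[c][1]) ** 2)
            clusters[i].append((w, h))
        centers = [                           # recompute centers (keep empty ones)
            (sum(w for w, _ in cl) / len(cl), sum(h for _, h in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return sorted(centers)
```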
In one embodiment, before inputting the classification model training data set into the classification model for the training step, the method further comprises:
and calculating a loss function in the classification model based on a forward network propagation algorithm, and updating parameters of the classification model through a backward propagation algorithm to obtain a corrected classification model.
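The forward-loss / backward-update cycle can be illustrated in miniature with Momentum gradient descent on a one-dimensional quadratic loss; the momentum scheme matches the one the embodiments describe, while the toy loss function is of course only an illustration.

```python
def train(grad, w=5.0, lr=0.1, momentum=0.9, steps=200):
    """Momentum SGD: forward pass gives the loss gradient, backward pass updates w."""
    v = 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)   # momentum accumulates past gradients
        w = w + v                         # parameter update
    return w

# toy loss L(w) = (w - 2)^2, so dL/dw = 2*(w - 2); the minimum is at w = 2
```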
Through the steps described in the above embodiments, the training process of the target detection model and the classification model is as follows:
the input size of the training pictures is increased to 1024x1024;
CSPDarknet53 is selected as the model backbone network, with 29 3x3 convolution layers, a 725x725 receptive field and 27.6M parameters;
an SPP module is added to the CSPDarknet53 network, enlarging the receptive field and separating out the most important context features without slowing the network down;
PANet is used as the method of aggregating parameters from different backbone levels for the different detector levels, replacing the original FPN method;
data augmentation modes such as Mosaic, MixUp, CutMix and self-adversarial training (SAT) are added in the training configuration file;
a k-means clustering algorithm is run on the sex-trapping data set at size 1024x1024 to obtain the optimal anchor parameters;
the small picture classification model was trained in the Darknet framework using the DenseNet model,DenseNet has a more aggressive dense connection mechanism than ResNet, and each layer accepts all its previous layers as its extra input. ResNet is the short-circuiting of each layer with some previous layer, by element-level addition. In DenseNet, each layer is connected to all previous layers in the channel latitude and serves as the input for the next layer. For a network of L-layer, DenseNet comprises
Figure BDA0002874861930000081
This is a dense connection compared to ResNet. And DenseNet is directly connected with feature maps from different layers, so that feature reuse can be realized, and efficiency is improved.
Expressed as formulas, the output of a conventional network at layer l is:
x_l = H_l(x_{l-1})
ResNet adds an identity mapping of the previous layer's input:
x_l = H_l(x_{l-1}) + x_{l-1}
In the DenseNet used in the present invention, all previous layers are concatenated as the input:
x_l = H_l([x_0, x_1, ..., x_{l-1}])
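The three formulas above differ only in how previous outputs reach the layer function H_l; modelling feature maps as plain lists of channel values makes the difference concrete (the layer function `h` below is a stand-in):

```python
def plain_layer(h, x_prev):
    # conventional network: only the previous output is passed on
    return h(x_prev)

def resnet_layer(h, x_prev):
    # ResNet: element-wise addition of the identity mapping
    return [a + b for a, b in zip(h(x_prev), x_prev)]

def densenet_layer(h, earlier_outputs):
    # DenseNet: concatenate every earlier output along the channel dimension
    concat = [c for x in earlier_outputs for c in x]
    return h(concat)
```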
CNN networks generally reduce the spatial size of feature maps with pooling layers or convolutions of stride greater than 1, whereas DenseNet's dense connection requires feature sizes before and after a layer to stay the same. To keep dimensions consistent, the DenseNet network uses the DenseBlock + Transition structure: a DenseBlock is a module of several layers whose feature maps are all the same size and which are densely connected, while a Transition module joins two adjacent DenseBlocks and reduces the feature-map size through a pooling layer. Within a DenseBlock the feature maps of every layer are the same size, so they can be concatenated along the channel dimension. The nonlinear composite function in a DenseBlock uses the BN + ReLU + 3x3 convolution structure. DenseNet also has a hyper-parameter k called the growth rate: every layer in every DenseBlock outputs k feature maps after convolution, i.e. k channels, which can also be described as using k convolution kernels. A small k (e.g. 12) generally already gives good performance. If the input layer has k0 feature-map channels, layer l receives k0 + k(l-1) channels, so as depth grows the input to a DenseBlock becomes large even for small k, although feature reuse means only k of the features are new at each layer. To keep later layers' inputs manageable, a bottleneck layer is used inside the DenseBlock to cut computation: a 1x1 convolution is added to the original structure. This is the DenseNet-B structure, BN + ReLU + 1x1_Conv + BN + ReLU + 3x3_Conv, in which the 1x1 convolution produces 4k feature maps, reducing the number of features and improving computational efficiency.
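The growth-rate bookkeeping described above (k0 input channels, k new feature maps per layer) can be checked with a few lines; the numbers k0 = 64 and k = 12 used below are illustrative.

```python
def input_channels(k0, k, l):
    """Channels entering layer l of a DenseBlock: k0 + k*(l-1)."""
    return k0 + k * (l - 1)

def denseblock_channels(k0, k, num_layers):
    """Channels entering each layer, and the block's final output width
    (last layer's input plus its k new feature maps)."""
    ins = [input_channels(k0, k, l) for l in range(1, num_layers + 1)]
    return ins, ins[-1] + k
```

For k0 = 64, k = 12 and four layers this gives per-layer inputs [64, 76, 88, 100] and a 112-channel block output, showing how the input grows by k per layer even though each layer contributes only k new features.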
The role of the Transition layer is to connect two adjacent DenseBlocks and reduce the feature-map size. A Transition contains a 1x1 convolutional layer and a 2x2 average pooling layer, i.e. the structure BN + ReLU + 1x1_Conv + 2x2_AvgPooling, and it can also compress the model. Because DenseNet performs better than ResNet on public data sets, the invention adopts DenseNet as the backbone network of the classification model. Through the above steps, the target detection model and the classification model are finally trained.
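As a minimal sketch of the channel bookkeeping just described (function names are illustrative, not from any framework): the input width of layer l in a DenseBlock is k0 + k(l-1), and the DenseNet-B bottleneck first narrows each layer's input to 4k channels.

```python
# Channel bookkeeping for a DenseBlock, following the formulas in the text.
# Assumption: layer indices start at 1; names here are illustrative.

def denseblock_input_channels(k0, k, num_layers):
    """Input channel count of each layer: k0 + k*(l-1)."""
    return [k0 + k * (l - 1) for l in range(1, num_layers + 1)]

def bottleneck_width(k):
    """DenseNet-B: the 1x1 conv outputs 4k feature maps before the 3x3 conv."""
    return 4 * k

channels = denseblock_input_channels(k0=64, k=12, num_layers=6)
print(channels)               # input widths grow by k per layer
print(bottleneck_width(12))   # 48
```

Even with k as small as 12, the input width grows linearly with depth, which is exactly why the 1x1 bottleneck is worthwhile.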
When the convolutional neural network is used to train the detection model, the detection training set is first enhanced, CSPDarkNet53 is used as the model backbone, and the input size is 1024x1024x3; the network is initialized, the loss function is computed by forward propagation, and the model parameters are updated by back-propagation, for a total of 100000 iterations; momentum stochastic gradient descent is adopted with the momentum value set to 0.949; the training batch size is 64 and subdivisions is 16 — too large a batch size or too small a subdivisions value will exhaust video memory; the initial learning rate is set to 0.001 and is decreased to 0.0001 and 0.00001 at iterations 80000 and 90000; a k-means algorithm clusters the most suitable anchor values for the data set; during training, a candidate box whose intersection-over-union with the ground truth exceeds 0.5 is treated as a positive sample, otherwise as a negative sample; a complete training run takes approximately 30 hours.
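Two details of this recipe can be sketched concretely: the stepped learning-rate schedule (0.001 → 0.0001 → 0.00001 at iterations 80000 and 90000) and the IoU-based positive/negative sample rule. This is a hedged sketch; the function names are illustrative, not part of the Darknet framework.

```python
# Step learning-rate schedule and IoU-based sample assignment as described
# in the text. Values mirror the text; the helper names are illustrative.

def step_lr(iteration, base_lr=0.001, steps=(80000, 90000), gamma=0.1):
    """Drop the learning rate by 10x at each step: 0.001 -> 0.0001 -> 0.00001."""
    lr = base_lr
    for s in steps:
        if iteration >= s:
            lr *= gamma
    return lr

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def is_positive(candidate, ground_truth, thresh=0.5):
    """A candidate box counts as a positive sample if IoU exceeds 0.5."""
    return iou(candidate, ground_truth) > thresh

print(step_lr(0), step_lr(85000), step_lr(95000))
print(is_positive((0, 0, 2, 2), (0, 0, 2, 2)))   # identical boxes: positive
```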
When the convolutional neural network is used to train the classification model, DenseNet serves as the backbone network with an input size of 256x256x3; the network is initialized, the loss function is computed by forward propagation, and the model parameters are updated by back-propagation, for a total of 50000 iterations; momentum stochastic gradient descent is adopted with the momentum value set to 0.9; the training batch size is set to 128 and subdivisions to 4; the initial learning rate is set to 0.1; a complete training run takes about 15 hours.
The pest picture to be detected is cut based on the position information of the similar categories, the cut results are classified by the classification model to obtain a pest classification result, the detection-model result and the classification-model result are fused, and the types and quantities of the pests are obtained from the position information, the preliminary classification result and the pest classification result. Of the 8 target pests, 5 have distinctive features and can be handled well by the target detection model alone, while the 3 look-alike pests (Spodoptera frugiperda, Prodenia litura and armyworm) additionally require one pass through the classification model; the two results are fused into the final result. If the detection result is one of the 5 easier pests, the result is returned directly; if it is one of the three difficult classes, the image inside the detected box is fed into the DenseNet classification network, the classification result is compared with the detection result, and the class with the higher score is taken as the final output; the final detection result is stored in dictionary format, transmitted to the terminal through redis, and displayed. According to the work log of the outdoor intelligent forecasting-machine test of Topu Yunnong, 8 test forecasting machines were deployed in the field from June 14 to September 3, 576 groups of valid data were captured in total, and comparing the algorithm's identifications against manually corrected results gives an accuracy of about 95%. The test results for the three main categories of pests are shown in the following table:
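The fusion rule described above can be sketched as a short decision function. The class names and the stub classifier callable are illustrative; in the real system the classifier is the trained DenseNet applied to the cropped detection box.

```python
# Sketch of the two-stage fusion rule: easy classes are returned directly
# from the detector; the three look-alike classes are re-scored by the
# classifier on the cropped box, and the higher-scoring label wins.
# Class names and the stub classifier are illustrative.

HARD_CLASSES = {"Spodoptera frugiperda", "Prodenia litura", "armyworm"}

def fuse(detection, classify_crop):
    """detection: (label, score); classify_crop: callable -> (label, score)."""
    det_label, det_score = detection
    if det_label not in HARD_CLASSES:
        return det_label                      # easy class: trust the detector
    cls_label, cls_score = classify_crop()    # re-classify the cropped box
    return cls_label if cls_score > det_score else det_label

print(fuse(("Chilo suppressalis", 0.91), lambda: ("armyworm", 0.99)))
print(fuse(("armyworm", 0.60), lambda: ("Prodenia litura", 0.85)))
```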
Species of pest       | Number of pictures | Algorithm identification count | Manual correction count | Recognition rate
Chilo suppressalis    | 292                | 1751                           | 1798                    | 97.38%
Spodoptera frugiperda | 159                | 254                            | 258                     | 98.45%
Asiatic corn borer    | 26                 | 165                            | 184                     | 89.67%
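The recognition rates in the table can be recomputed as the algorithm's identification count divided by the manually corrected count; a quick check (the last digit may differ only by rounding in the source):

```python
# Recomputing the recognition rates from the table:
# rate = algorithm identification count / manual correction count.
rows = {
    "Chilo suppressalis":    (1751, 1798, 97.38),
    "Spodoptera frugiperda": (254, 258, 98.45),
    "Asiatic corn borer":    (165, 184, 89.67),
}
for name, (algo, manual, reported) in rows.items():
    rate = 100.0 * algo / manual
    print(f"{name}: {rate:.2f}%")
    assert abs(rate - reported) < 0.02   # agrees with the table to rounding
```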
In another embodiment, rice planthoppers are taken as the detection object and their species are detected, as shown in figs. 7-8. The first-layer model uses a yolov4 target detection model. The yolov4 detection algorithm improves on yolov3 mainly in the following aspects. Considering the computational load, yolov4 improves the input pipeline during training so that training performs well on a single GPU. The data-enhancement change that contributes the most is Mosaic augmentation: the Mosaic used in yolov4 draws on the CutMix augmentation proposed at the end of 2019, but whereas CutMix splices only two pictures, Mosaic splices 4 pictures with random zooming, random cropping and random arrangement, which makes the target distribution more uniform and helps performance. The backbone network CSPDarknet53 used by yolov4 is a Backbone structure generated on the basis of the yolov3 backbone Darknet53 and contains 5 CSP modules; the convolution kernel in front of each CSP module is 3x3 with stride 2, which performs downsampling. Since the Backbone has 5 CSP modules and the input image is 608x608, the feature map changes as 608 -> 304 -> 152 -> 76 -> 38 -> 19, giving a 19x19 feature map after the 5 CSP modules. Only the Backbone adopts the Mish activation function; the rest of the network still uses the Leaky_relu activation function. The advantage is that the learning ability of the CNN is enhanced, so accuracy is maintained while the network stays light, and computation and memory cost are also reduced. Yolov4 also innovates in the Neck.
In the field of target detection, in order to better extract fused features, some layers are usually inserted between the Backbone and the output layer; this part is called the Neck. As the neck of the target detection network it is very critical, and the Neck of yolov4 mainly adopts an SPP module together with an FPN + PAN structure.
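The five stride-2 downsampling stages described above can be verified with a one-line halving chain; this is just arithmetic on the sizes given in the text, not part of any framework.

```python
# The five stride-2 CSP downsampling stages described above, checked by
# halving the 608x608 input size five times.

def downsample_chain(size, stages=5, stride=2):
    sizes = [size]
    for _ in range(stages):
        size //= stride
        sizes.append(size)
    return sizes

print(downsample_chain(608))   # [608, 304, 152, 76, 38, 19]
```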
In this embodiment, the second-layer model of the large-resolution-image rice planthopper detection method uses a bilinear network based on Resnet 50. A bilinear network contains two branch CNN structures, which may be the same network or different networks; in this embodiment Resnet 50 is used for both branches, and after an image is sent through the two branches, the two resulting feature branches are fused. Cross entropy is selected as the loss function and SGD as the optimization method. The initial learning rate is set to 0.01, the batch size to 8, the decay rate to 0.00001, the number of training epochs to 20, and top-5 is used as the evaluation index.
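The fusion step of a bilinear network combines the two branches' features by an outer product. The sketch below shows that operation on plain feature vectors; shapes are illustrative (the real model fuses spatial feature maps from the two ResNet-50 branches), and the signed-square-root and L2 normalization steps are common bilinear-pooling practice rather than something stated in this text.

```python
import numpy as np

# Minimal sketch of bilinear fusion: features from the two branches are
# combined by an outer product and flattened into one descriptor.

def bilinear_fuse(feat_a, feat_b):
    outer = np.outer(feat_a, feat_b)                  # (C_a, C_b) pairwise products
    fused = outer.flatten()                           # flatten to a descriptor
    fused = np.sign(fused) * np.sqrt(np.abs(fused))   # signed sqrt (common practice)
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else fused

a = np.random.rand(8)   # stand-in for branch-A features
b = np.random.rand(8)   # stand-in for branch-B features
v = bilinear_fuse(a, b)
print(v.shape)          # (64,)
```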
The first-layer classifier data set consists of pest images of two kinds of rice planthopper (Sogatella furcifera and the brown planthopper), all collected from intelligent pest-situation forecasting lamps. The 2000 images were stored in JPG format at a resolution of 5472x3648 (WxH), and each was divided evenly into 16 small images at 1368x912 (WxH), giving 32000 images. All target pests in the data samples were marked with the LabelImg annotation tool, all data were gathered and checked by an expert to form the original data set, and the original data set was uniformly and randomly divided into a training data set, a verification data set and a test data set in the ratio 7.5:1.5:1. Single planthopper individuals obtained by segmenting the original images were randomly rotated at multiple angles to build the second-layer classifier database, which contains 38000 Sogatella furcifera, 36000 brown planthoppers and 41000 planthopper-like non-targets. The first-layer model was trained under the Darknet framework. To improve detection accuracy, the input network parameters width and height were both set to 960. Considering memory, the batch size in the network configuration file was set to 64, meaning the parameters are updated once every 64 input images; subdivisions was set to 8, so batch/subdivisions = 8 samples are fed to the network at a time. momentum affects how fast gradient descent approaches the optimum and is usually set to 0.9; the invention sets momentum to 0.9. For better feature extraction, the maximum number of iterations was set to 200000, with the iteration step size set to 20000 before iteration 150000 and to 10000 between iterations 150000 and 200000, and the initial learning rate set to 0.01.
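The tiling arithmetic in this paragraph (a 5472x3648 capture split evenly into 16 tiles of 1368x912) corresponds to a 4x4 grid; a quick sketch with illustrative helper names:

```python
# Tiling arithmetic from the data-set description: each 5472x3648 capture
# is split evenly into a 4x4 grid of 1368x912 tiles, 16 tiles per capture.

def tile_grid(width, height, cols=4, rows=4):
    tw, th = width // cols, height // rows
    tiles = [(c * tw, r * th, tw, th) for r in range(rows) for c in range(cols)]
    return tw, th, tiles

tw, th, tiles = tile_grid(5472, 3648)
print(tw, th, len(tiles))   # 1368 912 16
```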
The second-layer classification model was trained with PyTorch; the input network parameters width and height were both set to 224, the batch_size to 128, and the initial learning rate to 0.001, with the learning rate dropping to 0.1 times its previous value every 100 epochs. The total number of iterations was 200000. For the first-layer target detection model, the anchor box values of each sub-model were computed with a K-means clustering algorithm, yielding three groups of anchor values at different scales to replace the network's original anchor values.
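The anchor-clustering step can be sketched as a tiny k-means over box (width, height) pairs. This is a hedged simplification: YOLO-style anchor clustering typically uses 1 - IoU as the distance metric, whereas plain Euclidean distance is used here to keep the sketch short, and the synthetic boxes stand in for the real annotations.

```python
import numpy as np

# Illustrative k-means over (width, height) pairs to derive anchor sizes,
# as the text describes. Euclidean distance is used for brevity; real YOLO
# anchor clustering normally uses 1 - IoU as the distance.

def kmeans_anchors(boxes, k=9, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(boxes[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = boxes[assign == j].mean(axis=0)
    # sort by area so the 9 anchors split naturally into 3 scale groups
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]

boxes = np.abs(np.random.default_rng(1).normal(40, 15, size=(200, 2)))
anchors = kmeans_anchors(boxes, k=9)
print(anchors.shape)   # (9, 2): three scales x three anchors each
```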
In addition, the method of the present application further includes the following step of perfecting the picture of the pest to be detected, which specifically comprises:
classifying the images in the original data set according to the body structure of the pests, and marking each class based on its structural features;
if the pest image in the picture to be detected is an incomplete image, extracting structural features from it and comparing them with the structural features of each class marked in the original data set to obtain comparison results, of which at least three groups are recommended;
and completing the remainder of the incomplete pest image based on the comparison results to obtain a picture to be detected that contains a complete pest image.
This embodiment addresses identifying incomplete pests. The body structure of a pest is classified according to features such as the presence of wings, spots on the wings, antennae, feet, tail or shell; the classification need not be as strict as in textbooks, because the pest species is determined more carefully and accurately later, and the more features are marked, the better an incomplete image can be matched. If the pest image in the picture to be detected is incomplete, the pest is compared against the marked features; the more structural features coincide, the better the comparison result, and at least three results are finally selected. The missing part of the pest is then completed according to each result to obtain a more complete pest image; note that the completed part must be slightly larger than the missing part so that the joining regions overlap. The several overlapping, completed pest images are input into the target detection model for detection. If, in at least one of the three groups of results, only one pest is detected, the comparison result recommended earlier was correct, and the pest species is judged from that single-detection result, giving a specific identification. If the pest images spliced from the three results all yield multiple identification results, the identification that occurs most frequently among them is taken as the final detection result.
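The final selection step above amounts to a majority vote over the per-completion detection results; a minimal sketch (labels are illustrative):

```python
from collections import Counter

# Sketch of the final vote: each of the (at least) three completed images
# yields a detection label, and the most frequent label is kept.

def majority_vote(results):
    """results: list of class labels, one per completed image."""
    return Counter(results).most_common(1)[0][0]

print(majority_vote(["armyworm", "Prodenia litura", "armyworm"]))  # armyworm
```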
Example 2:
a pest identification and counting system is shown in FIG. 2 and comprises a data acquisition module 100, a processing training module 200, a result detection module 300, a processing classification module 400 and a comprehensive result module 500;
the data acquisition module 100 is configured to acquire pictures of each pest type to form an original data set, and perform calibration processing on the original data set;
the processing training module 200 is configured to perform first processing on the calibrated raw data set, train and verify a target detection model based on a first processing result; selecting insect pest pictures with similar species for second processing based on the calibrated original data set, and training and verifying a classification model based on a second processing result;
the result detection module 300 is configured to: performing object detection on the pest picture to be detected by adopting a target detection model to obtain a pest detection result, wherein the detection result comprises position information and a score result of each pest in the picture to be detected, and obtaining a primary category and a similar category based on the position information and the score result;
the processing and classifying module 400 is configured to perform cutting processing on the pest image to be detected based on the position information of the similar categories, and perform pest type classification on a result of the cutting processing by using a classification model to obtain a pest classification result;
the comprehensive result module 500 is configured to obtain the type and quantity of the pest based on the location information, the preliminary classification result, and the pest classification result.
In one embodiment, a calibration module 600 is further included, configured to mark pest category information and pest location information on the original data set.
The process training module 200 is configured to:
carrying out sample enhancement processing on the calibrated original data set and dividing the calibrated original data set into a training data set, a testing data set and a verification data set, wherein the enhancement processing comprises one or more of rotation processing, brightness and tone adjustment processing, denoising processing and image mirroring processing;
and inputting the training data set into the target detection model for training, taking the test data set and the verification data set as input after the training is finished, and verifying the training result to further obtain the target detection model.
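The enhancement operations handled by the processing training module 200 (rotation, brightness adjustment, mirroring) can be sketched with numpy array operations. Tone adjustment and denoising are omitted for brevity, and the helper names are illustrative.

```python
import numpy as np

# Minimal versions of the sample-enhancement operations listed above,
# applied to a numpy image of shape (H, W, 3).

def rotate90(img):
    """Rotate the image 90 degrees counter-clockwise."""
    return np.rot90(img)

def adjust_brightness(img, factor):
    """Scale pixel intensities, clipping back to the valid uint8 range."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def mirror(img):
    """Horizontal mirror (left-right flip)."""
    return img[:, ::-1]

img = np.full((912, 1368, 3), 200, dtype=np.uint8)
print(rotate90(img).shape, mirror(img).shape)   # (1368, 912, 3) (912, 1368, 3)
```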
The process training module 200 is configured to: selecting the calibrated original data set based on the similarity degree of pest categories, selecting a similar pest category data set, and performing cutting treatment;
performing sample enhancement processing based on the cutting processing result and dividing the sample enhancement processing into a classification model training data set, a classification model testing data set and a classification model verification set, wherein the enhancement processing comprises one or more of rotation processing, brightness and tone adjustment processing, denoising processing and image mirroring processing;
and inputting the classification model training data set into a classification model for training, taking the classification model test data set and the classification model verification data set as input after the training is finished, and verifying the training result to further obtain the classification model.
The process training module 200 is configured to: before the step of inputting the training data set into the target detection model for training, the method further comprises the following steps:
and modifying the anchor frame parameter value of the target detection model based on a k-means clustering algorithm.
The process training module 200 is configured to: before inputting the classification model training data set into the classification model for training, the method further comprises the following steps:
and calculating a loss function in the classification model based on a forward network propagation algorithm, and updating parameters of the classification model through a backward propagation algorithm to obtain a corrected classification model.
Example 3:
a computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method steps of:
obtaining pictures of various insect pest species to form an original data set, and calibrating the original data set;
performing first processing on the calibrated original data set, and training and verifying a target detection model based on a first processing result; selecting insect pest pictures with similar species for second processing based on the calibrated original data set, and training and verifying a classification model based on a second processing result;
performing object detection on the pest picture to be detected by adopting a target detection model to obtain a pest detection result, wherein the detection result comprises position information and a score result of each pest in the picture to be detected, and obtaining a primary category and a similar category based on the position information and the score result;
cutting the pest picture to be detected based on the position information of the similar categories, and classifying the pest types of the cut result by adopting a classification model to obtain a pest classification result;
and obtaining the type and the quantity of the pests based on the position information, the preliminary classification result and the pest classification result.
In an embodiment, when the processor executes the computer program, the first processing on the raw data set is implemented, and the training and verifying of the target detection model based on the first processing result is specifically:
carrying out sample enhancement processing on the calibrated original data set and dividing the calibrated original data set into a training data set, a testing data set and a verification data set, wherein the enhancement processing comprises one or more of rotation processing, brightness and tone adjustment processing, denoising processing and image mirroring processing;
and inputting the training data set into the target detection model for training, taking the test data set and the verification data set as input after the training is finished, and verifying the training result to further obtain the target detection model.
In one embodiment, when the processor executes the computer program, the selecting of the similar-kind pest images from the calibrated original data set for the second processing is realized, and the training and verifying of the classification model based on the second processing result are specifically:
selecting the calibrated original data set based on the similarity degree of pest categories, selecting a similar pest category data set, and performing cutting treatment;
performing sample enhancement processing based on the cutting processing result and dividing the sample enhancement processing into a classification model training data set, a classification model testing data set and a classification model verification set, wherein the enhancement processing comprises one or more of rotation processing, brightness and tone adjustment processing, denoising processing and image mirroring processing;
and inputting the classification model training data set into a classification model for training, taking the classification model test data set and the classification model verification data set as input after the training is finished, and verifying the training result to further obtain the classification model.
In one embodiment, the computer program, when executed by the processor, further comprises prior to the step of inputting the training data set into the target detection model for training:
and modifying the anchor frame parameter value of the target detection model based on a k-means clustering algorithm.
In one embodiment, the computer program, when executed by a processor, further comprises before the step of inputting a training data set of classification models into the classification models for training:
and calculating a loss function in the classification model based on a forward network propagation algorithm, and updating parameters of the classification model through a backward propagation algorithm to obtain a corrected classification model.
In one embodiment, when the processor executes the computer program, the calibration processing is specifically implemented as: pest category information and pest location information are marked on the original data set.
Example 4:
in one embodiment, a pest identification and counting apparatus is provided, which may be a server or a mobile terminal. The device for identifying and counting pests comprises a processor, a memory, a network interface and a database which are connected through a system bus. Wherein the processor of the pest identification and counting device is configured to provide computing and control capabilities. The memory of the pest identification and counting device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database is used for storing all data of the pest identification and counting device. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of pest identification counting.
Example 5:
The insect sex attraction device comprises a body and a pest identification and counting device, wherein the pest identification and counting device is arranged on or in the body, with the body serving as its carrier.
The intelligent trapping and forecasting algorithm of the invention completes remote image acquisition and counting of trapped insects through AI image recognition, monitors target insects in real time, and presents the field population dynamics of the target insects to the user dynamically and visually through clear images and accurate data. This not only solves the pain points of data misreporting and false alarms in existing sexual-trapping forecasting products on the market that rely on infrared counting, and reduces the inconvenience and inaccuracy of manually sorting insects to recheck the data, but also enables agricultural and forestry pest control to respond actively to the green prevention-and-control and environmental-protection policies advocated by the state.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that:
reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention. In addition, it should be noted that the specific embodiments described in the present specification may differ in the shape of the components, the names of the components, and the like. All equivalent or simple changes of the structure, the characteristics and the principle of the invention which are described in the patent conception of the invention are included in the protection scope of the patent of the invention. Various modifications, additions and substitutions for the specific embodiments described may be made by those skilled in the art without departing from the scope of the invention as defined in the accompanying claims.

Claims (9)

1. A pest identification and counting method, comprising the steps of:
obtaining pictures of various insect pest species to form an original data set, and calibrating the original data set;
performing first processing on the calibrated original data set, and training and verifying a target detection model based on a first processing result; selecting insect pest pictures with similar species for second processing based on the calibrated original data set, and training and verifying a classification model based on a second processing result;
performing object detection on the pest picture to be detected by adopting a target detection model to obtain a pest detection result, wherein the detection result comprises position information and a score result of each pest in the picture to be detected, and obtaining a primary category and a similar category based on the position information and the score result;
cutting the pest picture to be detected based on the position information of the similar categories, and classifying the pest types of the cut result by adopting a classification model to obtain a pest classification result;
and obtaining the type and the quantity of the pests based on the position information, the preliminary classification result and the pest classification result.
2. The pest identification and counting method according to claim 1, wherein the first processing is performed on the raw data set, and the target detection model is trained and verified based on the first processing result, specifically:
carrying out sample enhancement processing on the calibrated original data set and dividing the calibrated original data set into a training data set, a testing data set and a verification data set, wherein the enhancement processing comprises one or more of rotation processing, brightness and tone adjustment processing, denoising processing and image mirroring processing;
and inputting the training data set into the target detection model for training, taking the test data set and the verification data set as input after the training is finished, and verifying the training result to further obtain the target detection model.
3. The pest identification and counting method according to claim 1, wherein the pest images with similar species are selected from the calibrated original data set for second processing, and the classification model is trained and verified based on the second processing result, specifically:
selecting the calibrated original data set based on the similarity degree of pest categories, selecting a similar pest category data set, and performing cutting treatment;
performing sample enhancement processing based on the cutting processing result and dividing the sample enhancement processing into a classification model training data set, a classification model testing data set and a classification model verification set, wherein the enhancement processing comprises one or more of rotation processing, brightness and tone adjustment processing, denoising processing and image mirroring processing;
and inputting the classification model training data set into a classification model for training, taking the classification model test data set and the classification model verification data set as input after the training is finished, and verifying the training result to further obtain the classification model.
4. The pest identification and counting method of claim 2, wherein before the step of inputting the training data set into the target detection model for training, the method further comprises:
and modifying the anchor frame parameter value of the target detection model based on a k-means clustering algorithm.
5. The pest identification and counting method of claim 2, wherein before the step of inputting the classification model training data set into the classification model for training, the method further comprises:
and calculating a loss function in the classification model based on a forward network propagation algorithm, and updating parameters of the classification model through a backward propagation algorithm to obtain a corrected classification model.
6. The pest identification and counting method according to claim 1, further comprising the steps of:
completing the picture of the pest to be detected, which specifically comprises:
classifying the images in the original data set according to the body structure of the pests, and labelling each class based on the structural features of that class of pest;
if the pest image in the picture to be detected is incomplete, extracting the structural features of the pest image in the picture, and comparing the extracted features with the structural features labelled for each class in the original data set to obtain comparison results, wherein at least three groups of comparison results are recommended;
and completing the missing part of the incomplete pest image based on the comparison results to obtain a picture to be detected containing a complete pest image.
7. A pest identification and counting system is characterized by comprising a data acquisition module, a processing and training module, a result detection module, a processing and classifying module and a comprehensive result module;
the data acquisition module is used for acquiring pictures of various insect pest species to form an original data set and calibrating the original data set;
the processing and training module is used for performing first processing on the calibrated original data set and training and verifying the target detection model based on the first processing result, and for selecting pest pictures of similar species from the calibrated original data set for second processing and training and verifying the classification model based on the second processing result;
the result detection module is configured to perform object detection on the pest picture to be detected using the target detection model to obtain a pest detection result, wherein the detection result comprises the position information and score of each pest in the picture to be detected, and a preliminary category and similar categories are obtained based on the position information and the scores;
the processing and classifying module is used for cropping the pest picture to be detected based on the position information of the similar categories, and classifying the pest types in the cropping results using the classification model to obtain a pest classification result;
and the comprehensive result module is used for obtaining the types and quantities of the pests based on the position information, the preliminary category result and the pest classification result.
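The coarse-to-fine flow of the system claim — detect everything, re-classify only the crops whose coarse label falls in an ambiguous "similar category" group, then tally — can be sketched as follows. The callables `detector` and `classifier` are hypothetical stand-ins for the trained models, not the patent's API.

```python
import numpy as np
from collections import Counter

def identify_and_count(image, detector, classifier, similar_categories):
    """Two-stage pest counting: detection first, fine-grained
    classification only for detections in an ambiguous group."""
    counts = Counter()
    for box, coarse_label, score in detector(image):
        if coarse_label in similar_categories:
            # Crop the detected region and let the classifier decide the species.
            x1, y1, x2, y2 = box
            crop = image[y1:y2, x1:x2]
            counts[classifier(crop)] += 1
        else:
            # Unambiguous detections keep their coarse label.
            counts[coarse_label] += 1
    return counts
```

This mirrors the module split in claim 7: the result detection module supplies the boxes and coarse labels, the processing and classifying module handles the crops, and the comprehensive result module merges both into per-species counts.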
8. A computer-readable storage medium in which a computer program is stored, characterized in that the program, when executed by a processor, carries out the method steps of any one of claims 1 to 6.
9. A pest identification and counting apparatus, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, carries out the method steps of any one of claims 1 to 6.
CN202011611861.5A 2020-12-30 2020-12-30 Pest identification and counting method, system and device and readable storage medium Pending CN112686862A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011611861.5A CN112686862A (en) 2020-12-30 2020-12-30 Pest identification and counting method, system and device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011611861.5A CN112686862A (en) 2020-12-30 2020-12-30 Pest identification and counting method, system and device and readable storage medium

Publications (1)

Publication Number Publication Date
CN112686862A true CN112686862A (en) 2021-04-20

Family

ID=75455362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011611861.5A Pending CN112686862A (en) 2020-12-30 2020-12-30 Pest identification and counting method, system and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN112686862A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111207A (en) * 2021-04-21 2021-07-13 海南绿昆环岛害虫防治有限公司 Harmful organism control and detection information system and implementation method thereof
CN113111207B (en) * 2021-04-21 2023-04-07 海南绿昆环岛害虫防治有限公司 Harmful organism control and detection information system and implementation method thereof
CN113362032A (en) * 2021-06-08 2021-09-07 贵州开拓未来计算机技术有限公司 Verification and approval method based on artificial intelligence image recognition
CN113449806A (en) * 2021-07-12 2021-09-28 苏州大学 Two-stage forestry pest identification and detection system and method based on hierarchical structure
CN113558022A (en) * 2021-07-22 2021-10-29 湖北第二师范学院 Intelligent wheat field guarding robot capable of predicting insect pests

Similar Documents

Publication Publication Date Title
CN112686862A (en) Pest identification and counting method, system and device and readable storage medium
Xu et al. Aerial images and convolutional neural network for cotton bloom detection
CN111178197B (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
KR101830056B1 (en) Diagnosis of Plant disease using deep learning system and its use
Deng et al. Deep learning-based automatic detection of productive tillers in rice
CN112668490B (en) Yolov 4-based pest detection method, system, device and readable storage medium
CN109242826B (en) Mobile equipment end stick-shaped object root counting method and system based on target detection
CN109740483A (en) A kind of rice growing season detection method based on deep-neural-network
CN111797760A (en) Improved crop pest and disease identification method based on Retianet
CN112001370A (en) Crop pest and disease identification method and system
CN115272828A (en) Intensive target detection model training method based on attention mechanism
CN110751226A (en) Crowd counting model training method and device and storage medium
Zhang et al. Biometric facial identification using attention module optimized YOLOv4 for sheep
CN116030348A (en) LS-YOLOv5 network-based mung bean leaf spot disease detection method and device
CN112164030A (en) Method and device for quickly detecting rice panicle grains, computer equipment and storage medium
CN110399804A (en) A kind of food inspection recognition methods based on deep learning
CN112883915B (en) Automatic wheat head identification method and system based on transfer learning
Menezes et al. Pseudo-label semi-supervised learning for soybean monitoring
CN113869098A (en) Plant disease identification method and device, electronic equipment and storage medium
CN108229467A (en) Interpret the method, apparatus and electronic equipment of remote sensing images
CN114663769B (en) Fruit identification method based on YOLO v5
CN113553897A (en) Crop identification method based on unmanned aerial vehicle and YOLOv3 model
CN114359644B (en) Crop pest identification method based on improved VGG-16 network
CN113609913B (en) Pine wood nematode disease tree detection method based on sampling threshold interval weighting
Deng et al. A paddy field segmentation method combining attention mechanism and adaptive feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhu Xuhua; Chen Yuyang; Feng Jin; Wu Hongyang; Liu Zhimin; Shen Zhihui; Yao Bo

Inventor before: Zhu Xuhua; Chen Yuyang; Feng Jin; Wu Hongyang; Liu Zhimin; Shen Zhihui; Yao Bo

CB03 Change of inventor or designer information

Inventor after: Zhu Xuhua; Zhou Tiefeng; Chen Yuyang; Feng Jin; Wu Hongyang; Liu Zhimin; Shen Zhihui; Yao Bo

Inventor before: Zhu Xuhua; Chen Yuyang; Feng Jin; Wu Hongyang; Liu Zhimin; Shen Zhihui; Yao Bo
