CN113191334A - Plant canopy dense leaf counting method based on improved CenterNet - Google Patents

Plant canopy dense leaf counting method based on improved CenterNet

Info

Publication number
CN113191334A
Authority
CN
China
Prior art keywords
image
leaf
network
leaves
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110598653.4A
Other languages
Chinese (zh)
Other versions
CN113191334B (en)
Inventor
陆声链
陈文康
李帼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Normal University
Original Assignee
Guangxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Normal University filed Critical Guangxi Normal University
Priority to CN202110598653.4A priority Critical patent/CN113191334B/en
Publication of CN113191334A publication Critical patent/CN113191334A/en
Application granted granted Critical
Publication of CN113191334B publication Critical patent/CN113191334B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a plant canopy leaf counting method based on an improved CenterNet, which improves the CenterNet network model structure and optimizes its loss function. A space-to-depth conversion module is added at the network input to convert the input image into sub-maps of different depths, and a CBAM attention module is introduced so that leaf edge information can be detected at different resolutions; then atrous spatial pyramid pooling (dilated convolution) is used to extract image receptive-field features at different scales, which are fused and input into a DLA-34 backbone network; finally, a reverse space-to-depth conversion module links the feature information obtained by the DLA-34 network at different stages to retain the features of dense, irregular leaves, and the smooth L1 function is used to optimize the target prediction loss function of the original CenterNet network. The method overcomes the problem of detecting overlapping and irregular leaves in different receptive fields, and can count plant canopy leaves at different growth stages and with different degrees of occlusion in a complex natural environment.

Description

Plant canopy dense leaf counting method based on improved CenterNet
Technical Field
The invention relates to the technical field of image recognition, in particular to a plant canopy dense leaf counting method based on improved CenterNet.
Background
Plant phenotype research is one of the core research and application fields recognized by academia and industry, and one of the key technical fields for meeting future agricultural challenges. Plant phenotypic parameters change as plant organs grow, and accurate, rapid acquisition of phenotypic information helps to understand crop yield patterns and improve yields. Rapid detection of plant phenotype information has therefore become a research hotspot of modern agriculture and information technology and plays a crucial role in scientific research and in improving crop productivity. At the same time, a large number of more accurate and efficient image processing algorithms and techniques have emerged in computer vision, bringing new opportunities to plant image processing while also posing new challenges.
Many researchers have studied this problem and proposed solutions. For example, some have proposed a color-based leaf segmentation technique that extracts leaf regions from plant images with complex backgrounds by watershed segmentation in HSV color space, and matches existing extracted leaf templates with hidden data to improve segmentation accuracy. Others have proposed image segmentation based on shape characteristics, using an active contour method with parameters that constrain the segmentation contour to extract the leaf contour from the picture. In recent years, leaf identification methods based on convolutional neural networks have been proposed; these generally acquire RGB leaf images, preprocess and label them to build a leaf data set, set the network model parameters, feed the training set into the convolutional neural network for training, and finally obtain a leaf detection model. In addition, some use image-computation-based methods to obtain an initial contour image of the leaf to be measured, screen the image by geometric morphology to obtain the contour of the target leaf, and then segment and identify it by contour features.
Existing convolutional-neural-network-based leaf identification methods have two shortcomings: when counting leaves they over-emphasize the recognition precision of a single target while ignoring the depth and detection speed of the convolutional neural network; and when the network structure is optimized, recognition accuracy often drops and recognition information for the specified target is lacking. Methods based on image computation or region segmentation mainly fail to identify overlapping, dense leaves in complex environments: they can only roughly segment obvious leaf contours or feature descriptions, lose the details of occluded leaves, and cannot give an accurate leaf count.
Individual leaf shape, color, growth characteristics and other traits differ between varieties and change over the growth cycle; even leaves of the same variety show different morphological characteristics, and leaves of different varieties show different trait characteristics. Because different plants differ greatly in geometric parameters such as shape and texture, and because illumination, overlapping leaves, pests and other factors in natural environments affect leaf identification, both the leaves themselves and the influence of a complex environment must be considered when counting leaves.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a plant canopy dense leaf counting method based on an improved CenterNet. It further improves the CenterNet algorithm along the following lines: first, the target size prediction loss function in the algorithm is optimized; then a space-to-depth module is combined with a CBAM (Convolutional Block Attention Module) attention module to emphasize leaf features at different scales; finally, atrous spatial pyramid pooling (ASPP, based on dilated convolution) is densely connected with the deep feature network. This solves the problem of detecting overlapping and irregular leaves in different receptive fields, so that plant canopy leaves at different growth stages and with different degrees of occlusion can be counted in a complex natural environment.
The technical scheme for realizing the purpose of the invention is as follows:
a plant canopy leaf counting method based on improved CenterNet, comprising the steps of:
s1, acquiring images: collecting images of plant canopies, naming the pictures according to the format of a COCO data set, and simultaneously creating two folders named Anno and Ima;
s2, image preprocessing:
s2-1, image marking: mark the leaves in the images collected in step S1 using the image marking tool LabelImg, marking the positions of the leaves and classifying the type or degree of overlap/occlusion of each leaf;
(1) when marking leaves on the image, the plant species to which the leaves belong is added to the label; the naming format of the label is: the English name of the plant followed by -leaf; for example, if the plant is Cucumber, the leaf tag may be named Cucumber-leaf; if the plant is an Eggplant, the leaf tag may be named Eggplant-leaf;
(2) when marking leaves that are more than 50% occluded, the label is named, on the basis of step (1), as the English name followed by -leaf-o, such as Cucumber-leaf-o; when marking leaves that are not more than 50% occluded, the label is named as the English name followed by -leaf-e, such as Cucumber-leaf-e;
(3) when marking leaves in the growing period, the label is named, on the basis of step (1), as the English name followed by -leaf-g, such as Cucumber-leaf-g; when marking leaves at maturity, the label is named as the English name followed by -leaf-m, such as Cucumber-leaf-m.
S2-2, image amplification: if the image collected in the step S1 cannot meet the requirement that 200 pictures are needed for identifying one type of leaf, performing image amplification; selecting an image storage path and an XML file path for marking information, formulating an amplified image output path, performing image amplification according to the required quantity on the basis of an original image, and selecting the size, the rotation angle and the definition parameters of the image to amplify the image;
s2-3, calculating the mean value and the standard deviation of the picture, and the steps are as follows:
s2-3-1, first, put the marked and amplified data set pictures into a folder;
s2-3-2, calculate the mean value and the standard deviation of the pictures using the meanStdDev function in the OpenCV library;
s2-3-3, calculate the maximum and minimum pixel values of the pictures using the minMaxLoc function in the OpenCV library;
s2-3-4, create a Python file for the user-defined leaf class, change the class name to the user-defined leaf class name, and write the mean value and standard deviation obtained in step S2-3-2 and the pixel values obtained in step S2-3-3 into the Python file (a code sketch of these sub-steps is given below);
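As a concrete illustration of steps S2-3-1 to S2-3-4, a minimal Python sketch follows. Only the use of OpenCV's meanStdDev and minMaxLoc functions comes from the steps above; the folder name, the *.jpg file pattern, and the idea of averaging per-image statistics over the whole data set are illustrative assumptions.

# Hedged sketch of steps S2-3-1 to S2-3-4: per-channel mean/std and global
# min/max pixel value over a folder of marked and amplified data set pictures.
# The folder name and the averaging over images are illustrative assumptions.
import glob
import cv2
import numpy as np

means, stds = [], []
pixel_min, pixel_max = 255.0, 0.0

for path in glob.glob("dataset/*.jpg"):            # S2-3-1: pictures in one folder
    img = cv2.imread(path)
    mean, std = cv2.meanStdDev(img)                # S2-3-2: per-channel mean/std
    means.append(mean.ravel())
    stds.append(std.ravel())
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lo, hi, _, _ = cv2.minMaxLoc(gray)             # S2-3-3: min/max pixel value
    pixel_min, pixel_max = min(pixel_min, lo), max(pixel_max, hi)

# S2-3-4: these values would be written into the custom leaf-class Python file.
print("mean:", np.mean(means, axis=0))
print("std:", np.mean(stds, axis=0))
print("pixel min/max:", pixel_min, pixel_max)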
s2-4, dividing the data set: dividing the amplified image and the labeled file into a training set, a testing set and a verification set, wherein the training set, the testing set and the verification set respectively account for 70%, 15% and 15%;
s3, setting network model parameters: in the configuration file of the YOLOv4 network model, set the input image size of the convolutional neural network to 512 × 512, and set the number of recognition classes, the batch_size value and the number-of-iterations parameter according to the computer's memory and video memory and the required final detection effect; set the number of threads supporting CUDA acceleration; the functions and requirements of the specific parameters are as follows:
s3-1, when the parameter num_workers is 1 (multi-threaded training enabled), the batch_size parameter is 64, the number of iterations is 6000 and the number of detected object classes is 2, the GPU is used to train the model and at least 6 GB of memory is needed;
s3-2, when the parameter num_workers is 0 (multi-threaded training disabled), the batch_size parameter is 64, the number of iterations is 6000 and the number of detected object classes is 2, the CPU is used to train the model and at least 6 GB of memory is needed;
s3-3, when the parameter num_workers is 1 (multi-threaded training enabled), the batch_size parameter is 16, the number of iterations is 6000 and the number of detected object classes is 2, the GPU is used to train the model and at least 4 GB of memory is needed;
s3-4, when the parameter num_workers is 0 (multi-threaded training disabled), the batch_size parameter is 16, the number of iterations is 6000 and the number of detected object classes is 2, the CPU is used to train the model and at least 4 GB of memory is needed (these four combinations are summarized in the sketch below);
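For clarity, the four parameter combinations in steps S3-1 to S3-4 can be written out as plain Python data; the dictionary keys and the input_size variable name are illustrative assumptions, while the values come from the steps above.

# Hedged summary of the parameter combinations in steps S3-1 to S3-4;
# key names are assumptions, values are taken from the text.
train_configs = [
    {"num_workers": 1, "batch_size": 64, "iterations": 6000, "classes": 2,
     "device": "gpu", "min_memory_gb": 6},   # S3-1
    {"num_workers": 0, "batch_size": 64, "iterations": 6000, "classes": 2,
     "device": "cpu", "min_memory_gb": 6},   # S3-2
    {"num_workers": 1, "batch_size": 16, "iterations": 6000, "classes": 2,
     "device": "gpu", "min_memory_gb": 4},   # S3-3
    {"num_workers": 0, "batch_size": 16, "iterations": 6000, "classes": 2,
     "device": "cpu", "min_memory_gb": 4},   # S3-4
]
input_size = 512                             # 512 x 512 network input (step S3)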
s4, optimizing the target size prediction loss function in the CenterNet network to obtain an optimized target size prediction loss function, wherein the optimization process is as follows:
s4-1: in the original target size prediction loss function, remove the modulus operation between the predicted target size variable S_pk and the true target size variable S_k;
s4-2: use the smooth L1 loss function in place of the modulus operation and add it to the original target size prediction loss function (a hedged formulation is given below);
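Written out, and under the assumption that the original CenterNet size loss averages an absolute (modulus) error over the N detected objects, steps S4-1 and S4-2 amount to the following sketch (the symbols follow S_pk and S_k above and are illustrative, not the patent's exact notation):

\mathrm{smooth}_{L1}(x) = \begin{cases} 0.5\,x^{2}, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}

L_{size} = \frac{1}{N} \sum_{k=1}^{N} \mathrm{smooth}_{L1}\!\left( S_{pk} - S_{k} \right)

that is, the hard modulus |S_pk - S_k| of the original loss is replaced by the smooth L1 penalty of the same difference, which keeps the gradient bounded for large errors and smooth near zero.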
s5, improving the structure of the CenterNet network to obtain the improved structure of the CenterNet network, wherein the improvement process is as follows:
s5-1: add a space-to-depth conversion module before the existing CenterNet network structure, dividing an input image of size X with 3 channels into four feature maps of equal size, each of size X/4 × 3, so that the spatial size of the image is divided while the number of channels is kept unchanged (a code sketch of this split follows this list);
s5-2, add a CBAM attention module after each divided feature map, and adjust the input parameter of each attention module to X/4;
s5-3, add an atrous (dilated-convolution) spatial pyramid pooling module after the CBAM attention modules, set its dilation rates to 1, 6, 12 and 18, and fuse the feature maps output by the CBAM attention modules into a single map that is input to the DLA-34 backbone network;
s5-4, in the DLA-34 backbone network, change the single output of the original network so that a feature map is output at each scale, and use downsampling to pass the features obtained at one scale to the network structure of the next scale;
s5-5, add a reverse space-to-depth (depth-to-space) module after the DLA-34 backbone network, fuse the feature maps output at each scale into one feature map, add a 1 × 1 convolution module, and adjust the size and channel number of the fused feature map to be consistent with the image input to the network, obtaining the improved CenterNet network structure;
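The space-to-depth split of step S5-1 can be sketched in a few lines of Python; the use of PyTorch tensors and the particular pixel-offset grouping are assumptions for illustration, not the patent's exact module.

# Hedged sketch of the space-to-depth split in step S5-1: an input image of
# shape (batch, 3, H, W) is split into four sub-maps of shape (batch, 3, H/2, W/2),
# so each sub-map holds a quarter of the spatial size while keeping 3 channels.
import torch

def space_to_depth_split(x: torch.Tensor):
    # take every second pixel at each of the four possible (row, column) offsets
    return [x[..., 0::2, 0::2],
            x[..., 0::2, 1::2],
            x[..., 1::2, 0::2],
            x[..., 1::2, 1::2]]

image = torch.randn(1, 3, 512, 512)           # 512 x 512 input as in step S3
parts = space_to_depth_split(image)
print([tuple(p.shape) for p in parts])        # four (1, 3, 256, 256) feature maps

Under this reading, each of the four sub-maps would then pass through its own CBAM attention module (S5-2) before the ASPP fusion of S5-3, and the reverse module of S5-5 would perform the inverse rearrangement.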
s6, training the network model: set the parameters of the improved CenterNet network structure, put it on a computer with the environment configured, and train it on the training set divided in step S2-4; during training, put the pictures divided into the test set into the computer for testing to obtain the training effect at each stage, set the flip_test parameter so that flipped-image data augmentation is used during the process (sketched after step S7 below), and save the trained network model after training is finished;
s7, recognition with the trained network model: prepare a captured leaf image on the computer and enter the detection command on the command line in a Python environment; the command contents include the name of the trained optimal leaf detection model and the name of the leaf image to be identified; the detection result is displayed on the computer and the number of leaves in the image is obtained.
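The flip_test option mentioned in step S6 denotes, in the reference CenterNet implementation, horizontal-flip augmentation whose predictions are averaged with those of the original image; whether the patent uses exactly this scheme is an assumption, so the sketch below only illustrates the general idea.

# Hedged sketch of flip-based augmentation as used with a flip_test option:
# run the model on the image and on its horizontal mirror, flip the mirrored
# prediction back, and average the two center heatmaps.
# The model interface and the averaging strategy are assumptions.
import torch

def flip_test(model, image: torch.Tensor) -> torch.Tensor:
    """image: (batch, 3, H, W); returns an averaged center heatmap."""
    heat = model(image)                                  # prediction on the original
    heat_flip = model(torch.flip(image, dims=[3]))       # prediction on the mirror
    heat_flip = torch.flip(heat_flip, dims=[3])          # undo the flip
    return 0.5 * (heat + heat_flip)                      # average both predictions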
The invention provides a plant canopy leaf counting method based on an improved CenterNet. By improving the CenterNet network model structure and optimizing the loss function, a space-to-depth conversion module is added at the input end of the network to convert the input image into sub-maps of different depths, and a CBAM attention module is introduced so that leaf edge information can be detected at different resolutions; then atrous spatial pyramid pooling (dilated convolution) is used to extract image receptive-field features at different scales, which are fused and input into the DLA-34 backbone network; finally, a reverse space-to-depth conversion module connects the feature information obtained by the DLA-34 network at different stages so as to keep the features of dense and irregular leaves. In addition, the smooth L1 function is used to optimize the target prediction loss function used by the original CenterNet network. Compared with the prior art, the invention has the following advantages:
(1) the user can input plant images of any size for leaf detection and counting, and does not need to set any intermediate parameters or carry out preprocessing;
(2) an improved CenterNet network structure is used to train the leaf data set, and the trained model can accurately identify mutually occluded and overlapping leaves of the plant canopy;
(3) leaves of different periods, different varieties and different illumination conditions can be detected and counted under natural environment conditions; the method has high recognition precision and strong robustness and can meet the requirements of real-time, non-destructive detection and counting.
Drawings
FIG. 1 is a flow chart of a plant canopy leaf counting method of the present invention based on modified CenterNet;
FIG. 2 is a schematic diagram of a spatial to depth module;
FIG. 3 is a schematic diagram of a CBAM attention mechanism;
FIG. 4 is a diagram of a spatial pyramid pooling module of hole convolution;
FIG. 5 is a diagram of the improved CenterNet network structure;
FIG. 6 is a graph showing the effect of improved CenterNet on early cucumber leaf detection;
FIG. 7 is a graph showing the effect of improved CenterNet on the detection of late cucumber leaves;
FIG. 8 is a graph showing the effect of improved CenterNet on the detection of orchid leaves;
FIG. 9 is a graph showing the effect of improved CenterNet on the detection of copper coin grass leaves;
FIG. 10 is a graph showing the effect of improved CenterNet on the detection of outdoor eggplant leaves;
FIG. 11 is a graph showing the effect of improved CenterNet on the detection of indoor eggplant leaves.
Detailed Description
The invention will be further elucidated with reference to the drawings and examples, without however being limited thereto.
Example:
a plant canopy leaf counting method based on improved CenterNet, comprising the steps of:
s1, acquiring images: the user uses a camera or other image acquisition equipment to collect images of the plant canopy, names the collected pictures according to the format of the COCO data set, and at the same time creates two folders named Anno and Ima;
s2, image preprocessing:
s2-1, image marking: in the images collected in step S1, mark the leaves using the image marking tool LabelImg, marking the positions of the leaves and classifying the type or degree of overlap/occlusion of each leaf;
(1) when marking leaves on the image, the plant species to which the leaves belong is added to the label; for example, when Cucumber leaves are labeled, the label may be named Cucumber-leaf; when Eggplant leaves are marked, the label may be named Eggplant-leaf;
(2) when marking leaves with more than 50% occlusion, the label may, on the basis of step (1), be named Cucumber-leaf-o; when marking Cucumber leaves with less than 50% occlusion, the label may be named Cucumber-leaf-e;
(3) when marking Cucumber leaves in the growing period, the label may, on the basis of step (1), be named Cucumber-leaf-g; when marking Cucumber leaves at maturity, the label may be named Cucumber-leaf-m.
S2-2, image amplification: if the image collected in the step S1 cannot meet the requirement that 200 pictures are needed for identifying one type of leaf, performing image amplification; the user selects the image storage path and the XML file path of the marked information, and formulates an amplified image output path, so that the image amplification is carried out on the basis of the original image according to the required quantity of the user, and the user can select parameters such as the size, the rotation angle, the definition and the like of the image to amplify the image;
s2-3, calculating the mean value and the standard deviation of the picture, and the steps are as follows:
s2-3-1, the user puts the marked and amplified data set pictures into a folder;
s2-3-2, calculate the mean value and the standard deviation of the pictures using the meanStdDev function in the OpenCV library;
s2-3-3, calculate the maximum and minimum pixel values of the pictures using the minMaxLoc function in the OpenCV library;
s2-3-4, create a Python file for the user-defined leaf class, change the class name to the user-defined leaf class name, and write the mean value and standard deviation obtained in step S2-3-2 and the pixel values obtained in step S2-3-3 into the Python file;
s2-4, dividing the data set: dividing the amplified image and the labeled file into a training set, a testing set and a verification set, wherein the training set, the testing set and the verification set respectively account for 70%, 15% and 15%;
s3, setting network model parameters: in the configuration file of the YOLOv4 network model, set the input image size of the convolutional neural network to 512 × 512, and set the number of recognition classes, the batch_size value and the number-of-iterations parameter according to the computer's memory and video memory and the detection effect required by the user; the user also sets the number of threads supporting CUDA acceleration as needed; the set parameter values and their effects are as follows:
s3-1, when the parameter num_workers is 1 (multi-threaded training enabled), the batch_size parameter is 64, the number of iterations is 6000 and the number of detected object classes is 2, the GPU is used to train the model and at least 6 GB of memory is needed;
s3-2, when the parameter num_workers is 0 (multi-threaded training disabled), the batch_size parameter is 64, the number of iterations is 6000 and the number of detected object classes is 2, the CPU is used to train the model and at least 6 GB of memory is needed;
s3-3, when the parameter num_workers is 1 (multi-threaded training enabled), the batch_size parameter is 16, the number of iterations is 6000 and the number of detected object classes is 2, the GPU is used to train the model and at least 4 GB of memory is needed;
s3-4, when the parameter num_workers is 0 (multi-threaded training disabled), the batch_size parameter is 16, the number of iterations is 6000 and the number of detected object classes is 2, the CPU is used to train the model and at least 4 GB of memory is needed;
s4, optimizing the target size prediction loss function in the CenterNet network to obtain the optimized target size prediction loss function, wherein the optimization process is as follows:
s4-1: in the original target size prediction loss function, remove the modulus operation between the predicted target size variable S_pk and the true target size variable S_k;
s4-2: use the smooth L1 loss function in place of the modulus operation and add it to the original target size prediction loss function;
s5, the structure of the CenterNet network is improved to obtain the improved structure of the CenterNet network, and the improvement process is as follows:
s5-1: before the existing CenterNet network structure, add a space-to-depth conversion module that divides an input image of size X with 3 channels into four feature maps of equal size, each of size X/4 × 3, so that the spatial size of the image is divided while the number of channels is kept unchanged; the effect is shown in FIG. 2.
S5-2, add a CBAM attention module after each divided feature map, and adjust the input parameter of each attention module to X/4; the effect is shown in FIG. 3;
s5-3, add an atrous (dilated-convolution) spatial pyramid pooling module after the CBAM attention modules, set its dilation rates to 1, 6, 12 and 18, and fuse the feature maps output by the CBAM attention modules into a single map that is input to the DLA-34 backbone network; the effect is shown in FIG. 4;
s5-4, in the DLA-34 backbone network, change the single output of the original network so that a feature map is output at each scale, and use downsampling to pass the features obtained at one scale to the network structure of the next scale;
s5-5, add a reverse space-to-depth (depth-to-space) module after the DLA-34 backbone network, fuse the feature maps output at each scale into one feature map, add a 1 × 1 convolution module, and adjust the size and channel number of the fused feature map to be consistent with the image input to the network; the overall network structure is shown in FIG. 5;
s6, training the network model: set the parameters of the improved CenterNet network structure, put it on a computer with the environment configured, and train it with the training set divided in step S2-4. During training, put the pictures divided into the test set into the computer for testing to obtain the training effect at each stage, set the flip_test parameter so that flipped-image data augmentation is used during the process, and save the trained network model after training is finished.
S7, recognition with the trained network model: prepare a captured leaf image on the computer and enter the detection command on the command line in a Python environment; the command contents include the name of the trained optimal leaf detection model and the name of the leaf image to be identified; the detection result can then be seen on the computer and the number of leaves in the image is obtained. The effect is shown in FIGS. 6 to 11.

Claims (3)

1. A plant canopy leaf counting method based on improved CenterNet, comprising the steps of:
s1, acquiring images: collecting images of plant canopies, naming the pictures according to the format of a COCO data set, and simultaneously creating two folders named Anno and Ima;
s2, image preprocessing:
s2-1, image marking: mark the leaves in the images collected in step S1 using the image marking tool LabelImg, marking the positions of the leaves and classifying the type or degree of overlap/occlusion of each leaf;
s2-2, image amplification: if the images collected in step S1 cannot meet the requirement of 200 pictures per leaf class, perform image amplification; select the image storage path and the XML file path of the annotation information, specify an output path for the amplified images, amplify the images to the required quantity on the basis of the original images, and choose the image size, rotation angle and sharpness parameters used for the amplification;
s2-3, calculating the mean value and the standard deviation of the picture, and the steps are as follows:
s2-3-1, first, put the marked and amplified data set pictures into a folder;
s2-3-2, calculate the mean value and the standard deviation of the pictures using the meanStdDev function in the OpenCV library;
s2-3-3, calculate the maximum and minimum pixel values of the pictures using the minMaxLoc function in the OpenCV library;
s2-3-4, create a Python file for the user-defined leaf class, change the class name to the user-defined leaf class name, and write the mean value and standard deviation obtained in step S2-3-2 and the pixel values obtained in step S2-3-3 into the Python file;
s2-4, dividing the data set: dividing the amplified image and the labeled file into a training set, a testing set and a verification set, wherein the training set, the testing set and the verification set respectively account for 70%, 15% and 15%;
s3, setting network model parameters: in the configuration file of the YOLOv4 network model, set the input image size of the convolutional neural network to 512 × 512, and set the number of recognition classes, the batch_size value and the number-of-iterations parameter according to the computer's memory and video memory and the required final detection effect; set the number of threads supporting CUDA acceleration;
s4, optimizing the target size prediction loss function in the CenterNet network to obtain an optimized target size prediction loss function, wherein the optimization process is as follows:
s4-1: in the original target size prediction loss function, remove the modulus operation between the predicted target size variable S_pk and the true target size variable S_k;
s4-2: use the smooth L1 loss function in place of the modulus operation and add it to the original target size prediction loss function;
s5, improving the structure of the CenterNet network to obtain the improved structure of the CenterNet network, wherein the improvement process is as follows:
s5-1: add a space-to-depth conversion module before the existing CenterNet network structure, dividing an input image of size X with 3 channels into four feature maps of equal size, each of size X/4 × 3, so that the spatial size of the image is divided while the number of channels is kept unchanged;
s5-2, add a CBAM attention module after each divided feature map, and adjust the input parameter of each attention module to X/4;
s5-3, add an atrous (dilated-convolution) spatial pyramid pooling module after the CBAM attention modules, set its dilation rates to 1, 6, 12 and 18, and fuse the feature maps output by the CBAM attention modules into a single map that is input to the DLA-34 backbone network;
s5-4, in the DLA-34 backbone network, change the single output of the original network so that a feature map is output at each scale, and use downsampling to pass the features obtained at one scale to the network structure of the next scale;
s5-5, add a reverse space-to-depth (depth-to-space) module after the DLA-34 backbone network, fuse the feature maps output at each scale into one feature map, add a 1 × 1 convolution module, and adjust the size and channel number of the fused feature map to be consistent with the image input to the network, obtaining the improved CenterNet network structure;
s6, training the network model: set the parameters of the improved CenterNet network structure, put it on a computer with the environment configured, and train it on the training set divided in step S2-4; during training, put the pictures divided into the test set into the computer for testing to obtain the training effect at each stage, set the flip_test parameter so that flipped-image data augmentation is used during the process, and save the trained network model after training is finished;
s7, recognition with the trained network model: prepare a captured leaf image on the computer and enter the detection command on the command line in a Python environment; the command contents include the name of the trained optimal leaf detection model and the name of the leaf image to be identified; the detection result is displayed on the computer and the number of leaves in the image is obtained.
2. The improved centret based plant canopy leaf counting method of claim 1, wherein in step S2-1, said marking the position of the leaf and classifying the type or overlapping occlusion degree of each leaf specifically requires the following:
(1) when the leaves are marked on the image, the plant species to which the leaves belong is added to the label, and the naming format of the label is as follows: english-leaf corresponding to the leaf;
(2) when marking leaves that are more than 50% occluded, the label is named, on the basis of step (1), as the English name of the leaf followed by -leaf-o; when marking leaves that are not more than 50% occluded, they are named with the English name followed by -leaf-e;
(3) when leaves in the growing period are marked, the label is named on the basis of the step (1): english-leaf-g corresponding to the leaf; when leaves at maturity are labeled, the label is named: english-leaf-m corresponding to the leaf.
3. The method according to claim 1, wherein in step S3, when the network model parameters are set, the functions and requirements of the specific parameters are as follows:
s3-1, when the parameter num_workers is 1, the batch_size parameter is 64, the number of iterations is 6000 and the number of detected object classes is 2, the GPU is used to train the model and at least 6 GB of memory is needed;
s3-2, when the parameter num_workers is 0, the batch_size parameter is 64, the number of iterations is 6000 and the number of detected object classes is 2, the CPU is used to train the model and at least 6 GB of memory is needed;
s3-3, when the parameter num_workers is 1, the batch_size parameter is 16, the number of iterations is 6000 and the number of detected object classes is 2, the GPU is used to train the model and at least 4 GB of memory is needed;
s3-4, when the parameter num_workers is 0, the batch_size parameter is 16, the number of iterations is 6000 and the number of detected object classes is 2, the CPU is used to train the model and at least 4 GB of memory is needed.
CN202110598653.4A 2021-05-31 2021-05-31 Plant canopy dense leaf counting method based on improved CenterNet Active CN113191334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110598653.4A CN113191334B (en) 2021-05-31 2021-05-31 Plant canopy dense leaf counting method based on improved CenterNet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110598653.4A CN113191334B (en) 2021-05-31 2021-05-31 Plant canopy dense leaf counting method based on improved CenterNet

Publications (2)

Publication Number Publication Date
CN113191334A true CN113191334A (en) 2021-07-30
CN113191334B CN113191334B (en) 2022-07-01

Family

ID=76985797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110598653.4A Active CN113191334B (en) 2021-05-31 2021-05-31 Plant canopy dense leaf counting method based on improved CenterNet

Country Status (1)

Country Link
CN (1) CN113191334B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688830A (en) * 2021-08-13 2021-11-23 湖北工业大学 Deep learning target detection method based on central point regression
CN114241411A (en) * 2021-12-15 2022-03-25 平安科技(深圳)有限公司 Counting model processing method and device based on target detection and computer equipment
CN116310806A (en) * 2023-02-28 2023-06-23 北京理工大学珠海学院 Intelligent agriculture integrated management system and method based on image recognition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111369540A (en) * 2020-03-06 2020-07-03 西安电子科技大学 Plant leaf disease identification method based on mask convolutional neural network
CN111476280A (en) * 2020-03-27 2020-07-31 海南医学院 Plant leaf identification method and system
US20200279374A1 (en) * 2019-02-28 2020-09-03 Iunu, Inc. Automated plant disease detection
CN111709489A (en) * 2020-06-24 2020-09-25 广西师范大学 Citrus identification method based on improved YOLOv4
CN112329697A (en) * 2020-11-18 2021-02-05 广西师范大学 Improved YOLOv3-based on-tree fruit identification method
CN112784756A (en) * 2021-01-25 2021-05-11 南京邮电大学 Human body identification tracking method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200279374A1 (en) * 2019-02-28 2020-09-03 Iunu, Inc. Automated plant disease detection
CN111369540A (en) * 2020-03-06 2020-07-03 西安电子科技大学 Plant leaf disease identification method based on mask convolutional neural network
CN111476280A (en) * 2020-03-27 2020-07-31 海南医学院 Plant leaf identification method and system
CN111709489A (en) * 2020-06-24 2020-09-25 广西师范大学 Citrus identification method based on improved YOLOv4
CN112329697A (en) * 2020-11-18 2021-02-05 广西师范大学 Improved YOLOv3-based on-tree fruit identification method
CN112784756A (en) * 2021-01-25 2021-05-11 南京邮电大学 Human body identification tracking method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GUOFA LI et al.: "Detection of Road Objects With Small Appearance in Images for Autonomous Driving in Various Traffic Situations Using a Deep Learning Based Approach", IEEE ACCESS *
JAN WEYLER et al.: "Joint Plant Instance Detection and Leaf Count Estimation for In-Field Plant Phenotyping", IEEE ROBOTICS AND AUTOMATION LETTERS *
GAO QINQUAN et al.: "Surface defect detection method of bamboo strips based on improved CenterNet", Journal of Computer Applications *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688830A (en) * 2021-08-13 2021-11-23 湖北工业大学 Deep learning target detection method based on central point regression
CN113688830B (en) * 2021-08-13 2024-04-26 湖北工业大学 Deep learning target detection method based on center point regression
CN114241411A (en) * 2021-12-15 2022-03-25 平安科技(深圳)有限公司 Counting model processing method and device based on target detection and computer equipment
CN114241411B (en) * 2021-12-15 2024-04-09 平安科技(深圳)有限公司 Counting model processing method and device based on target detection and computer equipment
CN116310806A (en) * 2023-02-28 2023-06-23 北京理工大学珠海学院 Intelligent agriculture integrated management system and method based on image recognition
CN116310806B (en) * 2023-02-28 2023-08-29 北京理工大学珠海学院 Intelligent agriculture integrated management system and method based on image recognition

Also Published As

Publication number Publication date
CN113191334B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN111709489B (en) Citrus identification method based on improved YOLOv4
CN113191334B (en) Plant canopy dense leaf counting method based on improved CenterNet
Jia et al. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot
CN109409365A (en) It is a kind of that method is identified and positioned to fruit-picking based on depth targets detection
Li et al. An overlapping-free leaf segmentation method for plant point clouds
CN110223349A (en) A kind of picking independent positioning method
CN112766155A (en) Deep learning-based mariculture area extraction method
CN114140665A (en) Dense small target detection method based on improved YOLOv5
CN111462044A (en) Greenhouse strawberry detection and maturity evaluation method based on deep learning model
CN111178177A (en) Cucumber disease identification method based on convolutional neural network
CN116310548A (en) Method for detecting invasive plant seeds in imported seed products
CN114708208B (en) Machine vision-based famous tea tender bud identification and picking point positioning method
CN109815973A (en) A kind of deep learning method suitable for the identification of fish fine granularity
CN113744226A (en) Intelligent agricultural pest identification and positioning method and system
Wang et al. DualSeg: Fusing transformer and CNN structure for image segmentation in complex vineyard environment
Shuai et al. An improved YOLOv5-based method for multi-species tea shoot detection and picking point location in complex backgrounds
CN113077438B (en) Cell nucleus region extraction method and imaging method for multi-cell nucleus color image
Zhao et al. Transient multi-indicator detection for seedling sorting in high-speed transplanting based on a lightweight model
Shen et al. YOLOv5-Based Model Integrating Separable Convolutions for Detection of Wheat Head Images
CN113920190A (en) Ginkgo flower spike orientation method and system
CN117079125A (en) Kiwi fruit pollination flower identification method based on improved YOLOv5
CN116740337A (en) Safflower picking point identification positioning method and safflower picking system
CN116883718A (en) Sugarcane seedling missing detection positioning method based on improved YOLOV7
CN112329697B (en) Improved YOLOv 3-based on-tree fruit identification method
CN116416523A (en) Machine learning-based rice growth stage identification system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant