CN117522950A - Geometric parameter measurement method for plant stem growth based on machine vision - Google Patents


Info

Publication number: CN117522950A (granted as CN117522950B)
Application number: CN202311835468.8A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: convolution module, plant, network, rectangular frame, branch
Legal status: Granted; Active
Inventors: 易文龙, 张星, 程香平, 赵应丁, 殷华, 徐亦璐
Original and current assignees: Jiangxi Agricultural University; Institute of Applied Physics of Jiangxi Academy of Sciences
Application filed by Jiangxi Agricultural University and Institute of Applied Physics of Jiangxi Academy of Sciences


Classifications

    • G06T7/60 — Image analysis; analysis of geometric attributes
    • G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N3/08 — Neural networks; learning methods
    • G06T2207/20081 — Training; learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/20104 — Interactive definition of region of interest [ROI]
    • G06T2207/30188 — Vegetation; agriculture

Abstract

The invention discloses a machine-vision-based method for measuring geometric parameters of plant stem growth. A plant stem image is processed with an improved YOLOv8obb network, which marks the plant stems with rotated rectangular boxes, one box per branch, delimited at the growing branch points. The included angle between the long side of a rotated box and the upward direction perpendicular to the ground is the branch angle of the plant stem; the long side of the rotated box corresponds to the branch length and the short side to the branch diameter. Branch length and diameter are obtained by measuring the pixel dimensions and converting them with a known scale ratio. The invention achieves more accurate target detection on plant stem datasets, can locate and identify plant stem targets more precisely, and realizes machine-vision-based detection of plant stem geometric parameters.

Description

Geometric parameter measurement method for plant stem growth based on machine vision
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a geometric parameter measurement method for plant stem growth based on machine vision.
Background
Plants mainly comprise organs such as roots, stems and leaves, and measuring the geometric parameters of plant stem growth is one of the important tasks of plant morphology and growth-and-development research. Accurately measuring geometric parameters such as branch length, diameter and angle of plant stems can reveal the growth rules and characteristics of plants, which is of great significance in fields such as agricultural production, plant physiology research and plant genetic improvement.
The traditional plant stem geometric parameter measurement method mainly relies on manual measurement, and has the problems of inaccurate measurement, low efficiency, strong subjectivity and the like. To overcome these problems, automated measurement methods based on machine vision techniques are receiving increasing attention.
The machine-vision-based method for measuring the geometric parameters of plant stem growth acquires an image of a plant stem with an image acquisition device (such as a camera) and performs automatic measurement with image processing and analysis algorithms. This approach achieves high-precision, high-efficiency measurement of plant stem geometric parameters and reduces the subjectivity and error of manual operation. Its core technologies are the target detection and image processing algorithms: the target detection algorithm identifies and locates the plant stems and extracts the geometric features of the stems through labeling and box selection, and the image processing algorithm then extracts and calculates geometric parameters such as stem branch length, diameter and branching angle.
Automatic measurement with image acquisition equipment and image processing algorithms offers high precision, high efficiency and low cost, and is of great significance for studying plant growth rules and for agricultural production. Against the background of the rapid development of plant growth measurement technology, machine vision stands out, measuring plant growth accurately and conveniently.
YOLOv8 is a deep-learning-based object detection algorithm mainly used to detect and identify objects in images in real time. By converting the target detection task into a regression problem, it achieves efficient target detection and localization. The YOLOv8 network extracts rich feature representations from the input image, which contain object information at different locations. Through convolution operations, YOLOv8 predicts the rotated rectangular boxes of the target objects present in the image, together with the object class and confidence score for each box. By adopting multi-scale prediction, YOLOv8 improves detection accuracy and robustness and can effectively handle target objects of different sizes and proportions. To remove overlapping rotated rectangular boxes, YOLOv8 also uses the NMS algorithm, retaining the box with the highest confidence and eliminating the lower-scoring overlapping boxes. YOLOv8 can rapidly and accurately detect and identify objects of different categories in an image, and its efficient network structure and multi-scale features give it an advantage in real-time target detection applications. Compared with conventional target detection methods, YOLOv8 provides more accurate detection results without sacrificing processing speed.
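The greedy NMS step described above can be sketched as follows. This is a minimal axis-aligned illustration (the obb variant applies the same idea to rotated boxes); the function names are illustrative, not from the patent or a specific library:

```python
def iou(a, b):
    # a, b: axis-aligned boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring remaining box and
    # discard boxes that overlap it beyond the IoU threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

For example, of two heavily overlapping detections of the same branch, only the higher-scoring one survives, while a distant detection is kept.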
Disclosure of Invention
Based on the above, the invention aims to provide a machine-vision-based method for measuring geometric parameters of plant stem growth, so as to solve the problems described in the background art: the traditional plant stem geometric parameter measurement method mainly depends on manual measurement and suffers from inaccurate measurement, low efficiency and strong subjectivity.
The invention is realized in such a way that a geometrical parameter measuring method for plant stem growth based on machine vision comprises the following steps:
step one: collecting plant stalk images through a shooting device;
step two: processing the plant stalk image using an improved YOLOv8obb network and marking the plant stalks with rotated rectangular boxes, one box per branch, delimited at the growing branch points; each rotated rectangular box contains a plant stalk branch, and the branch is parallel to its box. The improved YOLOv8obb network adds a spatial and channel reconstruction convolution module (ScConv) after the second convolution module of the backbone network of the YOLOv8 network, replaces the 4 C2f modules of the backbone with distribution shifting convolution modules (DSConv), and adds an SE attention mechanism layer after each of the 4 distribution shifting convolution modules (DSConv);
step three: according to the characteristic that plants grow upward under illumination, the included angle between the long side of the rotated rectangular box and the direction perpendicular to the ground (namely the positive half axis of the X axis of the coordinate system) is the branch angle of the plant stem; the long side of the rotated box is the branch length of the plant stem and the short side is the branch diameter. Branch length and diameter are obtained by measuring the pixel dimensions and converting them with a known scale ratio.
Further preferably, the backbone network of the improved YOLOv8obb network is sequentially formed by a first convolution module, a second convolution module, a space and channel reconstruction convolution module, a first distribution shift convolution module, a first SE attention mechanism, a third convolution module, a second distribution shift convolution module, a second SE attention mechanism, a fourth convolution module, a third distribution shift convolution module, a third SE attention mechanism, a fifth convolution module, a fourth distribution shift convolution module and a fourth SE attention mechanism, and the output features of the second distribution shift convolution module, the output features of the third distribution shift convolution module and the output features processed by the fourth distribution shift convolution module through the fourth SE attention mechanism are selected to enter the neck network for multi-scale fusion.
Further preferably, the rotation angle of each rotating rectangular frame is adjusted when the rotation process is performed; selecting a center point of the rotary rectangular frame as a rotation center, and carrying out rotary transformation on the rotary rectangular frame according to a rotation angle; the transformed rotated rectangular box is input into a modified YOLOv8obb network for object detection.
Further preferably, the rotation transformation process is: the rotated rectangular box has coordinates (x, y, w, h, θ), where (x, y) is the center point coordinate, x the abscissa and y the ordinate of the center point, w and h are the width and height of the box, and θ is its rotation angle; for a rotation by angle α about the rotation center (cx, cy) (here the box's own center), the transformed rotated rectangular box is computed according to the following formulas:
new center point coordinates: x' = (x − cx)·cos α − (y − cy)·sin α + cx, y' = (x − cx)·sin α + (y − cy)·cos α + cy (with (cx, cy) = (x, y), the center is unchanged: x' = x, y' = y);
new width: w' = w;
new height: h' = h;
new rotation angle: θ' = θ + α.
Further preferably, the rotated rectangular boxes are regressed using the CIoU loss function.
The invention uses an improved YOLOv8obb network to process plant stem images, marks the plant stems with rotated rectangular boxes delimited at the grown branch points, and identifies the length and diameter of each plant stem branch from its rotated rectangular box. A spatial and channel reconstruction convolution module (ScConv) is added to the backbone of the YOLOv8 network, the 4 C2f modules of the backbone are replaced with distribution shifting convolution modules (DSConv), and an SE attention mechanism layer is added, so that the improved YOLOv8obb network has more accurate target detection capability on the plant stem dataset, can locate and identify plant stem targets more precisely, and performs better than other networks. The invention realizes machine-vision-based detection of the geometric parameters of plant stems.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of a backbone network of the improved YOLOv8 network of the present invention.
Fig. 3 is a diagram of the method of determining the horizontal and vertical dimensions of a rotated rectangular box based on the second-longest edge.
Fig. 4 is a graph of various network authentication losses.
Fig. 5 is a graph of variation in mean accuracy (mAP) for different network verification sets.
Fig. 6 is a graph of fitted data of angle predictions and true values.
Fig. 7 is a graph of fitted data of a length predicted value and a true value.
Fig. 8 is a graph of plant stalk branching parameter detection speed log.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, the method for measuring the geometric parameters of plant stem growth based on machine vision provided by the invention comprises the following steps:
step one: plant stalk images are acquired with a shooting device. The data acquisition is divided into two parts. First, laboratory specimens are collected: the plants are placed in a test environment, where environmental interference can be largely eliminated. Second, field samples are collected; because other environmental factors are present around field plants during shooting, the plant stem images are further processed to remove their complex backgrounds;
step two: processing the plant stalk image using the improved YOLOv8obb network and marking the plant stalks with rotated rectangular boxes, one box per branch, delimited at the growing branch points; each rotated rectangular box contains a plant stalk branch, and the branch is parallel to its box;
step three: according to the characteristic that plants grow upward under illumination, the included angle between the long side of the rotated rectangular box and the direction perpendicular to the ground (namely the positive half axis of the X axis of the coordinate system) is the branch angle of the plant stem; the long side of the rotated box is the branch length of the plant stem and the short side is the branch diameter. Branch length and diameter are obtained by measuring the pixel dimensions and converting them with a known scale ratio.
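The pixel-to-physical conversion in step three can be sketched as follows. This is a hypothetical helper, not code from the patent; it assumes a known millimeter-per-pixel scale factor (`mm_per_pixel`, e.g. calibrated from a reference object in the scene) and that the box angle θ is reported for the long side:

```python
def branch_parameters(box, mm_per_pixel):
    # box: rotated rectangle (x, y, w, h, theta_deg).
    # Per the method: long side -> branch length, short side -> branch
    # diameter, theta -> branch angle. The mm_per_pixel scale converts
    # measured pixel sizes to physical sizes.
    x, y, w, h, theta = box
    long_px, short_px = max(w, h), min(w, h)
    return {
        "length_mm": long_px * mm_per_pixel,
        "diameter_mm": short_px * mm_per_pixel,
        "branch_angle_deg": theta,
    }
```

For instance, a detected box 200 px long and 20 px wide at 0.5 mm/px yields a 100 mm branch of 10 mm diameter.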
YOLOv8obb refers to the oriented-bounding-box (obb) labeling mode: the photographed plant stem images are first labeled, with the vertices of each (arbitrary) quadrilateral box arranged in clockwise order; this dataset is used to train the YOLOv8 network, after which photographed plant stem images can be processed and analyzed. YOLOv8obb is suited to scenes where objects are rotated, e.g. a vehicle whose direction of travel is not exactly horizontal. The branches of plant stems grow in many directions at varying, non-fixed angles. When performing the rotation processing of YOLOv8obb, the rotation angle must be adjusted for each rotated rectangular box. First, a rotation center is determined; the center point of the rotated rectangular box is usually chosen. The box is then rotated by the rotation angle, which involves a coordinate transformation of the box. Let the rotated rectangular box be (x, y, w, h, θ), where (x, y) is the center point coordinate, x the abscissa and y the ordinate, w and h the width and height of the box, and θ its rotation angle. For a rotation by angle α about the rotation center (cx, cy) (here the box's own center), the transformed box is computed according to the following formulas:
new center point coordinates: x' = (x − cx)·cos α − (y − cy)·sin α + cx, y' = (x − cx)·sin α + (y − cy)·cos α + cy (with (cx, cy) = (x, y), the center is unchanged);
new width: w' = w;
new height: h' = h;
new rotation angle: θ' = θ + α (when α = −θ, θ' = 0 and the rotated object box is parallel to the X axis);
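The rotation transformation above can be sketched as follows; a minimal illustration assuming angles in degrees, with `rotate_box` an illustrative name, not from the patent:

```python
import math

def rotate_box(box, alpha_deg, center=None):
    # box = (x, y, w, h, theta): center point, width, height, angle (degrees).
    # Rotate by alpha about `center`; by default the box's own center is
    # used, so the center point stays fixed and only the angle accumulates.
    x, y, w, h, theta = box
    cx, cy = center if center is not None else (x, y)
    a = math.radians(alpha_deg)
    nx = (x - cx) * math.cos(a) - (y - cy) * math.sin(a) + cx
    ny = (x - cx) * math.sin(a) + (y - cy) * math.cos(a) + cy
    return (nx, ny, w, h, theta + alpha_deg)
```

Rotating a box by −θ about its own center leaves its center and size unchanged and zeroes its angle, i.e. makes it parallel to the X axis as described.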
the rotated rectangular box is input into a modified YOLOv8obb network for target detection.
The choice of loss function affects the stability and convergence rate of network model training; this embodiment uses the CIoU loss function to regress the rotated rectangular boxes. The CIoU loss measures how well a rotated rectangular box regresses: the smaller the loss value, the closer the predicted rotated rectangular box is to the real bounding box. Three important factors are generally considered: the overlap area of the boxes, their aspect ratios, and the distance between their center points. The CIoU loss function is defined as follows:

L_CIoU = 1 − IoU(B, B_gt) + ρ²(b, b_gt) / c² + α·v

where L_CIoU denotes the CIoU loss; IoU(B, B_gt) is the intersection-over-union value of the real bounding box B_gt and the predicted bounding box B; b_gt and b are the center point coordinates of the real and predicted bounding boxes; ρ(b, b_gt) is the Euclidean distance between the two center points; c is the diagonal length of the smallest box enclosing both bounding boxes; α is a weight coefficient,

α = v / ((1 − IoU) + v);

and v is a consistency parameter measuring the relative proportions of the width and height of the bounding boxes,

v = (4/π²)·(arctan(w_gt/h_gt) − arctan(w/h))²,

where w_gt and h_gt are the width and height of the real rotated rectangular box, and w and h are those of the predicted rotated rectangular box.
The rotated rectangular box of the original YOLOv8obb network only carries category and confidence information; the size and angle of the box cannot be output directly. The conventional horizontal-box size calculation extracts the top-left and bottom-right corner coordinates from the bounding-box coordinates output by the model, computes the width and height of the bounding box from those two corners, and derives the long and short sides from the width and height. However, since the four corner points of a rotated rectangular box are not necessarily aligned with the horizontal or vertical direction, its size and position cannot be calculated directly by the conventional method.
As shown in fig. 3, in the method of the invention the horizontal and vertical dimensions of a rotated rectangular box of the YOLOv8obb network are determined based on the second-longest edge, and the angle of the box about the positive half of the X axis is likewise determined from that edge. Because plant stalks grow upward, expressing the branch angle as the angle about the positive half of the X axis is more consistent, so that angle is taken as the branch angle of the plant stalk. Among the four vertices of the rotated rectangular box, the lowest point is found and connected to the other three vertices, forming three edges: the longest edge is the diagonal of the box, the second-longest edge is the dimension along the long axis, i.e. the long side, and the shortest edge is the dimension along the short axis, i.e. the short side. The second-longest edge can therefore be used to obtain the size and angle parameters of the rotated rectangular box.
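The second-longest-edge rule described above can be sketched as follows, assuming image coordinates with y growing downward (so the "lowest" vertex has the largest y); the function and variable names are illustrative:

```python
import math

def branch_box_geometry(corners):
    # corners: four (x, y) vertices of a rotated rectangle, in any order.
    # Connect the lowest vertex to the other three and sort the edges by
    # length: longest = diagonal, second-longest = long side (branch
    # length), shortest = short side (branch diameter).
    lowest = max(corners, key=lambda p: p[1])  # largest y = lowest in image
    others = [p for p in corners if p != lowest]
    by_len = sorted(others, key=lambda p: math.dist(lowest, p), reverse=True)
    diag_pt, long_pt, short_pt = by_len
    long_side = math.dist(lowest, long_pt)
    short_side = math.dist(lowest, short_pt)
    # Angle of the long side about the positive X axis (branch angle);
    # the y component is negated to convert from image to math coordinates.
    dx, dy = long_pt[0] - lowest[0], long_pt[1] - lowest[1]
    angle = math.degrees(math.atan2(-dy, dx)) % 180
    return long_side, short_side, angle
```

For an axis-aligned 10 × 4 box this recovers a long side of 10, a short side of 4, and a 0° angle; a tie between two equally low vertices is resolved arbitrarily in this sketch.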
The dataset used to test the various improved networks comprises 2440 training images, 305 test images and 305 validation images; each improved network was uniformly run for 500 epochs until convergence, and the evaluation index data were then obtained.
YOLOv8 models are currently distinguished by their width and depth; YOLOv8 has five versions. To find the model best suited to this application, network structure experiments with different widths and depths were performed on the dataset. The training results of YOLOv8 at different depths and widths are shown in Table 1.
TABLE 1 model selection experiments
According to the experiments, although a more complex network model slightly improves detection performance on plant stem branch samples, its weights and volume are large and it occupies more GPU memory during training. YOLOv8l, with the largest width and depth, has the highest average precision on the plant stem samples, with a mean average precision (mAP) of 95.6%, but the model is more complex, with a model volume of 82,582,635. YOLOv8n is the simplest network, with an mAP of 92.3%. The YOLOv8s network has an mAP of 94.6% and a model volume of 12,604,395, about 85% smaller than YOLOv8l. In summary, the practical application environment and the model's performance on the dataset must both be considered, trading off accuracy, processing speed and model complexity. YOLOv8s, which exhibits a relative balance, was chosen as the base model of the invention, and the subsequent experiments are based on it; the model is optimized to improve its overall detection performance while adding only a small number of parameters and little computation.
The evaluation index comparison data for the different attention mechanisms run in the integrated usage dataset are shown in table 2.
TABLE 2 attention selection experiment
After adding either the SE attention mechanism or the CBAM attention mechanism, the average precision improves by 0.2%; however, the precision after adding the SE attention mechanism is 96.3%, an improvement of 0.6% and the most pronounced gain, indicating that the SE attention mechanism benefits the feature extraction of plant branches.
The four evaluation index comparison data for the various modified networks after operation using the data set are shown in table 3.
TABLE 3 ablation experiment
To verify the effectiveness of the invention's improvements to YOLOv8s, comparative experiments on the different improvement methods were carried out. After adding the SE attention mechanism, the average precision (mAP) improves by 0.2%, showing that it benefits feature extraction of the plant stem samples. After adding the spatial and channel reconstruction convolution module (ScConv), the average precision reaches 95.3%, an improvement of 0.5%, improving the ability to detect plant branch samples. After adding the distribution shifting convolution module (DSConv), the average precision reaches 95.8%, again an improvement of 0.5%. Overall, the method of the invention improves the average precision of the original YOLOv8s by about 1.3% and strengthens the recognition capability of the model. Its average precision is 0.2% higher than that of YOLOv8l, the model with the highest average precision, while its volume is only 42% of YOLOv8l's, balancing accuracy and performance.
to evaluate the performance of the improved YOLOv8obb network, it was used in a comparative experiment with the following seven models of exemplary objective detection neural network Rotated Faster RCNN obb, gliding Vertex, R3Det, oriented RCNN, S2A-Net, YOLOv5sobb, YOLOv8sobb, performance evaluation index comparative data for running different network models using the data set are shown in table 4. As shown in fig. 2, the space and channel reconstruction convolution module (ScConv) is added after the second convolution module of the backbone network in the original YOLOv8 network, the 4-layer C2f module of the backbone network is replaced by the distribution shift convolution module (DSConv), then a layer of SE attention mechanism is added after the 4-layer distribution shift convolution module (DSConv), and in various experiments, the combined effect of the SE attention mechanism, the space and channel reconstruction convolution module (ScConv) and the distribution shift convolution module (DSConv) is optimal, so that the embodiment selects a network combining the SE attention mechanism, the space and channel reconstruction convolution module and the distribution shift convolution module.
TABLE 4 comparative experiments
As shown in fig. 2, the backbone network of the improved YOLOv8obb network in this embodiment is sequentially formed by a first convolution module, a second convolution module, a spatial and channel reconstruction convolution module, a first distribution shift convolution module, a first SE attention mechanism, a third convolution module, a second distribution shift convolution module, a second SE attention mechanism, a fourth convolution module, a third distribution shift convolution module, a third SE attention mechanism, a fifth convolution module, a fourth distribution shift convolution module, and a fourth SE attention mechanism, and the output features of the second distribution shift convolution module, the output features of the third distribution shift convolution module, and the output features of the fourth distribution shift convolution module processed by the fourth SE attention mechanism are selected to enter the neck network for multi-scale fusion.
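The backbone sequence described above can be written out as a simple configuration sketch; the labels are descriptive names only, not an executable network, and the tap selection follows the description:

```python
# Ordered module labels of the improved YOLOv8obb backbone as described:
# two Conv modules, ScConv, then four (DSConv, SE) stages separated by
# further Conv modules.
BACKBONE = [
    "Conv1", "Conv2", "ScConv",
    "DSConv1", "SE1",
    "Conv3", "DSConv2", "SE2",
    "Conv4", "DSConv3", "SE3",
    "Conv5", "DSConv4", "SE4",
]

# Features forwarded to the neck for multi-scale fusion: the outputs of
# DSConv2 and DSConv3, and the output of DSConv4 after passing through SE4.
NECK_TAPS = ["DSConv2", "DSConv3", "SE4"]
assert all(tap in BACKBONE for tap in NECK_TAPS)
```

Laying the sequence out this way makes the three tapped scales explicit before they enter the neck.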
According to the embodiment, the distributed shift convolution module (DSConv) is introduced on the basis of the YOLOv8 network, so that remarkable advantages are brought to a network model, and the performance of the YOLOv8 network in a computer vision task is enhanced. The distributed shift convolution module remarkably improves the calculation efficiency by optimizing the calculation load of the convolution layer. The memory usage is only 1/10 of that of the standard convolution, and the speed is up to 10 times of that of the standard convolution. In the real-time target detection scenario of the YOLOv8 network, it is important to improve the calculation efficiency for fast and accurate target positioning. The introduction of the distributed shift convolution module is helpful to accelerate the overall reasoning speed while maintaining high precision, and provides better performance for real-time application. The distributed shift convolution module skillfully decomposes the convolution kernel into two component parts, wherein one part is an untrainable integer value, so that the memory use efficiency is effectively improved.
The present embodiment introduces a space and channel reconstruction convolution module (SCConv) on the basis of the YOLOv8 network. Because the YOLOv8 network needs efficient calculation in tasks such as real-time target detection, and the space and channel reconstruction convolution module enables the YOLOv8 network to be lighter by reducing model parameters and calculation cost. The design of the spatial and channel reconstruction convolution module aims to not only reduce redundant features, but also improve the ability of feature representation. Therefore, after the space and channel reconstruction convolution module is introduced, the YOLOv8 network can maintain high-level target detection accuracy while reducing the calculation cost. This is critical for accurate target positioning in real-time scenarios. The space and channel reconstruction convolution module is also designed into a plug and play architecture unit, can directly replace standard convolution in a YOLOv8 network, and can easily optimize the network structure according to specific tasks and scenes without large-scale modification. Secondly, the space and channel reconstruction convolution module effectively limits feature redundancy and enhances the efficiency of the network through the design of a smart Space Reconstruction Unit (SRU) and a Channel Reconstruction Unit (CRU); this makes the YOLOv8 network more efficient in processing large-scale image data. The introduction of the space and channel reconstruction convolution module injects the characteristics of light weight and high efficiency into the YOLOv8 network, is more suitable for real-time target detection application, and simultaneously keeps good accuracy. This provides better performance and efficiency for applications running in environments where resources are limited.
As shown in fig. 4, the improved YOLOv8obb network is compared with previously proposed high-performing networks (Rotated Faster R-CNN-obb, Gliding Vertex, YOLOv5s-obb, R3Det, Oriented R-CNN, S2A-Net). The evaluation index compared is the loss obtained on the validation set. As the number of training rounds increases, the loss of each network tends to converge, and the loss of the improved YOLOv8obb network used in the invention is lower than that of the other networks, indicating better performance on the plant stalk dataset. A lower loss means the model fits the training data more accurately during training; the network has stronger representation and learning capability and can better capture the patterns and characteristics of plant stems.
As shown in fig. 5, the improved YOLOv8obb network is compared with the same networks (Rotated Faster R-CNN-obb, Gliding Vertex, YOLOv5s-obb, R3Det, Oriented R-CNN, S2A-Net), with the mean average precision (mAP) on the validation set as the evaluation index. As the number of training rounds increases, the mAP of each network tends to converge, and the mAP of the improved YOLOv8obb network used in the present invention is higher than that of the other networks. This indicates that the improved YOLOv8obb network has more accurate detection capability on the plant stalk dataset, can more accurately locate and identify plant stalk targets, and outperforms the other networks.
As shown in FIG. 6, the method of the invention measures the branch angle parameter on 120 real plant stalk images and compares the predicted values with the manual measurements. A unary linear regression model is fitted between the two; the coefficient of determination of the fit is not less than 0.99, the root mean square error (RMSE) is 1.0472, and the mean relative absolute error (MRAE) is 0.0883, so the detection accuracy of this parameter is relatively high.
As shown in FIG. 7, the method of the invention measures the branch length parameter on the same 120 real plant stalk images and compares the predicted values with the manual measurements. A unary linear regression model is fitted between the two; the coefficient of determination of the fit is not less than 0.99, the root mean square error (RMSE) is 1.0503, and the mean relative absolute error (MRAE) is 0.0251, so the detection accuracy of this parameter is high.
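The evaluation metrics reported for Figs. 6 and 7 (coefficient of determination, RMSE, MRAE) can be computed as in the following sketch; the function and variable names are assumptions of this illustration, not the patent's:

```python
import math

def regression_metrics(pred, manual):
    """Coefficient of determination (R^2), root mean square error (RMSE),
    and mean relative absolute error (MRAE) between predicted values and
    manual measurements."""
    n = len(pred)
    mean_m = sum(manual) / n
    ss_res = sum((p - m) ** 2 for p, m in zip(pred, manual))
    ss_tot = sum((m - mean_m) ** 2 for m in manual)
    r2 = 1 - ss_res / ss_tot            # goodness of fit
    rmse = math.sqrt(ss_res / n)        # absolute error scale
    mrae = sum(abs(p - m) / abs(m) for p, m in zip(pred, manual)) / n
    return r2, rmse, mrae
```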
As shown in FIG. 8, the detection speed of the branch parameters was recorded while the method of the present invention processed the 120 real plant stalk images; the angle and length parameters of the branches in each plant stalk image are detected and output simultaneously. Over the 120 plant stalk images, the slowest single-image detection time is 0.028 s (seconds), the fastest single-image detection time is 0.009 s, and the average detection time is 0.013 s, showing high speed and efficiency.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A machine-vision-based geometric parameter measurement method for plant stem growth, characterized by comprising the following steps:
step one: collecting plant stalk images through a shooting device;
step two: processing the plant stalk images using an improved YOLOv8obb network, and performing rotating rectangular frame marking on the plant stalks: taking each growing branch point as a boundary, marking a respective rotating rectangular frame, wherein each rotating rectangular frame contains a plant stalk branch and the plant stalk branch is parallel to its rotating rectangular frame; the improved YOLOv8obb network is obtained by adding a spatial and channel reconstruction convolution module after the second convolution module of the backbone network in the YOLOv8 network, replacing the four C2f modules of the backbone network with distribution shift convolution modules, and adding an SE attention mechanism layer after each of the four distribution shift convolution modules;
step three: according to the characteristic that plants grow upwards under illumination, the included angle between the long side of the rotating rectangular frame and the upward direction perpendicular to the ground is the branch angle of the plant stalk, the long side of the rotating rectangular frame gives the branch length of the plant stalk, and the short side of the rotating rectangular frame gives the branch diameter of the plant stalk; the branch length and diameter are obtained by measuring their pixel sizes and converting them according to a known proportional relation.
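Step three above can be sketched in code. The box angle convention (theta measured for the w-side from the image x-axis, OpenCV-style) and all names below are assumptions of this sketch, not taken from the patent:

```python
def branch_geometry(box, mm_per_px):
    """Branch angle/length/diameter from a rotated box (x, y, w, h, theta).
    Assumed convention: theta (degrees) is the angle of the w-side
    measured from the image x-axis."""
    x, y, w, h, theta = box          # center (x, y) is not needed below
    if w >= h:                       # orient the angle along the LONG side
        long_px, short_px, long_angle = w, h, theta
    else:
        long_px, short_px, long_angle = h, w, theta + 90.0
    # acute angle between the long side and the vertical (90 deg from x-axis)
    angle_from_vertical = abs(long_angle % 180.0 - 90.0)
    return {
        "angle_deg": angle_from_vertical,
        "length_mm": long_px * mm_per_px,    # long side -> branch length
        "diameter_mm": short_px * mm_per_px, # short side -> branch diameter
    }
```

The pixel-to-millimeter factor `mm_per_px` stands for the "known proportional relation" of the claim, e.g. obtained from a calibration target in the scene.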
2. The method for measuring geometrical parameters of plant stalk growth based on machine vision according to claim 1, wherein the backbone network of the improved YOLOv8obb network is composed of a first convolution module, a second convolution module, a space and channel reconstruction convolution module, a first distribution shift convolution module, a first SE attention mechanism, a third convolution module, a second distribution shift convolution module, a second SE attention mechanism, a fourth convolution module, a third distribution shift convolution module, a third SE attention mechanism, a fifth convolution module, a fourth distribution shift convolution module and a fourth SE attention mechanism in sequence, and the output characteristics of the second distribution shift convolution module, the output characteristics of the third distribution shift convolution module and the output characteristics of the fourth distribution shift convolution module processed by the fourth SE attention mechanism are selected to enter the neck network for multi-scale fusion.
3. The method for measuring geometrical parameters of plant stem growth based on machine vision according to claim 1, wherein the rotation angle of each rotating rectangular frame is adjusted when the rotation process is performed; selecting a center point of the rotary rectangular frame as a rotation center, and carrying out rotary transformation on the rotary rectangular frame according to a rotation angle; the transformed rotated rectangular box is input into a modified YOLOv8obb network for object detection.
4. A method for measuring geometrical parameters of plant stalk growth based on machine vision according to claim 3, characterized in that the rotation transformation process is: the coordinates of the rotating rectangular frame are (x, y, w, h, theta), wherein (x, y) is the center point coordinate, x represents the center point abscissa, y represents the center point ordinate, w and h are the width and height of the rotating rectangular frame respectively, and theta is the rotating angle of the rotating rectangular frame; for the rotated rectangular frame, the calculation is performed according to the following formula:
new center point coordinates:
new width:
new high:
new rotation angle:
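The explicit update formulas of claim 4 are given in the original figures; under the common convention of rotating the box about its center point, the corner coordinates of the transformed rotating rectangular frame can be sketched as follows (an assumed reading, with this sketch's own names):

```python
import math

def rotated_box_corners(x, y, w, h, theta_deg):
    """Corner coordinates of a box (x, y, w, h) after rotating it about
    its center point (x, y) by theta degrees."""
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    corners = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)):
        # standard 2-D rotation applied to each corner offset
        corners.append((x + dx * c - dy * s, y + dx * s + dy * c))
    return corners
```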
5. The method for measuring geometrical parameters of plant stalk growth based on machine vision according to claim 4, wherein the rotating rectangular frame is regressed using the CIoU loss function.
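The CIoU criterion named in claim 5 is a known formulation; a minimal sketch for axis-aligned boxes is given below for illustration (the patent applies it to rotating rectangular frames; the function and box layout are this sketch's own):

```python
import math

def ciou(b1, b2):
    """Complete IoU between two axis-aligned boxes (x1, y1, x2, y2):
    IoU minus a center-distance penalty and an aspect-ratio penalty."""
    # intersection area
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    iou = inter / (a1 + a2 - inter)
    # squared distance between box centers
    cx1, cy1 = (b1[0] + b1[2]) / 2, (b1[1] + b1[3]) / 2
    cx2, cy2 = (b2[0] + b2[2]) / 2, (b2[1] + b2[3]) / 2
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
    # squared diagonal of the smallest enclosing box
    ex1, ey1 = min(b1[0], b2[0]), min(b1[1], b2[1])
    ex2, ey2 = max(b1[2], b2[2]), max(b1[3], b2[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((b1[2] - b1[0]) / (b1[3] - b1[1]))
                              - math.atan((b2[2] - b2[0]) / (b2[3] - b2[1]))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return iou - rho2 / c2 - alpha * v
```

The training loss would then be `1 - ciou(pred, target)`, penalizing center offset and shape mismatch even when boxes overlap well.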
CN202311835468.8A 2023-12-28 2023-12-28 Geometric parameter measurement method for plant stem growth based on machine vision Active CN117522950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311835468.8A CN117522950B (en) 2023-12-28 2023-12-28 Geometric parameter measurement method for plant stem growth based on machine vision


Publications (2)

Publication Number Publication Date
CN117522950A true CN117522950A (en) 2024-02-06
CN117522950B CN117522950B (en) 2024-03-12

Family

ID=89762936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311835468.8A Active CN117522950B (en) 2023-12-28 2023-12-28 Geometric parameter measurement method for plant stem growth based on machine vision

Country Status (1)

Country Link
CN (1) CN117522950B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844632A (en) * 2016-03-21 2016-08-10 华南农业大学 Rice plant identification and positioning method based on machine visual sense
CN114120093A (en) * 2021-12-01 2022-03-01 安徽理工大学 Coal gangue target detection method based on improved YOLOv5 algorithm
CN114419011A (en) * 2022-01-24 2022-04-29 郑州轻工业大学 Cotton foreign fiber online detection method and system
CN114548208A (en) * 2021-12-22 2022-05-27 上海辰山植物园 Improved plant seed real-time classification detection method based on YOLOv5
CN114677325A (en) * 2022-01-25 2022-06-28 安徽农业大学 Construction method of rice stem section segmentation model and detection method based on model
CN115424257A (en) * 2022-08-15 2022-12-02 大理大学 Crop seedling stage plant counting method based on improved multi-column convolutional neural network
CN115497076A (en) * 2022-10-09 2022-12-20 江苏智能无人装备产业创新中心有限公司 High-precision and high-efficiency signal identification detection method, device and medium
CN115700805A (en) * 2021-07-30 2023-02-07 特变电工股份有限公司 Plant height detection method, device, equipment and storage medium
CN115953402A (en) * 2023-03-13 2023-04-11 江西农业大学 Plant stress-strain measurement method and device based on machine vision
CN115984704A (en) * 2023-02-10 2023-04-18 浙江理工大学 Plant and fruit detection algorithm of tomato picking robot
CN116758363A (en) * 2022-03-03 2023-09-15 四川大学 Weight self-adaption and task decoupling rotary target detector
CN117218633A (en) * 2023-07-31 2023-12-12 海信集团控股股份有限公司 Article detection method, device, equipment and storage medium


Non-Patent Citations (8)

Title
FANGTAO REN et al.: "Identification of Plant Stomata Based on YOLO v5 Deep Learning Model", CSAI 2021, 31 December 2021 (2021-12-31), pages 78-83, XP058844543, DOI: 10.1145/3507548.3507560 *
GUOLIANG YANG et al.: "A Lightweight YOLOv8 Tomato Detection Algorithm Combining Feature Enhancement and Attention", Agronomy 2023, vol. 13, no. 1824, 9 July 2023 (2023-07-09), pages 1-14 *
CUI QIANG et al.: "An improved YOLOv5 plant growth state detection algorithm deployed on Raspberry Pi", Electronic Technology & Software Engineering, 31 March 2023 (2023-03-31), pages 171-177 *
ZUO HAOXUAN et al.: "In-situ identification method of maize stem width based on binocular vision and improved YOLOv8", Smart Agriculture, vol. 5, no. 3, 30 September 2023 (2023-09-30), pages 86-95 *
LI WENJING et al.: "Object detection of plant leaf-stem intersections based on improved YOLOv4", Computer Engineering and Applications, vol. 58, no. 4, 31 December 2022 (2022-12-31), pages 221-228 *
YANG HONGYUN et al.: "Non-destructive measurement of geometric parameters of rice leaves", Journal of Jiangxi Agricultural University, no. 02, 20 April 2020 (2020-04-20), pages 275-279 *
JIANG YI et al.: "Application of deep transfer learning in the detection of Ageratina adenophora (crofton weed)", Computer Systems & Applications, no. 06, 15 June 2020 (2020-06-15), pages 201-212 *
CHEN HUIYING et al.: "Phenotypic parameter extraction of single-tiller rice plants based on YOLOv5m and CBAM-CPN", Transactions of the Chinese Society of Agricultural Engineering, 2 December 2023 (2023-12-02), pages 1-8 *

Also Published As

Publication number Publication date
CN117522950B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
WO2020177432A1 (en) Multi-tag object detection method and system based on target detection network, and apparatuses
CN113012150A (en) Feature-fused high-density rice field unmanned aerial vehicle image rice ear counting method
Li et al. A leaf segmentation and phenotypic feature extraction framework for multiview stereo plant point clouds
CN110472575A (en) A kind of string tomato maturation detection method based on deep learning and computer vision
CN112766155A (en) Deep learning-based mariculture area extraction method
CN112200163B (en) Underwater benthos detection method and system
CN113191334B (en) Plant canopy dense leaf counting method based on improved CenterNet
CN112215217B (en) Digital image recognition method and device for simulating doctor to read film
CN113744226A (en) Intelligent agricultural pest identification and positioning method and system
CN115953402A (en) Plant stress-strain measurement method and device based on machine vision
Zhao et al. Transient multi-indicator detection for seedling sorting in high-speed transplanting based on a lightweight model
CN117522950B (en) Geometric parameter measurement method for plant stem growth based on machine vision
CN117079125A (en) Kiwi fruit pollination flower identification method based on improved YOLOv5
He et al. Visual recognition and location algorithm based on optimized YOLOv3 detector and RGB depth camera
Saeed et al. Cotton plant part 3D segmentation and architectural trait extraction using point voxel convolutional neural networks
CN116563205A (en) Wheat spike counting detection method based on small target detection and improved YOLOv5
CN115249329A (en) Apple leaf disease detection method based on deep learning
CN112465821A (en) Multi-scale pest image detection method based on boundary key point perception
CN111768101A (en) Remote sensing farmland change detection method and system considering phenological characteristics
CN116704497B (en) Rape phenotype parameter extraction method and system based on three-dimensional point cloud
Hu et al. Machine Vision-Based Recognition for fruit cracking in cherry
CN117173122B (en) Lightweight ViT-based image leaf density determination method and device
Fan et al. Segmentation and Extraction of Maize Phytomers Using 3D Data Acquired by RGB-D Cameras
CN115690619A (en) Decoupling method for positioning and classifying tasks in object detection and storage medium
Yang et al. Rapid Detection and Counting of Wheat Ears in the Field Using YOLOv4 with Attention Module. Agronomy 2021, 11, 1202

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant