CN117036861A - Corn crop line identification method based on Faster-YOLOv8s network


Info

Publication number
CN117036861A
CN117036861A CN202311038710.9A
Authority
CN
China
Prior art keywords
layer
input end
output end
convolution layer
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311038710.9A
Other languages
Chinese (zh)
Inventor
刁智华
薛帮国
郭培亮
张保华
张东彦
张竞成
杨然兵
李江波
贺振东
赵素娜
何艳
赵春江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University of Light Industry
Original Assignee
Zhengzhou University of Light Industry
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University of Light Industry
Priority to CN202311038710.9A
Publication of CN117036861A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a corn crop row identification method based on a Faster-YOLOv8s network, which comprises the following steps: first, a data set of corn crop rows is constructed; second, a Faster-YOLOv8s network is constructed and then trained and validated on the data set to obtain a Faster-YOLOv8s network model, which is used to obtain corn cob detection frames; then, corn crop row feature points are located at the midpoints of the corn cob detection frames; finally, the feature points are fitted by the least squares method to obtain the crop row center line. The invention positions corn plants by taking the corn cob as the recognition target, performs excellently in corn cob detection and feature point localization, meets the real-time and accuracy requirements of agricultural-robot visual navigation, and provides an effective approach for navigating agricultural robots in complex farmland environments.

Description

Corn crop line identification method based on Faster-YOLOv8s network
Technical Field
The invention relates to the technical field of crop row detection, in particular to a corn crop row identification method based on a Faster-YOLOv8s network, mainly applied to precise navigation of agricultural robots during weeding and pesticide application.
Background
Corn is one of China's most important grain crops and plays a major role in the country's grain supply. In modern agriculture, agricultural robots are indispensable to corn planting and management, and extracting the center line of a corn crop row provides the basis for robot navigation; accurately identifying the corn crop row center line is therefore the key to agricultural-robot visual navigation.
Accurate positioning of crops in images is the basis for accurately identifying crop row lines, and researchers at home and abroad have studied it extensively. To reduce the influence of illumination on path recognition, Gao Guoqin et al. segmented crop rows using the HSI color space and K-means clustering and extracted the navigation line with the Hough transform, but the method considers only the single factor of illumination and its working environment is a greenhouse. To overcome the influence of tall weeds on corn crop row detection, M. Montalvo et al. effectively separated crops from weeds using the ExG vegetation index and a double-Otsu method and fitted crop row lines by least squares; this method considers only the weed factor. To overcome the influence of rice-plant morphological differences on guide-line detection accuracy, Keun Ha Choi et al. proposed a new guide-line extraction algorithm that locates rice plants by their morphological characteristics and extracts the guide line with an improved robust regression method, without considering environmental factors such as illumination and weeds. To identify curved rice crop rows, Fuchun Liu et al. located rice plants with an SSD model and fitted crop row lines by least squares; the algorithm does not consider complex environmental factors such as tall weeds and broken rows. To identify crops and weeds more accurately, Shahbaz Khan et al. optimized the number, size and proportion of the feature-extraction modules and anchor boxes in the traditional Faster R-CNN architecture for best effect. To handle the influence of illumination, leaf occlusion and other environmental factors on tomato detection, Guoxu Liu et al. introduced a densely connected structure into the YOLOv3 network and replaced the rectangular detection frame with a circular one to identify tomatoes more accurately, but the method performs poorly on heavily occluded tomatoes. To detect broccoli seedlings effectively, Sun Zhe et al. introduced a ResNet-101 network into the Faster R-CNN model, which detects broccoli seedlings well in the natural state. To address low weed-segmentation accuracy in complex farmland environments, the Mask R-CNN model has been combined with an FCN segmentation algorithm, so that the target contour is segmented while the target is detected. To let the agricultural robot avoid obstacles during operation, Li Wentao et al. combined the shallow-layer information and the second-prediction-layer information of the YOLOv3-tiny network into a new third prediction layer and added an attention mechanism to improve the network's obstacle-detection accuracy.
To reduce the influence of duckweed, blue-green algae, illumination and other environmental factors on seedling-row center-line extraction, Zhang Qin et al. located seedlings with a YOLOv3 model, adaptively clustered and preprocessed the detection frames, and fitted the seedling-row center line from extracted SUSAN corner features; the algorithm identifies the center line efficiently but applies only to the seedling stage and does not consider center-line identification when seedling leaves adhere. Zhihua Diao et al. proposed a lightweight three-dimensional convolutional neural network for fast identification of corn seedlings and weeds, but the algorithm applies only to the corn seedling stage and sacrifices detection accuracy. To reduce the influence of weed distribution and illumination intensity on crop row detection, Yue Hu et al. clustered the detection results of an improved YOLOv4 network, extracted feature points inside the detection frames with a mean-value method, and fitted crop row lines by least squares. To reduce the influence of light, duckweed and weeds on rice-row detection, Shanshan Wang et al. combined a ResNet network with a row-anchor grid classification network to identify rice rows quickly and accurately. To study the influence of weed density, seedling-row curvature and other factors on seedling-row identification, Shanshan Wang et al. detected straight and curved crop rows with an improved YOLOv5 network and an improved center-line extraction algorithm, but the method applies only to the rice seedling stage and cannot cover the whole rice growth period. To handle complex field conditions such as tall weeds and low illumination in corn crop row detection, Yang Yang et al. segmented crop rows from the background within a region of interest using a YOLOv5 network, the excess-green (ExG) method and the maximum between-class variance (Otsu) method, then located feature points with FAST corner detection, and finally fitted the crop row center line by least squares.
Although these algorithms identify the crop row center line reasonably well, each targets a single condition and cannot accommodate crops at different growth stages and in different growth environments.
Disclosure of Invention
Aiming at the poor recognition performance and poor adaptability of existing corn crop row identification algorithms in complex farmland environments, the invention provides a corn crop row identification method based on a Faster-YOLOv8s network, whose purpose is to accurately identify corn crop row center lines at different growth stages in complex farmland environments.
The technical scheme of the invention is realized as follows:
a corn crop line identification method based on a Faster-YOLOv8s network comprises the following steps:
S1: constructing a data set of corn crop rows;
S2: constructing a Faster-YOLOv8s network, and training and verifying the Faster-YOLOv8s network by using the data set to obtain a Faster-YOLOv8s network model; obtaining corn cob detection frames by using the Faster-YOLOv8s network model; the Faster-YOLOv8s network is obtained by replacing the SPPF module in the backbone structure of the YOLOv8s network with an ASPPF module, the ASPPF module being an improvement of the ASPP structure;
S3: positioning the corn crop row feature points by using the midpoints of the corn cob detection frames;
S4: fitting the feature points of the corn crop rows by the least squares method to obtain the crop row center line.
The corn crop row data set is constructed as follows: corn crop row pictures at different growth stages and in different growth environments are collected as the data set; the original pictures are first cropped and data-enhanced; the data set is then divided into a training set, a validation set and a test set in a 4:1:1 ratio; finally, the data set is annotated with the labeling tool LabelImg, the annotation target being the plant core (cob) of the maize canopy. The growth stages comprise the corn seedling stage and the middle growth stage, and the growth environments comprise normal, weedy, broken-row and leaf-adhesion conditions.
The ASPPF module comprises a convolution layer I, a convolution layer II, a convolution layer III, a convolution layer IV, a fusion layer, a pooling layer, a convolution layer V, an upsampling layer, a convolution layer VI and an output layer;
the input features are fed to the input end of convolution layer I and the input end of the pooling layer; the output end of convolution layer I is connected to the input end of convolution layer II and to the fusion layer; the output end of convolution layer II is connected to the input end of convolution layer III and to the fusion layer; the output end of convolution layer III is connected to the input end of convolution layer IV and to the fusion layer; the output end of convolution layer IV is connected to the fusion layer; the output end of the pooling layer is connected to the input end of convolution layer V; the output end of convolution layer V is connected to the input end of the upsampling layer; the output end of the upsampling layer is connected to the fusion layer; the output end of the fusion layer is connected to the input end of convolution layer VI; and the output end of convolution layer VI is connected to the output layer.
The convolution kernel sizes of convolution layers I, V and VI are all 1×1; the convolution kernel sizes of convolution layers II, III and IV are all 3×3, with dilation rates of 6 for convolution layer II, 12 for convolution layer III and 18 for convolution layer IV.
The Faster-YOLOv8s network comprises a Backbone module, a Neck module and a Head module;
the backbond module comprises a first convolution layer, a second convolution layer, a C2f-I, a third convolution layer, a C2f-II, a fourth convolution layer, a C2f-III, a fifth convolution layer, a C2f-IV and an ASPPF module;
the Neck module comprises a C2f-V, a first fusion layer, a first upsampling layer, a C2f-VI, a second fusion layer, a second upsampling layer, a sixth convolution layer, a third fusion layer, a C2f-VII, a seventh convolution layer, a fourth fusion layer and a C2f-VIII;
the Head module comprises a detection layer I, a detection layer II and a detection layer III;
the input end of the first convolution layer is used for receiving input data, the output end of the first convolution layer is connected with the input end of the second convolution layer, the output end of the second convolution layer is connected with the input end of the C2f-I, the output end of the C2f-I is connected with the input end of the third convolution layer, the output end of the third convolution layer is connected with the input end of the C2f-II, the output end of the C2f-II is respectively connected with the input end of the fourth convolution layer and the input end of the first fusion layer, the output end of the fourth convolution layer is connected with the input end of the C2f-III, the output end of the C2f-III is respectively connected with the input end of the fifth convolution layer and the input end of the second fusion layer, the output end of the fifth convolution layer is connected with the input end of the C2f-IV, the output end of the C2f-IV is connected with the input end of the ASPPF module, the output end of the ASPPF module is respectively connected with the input end of the second upsampling layer and the input end of the fourth fusion layer, the output end of the second upsampling layer is connected with the input end of the second fusion layer, the output end of the second fusion layer is connected with the input end of the C2f-VI, the output end of the C2f-VI is respectively connected with the input end of the first upsampling layer and the input end of the third fusion layer, the output end of the first upsampling layer is connected with the input end of the first fusion layer, the output end of the first fusion layer is connected with the input end of the C2f-V, the output end of the C2f-V is respectively connected with the input end of the sixth convolution layer and the input end of the detection layer I, the output end of the third fusion layer is connected with the input end of the C2f-VII, the output end of the C2f-VII is respectively connected with the input end of the seventh convolution layer and the detection layer II, the output end of the seventh convolution layer is connected with the input end of the fourth fusion layer, the output end of the fourth fusion layer is connected with the input end of the C2f-VIII, and the output end of the C2f-VIII is connected with the detection layer III.
The coordinates of a corn crop row feature point are (x_0, y_0), with x_0 and y_0 given by:

x_0 = (x_i + x_I) / 2, y_0 = (y_j + y_J) / 2

where x_i is the abscissa and y_j the ordinate of the upper-left corner of the detection frame, and x_I is the abscissa and y_J the ordinate of its lower-right corner.
Compared with the prior art, the invention has the beneficial effects that:
1) The invention positions corn plants by taking the corn cob as the recognition target for the first time, proposes a novel spatial pyramid pooling structure, ASPPF, and provides an improved YOLOv8s model, Faster-YOLOv8s, to detect corn cobs more accurately.
2) The invention addresses the weak robustness of traditional crop row recognition algorithms across growth stages and growth environments. The disclosed Faster-YOLOv8s network extracts corn cobs well under different environmental pressures at different growth stages; its mAP and F1 reach 90.2% and 91%, up from 86.4% and 86% for the YOLOv7 network and 88.8% and 87% for the YOLOv8s network.
3) Recognizing the crop row center line by combining the midpoints of the corn cob detection frames with the least squares method reduces the average fitting time and average angle error to 45 ms and 0.63°, from 82.6 ms and 0.97° for the method of Zhang Qin et al., 74.8 ms and 0.75° for the method of Yue Hu et al., and 67 ms and 2.03° for the method of Yang Yang et al. The accuracy is improved to 94.35%, from 91.4%, 93.6% and 87.35% for those methods respectively.
4) The method provided by the invention performs excellently in corn cob detection and feature point localization, meets the real-time and accuracy requirements of agricultural-robot visual navigation, and provides an effective approach for navigating agricultural robots in complex farmland environments.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of the present invention.
FIG. 2 shows an example of the maize cob labels.
Fig. 3 is a diagram of the ASPPF network architecture.
FIG. 4 is a block diagram of the Faster-YOLOv8s network.
Fig. 5 is a performance evaluation graph of various networks.
FIG. 6 is a graph of the detection results of the Faster-YOLOv8s network.
Fig. 7 is a feature point localization map based on the mid-point of the corn cob detection box.
Fig. 8 is a crop line centerline graph fitted using a least squares method.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without any inventive effort, are intended to be within the scope of the invention.
As shown in FIG. 1, an embodiment of the invention provides a corn crop row identification method based on a Faster-YOLOv8s network, with the following specific steps:
s1: constructing a data set of corn crop rows; the pictures are taken in Zhengzhou city, and corn crop rows with different growth periods and different growth environments are respectively collected. Different growth periods comprise corn seedling period and corn growth medium period, and different growth environments comprise normal, weed, broken line, adhesion and the like. A total of 3000 original pictures were taken, according to 4:1: the ratio of 1 is divided into a training set, a verification set and a test set respectively. And marking the data set by using a marking tool Labelimg, wherein the marking object is maize canopy plant core. Considering that the corn has the conditions of leaf adhesion, mutual shielding and the like in the middle of growth, part of single corn cob can be not easily marked, so that a plurality of corn cobs can be marked together in the label manufacturing process. The navigation of the agricultural robot mainly depends on 2 to 4 columns of corn crop rows in the middle of the image, so that the original picture needs to be cut before the data set is marked, and only 2 to 4 columns of corn crop rows in the middle of the picture are reserved for navigation. Meanwhile, in order to enrich the image data of the training set, the characteristics of the corn cob are better extracted, and the data of the image of the training set are enhanced. Fig. 2 is a photograph of a maize cob tag.
S2: Construct the Faster-YOLOv8s network, train and validate it with the data set to obtain the Faster-YOLOv8s network model, and use the model to obtain corn cob detection frames. The first standard convolution branch and the parallel dilated convolution layers with different dilation rates in the ASPP structure are rearranged into a serial-then-parallel structure, which improves both the speed and the accuracy of feature extraction; the improved ASPP structure is named ASPPF. The ASPPF network architecture is shown in fig. 3.
The ASPPF module comprises convolution layer I, convolution layer II, convolution layer III, convolution layer IV, a fusion layer, a pooling layer, convolution layer V, an upsampling layer, convolution layer VI and an output layer. The input features are fed to the input end of convolution layer I and the input end of the pooling layer; the output end of convolution layer I is connected to the input end of convolution layer II and to the fusion layer; the output end of convolution layer II is connected to the input end of convolution layer III and to the fusion layer; the output end of convolution layer III is connected to the input end of convolution layer IV and to the fusion layer; the output end of convolution layer IV is connected to the fusion layer; the output end of the pooling layer is connected to the input end of convolution layer V; the output end of convolution layer V is connected to the input end of the upsampling layer; the output end of the upsampling layer is connected to the fusion layer; the output end of the fusion layer is connected to the input end of convolution layer VI; and the output end of convolution layer VI is connected to the output layer. The convolution kernel sizes of convolution layers I, V and VI are all 1×1; the convolution kernel sizes of convolution layers II, III and IV are all 3×3, with dilation rates of 6, 12 and 18 respectively.
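A minimal PyTorch sketch of an ASPPF module consistent with this description is given below. The serial chain conv I, conv II, conv III, conv IV and the pooled branch follow the stated connections; the intermediate channel width, the use of concatenation as the fusion operation, global average pooling as the pooling layer and bilinear interpolation as the upsampling layer are assumptions not fixed by the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPPF(nn.Module):
    def __init__(self, in_ch, out_ch, mid_ch=256):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, mid_ch, 1)                            # conv I, 1x1
        self.conv2 = nn.Conv2d(mid_ch, mid_ch, 3, padding=6, dilation=6)    # conv II, 3x3, d=6
        self.conv3 = nn.Conv2d(mid_ch, mid_ch, 3, padding=12, dilation=12)  # conv III, 3x3, d=12
        self.conv4 = nn.Conv2d(mid_ch, mid_ch, 3, padding=18, dilation=18)  # conv IV, 3x3, d=18
        self.pool = nn.AdaptiveAvgPool2d(1)                                 # pooling layer (assumed global)
        self.conv5 = nn.Conv2d(in_ch, mid_ch, 1)                            # conv V, 1x1
        self.conv6 = nn.Conv2d(5 * mid_ch, out_ch, 1)                       # conv VI fuses five branches

    def forward(self, x):
        f1 = self.conv1(x)            # serial chain: each dilated conv consumes the previous output,
        f2 = self.conv2(f1)           # and every intermediate output also feeds the fusion layer
        f3 = self.conv3(f2)
        f4 = self.conv4(f3)
        p = self.conv5(self.pool(x))  # pooled branch
        p = F.interpolate(p, size=x.shape[2:], mode="bilinear", align_corners=False)  # upsampling layer
        return self.conv6(torch.cat([f1, f2, f3, f4, p], dim=1))  # fusion layer -> conv VI
```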
Meanwhile, the ASPPF module replaces the SPPF module in the backbone of the YOLOv8s network used to detect corn cobs, and the modified network is named the Faster-YOLOv8s network. The structure of the Faster-YOLOv8s network is shown in FIG. 4.
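The swap itself can be sketched as a recursive module replacement. The cv1/cv2 attribute layout used to read SPPF's channel counts follows the common open-source YOLOv8 implementation and is an assumption here, as is the ASPPF class from the sketch above.

```python
import torch.nn as nn

def replace_sppf_with_asppf(module: nn.Module) -> None:
    # Walk the model tree and swap every SPPF block for an ASPPF with the
    # same input/output channel counts; matching by class name keeps the
    # sketch independent of where SPPF is imported from.
    for name, child in module.named_children():
        if type(child).__name__ == "SPPF":
            in_ch = child.cv1.conv.in_channels    # assumed SPPF attribute layout
            out_ch = child.cv2.conv.out_channels
            setattr(module, name, ASPPF(in_ch, out_ch))
        else:
            replace_sppf_with_asppf(child)
```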
The Faster-YOLOv8s network comprises a Backbone module, a Neck module and a Head module. The Backbone module comprises a first convolution layer, a second convolution layer, C2f-I, a third convolution layer, C2f-II, a fourth convolution layer, C2f-III, a fifth convolution layer, C2f-IV and an ASPPF module. The Neck module comprises C2f-V, a first fusion layer, a first upsampling layer, C2f-VI, a second fusion layer, a second upsampling layer, a sixth convolution layer, a third fusion layer, C2f-VII, a seventh convolution layer, a fourth fusion layer and C2f-VIII. The Head module comprises detection layer I, detection layer II and detection layer III.
The input end of the first convolution layer is used for receiving input data, the output end of the first convolution layer is connected with the input end of the second convolution layer, the output end of the second convolution layer is connected with the input end of the C2f-I, the output end of the C2f-I is connected with the input end of the third convolution layer, the output end of the third convolution layer is connected with the input end of the C2f-II, the output end of the C2f-II is respectively connected with the input end of the fourth convolution layer and the input end of the first fusion layer, the output end of the fourth convolution layer is connected with the input end of the C2f-III, the output end of the C2f-III is respectively connected with the input end of the fifth convolution layer and the input end of the second fusion layer, the output end of the fifth convolution layer is connected with the input end of the C2f-IV, the output end of the C2f-IV is connected with the input end of the ASPPF module, the output end of the ASPPF module is respectively connected with the input end of the second upsampling layer and the input end of the fourth fusion layer, the output end of the second upsampling layer is connected with the input end of the second fusion layer, the output end of the second fusion layer is connected with the input end of the C2f-VI, the output end of the C2f-VI is respectively connected with the input end of the first upsampling layer and the input end of the third fusion layer, the output end of the first upsampling layer is connected with the input end of the first fusion layer, the output end of the first fusion layer is connected with the input end of the C2f-V, the output end of the C2f-V is respectively connected with the input end of the sixth convolution layer and the input end of the detection layer I, the output end of the third fusion layer is connected with the input end of the C2f-VII, the output end of the C2f-VII is respectively connected with the input end of the seventh convolution layer and the detection layer II, the output end of the seventh convolution layer is connected with the input end of the fourth fusion layer, the output end of the fourth fusion layer is connected with the input end of the C2f-VIII, and the output end of the C2f-VIII is connected with the detection layer III.
The Faster-YOLOv8s network is trained with Python 3.8 as the programming language and PyTorch 1.8.0 as the deep-learning framework; epochs are set to 100, the batch size to 4 and the initial learning rate to 0.01; a stochastic gradient descent (SGD) optimization algorithm is adopted with weight decay 0.0005; and the most accurate model obtained during training is saved as the optimal model.
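A minimal training-loop sketch under these settings might look as follows; model, train_loader, compute_loss and evaluate_map are placeholders for the detector, the batch-size-4 data loader, the detection loss and the validation mAP routine, none of which are specified by the patent.

```python
import torch

def train(model, train_loader, compute_loss, evaluate_map, device="cuda"):
    # SGD with initial learning rate 0.01 and weight decay 0.0005, 100 epochs.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=0.0005)
    best_map = 0.0
    model.to(device)
    for epoch in range(100):
        model.train()
        for images, targets in train_loader:   # batch size 4 is set in the loader
            optimizer.zero_grad()
            loss = compute_loss(model(images.to(device)), targets)
            loss.backward()
            optimizer.step()
        current_map = evaluate_map(model)      # validate after each epoch
        if current_map > best_map:             # keep the most accurate weights
            best_map = current_map
            torch.save(model.state_dict(), "best.pt")
```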
The Faster-YOLOv8s network is evaluated on the validation-set corn cob detection results. The evaluation indexes are the precision P, the recall R, the mean average precision mAP and the F1 score, given by formulas (1) to (4):

P = T_P / (T_P + F_P) (1)

R = T_P / (T_P + F_N) (2)

mAP = (1/n) * sum_{i=1}^{n} AP_i (3)

F1 = 2PR / (P + R) (4)

where T_P is the number of samples correctly classified as positive, F_P is the number of samples wrongly classified as positive, F_N is the number of positive samples wrongly classified as negative, AP_i is the average precision of class i, i is the class index and n is the total number of classes.
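The metric definitions translate directly into code; the sketch below computes P, R and F1 from the counts defined above and averages per-class AP values into mAP, while computing each AP itself is left to a standard detection-evaluation routine.

```python
def precision(tp, fp):
    return tp / (tp + fp)                         # formula (1)

def recall(tp, fn):
    return tp / (tp + fn)                         # formula (2)

def mean_ap(ap_per_class):
    return sum(ap_per_class) / len(ap_per_class)  # formula (3)

def f1_score(p, r):
    return 2 * p * r / (p + r)                    # formula (4)
```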
The performance of the networks was evaluated using mAP and F1; the evaluation results of the YOLOv7 network, the YOLOv8s network and the Faster-YOLOv8s network are shown in FIG. 5.
The trained Faster-YOLOv8s network was tested with the corn crop row test set under different environmental pressures (normal, weedy, broken-row, leaf-adhesion, etc.) at different growth stages. The corn cob detection results of the Faster-YOLOv8s network are shown in FIG. 6.
S3: positioning corn crop line features using midpoints of corn cob detection framesA dot; the crop row feature point positioning results are shown in fig. 7. The coordinates of the characteristic points of the corn crop row are (x) 0 ,y 0 ),x 0 、y 0 The expressions of (2) are respectively:
wherein x is i To detect the left upper corner abscissa of the frame, y j To detect the vertical coordinate of the upper left corner of the frame, x I To detect the horizontal coordinate of the lower right corner of the frame, y J Is the ordinate of the lower right corner of the detection frame.
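As a sketch, the feature-point computation is simply the midpoint of each detection frame; boxes are assumed to arrive as (x_i, y_j, x_I, y_J) pixel-coordinate tuples as defined above.

```python
def box_midpoints(boxes):
    # One feature point per corn cob detection frame: the centre of the box.
    return [((x_i + x_I) / 2.0, (y_j + y_J) / 2.0)
            for x_i, y_j, x_I, y_J in boxes]
```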
S4: fitting the characteristic points of the corn crop rows by adopting a least square method to obtain the center line of the crop rows; the corn crop row center line extraction effect is shown in fig. 8.
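A minimal least-squares fit for one crop row's center line is sketched below; NumPy's degree-1 polyfit minimises the squared vertical residuals, and grouping the feature points by row is assumed to happen beforehand. For near-vertical rows it may be preferable to fit x as a function of y instead.

```python
import numpy as np

def fit_row_centerline(points):
    # points: [(x, y), ...] feature points belonging to one crop row.
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    k, b = np.polyfit(xs, ys, 1)  # least-squares line y = k*x + b
    return k, b
```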
The invention can also be applied to center-line recognition of other crop rows, such as wheat and rice.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (6)

1. A corn crop line identification method based on a Faster-YOLOv8s network is characterized by comprising the following steps:
S1: constructing a data set of corn crop rows;
S2: constructing a Faster-YOLOv8s network, and training and verifying the Faster-YOLOv8s network by using the data set to obtain a Faster-YOLOv8s network model; obtaining corn cob detection frames by using the Faster-YOLOv8s network model; the Faster-YOLOv8s network is obtained by replacing the SPPF module in the backbone structure of the YOLOv8s network with an ASPPF module, the ASPPF module being an improvement of the ASPP structure;
S3: positioning the corn crop row feature points by using the midpoints of the corn cob detection frames;
S4: fitting the feature points of the corn crop rows by the least squares method to obtain the crop row center line.
2. The corn crop line identification method based on the Faster-YOLOv8s network according to claim 1, wherein the corn crop row data set is constructed as follows: corn crop row pictures at different growth stages and in different growth environments are collected as the data set; the original pictures are first cropped and data-enhanced; the data set is then divided into a training set, a validation set and a test set in a 4:1:1 ratio; finally, the data set is annotated with the labeling tool LabelImg, the annotation target being the plant core (cob) of the maize canopy; the growth stages comprise the corn seedling stage and the middle growth stage, and the growth environments comprise normal, weedy, broken-row and leaf-adhesion conditions.
3. The corn crop row identification method based on the Faster-YOLOv8s network of claim 1, wherein the ASPPF module comprises convolution layer I, convolution layer II, convolution layer III, convolution layer IV, a fusion layer, a pooling layer, convolution layer V, an upsampling layer, convolution layer VI and an output layer;
the input features are fed to the input end of convolution layer I and the input end of the pooling layer; the output end of convolution layer I is connected to the input end of convolution layer II and to the fusion layer; the output end of convolution layer II is connected to the input end of convolution layer III and to the fusion layer; the output end of convolution layer III is connected to the input end of convolution layer IV and to the fusion layer; the output end of convolution layer IV is connected to the fusion layer; the output end of the pooling layer is connected to the input end of convolution layer V; the output end of convolution layer V is connected to the input end of the upsampling layer; the output end of the upsampling layer is connected to the fusion layer; the output end of the fusion layer is connected to the input end of convolution layer VI; and the output end of convolution layer VI is connected to the output layer.
4. The corn crop line identification method based on the Faster-YOLOv8s network according to claim 3, wherein the convolution kernel sizes of convolution layers I, V and VI are all 1×1; the convolution kernel sizes of convolution layers II, III and IV are all 3×3, with dilation rates of 6 for convolution layer II, 12 for convolution layer III and 18 for convolution layer IV.
5. The corn crop row identification method based on the Faster-YOLOv8s network of claim 3, wherein the Faster-YOLOv8s network comprises a Backbone module, a Neck module and a Head module;
the Backbone module comprises a first convolution layer, a second convolution layer, C2f-I, a third convolution layer, C2f-II, a fourth convolution layer, C2f-III, a fifth convolution layer, C2f-IV and an ASPPF module;
the Neck module comprises a C2f-V, a first fusion layer, a first upsampling layer, a C2f-VI, a second fusion layer, a second upsampling layer, a sixth convolution layer, a third fusion layer, a C2f-VII, a seventh convolution layer, a fourth fusion layer and a C2f-VIII;
the Head module comprises a detection layer I, a detection layer II and a detection layer III;
the input end of the first convolution layer is used for receiving input data, the output end of the first convolution layer is connected with the input end of the second convolution layer, the output end of the second convolution layer is connected with the input end of the C2f-I, the output end of the C2f-I is connected with the input end of the third convolution layer, the output end of the third convolution layer is connected with the input end of the C2f-II, the output end of the C2f-II is respectively connected with the input end of the fourth convolution layer and the input end of the first fusion layer, the output end of the fourth convolution layer is connected with the input end of the C2f-III, the output end of the C2f-III is respectively connected with the input end of the fifth convolution layer and the input end of the second fusion layer, the output end of the fifth convolution layer is connected with the input end of the C2f-IV, the output end of the C2f-IV is connected with the input end of the ASPPF module, the output end of the ASPPF module is respectively connected with the input end of the second upsampling layer and the input end of the fourth fusion layer, the output end of the second upsampling layer is connected with the input end of the second fusion layer, the output end of the second fusion layer is connected with the input end of the C2f-VI, the output end of the C2f-VI is respectively connected with the input end of the first upsampling layer and the input end of the third fusion layer, the output end of the first upsampling layer is connected with the input end of the first fusion layer, the output end of the first fusion layer is connected with the input end of the C2f-V, the output end of the C2f-V is respectively connected with the input end of the sixth convolution layer and the input end of the detection layer I, the output end of the third fusion layer is connected with the input end of the C2f-VII, the output end of the C2f-VII is respectively connected with the input end of the seventh convolution layer and the detection layer II, the output end of the seventh convolution layer is connected with the input end of the fourth fusion layer, the output end of the fourth fusion layer is connected with the input end of the C2f-VIII, and the output end of the C2f-VIII is connected with the detection layer III.
6. The method for identifying a corn crop line based on the Faster-YOLOv8s network according to claim 1, wherein the coordinates of a corn crop row feature point are (x_0, y_0), with x_0 and y_0 given by:

x_0 = (x_i + x_I) / 2, y_0 = (y_j + y_J) / 2

where x_i is the abscissa and y_j the ordinate of the upper-left corner of the detection frame, and x_I is the abscissa and y_J the ordinate of its lower-right corner.
CN202311038710.9A 2023-08-17 2023-08-17 Corn crop line identification method based on Faster-YOLOv8s network Pending CN117036861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311038710.9A CN117036861A (en) 2023-08-17 2023-08-17 Corn crop line identification method based on Faster-YOLOv8s network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311038710.9A CN117036861A (en) 2023-08-17 2023-08-17 Corn crop line identification method based on Faster-YOLOv8s network

Publications (1)

Publication Number Publication Date
CN117036861A 2023-11-10

Family

ID=88644593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311038710.9A Pending CN117036861A (en) 2023-08-17 2023-08-17 Corn crop line identification method based on Faster-YOLOv8s network

Country Status (1)

Country Link
CN (1) CN117036861A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576560A (en) * 2023-11-17 2024-02-20 中化现代农业有限公司 Method, device, equipment and medium for identifying field weeds of northern spring corns


Similar Documents

Publication Publication Date Title
Song et al. Kiwifruit detection in field images using Faster R-CNN with VGG16
CN105740759B (en) Semilate rice information decision tree classification approach based on feature extraction in multi-temporal data
CN109961024A (en) Wheat weeds in field detection method based on deep learning
CN110378909A (en) Single wooden dividing method towards laser point cloud based on Faster R-CNN
CN109948444A (en) Method for synchronously recognizing, system and the robot of fruit and barrier based on CNN
CN110839519A (en) Internet of things intelligent agricultural irrigation device and method based on deep learning
Huang et al. Deep localization model for intra-row crop detection in paddy field
Alejandrino et al. Visual classification of lettuce growth stage based on morphological attributes using unsupervised machine learning models
CN113657326A (en) Weed detection method based on multi-scale fusion module and feature enhancement
CN117036861A (en) Corn crop line identification method based on Faster-YOLOv8s network
CN108073947B (en) Method for identifying blueberry varieties
CN111967441A (en) Crop disease analysis method based on deep learning
CN112861666A (en) Chicken flock counting method based on deep learning and application
Kang et al. Support vector machine classification of crop lands using sentinel-2 imagery
CN113673628A (en) Corn planting distribution extraction method based on high-resolution satellite data
Diao et al. Navigation line extraction algorithm for corn spraying robot based on improved YOLOv8s network
CN116543316A (en) Method for identifying turf in paddy field by utilizing multi-time-phase high-resolution satellite image
Miao et al. Crop weed identification system based on convolutional neural network
Lu et al. Citrus green fruit detection via improved feature network extraction
Kalpana et al. Diagnosis of major foliar diseases in black gram (Vigna mungo L.) using convolution neural network (CNN)
Widiyanto et al. Monitoring the growth of tomatoes in real time with deep learning-based image segmentation
CN115828181A (en) Potato disease category identification method based on deep learning algorithm
Wang et al. Lightweight Convolution Neural Network Based on Multi-Scale Parallel Fusion for Weed Identification
Dahiya et al. An Effective Detection of Litchi Disease using Deep Learning
CN115035423A (en) Hybrid rice male and female parent identification and extraction method based on unmanned aerial vehicle remote sensing image

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination