CN118196639A - Corn yield estimation method and device - Google Patents


Info

Publication number
CN118196639A
Authority
CN
China
Prior art keywords
corn
model
collecting
yield estimation
identification model
Prior art date
Legal status
Granted
Application number
CN202410605064.8A
Other languages
Chinese (zh)
Other versions
CN118196639B (en)
Inventor
于福东
修汉森
赵明
孙立娜
陈忠磊
靳海科
李铁
郭琦
张新轶
张兵
冯宇琦
张晓奚
唐志会
王莫寒
赵恩泽
Current Assignee
Jilin Province Zhongnong Sunshine Data Co ltd
Original Assignee
Jilin Province Zhongnong Sunshine Data Co ltd
Priority date
Filing date
Publication date
Application filed by Jilin Province Zhongnong Sunshine Data Co ltd filed Critical Jilin Province Zhongnong Sunshine Data Co ltd
Priority to CN202410605064.8A priority Critical patent/CN118196639B/en
Publication of CN118196639A publication Critical patent/CN118196639A/en
Application granted granted Critical
Publication of CN118196639B publication Critical patent/CN118196639B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/273 Removing elements interfering with the pattern to be recognised
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/457 By analysing connectivity, e.g. edge linking, connected component analysis or slices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/766 Using regression, e.g. by projecting features on hyperplanes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A corn yield estimation method and device relate to the technical field of corn planting. The traditional corn yield estimation method mainly depends on manual sampling and simple statistical analysis and suffers from low efficiency and low precision. To solve these technical problems, the invention provides a scheme comprising the following steps: collecting preset corn image data; collecting a preset corn yield estimation model; collecting a preset optimization algorithm; training the corn yield estimation model according to the corn image data and the optimization algorithm to respectively obtain a corn individual recognition model, a corn cross-section recognition model and a front corn kernel recognition model; collecting a picture to be detected; preprocessing the picture to be detected; and obtaining the total grain number of the corn according to the corn individual recognition model, the corn cross-section recognition model, the front corn kernel recognition model and the preprocessed picture to be detected. A corresponding device is also provided. The method can be applied to corn yield estimation work in agricultural production.

Description

Corn yield estimation method and device
Technical Field
The invention relates to the technical field of corn planting.
Background
The traditional corn yield estimation method mainly relies on manual sampling and simple statistical analysis, and has the problems of low efficiency and low precision.
Disclosure of Invention
In order to solve the technical problems of low efficiency and low precision existing in the prior art, where the traditional corn yield estimation method mainly depends on manual sampling and simple statistical analysis, the technical scheme provided by the invention is as follows:
a method of estimating corn yield, the method comprising:
collecting preset corn image data;
collecting a preset corn yield estimation model;
collecting a preset optimization algorithm;
training the corn yield estimation model according to the corn image data and the optimization algorithm to respectively obtain a corn individual recognition model, a corn cross-section recognition model and a front corn kernel recognition model;
collecting a picture to be detected;
preprocessing the picture to be detected;
and obtaining the total grain number of the corn according to the corn individual recognition model, the corn cross-section recognition model, the front corn kernel recognition model and the preprocessed picture to be detected.
Further, a preferred embodiment is provided, wherein each corn is detected by the corn individual identification model, the column number of each corn is obtained by the corn cross-section identification model, and finally the number of single-column corn kernels obtained by the front corn kernel identification model is multiplied by the column number of that corn to obtain its grain number.
Further, a preferred embodiment is provided, wherein the brightness and the definition of the preset corn image data both conform to a preset threshold range.
Further, a preferred embodiment is provided wherein the corn individual identification model, the corn cross-section identification model and the front corn kernel identification model employ the Task-Aligned Assigner matching strategy.
Further, a preferred embodiment is provided wherein the corn individual identification model, the corn cross-section identification model and the front corn kernel identification model are implemented using the YOLOv8n model.
Further, a preferred embodiment is provided, wherein the front corn kernel recognition model is implemented by combining an OBB detection algorithm with the YOLOv8n model.
Further, a preferred embodiment is provided, wherein the optimization algorithm is VFL Loss.
A corn yield estimation device, the device comprising:
a module for collecting preset corn image data;
a module for collecting a preset corn yield estimation model;
a module for collecting a preset optimization algorithm;
a module for training the corn yield estimation model according to the corn image data and the optimization algorithm to respectively obtain a corn individual recognition model, a corn cross-section recognition model and a front corn kernel recognition model;
a module for collecting the picture to be detected;
a module for preprocessing the picture to be detected;
and a module for obtaining the total grain number of the corn according to the corn individual recognition model, the corn cross-section recognition model, the front corn kernel recognition model and the preprocessed picture to be detected.
A computer storage medium storing a computer program which, when read by a computer, performs the method.
A computer comprising a processor and a storage medium, wherein the computer performs the method when the processor reads a computer program stored in the storage medium.
Compared with the prior art, the technical scheme provided by the invention has the following advantages:
according to the corn yield estimation method provided by the invention, accurate detection and positioning of corn plants are realized by training the deep neural network by utilizing large-scale corn plant image data, and more comprehensive data support is provided, so that the precision and accuracy of corn yield estimation are improved. Compared with the traditional method, the deep learning-based target detection algorithm can better mine the characteristics of corn plants, realize automatic identification and counting of corn kernels and estimate the yield according to the kernels.
In the corn yield estimation method provided by the invention, compared with traditional target detection algorithms, the YOLOv8 algorithm is optimized in the backbone network and Neck parts, improving the performance and accuracy of the model. An Anchor-Free method is adopted, making the model simpler and more efficient and achieving better performance in target detection tasks.
According to the corn yield estimation method provided by the invention, the VFL Loss is adopted for model optimization, the Loss contribution of negative examples is reduced by adjusting the scaling factor of the Loss, more learning information is reserved, and the performance and efficiency of the target detection model are improved.
According to the corn yield estimation method provided by the invention, the input photo is segmented, so that the interference of other objects on the identification result is reduced. And the image is processed through an edge detection operator, edge points are connected into continuous line segments, and accuracy and stability of the recognition result are improved.
According to the corn yield estimation method provided by the invention, the collected data is screened in brightness and definition, so that the photos with poor quality are removed, and the reliability and accuracy of the data are improved. Meanwhile, the data are marked in detail, the position and the characteristics of each corn plant are accurately captured, and a reliable data base is provided for model training.
The corn yield estimation method provided by the invention can improve the precision and accuracy of corn yield estimation and provide more accurate decision support for agricultural production.
Compared with the traditional corn yield estimation method, the corn yield estimation method provided by the invention has the advantages that the characteristics of corn plants can be better mined by utilizing a deep learning technology and a target detection algorithm, the automatic identification and counting of corn kernels are realized, and the yield is estimated according to the number of kernels.
Compared with the current state of research, the corn yield estimation method provided by the invention adopts the YOLOv8 algorithm and VFL Loss for model optimization, improving the performance and efficiency of target detection. Meanwhile, through image segmentation processing and data screening and marking, the accuracy and stability of the recognition result are improved.
The corn yield estimation method provided by the invention can be applied to corn yield estimation work in agricultural production.
Drawings
FIG. 1 is a schematic flow chart of a corn yield estimation method;
FIG. 2 is a simplified vector representation;
FIG. 3 is a diagram showing the number of visible columns of corn versus total columns at 20 columns;
FIG. 4 is a diagram showing the number of visible columns of corn versus total columns at 18 columns;
FIG. 5 is a diagram showing the number of visible columns of corn versus total columns for 16 columns;
FIG. 6 is a diagram showing the number of visible columns of corn versus total columns at 14 columns;
FIG. 7 is a diagram showing the number of visible columns of corn versus the total number of columns at 12 columns;
FIGS. 8-10 are schematic illustrations of corn placement;
FIG. 11 is a graph showing the corn recognition result of FIG. 8;
FIG. 12 is a graph showing the corn recognition result of FIG. 9;
fig. 13 is a schematic diagram of the corn recognition result in fig. 10.
In the drawings, A represents a reference object having a standard diameter.
Detailed Description
In order to make the advantages and benefits of the technical solution provided by the present invention more apparent, the technical solution provided by the present invention will now be described in further detail with reference to the accompanying drawings, in which:
In one embodiment, the present embodiment provides a method for estimating corn yield, the method comprising:
collecting preset corn image data;
collecting a preset corn yield estimation model;
collecting a preset optimization algorithm;
training the corn yield estimation model according to the corn image data and the optimization algorithm to respectively obtain a corn individual recognition model, a corn cross-section recognition model and a front corn kernel recognition model;
collecting a picture to be detected;
preprocessing the picture to be detected;
and obtaining the total grain number of the corn according to the corn individual recognition model, the corn cross-section recognition model, the front corn kernel recognition model and the preprocessed picture to be detected.
In a second embodiment, the method for estimating corn yield according to the first embodiment is further defined, wherein each corn is read by the corn individual recognition model, the number of columns of each corn is obtained by the corn cross section recognition model, and the number of grains of each corn is obtained by multiplying the number of single-column corn grains obtained by the front corn grain recognition model by the number of columns of the corn.
In a third embodiment, the method for estimating corn yield according to the first embodiment is further defined, wherein the brightness and the definition of the preset corn image data both conform to a preset threshold range.
The fourth embodiment is a further limitation of the corn yield estimation method provided in the first embodiment, wherein the corn individual identification model, the corn cross-section identification model and the front corn kernel identification model adopt the Task-Aligned Assigner matching strategy.
The fifth embodiment is a further limitation of the corn yield estimation method provided in the first embodiment, wherein the corn individual identification model, the corn cross-section identification model and the front corn kernel identification model are implemented using the YOLOv8n model.
The sixth embodiment is a further limitation of the corn yield estimation method provided in the first embodiment, wherein the front corn kernel identification model is implemented by combining an OBB detection algorithm with the YOLOv8n model.
The seventh embodiment is a further limitation of the corn yield estimation method provided in the first embodiment, wherein the optimization algorithm is VFL Loss.
An eighth embodiment provides a corn yield estimation device, the device comprising:
a module for collecting preset corn image data;
A module for collecting a preset corn yield estimation model;
a module for collecting a preset optimization algorithm;
training the corn yield estimation model according to the corn image data and the optimization algorithm to respectively obtain a corn individual recognition model, a corn cross section recognition model and a front corn kernel recognition model;
A module for collecting the picture to be measured;
A module for preprocessing the picture to be detected;
And obtaining a module of the total grain number of the corn according to the corn individual recognition model, the corn cross section recognition model, the front corn grain recognition model and the preprocessed picture to be detected.
The ninth embodiment provides a computer storage medium storing a computer program, which when read by a computer performs the method provided in the first embodiment.
In a tenth embodiment, a computer is provided, including a processor and a storage medium, where the computer performs the method provided in the first embodiment when the processor reads a computer program stored in the storage medium.
An eleventh embodiment, described with reference to FIGS. 1-7, explains the technical solution provided above in further detail through a specific example, specifically:
In this embodiment, YOLOv8 is first used for training to detect and identify the corn and a reference object, so as to obtain the length and width information of the corn. Next, training is performed with the YOLOv8-obb model to detect and identify the corn grain number and the cross-section grain number (representing the number of columns). According to FIGS. 3-7, the total grain number is estimated from the relationship between the total number of columns, the number of columns visible on the front of the photograph, and the number of grains visible on the front. Finally, the yield of the corn is calculated using an estimated-yield formula. Through this series of steps, the method can efficiently and rapidly evaluate the yield of corn in farmland without threshing and counting, providing scientific decision support for agricultural production.
And screening the acquired data. When data screening is carried out, the collected photos are processed by using an OpenCV library, so that the reliability of a data source is ensured. First, screening is performed according to the brightness level of the photograph. The brightness value of each photo is calculated through the image processing function provided by the OpenCV library, and a threshold is set, so that the photo with insufficient brightness or overexposed brightness is excluded, the selected photo has proper brightness, and the accuracy of subsequent processing is ensured. Second, the sharpness of the photograph is evaluated and screened. The sharpness index of the photograph, such as the contrast of the image and the sharpness of the outline, is calculated using the image processing algorithm provided by the OpenCV library. Then, a definition threshold is set, and photos with definition lower than the threshold are excluded, so that the selected photos have enough definition and reliable data support can be provided. And the collected photos are screened in brightness and definition, so that the photos with poor quality are eliminated, and the reliability and accuracy of the data are improved. Such a data screening process may provide a more reliable basis for subsequent data analysis and processing.
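The brightness and definition screening described above can be sketched as follows. The patent names the OpenCV library; the metrics and thresholds here are illustrative assumptions (mean gray level for brightness, variance of a Laplacian response for sharpness), with plain NumPy standing in for the OpenCV calls:

```python
import numpy as np

def brightness_ok(gray, lo=40.0, hi=220.0):
    """Accept a grayscale image whose mean gray level falls inside the window."""
    return lo <= float(gray.mean()) <= hi

def sharpness_ok(gray, threshold=100.0):
    """Variance of a 4-neighbour Laplacian response; low variance means blur."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var()) >= threshold

def keep_photo(gray):
    """A photo enters the training set only if both checks pass."""
    return brightness_ok(gray) and sharpness_ok(gray)
```

A textured image passes both checks, while a uniformly gray (blurred-looking) image is rejected by the sharpness check even though its brightness is acceptable.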
Comparison of the training models for target detection. Compared with YOLOv5, YOLOv8 provides a brand-new SOTA model and, like YOLOv5, offers models of different sizes at the N/S/M/L/X scales based on scaling factors; it supports image classification, target detection, instance segmentation and pose estimation tasks. In the backbone network and Neck parts, the C3 structure of YOLOv5 is replaced with the C2f structure, which has richer gradient flow, and different channel numbers are adjusted for models of different scales, greatly improving model performance. The Head part is replaced with the currently mainstream decoupled-head structure, separating the classification and detection heads, and Anchor-Based is replaced by Anchor-Free; in loss calculation, the Task-Aligned Assigner positive-sample assignment strategy is adopted and Distribution Focal Loss is introduced. A method called Anchor-Free is used instead of the conventional Anchor-Based method. In YOLOv8, the Anchor-Free method discards predefined anchor points and instead locates the target by predicting its center point and the width and height of its bounding box. The advantage of this method is that a group of anchor points does not need to be defined in advance, making the model simpler and more flexible while adapting better to target objects of various scales and shapes. The basic idea of the Anchor-Free method is to predict, in each grid cell, the center-point coordinates of the target object and the width and height of the bounding box by regression, instead of predicting offsets relative to predefined anchor points. Compared with the traditional Anchor-Based method, this approach has better adaptability, can more accurately detect target objects of various scales and shapes, and reduces the complexity and computation of the model.
Therefore, YOLOv8 adopts the Anchor-Free method, making the model simpler and more efficient and achieving better performance in the target detection task.
Selection and optimization of the target detection algorithm. The matching strategy of the Task-Aligned Assigner in YOLOv8 can be summarized as selecting positive samples based on a weighted combination of the classification score and the IoU value. The formula is as follows:

t = s^α · u^β

where the classification score s represents the confidence of the model in the class of the object in the prior frame, the IoU value u represents the degree of overlap between the predicted frame and the real frame, and α and β are weight hyper-parameters whose values are obtained after repeated optimization, verification and sorting by matching degree; in this algorithm, α is taken as 1.0 and β as 0.6. By weighting and combining the two indexes, high-quality prior frames can be better selected as positive samples, improving the performance and accuracy of the target detection model. In general, the Task-Aligned Assigner dynamically selects high-quality positive samples based on the combination of classification score and IoU value, achieving task alignment and directing the network to focus on high-quality prior frames, thereby improving detection performance.
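As a minimal sketch, the weighted combination above (with the stated values α = 1.0 and β = 0.6) can be written directly; the function name and the candidate (score, IoU) values are illustrative:

```python
def task_alignment_score(cls_score: float, iou: float,
                         alpha: float = 1.0, beta: float = 0.6) -> float:
    """Alignment metric t = s**alpha * u**beta used to rank prior frames."""
    return (cls_score ** alpha) * (iou ** beta)

# Rank illustrative (classification score, IoU) candidates;
# the highest-t prior frames would be assigned as positive samples.
candidates = [(0.9, 0.5), (0.6, 0.8), (0.3, 0.9)]
ranked = sorted(candidates, key=lambda c: task_alignment_score(*c), reverse=True)
```

Note that with these weights a high classification score can outrank a higher IoU, which is the point of blending the two signals.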
In this embodiment, the YOLOv8n model is used for training. First, the parameter nc (number of classes) is modified accordingly, and the names entries corn and icon are used as the identification categories, thereby obtaining the corn individual identification model.
YOLOv8-obb is trained to perform rotated-target detection of corn kernels. The shape of corn kernels is irregular, so an OBB (Oriented Bounding Box) detection algorithm is used to orient the bounding box for rotated-target detection. OBB is a technique based on the SAT (Separating Axis Theorem) for determining the oriented bounding box of an object. Compared with axis-aligned rectangular collision detection alone, OBB is a more general algorithm model. A bounding box is a simple geometry that can completely enclose an object, and the separating axis theorem is a method for determining whether two geometries intersect. In this embodiment, only the rectangular case is considered, and a vector method is used to determine whether two objects collide. The simplified vector representation is shown in FIG. 2:
Let the unit vector on the X-axis be (1, 0). The dot product of the vector P and the X-axis unit vector is then given by the formula:

p · x = |p| · |x| · cos θ_px = |p| · cos θ_px

where p is the vector and x is the axis unit vector. The coordinates of point P are (p_x, p_y), with p_x the x-axis coordinate value of P and p_y its y-axis coordinate value. cos θ_px is the cosine of the angle between p and the x-axis; in FIG. 2, Q is the projection of p on the x-axis, so cos θ_px can also be written as cos θ_pQ. In this embodiment, the YOLOv8-obb model is used as the basic model for training. A rotated-target detection model of the corn kernels is thus obtained.
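The projection step of the SAT test can be sketched as follows: two convex boxes are disjoint exactly when some candidate axis yields non-overlapping projection intervals. Helper names are illustrative:

```python
from typing import List, Tuple

Vec = Tuple[float, float]

def project(p: Vec, axis: Vec) -> float:
    """Scalar projection p . axis = |p||axis|cos(theta); axis is a unit vector."""
    return p[0] * axis[0] + p[1] * axis[1]

def overlap_on_axis(a: List[Vec], b: List[Vec], axis: Vec) -> bool:
    """Project both corner sets onto the axis and compare the 1-D intervals."""
    pa = [project(c, axis) for c in a]
    pb = [project(c, axis) for c in b]
    return max(pa) >= min(pb) and max(pb) >= min(pa)
```

A full SAT collision test would repeat `overlap_on_axis` over each box's edge normals and report a collision only if every axis overlaps.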
YOLOv8 training is performed to perform corn column number target detection training. Modifying the parameters nc 1 # number of classes as identification types and the names 0:icon as identification types to obtain a model of the visible column number of the front photo.
YOLOv8 training is performed for corn cross-section column-number target detection. The corn is broken off at the middle, the cross section is taken as the identification target, and the grain number of the cross section is taken as the total column number of the corn; the YOLOv8n model is used for training. The parameter nc (number of classes) is modified and the names entry 0: column is used as the identification category, obtaining the corn cross-section column-number identification model.
YOLOv8 training is performed to perform corn column number target detection training. The corn is broken off from the middle part, the cross section is taken as an identification target, the grain number is taken as the total column number of the corn, and YOLOv n model is adopted for training. Thus obtaining the corn column number identification model.
Several algorithms are selected to optimize the model. VFL Loss was designed to address the severe imbalance between foreground and background in dense object detection training, with inspiration drawn from Focal Loss. Focal Loss adjusts the loss function by introducing a modulation factor, reducing the loss contribution of simple samples and increasing the attention paid to misclassified samples. However, Focal Loss treats positive and negative samples symmetrically. In contrast, VFL Loss employs an asymmetric weighting strategy that reduces only the loss contribution of negative samples, while retaining more learning information for rare positive samples. Specifically, VFL Loss sets the ground-truth class score of a foreground point to the IoU between the generated bounding box and the ground truth (the target score), and sets the scores of all classes to 0 for background points. In this way, VFL Loss achieves better performance and efficiency when handling unbalanced samples in dense target detection tasks. The formula is as follows:

VFL(p, q) = −q (q · log p + (1 − q) · log(1 − p)),  if q > 0
VFL(p, q) = −α · p^γ · log(1 − p),                  if q = 0

where p is the predicted score, q is the target IoU-aware score, α is a balancing factor and γ is the scaling factor of the loss.
According to the formula, VFL Loss mainly reduces the loss contribution of negative examples (q = 0) by adjusting the scaling factor γ of the loss, while the weight of positive examples (q > 0) is reduced relatively little.
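A NumPy sketch of the asymmetric weighting just described, following the published Varifocal Loss form; the α and γ defaults are illustrative assumptions:

```python
import numpy as np

def varifocal_loss(p: np.ndarray, q: np.ndarray,
                   alpha: float = 0.75, gamma: float = 2.0) -> np.ndarray:
    """Positives (q > 0) use a q-weighted BCE; negatives (q == 0) are
    down-weighted by alpha * p**gamma, so easy background contributes little."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    pos_term = -q * (q * np.log(p) + (1.0 - q) * np.log(1.0 - p))
    neg_term = -alpha * (p ** gamma) * np.log(1.0 - p)
    return np.where(q > 0, pos_term, neg_term)
```

A low-confidence negative (small p) contributes far less loss than plain cross-entropy would, which is the claimed efficiency gain on imbalanced samples.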
Before the input photo is identified, it is segmented so as to reduce the interference of other objects on the identification result. First, an edge detection operator is applied to each pixel in the image, and the operator output is evaluated against a preset standard to determine whether the pixel is an edge point. Next, some edge points may be removed or filled in to eliminate edge discontinuities and connect them into continuous line segments. The goal of edge detection is to find places in the image where the gray level or structure changes abruptly, marking the end of one region and the beginning of another. Such abrupt changes, known as edges, appear in the image as discontinuities in pixel gray values and can typically be detected by derivative operations on the image. The Sobel operator is used for the computation; it applies a weighted-average difference to the values of the current row or column. The horizontal and vertical gradient templates are respectively:

G_x = [−1 0 1; −2 0 2; −1 0 1],  G_y = [−1 −2 −1; 0 0 0; 1 2 1]
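The Sobel templates above can be applied with a small hand-rolled sliding-window correlation; this NumPy sketch (helper names are illustrative) computes the gradient magnitude on which edge points would then be thresholded:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def convolve2d_valid(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Sliding-window correlation (no kernel flip), 'valid' output size."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_magnitude(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude sqrt(Gx^2 + Gy^2) from the two Sobel responses."""
    gx = convolve2d_valid(img, SOBEL_X)
    gy = convolve2d_valid(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

On a vertical step edge, the horizontal template responds strongly while the vertical template stays at zero, matching the row/column difference intuition in the text.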
The total grain number of the corn is then calculated from the recognized front grain number and cross-section column number results. The following formula is used to calculate the total corn grain number:

z = (x1 / c1) × c2

where x1 represents the number of visible grains, c1 represents the number of visible columns, and c2 represents the total number of columns, thereby obtaining the total grain number z of a single corn cob.
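The grain-count step above can be sketched as a one-liner (the function name is illustrative): the average grains per visible column are scaled up to the total column count from the cross-section model.

```python
def total_grains(visible_grains: int, visible_columns: int,
                 total_columns: int) -> float:
    """z = (x1 / c1) * c2: grains per visible column, scaled to all columns."""
    return visible_grains / visible_columns * total_columns
```

For example, 150 visible grains over 5 visible columns on a 16-column cob gives an estimated 480 grains.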
And (5) estimating the yield. Corn yield estimation is performed using the following formula:
c = x × y × w × (1 − z) / 1000
Where x represents the effective spike number, y represents the grain number of a single cob (averaged when there are multiple cobs), w represents the weight of a single grain in grams (g), and z represents the moisture content. Using empirical data as input, the constants commonly used today are: effective spike number 4000, single-grain weight 0.36 g, and moisture content 0.25. c (yield per mu) calculated according to the formula is expressed in kilograms (kg).
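Plugging in the empirical constants gives a quick sanity check of the formula. The division by 1000 (grams to kilograms) and the (1 − z) moisture correction reflect our reading of the formula above:

```python
def estimate_yield_kg_per_mu(x=4000, y=480, w=0.36, z=0.25):
    """Per-mu corn yield in kilograms.

    x: effective spike (ear) number per mu (empirical default 4000)
    y: kernels per cob, averaged over sampled cobs
    w: single-kernel weight in grams (empirical default 0.36)
    z: moisture content (empirical default 0.25)
    Dividing by 1000 converts grams to kilograms; (1 - z) removes
    the moisture fraction -- both are our interpretation of the formula.
    """
    return x * y * w * (1 - z) / 1000
```

With y = 500 kernels per cob and the defaults, this works out to roughly 540 kg per mu.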
Embodiment twelve: the present embodiment is described with reference to FIGS. 3 to 13. This embodiment is a specific example provided according to embodiment eleven, demonstrating the benefits of the technical solution provided above, specifically:
Step one: data acquisition and preprocessing:
For training and optimizing the corn yield estimation algorithm, a large amount of corn image data is collected and strictly screened to ensure that only qualified corn data is included in the training set. These data comprise single-cob corn photographs and corn cross-section photographs, which together reflect the growth status and characteristics of the corn plants. Through this data collection and screening work, a more accurate and reliable model is established, providing more precise decision support for agricultural production. Meanwhile, the data set is continuously updated and extended to adapt to the corn growth conditions of different seasons and regions.
Step two: data marking:
Two different types of corn data are labeled in detail. Individual corn and cross-section photographs are marked using target detection, so as to accurately capture the location and characteristics of each corn plant. The front-view corn photographs are likewise marked with target detection to identify the form and state of the corn, whereas the cross-section kernels are marked with rotation (OBB) detection to ensure that the position and number of each kernel are accurately detected. This fine-grained marking effectively extracts the key information in the corn data and provides a reliable data basis for subsequent model training.
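A rotated-box label is defined by its center, size, and angle, and is usually stored as four corner points. The exact label format used by the patent is not specified, so the helper below is an illustrative sketch of the corner-point layout used by common OBB tooling:

```python
import math

def obb_corners(cx, cy, w, h, angle_deg):
    """Corner points of a rotated bounding box (clockwise from top-left).

    cx, cy: box center; w, h: box width and height; angle_deg: rotation.
    OBB label files typically store these 4 corners (often normalized);
    pixel units are used here for clarity.
    """
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)]:
        # rotate the axis-aligned offset, then translate by the center
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners
```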
Step three: dividing the data set and processing the data:
And after the collected data are screened for brightness and sharpness with the OpenCV library, the data set is divided into a training set, a validation set, and a test set, ensuring the effectiveness and generalization capability of model training. Meanwhile, for OBB training, the label files are modified to meet the requirements of rotation detection. The training set is used for training and optimizing model parameters, the validation set is used for hyper-parameter adjustment and model performance evaluation, and the test set is used for the evaluation of the final model and for generalization-capability testing. The data sets are strictly divided according to the chosen proportions and standards, and the consistency of data distribution and characteristics among the data sets is ensured.
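The screening and splitting step can be sketched as follows. The brightness range, the sharpness threshold, the variance-of-differences sharpness proxy, and the 70/20/10 split ratios are all assumptions, since the patent fixes none of them; a real pipeline would read images with OpenCV and might use the variance of the Laplacian instead:

```python
import random
import numpy as np

def screen_and_split(images, bright_range=(40, 220), sharp_min=50.0,
                     ratios=(0.7, 0.2, 0.1), seed=0):
    """Screen grayscale images by brightness and sharpness, then split them.

    images: list of (name, 2-D uint8 array) pairs
    ratios: train/val/test proportions (an assumed convention)
    Returns three lists of image names.
    """
    kept = []
    for name, img in images:
        brightness = img.mean()
        # crude sharpness proxy: variance of horizontal pixel differences
        sharpness = np.diff(img.astype(float), axis=1).var()
        if bright_range[0] <= brightness <= bright_range[1] and sharpness >= sharp_min:
            kept.append(name)
    rng = random.Random(seed)       # fixed seed for a reproducible split
    rng.shuffle(kept)
    n_train = int(len(kept) * ratios[0])
    n_val = int(len(kept) * ratios[1])
    return kept[:n_train], kept[n_train:n_train + n_val], kept[n_train + n_val:]
```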
Step four: training was performed using YOLOv:
Four dedicated model files are obtained by training on the large data set and are used to identify and analyze different corn features. The corn individual recognition model accurately identifies individual plants in a corn field, precisely locating and classifying them. The corn cross-section recognition model is dedicated to detecting the corn cross section and to detecting and counting its kernels. The front-kernel recognition model is dedicated to recognizing kernels in the front-view corn photo, providing farmers with an accurate assessment of corn yield.
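A training loop over the per-model data configurations might look as follows. The ultralytics YOLO class and the yolov8n.pt / yolov8n-obb.pt checkpoint names are assumptions based on the YOLOv8 tooling this patent appears to use, and the model_factory parameter exists only so the sketch can be exercised without the real package installed:

```python
def train_corn_models(data_cfgs, epochs=100, imgsz=640, model_factory=None):
    """Train one detector per data configuration.

    data_cfgs: dict mapping a model name (e.g. "individual",
               "cross_section", "front_kernel_obb") to a dataset YAML path.
    Names ending in "obb" get the rotated-box checkpoint (an assumption).
    Returns the trained model objects keyed by name.
    """
    if model_factory is None:
        # deferred import of the assumed ultralytics API,
        # so the sketch stays importable without the package
        from ultralytics import YOLO
        model_factory = YOLO
    trained = {}
    for name, cfg in data_cfgs.items():
        base = "yolov8n-obb.pt" if name.endswith("obb") else "yolov8n.pt"
        model = model_factory(base)
        model.train(data=cfg, epochs=epochs, imgsz=imgsz)
        trained[name] = model
    return trained
```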
Step five: test verification and model optimization:
In order to verify the recognition results of the model, attention must be paid to possible failure cases, such as the cross-section model misrecognizing white areas of the stalk as corn kernels, or the individual-corn recognition result after cutting differing greatly from the result before cutting. For these problems, a series of optimization measures are taken. First, the diversity of the data set is increased to include images of various lighting conditions, angles, and corn varieties, covering more possible scenarios. Second, the training data are re-marked to ensure the accuracy and consistency of the labels, reducing the possibility of misjudgment by the model. In addition, the parameters and architecture of the model can be adjusted to improve its sensitivity to details and features, thereby improving the accuracy and stability of the recognition result. Through these optimization measures, the performance of the model is continuously improved, various complex conditions are better handled, and the reliability and practicality of the recognition result are increased.
Step six: dividing and identifying the input picture:
FIGS. 8, 9, and 10 are input pictures; FIGS. 11, 12, and 13 are the recognition results obtained by identifying FIGS. 8, 9, and 10, respectively.
In FIGS. 8 to 13, the reference object A has a standard diameter, which is 4 cm in the present embodiment; in FIG. 11, "corn" labels a detected corn plant, and the number represents the confidence, i.e., the similarity.
Step seven: total grain number calculation and estimated yield calculation:
The recognized front-view kernel count and the column count from the cross section are substituted into the formula to calculate the total grain number, yielding more comprehensive information. This calculation helps assess the planting density of the corn field and the growth of the corn plants. By analyzing these data, farmers can better adjust planting strategies and optimize field management, thereby improving yield and quality.
The total grain number and the estimated yield are thus obtained.
The technical solution provided by the present invention has been described in further detail through several specific embodiments in order to highlight its advantages and benefits. However, the above specific embodiments are not intended to be limiting; any reasonable modification and improvement, combination of embodiments, equivalent substitution, etc. of the present invention based on its spirit and principle should be included in the scope of protection of the present invention.
In the description of the present invention, only the preferred embodiments of the present invention are described, and the scope of the claims of the present invention should not be limited thereby; furthermore, the descriptions of the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "N" means at least two, for example, two, three, etc., unless specifically defined otherwise. 
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or N executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present invention. Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or N wires, a portable computer cartridge (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM).
In addition, the computer readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If, as in another embodiment, they are implemented in hardware, any one or a combination of the following techniques well known in the art may be used: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.

Claims (10)

1. A method for estimating corn yield, the method comprising:
collecting preset corn image data;
collecting a preset corn yield estimation model;
Collecting a preset optimization algorithm;
Training the corn yield estimation model according to the corn image data and the optimization algorithm to respectively obtain a corn individual recognition model, a corn cross section recognition model and a front corn kernel recognition model;
collecting a picture to be detected;
Preprocessing the picture to be detected;
Obtaining the total grain number of the corn according to the corn individual identification model, the corn cross section identification model, the front corn grain identification model and the preprocessed picture to be detected;
estimating the corn yield according to the total grain number of the corn;
Wherein, in the corn individual recognition model, a positive sample is selected according to a weighted combination of the classification score and the IoU value, obtained by the formula:
t = s^α × u^β
wherein the classification score s represents the confidence of the model in the object class in the prior frame, the IoU value u represents the degree of overlap between the prediction frame and the real frame, and α and β are weight hyper-parameters, α being taken as 1.0 and β as 0.6;
The optimization algorithm is implemented by the formula:
VFL(p, q) = −q(q·log p + (1 − q)·log(1 − p)), when q > 0
VFL(p, q) = −α·p^γ·log(1 − p), when q = 0
wherein p is the predicted score, q is the target label, α is the weight that down-weights negative examples, and γ is the scaling factor adjusting the loss; when q = 0 the loss contribution of the negative example is reduced, and when q > 0 the weight of the positive example is reduced comparatively little;
The calculation formula of the total grain number of the corn is:
z = (x1 / c1) × c2
wherein x1 represents the visible grain number, c1 represents the visible column number, and c2 represents the total column number, so as to obtain the total grain number z of the single corn cob;
By the formula:
c = x × y × w × (1 − z) / 1000
the corn yield is estimated, wherein x represents the effective spike number, y represents the grain number of a single cob, w represents the single-grain weight in grams, and z represents the moisture content.
2. The corn yield estimation method according to claim 1, wherein each corn cob is read by the corn individual identification model, the column number of each cob is obtained by the corn cross-section identification model, and the grain number of each cob is obtained by multiplying the column number of the cob by the per-column grain number obtained by the front corn kernel identification model.
3. The method of claim 1, wherein the predetermined corn image data has a brightness and sharpness that meet a predetermined threshold range.
4. The method of corn yield estimation according to claim 1, wherein the individual corn identification model, the cross-section corn identification model, and the front corn kernel identification model employ a Task-Aligned Assigner matching strategy.
5. The method of corn yield estimation according to claim 1, wherein the individual corn identification model, the cross-section corn identification model, and the front corn kernel identification model are implemented using YOLOv8n models.
6. The corn yield estimation method of claim 1, wherein the front-side corn kernel identification model is implemented using an OBB detection algorithm in combination with a YOLOv8n model.
7. The corn yield estimation method of claim 1, wherein the optimization algorithm is VFL Loss.
8. Corn yield estimation device, characterized in that it comprises:
a module for collecting preset corn image data;
A module for collecting a preset corn yield estimation model;
a module for collecting a preset optimization algorithm;
training the corn yield estimation model according to the corn image data and the optimization algorithm to respectively obtain a corn individual recognition model, a corn cross section recognition model and a front corn kernel recognition model;
A module for collecting the picture to be measured;
A module for preprocessing the picture to be detected;
a module for obtaining the total grain number of the corn according to the corn individual recognition model, the corn cross section recognition model, the front corn grain recognition model and the preprocessed picture to be detected;
a module for estimating the corn yield according to the total grain number of the corn;
Wherein, in the corn individual recognition model, a positive sample is selected according to a weighted combination of the classification score and the IoU value, obtained by the formula:
t = s^α × u^β
wherein the classification score s represents the confidence of the model in the object class in the prior frame, the IoU value u represents the degree of overlap between the prediction frame and the real frame, and α and β are weight hyper-parameters, α being taken as 1.0 and β as 0.6;
The optimization algorithm is implemented by the formula:
VFL(p, q) = −q(q·log p + (1 − q)·log(1 − p)), when q > 0
VFL(p, q) = −α·p^γ·log(1 − p), when q = 0
wherein p is the predicted score, q is the target label, α is the weight that down-weights negative examples, and γ is the scaling factor adjusting the loss; when q = 0 the loss contribution of the negative example is reduced, and when q > 0 the weight of the positive example is reduced comparatively little;
The calculation formula of the total grain number of the corn is:
z = (x1 / c1) × c2
wherein x1 represents the visible grain number, c1 represents the visible column number, and c2 represents the total column number, so as to obtain the total grain number z of the single corn cob;
By the formula:
c = x × y × w × (1 − z) / 1000
the corn yield is estimated, wherein x represents the effective spike number, y represents the grain number of a single cob, w represents the single-grain weight in grams, and z represents the moisture content.
9. A computer storage medium storing a computer program, characterized in that, when the computer program is read by a computer, the computer performs the method according to any one of claims 1-7.
10. A computer comprising a processor and a storage medium, characterized in that, when the processor reads the computer program stored in the storage medium, the computer performs the method according to any one of claims 1-7.
CN202410605064.8A 2024-05-16 2024-05-16 Corn yield estimation method and device Active CN118196639B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410605064.8A CN118196639B (en) 2024-05-16 2024-05-16 Corn yield estimation method and device


Publications (2)

Publication Number Publication Date
CN118196639A true CN118196639A (en) 2024-06-14
CN118196639B CN118196639B (en) 2024-09-06

Family

ID=91399152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410605064.8A Active CN118196639B (en) 2024-05-16 2024-05-16 Corn yield estimation method and device

Country Status (1)

Country Link
CN (1) CN118196639B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112577956A (en) * 2020-12-04 2021-03-30 中国农业大学 Corn seed test system and method based on intelligent device photographing function
WO2023072018A1 (en) * 2021-10-26 2023-05-04 中国科学院空天信息创新研究院 Wheat yield observation method based on computer vision and deep learning techniques
CN116070789A (en) * 2023-03-17 2023-05-05 北京茗禾科技有限公司 Artificial intelligence-based single-yield prediction method for mature-period rice and wheat
CN116311228A (en) * 2023-01-28 2023-06-23 潍柴动力股份有限公司 Uncertainty sampling-based corn kernel identification method and system and electronic equipment
CN116579446A (en) * 2022-11-28 2023-08-11 中国科学院地理科学与资源研究所 Method for estimating high-precision wheat grain yield by using deep learning and phenotype characteristics
CN117893973A (en) * 2024-01-25 2024-04-16 中国农业科学院作物科学研究所 Method for monitoring corn seedling number, computer equipment and computer program product


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MOHSEN SHAHHOSSEINI ET AL.: "Corn Yield Prediction With Ensemble CNN-DNN", 《Sec. Technical Advances in Plant Science》, 2 August 2021 (2021-08-02) *

Also Published As

Publication number Publication date
CN118196639B (en) 2024-09-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant