CN112883915B - Automatic wheat head identification method and system based on transfer learning - Google Patents

Automatic wheat head identification method and system based on transfer learning

Info

Publication number
CN112883915B
CN112883915B (application CN202110299041.5A)
Authority
CN
China
Prior art keywords
wheat
images
data set
training
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110299041.5A
Other languages
Chinese (zh)
Other versions
CN112883915A (en)
Inventor
许鑫 (Xu Xin)
乔红波 (Qiao Hongbo)
马新明 (Ma Xinming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Agricultural University
Original Assignee
Henan Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Agricultural University filed Critical Henan Agricultural University
Priority to CN202110299041.5A priority Critical patent/CN112883915B/en
Publication of CN112883915A publication Critical patent/CN112883915A/en
Application granted granted Critical
Publication of CN112883915B publication Critical patent/CN112883915B/en


Classifications

    • G06V 20/188: Vegetation (scenes; scene-specific elements; terrestrial scenes)
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23213: Non-hierarchical clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06N 20/00: Machine learning
    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/12: Segmentation; edge-based segmentation
    • G06T 7/13: Edge detection
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06T 2207/20036: Morphological image processing
    • G06T 2207/20081: Training; learning
    • G06T 2207/30188: Vegetation; agriculture (Earth observation)
    • G06T 2207/30242: Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic wheat ear identification method and system based on transfer learning, relating to the technical field of transfer learning. The method comprises the following steps: collecting wheat ear images with and without ground standards; constructing a data set based on the wheat ear images; inputting the constructed data set into a YOLOv5 model to perform transfer learning and target recognition training on the model; and inputting the images to be tested into the trained YOLOv5 model to identify and count the wheat ears per unit area. With the method and system, the model precision after transfer learning is 91.10% and mAP@0.5 is 98.20%, and applying transfer learning to ear density identification gives R² = 0.83. On this basis, the K-means feature extraction algorithm is optimized with the MAIR contour detection algorithm to improve recognition accuracy, raising ear density identification to R² = 0.95, with an average processing time of 12.17 s per image and a model FPS of 111.

Description

Automatic wheat head identification method and system based on transfer learning
Technical Field
The invention relates to the technical field of transfer learning, and in particular to an automatic wheat ear identification method and system based on transfer learning.
Background
Wheat is the most widely distributed grain crop in the world and has the largest planting area, exceeding 200 million hectares; 35%-40% of the world's population takes wheat as a staple food. Counting the number of ears per unit area is the main method for determining wheat yield; however, this count is still obtained mainly by manual survey, which is time-consuming and laborious.
In recent years, with the deepening application of machine learning in agriculture, many studies have attempted to count ears automatically using techniques such as image analysis and deep learning. The image-acquisition platforms used include digital cameras, thermal imaging, ground vehicles, and unmanned aerial vehicles. However, because these studies mainly rely on local experimental data, the data volume is small, the data diversity is insufficient, and the genotypes, environments, and conditions used to train and test the models are limited; developing a universal model for counting ears from images therefore remains a major problem.
Transfer learning can avoid a large amount of data labelling work, improve machine learning performance, and enhance the generalization capability of deep neural networks. It has been widely applied in fields such as farmland information extraction, land use classification, crop pest identification and classification, crop disease monitoring and identification, weed identification, and crop lodging area extraction, but it has not yet been applied to wheat ear identification and counting.
Disclosure of Invention
In order to solve the above problems, the invention provides an automatic wheat ear identification method and system based on transfer learning. The system realizes automatic wheat ear extraction based on a K-means image segmentation technique, and a wheat ear identification model is constructed with transfer learning to identify and count wheat ears automatically. The method aims to verify the feasibility of transfer learning for accelerating wheat ear recognition, provides a low-cost universal wheat ear recognition method that extends the application scenarios of the smartphone, and offers a time-saving, labor-saving solution for estimating ear density per unit area that is easy to operate and implement; it can also serve as an application reference for the smartphone as a crop phenotype extraction tool.
An automatic wheat head identification method based on transfer learning comprises the following steps:
s1: collecting wheat ear images with and without ground standards;
s2: constructing a dataset based on the ear image in step S1;
s3: inputting the data set constructed in the step S2 into a Yolov5 model to perform migration learning and target recognition training on the model;
s4: and inputting the to-be-tested image to a young 5 model after training to realize the identification and counting of the wheat ears in a unit area.
Further, the data set in step S2 includes a first data set, a second data set, a third data set, and a fourth data set;
the first data set comprises 15,000 segmented wheat ear images without ground standard, of uniform size, with the number of wheat ears in each image labelled manually; 12,000 images are used as the training set and 3,000 as the validation set, for comprehensive feature extraction and learning by the YOLOv5 model;
the second data set comprises 16,800 segmented wheat ear images without ground standard, of uniform size, with the number of wheat ears in each image labelled manually; 3,200 images each containing 0 wheat ears, 1 wheat ear, 2 wheat ears, and 3 wheat ears are selected as the training set, and the remaining images are used as the validation set, for feature-transfer reinforcement training of the YOLOv5 model;
the third data set comprises 112 genotypes of wheat ear images with ground standards, photographed in different regions at different times; 3 images are randomly selected per genotype, 336 in total, a region is randomly cropped from the center of each image, and contour feature extraction and segmentation are then performed; 10 sub-images of each segmented image are randomly selected, 3,360 in total, as the training set, and 5 sub-images of each segmented image, 1,680 in total, as the validation set, for small-data-set transfer learning training of the YOLOv5 model and verification of the transfer learning effect;
the fourth data set comprises 2 images randomly selected from each of 104 genotypes of wheat ear images without ground standard, 208 in total; a region is randomly cropped from the center of each image, and contour feature extraction and segmentation are then performed; 10 sub-images of each segmented image are randomly selected, 2,080 in total, as the training set, and 5 sub-images of each segmented image, 1,040 in total, as the validation set, for verifying the recognition effect of the YOLOv5 model.
Further, in step S2 the algorithm for segmenting the images and extracting the contour features is the K-means clustering algorithm, and the kernel of the morphological closing operation used for contour feature extraction is 7 × 7.
Furthermore, when the K-means clustering algorithm is adopted for contour feature extraction, a wheat ear contour detection algorithm is also added. The minimum area intersection ratio (MAIR) of any two contours in the contour detection algorithm is calculated as follows:
MAIR(i, j) = S_(i∩j) / min(S_i, S_j)

wherein S_i and S_j are the areas of the current contours i and j, S_(i∩j) is the area of their intersection, and T is a constant MAIR threshold; only contours whose MAIR is smaller than T are kept.
Furthermore, the method also comprises performing noise reduction and enhancement on the collected original wheat ear images before the image contour feature extraction.
Furthermore, the collection equipment for collecting the wheat ear images is a mobile phone.
The invention also provides an automatic wheat ear identification system based on transfer learning, which comprises:
the image acquisition module is used for acquiring wheat ear images with and without ground standards;
the image preprocessing module is used for constructing a data set based on the wheat ear image;
the training module is used for inputting the constructed data set into the YOLOv5 model to perform transfer learning and target recognition training on the model;
and the result output module is used for inputting the images to be tested into the trained YOLOv5 model to identify and count the wheat ears per unit area.
The invention has the beneficial effects that:
by adopting the method and the system, the model precision after transfer learning is 91.10 percent, mAP@0.5 is 98.20 percent, and the spike density after transfer learning is applied to identify R 2 =0.83, on the basis of which a K-means feature extraction algorithm is optimized, a MAIR feature detection algorithm is provided to improve the recognition accuracy, and the optimized wheat head density is recognized as R 2 =0.95, the average processing time of a single image is 12.17s, and the FPS of the model reaches 111. The wheat head identification and counting are realized based on the smart phone and the transfer learning technology, so that the cost of wheat head counting is reduced, the identification precision and efficiency of wheat heads are improved, and the requirement of wheat head counting in unit area can be met.
In addition to the objects, features and advantages described above, the present invention has other objects, features and advantages. The present invention will be described in further detail with reference to the drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is a flow chart of a wheat ear automatic identification method based on transfer learning according to an embodiment of the invention;
FIG. 2 shows the identification results of the YOLOv5x model for wheat ear densities of different genotypes without transfer learning, according to an embodiment of the invention;
FIG. 3 shows the identification results of the YOLOv5x model for wheat ear densities of different genotypes after transfer learning on the third data set;
FIG. 4 shows the identification results of the YOLOv5x model for wheat ear densities of different genotypes after transfer learning on the second data set + the first data set followed by transfer learning on the third data set;
FIG. 5 shows the identification results of the YOLOv5x model for wheat ear densities of different genotypes after transfer learning on the third data set, based on the training result of 10,000 epochs on the second data set + the first data set;
FIG. 6 shows the effect of the MAIR algorithm on the recognition results in the method according to an embodiment of the invention, where a is the recognition result without the MAIR algorithm and b is the recognition result with the MAIR algorithm;
FIG. 7 is a photograph of wheat ear images in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
For a clearer description of the embodiments, in the examples of the present application the first data set is denoted Ear-random, the second data set Ear-classification, the third data set Ear-broadcasting, and the fourth data set Ear-broadcasting-nogs.
Example 1
Referring to FIG. 1, the automatic wheat ear identification method based on transfer learning includes:
s1: collecting wheat ear images with and without ground standards;
as shown in FIG. 7, the test field was a farm (34.20 ℃N, 113.94 ℃E) at Henan agricultural university, chuchang, china. Many days are typical temperate and monsoon climates. Each cell is a breeding genotype material, the area is 1 x 3m, and the sowing row spacing is 20cm.
The image acquisition equipment was a HUAWEI nova 3i smartphone held at a height of about 1.5 m. The captured image size was 4608 × 3456 pixels at a focal length of 4 mm. A 10 cm black plate was placed at the edge of the scene as a ground standard during shooting, and automatic identification and area calculation of the ground standard were realized with an image mask technique. The spatial resolution of the images was 0.21-0.24 mm, and based on the correspondence between image pixels and actual area, a 0.5 m × 0.5 m region was cropped from the center of each image as the wheat ear identification region, reducing the influence of image distortion on the identification results.
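As an illustration of this calibration step, the following sketch (OpenCV-based; the fixed threshold and largest-dark-blob heuristic are assumptions for illustration, not details taken from the patent) estimates the mm-per-pixel resolution from the 10 cm plate and crops the 0.5 m × 0.5 m identification region:

```python
import cv2
import numpy as np

def ground_resolution_mm(image_bgr):
    """Estimate mm per pixel from the 10 cm x 10 cm black ground-standard plate."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Dark plate against the canopy: a fixed threshold is an illustrative assumption.
    _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    plate = max(contours, key=cv2.contourArea)  # assume the plate is the largest dark blob
    plate_area_px = cv2.contourArea(plate)
    # A 100 mm x 100 mm plate covering plate_area_px pixels gives mm per pixel:
    return 100.0 / np.sqrt(plate_area_px)

def center_crop_half_metre(image_bgr, mm_per_px):
    """Crop a 0.5 m x 0.5 m region from the image centre."""
    side_px = int(500.0 / mm_per_px)
    h, w = image_bgr.shape[:2]
    y0, x0 = (h - side_px) // 2, (w - side_px) // 2
    return image_bgr[y0:y0 + side_px, x0:x0 + side_px]
```

At the stated resolution of 0.21-0.24 mm per pixel, a 0.5 m side corresponds to roughly 2,000-2,400 pixels, consistent with the crop sizes used for the data sets below.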
76 genotypes (FIG. 7, GS1, GS2) were photographed on 16 May 2020, with 6 images per plot, 456 images in total. 36 genotypes with ground standard (FIG. 7, GS3) were photographed on 21 May 2020, the ground standard again being a 10 cm × 10 cm black plate, with 6 images per plot, 216 images in total; 104 genotypes without ground standard (FIG. 7, RS) were photographed with 5 images per plot, 520 images in total.
The images were subjected to noise reduction and enhancement after shooting and cropping.
S2: constructing a data set based on the wheat ear images collected in step S1;
the data sets include a first data set (Ear-random), a second data set (Ear-classification), a third data set (Ear-broadcasting), and a fourth data set (Ear-broadcasting-logs);
the first data set comprises 15,000 segmented wheat ear images without ground standard, of uniform size 128 × 128, with the number of wheat ears in each image labelled manually; 12,000 images are used as the training set and 3,000 as the validation set, for comprehensive feature extraction and learning by the YOLOv5 model;
the second data set comprises 16,800 segmented wheat ear images without ground standard, of uniform size 128 × 128, with the number of wheat ears in each image labelled manually; 3,200 images each containing 0 wheat ears, 1 wheat ear, 2 wheat ears, and 3 wheat ears are selected as the training set, and the remaining images are used as the validation set, for feature-transfer reinforcement training of the YOLOv5 model;
the third data set comprises 112 genotypes of wheat ear images with ground standards from GS1, GS2, and GS3; 3 images are randomly selected per genotype, 336 in total, a region is randomly cropped from the center of each image, and contour feature extraction and segmentation are then performed; 10 sub-images of each segmented image are randomly selected, 3,360 in total, as the training set, and 5 sub-images of each segmented image, 1,680 in total, as the validation set, for small-data-set transfer learning training of the YOLOv5 model and verification of the transfer learning effect; the images are uniformly scaled to 128 × 128 before training;
the fourth data set comprises 2 images randomly selected from each of the 104 RS genotypes of wheat ear images without ground standard, 208 in total; a region is randomly cropped from the center of each image, and contour feature extraction and segmentation are then performed; 10 sub-images of each segmented image are randomly selected, 2,080 in total, as the training set, and 5 sub-images of each segmented image, 1,040 in total, as the validation set, for verifying the recognition effect of the YOLOv5 model; the images are uniformly scaled to 128 × 128 before training (a sketch of this sub-image split follows below).
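A minimal sketch of the sub-image split described above, assuming the contour-extraction step has already produced at least 15 segmented sub-image files per source image (the file layout and helper names are illustrative, not from the patent):

```python
import random
from pathlib import Path
import cv2

def split_subimages(subs_by_image, out_dir, n_train=10, n_val=5, size=128):
    """subs_by_image maps each source image name to the list of sub-image files
    produced by the contour-extraction segmentation step; per source image,
    10 sub-images go to training and 5 to validation, scaled to 128 x 128."""
    out = Path(out_dir)
    for split in ("train", "val"):
        (out / split).mkdir(parents=True, exist_ok=True)
    for name, sub_paths in subs_by_image.items():
        picks = random.sample(sub_paths, n_train + n_val)
        for i, p in enumerate(picks):
            img = cv2.resize(cv2.imread(str(p)), (size, size))  # uniform scaling
            split = "train" if i < n_train else "val"
            cv2.imwrite(str(out / split / f"{name}_{i}.png"), img)
```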
Wheat ear feature extraction is performed with the K-means clustering algorithm, and contour feature extraction with a morphological opening-and-closing algorithm. To improve the accuracy of feature extraction, a wheat ear contour detection algorithm is added to the K-means clustering algorithm during contour extraction and small-image target identification, so as to extract complete wheat ear targets and prevent repeated identification of the same ear (a code sketch of this pipeline follows the formula). The minimum area intersection ratio (MAIR) of any two contours in the contour detection algorithm is given by:
MAIR(i, j) = S_(i∩j) / min(S_i, S_j)

wherein S_i and S_j are the areas of the current contours i and j, S_(i∩j) is the area of their intersection, and T is a constant MAIR threshold; only contours whose MAIR is smaller than T are kept.
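A minimal sketch of this segmentation-plus-MAIR pipeline, assuming OpenCV; the per-pixel colour feature space for K-means and the brighter-cluster-is-ears heuristic are illustrative readings of the description above, not details fixed by the patent:

```python
import cv2
import numpy as np

def kmeans_ear_mask(image_bgr, k=2):
    """Segment ears from background by K-means on per-pixel colour,
    then clean the mask with the 7 x 7 morphological closing from the patent."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    ear_cluster = int(np.argmax(centers.sum(axis=1)))  # assume the brighter cluster is ears
    mask = (labels.reshape(image_bgr.shape[:2]) == ear_cluster).astype(np.uint8) * 255
    kernel = np.ones((7, 7), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

def mair(c_i, c_j, shape):
    """Minimum area intersection ratio of two contours: S(i∩j) / min(S_i, S_j)."""
    m_i = np.zeros(shape, np.uint8)
    m_j = np.zeros(shape, np.uint8)
    cv2.drawContours(m_i, [c_i], -1, 255, -1)
    cv2.drawContours(m_j, [c_j], -1, 255, -1)
    inter = np.count_nonzero(cv2.bitwise_and(m_i, m_j))
    return inter / max(1.0, min(cv2.contourArea(c_i), cv2.contourArea(c_j)))

def filter_contours(contours, shape, t=0.75):
    """Keep a contour only if its MAIR with every already-kept contour is below T."""
    kept = []
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        if all(mair(c, k, shape) < t for k in kept):
            kept.append(c)
    return kept
```

With the thresholds quoted below, T would be set to 0.75 during ear contour extraction and 0.65 during ear identification and detection.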
Verification of the influence of the MAIR detection algorithm on recognition results
After transfer learning on the Ear-classification + Ear-random data sets, the MAIR contour detection algorithm was added to the model for transfer learning on Ear-broadcasting. The threshold T = 0.75 was used during wheat ear contour extraction and T = 0.65 during ear identification and detection, and the ear density was then estimated. With the MAIR contour detection algorithm added, the detected ear density agreed with the manual counting results with R² = 0.87, RMSE = 0.05 ears/m², and bias = -13.50 ears/m²; compared with the model without the MAIR algorithm, the recognition accuracy is clearly improved. The recognition results before and after adding the MAIR algorithm are shown in FIG. 6. MAIR detection effectively reduces repeated extraction of wheat ear contour features and thus repeated identification of ears, especially where ears adhere to each other, such as the gray oval area in FIG. 6 (the ear contours are marked in red; because color images cannot be used in the patent, this is not clearly visible).
The kernel of the morphological closing operation for contour feature extraction is 7 × 7.
S3: inputting the data set constructed in step S2 into a YOLOv5 model to perform transfer learning and target recognition training on the model;
the algorithm verification index of the deep migration learning model is precision (P), recovery (R), and meanwhile, a typical index mAP for target identification is analyzed. 3 images are randomly selected from 112 genotypes with ground standards of GS1, GS2 and GS3, and a region with the size of 0.5m by 0.5m is selected from the shooting center position to serve as a comparison of algorithm identification results. 3 images were randomly selected from 104 genotypes without ground standard in the RS for evaluation of the ability of the algorithm to migrate over a wider range of genotypes after the transfer learning. R is adopted for verifying estimation accuracy of wheat head density in unit area 2 RMSE and biasmAP@0.5 and mAP@0.5:0.95,mean Average Precision (IoU =0.5), represent average maps over different IoU thresholds (from 0.5 to 0.95, step sizes 0.05) (0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95).
Training and validation of the YOLOv5x model were performed in PyCharm on Ubuntu 18.04.4, with an Intel(R) Xeon(R) CPU @ 2.20 GHz (40 cores), 64 GB of memory, and 2 NVIDIA Quadro P5000 GPUs. Training used a single GPU, an input picture size of 128 × 128, a batch size of 16, and epochs = 10,000. The training results are shown in Table 1: the accuracy of the YOLOv5x model was tested on the first data set, the second data set, and combinations of the first and second data sets at three different data volumes; the training effect on the second data set was best, with an accuracy of 91.70%, while training on the first + second data sets took the longest, 18 to 38.21 d.
TABLE 1. Image recognition training and verification accuracy at different orders of magnitude
[Table 1 is rendered as an image in the original publication; its values are not recoverable here.]
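For reference, a single training run matching the configuration above could be launched as follows with the public Ultralytics YOLOv5 train.py entry point (the dataset YAML name is an assumption, and the repository is assumed to be cloned locally):

```python
import subprocess

# Single-GPU YOLOv5x training at 128 x 128 input, batch 16, 10,000 epochs,
# using the Ultralytics YOLOv5 repository's train.py
# (the dataset YAML name is illustrative).
subprocess.run([
    "python", "train.py",
    "--img", "128", "--batch", "16", "--epochs", "10000",
    "--data", "ear_random.yaml", "--weights", "yolov5x.pt", "--device", "0",
], check=True)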
Influence of transfer learning on model accuracy and efficiency
As can be seen from Table 1, increasing the amount of data significantly increases the training time, so a pre-training transfer learning technique was adopted to improve the model training process. Based on YOLOv5x, 1,000 iterations were performed on the first data set; on this basis, 1,000 iterations of transfer training were performed on the first + second data sets, and finally transfer learning was performed on the third data set. The training and test results are shown in Table 2: the accuracy of the YOLOv5x model improved from 90.80% to 91.10%, and Recall, mAP@0.5, and mAP@0.5:0.95 improved to varying degrees. The transfer learning technique thus greatly reduces training time while improving training accuracy.
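A sketch of that three-stage schedule, chaining each stage's best weights into the next run of the same train.py entry point (the YAML names and run names are assumptions):

```python
import subprocess

# Stage 1: 1,000 epochs on the first data set (Ear-random), from COCO-pretrained YOLOv5x.
# Stage 2: 1,000 epochs on the combined first + second data sets, warm-started from stage 1.
# Stage 3: transfer to the small third data set (Ear-broadcasting), warm-started from stage 2.
stages = [
    ("ear_random.yaml", "yolov5x.pt", "stage1"),
    ("ear_random_plus_classification.yaml", "runs/train/stage1/weights/best.pt", "stage2"),
    ("ear_broadcasting.yaml", "runs/train/stage2/weights/best.pt", "stage3"),
]
for data_yaml, weights, name in stages:
    subprocess.run([
        "python", "train.py", "--img", "128", "--batch", "16", "--epochs", "1000",
        "--data", data_yaml, "--weights", weights, "--name", name,
    ], check=True)
```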
Training tests were also carried out on YOLOv5s, with the batch size increased to 64. The results show that the training effect of YOLOv5s on the first data set is higher than that of YOLOv5x, while its transfer learning effect is lower than that of YOLOv5x; the speed of YOLOv5s is clearly higher than that of YOLOv5x, reaching 556 FPS on the third data set.
TABLE 2. Model training and verification accuracy with transfer learning
[Table 2 is rendered as an image in the original publication; its values are not recoverable here.]
S4: inputting the images to be tested into the trained YOLOv5 model to identify and count the wheat ears per unit area.
From the 112 genotypes, images with ground standards were randomly selected, 3 per genotype, 336 in total, to test the performance of the transfer learning model. Based on YOLOv5x, the wheat ears were automatically identified and counted with the model trained for 1,000 iterations on the first data set; the result, shown in FIG. 2, is R² = 0.68. After transfer learning based on the third data set, density estimation gives R² = 0.84, as shown in FIG. 3. To compare model effects at different learning depths, after 1,000 iterations of Ear-random training, 1,000 iterations of transfer learning were first performed on the Ear-classification + Ear-random data sets and then transfer learning on the Ear-broadcasting data set; the estimated ear density, shown in FIG. 4, gives R² = 0.83. Meanwhile, after transfer learning on the Ear-broadcasting data set based on the 10,000-epoch training result on Ear-classification + Ear-random, shown in FIG. 5, R² = 0.83. Transfer learning clearly improves the recognition accuracy of the model, and as the transfer learning deepens, the data become evenly distributed on both sides of the 1:1 line.
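A minimal inference sketch for this counting step, loading transfer-learned weights through the public YOLOv5 torch.hub interface (the weight path is an assumption) and converting the box count in the 0.5 m × 0.5 m region into ears per square metre:

```python
import torch

# Load the transfer-learned detector (the weight path is illustrative).
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/stage3/weights/best.pt")

def ear_density(image_path, region_area_m2=0.25):
    """Count detected ears in the 0.5 m x 0.5 m centre region -> ears per m^2."""
    results = model(image_path)
    n_ears = len(results.xyxy[0])  # one row per detected ear box
    return n_ears / region_area_m2
```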
In order to verify the effect of the method in practical application, and also to test its effectiveness on a wider range of genotypes, transfer learning models were constructed based on two models of different sizes, YOLOv5x and YOLOv5s. After transfer learning on the Ear-classification + Ear-random data sets, transfer learning was performed on Ear-broadcasting, training two models, Ear-broadcasting-final-5x and Ear-broadcasting-final-5s. Based on these two models, transfer learning was performed on the Ear-broadcasting-nogs data set, training Ear-broadcasting-nogs-final-5x and Ear-broadcasting-nogs-final-5s respectively; the epochs of all models were 1,000. Finally, the performance and efficiency of the models were evaluated on 312 images randomly selected from the RS, with the results shown in Table 3. To simulate different spatial resolutions, images of 2,000-2,400 pixels were randomly cropped from the original images. In the ear detection process, the processing speed of the model and that of the whole algorithm were calculated separately. After transfer learning, the R² of the 4 models stabilized between 0.94 and 0.95; the accuracy of the two models Ear-broadcasting-nogs-final-5s and Ear-broadcasting-nogs-final-5x decreased slightly, but the bias also decreased markedly. In addition, the model speed comparison shows that the two YOLOv5s-based transfer learning models, Ear-broadcasting-final-5s and Ear-broadcasting-nogs-final-5s, also show high precision, as shown in Table 3 (R² = 0.95), with higher efficiency: the average time per image is 12.17 s and the FPS reaches 111.
TABLE 3. Algorithm performance and efficiency evaluation
[Table 3 is rendered as an image in the original publication; its values are not recoverable here.]
Example 2
An automatic wheat head identification system based on transfer learning, comprising:
the image acquisition module is used for acquiring wheat ear images with and without ground standards;
the image preprocessing module is used for constructing a data set based on the wheat ear image;
the training module is used for inputting the constructed data set into the YOLOv5 model to perform transfer learning and target recognition training on the model;
and the result output module is used for inputting the images to be tested into the trained YOLOv5 model to identify and count the wheat ears per unit area.
The method and system are based on a model and algorithm developed for a mobile phone platform, but they can also be ported to an unmanned aerial vehicle platform. The method divides the wheat ear image into small images (128 × 128), which facilitates combination with big-data parallel processing algorithms, realizes real-time analysis of wheat ears, and provides a powerful algorithmic scheme for large-scale ear counting and yield prediction.
In addition, the method reduces the technical and economic cost of wheat ear identification and expands the application scenarios of the smartphone, allowing it to serve as a sensing device for agricultural production. The method enables rapid counting of wheat ears, provides a fast and simple ear density estimation scheme when combined with a smartphone, and can also help breeders extract ear phenotype information automatically in real time. Although wheat ear counting is taken as the example in the present application, the invention is not limited to wheat and can also be applied to other crops such as rice and millet and to other small-object recognition scenarios.
The foregoing describes preferred embodiments of the invention and is not intended to limit it; any modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within its scope.

Claims (2)

1. The automatic wheat head identification method based on transfer learning is characterized by comprising the following steps:
s1: collecting wheat ear images with and without ground standards;
s2: constructing a dataset based on the ear image in step S1;
s3: inputting the data set constructed in the step S2 into a Yolov5 model to perform migration learning and target recognition training on the model;
s4: inputting an attempted image to be tested into a young 5 model after training to realize the identification and counting of wheat ears in unit area;
the data sets in step S2 include a first data set, a second data set, a third data set, and a fourth data set;
the first data set comprises 15,000 segmented wheat ear images without ground standard, of uniform size, with the number of wheat ears in each image labelled manually; 12,000 images are used as the training set and 3,000 as the validation set, for comprehensive feature extraction and learning by the YOLOv5 model;
the second data set comprises 16,800 segmented wheat ear images without ground standard, of uniform size, with the number of wheat ears in each image labelled manually; 3,200 images each containing 0 wheat ears, 1 wheat ear, 2 wheat ears, and 3 wheat ears are selected as the training set, and the remaining images are used as the validation set, for feature-transfer reinforcement training of the YOLOv5 model;
the third data set comprises 112 genotypes of wheat ear images with ground standards, photographed in different regions at different times; 3 images are randomly selected per genotype, 336 in total, a region of 1800-2200 pixels is randomly cropped from the center of each image, and contour feature extraction and segmentation are then performed; 10 sub-images of each segmented image are randomly selected, 3,360 in total, as the training set, and 5 sub-images of each segmented image, 1,680 in total, as the validation set, for small-data-set transfer learning training of the YOLOv5 model and verification of the transfer learning effect;
the fourth data set comprises 2 images randomly selected from each of 104 genotypes of wheat ear images without ground standard, 208 in total; a region of 2000-2400 pixels is randomly cropped from the center of each image, and contour feature extraction and segmentation are then performed; 10 sub-images of each segmented image are randomly selected, 2,080 in total, as the training set, and 5 sub-images of each segmented image, 1,040 in total, as the validation set, for verifying the recognition effect of the YOLOv5 model;
the algorithm for segmenting the images and extracting the contour features in step S2 is the K-means clustering algorithm, and the kernel of the morphological closing operation for contour feature extraction is 7 × 7;
when the K-means clustering algorithm is adopted for contour feature extraction, a wheat ear contour detection algorithm is also added; the minimum area intersection ratio MAIR of any two contours in the contour detection algorithm is calculated according to the following formula:
MAIR(i, j) = S_(i∩j) / min(S_i, S_j)

wherein S_i and S_j are the areas of the current contours i and j, S_(i∩j) is the area of their intersection, and T is a constant MAIR threshold; the MAIR algorithm retains only contours whose MAIR is smaller than T;
the method also comprises the step of carrying out noise reduction and enhancement treatment on the collected original wheat ear image before the image contour feature extraction;
the collection equipment for collecting the wheat ear image is a mobile phone.
2. An automatic wheat head identification system based on transfer learning, for realizing the automatic wheat head identification method based on transfer learning as claimed in claim 1, comprising:
the image acquisition module is used for acquiring wheat ear images with and without ground standards;
the image preprocessing module is used for constructing a data set based on the wheat ear image;
the training module is used for inputting the constructed data set into the YOLOv5 model to perform transfer learning and target recognition training on the model;
and the result output module is used for inputting the images to be tested into the trained YOLOv5 model to identify and count the wheat ears per unit area.
CN202110299041.5A 2021-03-20 2021-03-20 Automatic wheat head identification method and system based on transfer learning Active CN112883915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110299041.5A CN112883915B (en) 2021-03-20 2021-03-20 Automatic wheat head identification method and system based on transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110299041.5A CN112883915B (en) 2021-03-20 2021-03-20 Automatic wheat head identification method and system based on transfer learning

Publications (2)

Publication Number Publication Date
CN112883915A CN112883915A (en) 2021-06-01
CN112883915B true CN112883915B (en) 2023-05-23

Family

ID=76041496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110299041.5A Active CN112883915B (en) 2021-03-20 2021-03-20 Automatic wheat head identification method and system based on transfer learning

Country Status (1)

Country Link
CN (1) CN112883915B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435282B (en) * 2021-06-18 2021-12-21 南京农业大学 Unmanned aerial vehicle image ear recognition method based on deep learning
CN114066022A (en) * 2021-10-26 2022-02-18 中国科学院空天信息创新研究院 Wheat yield per unit observation method based on computer vision and deep learning technology
CN116188489A (en) * 2023-02-01 2023-05-30 中国科学院植物研究所 Wheat head point cloud segmentation method and system based on deep learning and geometric correction
CN116703829B (en) * 2023-05-11 2024-08-30 内蒙古工业大学 Buckwheat husking parameter online detection method and system for buckwheat husking machine


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10789462B2 (en) * 2019-01-15 2020-09-29 International Business Machines Corporation Weakly and fully labeled mammogram classification and localization with a dual branch deep neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461052A (en) * 2020-04-13 2020-07-28 安徽大学 Migration learning-based method for identifying lodging regions of wheat in multiple growth periods
AU2020100705A4 (en) * 2020-05-05 2020-06-18 Chang, Jiaying Miss A helmet detection method with lightweight backbone based on yolov3 network
AU2020100953A4 (en) * 2020-06-05 2020-07-16 D, Vijayakumar DR Automated food freshness detection using feature deep learning
CN112069985A (en) * 2020-08-31 2020-12-11 华中农业大学 High-resolution field image rice ear detection and counting method based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wheat ear counting using K-means clustering segmentation and convolutional neural network; Xin Xu et al.; Plant Methods; 2020-08-06; pp. 1-13 *
Research on wheat ear detection in the field based on deep neural networks (基于深度神经网络的大田小麦麦穗检测方法研究); Gao Yunpeng; China Master's Theses Full-text Database; 2020-04-15; No. 4; pp. 1-36 *
Dog breed identification method based on transfer learning and model fusion (基于迁移学习与模型融合的犬种识别方法); Li Siyao et al.; Intelligent Computer and Applications (智能计算机与应用); 2019-11-01; No. 06 *

Also Published As

Publication number Publication date
CN112883915A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN112883915B (en) Automatic wheat head identification method and system based on transfer learning
Wang et al. A deep learning approach incorporating YOLO v5 and attention mechanisms for field real-time detection of the invasive weed Solanum rostratum Dunal seedlings
CN111178197B (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
CN109325431B (en) Method and device for detecting vegetation coverage in feeding path of grassland grazing sheep
US20210383149A1 (en) Method for identifying individuals of oplegnathus punctatus based on convolutional neural network
CN113298023B (en) Insect dynamic behavior identification method based on deep learning and image technology
CN111476119B (en) Insect behavior identification method and device based on space-time context
CN113822185A (en) Method for detecting daily behavior of group health pigs
CN111339912A (en) Method and system for recognizing cattle and sheep based on remote sensing image
CN115272828A (en) Intensive target detection model training method based on attention mechanism
CN110363218B (en) Noninvasive embryo assessment method and device
CN112465038A (en) Method and system for identifying disease and insect pest types of fruit trees
CN113435355A (en) Multi-target cow identity identification method and system
CN112861666A (en) Chicken flock counting method based on deep learning and application
Cai et al. A deep learning-based algorithm for crop Disease identification positioning using computer vision
Xu et al. An automatic wheat ear counting model based on the minimum area intersection ratio algorithm and transfer learning
CN114529840A (en) YOLOv 4-based method and system for identifying individual identities of flocks of sheep in sheepcote
Watcharabutsarakham et al. An approach for density monitoring of brown planthopper population in simulated paddy fields
CN108967246B (en) Shrimp larvae positioning method
Gong et al. An Improved Method for Extracting Inter-row Navigation Lines in Nighttime Maize Crops using YOLOv7-tiny
CN113449712B (en) Goat face identification method based on improved Alexnet network
Avanzato et al. Dairy cow behavior recognition using computer vision techniques and CNN networks
CN114758356A (en) Method and system for recognizing cow lip prints based on local invariant features
CN113378004A (en) FANet-based farmer working behavior identification method, device, equipment and medium
CN114140428A (en) Method and system for detecting and identifying larch caterpillars based on YOLOv5

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant