CN114266752A - Wheat spike number identification method, system and medium based on Faster R-CNN - Google Patents

Wheat spike number identification method, system and medium based on Faster R-CNN

Info

Publication number
CN114266752A
CN114266752A (application CN202111589730.6A)
Authority
CN
China
Prior art keywords
wheat
ears
ear
cnn
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111589730.6A
Other languages
Chinese (zh)
Inventor
Xiao Yonggui
Li Lei
Yang Mengjiao
Muhammad adir Hassen
Han Zhiguo
Xia Xianchun
He Zhonghu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Crop Sciences of Chinese Academy of Agricultural Sciences
Original Assignee
Institute of Crop Sciences of Chinese Academy of Agricultural Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Crop Sciences of Chinese Academy of Agricultural Sciences filed Critical Institute of Crop Sciences of Chinese Academy of Agricultural Sciences
Priority to CN202111589730.6A priority Critical patent/CN114266752A/en
Publication of CN114266752A publication Critical patent/CN114266752A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the field of intelligent biotechnology and relates to a wheat spike number identification method, system and medium based on Faster R-CNN, comprising the following steps: extracting canopy images of wheat ears from different families, and calibrating the wheat ears in the images to obtain the labels corresponding to each ear; inputting the wheat ear images and labels into a ResNet network model for feature extraction to obtain wheat ear feature maps, and establishing corresponding candidate boxes according to the wheat ear contours; training a Faster R-CNN model with the feature maps to obtain an optimal ear recognition model; inputting wheat ear images to be detected into the ear recognition model to obtain the candidate boxes corresponding to each ear; and counting the number of candidate boxes to obtain the number of wheat ears. The invention enables accurate quantitative trait locus (QTL) mapping in wheat, can accurately identify wheat ears even where they heavily overlap, and is expected to provide a high-throughput analysis tool for yield-related phenotyping in wheat molecular breeding.

Description

Wheat spike number identification method, system and medium based on Faster R-CNN
Technical Field
The invention relates to a wheat ear number identification method, system and medium based on Faster R-CNN, belongs to the technical field of intelligent biology, and in particular relates to the technical field of automatic identification of wheat ear number.
Background
Wheat is an important staple grain crop in China, and the morphological parameters of the wheat ear directly reflect the growth condition and yield information of the crop (Yang et al., 2013); they are important parameters for characterizing wheat quality and yield. At present, counting the spikelets and grains on the wheat ear relies mainly on manual counting, which is time-consuming, labor-intensive and inefficient. The existing means of addressing this problem is mainly to analyze the plant phenotype by optical imaging, which has obvious advantages over manual counting; for wheat, however, adjacent leaves overlap and ears overlap one another, producing extensive occlusion, so an accurate ear count is difficult to obtain by optical imaging alone.
Deep learning models, whose architecture resembles the interconnection and operation of human neurons, summarize high-level abstract rules from data features through feature learning on large amounts of data, and have developed rapidly in many fields. In particular, deep learning methods such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been successful in image classification, object recognition, sequence feature extraction and related fields. Some researchers have applied neural network models to crop feature recognition: image processing and deep learning have been used to segment crops from the background and obtain a series of important phenotypic parameters such as wheat ear length and width, and the projected area and orientation of the glume-wrapped grains; in maize ear research, image processing has been used to extract important ear phenotypic parameters such as kernel row number, kernels per row and total kernel number; and in rice panicle research, the structural features and grain number of the panicle have been extracted with relatively high precision.
However, the wheat ear has a complex structure, plant spacing is small, ears overlap heavily, and some wheat varieties have awns, so accurate phenotyping of wheat ears is difficult.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a wheat ear number identification method, system and medium based on Faster R-CNN, which enables accurate QTL mapping in wheat, can accurately identify wheat ears with high overlap, and is expected to provide a high-throughput analysis tool for yield-related phenotyping in wheat molecular breeding.
In order to achieve this purpose, the invention provides the following technical scheme: a wheat ear number identification method based on Faster R-CNN, comprising the following steps: extracting canopy images of wheat ears from different families, and calibrating the wheat ears in the images to obtain the labels corresponding to each ear; inputting the wheat ear images and labels into a ResNet network model for feature extraction to obtain wheat ear feature maps, and establishing corresponding candidate boxes according to the wheat ear contours; training a Faster R-CNN model with the feature maps to obtain an optimal ear recognition model; inputting wheat ear images to be detected into the ear recognition model to obtain the candidate boxes corresponding to each ear; and counting the number of candidate boxes to obtain the number of wheat ears.
Furthermore, QTL mapping of wheat genetic traits is performed using the number of wheat ears per unit area.
Further, the QTL mapping method comprises the following steps: obtaining a doubled haploid population of wheat; genotyping the doubled haploid population of wheat with an SNP chip; importing the genotyping results into IciMapping software, deleting redundant markers and constructing a genetic linkage map; and obtaining QTL locations for wheat genetic traits by composite interval mapping based on the genetic linkage map.
Further, the genotyping method comprises: performing whole-genome amplification of the genomic DNA of the wheat to be tested to obtain an amplification product; digesting the amplification product with a random endonuclease to obtain DNA fragments; hybridizing the DNA fragments with the SNP chip so that they bind complementarily to the specific capture probes on the chip beads; washing away unhybridized or mismatched DNA fragments; labeling nucleotide substrates with dinitrophenol and biotin, performing single-base extension on the specific capture probes, and staining so that different nucleotide substrates carry different fluorescent dyes; and scanning the SNP chip, calling genotypes from the detected fluorescence signals and outputting the genotyping results.
Further, training the Faster R-CNN model comprises adjusting the proportions of the candidate boxes, the IOU threshold and the learning-rate strategy for model training.
Further, L2 regularization and the softmax classification and regression of the fully connected layer are used to judge whether a candidate box exceeds the threshold, i.e. whether it contains a complete wheat ear; the position coordinates of candidate boxes exceeding the threshold are obtained by softmax regression, and non-maximum suppression rejects candidate boxes that exceed the IOU threshold but are not the best prediction of a real wheat ear.
Further, the wheat ear images and labels are input into several sub-modules of the ResNet network model for feature extraction, and candidate boxes are generated from the extracted features; the generated candidate boxes, together with the original ear images and labels, undergo L2 regularization and softmax classification and regression, the intersection over union of the results is computed, and non-maximum suppression is applied to obtain a preliminary classification; the preliminary classification is then passed in turn through the ROI pooling layer, another sub-module of the ResNet network model, L2 regularization, a softmax layer, intersection-over-union calculation and non-maximum suppression, and the final image recognition result is output.
Further, the several sub-modules of the ResNet network model comprise four sub-modules, the first sub-module comprising a convolutional layer, a BN layer, a ReLU layer and a max-pooling layer; the second to fourth sub-modules each comprise a convolutional block and several identity blocks; the additional sub-module of the ResNet network model comprises a convolutional block, two identity blocks, a max-pooling layer and a flattening layer.
The invention also discloses a wheat ear number identification system based on Faster R-CNN, comprising: a calibration module for extracting canopy images of wheat ears from different families and calibrating the wheat ears in the images to obtain the labels corresponding to each ear; a feature map establishing module for inputting the wheat ear images and labels into a ResNet network model for feature extraction to obtain wheat ear feature maps and establishing corresponding candidate boxes according to the wheat ear contours; a model training module for training the model on the extracted feature maps to obtain an optimal ear recognition model; an ear recognition module for inputting wheat ear images to be detected into the ear recognition model to obtain the candidate boxes corresponding to each ear; and an ear number output module for counting the number of candidate boxes to obtain the number of wheat ears.
The present invention also discloses a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by a computing device, cause the computing device to perform any one of the above Faster R-CNN based wheat ear number identification methods.
Due to the adoption of the above technical scheme, the invention has the following advantages:
1. The invention establishes a phenotyping model for the number of wheat ears per unit area and, based on this model, locates QTL related to yield traits. Tests show that the established model can quickly and efficiently identify the number of ears per unit area of the wheat families under test and, combined with the genotype data of the wheat under test, can assist in screening high-yield families, laying a theoretical foundation for breeding wheat varieties with high, stable yield and excellent quality.
2. The invention organically combines molecular marker-assisted selection breeding with object detection in deep learning, establishing a method that assists breeders in selecting high-yield families. Using the model of the invention, ear-number-per-unit-area phenotype data for different families can be obtained with high throughput, the yield potential of the wheat under test can be identified quickly and accurately from the genotype of the test population, and genetic gain in wheat yield is accelerated.
Drawings
FIG. 1 is a schematic diagram of a wheat ear number identification method based on Faster R-CNN according to an embodiment of the present invention;
FIG. 2 is a visual image of a loss function of the ear recognition model training process in an embodiment of the present invention;
fig. 3 shows high-quality ear canopy RGB images of different families or varieties in an embodiment of the present invention: fig. 3(a) is an original ear canopy image, fig. 3(b) is a cropped ear canopy image, and fig. 3(c) is an annotated ear canopy image;
fig. 4 is a schematic diagram illustrating the output result of the number of wheat ears according to an embodiment of the present invention, fig. 4(a) is an original ear canopy image, and fig. 4(b) is a final ear canopy image;
FIG. 5 is a QTL mapping analysis chart of spike counts per unit area obtained by three methods, MSN, ISN and VSN, in accordance with an embodiment of the present invention.
Detailed Description
The present invention is described in detail below through specific embodiments so that those skilled in the art can better understand its technical content. It should be understood, however, that the detailed description is provided only for a better understanding of the invention and should not be taken as limiting it. In describing the present invention, the terminology used is for the purpose of description only and is not intended to indicate or imply relative importance.
Zhongmai 895, obtained from the Institute of Crop Sciences and the Cotton Research Institute of the Chinese Academy of Agricultural Sciences, is a semi-winter, multi-spike, medium-late maturing variety bred by hybridization with Zhongmai 16 as the female parent and litchi reclaimed No. 4 as the male parent. In variety comparison trials and field demonstrations in 2013-2015 it showed high yield, wide adaptability and heat tolerance during late grain filling. Yangmai 16 is the variety with the largest planting area in the middle and lower reaches of the Yangtze River wheat region, and is characterized by a fast grain-filling rate and high grain weight.
The invention relates to a method, system and medium for identifying the number of wheat ears based on Faster R-CNN. Taking the 101 families of a doubled haploid (DH) population created with Yangmai 16 and Zhongmai 895 as parents as the objects, the number of wheat ears is phenotyped and detected with the Faster R-CNN model in deep learning. The results show that the model can be used for identification or assisted identification of the number of ears per unit area of wheat and is expected to provide a high-throughput analysis tool for yield-related phenotyping in wheat molecular breeding. The solution of the invention is explained in detail below through three embodiments with reference to the accompanying drawings.
Example one
This embodiment provides a method for identifying the number of wheat ears based on Faster R-CNN which, as shown in fig. 1, includes:
s1, extracting canopy images of the ears of different families, and calibrating the ears of wheat in the images to obtain labels corresponding to the ears of wheat.
Referring to fig. 3, 1840 high-quality RGB images of the wheat ear canopy were collected, as shown in fig. 3(a). Of these, 1032 images come from a natural population of 166 accessions and are used as the training set of the model, and 808 images come from the DH population created from the parents Yangmai 16 and Zhongmai 895 and are used as the validation set of the model and for the subsequent QTL mapping. As shown in fig. 3(b), a square frame of 0.5 × 0.5 m is placed at random in the non-marginal area of the plot to be photographed, ensuring that the photographed area is uniform each time, and the pictures are then cropped; the RGB images of the wheat ear canopy are taken at the mid grain-filling stage of wheat. While the canopy images of the different families are being collected, the number of ears within the 0.5 × 0.5 m frame is also counted manually; each family or variety is counted twice and the average taken for later verification. This part of the phenotype results is denoted MSN (manual field phenotype statistics).
As shown in fig. 3(c), the wheat ears in each canopy image obtained are annotated with the LabelImg software, and the number of labels appearing in each image is counted; this part of the phenotype results is denoted ISN (counts based on LabelImg annotation data). Overlapping and occluded ears are also annotated during calibration to ensure the accuracy of model training.
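For illustration only, the following minimal sketch (not part of the original disclosure) shows how the per-image label counts that make up ISN could be derived from the Pascal VOC XML files that LabelImg saves; the annotation folder name and the class name "wheat_ear" are assumptions.

```python
# Sketch: count LabelImg (Pascal VOC XML) boxes per image to obtain ISN-style counts.
import glob
import os
import xml.etree.ElementTree as ET

def count_labelimg_boxes(xml_dir, class_name="wheat_ear"):
    """Return {image filename: number of annotated boxes of class_name}."""
    counts = {}
    for xml_path in glob.glob(os.path.join(xml_dir, "*.xml")):
        root = ET.parse(xml_path).getroot()
        filename = root.findtext("filename", default=os.path.basename(xml_path))
        counts[filename] = sum(
            1 for obj in root.iter("object") if obj.findtext("name") == class_name
        )
    return counts

if __name__ == "__main__":
    isn = count_labelimg_boxes("annotations")   # hypothetical folder of LabelImg XML files
    print(sum(isn.values()), "labels across", len(isn), "images")
```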
S2, inputting the wheat ear images and labels into a ResNet50 backbone feature extraction network for feature extraction to obtain wheat ear feature maps, and establishing corresponding candidate boxes according to the wheat ear contours.
In this embodiment, feature extraction is performed with a stride of 8; feature maps of higher dimensionality and richer texture are obtained through successive downsampling convolutions of the ResNet50 network.
Candidate boxes of different scales and aspect ratios are generated on the feature map by the RPN (region proposal network); each location is assigned four scales and three aspect ratios. Considering that wheat ears are dense and small, in this embodiment each pixel of the feature map is set to generate 12 candidate boxes. The aspect ratio of each candidate box may be 0.5, 1.0 or 2.0 and the scale 0.25, 0.5, 1.0 or 2.0, so that every ear can be covered by a candidate box. The IOU thresholds are set to 0.3 and 0.7 as the basis for discarding generated candidate boxes and adjusting their coordinates: a candidate box is discarded when its IOU is less than 0.3 and retained when its IOU is greater than 0.7, and the position of the retained box is adjusted by softmax regression so that it fits the actual wheat ear more closely.
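The anchor arithmetic described above can be sketched as follows; the three aspect ratios and four scales are taken from the text, while the base anchor size of 64 pixels is an assumption used only for illustration.

```python
# Sketch: one (width, height) anchor per ratio/scale combination -> 12 anchors per location.
import itertools
import math

RATIOS = (0.5, 1.0, 2.0)        # aspect ratio, taken here as height / width
SCALES = (0.25, 0.5, 1.0, 2.0)
BASE_SIZE = 64                  # assumed base side length in pixels

def anchor_shapes(base=BASE_SIZE, ratios=RATIOS, scales=SCALES):
    shapes = []
    for ratio, scale in itertools.product(ratios, scales):
        area = (base * scale) ** 2
        width = math.sqrt(area / ratio)
        height = width * ratio
        shapes.append((round(width, 1), round(height, 1)))
    return shapes

print(len(anchor_shapes()))     # 3 ratios x 4 scales = 12 anchors per feature-map pixel
```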
S3, training a Faster R-CNN model with the feature maps to obtain an optimal ear recognition model;
as shown in FIG. 1, the Faster R-CNN mainly comprises four partial feature extraction modules (fully connected layers), RPN (region pro-posal networks), ROI posing pooling layer and the last fully connected layer that can be used for classification and regression. Compared with other two-stage models, the Faster R-CNN utilizes the RPN structure, greatly reduces the time for extracting the candidate frame, and is easier to connect the extracted candidate frame and the following network structure into a whole, thereby being capable of more accurately and rapidly positioning and classifying the object to be detected. In this embodiment, a feature extraction module of a ResNet50 residual model is used, and weights trained by a VOC2007 data set are used as initial weights for ear recognition, so that the model can obtain weight combination with minimum loss function more quickly.
Training the Faster R-CNN model includes adjusting the proportions of the candidate boxes, the IOU threshold, and the learning-rate strategy for model training.
L2 regularization and the softmax classification and regression of the fully connected layer are used to judge whether a candidate box exceeds the threshold, i.e. whether it contains a complete wheat ear; the position coordinates of candidate boxes exceeding the threshold are obtained by softmax regression, with 0.3 used as the threshold for judging whether a box contains an ear, and non-maximum suppression rejects candidate boxes that exceed the IOU threshold (e.g. 0.7) but are not the best prediction of a real ear. Since the number of ears in a canopy image lies between 100 and 250, the number of boxes finally retained is set to 300.
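A minimal sketch of this proposal bookkeeping is given below; the 0.3/0.7 IOU thresholds and the cap of 300 retained boxes come from the text, while the NMS threshold of 0.7 and the (x1, y1, x2, y2) box format are assumptions.

```python
# Sketch: label proposals against ground truth by IOU and prune duplicates with NMS.
import torch
from torchvision.ops import box_iou, nms

def label_and_prune(proposals, scores, gt_boxes,
                    neg_iou=0.3, pos_iou=0.7, nms_iou=0.7, max_keep=300):
    iou = box_iou(proposals, gt_boxes)            # (num_proposals, num_gt) pairwise IOU
    best_iou, _ = iou.max(dim=1)
    labels = torch.full_like(best_iou, -1.0)      # -1: ignored
    labels[best_iou < neg_iou] = 0.0              # below 0.3: discarded as background
    labels[best_iou >= pos_iou] = 1.0             # above 0.7: kept as a wheat-ear proposal
    keep = nms(proposals, scores, nms_iou)[:max_keep]   # drop overlapping duplicates
    return labels, proposals[keep], scores[keep]
```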
Based on the Faster R-CNN model and the shape and density characteristics of ears in the field, some parameters of the Faster R-CNN are adjusted to suit ear recognition, and the model is trained using the images obtained from the natural population as the training set. The parameter modification comprises three parts: adjusting the proportions of the candidate boxes, adjusting the IOU (intersection-over-union) threshold, and setting the learning-rate strategy for model training:
at a learning rate of 3e-4As an initial value, after the number of iterations reaches 90000 times, the learning rate is reduced by one tenth, and the ear recognition model is trained with a momentum of 0.9 and an iteration number of 500000 times.
S4, inputting the wheat ear images to be detected into the ear recognition model to obtain the candidate boxes corresponding to each wheat ear;
S5, counting the number of candidate boxes to obtain the number of wheat ears; the output result is shown in fig. 4.
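For steps S4-S5, a hedged counting sketch is shown below; the 0.5 score threshold and the image-loading details are assumptions rather than values from the patent.

```python
# Sketch: run the trained detector on one canopy image and count retained boxes (VSN).
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor

@torch.no_grad()
def count_ears(model, image_path, score_thresh=0.5):
    model.eval()
    image = to_tensor(Image.open(image_path).convert("RGB"))
    pred = model([image])[0]                  # dict with 'boxes', 'labels', 'scores'
    keep = pred["scores"] >= score_thresh
    return int(keep.sum())                    # retained candidate boxes = ear count
```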
The concrete structure of the Faster R-CNN model is shown in fig. 1: the wheat ear images and labels are input into several sub-modules of the ResNet network model for feature extraction, and candidate boxes are generated from the extracted features; the generated candidate boxes, together with the original ear images and labels, undergo L2 regularization and softmax classification and regression, the intersection over union of the results is calculated, and non-maximum suppression is applied to obtain a preliminary classification; the preliminary classification is then passed in turn through the ROI pooling layer, another sub-module of the ResNet50 network model, an L2 regularization layer, a softmax layer, intersection-over-union calculation and non-maximum suppression, and the final image recognition result is output.
The ResNet network model comprises four sub-modules: the first sub-module comprises a convolutional layer, a BN layer, a ReLU layer and a max-pooling layer; the second sub-module comprises a convolutional block and two identity blocks; the third sub-module comprises a convolutional block and three identity blocks; and the fourth sub-module comprises a convolutional block and four identity blocks. The additional sub-module of the ResNet50 network model comprises a convolutional block, two identity blocks, a max-pooling layer and a flattening layer.
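For illustration, the first sub-module described above corresponds to the standard ResNet-50 stem; a minimal torch.nn sketch is given below, with the channel and kernel sizes taken from the standard ResNet-50 configuration as an assumption, not from the patent.

```python
# Sketch: the first sub-module (convolution, BN, ReLU, max pooling), standard ResNet-50 stem.
import torch.nn as nn

stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),  # convolutional layer
    nn.BatchNorm2d(64),                                                 # BN layer
    nn.ReLU(inplace=True),                                              # ReLU layer
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),                   # max-pooling layer
)
```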
For the ear recognition model, performance is evaluated mainly from two aspects: the loss function and generalization ability. The loss function of the ear recognition model in this embodiment is visualized in fig. 2; generalization ability is evaluated by accuracy, precision, recall and F1 score, and the results are shown in Table 1.
TABLE 1 evaluation index table for generalization ability of ear recognition model
(Table 1 is provided as an image in the original publication.)
The above model can quickly and accurately evaluate the number of ears per unit area. In this embodiment, 50 wheat ear canopy images were randomly selected for verification, and the average accuracy relative to the manual phenotype data MSN was 86.7%. For comparison, the average accuracies of the model output method (VSN) and the image annotation method (ISN) relative to the manual phenotype data MSN in this embodiment were 50% and 83%, respectively. It can be seen that the method of the present application achieves higher accuracy than both the model output method (VSN) and the image annotation method (ISN).
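A minimal sketch (not the patent's evaluation code) of the generalization metrics listed in Table 1, computed from true positive, false positive and false negative counts, is shown below; the example counts are illustrative only.

```python
# Sketch: accuracy, precision, recall and F1 from confusion counts.
def detection_metrics(tp, fp, fn, tn=0):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn) if (tp + fp + fn + tn) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "F1": f1}

print(detection_metrics(tp=180, fp=20, fn=25))   # illustrative numbers only
```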
SAS 9.4 software is used to compute basic statistics, broad-sense heritability and analysis of variance for the phenotypes obtained by the MSN, ISN and VSN methods. The results show that the broad-sense heritability of the number of ears per unit area obtained by the three methods is high, between 0.71 and 0.93, and the phenotypic coefficient of variation is between 11.2% and 13.4%; the phenotypes follow a normal distribution and are suitable for QTL mapping. QTL mapping of wheat genetic traits is performed using the number of wheat ears per unit area. The QTL mapping method comprises the following steps:
obtaining double haploid population of wheat.
The DH population of Yangmai 16/Zhongmai 895 is genotyped with the wheat 660K SNP chip on an Illumina SNP genotyping platform; the chip markers include the BS, Bobwhite, CAP and D_contig series, 630,518 markers in total. The wheat doubled haploid population is thus genotyped by the SNP chip. The genotyping method comprises: performing whole-genome amplification of the genomic DNA of the wheat to be tested to obtain an amplification product; digesting the amplification product with a random endonuclease to obtain DNA fragments; hybridizing the DNA fragments with the SNP chip so that they bind complementarily to the specific capture probes on the chip beads; washing away unhybridized or mismatched DNA fragments; labeling nucleotide substrates with dinitrophenol and biotin, performing single-base extension on the specific capture probes, and staining so that different nucleotide substrates carry different fluorescent dyes; and scanning the SNP chip, calling genotypes from the detected fluorescence signals and outputting the genotyping results.
Importing the genotyping results into IciMapping software, deleting redundant markers, and constructing a genetic linkage map.
phenotype data obtained by three methods of MSN, ISN and VSN are put into an ICIM-ADD (ICIM-ADD) of IcMapping 4.0 software, and QTL positioning is carried out on the trait of the number of ears per unit area of wheat by combining a 660K SNP chip typing result. The results show that the phenotype results obtained based on the model are highly related to the phenotypes obtained manually in the field and labeled based on pictures. Three QTL loci were co-localized, wherein qsnyz. caas-7DS on the 7DS chromosome was localized in all three ways, and the locus had LOD values between 3.34 and 4.86 and a physical interval between 80.24 and 80.77, as shown in figure 5. The above results show that the method in this embodiment can be used in the field of wheat breeding with high throughput.
In this embodiment, three QTL loci related to the number of ears per unit area are located, distributed on wheat chromosome arms 4DS, 7DS and 7DL. The method can be used to screen wheat lines with a higher ear number per unit area, lays a theoretical foundation for breeding wheat varieties with high, stable yield and excellent quality, and provides a means of marker-assisted selection.
Example two
Based on the same inventive concept, this embodiment discloses a wheat ear number identification system based on Faster R-CNN, comprising:
the calibration module is used for extracting canopy images of wheat ears from different families and calibrating the wheat ears in the images to obtain the labels corresponding to each ear;
the feature map establishing module is used for inputting the wheat ear images and labels into a ResNet network model for feature extraction to obtain wheat ear feature maps and establishing corresponding candidate boxes according to the wheat ear contours;
the model training module is used for training the Faster R-CNN model with the feature maps to obtain an optimal ear recognition model;
the ear recognition module is used for inputting the wheat ear images to be detected into the ear recognition model to obtain the candidate boxes corresponding to each ear; and the ear number output module is used for counting the number of the candidate boxes to obtain the number of wheat ears.
EXAMPLE III
Based on the same inventive concept, this embodiment discloses a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by a computing device, cause the computing device to perform the Faster R-CNN based wheat ear number identification method according to any of the above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that modifications and equivalent substitutions may be made to the embodiments of the invention without departing from its spirit and scope, which is defined by the claims. The above description covers only specific embodiments of the present application, and the scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art can readily conceive within the technical scope of the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A wheat spike number identification method based on Faster R-CNN, characterized by comprising the following steps:
extracting canopy images of wheat ears from different families, and calibrating the wheat ears in the images to obtain the labels corresponding to each ear;
inputting the wheat ear images and labels into a ResNet network model for feature extraction to obtain wheat ear feature maps, and establishing corresponding candidate boxes according to the wheat ear contours;
training a Faster R-CNN model with the feature maps to obtain an optimal ear recognition model;
inputting wheat ear images to be detected into the ear recognition model to obtain the candidate boxes corresponding to each wheat ear;
and counting the number of the candidate boxes to obtain the number of wheat ears.
2. The method for wheat ear number identification based on Faster R-CNN as claimed in claim 1, wherein QTL mapping of the genetic trait of wheat is performed by the number of ears per unit area.
3. The method for wheat spike number identification based on Faster R-CNN as claimed in claim 2, wherein the QTL mapping method comprises:
obtaining a doubled haploid population of wheat;
genotyping the doubled haploid population of wheat with an SNP chip;
importing the genotyping results into IciMapping software, deleting redundant markers, and constructing a genetic linkage map;
obtaining QTL locations for wheat genetic traits by composite interval mapping based on the genetic linkage map.
4. The method for wheat spike number identification based on Faster R-CNN as claimed in claim 3, wherein the genotyping method comprises:
performing whole-genome amplification of the genomic DNA of the wheat to be tested to obtain an amplification product;
digesting the amplification product with a random endonuclease to obtain DNA fragments;
hybridizing the DNA fragments with the SNP chip so that they bind complementarily to the specific capture probes on the beads of the SNP chip;
washing away unhybridized or mismatched DNA fragments;
labeling nucleotide substrates with dinitrophenol and biotin, performing single-base extension on the specific capture probes, and staining so that different nucleotide substrates carry different fluorescent dyes;
and scanning the SNP chip, calling genotypes from the detected fluorescence signals and outputting the genotyping results.
5. The method for wheat spike number identification based on Faster R-CNN as claimed in claim 1, wherein training the Faster R-CNN model comprises adjusting the proportions of the candidate boxes, the IOU threshold and the learning-rate strategy for model training.
6. The method of claim 5, wherein L2 regularization and the softmax classification and regression of the fully connected layer are used to judge whether a candidate box exceeds the threshold, i.e. whether it contains a complete wheat ear; the position coordinates of candidate boxes exceeding the threshold are obtained by softmax regression, and candidate boxes that exceed the IOU threshold but are not the best prediction of a real wheat ear are rejected by non-maximum suppression.
7. The method of claim 6, wherein the wheat ear images and labels are input into several sub-modules of the ResNet network model for feature extraction, candidate boxes are generated from the extracted features, the generated candidate boxes together with the original ear images and labels undergo L2 regularization and softmax classification and regression, the intersection over union of the results is calculated, and non-maximum suppression is applied to obtain a preliminary classification; the preliminary classification is then input in turn into the ROI pooling layer, another sub-module of the ResNet network model, L2 regularization, a softmax layer, intersection-over-union calculation and non-maximum suppression, and the final image recognition result is output.
8. The method of wheat ear number identification based on Faster R-CNN as claimed in claim 7, wherein the several sub-modules of the ResNet network model comprise four sub-modules, the first sub-module comprising a convolutional layer, a BN layer, a ReLU layer and a max-pooling layer; the second to fourth sub-modules each comprise a convolutional block and several identity blocks; and another sub-module of the ResNet network model comprises a convolutional block, two identity blocks, a max-pooling layer and a flattening layer.
9. A wheat ear number identification system based on Faster R-CNN, characterized by comprising:
the calibration module is used for extracting canopy images of wheat ears from different families and calibrating the wheat ears in the images to obtain the labels corresponding to each ear;
the feature map establishing module is used for inputting the wheat ear images and labels into a ResNet network model for feature extraction to obtain wheat ear feature maps and establishing corresponding candidate boxes according to the wheat ear contours;
the model training module is used for training a Faster R-CNN model with the feature maps to obtain an optimal ear recognition model;
the ear recognition module is used for inputting wheat ear images to be detected into the ear recognition model to obtain the candidate boxes corresponding to each ear;
and the ear number output module is used for counting the number of the candidate boxes to obtain the number of wheat ears.
10. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by a computing device, cause the computing device to perform the Faster R-CNN based wheat ear number identification method of any one of claims 1-8.
CN202111589730.6A 2021-12-23 2021-12-23 Wheat spike number identification method, system and medium based on Faster R-CNN Pending CN114266752A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111589730.6A CN114266752A (en) 2021-12-23 2021-12-23 Wheat spike number identification method, system and medium based on Faster R-CNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111589730.6A CN114266752A (en) 2021-12-23 2021-12-23 Wheat spike number identification method, system and medium based on Faster R-CNN

Publications (1)

Publication Number Publication Date
CN114266752A true CN114266752A (en) 2022-04-01

Family

ID=80829167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111589730.6A Pending CN114266752A (en) 2021-12-23 2021-12-23 Wheat spike number identification method, system and medium based on fast R-CNN

Country Status (1)

Country Link
CN (1) CN114266752A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228782A (en) * 2022-12-22 2023-06-06 中国农业科学院农业信息研究所 Wheat Tian Sui number counting method and device based on unmanned aerial vehicle acquisition
CN116740592A (en) * 2023-06-16 2023-09-12 安徽农业大学 Wheat yield estimation method and device based on unmanned aerial vehicle image
WO2024160059A1 (en) * 2023-02-01 2024-08-08 中国科学院植物研究所 Wheat-ear point cloud segmentation method and system based on deep learning and geometric correction

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109055370A (en) * 2018-09-17 2018-12-21 中国农业科学院作物科学研究所 Stalk WSC content gene label and application based on middle wheat 895
CN110427839A (en) * 2018-12-26 2019-11-08 西安电子科技大学 Video object detection method based on multilayer feature fusion
CN112488006A (en) * 2020-12-05 2021-03-12 东南大学 Target detection algorithm based on wheat image
CN112529045A (en) * 2020-11-20 2021-03-19 济南信通达电气科技有限公司 Weather image identification method, equipment and medium related to power system
CN112779348A (en) * 2020-12-31 2021-05-11 四川农业大学 Wheat unit area spike number major QTL site, KASP primer closely linked with same and application thereof
CN113222991A (en) * 2021-06-16 2021-08-06 南京农业大学 Deep learning network-based field ear counting and wheat yield prediction

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109055370A (en) * 2018-09-17 2018-12-21 中国农业科学院作物科学研究所 Stalk WSC content gene label and application based on middle wheat 895
CN110427839A (en) * 2018-12-26 2019-11-08 西安电子科技大学 Video object detection method based on multilayer feature fusion
CN112529045A (en) * 2020-11-20 2021-03-19 济南信通达电气科技有限公司 Weather image identification method, equipment and medium related to power system
CN112488006A (en) * 2020-12-05 2021-03-12 东南大学 Target detection algorithm based on wheat image
CN112779348A (en) * 2020-12-31 2021-05-11 四川农业大学 Wheat unit area spike number major QTL site, KASP primer closely linked with same and application thereof
CN113222991A (en) * 2021-06-16 2021-08-06 南京农业大学 Deep learning network-based field ear counting and wheat yield prediction

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228782A (en) * 2022-12-22 2023-06-06 中国农业科学院农业信息研究所 Wheat Tian Sui number counting method and device based on unmanned aerial vehicle acquisition
CN116228782B (en) * 2022-12-22 2024-01-12 中国农业科学院农业信息研究所 Wheat Tian Sui number counting method and device based on unmanned aerial vehicle acquisition
WO2024160059A1 (en) * 2023-02-01 2024-08-08 中国科学院植物研究所 Wheat-ear point cloud segmentation method and system based on deep learning and geometric correction
CN116740592A (en) * 2023-06-16 2023-09-12 安徽农业大学 Wheat yield estimation method and device based on unmanned aerial vehicle image
CN116740592B (en) * 2023-06-16 2024-02-02 安徽农业大学 Wheat yield estimation method and device based on unmanned aerial vehicle image

Similar Documents

Publication Publication Date Title
CN114266752A (en) Wheat spike number identification method, system and medium based on fast R-CNN
US8321147B2 (en) Statistical approach for optimal use of genetic information collected on historical pedigrees, genotyped with dense marker maps, into routine pedigree analysis of active maize breeding populations
Xie et al. Optical topometry and machine learning to rapidly phenotype stomatal patterning traits for maize QTL mapping
Feldmann et al. Multi-dimensional machine learning approaches for fruit shape phenotyping in strawberry
CN114818909B (en) Weed detection method and device based on crop growth characteristics
CN112883915B (en) Automatic wheat head identification method and system based on transfer learning
Hsieh et al. Fruit maturity and location identification of beef tomato using R-CNN and binocular imaging technology
CN112575116B (en) Soybean whole genome SNP locus combination, gene chip and application
CN111291686B (en) Extraction method and system for crop root-fruit phenotype parameters and root-fruit phenotype discrimination method and system
CN110982933B (en) Molecular marker closely linked with major QTL (quantitative trait locus) of wheat grain length and application thereof
CN107419000A (en) A kind of full genome system of selection and its application that prediction Soybean Agronomic Characters phenotype is sampled based on haplotype
CN104615912A (en) Modified whole genome correlation analysis algorithm based on channel
Bartholomé et al. Genomic prediction: progress and perspectives for rice improvement
CN116580773A (en) Breeding cross-representation type prediction method and system based on ensemble learning and electronic equipment
CN109727642B (en) Whole genome prediction method and device based on random forest model
CN114937030A (en) Phenotypic parameter calculation method for intelligent agricultural planting of lettuce
CN108291265A (en) The method of palm oil yield for prognostic experiment oil palm plant
CN110692512A (en) Method for rapidly predicting heterosis based on crop genome size
CN110245551A (en) The recognition methods of field crops under the operating condition of grass more than a kind of
CN109472771A (en) Detection method, device and the detection device of maize male ears
CN118216422B (en) Phenotype assisted lemon breeding method based on deep learning
Sharma et al. Genomic selection: a revolutionary approach for forest tree improvement in the wake of climate change
CN109961441A (en) Hybrid rice seed splits the efficient measuring method of clever rate
Garcia Computer Vision Phenomics and Quantitative Genetics of Sweet Corn Ear Architecture and Fungal Disease Resistance
CN116883838A (en) Pepper disease and pest identification method and system for field scene

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Xiao Yonggui

Inventor after: Li Lei

Inventor after: Yang Mengjiao

Inventor after: Muhammad adir Hassen

Inventor after: Han Zhiguo

Inventor after: Xia Xianchun

Inventor after: He Zhonghu

Inventor before: Xiao Yonggui

Inventor before: Li Lei

Inventor before: Yang Mengjiao

Inventor before: Muhammad adir Hassen

Inventor before: Han Zhiguo

Inventor before: Xia Xianchun

Inventor before: He Zhonghu

CB03 Change of inventor or designer information