CN105115469A - Paddy rice spike phenotypic parameter automatic measuring and spike weight predicting method


Info

Publication number
CN105115469A
CN105115469A
Authority
CN
China
Prior art keywords
rice
image
ear
weight
Prior art date
Legal status
Granted
Application number
CN201510457927.2A
Other languages
Chinese (zh)
Other versions
CN105115469B (en)
Inventor
杨万能
黄成龙
冯慧
段凌凤
陈国兴
熊立仲
Current Assignee
Wuhan Red Star Yang Technology Co ltd
Original Assignee
Huazhong Agricultural University
Priority date
Filing date
Publication date
Application filed by Huazhong Agricultural University
Priority to CN201510457927.2A
Publication of CN105115469A
Application granted
Publication of CN105115469B
Legal status: Active
Anticipated expiration

Landscapes

  • Medicines Containing Plant Substances (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatically measuring rice ear (spike) phenotypic parameters and predicting ear weight. The method comprises acquiring a rice ear image with a scanner: a dried rice ear is laid flat on white paper and fixed, a computer is started and connected to the scanner, and the ear fixed on the white paper is placed in the scanner to acquire the ear image. With this method, multiple traits of the rice ear can be obtained rapidly and accurately from the scanned ear image alone, without a large measuring platform or complex measuring software, and information related to ear weight can also be obtained, improving measurement efficiency and reducing labor intensity.

Description

Rice ear phenotype parameter automatic measurement and ear weight prediction method
Technical Field
The invention belongs to the field of digital image processing and mathematical modeling, and particularly relates to a rice ear phenotype parameter automatic measurement and ear weight prediction method.
Background
Rice ears are the form in which mature rice expresses itself and are directly related to rice yield, so the trait characteristics of rice ears are of great significance for rice breeding and functional genomics research. At present, both rice ear trait measurement and ear weight measurement rely on manual measurement, which is time-consuming, labor-intensive, inefficient, and poorly repeatable; moreover, manual measurement is destructive, so multiple traits of a rice ear cannot be obtained simultaneously. With the rapid development of modern breeding technology, thousands of new rice varieties can be produced every day; to screen varieties with popularization potential from among them efficiently and accurately, large-scale, high-throughput measurement of rice ear traits is required, which the manual method obviously cannot meet. Against this background, large phenotyping instruments and related software exist at home and abroad, such as the Scanalyzer 3D system of LemnaTec, the PASTAR/PASTA Viewer developed by Ikeda et al., and P-TRAP developed by Al-Tam et al., but all of them depend on a large measuring platform and complex software, and none reflects information related to ear weight.
Disclosure of Invention
The invention aims to overcome the above defects and provide a method for automatically measuring rice ear phenotypic parameters and predicting ear weight.
The invention relates to a method for automatically measuring rice ear phenotype parameters and predicting ear weight, which comprises the following steps:
(1) acquiring a rice ear image with a scanner: laying the dried rice ears flat on white paper, fixing them with adhesive tape, starting a computer, connecting a scanner, and placing the ears fixed on the white paper into the scanner to obtain the rice ear image;
(2) graying the scanned rice ear image: extracting the R, G and B components of the scanned rice ear image and obtaining a grayscale image using the combination 3R − G + B;
(3) filtering and denoising the grayscale image and separating a binary image of the ear region: filtering and denoising the grayscale image, then obtaining a binary image by OTSU automatic thresholding;
(4) extracting the filled grain area and total grain area, and the filled grain number and total grain number, from the binary image of step (3); extracting the rice ear skeleton with the Hilditch thinning algorithm and taking the ear length as the longest skeleton path; calculating the box-counting dimension; restoring the texture and color features of the rice ear image;
(5) performing correlation analysis and stepwise regression analysis between the trait parameters of step (4) and the ear weight;
(6) determining a BP neural network structure according to the correlation analysis and stepwise regression analysis results of step (5);
(7) predicting the ear weight with the BP neural network structure of step (6).
Further, the method for extracting the filled grain area and total grain area in step (4) is as follows: in the binary image the foreground value is 1 and the background value is 0; the image is preprocessed to distinguish filled grains from shriveled grains, and the numbers of filled grains and total grains are counted; after the shriveled grains are removed, the number of foreground pixels occupied by the grains is multiplied by the spatial resolution of a unit pixel (in mm²) to obtain the filled grain area and total grain area, using the formula:
$$S = \frac{N}{\mathrm{dpi}^2}$$
wherein S is the area, N is the number of target foreground pixels, and dpi is the resolution.
Further, the method for extracting the rice ear skeleton and calculating the ear length in step (4) is as follows: process the image with the Hilditch thinning algorithm to obtain its skeleton; remove burrs to optimize the skeleton image; extract the ear length from the optimized skeleton by scanning the whole skeleton image for endpoint information and computing the total number of foreground pixels between every pair of endpoints, the maximum of which is the ear length; retain only the foreground pixels of the longest path and convert the foreground pixels of all other paths to background pixels.
Further, the method for calculating the box-counting dimension in step (4) is as follows: following the definition of the box dimension, a Cantor-set construction is adopted and the binary image is processed to obtain the box-counting dimension reflecting the shape of the rice ear, calculated as:
$$BCD = \lim_{k \to \infty} \frac{\ln N_{\delta_k}(F)}{-\ln \delta_k}$$
where $BCD$ is the box-counting dimension, $\delta_k$ is the box size, and $N_{\delta_k}(F)$ is the number of non-empty boxes.
Further, the method for restoring the texture and color features of the rice ear image in step (4) is as follows: restore the texture and color features of the binary image with an image mask technique; extract 6 texture feature parameters from the gray-level histogram and the gray-level co-occurrence matrix; divide the gray-level range of the whole image into 3 groups and count the number of pixels falling in each group; remove the branches, and calculate the color information of the rice ear both with and without branches.
Further, the specific process of step (5) is as follows: perform correlation analysis between the trait parameters of step (4) and the ear weight to obtain the correlation coefficient r, which measures the degree of linear correlation between two random variables, and perform stepwise regression analysis, where
$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2 \cdot \sum_{i=1}^{n}(y_i - \bar{y})^2}}$$
where $n$ is the sample size, $x_i$ and $y_i$ are the observed values of the two variables, and $\bar{x}$ and $\bar{y}$ are their means.
Further, the specific process of the step (6) is as follows:
hidden layer node output value in BP neural network:
$$z_k = f_1\left(\sum_{i=1}^{n} V_{ki} X_i\right), \quad k = 1, 2, \ldots, q$$
output layer node output value in BP neural network:
$$y_j = f_2\left(\sum_{k=1}^{q} W_{jk} z_k\right), \quad j = 1, 2, \ldots, m$$
where $V_{ki}$ is the weight between the input layer and the hidden layer, $W_{jk}$ is the weight between the hidden layer and the output layer, $f_1$ is the transfer function of the hidden layer, $f_2$ is the transfer function of the output layer, and $X_i$ is the input value of the i-th input layer node;
the error $E_p$ of the p-th training sample is:
$$E_p = \frac{1}{2} \sum_{j=1}^{m} \left(t_j^p - y_j^p\right)^2$$
where $t_j^p$ is the expected output value and $y_j^p$ is the actual output value;
for all $P$ samples, the global error is:
$$E = \frac{1}{2} \sum_{p=1}^{P} \sum_{j=1}^{m} \left(t_j^p - y_j^p\right)^2 = \sum_{p=1}^{P} E_p$$
where $t_j^p$ is the expected output value and $y_j^p$ is the actual output value.
Without relying on a large measuring platform or complex measuring software, the invention can quickly and accurately obtain multiple traits of the rice ear from the scanned ear image alone, as well as information related to ear weight, improving measurement efficiency and reducing labor intensity.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a graph of the predicted ear weight and the actual ear weight.
Detailed Description
Example: a rice ear phenotypic parameter automatic measurement and ear weight prediction method comprises the following steps:
(1) Acquiring a rice ear image with a scanner: lay the dried rice ears flat on white paper, fix them with adhesive tape, start the computer, connect the scanner, and place the ears fixed on the white paper into the scanner to obtain a high-resolution rice ear image.
(2) Graying the scanned rice ear image: to improve the contrast between foreground and background, the R, G and B components of the scanned rice ear image are extracted and a grayscale image with clearly enhanced contrast is obtained using the combination 3R − G + B.
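By way of illustration, a minimal Python sketch of this graying step is given below (not part of the patent text; the rescaling to the 8-bit range is our assumption, since 3R − G + B can exceed 255):

```python
import cv2
import numpy as np

def gray_3r_g_b(image_path):
    """Gray a scanned RGB ear image with the 3R - G + B combination."""
    bgr = cv2.imread(image_path)                # OpenCV loads channels as B, G, R
    b, g, r = cv2.split(bgr.astype(np.float32))
    gray = 3.0 * r - g + b                      # emphasizes the yellowish ear pixels
    # Rescale to 0-255: 3R - G + B can leave the 8-bit range (our assumption)
    gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
    return gray.astype(np.uint8)
```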
(3) Filtering and denoising the grayscale image and separating a binary image of the ear region: the grayscale image is filtered and denoised, and a binary image is obtained by OTSU automatic thresholding.
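A corresponding sketch of the denoising and OTSU binarization follows; the median filter and kernel size are assumptions, as the patent does not name the filter used:

```python
import cv2

def segment_ear(gray):
    """Filter the grayscale image, then OTSU-threshold to a binary ear mask."""
    denoised = cv2.medianBlur(gray, 5)          # kernel size 5 is an assumption
    # OTSU selects the threshold automatically; ear pixels become foreground (255)
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```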
(4) Calculating the grain number and ear area of the rice ear: in the binary image the foreground value is 1 and the background value is 0. Because filled grains and shriveled grains affect the ear weight very differently, the image is preprocessed to distinguish filled grains from shriveled grains, and the numbers of filled grains and total grains are counted; after the shriveled grain portions are removed, the number of foreground pixels occupied by the grains is multiplied by the spatial resolution of a unit pixel (in mm²) to obtain the filled grain area and total grain area. The calculation formula is:
$$S = \frac{N}{\mathrm{dpi}^2}$$
wherein S is the area, N is the number of target foreground pixels, and dpi is the resolution.
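A minimal sketch of this area computation follows (the inch-to-millimeter conversion at the end is our addition, since S = N/dpi² is in square inches when dpi is dots per inch):

```python
import numpy as np

def foreground_area_mm2(binary, dpi):
    """Area from the foreground pixel count via S = N / dpi^2."""
    n = int(np.count_nonzero(binary))   # N: number of target foreground pixels
    area_in2 = n / float(dpi) ** 2      # the patent's S = N / dpi^2, square inches
    return area_in2 * 25.4 ** 2         # convert to mm^2 (conversion is our addition)
```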
(5) Processing the original binary image, removing burrs to obtain the rice ear skeleton, and calculating the ear length: the image is processed with the Hilditch thinning algorithm to obtain its skeleton; burrs are removed to optimize the skeleton image; the ear length is extracted from the optimized skeleton by scanning the whole skeleton image for endpoint information and computing the total number of foreground pixels between every pair of endpoints, the maximum of which is the ear length. To ensure that the calculated value reflects the actual ear length and can be checked manually, only the foreground pixels of the longest path are retained, and the foreground pixels of all other paths are converted to background pixels.
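The sketch below illustrates the longest-path idea; scikit-image's skeletonize stands in for the Hilditch thinning named in the patent, and the burr-removal step is omitted:

```python
from collections import deque
import numpy as np
from skimage.morphology import skeletonize

def ear_length_pixels(binary):
    """Skeletonize the mask and return the longest endpoint-to-endpoint
    path length, counted in foreground pixels."""
    skel = skeletonize(binary > 0)
    ys, xs = np.nonzero(skel)
    pixels = set(zip(ys.tolist(), xs.tolist()))

    def nbrs(p):                        # 8-connected skeleton neighbours
        return [(p[0] + dy, p[1] + dx)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy or dx) and (p[0] + dy, p[1] + dx) in pixels]

    endpoints = [p for p in pixels if len(nbrs(p)) == 1]

    def bfs(start):                     # pixel-count distance along the skeleton
        dist = {start: 1}
        queue = deque([start])
        while queue:
            cur = queue.popleft()
            for nb in nbrs(cur):
                if nb not in dist:
                    dist[nb] = dist[cur] + 1
                    queue.append(nb)
        return dist

    best = 0
    for e in endpoints:                 # compare every endpoint pair
        d = bfs(e)
        best = max([best] + [d[e2] for e2 in endpoints if e2 in d])
    return best
```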
(6) Obtaining the box-counting dimension BCD of each ear image: following the definition of the box dimension, a Cantor-set construction is adopted and the binary image is processed to obtain the box-counting dimension reflecting the shape of the rice ear. The calculation formula is:
$$BCD = \lim_{k \to \infty} \frac{\ln N_{\delta_k}(F)}{-\ln \delta_k}$$
where $\delta_k$ is the box size and $N_{\delta_k}(F)$ is the number of non-empty boxes.
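In practice the limit is approximated by a log-log fit over a range of box sizes; the generic box-counting sketch below is a finite-scale approximation, not the patent's exact Cantor-set procedure:

```python
import numpy as np

def box_counting_dimension(binary):
    """Estimate the box-counting dimension of a binary image:
    slope of ln N_delta against -ln delta over dyadic box sizes."""
    img = binary > 0
    n = 2 ** int(np.floor(np.log2(min(img.shape))))
    img = img[:n, :n]                   # crop to a power-of-two square
    sizes, counts = [], []
    size = n
    while size >= 2:
        # count boxes of side `size` containing at least one foreground pixel
        view = img.reshape(n // size, size, n // size, size)
        nonempty = int(view.any(axis=(1, 3)).sum())
        if nonempty > 0:
            sizes.append(size)
            counts.append(nonempty)
        size //= 2
    slope, _ = np.polyfit(-np.log(sizes), np.log(counts), 1)
    return slope
```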
(7) Restoring the texture and color features of the binary image: the texture and color features of the binary image are restored using an image mask technique.
(8) Extracting 6 texture features: 6 texture feature parameters are extracted from the gray-level histogram and the gray-level co-occurrence matrix. Since the graying process preserves the original color information to some extent, color can be reflected by the gray value: the gray-value range of the whole image is divided into 3 groups and the number of pixels in each group is counted. Because the branches may influence the result, the image is preprocessed to remove them, and the color information of the rice ear is calculated both with and without branches.
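A sketch of the texture extraction follows; the patent does not list which six parameters are used, so the split into three histogram statistics and three co-occurrence properties is an assumption:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray, mask):
    """Six texture parameters: three from the gray-level histogram and
    three from the gray-level co-occurrence matrix (an assumed selection)."""
    vals = gray[mask > 0].astype(np.float64)
    mean = vals.mean()
    hist_feats = [mean, vals.std(), float(((vals - mean) ** 3).mean())]
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, normed=True)
    glcm_feats = [float(graycoprops(glcm, p)[0, 0])
                  for p in ("contrast", "energy", "homogeneity")]
    return hist_feats + glcm_feats
```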
(9) Analysis and modeling: after a sufficient number of rice ear images have been processed, the trait parameters are obtained. Correlation analysis between these parameters and the ear weight yields the correlation coefficient r, and stepwise regression analysis reveals the collinearity among the trait parameters and the parameters that contribute significantly to the ear weight. A BP neural network structure is designed according to the analysis results, and the BP neural network is used to build the ear weight prediction model. The correlation coefficient is an index measuring the degree of linear correlation between two random variables:
$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2 \cdot \sum_{i=1}^{n}(y_i - \bar{y})^2}}$$
where $n$ is the sample size, $x_i$ and $y_i$ are the observed values of the two variables, and $\bar{x}$ and $\bar{y}$ are their means.
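For reference, a direct NumPy implementation of this coefficient (a sketch; the stepwise regression itself would typically be layered on top using a statistics package):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient r between a trait parameter and ear weight."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))
```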
Hidden layer node output value in BP neural network:
$$z_k = f_1\left(\sum_{i=1}^{n} V_{ki} X_i\right), \quad k = 1, 2, \ldots, q$$
output layer node output value in BP neural network:
$$y_j = f_2\left(\sum_{k=1}^{q} W_{jk} z_k\right), \quad j = 1, 2, \ldots, m$$
where $V_{ki}$ is the weight between the input layer and the hidden layer, $W_{jk}$ is the weight between the hidden layer and the output layer, $f_1$ is the transfer function of the hidden layer, $f_2$ is the transfer function of the output layer, and $X_i$ is the input value of the i-th input layer node.
The error $E_p$ of the p-th training sample is:
$$E_p = \frac{1}{2} \sum_{j=1}^{m} \left(t_j^p - y_j^p\right)^2$$
where $t_j^p$ is the expected output value and $y_j^p$ is the actual output value.
For all $P$ samples, the global error is:
$$E = \frac{1}{2} \sum_{p=1}^{P} \sum_{j=1}^{m} \left(t_j^p - y_j^p\right)^2 = \sum_{p=1}^{P} E_p$$
where $t_j^p$ is the expected output value and $y_j^p$ is the actual output value.
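A minimal sketch of the forward pass and per-sample error defined above; the tanh hidden transfer and identity output transfer are assumptions, since the patent does not fix $f_1$ and $f_2$:

```python
import numpy as np

def bp_forward(X, V, W, f1=np.tanh, f2=lambda a: a):
    """Forward pass: z_k = f1(sum_i V[k, i] * X[i]), y_j = f2(sum_k W[j, k] * z_k)."""
    z = f1(V @ X)       # hidden layer output, shape (q,)
    y = f2(W @ z)       # output layer output, shape (m,); here m = 1 (ear weight)
    return z, y

def sample_error(t, y):
    """E_p = 0.5 * sum_j (t_j - y_j)^2 for one training sample."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    return 0.5 * float(((t - y) ** 2).sum())
```

Training would then adjust V and W by gradient descent on the global error E, e.g. with standard error back-propagation.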
First, 50 cut mature rice ears were dried under natural sunlight and pretreated, then laid flat, fixed on white paper, and scanned. The images were processed by a preset computer program to obtain the trait parameters and the ear weight model, from which the predicted ear weight is obtained. The actual and predicted ear weights of the 50 rice ears are listed in Table 1 below; 10-fold cross validation was performed on the ear weight model to check its precision and reliability, with the results shown in Table 2.
Table 1: Actual ear weight and predicted ear weight (unit: g)
Table 2: 10-fold cross validation parameters
The cross validation parameters are calculated as follows:
R²: an index reflecting the degree of correlation between two variables; the correlation coefficient can be used to evaluate the accuracy of the model's ear weight estimates. Let $\hat{y}_i$ denote the model's estimate for sample $i$ and $\bar{\hat{y}}$ the mean of the estimates; the correlation coefficient $R$ is calculated as:
$$R = \frac{\sum_{i=1}^{n}(y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})}{\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2 \sum_{i=1}^{n}(\hat{y}_i - \bar{\hat{y}})^2}}$$
RMSE: the square root of the mean squared error between the sample predicted values and the true values. This parameter is very sensitive to errors that deviate greatly from the mean, and reflects the dispersion of the model's estimation error. The calculation formula is:
$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$
MAE: the mean absolute error between the predicted and true values of the samples. Because the deviations are taken as absolute values, positive and negative errors cannot cancel, so the MAE better reflects the actual magnitude of the prediction error. The calculation formula is:
$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$
MAPE: the mean relative error between the predicted and true values of the samples, which reflects the estimation accuracy of the model. The calculation formula is:
$$MAPE = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|y_i - \hat{y}_i\right|}{y_i} \times 100\%$$
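All four fold-level statistics can be computed in a few lines; the sketch below assumes NumPy arrays of true and predicted ear weights for one validation fold:

```python
import numpy as np

def validation_metrics(y_true, y_pred):
    """R^2, RMSE, MAE and MAPE as defined above, for one cross-validation fold."""
    y, yh = np.asarray(y_true, float), np.asarray(y_pred, float)
    yc, yhc = y - y.mean(), yh - yh.mean()
    r = (yc * yhc).sum() / np.sqrt((yc ** 2).sum() * (yhc ** 2).sum())
    rmse = float(np.sqrt(((y - yh) ** 2).mean()))
    mae = float(np.abs(y - yh).mean())
    mape = float((np.abs(y - yh) / y).mean() * 100.0)
    return {"R2": float(r ** 2), "RMSE": rmse, "MAE": mae, "MAPE_%": mape}
```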
According to the 10-fold cross validation results, the mean error of the ear weight prediction is only 6%. The method thus achieves rapid and accurate measurement of multiple parameters and prediction of ear weight by digital image processing and mathematical modeling, greatly improving measurement efficiency.

Claims (7)

1. A rice ear phenotype parameter automatic measurement and ear weight prediction method is characterized by comprising the following steps:
(1) acquiring a rice ear image with a scanner: laying the dried rice ears flat on white paper, fixing them with adhesive tape, starting a computer, connecting a scanner, and placing the ears fixed on the white paper into the scanner to obtain the rice ear image;
(2) graying the scanned rice ear image: extracting the R, G and B components of the scanned rice ear image and obtaining a grayscale image using the combination 3R − G + B;
(3) filtering and denoising the grayscale image and separating a binary image of the ear region: filtering and denoising the grayscale image, and obtaining a binary image by OTSU automatic thresholding;
(4) extracting the filled grain area and total grain area, and the filled grain number and total grain number, from the binary image of step (3); extracting the rice ear skeleton with the Hilditch thinning algorithm and taking the ear length as the longest skeleton path; calculating the box-counting dimension; restoring the texture and color features of the rice ear image;
(5) performing correlation analysis and stepwise regression analysis between the trait parameters of step (4) and the ear weight;
(6) determining a BP neural network structure according to the correlation analysis and stepwise regression analysis results of step (5);
(7) predicting the ear weight with the BP neural network structure of step (6).
2. The method for automatically measuring rice ear phenotypic parameters and predicting ear weight as claimed in claim 1, wherein the method for extracting the filled grain area and total grain area in step (4) is as follows: in the binary image the foreground value is 1 and the background value is 0; the image is preprocessed to distinguish filled grains from shriveled grains, and the numbers of filled grains and total grains are counted; after the shriveled grains are removed, the number of foreground pixels occupied by the grains is multiplied by the spatial resolution of a unit pixel (in mm²) to obtain the filled grain area and total grain area, using the formula:
$$S = \frac{N}{\mathrm{dpi}^2}$$
wherein S is the area, N is the number of target foreground pixels, and dpi is the resolution.
3. The method for automatically measuring rice ear phenotypic parameters and predicting ear weight as claimed in claim 1 or 2, wherein the method for extracting the rice ear skeleton and calculating the ear length in step (4) is as follows: the image is processed with the Hilditch thinning algorithm to obtain its skeleton; burrs are removed to optimize the skeleton image; the ear length is extracted from the optimized skeleton by scanning the whole skeleton image for endpoint information and computing the total number of foreground pixels between every pair of endpoints, the maximum of which is the ear length; only the foreground pixels of the longest path are retained, and the foreground pixels of all other paths are converted to background pixels.
4. The method for automatically measuring rice ear phenotypic parameters and predicting ear weight as claimed in claim 1 or 2, wherein the method for calculating the box-counting dimension in step (4) is as follows: following the definition of the box dimension, a Cantor-set construction is adopted and the binary image is processed to obtain the box-counting dimension reflecting the shape of the rice ear, calculated as:
$$BCD = \lim_{k \to \infty} \frac{\ln N_{\delta_k}(F)}{-\ln \delta_k}$$
where $BCD$ is the box-counting dimension, $\delta_k$ is the box size, and $N_{\delta_k}(F)$ is the number of non-empty boxes.
5. The method for automatically measuring rice ear phenotypic parameters and predicting ear weight as claimed in claim 1 or 2, wherein the method for restoring the texture and color features of the rice ear image in step (4) is as follows: the texture and color features of the binary image are restored with an image mask technique; 6 texture feature parameters are extracted from the gray-level histogram and the gray-level co-occurrence matrix; the gray-level range of the whole image is divided into 3 groups and the number of pixels in each group is counted; the branches are removed, and the color information of the rice ear is calculated both with and without branches.
6. The method for automatically measuring rice ear phenotypic parameters and predicting ear weight as claimed in claim 1 or 2, wherein the specific process of step (5) is as follows: correlation analysis is performed between the trait parameters of step (4) and the ear weight to obtain the correlation coefficient r, which measures the degree of linear correlation between two random variables, and stepwise regression analysis is performed, wherein
$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2 \cdot \sum_{i=1}^{n}(y_i - \bar{y})^2}}$$
where $n$ is the sample size, $x_i$ and $y_i$ are the observed values of the two variables, and $\bar{x}$ and $\bar{y}$ are their means.
7. The method for automatically measuring the phenotypic parameters of rice ears and predicting the weight of ears as claimed in claim 1 or 2, wherein the step (6) comprises the following steps:
hidden layer node output value in the BP neural network:
$$z_k = f_1\left(\sum_{i=1}^{n} V_{ki} X_i\right), \quad k = 1, 2, \ldots, q$$
output layer node output value in the BP neural network:
$$y_j = f_2\left(\sum_{k=1}^{q} W_{jk} z_k\right), \quad j = 1, 2, \ldots, m$$
where $V_{ki}$ is the weight between the input layer and the hidden layer, $W_{jk}$ is the weight between the hidden layer and the output layer, $f_1$ is the transfer function of the hidden layer, $f_2$ is the transfer function of the output layer, and $X_i$ is the input value of the i-th input layer node;
the error $E_p$ of the p-th training sample is:
$$E_p = \frac{1}{2} \sum_{j=1}^{m} \left(t_j^p - y_j^p\right)^2$$
where $t_j^p$ is the expected output value and $y_j^p$ is the actual output value;
for all $P$ samples, the global error is:
$$E = \frac{1}{2} \sum_{p=1}^{P} \sum_{j=1}^{m} \left(t_j^p - y_j^p\right)^2 = \sum_{p=1}^{P} E_p$$
where $t_j^p$ is the expected output value and $y_j^p$ is the actual output value.
CN201510457927.2A 2015-07-29 2015-07-29 Rice ear phenotypic parameter automatic measurement and ear weight prediction method Active CN105115469B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510457927.2A CN105115469B (en) 2015-07-29 2015-07-29 Rice ear phenotypic parameter automatic measurement and ear weight prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510457927.2A CN105115469B (en) 2015-07-29 2015-07-29 Rice ear phenotypic parameter automatic measurement and ear weight prediction method

Publications (2)

Publication Number Publication Date
CN105115469A (en) 2015-12-02
CN105115469B CN105115469B (en) 2017-10-24

Family

ID=54663514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510457927.2A Active CN105115469B (en) 2015-07-29 2015-07-29 Rice ear phenotypic parameter automatic measurement and ear weight prediction method

Country Status (1)

Country Link
CN (1) CN105115469B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574853A (en) * 2015-12-07 2016-05-11 中国科学院合肥物质科学研究院 Method and system for calculating number of wheat grains based on image identification
CN106404780A (en) * 2016-08-31 2017-02-15 上海交通大学 Tablet computer-based rice spike phenotype analyzer and use method thereof
CN107491812A (en) * 2016-06-13 2017-12-19 中国农业大学 Short-term load forecasting method based on Spot Price
CN109738442A (en) * 2019-01-05 2019-05-10 华中农业大学 A kind of full-automatic extraction system of rice spike of rice character based on the registration imaging of big view X-ray visible light
CN111895916A (en) * 2020-07-14 2020-11-06 华南农业大学 Rice spike length measuring device and measuring method
CN114283882A (en) * 2021-12-31 2022-04-05 华智生物技术有限公司 Nondestructive poultry egg quality character prediction method and system
CN115375694A (en) * 2022-10-27 2022-11-22 浙江托普云农科技股份有限公司 Portable rice whole ear measuring method based on image recognition and application thereof
CN115860269A (en) * 2023-02-20 2023-03-28 南京信息工程大学 Crop yield prediction method based on triple attention mechanism

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011089569A1 (en) * 2010-01-21 2011-07-28 Indegene Lifesystems Pvt. Ltd. Tool for clinical data mining and analysis
CN104021369A (en) * 2014-04-30 2014-09-03 南京农业大学 Grain counting method for spike of single rice based on digital image processing technology

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011089569A1 (en) * 2010-01-21 2011-07-28 Indegene Lifesystems Pvt. Ltd. Tool for clinical data mining and analysis
US20120290317A1 (en) * 2010-01-21 2012-11-15 Rajesh Nair Tool for clinical data mining and analysis
CN104021369A (en) * 2014-04-30 2014-09-03 南京农业大学 Grain counting method for spike of single rice based on digital image processing technology

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Lü Xingjun: "Research on Wheat Ear Recognition Based on Image Processing Technology", China Master's Theses Full-text Database, Information Science and Technology *
Duan Lingfeng: "In-vivo Measurement of Panicle Traits of Rice Plants", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Gong Hongju: "Research on Rice Yield Prediction Methods Based on Fractal Theory and Image Texture Analysis", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574853A (en) * 2015-12-07 2016-05-11 中国科学院合肥物质科学研究院 Method and system for calculating number of wheat grains based on image identification
CN105574853B (en) * 2015-12-07 2018-05-15 中国科学院合肥物质科学研究院 The method and system that a kind of wheat head grain number based on image recognition calculates
CN107491812A (en) * 2016-06-13 2017-12-19 中国农业大学 Short-term load forecasting method based on Spot Price
CN106404780A (en) * 2016-08-31 2017-02-15 上海交通大学 Tablet computer-based rice spike phenotype analyzer and use method thereof
CN109738442A (en) * 2019-01-05 2019-05-10 华中农业大学 A kind of full-automatic extraction system of rice spike of rice character based on the registration imaging of big view X-ray visible light
CN109738442B (en) * 2019-01-05 2021-12-31 华中农业大学 Full-automatic rice ear character extraction system based on large-field X-ray visible light registration imaging
CN111895916A (en) * 2020-07-14 2020-11-06 华南农业大学 Rice spike length measuring device and measuring method
CN111895916B (en) * 2020-07-14 2021-09-24 华南农业大学 Rice spike length measuring device and measuring method
CN114283882A (en) * 2021-12-31 2022-04-05 华智生物技术有限公司 Nondestructive poultry egg quality character prediction method and system
CN114283882B (en) * 2021-12-31 2022-08-19 华智生物技术有限公司 Non-destructive poultry egg quality character prediction method and system
CN115375694A (en) * 2022-10-27 2022-11-22 浙江托普云农科技股份有限公司 Portable rice whole ear measuring method based on image recognition and application thereof
CN115860269A (en) * 2023-02-20 2023-03-28 南京信息工程大学 Crop yield prediction method based on triple attention mechanism

Also Published As

Publication number Publication date
CN105115469B (en) 2017-10-24

Similar Documents

Publication Publication Date Title
CN105115469B (en) Rice ear phenotypic parameter automatic measurement and ear weight prediction method
CN104102929B (en) Hyperspectral remote sensing data classification method based on deep learning
CN105139395B (en) SAR image segmentation method based on small echo pond convolutional neural networks
CN107004123A (en) Iterative defect filters out technique
CN111222545B (en) Image classification method based on linear programming incremental learning
CN110232445B (en) Cultural relic authenticity identification method based on knowledge distillation
CN104820841B (en) Hyperspectral classification method based on low order mutual information and spectrum context waveband selection
CN103810522B (en) Counting method and device for corn ear grains
CN114325879B (en) Quantitative precipitation correction method based on classification probability
CN114022446B (en) Leather flaw detection method and system based on improvement YOLOv3
CN102073867B (en) Sorting method and device for remote sensing images
CN109948527B (en) Small sample terahertz image foreign matter detection method based on integrated deep learning
CN112528058B (en) Fine-grained image classification method based on image attribute active learning
CN113077444A (en) CNN-based ultrasonic nondestructive detection image defect classification method
CN108345897A (en) A kind of evaluation method of Scenic Bridges index
CN115439654B (en) Method and system for finely dividing weakly supervised farmland plots under dynamic constraint
CN111126511A (en) Vegetation index fusion-based LAI quantitative model establishment method
Xin et al. Three‐dimensional reconstruction of Vitis vinifera (L.) cvs Pinot Noir and Merlot grape bunch frameworks using a restricted reconstruction grammar based on the stochastic L‐system
CN115494007A (en) Random forest based high-precision rapid detection method and device for soil organic matters
CN117853722A (en) Steel metallographic structure segmentation method integrating superpixel information
CN115457001A (en) Photovoltaic panel foreign matter detection method, system, device and medium based on VGG network
CN115272826A (en) Image identification method, device and system based on convolutional neural network
Gislason et al. Comparison between automated analysis of zooplankton using ZooImage and traditional methodology
CN108663334A (en) The method for finding soil nutrient spectral signature wavelength based on multiple Classifiers Combination
CN111627018B (en) Steel plate surface defect classification method based on double-flow neural network model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210513

Address after: Room 01, 5 / F, building 2, Gezhouba Sun City, No.40, Gaoxin 4th Road, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430000

Patentee after: WUHAN RED STAR YANG TECHNOLOGY Co.,Ltd.

Address before: 430070 No. 1 Lion Rock street, Hongshan District, Hubei, Wuhan

Patentee before: HUAZHONG AGRICULTURAL University