CN111241939A - Rice yield estimation method based on unmanned aerial vehicle digital image - Google Patents

Rice yield estimation method based on unmanned aerial vehicle digital image

Info

Publication number
CN111241939A
Authority
CN
China
Prior art keywords
rice
yield
image
unmanned aerial
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911409162.XA
Other languages
Chinese (zh)
Inventor
曹英丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Agricultural University
Original Assignee
Shenyang Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Agricultural University filed Critical Shenyang Agricultural University
Priority to CN201911409162.XA priority Critical patent/CN111241939A/en
Publication of CN111241939A publication Critical patent/CN111241939A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses a rice yield estimation method based on unmanned aerial vehicle digital images, which mainly comprises the following steps: a test cell is designed; an unmanned aerial vehicle carrying a high-definition digital camera captures canopy images of the rice in the test cell; an optimal subset selection algorithm is applied to analyse how well each channel or index of the RGB and HSV colour spaces identifies rice ears, and 7 characteristic parameters suitable for northern japonica rice ear image segmentation are extracted; a rice ear segmentation model based on a BP neural network is constructed; connected-domain analysis is then performed on the segmented rice ear images to obtain the number of rice ears; finally, the ear number is substituted into a yield estimation formula to estimate the rice yield. The method can quickly and accurately acquire digital images of the rice canopy and accurately estimate the rice yield.

Description

Rice yield estimation method based on unmanned aerial vehicle digital image
Technical Field
The invention relates to the technical field of unmanned aerial vehicle remote sensing application, in particular to a rice yield estimation method based on unmanned aerial vehicle digital images.
Background
The traditional rice yield estimation method mainly estimates yield by satellite remote sensing, but satellite remote sensing has relatively low resolution, and its precision cannot be guaranteed in areas with complex terrain and diverse farming systems, especially in breeding-assistance applications. Moreover, the yield estimation models built from satellite remote sensing are mostly statistical models, which show large errors across different regions and years, lack a mechanistic basis and are therefore difficult to popularise further. In addition, a rice remote sensing yield estimation system for practical production applications is lacking.
Unmanned aerial vehicle (UAV) remote sensing is a low-altitude remote sensing technique. It is less disturbed by atmospheric factors during image acquisition and offers advantages such as low cost, simple operation, fast image acquisition and high ground resolution. Research shows that remote sensing images acquired by UAVs can well replace satellite images for yield estimation of rice over small areas. In practical applications, however, rice yield estimation using digital rice images is still uncommon, so a new method capable of accurately estimating rice yield needs to be developed.
Disclosure of Invention
The invention mainly aims to overcome the defects of the prior art by providing a rice yield estimation method based on unmanned aerial vehicle digital images, which can realize accurate estimation of rice yield.
In order to achieve the purpose, the invention adopts the following technical scheme:
a rice yield estimation method based on unmanned aerial vehicle digital images comprises the following steps:
s1, carrying a digital camera by using an unmanned aerial vehicle to obtain a rice canopy digital image of a research area;
s2, manually cutting and labeling the digital images of the rice canopy obtained in the step S1, and constructing three types of image samples consisting of rice ears, rice leaves and backgrounds;
s3, calculating, for the three types of image samples in step S2, the R, G, B values at the image pixel level, the H, S, V values projected into HSV space, and four indices: the normalized green-red difference index NGRDI, the red-green ratio index RGRI, the green leaf index GLI and the excess green index EXG, and taking the 10 parameters R, G, B, H, S, V, NGRDI, RGRI, GLI and EXG as the initial classification features;
s4, traversing all combinations of features based on an optimal subset selection algorithm to respectively establish a rice ear, leaf and background classification model, carrying out optimization selection on 10 classification features R, G, B, H, S, V, NGRDI, RGRI, GLI and EXG, calculating the root mean square error of cross validation in a regression model, and reducing dimensionality, thereby simplifying the classification model and finding out the optimal feature input;
s5, constructing an image segmentation model based on the BP neural network according to the selected optimal classification characteristics as input, and segmenting the image into three types: rice ears, leaves and background;
and S6, after image segmentation, carrying out binarization processing on the segmented rice ear image, calculating the number of connected areas close to the size of the rice ears in the image, estimating the number of the rice ears, and finally substituting the number into a rice yield estimation formula to estimate the yield of the rice.
Preferably, the step S3 includes the following steps:
MATLAB is used for image preprocessing: the R, G and B values of the image pixels are read, and the H, S and V values of the projected HSV space and the four index values are calculated according to the following formulas:
V=max{R,G,B}
S=(V-min{R,G,B})/V (S=0 when V=0)
H=60×(G-B)/(V-min{R,G,B}) when V=R; H=120+60×(B-R)/(V-min{R,G,B}) when V=G; H=240+60×(R-G)/(V-min{R,G,B}) when V=B; H=H+360 when H<0
NGRDI=(G-R)/(G+R)
RGRI=R/G
GLI=(2G-R-B)/(2G+R+B)
EXG=2g-r-b
in the formula, R, G and B represent the red-band, green-band and blue-band pixel values, and r, g and b represent the normalized results, where r = R/(R + G + B), g = G/(R + G + B) and b = B/(R + G + B).
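For illustration only, the following Python sketch computes the ten candidate features for every pixel of an RGB image according to the formulas above. The patent performs this preprocessing in MATLAB; NumPy and the function name color_features are assumptions of this sketch, not part of the patent.

```python
import numpy as np

def color_features(rgb):
    """Compute the 10 candidate features (R,G,B,H,S,V,NGRDI,RGRI,GLI,EXG)
    for every pixel of an RGB image given as an array with values in [0, 255]."""
    rgb = rgb.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-9                                   # guard against division by zero

    # HSV projection (V in [0, 255], S in [0, 1], H in degrees)
    V = np.max(rgb, axis=-1)
    mn = np.min(rgb, axis=-1)
    S = np.where(V > 0, (V - mn) / (V + eps), 0.0)
    d = V - mn + eps
    H = np.select(
        [V == R, V == G, V == B],
        [60.0 * (G - B) / d, 120.0 + 60.0 * (B - R) / d, 240.0 + 60.0 * (R - G) / d],
    )
    H = np.where(H < 0, H + 360.0, H)

    # Colour indices
    NGRDI = (G - R) / (G + R + eps)
    RGRI = R / (G + eps)
    GLI = (2 * G - R - B) / (2 * G + R + B + eps)
    s = R + G + B + eps
    r, g, b = R / s, G / s, B / s                # normalised chromatic coordinates
    EXG = 2 * g - r - b

    return np.stack([R, G, B, H, S, V, NGRDI, RGRI, GLI, EXG], axis=-1)
```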
As a preferred technical solution, in the step S4, the method further includes performing normalization processing on the values of R, G, B, H, S, V, NGRDI, RGRI, GLI, and EXG, specifically:
after the captured rice image is cropped, the values of R, G, B, H, S, V, NGRDI, RGRI, GLI and EXG of each class of pixel are calculated as the 10 classification features;
firstly, the same number of samples is taken from each class of sample pixels and the result label of each class is calibrated: y = 0 represents rice ears, y = 1 represents rice leaves, and y = 2 represents the background;
all sample pixel information is then assembled into one table by feature, and all data and regression values are normalized with the formula X = (X - Min)/(Max - Min), where X represents the data at any position, Max is the maximum value of the feature column in which X is located, and Min is the minimum value of that column; the normalized table is finally split into two parts, the feature data and the regression values, which are assigned to the variables X and Y respectively.
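A minimal sketch of this sample-assembly and min-max normalization step, assuming NumPy and illustrative function and variable names (not specified in the patent), might look as follows:

```python
import numpy as np

def build_dataset(ear_px, leaf_px, bg_px, n_per_class):
    """Stack an equal number of pixel feature vectors per class, attach the labels
    y=0 (ear), y=1 (leaf), y=2 (background) and min-max normalise every feature
    column with x' = (x - min) / (max - min)."""
    rng = np.random.default_rng(0)
    parts, labels = [], []
    for y, px in enumerate([ear_px, leaf_px, bg_px]):    # px: (n_i, 10) feature arrays
        idx = rng.choice(len(px), n_per_class, replace=False)
        parts.append(px[idx])
        labels.append(np.full(n_per_class, y))
    X = np.vstack(parts).astype(np.float64)
    Y = np.concatenate(labels)

    col_min, col_max = X.min(axis=0), X.max(axis=0)
    X = (X - col_min) / (col_max - col_min + 1e-12)      # min-max normalisation
    return X, Y
```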
Preferably, the step S4 further includes the steps of:
grouping sample data of 10 characteristics of three types of rice ears, rice leaves and backgrounds by using a cross verification method, which specifically comprises the following steps:
(1) the initial sample set is divided into k subsamples; a single subsample is retained as the data for validating the model, and the other k - 1 subsamples are used for training;
(2) the cross-validation is repeated k times, each time taking a different subsample as the test set and the remaining subsamples as the training set, so that every subsample is validated exactly once;
(3) the k results are averaged (or combined in another way) to obtain a single estimate;
the MSE calculation method comprises the following steps:
MSE_i = (1/n) Σ_{j=1}^{n} (y_j - ŷ_j)^2
in the formula, ŷ_j represents the estimate of the output Y; the MSE_i of all k rounds are averaged to obtain the final cross-validation error:
MSE = (1/k) Σ_{i=1}^{k} MSE_i
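As an illustrative sketch only, the k-fold cross-validation error described above could be computed as below; scikit-learn is not mentioned in the patent, and KFold, LinearRegression and the function name cv_mse are assumptions of this sketch.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

def cv_mse(X, Y, k=5):
    """k-fold cross-validation error: fit a linear regression on k-1 folds,
    compute MSE_i on the held-out fold, and average the k values."""
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    mses = []
    for train_idx, test_idx in kf.split(X):
        model = LinearRegression().fit(X[train_idx], Y[train_idx])
        mses.append(mean_squared_error(Y[test_idx], model.predict(X[test_idx])))
    return float(np.mean(mses))
```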
As a preferred technical solution, in step S4, the optimal subset selection algorithm sets the number of features k from 1 to p in turn and traverses all C(p,k) = p!/(k!(p-k)!) combinations of k features to build models, selecting the model with the minimum prediction error; p is the total number of features and k = 1, 2, …, 10. Since there are 10 features, p = 10, and the total number of combinations generated is 2^p = 1024. The optimal subset selection algorithm is implemented by the following procedure:
(1) setting the number of features k from 1 to 10 and executing the following operations in a loop;
(2) for the C(p,k) = p!/(k!(p-k)!) combinations of models containing k features, establishing a linear regression model for each feature combination in turn, estimating the model parameters by the least squares method, and selecting the model with the minimum residual sum of squares RSS = Σ(y_i - ŷ_i)^2 as the best model with k features, recorded as M_k;
(3) estimating the prediction error of each model by cross-validation and selecting the optimal classification model from M_1, …, M_p.
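A hedged Python sketch of this best-subset procedure is shown below; it reuses the cv_mse helper from the previous sketch, assumes scikit-learn for the least-squares fits, and is an illustration of the procedure rather than the patented implementation.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

def best_subset_selection(X, Y, feature_names):
    """For each size k, fit a least-squares linear model on every combination of
    k features and keep the one with the smallest residual sum of squares (M_k);
    then compare M_1 ... M_p by cross-validation MSE and return the best subset."""
    p = len(feature_names)
    best_per_k = {}
    for k in range(1, p + 1):
        best_rss, best_cols = np.inf, None
        for cols in combinations(range(p), k):
            model = LinearRegression().fit(X[:, list(cols)], Y)
            rss = np.sum((Y - model.predict(X[:, list(cols)])) ** 2)
            if rss < best_rss:
                best_rss, best_cols = rss, cols
        best_per_k[k] = best_cols                          # model M_k
    # cross-validate M_1 ... M_p and keep the subset with the smallest error
    scored = {k: cv_mse(X[:, list(cols)], Y) for k, cols in best_per_k.items()}
    k_best = min(scored, key=scored.get)
    return [feature_names[i] for i in best_per_k[k_best]], scored[k_best]
```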
As a preferred technical solution, in step S6, the specific steps of estimating the yield of rice according to the rice yield estimation formula are as follows:
cutting the rice ears out of the rice canopy image and extracting the number of rice ears, then estimating yield according to the rice yield estimation formula: estimated yield per hectare (kg) = ear number per hectare × grain number per ear × thousand-grain weight (g) × 10^-6 × 85%; the calculated rice yield is finally compared with the actually measured rice yield to verify the accuracy of the image-based yield estimation.
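Assuming the reconstructed form of the yield formula above (including the grain number per ear term), a short Python helper illustrates the arithmetic; the function and argument names are illustrative only.

```python
def estimate_yield_kg_per_ha(ears_per_ha, grains_per_ear, thousand_grain_weight_g):
    """Reconstructed yield formula:
    yield (kg/ha) = ears/ha * grains per ear * thousand-grain weight (g) * 1e-6 * 0.85."""
    return ears_per_ha * grains_per_ear * thousand_grain_weight_g * 1e-6 * 0.85
```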
As a preferred technical solution, after step S6, the method further includes a step of verifying the estimation accuracy, specifically:
evaluating and judging the precision of the rice ear extraction by using the root mean square error RMSE and the average absolute percentage error MAPE;
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (y_i - ŷ_i)^2 )
MAPE = (1/n) Σ_{i=1}^{n} |y_i - ŷ_i| / y_i × 100%
in the formula: n is the total number of test sample quadrats in the rice plots; y_i is the number of rice ears actually measured in each sample quadrat in the field; ŷ_i is the number of rice ears extracted from the image corresponding to each sample quadrat. The smaller the RMSE and MAPE values, the closer the estimated value is to the true value, and the better the yield estimation effect and the higher the precision.
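A small Python sketch of these two accuracy metrics (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def ear_count_accuracy(y_measured, y_image):
    """RMSE and MAPE between field-measured ear counts and image-extracted counts."""
    y_measured = np.asarray(y_measured, dtype=float)
    y_image = np.asarray(y_image, dtype=float)
    rmse = np.sqrt(np.mean((y_measured - y_image) ** 2))
    mape = np.mean(np.abs(y_measured - y_image) / y_measured) * 100.0
    return rmse, mape
```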
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) The method adopts the technical scheme of combining optimal subset selection with a BP neural network, analyses the ability of each channel or index of the colour space of northern japonica rice canopy digital images to identify rice ears, solves the problems of rice ear feature selection and rice ear number estimation, and provides important technical support for constructing rice yield estimation that does not depend on statistical methods.
(2) The method estimates rice yield from digital images acquired by an unmanned aerial vehicle, which offers high efficiency, flexibility and high resolution; rice canopy images of a designated area can be acquired accurately and on demand, and, combined with the constructed yield estimation model based on rice ear segmentation, the problem of fast and accurate rice yield estimation in small test cells is solved, achieving high-throughput rice yield estimation.
Drawings
FIG. 1 is a general technical flow diagram of the present invention;
FIG. 2 is a plot of rice field location and plot layout according to the present invention.
FIG. 3 is a sample diagram of the ear, leaf and background of rice labeled according to the present invention.
Fig. 4 is a flow chart of an optimal subset selection algorithm utilized by the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
As shown in fig. 1, the present invention provides a rice yield estimation method based on unmanned aerial vehicle digital images, comprising the following steps: a test cell is selected, and at the full heading stage of the rice a quad-rotor unmanned aerial vehicle carrying a digital camera is used to acquire digital images of the rice canopy. The captured rice images are manually cropped and labelled to construct a sample library of rice ears, leaves and background; the R, G, B, H, S, V, NGRDI, RGRI, GLI and EXG values of each class of image pixel are calculated as the 10 candidate classification features; the same number of samples is taken from each class of sample pixels; and feature analysis based on the optimal subset selection algorithm is carried out.
The labelled data are divided into a training data set and a test data set. The optimal subset selection algorithm is used to traverse all combinations of the features and establish a linear regression model for each; the classification results of the models are evaluated with a cross-validation algorithm; the 10 classification features R, G, B, H, S, V, NGRDI, RGRI, GLI and EXG are optimally selected; the root mean square error of classification is calculated and the dimensionality is reduced, thereby simplifying the classification model and obtaining the optimal features for northern japonica rice ear segmentation.
The rice ear segmentation features selected by the optimal subset selection algorithm are used as input to construct an image segmentation model based on a BP neural network, and the image is divided into three categories: rice ears, leaves and background. The segmented rice ear image is then binarized, and the number of connected regions close to the size of a rice ear in the image, i.e. the number of rice ears, is calculated. After the number of rice ears is extracted, the rice yield is estimated according to the rice yield estimation formula: estimated yield per hectare (kg) = ear number per hectare × grain number per ear × thousand-grain weight (g) × 10^-6 × 85%.
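As an illustration of the connected-region counting step, the sketch below uses SciPy's ndimage module, which is an assumption of this sketch (the patent does not prescribe a library); the area thresholds approximating the size of a single ear must be calibrated by the user for the flight height and image resolution.

```python
import numpy as np
from scipy import ndimage

def count_ears(ear_mask, min_area, max_area):
    """Count the connected regions in a binary ear mask whose pixel area is close
    to the size of a single rice ear (min_area / max_area are user-calibrated)."""
    labeled, n = ndimage.label(ear_mask.astype(np.uint8))
    if n == 0:
        return 0
    areas = ndimage.sum(ear_mask, labeled, index=np.arange(1, n + 1))
    return int(np.sum((areas >= min_area) & (areas <= max_area)))
```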
Example 1
The unmanned aerial vehicle digital image capture and rice ear extraction experiments were carried out in 2017 and 2018 at the super rice achievement transformation base of Shenyang Agricultural University (Shenyang, Liaoning Province). Shenyang is located in the south of the northeast region of China, has a temperate semi-humid continental climate, an annual average temperature of 6.2 to 9.7 °C and annual precipitation of 600-800 mm. The plot design was consistent in both years and used a split-plot experimental design; the locally planted rice variety Shennong 9816 was selected, and 7 nitrogen application levels were set: no nitrogen (0 kg/ha), low nitrogen (150 kg/ha), medium nitrogen (240 kg/ha), high nitrogen (330 kg/ha), 10% organic fertilizer substitution, 20% organic fertilizer substitution and 30% organic fertilizer substitution. The experiment was repeated 3 times and 21 cells were arranged randomly (the cells with different nitrogen application levels are shown in the middle panel of FIG. 2); the area of each cell is 30 m² (4.2 m in width), and the different fertilization treatment plots are separated by ridges.
Data collection was performed at the rice heading stage on 21 August 2017 and 1 September 2018 respectively. Two white sample quadrats of 0.5 m × 0.5 m (as shown in the right panel of FIG. 2) were placed in a corner area and the central area of each of the 21 cells. A quad-rotor unmanned aerial vehicle carrying a high-definition digital camera (PHANTOM 4, 12.4 megapixels, 4000 × 3000 image resolution) collected nadir digital images of the rice canopy within the white quadrats; the images were taken between 10:00 and 14:00 in clear, cloudless weather. The unmanned aerial vehicle photographed from four flight heights of 2 m, 3 m, 6 m and 9 m; because the rotor wind field at the 2 m flight height caused excessive error, the data from that height were discarded, so the experiment obtained 21 × 2 × 3 = 126 RGB digital images at the three remaining flight heights. A ground survey was carried out synchronously: the number of rice ears in the 42 sample quadrats was counted manually.
The rice RGB digital images obtained in the experiment mainly contain three types of objects: rice ears, rice leaves and background. A supervised classification method was adopted to classify the three types of objects, so that the rice ears and the ear number per unit area could be extracted for the design of the yield estimation model. 1800 three-class image samples of rice ears, leaves and background were constructed by manual labelling (as shown in FIG. 3) and used to train the classification model. For the three classes of object samples at the three heights, the pixel-level values of R, G, B, H, S, V, NGRDI, RGRI, GLI and EXG were calculated as the 10 candidate classification features. First, the same number of pixels was taken from each class of sample pixels; in this study 30000 pixels were randomly selected from each class, forming 90000 sample records. Each record was labelled with its class: y = 0 for rice ears, y = 1 for rice leaves and y = 2 for the image background. Finally, all 90000 sample records were normalized, and the feature data and classification results were assigned to the variables X and Y respectively, where X is a 90000 × 10 matrix and Y is a 90000 × 1 matrix.
The optimal classification features or model are found with the optimal subset selection algorithm, whose flow chart is shown in fig. 4. The classification model is a linear regression model; the model test error is estimated with a cross-validation algorithm, the 10 classification features R, G, B, H, S, V, NGRDI, RGRI, GLI and EXG are optimally selected, irrelevant features are removed from the model and the dimensionality is reduced, thereby simplifying the classification model. The expression of the linear regression model is Y = β0 + β1X1 + β2X2 + …, where the weight vector β̂ is estimated by the least squares method. The least squares matrix formula is:
β̂ = (Xbig^T Xbig)^(-1) Xbig^T Y
where Xbig is a 90000 × 11 matrix obtained by adding a column of ones in front of the variable X before the calculation.
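A minimal NumPy sketch of this least-squares estimate with an explicit intercept column follows; the names are illustrative, and np.linalg.solve is used instead of an explicit matrix inverse purely for numerical stability.

```python
import numpy as np

def least_squares_fit(X, Y):
    """Ordinary least squares with an explicit intercept column:
    X_big = [1 | X] and beta_hat = (X_big^T X_big)^-1 X_big^T Y."""
    X_big = np.hstack([np.ones((X.shape[0], 1)), X])          # 90000 x 11 for 10 features
    beta_hat = np.linalg.solve(X_big.T @ X_big, X_big.T @ Y)  # solve rather than invert
    return beta_hat
```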
All sample data were grouped by the cross-validation method with k = 5: the whole data set was divided into 5 parts, a model was established based on linear regression, one part was taken in turn as the test set and the other four parts as the training set, and the mean square error MSE_i (i = 1, 2, 3, 4, 5) of the model on the test set was calculated; the 5 values of MSE_i were averaged to obtain the final MSE. To avoid imbalance and an inaccurate estimate of the test error, the number of pixels of each class in each of the 5 groups was kept as equal as possible. The mean square error MSE calculated from the prediction results of one round was used as the test error of that round, the average of the 5 MSE values was taken as the final estimate of the test error, and the magnitude of the test error was used as the basis for selecting the optimal model.
The optimal subset selection algorithm takes every combination of the candidate features as input features, calculates the root mean square error of classification, and selects the feature set or model with the minimum error. This study has 10 features in total, so p = 10 and 2^p = 1024 combinations are generated in all. The optimal subset selection algorithm was applied to feature selection on the digital images captured at flight heights of 3 m, 6 m and 9 m. The results show that the cross-validation error of the digital image classification model decreases as the number of features increases and the model becomes stable when the number of features reaches 7, with a cross-validation root mean square error of 0.0385; the corresponding optimal features R, B, H, S, V, GLI and EXG can therefore be used as the features for extracting rice ears from northern japonica rice canopy images.
TABLE 1 Process results and errors for optimal subset feature selection
The classification features selected by the optimal subset selection algorithm are input into a three-layer BP neural network with 7 neuron nodes in the input layer and 3 neuron nodes in the output layer corresponding to the probabilities of rice ear, rice leaf and background; the weight learning rate and the threshold learning rate are set to 0.1 and 0.01 respectively, and the number of training iterations is 50. The digital images at the three flight heights of 3 m, 6 m and 9 m are each classified into ears, leaves and background. The segmented rice ear image is binarized, and the number of connected regions close to the size of a rice ear in the image, i.e. the number of rice ears, is calculated, so as to identify the number of rice ears in each test quadrat.
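For illustration, a stand-in for the three-layer BP classifier using scikit-learn's MLPClassifier is sketched below; scikit-learn is not part of the patent, MLPClassifier does not expose separate weight and threshold learning rates (a single learning rate is used here), and the hidden-layer size is an assumption of this sketch.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_ear_segmenter(X7, y, hidden=10):
    """Stand-in for the three-layer BP network: 7 input features, one hidden
    layer (size assumed), 3 output classes (ear / leaf / background)."""
    net = MLPClassifier(hidden_layer_sizes=(hidden,), activation="logistic",
                        solver="sgd", learning_rate_init=0.1, max_iter=50)
    return net.fit(X7, y)

def segment_image(net, feature_img):
    """Classify every pixel of an H x W x 7 feature image and return the binary ear mask."""
    h, w, _ = feature_img.shape
    pred = net.predict(feature_img.reshape(-1, 7)).reshape(h, w)
    return pred == 0                                   # y = 0 labels rice ears
```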
The root mean square error (RMSE) and the mean absolute percentage error (MAPE) were used to evaluate and judge the accuracy of rice ear extraction:
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (y_i - ŷ_i)^2 )
MAPE = (1/n) Σ_{i=1}^{n} |y_i - ŷ_i| / y_i × 100%
In the formula: n is the total number of test sample quadrats in the rice plots; y_i is the number of rice ears actually measured in each sample quadrat in the field; ŷ_i is the number of rice ears extracted from the image corresponding to each sample quadrat. The smaller the RMSE and MAPE values, the closer the estimated value is to the true value and the higher the extraction precision. The evaluation of ear extraction accuracy is shown in Table 2.
TABLE 2 evaluation of ear extraction accuracy
The rice ears were segmented from the rice canopy images and the number of rice ears was extracted and substituted into the rice yield estimation formula to calculate the rice yield; for the tested rice variety Shennong 9816, the thousand-grain weight is 22.6 g and the grain number per ear is 139.1. The numbers of rice ears extracted from the images of the test quadrats in each rice cell were averaged and converted to the ear number per hectare, which was substituted into the formula to calculate the estimated yield per hectare of each cell; the estimated yield was finally compared with the actually measured rice yield to verify the accuracy of the image-based yield estimation. The evaluation of rice yield estimation accuracy is shown in Table 3.
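Assuming the 0.5 m × 0.5 m quadrat size stated above, the reconstructed yield formula (via the estimate_yield_kg_per_ha sketch earlier), and interpreting the 139.1 figure quoted above as the grain number per ear for Shennong 9816, the per-hectare conversion could be illustrated as:

```python
QUADRAT_AREA_M2 = 0.5 * 0.5        # white sample quadrats of 0.5 m x 0.5 m

def plot_yield_from_quadrats(mean_ears_per_quadrat,
                             grains_per_ear=139.1,
                             thousand_grain_weight_g=22.6):
    """Scale the mean quadrat ear count to ears per hectare and apply the
    reconstructed yield formula (estimate_yield_kg_per_ha, defined earlier)."""
    ears_per_ha = mean_ears_per_quadrat * (10000.0 / QUADRAT_AREA_M2)
    return estimate_yield_kg_per_ha(ears_per_ha, grains_per_ear,
                                    thousand_grain_weight_g)
```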
TABLE 3 evaluation of rice yield estimation accuracy
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (7)

1. A rice yield estimation method based on unmanned aerial vehicle digital images is characterized by comprising the following steps:
s1, carrying a digital camera by using an unmanned aerial vehicle to obtain a rice canopy digital image of a research area;
s2, manually cutting and labeling the digital images of the rice canopy obtained in the step S1, and constructing three types of image samples consisting of rice ears, rice leaves and backgrounds;
s3, calculating, for the three types of image samples in step S2, the R, G, B values at the image pixel level, the H, S, V values projected into HSV space, and four indices: the normalized green-red difference index NGRDI, the red-green ratio index RGRI, the green leaf index GLI and the excess green index EXG, and taking the 10 parameters R, G, B, H, S, V, NGRDI, RGRI, GLI and EXG as the initial classification features;
s4, traversing all combinations of features based on an optimal subset selection algorithm to respectively establish a rice ear, leaf and background classification model, carrying out optimization selection on 10 classification features R, G, B, H, S, V, NGRDI, RGRI, GLI and EXG, calculating the root mean square error of cross validation in a regression model, and reducing dimensionality, thereby simplifying the classification model and finding out the optimal feature input;
s5, constructing an image segmentation model based on the BP neural network according to the selected optimal classification characteristics as input, and segmenting the image into three types: rice ears, leaves and background;
and S6, after image segmentation, carrying out binarization processing on the segmented rice ear image, calculating the number of connected areas close to the size of the rice ears in the image, estimating the number of the rice ears, and finally substituting the number into a rice yield estimation formula to estimate the yield of the rice.
2. The method for estimating rice yield based on digital images of unmanned aerial vehicles according to claim 1, wherein the step S3 comprises the following steps:
MATLAB is used for image preprocessing: the R, G and B values of the image pixels are read, and the H, S and V values of the projected HSV space and the four index values are calculated according to the following formulas:
V=max{R,G,B}
S=(V-min{R,G,B})/V (S=0 when V=0)
H=60×(G-B)/(V-min{R,G,B}) when V=R; H=120+60×(B-R)/(V-min{R,G,B}) when V=G; H=240+60×(R-G)/(V-min{R,G,B}) when V=B; H=H+360 when H<0
NGRDI=(G-R)/(G+R)
RGRI=R/G
GLI=(2G-R-B)/(2G+R+B)
EXG=2g-r-b
in the formula, R, G and B represent the red-band, green-band and blue-band pixel values, and r, g and b represent the normalized results, where r = R/(R + G + B), g = G/(R + G + B) and b = B/(R + G + B).
3. The method for estimating rice yield based on digital images of unmanned aerial vehicles according to claim 2, wherein the step S4 further comprises normalizing the values of R, G, B, H, S, V, NGRDI, RGRI, GLI and EXG, specifically:
after the captured rice image is cropped, the values of R, G, B, H, S, V, NGRDI, RGRI, GLI and EXG of each class of pixel are calculated as the 10 classification features;
firstly, the same number of samples is taken from each class of sample pixels and the result label of each class is calibrated: y = 0 represents rice ears, y = 1 represents rice leaves, and y = 2 represents the background;
all sample pixel information is then assembled into one table by feature, and all data and regression values are normalized with the formula X = (X - Min)/(Max - Min), where X represents the data at any position, Max is the maximum value of the feature column in which X is located, and Min is the minimum value of that column; the normalized table is finally split into two parts, the feature data and the regression values, which are assigned to the variables X and Y respectively.
4. The method for estimating rice yield based on digital images of unmanned aerial vehicles according to claim 1, wherein the step S4 further comprises the steps of:
grouping sample data of 10 characteristics of three types of rice ears, rice leaves and backgrounds by using a cross verification method, which specifically comprises the following steps:
(1) the initial sample set is divided into k subsamples; a single subsample is retained as the data for validating the model, and the other k - 1 subsamples are used for training;
(2) the cross-validation is repeated k times, each time taking a different subsample as the test set and the remaining subsamples as the training set, so that every subsample is validated exactly once;
(3) the k results are averaged (or combined in another way) to obtain a single estimate;
the MSE calculation method comprises the following steps:
MSE_i = (1/n) Σ_{j=1}^{n} (y_j - ŷ_j)^2
in the formula, ŷ_j represents the estimate of the output Y; the MSE_i of all k rounds are averaged to obtain the final cross-validation error:
MSE = (1/k) Σ_{i=1}^{k} MSE_i
5. The method for estimating rice yield based on digital images of unmanned aerial vehicles according to claim 1, wherein in step S4, the optimal subset selection algorithm sets the number of features k from 1 to p in turn and traverses all C(p,k) = p!/(k!(p-k)!) combinations of k features to build models, selecting the model with the minimum prediction error; p is the total number of features and k = 1, 2, …, 10; since there are 10 features, p = 10 and the total number of combinations generated is 2^p = 1024; the optimal subset selection algorithm is implemented by the following procedure:
(1) setting the number of features k from 1 to 10 and executing the following operations in a loop;
(2) for the C(p,k) = p!/(k!(p-k)!) combinations of models containing k features, establishing a linear regression model for each feature combination in turn, estimating the model parameters by the least squares method, and selecting the model with the minimum residual sum of squares RSS = Σ(y_i - ŷ_i)^2 as the best model with k features, recorded as M_k;
(3) estimating the prediction error of each model by cross-validation and selecting the optimal classification model from M_1, …, M_p.
6. The method for estimating rice yield based on digital images of unmanned aerial vehicles according to claim 1, wherein in step S6, the estimating of rice yield according to the rice yield estimation formula comprises:
cutting the rice ears out of the rice canopy image and extracting the number of rice ears, then estimating yield according to the rice yield estimation formula: estimated yield per hectare (kg) = ear number per hectare × grain number per ear × thousand-grain weight (g) × 10^-6 × 85%; the calculated rice yield is finally compared with the actually measured rice yield to verify the accuracy of the image-based yield estimation.
7. The method for estimating rice yield based on digital images of unmanned aerial vehicles according to claim 1, wherein after step S6, the method further comprises a step of verifying estimation accuracy, specifically:
evaluating and judging the precision of the rice ear extraction by using the root mean square error RMSE and the average absolute percentage error MAPE;
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (y_i - ŷ_i)^2 )
MAPE = (1/n) Σ_{i=1}^{n} |y_i - ŷ_i| / y_i × 100%
in the formula: n is the total number of test sample quadrats in the rice plots; y_i is the number of rice ears actually measured in each sample quadrat in the field; ŷ_i is the number of rice ears extracted from the image corresponding to each sample quadrat. The smaller the RMSE and MAPE values, the closer the estimated value is to the true value, and the better the yield estimation effect and the higher the precision.
CN201911409162.XA 2019-12-31 2019-12-31 Rice yield estimation method based on unmanned aerial vehicle digital image Pending CN111241939A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911409162.XA CN111241939A (en) 2019-12-31 2019-12-31 Rice yield estimation method based on unmanned aerial vehicle digital image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911409162.XA CN111241939A (en) 2019-12-31 2019-12-31 Rice yield estimation method based on unmanned aerial vehicle digital image

Publications (1)

Publication Number Publication Date
CN111241939A true CN111241939A (en) 2020-06-05

Family

ID=70874170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911409162.XA Pending CN111241939A (en) 2019-12-31 2019-12-31 Rice yield estimation method based on unmanned aerial vehicle digital image

Country Status (1)

Country Link
CN (1) CN111241939A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111678876A (en) * 2020-07-21 2020-09-18 福州大学 Quick detection method for hexavalent chromium in water environment based on machine learning
CN111860603A (en) * 2020-06-23 2020-10-30 沈阳农业大学 Method, device, equipment and storage medium for identifying rice ears in picture
CN112215714A (en) * 2020-09-08 2021-01-12 北京农业智能装备技术研究中心 Rice ear detection method and device based on unmanned aerial vehicle
CN112417378A (en) * 2020-12-10 2021-02-26 常州大学 Eriocheir sinensis quality estimation method based on unmanned aerial vehicle image processing
CN113673326A (en) * 2021-07-14 2021-11-19 南京邮电大学 Unmanned aerial vehicle platform crowd counting method and system based on image deep learning
CN116740592A (en) * 2023-06-16 2023-09-12 安徽农业大学 Wheat yield estimation method and device based on unmanned aerial vehicle image
CN116757332A (en) * 2023-08-11 2023-09-15 北京市农林科学院智能装备技术研究中心 Leaf vegetable yield prediction method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107092891A (en) * 2017-04-25 2017-08-25 无锡中科智能农业发展有限责任公司 A kind of paddy rice yield estimation system and method based on machine vision technique
CN109459392A (en) * 2018-11-06 2019-03-12 南京农业大学 A kind of rice the upperground part biomass estimating and measuring method based on unmanned plane multispectral image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107092891A (en) * 2017-04-25 2017-08-25 无锡中科智能农业发展有限责任公司 A kind of paddy rice yield estimation system and method based on machine vision technique
CN109459392A (en) * 2018-11-06 2019-03-12 南京农业大学 A kind of rice the upperground part biomass estimating and measuring method based on unmanned plane multispectral image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DUAN L. et al.: "Determination of rice panicle numbers during heading by multi-angle imaging" *
LI Ang: "Research on rice yield estimation based on unmanned aerial vehicle digital images" (基于无人机数码影像的水稻产量估测研究) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860603A (en) * 2020-06-23 2020-10-30 沈阳农业大学 Method, device, equipment and storage medium for identifying rice ears in picture
CN111678876A (en) * 2020-07-21 2020-09-18 福州大学 Quick detection method for hexavalent chromium in water environment based on machine learning
CN112215714A (en) * 2020-09-08 2021-01-12 北京农业智能装备技术研究中心 Rice ear detection method and device based on unmanned aerial vehicle
CN112215714B (en) * 2020-09-08 2024-05-10 北京农业智能装备技术研究中心 Unmanned aerial vehicle-based rice spike detection method and device
CN112417378A (en) * 2020-12-10 2021-02-26 常州大学 Eriocheir sinensis quality estimation method based on unmanned aerial vehicle image processing
CN113673326A (en) * 2021-07-14 2021-11-19 南京邮电大学 Unmanned aerial vehicle platform crowd counting method and system based on image deep learning
CN113673326B (en) * 2021-07-14 2023-08-15 南京邮电大学 Unmanned plane platform crowd counting method and system based on image deep learning
CN116740592A (en) * 2023-06-16 2023-09-12 安徽农业大学 Wheat yield estimation method and device based on unmanned aerial vehicle image
CN116740592B (en) * 2023-06-16 2024-02-02 安徽农业大学 Wheat yield estimation method and device based on unmanned aerial vehicle image
CN116757332A (en) * 2023-08-11 2023-09-15 北京市农林科学院智能装备技术研究中心 Leaf vegetable yield prediction method, device, equipment and medium
CN116757332B (en) * 2023-08-11 2023-12-05 北京市农林科学院智能装备技术研究中心 Leaf vegetable yield prediction method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN111241939A (en) Rice yield estimation method based on unmanned aerial vehicle digital image
CN110245709B (en) 3D point cloud data semantic segmentation method based on deep learning and self-attention
CN109325431B (en) Method and device for detecting vegetation coverage in feeding path of grassland grazing sheep
CN106951836B (en) crop coverage extraction method based on prior threshold optimization convolutional neural network
CN110796168A (en) Improved YOLOv 3-based vehicle detection method
CN103440505B (en) The Classification of hyperspectral remote sensing image method of space neighborhood information weighting
CN111639587B (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
CN112749627A (en) Method and device for dynamically monitoring tobacco based on multi-source remote sensing image
CN113012150A (en) Feature-fused high-density rice field unmanned aerial vehicle image rice ear counting method
CN111160396B (en) Hyperspectral image classification method of graph convolution network based on multi-graph structure
CN109145885B (en) Remote sensing classification method and system for large-scale crops
CN113221765B (en) Vegetation phenological period extraction method based on digital camera image effective pixels
CN104820841B (en) Hyperspectral classification method based on low order mutual information and spectrum context waveband selection
CN107491793B (en) Polarized SAR image classification method based on sparse scattering complete convolution
CN112766155A (en) Deep learning-based mariculture area extraction method
CN111222545B (en) Image classification method based on linear programming incremental learning
CN109344845A (en) A kind of feature matching method based on Triplet deep neural network structure
CN114399686A (en) Remote sensing image ground feature identification and classification method and device based on weak supervised learning
CN112949657B (en) Forest land distribution extraction method and device based on remote sensing image texture features
CN112329733B (en) Winter wheat growth monitoring and analyzing method based on GEE cloud platform
CN109726679B (en) Remote sensing classification error spatial distribution mapping method
CN115205691B (en) Rice planting area identification method and device, storage medium and equipment
CN103530875A (en) End member extraction data preprocessing method
CN116563205A (en) Wheat spike counting detection method based on small target detection and improved YOLOv5
CN103761530B (en) Hyperspectral image unmixing method based on relevance vector machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination