CN116416523A - Machine learning-based rice growth stage identification system and method - Google Patents

Machine learning-based rice growth stage identification system and method Download PDF

Info

Publication number
CN116416523A
CN116416523A CN202310209296.7A CN202310209296A CN116416523A CN 116416523 A CN116416523 A CN 116416523A CN 202310209296 A CN202310209296 A CN 202310209296A CN 116416523 A CN116416523 A CN 116416523A
Authority
CN
China
Prior art keywords
rice
image
gray level
growth stage
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310209296.7A
Other languages
Chinese (zh)
Inventor
沈婧芳
魏逸飞
段凌凤
韩保住
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong Agricultural University
Original Assignee
Huazhong Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong Agricultural University filed Critical Huazhong Agricultural University
Priority to CN202310209296.7A priority Critical patent/CN116416523A/en
Publication of CN116416523A publication Critical patent/CN116416523A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776Validation; Performance evaluation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a system and a method for identifying a rice growth stage based on machine learning; the system comprises a rice characteristic generation module and a rice growth stage identification model construction module; the rice characteristic generation module is used for obtaining fractal dimension of rice and statistic constructed by using a gray level co-occurrence matrix according to the rice image; the rice growth stage identification model construction module is used for obtaining the mapping relation between the characteristics of rice and the rice growth stage according to the neural network model, so as to construct the rice growth stage identification model. The method introduces the discrimination of the fractal dimension and the gray level co-occurrence matrix to the growth period, the accuracy of the single machine learning model and the optimal weighted integration model after the fractal dimension variable is added is respectively improved by about 2-8%, the accuracy of the single machine learning model and the optimal weighted integration model after the gray level co-occurrence matrix variable is added is respectively improved by about 2-7%, and the detection accuracy of the rice growth stage is greatly improved.

Description

Machine learning-based rice growth stage identification system and method
Technical Field
The invention belongs to the technical field of monitoring of rice growth stages, and particularly relates to a system and a method for identifying rice growth stages based on machine learning.
Background
Rice is one of the most important crops in the world and plays an important role in agricultural production. The full knowledge of the growth stages of the rice can enable people to take corresponding cultivation management measures according to the physiological characteristics of the rice in different growth stages and the requirements on external conditions, and reasonably meet the requirements of the growth and development of the rice on using a proper amount of water, fertilizer and pesticide, thereby achieving the purpose of high and stable yield of the rice; meanwhile, the automatic supervision of different growth stages of crops can make timely and reasonable management decisions in different growth stages of rice, which is a future development direction of agriculture and has important significance for modern farmland management.
In the whole growth period, the external morphology of the rice can be visually observed to be obviously changed, the self-similarity and the scaleless of the plant shape are more obvious along with the increment of the growth stage of the rice, the application of computer vision technology in the detection of the quality and the growth stage of the rice has emerged along with the development of agricultural intelligence in recent years, and technologies such as machine learning, internet and the like are increasingly applied to agriculture to automatically observe, detect and distinguish different key growth stages of the rice, so that the automatic classification and prediction of the rice are realized, and the quantity and the quality of the rice are improved.
At present, the rice is mainly identified by a manual inspection method, but the method is very tedious, time-consuming and laborious and is often limited by subjective perception of a rice state by an observer, and the problem that inaccurate judgment of a growth stage is caused by difficult and inaccurate extraction of traditional rice profile features exists. Therefore, research on automatic identification methods of rice at different growth stages is urgently needed to reduce labor cost, improve accuracy and instantaneity of observation and avoid damage to plants. However, the traditional rice contour feature extraction is not easy and inaccurate, so that the judgment of the growth stage is inaccurate, and the key growth stage of the rice plant development cannot be timely and accurately identified.
Disclosure of Invention
In order to improve the accuracy of judging the growth stage of the rice, the invention provides a system and a method for identifying the growth stage of the rice based on machine learning.
The rice growth stage identification system based on machine learning for achieving one of the purposes of the invention comprises a rice characteristic generation module and a rice growth stage identification model construction module;
the rice characteristic generation module is used for obtaining a plurality of fractal dimensions of rice according to the rice image and constructing a plurality of statistics used for representing texture characteristics of the rice by utilizing the gray level co-occurrence matrix;
The rice growth stage identification model construction module is used for obtaining the mapping relation between the characteristics of rice and the rice growth stage according to the neural network model so as to construct a rice growth stage identification model; the rice growth stage identification model is used for identifying the growth stage of the rice according to the rice image; the characteristics of the rice comprise a plurality of fractal dimensions of the rice obtained from a rice characteristic generation module and statistics constructed by using a gray level co-occurrence matrix.
Further, the rice characteristic generation module comprises a first fractal dimension generation module, a second fractal dimension generation module and a texture characteristic acquisition module;
the first fractal dimension generation module is used for calculating two fractal dimensions D1 and RFD based on the whole rice and the edge of the rice leaf according to the rice image;
the second fractal dimension generation module is used for calculating two fractal dimensions D2 and Sandbox based on the whole rice and the circumscribed rectangle according to the rice image;
the texture feature acquisition module is used for obtaining a gray level co-occurrence matrix according to the correlation of adjacent pixels and the gray level change of diagonal elements of the co-occurrence matrix, and constructing a plurality of statistics used for representing the texture features of rice according to the gray level co-occurrence matrix.
Still further, the statistics include: contrast, variability, contrast score matrix, entropy, correlation, and angular second moment.
Still further, the fractal dimension generating module further comprises a gray level map generating module, which is used for converting the rice image into a gray level map before the fractal dimension is obtained, wherein the gray level map is used for obtaining the fractal dimensions D1, RFD, D2 and Sandbox of the rice; the fractal dimension generation module further comprises a binary image generation module, wherein the binary image generation module is used for carrying out edge detection and denoising on the image by adopting a Sobel operator and Gaussian filtering method on the gray level image of the rice image to obtain a binary image of the rice image, and the binary image is used for obtaining fractal dimensions D1, RFD, D2 and Sandbox of the rice.
The calculation method of the fractal dimension D1 and the RFD comprises the following steps:
s501, randomly selecting a pixel point A from a gray level image of rice, and marking coordinate values as (i, j); randomly selecting a pixel point A 'from the binary image of the rice, wherein the coordinate value of the pixel point A' is marked as (i ', j');
s502, determining another pixel point B and a pixel point B 'in the gray level image and the binary image of the rice by adopting a random walk method, wherein coordinate values of the pixel point B and the pixel point B' are respectively marked as (u, v) and (u ', v'); and r= | (i, j) - (u, v) |= | (i ', j') - (u ', v')|, R is a randomly set value, r=1, 2, n;
S503, calculating the difference value G of the gray values of the two pixel points A and B and the difference value G ' of the gray values of the two pixel points A ' and B ':
G=I(i,j)-I(u,v)
G'=I(i',j')-I(u',v')
wherein:
i (I, j) and I (u, v) are gray values of the pixel point A and the pixel point B respectively;
i (I ', j'), I (u ', v') are gray values of the pixel point A 'and the pixel point B', respectively;
s503, repeating the steps S501-S502 to obtain a plurality of difference values G and G ' of each gray level image and each binary image, and calculating an average value E (G) of each gray level image and an average value E (G ') of each binary image according to the difference values G and G ';
s504, calculating to obtain two fractal dimensions of each gray level image and binary image according to the following formula:
Figure BDA0004112021860000031
Figure BDA0004112021860000032
wherein:
d1 is a fractal dimension based on a gray image;
RFD is fractal dimension based on binary image;
c is a set constant.
Further, the calculation method of the fractal dimension D2 and Sandbox further comprises:
s701, respectively selecting an M multiplied by M grid on each rice gray level image and each binary image for division, wherein M is the number of grid boundaries;
s702, randomly selecting (a, b) grids (a epsilon [1, M ], b epsilon [1, M ]) on each rice gray level image; randomly selecting the (a ', b') th grid (a 'E [1, M ], b' E [1, M ]) on each rice binary image; the number of frames required to cover each grid is calculated as follows:
Figure BDA0004112021860000033
Figure BDA0004112021860000034
Wherein:
n (a, b) represents the number of frames required to cover the (a, b) th grid;
n (a ', b') represents the number of frames required to cover the (a ', b') th grid;
P max : representing the maximum value of the pixel values of all the pixel points in the (a, b) th grid;
P min : representing the minimum of pixel values of all pixel points in the (a, b) th grid;
P' max : representing the maximum value of the pixel values of all the pixel points in the (a ', b') th grid;
P' min : representing the minimum value of the pixel values of all the pixel points in the (a ', b') th grid;
s703, calculating the sum N of the number of covered frames of each grid as follows:
Figure BDA0004112021860000041
Figure BDA0004112021860000042
s704, respectively calculating two fractal dimensions in the gray level image and the binary image according to the following formulas:
Figure BDA0004112021860000043
Figure BDA0004112021860000044
wherein:
d2 is a fractal dimension based on a gray image;
sandbox is a fractal dimension based on binary images.
Further, the method for constructing a plurality of statistics for characterizing the texture features of the rice by using the gray level co-occurrence matrix comprises the following steps:
s801, dividing a binary image of each rice into L gray levels according to gray values, wherein each pixel corresponds to one gray level;
s802, obtaining a gray level co-occurrence matrix p (x, y) according to the gray level of each pixel of the binary image of each rice;
x and y respectively represent the gray level of two pixel points, x is 0 and L-1, and y is 0 and L-1;
s803, extracting 6 texture features from the gray level co-occurrence matrix: contrast, variability, contrast score matrix, entropy, correlation, and angular second moment, denoted Con, DISL, IDM, ENT, corr and ASM, respectively, the calculation formula includes:
Figure BDA0004112021860000045
Figure BDA0004112021860000046
Figure BDA0004112021860000047
Figure BDA0004112021860000048
Figure BDA0004112021860000049
Figure BDA00041120218600000410
wherein:
μ xy : the average value of gray levels x and y of two different pixel points;
σ xy : standard deviation of gray levels x and y of two different pixels.
The identification method for the rice growth stage based on machine learning for achieving the second purpose of the invention comprises the following steps:
s1, obtaining a plurality of fractal dimensions of rice according to an original image of the rice and constructing a plurality of statistics used for representing texture features of the rice by using a gray level co-occurrence matrix; the fractal dimension is a data characteristic quantity for representing the phenotypic character of rice;
s2, obtaining a mapping relation between the characteristics of the rice and the growth stage of the rice according to the neural network model, so as to construct a rice growth stage identification model; the rice growth stage identification model is used for identifying the growth stage of the rice according to the rice image; the characteristics of the rice comprise a plurality of fractal dimensions of the rice obtained from a rice characteristic generation module and a plurality of statistics for representing the texture characteristics of the rice by utilizing a gray level co-occurrence matrix.
Further, the step S1 includes the steps of:
calculating two fractal dimensions D1 and RFD based on the whole rice and the edge of the rice leaf according to the rice image;
calculating two fractal dimensions D2 and Sandbox based on the whole rice and the circumscribed rectangle according to the rice image;
according to the correlation of adjacent pixels and the gray level change of diagonal elements of the co-occurrence matrix, a gray level co-occurrence matrix is obtained, and a plurality of statistics used for representing the texture characteristics of rice are constructed according to the gray level co-occurrence matrix;
still further, the plurality of statistics includes: contrast, variability, contrast score matrix, entropy, correlation, and angular second moment;
further, the step S2 includes the steps of:
s201, combining a plurality of phenotype character data sets of a plurality of rice samples with a data set which is extracted from a gray level image and a binary image and contains four fractal dimensions D1, RFD, D2, sandbox and six gray level symbiotic matrixes to obtain an initial data set of rice;
s202, drawing a thermodynamic diagram of correlation analysis according to all the characteristics of the initial rice dataset; obtaining the relation between each characteristic and the rice growth stage according to the thermodynamic diagram; retaining part of the characteristics according to the relation between the characteristics and the rice growth stage;
S203, carrying out combined cross verification on the rest of the features by adopting a random forest model on the basis of recursive feature elimination, obtaining the importance degree of different feature numbers on the accuracy of judging the rice growth stage by calculating the sum of decision coefficients of the rest of the features, and reserving a plurality of feature combination numbers according to the importance degree to obtain a modeling data set;
s204, carrying out data preprocessing and normalization processing on the modeling data set and training to obtain a training-completed rice growth stage identification model.
A non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the machine learning based rice growth stage identification method.
The beneficial effects are that:
the invention discovers that the introduction of the fractal dimension and the gray level co-occurrence matrix has positive significance for judging the growth period, the accuracy of the single machine learning model and the optimal weighted integration model after the addition of the fractal dimension variable is respectively improved by about 2-8%, the accuracy of the single machine learning model and the optimal weighted integration model after the addition of the gray level co-occurrence matrix variable is respectively improved by about 2-7%, and the detection precision of the rice growth stage is greatly improved.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the method of the present invention;
FIG. 2 is a ROC curve of six machine learning models in an embodiment;
FIG. 3 is a ROC curve of an optimized weighted integration model in an embodiment;
FIG. 4 is a confusion matrix for optimizing a weighted integration model in an embodiment;
FIG. 5 is a feature importance result of support vector machines, decision trees, adaboost, and optimization weighted integration model input variables in an embodiment.
Detailed Description
The following detailed description is presented to explain the claimed invention and to enable those skilled in the art to understand the claimed invention. The scope of the invention is not limited to the following specific embodiments. It is also within the scope of the invention to include the claims of the present invention as made by those skilled in the art, rather than the following detailed description.
Referring to fig. 1, which is a flowchart illustrating an embodiment of the method of the present invention, the present invention provides a machine learning model and an integrated model based on data of an automatic phenotype platform of rice to detect a growth stage of rice, and the specific steps are as follows:
(1) Data extraction
a. The automatic phenotype platform of the rice measures 28 phenotype characters of 1094 rice samples in total of 521 rice varieties in three different growth stages (tillering stage, jointing stage and heading stage) and takes the 28 phenotype characters as a data set measured by the automatic phenotype platform of the rice;
Table 1 data characterization of automatic phenotype platform for rice
Figure BDA0004112021860000061
Figure BDA0004112021860000071
b. The automatic rice phenotype platform uses a visible light industrial camera (AVT stingery FG 504) to shoot rice photos, and a rice RGB image is obtained;
c. and (3) layering the rice image by adopting an automatic segmentation process layering method based on kernel linear discriminant analysis and Gaussian process regression to obtain a gray level map and a binary map of the rice. The target leaves are distinguished from their similar leaves in two steps using a nuclear linear discriminant analysis: firstly, roughly detecting the whole blade, and secondly, finely detecting the edge of the blade, wherein the specific method is as follows:
the first step is to convert the collected rice color image into gray image by RGB channel graying, to mutually superimpose and change the three colors of red, green and blue of the image, to respectively take the value of each channel as the gray value of the gray image to obtain the gray image of RGB three channels, to draw the gray histogram of the three gray images, to select the optimal rice gray image according to the main information distribution of the rice pot plant.
The second step is based on the rough segmentation of the nuclear linear discriminant analysis, modeling a supervised classifier using the nuclear linear discriminant analysis, and segmenting the target blade from the similar blade background. Cutting and collecting target leaf areas from the gray level image, creating a cut background leaf area, cutting the target leaf, extracting boundaries of the roughly-cut image, collecting areas containing target leaf boundaries, and collecting other areas (such as leaf areas and backgrounds). The method adopts Remove Image Background tools based on Python, ruby, deep learning and other technologies, and can automatically identify foreground objects (namely target blades) and backgrounds by using a powerful artificial intelligence AI algorithm, so that large-scale batch image segmentation is realized.
And thirdly, performing edge detection on the image after the segmentation of the nuclear linear discriminant analysis. The edges detected on the boundary region may not be continuous due to some misclassification of the kernel linear discriminant analysis. To eliminate these errors, the blade needs to be masked, i.e., a part of the area is masked with a selected object and image-processed. Then, the Sobel operator and Gaussian filtering are combined to detect the edges of the pictures and remove noise, the basic principle of Gaussian filtering is to carry out weighted average on the pixels of the images in the sliding window, and the Gaussian function is utilized to calculate the weight coefficient, the weight coefficient is determined according to the distance between the center pixel point of the image of the sliding window and other pixels in the image in the window, the weight coefficient increases along with the increase of the distance, and otherwise, the weight is lower if the distance is smaller. The formula is:
Figure BDA0004112021860000072
wherein sigma 2 Representing the variance of the gaussian function, p, q is the transverse coordinates and h (p, q) is the function of the gaussian filter. Currently, there are two types of templates commonly used, 3*3 and 5*5, as follows:
Figure BDA0004112021860000081
the comparison result shows that the template of 3*3 is optimal, and then the rice color in the gray level graph is changed into white, so that the binary graph required by us is obtained.
d. For the gray level image and the binary image of the rice, two fractal dimensions based on the whole rice and the edge of the rice leaf are calculated by adopting a random walk method, and the method specifically comprises the following steps:
s501, randomly selecting a pixel point A and a pixel point A 'from a gray level image and a binary image of rice respectively, wherein coordinate values of the pixel point A and the pixel point A' are respectively marked as (i, j) and (i ', j');
s502, determining another pixel point B and a pixel point B 'in the gray level image and the binary image of the rice by adopting a random walk method, wherein coordinate values of the pixel point B and the pixel point B' are respectively marked as (u, v) and (u ', v'); and r= | (i, j) - (u, v) |= | (i ', j') - (u ', v')|, R is a randomly set value, r=1, 2, n;
s503, calculating the difference value G of the gray values of the two pixel points A and B and the difference value G ' of the gray values of the two pixel points A ' and B ':
G=I(i,j)-I(u,v)
G'=I(i',j')-I(u',v')
wherein:
i (I, j) and I (u, v) are gray values of the pixel point A and the pixel point B respectively;
i (I ', j'), I (u ', v') are gray values of the pixel point A 'and the pixel point B', respectively;
s503, repeating the steps S501-S502 to obtain a plurality of difference values G and G ' of each gray level image and each binary image, and calculating an average value E (G) of each gray level image and an average value E (G ') of each binary image according to the difference values G and G ';
s504, calculating to obtain two fractal dimensions of each gray level image and binary image according to the following formula:
Figure BDA0004112021860000082
Figure BDA0004112021860000083
Wherein:
d1 is a fractal dimension based on a gray image;
RFD is fractal dimension based on binary image;
e. for the gray level image and the binary image of the rice, calculating two fractal dimensions based on the whole rice and an external rectangle by adopting a box counting dimension method, wherein the method specifically comprises the following steps:
s701, respectively selecting an M multiplied by M grid on each rice gray level image and each binary image for division, wherein M is the number of grid boundaries;
s702, randomly selecting (a, b) grids (a epsilon [1, M ], b epsilon [1, M ]) on each rice gray level image; randomly selecting the (a ', b') th grid (a 'E [1, M ], b' E [1, M ]) on each rice binary image; the number of frames required to cover each grid is calculated as follows:
Figure BDA0004112021860000091
Figure BDA0004112021860000092
wherein:
n (a, b) represents the number of frames required to cover the (a, b) th grid;
n (a ', b') represents the number of frames required to cover the (a ', b') th grid;
P max : representing the maximum value of the pixel values of all the pixel points in the (a, b) th grid;
P min : representing the minimum of pixel values of all pixel points in the (a, b) th grid;
P' max : representing the maximum value of the pixel values of all the pixel points in the (a ', b') th grid;
P' min : representing all of the (a ', b') th gridA minimum value of pixel values of the pixel points;
S703, calculating the sum N of the number of covered frames of each grid as follows:
Figure BDA0004112021860000093
Figure BDA0004112021860000094
s704, respectively calculating two fractal dimensions in the gray level image and the binary image according to the following formulas:
Figure BDA0004112021860000095
Figure BDA0004112021860000096
wherein:
d2 is a fractal dimension based on a gray image;
sandbox is a fractal dimension based on binary images.
f. For the rice binary image, calculating the expression of the correlation of adjacent pixels and the gray level change degree of diagonal elements of the co-occurrence matrix, and obtaining the statistic constructed by the gray level co-occurrence matrix as the specific calculation method of the texture features of rice classification, wherein the specific calculation method comprises the following steps:
obtaining a gray level co-occurrence matrix p (x, y) according to the gray level of the binary image of each rice; the specific method for obtaining the gray level co-occurrence matrix p (x, y) is as follows:
dividing the picture into L levels according to gray values, wherein the gray value of each pixel corresponds to one gray level, starting from a pixel point with any gray level of x, leaving a certain fixed position relation d= (dx, dy) on a straight line with the direction of theta, and expressing the probability of reaching the pixel point with the gray level of y as a matrix, namely a gray level symbiotic matrix; the gray level co-occurrence matrix is represented by p (x, y) (x, y=0, 1, 2..l-1), wherein L represents the gray level of an image, x, y represent the gray levels of pixel points, respectively, and d represents the spatial positional relationship between two pixel points; in this embodiment, d of the gray level co-occurrence matrix is set to be 1, the direction θ is set to be [ "0", "45", "90", "135" ], values of four angles are calculated respectively in this embodiment, and then an average value of the four angles is taken; then extracting 6 texture features from the gray level co-occurrence matrix: the contrast, difference, contrast score matrix, entropy, correlation and angular second moment (energy), noted Con, DISL, IDM, ENT, corr and ASM, respectively, are calculated as follows:
Figure BDA0004112021860000101
Figure BDA0004112021860000102
Figure BDA0004112021860000103
Figure BDA0004112021860000104
Figure BDA0004112021860000105
Figure BDA0004112021860000106
Wherein mu xy For the gray level x, y mean value sigma of different pixel points xy The standard deviation of gray levels x and y of different pixel points.
g. The four fractal dimension D1, RFD, D2, sandbox and six gray level co-occurrence matrix data are respectively subjected to K-S test, each angle of different stages of the rice is judged to be subject to uniform distribution, the existing autocorrelation is quite similar, the situation that the local specificity is influenced by the rice in growth does not exist is illustrated, and the fractal dimension of the rice and the gray level co-occurrence matrix can be used as an important characteristic to be added into a data set.
(2) Feature selection
a. Combining 28 phenotypic character data sets of 1094 rice samples obtained by measurement of the automatic rice phenotype platform with data sets extracted from gray level images and binary images and containing four fractal dimensions and six gray level co-occurrence matrixes to obtain an initial data set of rice;
b. according to the thermodynamic diagram of the correlation analysis drawn by all the features of the initial data set of the rice, observing the relation between each feature and the growth stage of the rice, removing the features with less obvious relation with the growth stage of the rice, and removing the features f1-f12 and the features LD1-LD6 shown in the table 1 in the embodiment;
c. Carrying out combined cross verification on the rest features by adopting a random forest model on the basis of recursive feature elimination, and finally obtaining the importance degree of different feature numbers on the accuracy rate of rice growth stage judgment by calculating the sum of decision coefficients of the rest features, and reserving the optimal feature combination number to obtain a modeling data set; the importance degree is determined according to actual requirements, and the optimal feature combination number in the embodiment is a feature combination of 31 feature components.
The method for eliminating the recursive features is mainly characterized in that a random forest model is adopted, training is carried out for multiple times on an initial data set of rice, features with low weights are removed according to a weight coefficient after each training according to the obtained optimal feature combination number 31, the model is repeatedly constructed, the optimal features are selected according to the coefficients, the selected features are extracted, the process is repeated on the other features until all the features are traversed, the optimal feature combination is selected to obtain a modeling data set, and the rice data set after feature selection is carried out by using the recursive feature elimination method, namely the modeling data set.
(3) Machine learning modeling
a. Performing data preprocessing on the modeling data set, wherein the data preprocessing comprises missing value processing, abnormal value processing and checking whether data are balanced or not;
Wherein the checking data is balanced or not is: the proportion of the number of samples of the data sets due to the different tags is likely to be unbalanced. Therefore, if the classification is performed by training directly using an algorithm, the training effect may be poor; therefore, data inspection is needed, so that the sample number proportion of the data set of the label is equivalent;
b. and carrying out normalization processing on the modeling data set, wherein the purpose is to eliminate the dimension influence of rice characteristics, and a specific calculation formula is as follows:
Figure BDA0004112021860000111
wherein x is i Each of the features is represented by a pattern,
Figure BDA0004112021860000112
mean value s representing the feature i Standard deviation representing the characteristic
c. Dividing the modeling data set into a training set and a test set by adopting a random sampling method according to a dividing ratio of 8:2, wherein 80% of all samples are used for model training each time, and the other 20% are used as test sets to estimate performance indexes, and setting the same random number seeds for the same model to ensure the consistency of the model;
d. training and verifying the established model by adopting a ten-fold cross verification method, which allows the model to be split into a plurality of training set test sets for training, wherein the training sets are randomly divided into 10 subsets with approximately equal sizes, one of the subsets is used as a verification set for verifying the accuracy of the model according to the sequence, and the other nine subsets are used as training set training models;
e. Parameter optimizing is carried out on the established model by adopting a Bayesian optimizing method, a Gaussian process is adopted, previous parameter information is considered, the prior is continuously updated, and super-parameter combinations which enable the identification effect of each model classifier to be optimal are searched;
f. the initial data set of the rice and the modeling data set of the rice after feature selection are used for training a machine learning model, classification labels are a tillering period, a jointing period and a heading period, a multi-classification machine learning model is established, and the machine learning classification model used in the embodiment comprises: support Vector Machines (SVMs), decision trees, random forests, adaboost, stacked integration, and optimization weighted integration learning classifiers;
the specific calculation method of the stacking integrated model is as follows:
first layer model: modeling, fitting and predicting by using a support vector machine, a decision tree, a random forest and an AdaBoost algorithm;
second layer model: the classification model uses the prediction results of the first 5 models as characteristics, the labels of the test set as labels, and the XGBClassifier algorithm as a base classifier to perform modeling, fitting and prediction.
The specific calculation formula of the optimization weighted integration model is as follows:
Figure BDA0004112021860000121
Figure BDA0004112021860000122
Figure BDA0004112021860000123
Wherein w is j Is the weight corresponding to the base model j (j=l,., k), n is the total number of samples, y i Is to observe the true value of i,
Figure BDA0004112021860000124
is the prediction of observation i by the base model j.
g. Evaluation of the established model performance can be characterized by using a confusing matrix function in sklearn. Metrics in Python through a confusion matrix between model prediction results and real results, calculating AUC values and drawing ROC curves for evaluating classification performance of the model, and visually displaying the model results
h. The precision_score, recovery_score, accuracy_score, f1_score, and cohen_kappa_score functions in sklearn. Metrics in Python are used to obtain the evaluation index accuracy, recall, accuracy, F1-score value, and kappa coefficient of the model, and the specific calculation formulas are as follows:
Figure BDA0004112021860000125
Figure BDA0004112021860000126
Figure BDA0004112021860000127
Figure BDA0004112021860000128
/>
Figure BDA0004112021860000129
Figure BDA00041120218600001210
Figure BDA00041120218600001211
wherein TP i True positive, indicating that the rice is correctly classified as the ith growth stage; FP (Fabry-Perot) i False positive, indicating misclassified rice as the ith growth stage; FN (Fn) i As false negative of class i, indicates water misclassified as the ith growth stage of the other growth stagesAnd (3) rice. P (P) i And R is i The accuracy and recall of class i, respectively, n being the number of classes (n=3 in this study), P w And R is w The precision and recall of the weighted F1-score, respectively. P is p 0 Is the sum of the number of correctly classified samples of each class divided by the total number of samples, i.e. the overall classification accuracy, p e Is the sum of the 'products of actual and predicted numbers' corresponding to all the categories respectively;
i. modeling the rice modeling data set subjected to the correlation analysis and the recursive feature elimination method by using a plurality of machine learning classifiers respectively, and comparing classification results of the models before and after feature selection through the evaluation index evaluation;
j. the four fractal dimensions D1, D2, RFD, sandbox and six types of texture features Con, DISL, IDM, ENT, corr and ASM obtained by using gray level symbiotic matrixes are respectively and independently added into a rice modeling data set after feature selection to form a new data set, and then modeling is carried out by using a plurality of machine learning classifiers respectively, and the classification results of the four types of fractal dimensions are compared and the six types of gray level symbiotic matrixes are independently added for evaluation through the evaluation indexes; the specific implementation is as follows:
embodiment one:
(1) Respectively reading all data sets of rice and rice modeling data sets after feature selection, and generating two different data sets: the data set 1 is an initial data set of rice, and the data set 2 is a modeling data set of rice;
(2) Setting corresponding classification labels according to the types of rice growing stages, wherein the classification labels of the multi-classification models are respectively a tillering stage, a jointing stage and a heading stage, and in each multi-classification model, 1094 pieces of rice sample data are obtained, wherein the label of the rice sample data in the tillering stage is set to be '-1'; rice sample data at the jointing period, and the label is set to be 0; the rice sample data in the heading stage, the label is set to be 1;
(3) And respectively modeling the two data sets, integrating a plurality of machine learning classifiers by using a Support Vector Machine (SVM), a decision tree, a random forest, adaboost, stacking and optimizing weight, dividing the two data sets into a training set and a testing set according to a dividing ratio of 8:2, and setting the same random number seeds for adopting the same model to ensure the consistency of the model. Training and verifying by adopting a ten-fold cross verification method, and searching for a super-parameter combination which enables the identification effect of each model classifier to be optimal by adopting a Bayesian optimization method;
(4) In the embodiment, the precision indexes of all the rice data sets by adopting six machine learning models are shown in table 2, and the precision indexes of the rice data sets after feature selection are shown in table 3; the evaluation indexes of the 6 machine learning classifiers after feature selection are improved, the single machine learning model with the best classification effect is an Adaboost model, the accuracy and the F1 fraction of the model are 93.15% and 0.93%, the kappa coefficients are 0.91, and the accuracy is improved by about 0.5% compared with the model which is not subjected to feature selection. The integrated model provides better performance than the base model, wherein the optimized weighted integration is the most accurate model and is better than the stacked model, the accuracy and F1-score reach 94.06% and 0.94, the kappa coefficient reaches 0.92, and the accuracy is improved by about 0.6% compared with the model without feature selection. Overall, the machine learning classifier performs better after feature selection;
Table 2 precision index of six models of dataset 1
Figure BDA0004112021860000131
Table 3 precision index of six models of dataset 2
Figure BDA0004112021860000141
(5) The ROC curves for the six machine learning models are shown in fig. 2, with the ROC curves on the abscissa of the false positive rate and the true rate on its ordinate. For a data sample of a classification task, the ROC curve needs to calculate the probability that the sample belongs to the correct class, and to transform the probability into the corresponding class, we need to select a threshold value, the ROC curve is obtained by the change of the threshold value, and the performance of each classification model is displayed. Due to the high accuracy of the model, all varieties are closer to the value representing true positive rate. Compared with a basic model, the AUC value of the integrated model is higher and reaches the level of 0.98, and the AUC value based on a random forest model and an Adaboost model in a single machine learning model also reaches about 0.97, which proves that the performance of each classification model is good;
(6) The ROC curve and the confusion matrix of the optimized weighted integration model are shown in fig. 3 and 4 respectively, in three growth stages of all models, the tillering stage is the most accurate in identification, and is not easy to confuse, and the second stage is the heading stage, wherein the reason is probably that the rice rapidly stretches upwards between the internodes of the stems in the heading stage, and the rice stretches out of the top leaves along with the stretching of the stems in the heading stage, so that the overall morphology of the rice is not greatly different.
Embodiment two:
(1) Respectively reading the rice modeling data sets after the feature selection, and generating three different data sets 3, 4 and 5: wherein the rice features in dataset 3 do not include fractal dimension and gray level co-occurrence matrix; the rice features in the dataset 4 do not include gray level co-occurrence matrices, including fractal dimensions; the rice features in the dataset 5 do not include fractal dimensions, including gray level co-occurrence matrices;
(2) Setting corresponding classification labels according to the types of rice growth stages, wherein the classification labels of the multi-classification model are respectively a tillering stage, a jointing stage and a heading stage, and 1094 pieces of rice sample data are used in each multi-classification model in the embodiment, wherein the label of the rice sample data in the tillering stage is set to be '-1'; rice sample data at the jointing period, and the label is set to be 0; the rice sample data in the heading stage, the label is set to be 1;
(3) And respectively modeling the three data sets, dividing the three data sets into a training set and a testing set according to a dividing ratio of 8:2 by using a Support Vector Machine (SVM), a decision tree, a random forest, an Adaboost, stacking integration and optimizing weighting integration six machine learning classifiers, and setting the same random number seeds for the same model to ensure the consistency of the model. Training and verifying by adopting a ten-fold cross verification method, and searching for a super-parameter combination which enables the identification effect of each model classifier to be optimal by adopting a Bayesian optimization method;
(4) The accuracy, F1 score and kappa coefficient index of the six machine learning models in the data sets 3,4 and 5 are shown in tables 4,5 and 6, and the fact that the fractal dimension or the gray level co-occurrence matrix is not added in different models can be found in the tables, so that the judging effect of the classifier is improved after the fractal dimension or the gray level co-occurrence matrix is introduced. In general, after the gray level co-occurrence matrix feature is introduced, the evaluation index of each model is improved more, and the accuracy is improved by about 2-7%. After the fractal dimension feature is introduced, the decision tree model and the optimization weighted integration model are improved more than the model after the gray level co-occurrence matrix feature is introduced, and the accuracy is improved by about 3% and 8% respectively. Overall, the single machine learning model improves more, and the introduction of fractal dimension and gray co-occurrence matrix has positive significance for the discrimination of growth cycle.
Table 4 accuracy index for six models of dataset 3,4,5
Figure BDA0004112021860000151
Table 5F 1 score index for six models of dataset 3,4,5
Figure BDA0004112021860000152
TABLE 6 kappa coefficient index for six models of dataset 3,4,5
Figure BDA0004112021860000153
Embodiment III:
(1) The rice modeling data sets after the feature selection were read out, respectively, and one data set 6 was generated, which was identical to the data set 2 in example 1.
(2) Setting corresponding classification labels according to the types of rice growing stages, wherein the classification labels of the multi-classification models are respectively a tillering stage, a jointing stage and a heading stage, and in each multi-classification model, 1094 pieces of rice sample data are obtained, wherein the label of the rice sample data in the tillering stage is set to be '-1'; rice sample data at the jointing period, and the label is set to be 0; rice sample data in heading stage, and the label is set to be 1"
(3) Modeling the data set 6, using a Support Vector Machine (SVM), a decision tree, a random forest, adaboost, stacking integration and optimization weighting integration six machine learning classifiers, respectively adopting a random forest method in a feature selection tree model to evaluate feature importance, calculating the importance of each feature to the category of the rice growth stage, and sequencing the feature contribution degree to find more influential feature variables to optimize the model
(4) The feature importance results for the first 10 input variables are shown in table 7. The input feature importance was calculated from the six models to find the most influential independent variables to optimize the model, and table 7 shows the top 10 feature importance results from weighting all 6 models. The weighting of the parameter SA to 0.1359 is the most important feature. The first 10 input variables include two texture feature variables extracted from the image, and two fractal dimension variables. Other auxiliary data derived from the image should be added as they may lead to higher estimation accuracy. In addition, parameters such as relative frequency and structural parameters of rice that reflect plant compactness appear to be less important;
TABLE 7 feature importance coefficients and ranking for six models
Figure BDA0004112021860000161
(5) The feature importance results of the support vector machine, decision tree, adaboost, and optimization weighted integration model input variables are shown in fig. 5. Based on the feature importance method, the feature importance of morphological parameters in the decision tree model is larger than the feature importance of texture parameters of other four models, so that the fractal dimension, gray level co-occurrence matrix and other texture parameters can be deduced to be very important for detecting the growth stage of the rice. While the feature importance of the different models is not the same, the feature importance of RFD, D2, sandbox, ENT, ASM, and g_g in the four models is large in general.
k. And according to the machine learning classifier model, respectively adopting a random forest method in the feature selection tree model to evaluate the feature importance, calculating the importance of each feature to the category of the rice growth stage, and sequencing the feature contribution degree to find the feature variable with higher influence so as to optimize the model.
The embodiment shows that by utilizing the method for detecting the rice growth stage, the optimal single machine learning model is an Adaboost model, the accuracy and the F1 fraction of the Adaboost model respectively reach 93.15% and 0.93%, and the kappa coefficients are 0.91; by using the method for detecting the rice growth stage, the integrated model, particularly the optimized weighted integration after feature selection, carries out optimal classification, and compared with the existing single machine learning model, the accuracy is improved by about 1.5%, the accuracy and the F1-score respectively reach 94.06% and 0.94, and the kappa coefficient reaches 0.92.
It should be understood that the sequence number of each step in the foregoing embodiment does not mean that the execution sequence of each process should be determined by the function and the internal logic of each process, and should not limit the implementation process of the embodiment of the present application in any way.
The present embodiment also provides a computer readable storage medium, where a computer program is stored, where the computer program includes program instructions, where the program instructions, when executed by a processor, implement the steps of the method of the present invention, and are not described herein in detail.
The computer readable storage medium may be the data transmission apparatus provided in any of the foregoing embodiments or an internal storage unit of a computer device, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card (flash card) or the like, which are provided on the computer device.
Further, the computer-readable storage medium may also include both internal storage units and external storage devices of the computer device. The computer-readable storage medium is used to store the computer program and other programs and data required by the computer device. The computer-readable storage medium may also be used to temporarily store data to be output or already output.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
What is not described in detail in this specification is prior art known to those skilled in the art.

Claims (10)

1. The system for identifying the rice growth stage based on machine learning is characterized by comprising a rice characteristic generation module and a rice growth stage identification model construction module;
the rice characteristic generation module is used for obtaining a plurality of fractal dimensions of rice according to the rice image and constructing a plurality of statistics used for representing texture characteristics of the rice by utilizing the gray level co-occurrence matrix;
The rice growth stage identification model construction module is used for obtaining the mapping relation between the characteristics of rice and the rice growth stage according to the neural network model so as to construct a rice growth stage identification model; the rice growth stage identification model is used for identifying the growth stage of the rice according to the rice image; the characteristics of the rice comprise a plurality of fractal dimensions of the rice obtained from a rice characteristic generation module and statistics constructed by using a gray level co-occurrence matrix.
2. The machine learning based rice growth stage identification system of claim 1, wherein the rice feature generation module comprises a first fractal dimension generation module, a second fractal dimension generation module, and a textural feature acquisition module;
the first fractal dimension generation module is used for calculating two fractal dimensions D1 and RFD based on the whole rice and the edge of the rice leaf according to the rice image;
the second fractal dimension generation module is used for calculating two fractal dimensions D2 and Sandbox based on the whole rice and the circumscribed rectangle according to the rice image;
the texture feature acquisition module is used for obtaining a gray level co-occurrence matrix according to the correlation of adjacent pixels and the gray level change of diagonal elements of the co-occurrence matrix, and constructing a plurality of statistics used for representing the texture features of rice according to the gray level co-occurrence matrix.
3. The machine learning based rice growth stage identification system of claim 2, wherein the statistics comprise: contrast, variability, contrast score matrix, entropy, correlation, and angular second moment.
4. A machine learning based rice growth stage identification system as claimed in claim 2 or 3 wherein the fractal dimension generation module further comprises a grey-scale map generation module for converting the rice image into a grey-scale map for obtaining the fractal dimensions D1, RFD, D2 and Sandbox of the rice prior to obtaining the fractal dimension.
5. The machine learning-based rice growth stage identification system as claimed in claim 4, wherein the fractal dimension generation module further comprises a binary image generation module, wherein the binary image generation module is used for performing edge detection and denoising on the image by adopting a Sobel operator and a gaussian filtering method on the gray image of the rice image to obtain a binary image of the rice image, and the binary image is used for obtaining fractal dimensions D1, RFD, D2 and Sandbox of the rice.
6. The machine learning based rice growth stage identification system of claim 5, wherein the fractal dimension D1 and RFD calculation method comprises:
S501, randomly selecting a pixel point A from a gray level image of rice, and marking coordinate values as (i, j); randomly selecting a pixel point A 'from the binary image of the rice, wherein the coordinate value of the pixel point A' is marked as (i ', j');
s502, determining another pixel point B and a pixel point B 'in the gray level image and the binary image of the rice by adopting a random walk method, wherein coordinate values of the pixel point B and the pixel point B' are respectively marked as (u, v) and (u ', v'); and r= | (i, j) - (u, v) |= | (i ', j') - (u ', v')|, R is a randomly set value, r=1, 2, n;
s503, calculating the difference value G of the gray values of the two pixel points A and B and the difference value G ' of the gray values of the two pixel points A ' and B ':
G=I(i,j)-I(u,v)
G'=I(i',j')-I(u',v')
wherein:
i (I, j) and I (u, v) are gray values of the pixel point A and the pixel point B respectively;
i (I ', j'), I (u ', v') are gray values of the pixel point A 'and the pixel point B', respectively;
s503, repeating the steps S501-S502 to obtain a plurality of difference values G and G ' of each gray level image and each binary image, and calculating an average value E (G) of each gray level image and an average value E (G ') of each binary image according to the difference values G and G ';
s504, calculating to obtain two fractal dimensions of each gray level image and binary image according to the following formula:
Figure FDA0004112021840000021
Figure FDA0004112021840000022
wherein:
d1 is a fractal dimension based on a gray image;
RFD is fractal dimension based on binary image;
c is a set constant.
7. The machine learning based rice growth stage identification system of claim 5, wherein the fractal dimension D2 and Sandbox calculation method comprises:
s701, respectively selecting an M multiplied by M grid on each rice gray level image and each binary image for division, wherein M is the number of grid boundaries;
s702, randomly selecting (a, b) grids (a epsilon [1, M ], b epsilon [1, M ]) on each rice gray level image; randomly selecting the (a ', b') th grid (a 'E [1, M ], b' E [1, M ]) on each rice binary image; the number of frames required to cover each grid is calculated as follows:
Figure FDA0004112021840000023
Figure FDA0004112021840000024
wherein:
n (a, b) represents the number of frames required to cover the (a, b) th grid;
n (a ', b') represents the number of frames required to cover the (a ', b') th grid;
P max : representing the maximum value of the pixel values of all the pixel points in the (a, b) th grid;
P min : representing the minimum of pixel values of all pixel points in the (a, b) th grid;
P' max : representing the maximum value of the pixel values of all the pixel points in the (a ', b') th grid;
P' min : representing the minimum value of the pixel values of all the pixel points in the (a ', b') th grid;
s703, calculating the sum N of the number of covered frames of each grid as follows:
Figure FDA0004112021840000031
Figure FDA0004112021840000032
S704, respectively calculating two fractal dimensions in the gray level image and the binary image according to the following formulas:
Figure FDA0004112021840000033
Figure FDA0004112021840000034
wherein:
d2 is a fractal dimension based on a gray image;
sandbox is a fractal dimension based on binary images.
8. The machine learning based rice growth stage identification system of claim 5 wherein the method of constructing a plurality of statistics for characterizing a texture feature of rice using a gray level co-occurrence matrix comprises:
s801, dividing a binary image of each rice into L gray levels according to gray values, wherein each pixel corresponds to one gray level;
s802, obtaining a gray level co-occurrence matrix p (x, y) according to the gray level of each pixel of the binary image of each rice;
x and y respectively represent the gray level of two pixel points, x is 0 and L-1, and y is 0 and L-1;
s803, extracting 6 texture features from the gray level co-occurrence matrix: contrast, variability, contrast score matrix, entropy, correlation, and angular second moment, denoted Con, DISL, IDM, ENT, corr and ASM, respectively, the calculation formula includes:
Figure FDA0004112021840000035
Figure FDA0004112021840000036
Figure FDA0004112021840000037
Figure FDA0004112021840000041
Figure FDA0004112021840000042
Figure FDA0004112021840000043
wherein:
μ xy : the average value of gray levels x and y of two different pixel points;
σ xy : standard deviation of gray levels x and y of two different pixels.
9. A method for identifying a growth stage of rice based on machine learning based on the system of claim 1, comprising the steps of:
S1, obtaining a plurality of fractal dimensions of rice according to an original image of the rice and constructing a plurality of statistics used for representing texture features of the rice by using a gray level co-occurrence matrix; the fractal dimension is a data characteristic quantity for representing the phenotypic character of rice;
s2, obtaining a mapping relation between the characteristics of the rice and the growth stage of the rice according to the neural network model, so as to construct a rice growth stage identification model; the rice growth stage identification model is used for identifying the growth stage of the rice according to the rice image; the characteristics of the rice comprise a plurality of fractal dimensions of the rice obtained from a rice characteristic generation module and a plurality of statistics for representing the texture characteristics of the rice by utilizing a gray level co-occurrence matrix.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the machine learning based rice growth phase identification method of claim 9.
CN202310209296.7A 2023-03-07 2023-03-07 Machine learning-based rice growth stage identification system and method Pending CN116416523A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310209296.7A CN116416523A (en) 2023-03-07 2023-03-07 Machine learning-based rice growth stage identification system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310209296.7A CN116416523A (en) 2023-03-07 2023-03-07 Machine learning-based rice growth stage identification system and method

Publications (1)

Publication Number Publication Date
CN116416523A true CN116416523A (en) 2023-07-11

Family

ID=87055666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310209296.7A Pending CN116416523A (en) 2023-03-07 2023-03-07 Machine learning-based rice growth stage identification system and method

Country Status (1)

Country Link
CN (1) CN116416523A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152620A (en) * 2023-10-30 2023-12-01 江西立盾光电科技有限公司 Plant growth control method and system following plant state change

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152620A (en) * 2023-10-30 2023-12-01 江西立盾光电科技有限公司 Plant growth control method and system following plant state change
CN117152620B (en) * 2023-10-30 2024-02-13 江西立盾光电科技有限公司 Plant growth control method and system following plant state change

Similar Documents

Publication Publication Date Title
CN114549522A (en) Textile quality detection method based on target detection
CN109154978A (en) System and method for detecting plant disease
Pérez et al. Image classification for detection of winter grapevine buds in natural conditions using scale-invariant features transform, bag of features and support vector machines
CN110569747A (en) method for rapidly counting rice ears of paddy field rice by using image pyramid and fast-RCNN
Alharbi et al. Automatic counting of wheat spikes from wheat growth images
CN109886146B (en) Flood information remote sensing intelligent acquisition method and device based on machine vision detection
Manik et al. Leaf morphological feature extraction of digital image anthocephalus cadamba
EP3989161A1 (en) Method and system for leaf age estimation based on morphological features extracted from segmented leaves
CN111723749A (en) Method, system and equipment for identifying wearing of safety helmet
CN110874835B (en) Crop leaf disease resistance identification method and system, electronic equipment and storage medium
CN116310548A (en) Method for detecting invasive plant seeds in imported seed products
CN116416523A (en) Machine learning-based rice growth stage identification system and method
Loresco et al. Computer vision performance metrics evaluation of object detection based on Haar-like, HOG and LBP features for scale-invariant lettuce leaf area calculation
Macías-Macías et al. Mask R-CNN for quality control of table olives
CN111738310B (en) Material classification method, device, electronic equipment and storage medium
CN113807143A (en) Crop connected domain identification method and device and operation system
Zeng et al. Detecting and measuring fine roots in minirhizotron images using matched filtering and local entropy thresholding
CN115205691B (en) Rice planting area identification method and device, storage medium and equipment
Dhanuja et al. Areca nut disease detection using image processing technology
Shweta et al. External feature based quality evaluation of Tomato using K-means clustering and support vector classification
CN114700941A (en) Strawberry picking method based on binocular vision and robot system
CN110516686A (en) The mosquito recognition methods of three color RGB images
CN117333494B (en) Deep learning-based straw coverage rate detection method and system
CN116468671B (en) Plant disease degree detection method, device, electronic apparatus, and storage medium
CN113486773B (en) Cotton plant growing period identification method, system, storage medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination