CN114596562A - Rice field weed identification method - Google Patents


Info

Publication number
CN114596562A
CN114596562A
Authority
CN
China
Prior art keywords
image
color
follows
gray
moment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210167579.5A
Other languages
Chinese (zh)
Inventor
张道兵 (ZHANG Daobing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202210167579.5A priority Critical patent/CN114596562A/en
Publication of CN114596562A publication Critical patent/CN114596562A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent


Abstract

The invention relates to image processing technology, and in particular to a rice field weed identification method. It addresses the problem that weeds in the field are easily misidentified. The method comprises the following steps: step 1, shooting images of a rice field; step 2, selecting a development environment and development tools; step 3, preprocessing the collected images; step 4, extracting features from the images; step 5, combining the extracted features into a comprehensive feature vector and storing it as a sample library; step 6, building a BP neural network with MATLAB to obtain a trained model; and step 7, randomly extracting data from the sample library as input to the trained model to obtain a recognition result. Preprocessing the image along two directions facilitates the subsequent extraction of the color and texture features of the image and thus the identification of field weeds; extracting multiple feature quantities of the color and texture features and training with the BP neural network improves the identification accuracy.

Description

Rice field weed identification method
Technical Field
The invention relates to image processing technology, and in particular to a rice field weed identification method.
Background
China is a large agricultural country with a time-honored farming civilization; since ancient times grain has come from the land, and working the land inevitably brings interference from weeds, which were traditionally distinguished by eye. With the rapid development of science and technology, daily life is changing day by day, and agriculture has changed greatly as well. Computer networking, image processing, pattern recognition, artificial intelligence, and related technologies have had a great influence on agriculture: by collecting pictures and applying image processing and pattern recognition, crops can be identified and inspected, and early warnings or reports can be issued promptly and effectively, saving a great deal of manpower and material resources and greatly improving working efficiency.
At present, although various neural networks are used to identify weeds in fields, the following problems remain:
1. the captured images have complex and varied backgrounds and are difficult to segment, which hinders the subsequent extraction of image features;
2. there is no unified standard for describing image features, so the descriptions are not comparable;
3. weeds in the field are easily misidentified, and the identification accuracy still needs to be improved.
Disclosure of Invention
The invention aims to provide a rice field weed identification method that solves the problem of frequent errors in field weed identification and improves the identification accuracy.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for identifying weeds in a rice field comprises the following steps:
step 1, shooting an image of a rice field;
step 2, selecting a development environment and a development tool;
step 3, preprocessing the collected image;
step 4, extracting the characteristics of the image;
step 5, combining the extracted features to obtain a comprehensive feature vector, and storing the comprehensive feature vector as a sample library;
step 6, establishing a BP neural network by using MATLAB to obtain a trained model;
step 7, randomly extracting data from the sample library as input to the trained model to obtain a recognition result;
further, in the step 1, the image shooting comprises the step of randomly shooting images in the rice field in clear weather, wherein the images comprise normal rice images and weed images, the weeds comprise barnyard grass, goosegrass herb, alternanthera philoxeroides, moleplant seeds and the like, the number of the shot images is not less than 500, the size of the images is 640 x 480 pixels, the format of the shot images is JPG, and the images are shot by a CCD digital camera.
Further, in step 2, the development environment is divided into a software environment and a hardware environment, wherein the hardware environment includes an Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30 GHz, 16.0 GB of installed memory, and a 470 GB hard disk; the software environment is the Windows 7 Ultimate 64-bit operating system; MATLAB R2019b is selected as the development tool.
Further, in step 3, the image preprocessing is divided into two directions: one converts the color RGB image into an HSI image, and the other converts the color RGB image into a grayscale image;
wherein the color RGB image is converted into an HSI image with the following formulas:
θ = arccos{ [(R - G) + (R - B)] / [2·((R - G)² + (R - B)(G - B))^(1/2)] }, with H = θ if B ≤ G and H = 2π - θ if B > G (1)
S = 1 - 3·min(R, G, B)/(R + G + B) (2)
I = (R + G + B)/3 (3)
in formulas (1), (2), and (3), R, G, and B denote the three components of the color RGB image;
wherein converting the color RGB image into a binary image comprises graying, gray transformation, denoising, background segmentation, and a mathematical-morphology opening operation, specifically as follows:
the invention adopts the weighted average method for graying, with the formula:
Gray=0.114B+0.587G+0.299R (4)
in formula (4), R, G, and B denote the three components of the color RGB image;
the gray transformation, also called image enhancement, is used to simplify information; common gray transformation methods include linear transformation, logarithmic transformation, and gamma transformation, and the invention adopts linear transformation, specifically as follows:
let the gray range of the original image f(x, y) be [f1, F1] and let the desired gray range of the transformed image g(x, y) be [f2, F2]; then:
g(x, y) = [(F2 - f2)/(F1 - f1)]·[f(x, y) - f1] + f2 (5)
the image is denoised with a 3 × 3 median filter; the built-in MATLAB median filtering function medfilt2 can be called directly to complete the denoising, and the result is recorded as the preprocessed grayscale image;
image segmentation divides every pixel point of the whole image into several non-overlapping classes according to properties of the original image such as color, texture, and shape; the invention segments the image with a threshold-based method, specifically: if the gray value of a pixel is less than a threshold, the pixel value is set to 0 and displayed as black background, otherwise it is set to 1 and displayed as white, with the formula:
g(x, y) = 0 if f(x, y) < T, and g(x, y) = 1 if f(x, y) ≥ T (6)
in formula (6), T denotes the threshold;
the opening operation means erosion followed by dilation; its function is to smooth the edges of the rice or weeds and to eliminate holes within them;
the binary image is then ANDed with the preprocessed grayscale image to obtain the final preprocessed grayscale image with the background separated.
Further, in the step 4, the extracted features include color features and texture features;
the color feature extraction extracts the first, second, and third moments of each color component from the HSI image, i.e., the color-moment method is used, specifically as follows:
the first moment defines the average intensity of each color component and is calculated as follows:
μ_i = (1/N)·Σ_{j=1..N} P_ij (7)
the second moment defines the variance of each color component, and the calculation formula is as follows:
σ_i = [(1/N)·Σ_{j=1..N} (P_ij - μ_i)²]^(1/2) (8)
the third moment defines the skewness of each color component, and the calculation formula is as follows:
s_i = [(1/N)·Σ_{j=1..N} (P_ij - μ_i)³]^(1/3) (9)
in formulas (7), (8), and (9), P_ij denotes the i-th color component of the j-th pixel of the image, N is the number of pixels, and i = 1, 2, 3;
finally, a 9-dimensional vector is obtained to characterize the color features of an image, and the 9-dimensional vector is expressed as follows:
V_Y = [μ1, σ1, s1, μ2, σ2, s2, μ3, σ3, s3] (10)
where μ_a, σ_a, and s_a denote the first, second, and third moments of the H, S, and I color channels respectively, a = 1, 2, 3;
the texture features are extracted from the background-separated grayscale image, the extraction adopting the gray-level co-occurrence matrix method, specifically as follows:
let f(x, y) be a two-dimensional digital image of M × N pixels with L quantization levels, so that the co-occurrence matrix is L × L; let S be the set of pixel pairs in the target region R having a specific spatial relationship; the co-occurrence matrix P is then defined as:
P(g1, g2 | d, θ) = #{ ((x1, y1), (x2, y2)) ∈ S | f(x1, y1) = g1, f(x2, y2) = g2 } / #S (11)
in formula (11), # denotes the cardinality of a set, i.e., #S is the number of pixel pairs contributing to P(g1, g2 | d, θ); P is an L × L matrix whose entries lie in (0, 1); d is the distance between (x1, y1) and (x2, y2); and θ is the angle between the line through (x1, y1) and (x2, y2) and the horizontal axis;
the texture feature quantities extracted from the gray-level co-occurrence matrix comprise the 5 features of energy, entropy, moment of inertia, inverse difference moment, and correlation; taking d = 1 and θ = 0°, 45°, 90°, 135°, the gray-level co-occurrence matrices in the four directions are calculated respectively, specifically as follows:
the energy reflects the coarseness of the image texture, and the calculation formula is as follows:
ASM = Σ_i Σ_j P(i, j)² (12)
the entropy reflects the complexity of the gray level change of the whole image, and the calculation formula is as follows:
ENT = -Σ_i Σ_j P(i, j)·log P(i, j) (13)
the moment of inertia reflects the definition of the image and the depth of the texture grooves, and the calculation formula is as follows:
CON = Σ_i Σ_j (i - j)²·P(i, j) (14)
the inverse difference moment measures the local stability of the image, and the calculation formula is as follows:
IDM = Σ_i Σ_j P(i, j)/[1 + (i - j)²] (15)
the correlation is used to describe the similarity between the row and column elements in the matrix, and the calculation formula is as follows:
COR = [Σ_i Σ_j i·j·P(i, j) - μ_x·μ_y]/(σ_x·σ_y) (16)
in equation (16), there are:
μ_x = Σ_i i·Σ_j P(i, j) (17)
μ_y = Σ_j j·Σ_i P(i, j) (18)
σ_x² = Σ_i (i - μ_x)²·Σ_j P(i, j) (19)
σ_y² = Σ_j (j - μ_y)²·Σ_i P(i, j) (20)
the mean and standard deviation of each texture parameter are then calculated over the four directional matrices, with the following formulas:
mean value:
x̄ = (1/n)·Σ_{i=1..n} x_i (21)
standard deviation:
σ = [(1/n)·Σ_{i=1..n} (x_i - x̄)²]^(1/2) (22)
in equations (21) and (22), x_i denotes a sample value, n the number of samples, x̄ the mean, and σ the standard deviation;
finally, an 8-dimensional feature vector is obtained to characterize the texture features of an image, expressed as follows:
V_W = [W̄_1, σ_W1, W̄_2, σ_W2, …] (23)
in formula (23), W̄_b and σ_Wb denote the mean and standard deviation of the 5 texture features, b = 1, 2, 3, 4, 5.
Further, in step 5, the color features and texture features extracted in step 4 are integrated into a 17-dimensional comprehensive feature vector, which is expressed as follows:
V = [V_Y, V_W] (24)
normalizing the feature vector:
MATLAB provides the normalization function mapminmax, which is called directly to complete the normalization of the feature vectors; mapminmax has several calling forms, and the invention adopts the following one:
[Y,PS]=mapminmax(X,YMIN,YMAX)
in the invention, YMIN takes the value -1 and YMAX takes the value 1, i.e., the sample data are normalized to [-1, 1].
Further, in step 6, establishing a BP neural network by using MATLAB, including:
step SS1, determining training data;
step SS2, establishing a BP neural network of a back propagation algorithm;
step SS3, setting network parameters;
step SS4, training and simulating operation;
step SS5, when the expected error or the maximum number of training iterations is reached, ending the training and storing the trained model for later calls and for evaluating the effect;
in the invention, the input layer has 17 neurons, the hidden layer has 32 neurons, and the output layer has 5 neurons corresponding to the 5 plants rice, barnyard grass, goosegrass, alternanthera philoxeroides, and stephania japonica; the transfer function of the hidden-layer neurons is tansig and that of the output layer is purelin; the training function is trainlm, and the sim function is used for simulation.
Further, step 7 specifically comprises randomly extracting data from the sample library as input data, normalizing the input data, feeding the normalized data into the trained model, and de-normalizing the resulting output to obtain the recognition result.
The invention has the following beneficial effects:
1. preprocessing the images along two directions facilitates the extraction of the color and texture features of the images and thus the identification of field weeds;
2. using MATLAB software and its built-in functions makes processing the data and constructing the BP neural network very convenient, greatly saving time and development cost;
3. extracting multiple feature quantities of the color and texture features and training with the BP neural network strengthens the robustness of the algorithm, so that it can be applied in more scenarios.
Drawings
Fig. 1 is a schematic diagram of image graying methods provided by the present invention.
FIG. 2 is a schematic diagram of a denoising method provided by the present invention.
Fig. 3 is a schematic diagram of image segmentation provided by the present invention.
Fig. 4 is a schematic diagram of a binarized image provided by the present invention.
FIG. 5 is a pre-processed gray scale image provided by the present invention.
Fig. 6 is a schematic diagram of a texture feature extraction method provided by the present invention.
FIG. 7 is a graph of the training error against the number of training iterations provided by the present invention.
FIG. 8 is a flow chart of a method for identifying weeds in a rice field according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
This example provides a method for identifying weeds in a rice field, as shown in fig. 8, comprising the following steps:
Step 1, shooting images of a rice field;
on sunny days, images are shot at random in the rice field with a CCD digital camera; the images comprise normal rice images and weed images, the weeds comprising barnyard grass, goosegrass, alternanthera philoxeroides, stephania japonica, and the like; 500 images are shot, each 640 × 480 pixels, in JPG format.
Step 2, selecting a development environment and development tools;
the development environment of the invention is divided into a software environment and a hardware environment. The hardware environment includes an Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30 GHz, 16.0 GB of installed memory, and a 470 GB hard disk; the software environment is the Windows 7 Ultimate 64-bit operating system;
MATLAB R2019b is selected as the development tool;
Step 3, preprocessing the collected images;
the image preprocessing is divided into two directions: one converts the color RGB image into an HSI image, and the other converts the color RGB image into a grayscale image;
wherein the color RGB image is converted into an HSI image as follows:
a color model is a method of representing color, a mathematical model that describes color with a set of numerical values; color models can be divided into hardware-oriented and perception-oriented models. Hardware-oriented color models include the RGB, CMY, and YCrCb models; perception-oriented models include the HSI, HSV, HSB, and Lab models.
The invention adopts the HSI model, i.e., the RGB color image is converted into an HSI image. The HSI model has 3 components, representing hue, saturation, and intensity; for any R, G, B values normalized to the range [0, 1], the H, S, I components of the corresponding HSI model are calculated as follows:
θ = arccos{ [(R - G) + (R - B)] / [2·((R - G)² + (R - B)(G - B))^(1/2)] }, with H = θ if B ≤ G and H = 2π - θ if B > G (1)
S = 1 - 3·min(R, G, B)/(R + G + B) (2)
I = (R + G + B)/3 (3)
in equations (1), (2), and (3), when S = 0 the pixel is achromatic and H has no meaning, so H is defined as 0 in that case; likewise, S has no meaning when I = 0 or I = 1.
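As a concrete illustration, equations (1)-(3) can be implemented as a short MATLAB function. The following is a minimal sketch assuming inputs in [0, 1]; the function name and the scaling of the hue to [0, 1] are our own choices, not something the patent specifies:

function hsi = rgb2hsi(rgb)
% Convert an RGB image to HSI per equations (1)-(3); expects values in [0, 1].
rgb = im2double(rgb);
R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3);
num = 0.5 * ((R - G) + (R - B));
den = sqrt((R - G).^2 + (R - B).*(G - B)) + eps;      % eps avoids division by zero
theta = acos(min(max(num ./ den, -1), 1));            % clamped for numerical safety
H = theta;
H(B > G) = 2*pi - H(B > G);                           % equation (1): H = 2*pi - theta when B > G
H = H / (2*pi);                                       % scale hue to [0, 1] (a convention, not from the patent)
S = 1 - 3 * min(min(R, G), B) ./ (R + G + B + eps);   % equation (2)
I = (R + G + B) / 3;                                  % equation (3)
hsi = cat(3, H, S, I);
end

A call such as hsi = rgb2hsi(rgb) then yields the three channels used for the color moments in step 4.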
Wherein converting the color RGB image into a grayscale image comprises: graying, gray level transformation, denoising, background segmentation and mathematical morphology open operation are specifically as follows:
because the collected images are all RGB color images, the technology for processing the color images at the present stage is to convert the color images into gray level images, and three methods are commonly used for gray level processing: the component method, the maximum method and the weighted average method are shown in figure 1, and the invention adopts the weighted average method, and the formula is as follows:
Gray=0.114B+0.587G+0.299R (4)
in formula (4), R, G, and B denote the three components of the RGB color image;
the gray transformation, also called image enhancement, is used to simplify information; common gray transformation methods include linear transformation, logarithmic transformation, and gamma transformation, and the invention adopts linear transformation, specifically as follows:
let the gray range of the original image f(x, y) be [f1, F1] and let the desired gray range of the transformed image g(x, y) be [f2, F2]; then:
g(x, y) = [(F2 - f2)/(F1 - f1)]·[f(x, y) - f1] + f2 (5)
as shown in fig. 2, denoising methods include Gaussian filtering, median filtering, wavelet denoising, and the like; the invention denoises the image with a 3 × 3 median filter, and since the MATLAB software provides the median filtering function medfilt2, a program can call it directly to denoise the image;
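For illustration, the graying of equation (4), the linear transform of equation (5), and the medfilt2 call might be chained as in the following MATLAB sketch; the file name and the target gray range [0, 1] are illustrative assumptions:

rgb = im2double(imread('paddy_001.jpg'));                        % hypothetical file name
gray = 0.299*rgb(:,:,1) + 0.587*rgb(:,:,2) + 0.114*rgb(:,:,3);   % equation (4)
f1 = min(gray(:)); F1 = max(gray(:));                            % source gray range [f1, F1]
f2 = 0; F2 = 1;                                                  % assumed target gray range [f2, F2]
gray = (F2 - f2)/(F1 - f1) * (gray - f1) + f2;                   % equation (5), linear transform
grayPre = medfilt2(gray, [3 3]);                                 % 3 x 3 median filtering; grayPre is the preprocessed grayscale image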
image segmentation divides every pixel point of the whole image into several non-overlapping classes according to properties of the original image such as color, texture, and shape. As shown in fig. 3, conventional image segmentation methods mainly include threshold-based methods, region-based methods, edge-based methods, and methods based on specific theories; as shown in fig. 4, the invention segments the image with the threshold-based method, specifically: if the gray value of a pixel is less than a threshold, the pixel value is set to 0, otherwise to 1, with the formula:
g(x, y) = 0 if f(x, y) < T, and g(x, y) = 1 if f(x, y) ≥ T (6)
in formula (6), T denotes the threshold;
mathematical morphology has four basic operators: erosion, dilation, opening, and closing; the opening operation means erosion followed by dilation and is used to smooth the edges of the rice or weeds and to eliminate holes within them;
the binary image is then ANDed with the preprocessed grayscale image to obtain the final preprocessed grayscale image with the background separated, as shown in fig. 5.
Step 4, extracting the characteristics of the image;
the features to be extracted include color features and texture features, which are two most basic features of an image and are also two features that can represent the low-level features of the image most.
The color feature extraction method comprises the following steps: the invention adopts the color moments to extract color features from the HSI image.
The color information of the image is mainly concentrated in the low-order moment, so that the color distribution of the image can be expressed only by counting the first-order moment (mean), the second-order moment (variance) and the third-order moment (skewness) of each color component. Therefore, an image is described by color moments, only nine components are needed, and the method is simpler compared with other color feature extraction methods. The calculation formula is as follows:
the first moment defines the average intensity of each color component and is calculated as follows:
μ_i = (1/N)·Σ_{j=1..N} P_ij (7)
the second moment defines the variance of each color component, and the calculation formula is as follows:
σ_i = [(1/N)·Σ_{j=1..N} (P_ij - μ_i)²]^(1/2) (8)
the third moment defines the skewness of each color component, and the calculation formula is as follows:
s_i = [(1/N)·Σ_{j=1..N} (P_ij - μ_i)³]^(1/3) (9)
in formulas (7), (8), and (9), P_ij denotes the i-th color component of the j-th pixel of the image, N is the number of pixels, and i = 1, 2, 3;
in addition, the invention adopts the HSI model with its H, S, and I color channels, so a 9-dimensional vector is obtained to characterize the color features of an image, the 9-dimensional vector being expressed as follows:
V_Y = [μ1, σ1, s1, μ2, σ2, s2, μ3, σ3, s3] (10)
where μ_a, σ_a, and s_a denote the first, second, and third moments of the H, S, and I color channels respectively, a = 1, 2, 3;
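The nine color moments of equations (7)-(10) can be computed directly; the MATLAB sketch below assumes hsi is the HSI image from the earlier sketch:

V_Y = zeros(1, 9);
for i = 1:3                                      % loop over the H, S, I channels
    P = hsi(:,:,i); P = P(:); N = numel(P);
    mu    = sum(P) / N;                          % first moment, equation (7)
    sigma = (sum((P - mu).^2) / N)^(1/2);        % second moment, equation (8)
    s     = nthroot(sum((P - mu).^3) / N, 3);    % third moment, equation (9); nthroot handles negative skew
    V_Y(3*i-2 : 3*i) = [mu, sigma, s];           % fills the 9-dimensional vector of equation (10)
end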
As shown in fig. 6, commonly used texture feature extraction methods are generally classified into four categories: statistical methods, model-based methods, structural methods, and signal-processing methods. The invention extracts texture features with the gray-level co-occurrence matrix (GLCM) method, as follows:
let f(x, y) be a two-dimensional digital image of M × N pixels with L quantization levels, so that the co-occurrence matrix is L × L; let S be the set of pixel pairs in the target region R having a specific spatial relationship; the co-occurrence matrix P is then defined as:
P(g1, g2 | d, θ) = #{ ((x1, y1), (x2, y2)) ∈ S | f(x1, y1) = g1, f(x2, y2) = g2 } / #S (11)
in formula (11), # denotes the cardinality of a set, i.e., #S is the number of pixel pairs contributing to P(g1, g2 | d, θ); P is an L × L matrix whose entries lie in (0, 1); d is the distance between (x1, y1) and (x2, y2); and θ is the angle between the line through (x1, y1) and (x2, y2) and the horizontal axis;
in this embodiment, the sliding window size is 9 × 9, the number of quantization levels L is 32, the direction θ takes the values 0°, 45°, 90°, and 135°, and the step distance d is 1;
haralick extracts 14 texture feature quantities from the gray level co-occurrence matrix, including: contrast (moment of inertia), entropy, sum entropy, difference average, sum average, mean square sum, non-similarity, homogeneity, correlation, angular second moment (energy), variance, inverse difference moment (local stationarity), standard deviation; if these 14 texture feature quantities are selected, the calculation amount is increased, and therefore only the 5 features of energy, entropy, moment of inertia, inverse difference moment and correlation are selected and calculated.
Taking d = 1 and θ = 0°, 45°, 90°, 135°, the gray-level co-occurrence matrices in the four directions are calculated as follows:
the energy reflects the coarseness of the image texture, and the calculation formula is as follows:
ASM = Σ_i Σ_j P(i, j)² (12)
the entropy reflects the complexity of the gray level change of the whole image, and the calculation formula is as follows:
ENT = -Σ_i Σ_j P(i, j)·log P(i, j) (13)
the moment of inertia reflects the definition of the image and the depth of the texture grooves, and the calculation formula is as follows:
CON = Σ_i Σ_j (i - j)²·P(i, j) (14)
the inverse difference moment measures the local stability of the image, and the calculation formula is as follows:
IDM = Σ_i Σ_j P(i, j)/[1 + (i - j)²] (15)
the correlation is used to describe the similarity between the row and column elements in the matrix, and the calculation formula is as follows:
COR = [Σ_i Σ_j i·j·P(i, j) - μ_x·μ_y]/(σ_x·σ_y) (16)
in equation (16), there are:
μ_x = Σ_i i·Σ_j P(i, j) (17)
μ_y = Σ_j j·Σ_i P(i, j) (18)
σ_x² = Σ_i (i - μ_x)²·Σ_j P(i, j) (19)
σ_y² = Σ_j (j - μ_y)²·Σ_i P(i, j) (20)
the mean and standard deviation of each texture parameter are then calculated over the four directional matrices, with the following formulas:
mean value:
x̄ = (1/n)·Σ_{i=1..n} x_i (21)
standard deviation:
σ = [(1/n)·Σ_{i=1..n} (x_i - x̄)²]^(1/2) (22)
in equations (21) and (22), x_i denotes a sample value, n the number of samples, x̄ the mean, and σ the standard deviation;
finally, an 8-dimensional feature vector is obtained to characterize the texture features of an image, expressed as follows:
V_W = [W̄_1, σ_W1, W̄_2, σ_W2, …] (23)
in formula (23), W̄_b and σ_Wb denote the mean and standard deviation of the 5 texture features, b = 1, 2, 3, 4, 5.
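As a sketch of this texture branch in MATLAB: graycomatrix and graycoprops cover the energy, moment of inertia (Contrast), and correlation of equations (12), (14), and (16); the entropy of equation (13) and the inverse difference moment of equation (15) are computed by hand, since graycoprops' Homogeneity uses 1 + |i - j| rather than 1 + (i - j)². The offsets encode d = 1 at θ = 0°, 45°, 90°, 135°, and for simplicity the matrices are computed over the whole background-separated image (segmented, from the earlier sketch) rather than the 9 × 9 sliding window mentioned above:

offsets = [0 1; -1 1; -1 0; -1 -1];                        % d = 1 at 0, 45, 90, 135 degrees
glcms = graycomatrix(segmented, 'Offset', offsets, 'NumLevels', 32);
stats = graycoprops(glcms, {'Energy', 'Contrast', 'Correlation'});
[I_, J_] = ndgrid(1:32, 1:32);                             % row/column index grids
feats = zeros(4, 5);                                       % rows: directions; columns: the 5 features
for k = 1:4
    p = glcms(:,:,k) / sum(sum(glcms(:,:,k)));             % normalized co-occurrence matrix
    ent = -sum(p(p > 0) .* log2(p(p > 0)));                % entropy, equation (13) (log base 2 chosen here)
    idm = sum(p(:) ./ (1 + (I_(:) - J_(:)).^2));           % inverse difference moment, equation (15)
    feats(k,:) = [stats.Energy(k), ent, stats.Contrast(k), idm, stats.Correlation(k)];
end
mv = mean(feats, 1);  sd = std(feats, 1, 1);               % equations (21)-(22), population form
V_W = reshape([mv; sd], 1, []);                            % mean/std pairs forming the texture vector of equation (23)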
Step 5, combining the extracted features to obtain a feature vector, and storing the feature vector as a sample library;
The features extracted in step 4, namely the color feature V_Y and the texture feature V_W, are integrated into a 17-dimensional comprehensive feature vector V:
V = [V_Y, V_W] (24)
normalizing the feature vector:
if there are N images in the image library, the feature vector of the nth image can be represented as:
V_n = [V_{1,n}, V_{2,n}, …, V_{M,n}], n ∈ [1, N] (25)
where M represents the dimension of the feature vector; the N images then form an M × N matrix, represented as follows:
A = [ V_{1,1}  V_{1,2}  …  V_{1,N}
      V_{2,1}  V_{2,2}  …  V_{2,N}
        ⋮        ⋮             ⋮
      V_{M,1}  V_{M,2}  …  V_{M,N} ] (26)
the matrix A thus holds one sample per column, each row being the same feature dimension across samples; there are N samples and the dimension is M;
and the MATLAB software provides the normalization function mapminmax, which is called directly to complete the normalization of the feature vectors; mapminmax has several calling forms, and the invention adopts the following one:
[Y,PS]=mapminmax(X,YMIN,YMAX)
wherein X represents the feature data of a sample, YMIN the desired minimum value, and YMAX the desired maximum value; in this embodiment YMIN takes the value -1 and YMAX takes the value 1, i.e., the sample data are normalized to [-1, 1] and stored as the sample library;
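For instance, if A is the M × N sample matrix of equation (26), the normalization and sample-library storage might look like the following sketch (the file name is assumed):

[Y, PS] = mapminmax(A, -1, 1);           % normalize each row (feature dimension) of A to [-1, 1]
save('sampleLibrary.mat', 'Y', 'PS');    % keep PS so that later inputs are mapped consistently
% a new sample x (an M-by-1 vector) is normalized with: xn = mapminmax('apply', x, PS);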
Step 6, BP neural network analysis using MATLAB;
to analyze with a BP neural network in MATLAB, the BP neural network is first constructed, specifically as follows:
step SS1, determining training data;
training data are determined by randomly selecting 80% of the data in the sample library as training samples and 10% as test samples, using the dividerand function;
step SS2, establishing a BP neural network of a back propagation algorithm;
the BP neural network is built using the newff function, which requires 4 input parameters:
the first parameter is an R × 2 matrix defining the minimum and maximum values of the R input elements;
the second parameter is an array setting the number of neurons in each layer;
the third parameter is a cell array of the transfer function names used by each layer;
the last parameter is the name of the training function to be used;
in this embodiment, the hidden layer has 32 neurons and the output layer has 5 neurons, corresponding to the 5 plants rice, barnyard grass, goosegrass, alternanthera philoxeroides, and stephania japonica; the transfer function of the hidden-layer neurons is tansig, the transfer function of the output layer is purelin, and the training function is trainlm;
step SS3, setting network parameters;
the network parameters comprise the maximum number of training iterations, the expected error, the learning rate, the momentum factor, the display interval, and the like;
in this embodiment, the maximum number of training iterations is 8000, the expected error is 1e-007, the learning rate is 0.01, the momentum factor is 0.9, and the display interval is 15;
step SS4, training and simulating operation;
training uses the trainlm function, and simulation uses the sim function;
step SS5, when the expected error or the maximum number of training iterations is reached, ending the training and storing the trained model for later calls and for evaluating the effect;
as shown in fig. 7, the network reached the expected error after 875 training iterations;
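Assembled into code, steps SS1-SS5 might read as the following MATLAB sketch, where P is the normalized 17 × Q input matrix and T the 5 × Q target matrix; the variable names and the one-hot target encoding are our assumptions, and newff (in the R × 2-matrix calling form described above) is deprecated in recent MATLAB releases:

[trainInd, ~, testInd] = dividerand(size(P, 2), 0.8, 0.1, 0.1);    % step SS1: random 80/10/10 split
net = newff(minmax(P), [32 5], {'tansig', 'purelin'}, 'trainlm');  % step SS2: ranges, layer sizes, transfer and training fcns
net.trainParam.epochs = 8000;    % maximum number of training iterations
net.trainParam.goal   = 1e-7;    % expected error
net.trainParam.show   = 15;      % display interval
% the learning rate (0.01) and momentum factor (0.9) given above belong to
% gradient-descent training functions such as traingdm and are not trainlm
% parameters, so they are not set here
net = train(net, P(:, trainInd), T(:, trainInd));                  % step SS4: training
out = sim(net, P(:, testInd));                                     % step SS4: simulation on the test samples
save('weedNet.mat', 'net');                                        % step SS5: store the trained model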
And step 7, randomly extracting data from the sample library as input data, normalizing the input data, feeding the normalized data into the trained model, and de-normalizing the resulting output to obtain the identification result.
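A sketch of this recognition step, under the assumption that the input and output normalization settings (here called PS and PSout) were saved when the sample library and the training targets were normalized:

load('weedNet.mat', 'net');              % trained model from step 6
xn = mapminmax('apply', x, PS);          % normalize the drawn sample x exactly as in training
yn = sim(net, xn);                       % run the trained model
y  = mapminmax('reverse', yn, PSout);    % de-normalize the output
[~, cls] = max(y);                       % cls is the index of the recognized plant class (1..5)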
With this method, weeds in the rice field are identified with an accuracy of no less than 95%, a good result.
This completes the flow of the whole method.
In summary, by using MATLAB software and its built-in functions, the method processes the data and constructs the BP neural network very conveniently, greatly saving time and development cost, while the identification accuracy also reaches a good level.
Matters not described in detail in the invention are well known to those skilled in the art.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (8)

1. A rice field weed identification method is characterized by comprising the following steps:
step 1, shooting an image of a rice field;
step 2, selecting a development environment and a development tool;
step 3, preprocessing the collected image;
step 4, extracting the characteristics of the image;
step 5, combining the extracted features to obtain a comprehensive feature vector, and storing the comprehensive feature vector as a sample library;
step 6, establishing a BP neural network by using MATLAB to obtain a trained model;
and 7, randomly extracting data from the sample library as input to the trained model to obtain a recognition result.
2. The method for identifying weeds in a rice field according to claim 1, wherein step 1 comprises randomly taking images in the rice field in clear weather with a CCD digital camera, the images comprising normal rice images and weed images, the weeds comprising barnyard grass, goosegrass, alternanthera philoxeroides, stephania japonica, and the like; no fewer than 500 images are taken, each 640 × 480 pixels, in JPG format.
3. The method for identifying weeds in a rice field according to claim 2, wherein in step 2 the development environment is divided into a software environment and a hardware environment, the hardware environment including an Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30 GHz, 16.0 GB of installed memory, and a 470 GB hard disk, and the software environment being the Windows 7 Ultimate 64-bit operating system; MATLAB R2019b is selected as the development tool.
4. The rice field weed identification method according to claim 3, wherein in step 3 the image preprocessing is divided into two directions, one converting the color RGB image into an HSI image and the other converting the color RGB image into a grayscale image;
wherein the color RGB image is converted into an HSI image with the following formulas:
θ = arccos{ [(R - G) + (R - B)] / [2·((R - G)² + (R - B)(G - B))^(1/2)] }, with H = θ if B ≤ G and H = 2π - θ if B > G (1)
S = 1 - 3·min(R, G, B)/(R + G + B) (2)
I = (R + G + B)/3 (3)
in formulas (1), (2), and (3), R, G, and B denote the three components of the color RGB image;
wherein converting the color RGB image into a binary image comprises graying, gray transformation, denoising, background segmentation, and a mathematical-morphology opening operation, specifically as follows:
the weighted average method is adopted for graying, with the formula:
Gray = 0.114B + 0.587G + 0.299R (4)
in formula (4), R, G, and B denote the three components of the color RGB image;
the gray transformation, also called image enhancement, is used to simplify information; common gray transformation methods include linear transformation, logarithmic transformation, and gamma transformation, and linear transformation is adopted, specifically as follows:
let the gray range of the original image f(x, y) be [f1, F1] and let the desired gray range of the transformed image g(x, y) be [f2, F2]; then:
g(x, y) = [(F2 - f2)/(F1 - f1)]·[f(x, y) - f1] + f2 (5)
the image is denoised with a 3 × 3 median filter; the built-in MATLAB median filtering function medfilt2 can be called directly to complete the denoising, and the result is recorded as the preprocessed grayscale image;
image segmentation divides every pixel point of the whole image into several non-overlapping classes according to properties of the original image such as color, texture, and shape; the image is segmented with a threshold-based method, specifically: if the gray value of a pixel is less than a threshold, the pixel value is set to 0 and displayed as black background, otherwise it is set to 1 and displayed as white, with the formula:
g(x, y) = 0 if f(x, y) < T, and g(x, y) = 1 if f(x, y) ≥ T (6)
in formula (6), T denotes the threshold;
the opening operation means erosion followed by dilation; its function is to smooth the edges of the rice or weeds and to eliminate holes within them;
the binary image is then ANDed with the preprocessed grayscale image to obtain the final preprocessed grayscale image with the background separated.
5. The method for identifying weeds in a rice field according to claim 4, wherein the features extracted in step 4 comprise color features and texture features; the color features are the first, second, and third moments of each color component, extracted from the HSI image by the color-moment method, specifically as follows:
the first moment defines the average intensity of each color component and is calculated as follows:
μ_i = (1/N)·Σ_{j=1..N} P_ij (7)
the second moment defines the variance of each color component, and the calculation formula is as follows:
σ_i = [(1/N)·Σ_{j=1..N} (P_ij - μ_i)²]^(1/2) (8)
the third moment defines the skewness of each color component, and the calculation formula is as follows:
s_i = [(1/N)·Σ_{j=1..N} (P_ij - μ_i)³]^(1/3) (9)
in formulas (7), (8), and (9), P_ij denotes the i-th color component of the j-th pixel of the image, N is the number of pixels, and i = 1, 2, 3; finally, a 9-dimensional vector is obtained to characterize the color features of an image, expressed as follows:
V_Y = [μ1, σ1, s1, μ2, σ2, s2, μ3, σ3, s3] (10)
where μ_a, σ_a, and s_a denote the first, second, and third moments of the H, S, and I color channels respectively, a = 1, 2, 3;
the texture features are extracted from the background-separated grayscale image, the extraction adopting the gray-level co-occurrence matrix method, specifically as follows:
let f(x, y) be a two-dimensional digital image of M × N pixels with L quantization levels, so that the co-occurrence matrix is L × L; let S be the set of pixel pairs in the target region R having a specific spatial relationship; the co-occurrence matrix P is then defined as:
P(g1, g2 | d, θ) = #{ ((x1, y1), (x2, y2)) ∈ S | f(x1, y1) = g1, f(x2, y2) = g2 } / #S (11)
in formula (11), # denotes the cardinality of a set, i.e., #S is the number of pixel pairs contributing to P(g1, g2 | d, θ); P is an L × L matrix whose entries lie in (0, 1); d is the distance between (x1, y1) and (x2, y2); and θ is the angle between the line through (x1, y1) and (x2, y2) and the horizontal axis;
the texture feature quantities extracted from the gray-level co-occurrence matrix comprise the 5 features of energy, entropy, moment of inertia, inverse difference moment, and correlation; taking d = 1 and θ = 0°, 45°, 90°, 135°, the gray-level co-occurrence matrices in the four directions are calculated respectively, specifically as follows:
the energy reflects the coarseness of the image texture, and the calculation formula is as follows:
ASM = Σ_i Σ_j P(i, j)² (12)
the entropy reflects the complexity of the gray level change of the whole image, and the calculation formula is as follows:
ENT = -Σ_i Σ_j P(i, j)·log P(i, j) (13)
the moment of inertia reflects the definition of the image and the depth of the texture grooves, and the calculation formula is as follows:
CON = Σ_i Σ_j (i - j)²·P(i, j) (14)
the inverse difference moment measures the local stability of the image, and the calculation formula is as follows:
IDM = Σ_i Σ_j P(i, j)/[1 + (i - j)²] (15)
the correlation is used to describe the similarity between the row and column elements in the matrix, and the calculation formula is as follows:
COR = [Σ_i Σ_j i·j·P(i, j) - μ_x·μ_y]/(σ_x·σ_y) (16)
in equation (16), there are:
μ_x = Σ_i i·Σ_j P(i, j) (17)
μ_y = Σ_j j·Σ_i P(i, j) (18)
σ_x² = Σ_i (i - μ_x)²·Σ_j P(i, j) (19)
σ_y² = Σ_j (j - μ_y)²·Σ_i P(i, j) (20)
the mean and standard deviation of each texture parameter are then calculated over the four directional matrices, with the following formulas:
mean value:
x̄ = (1/n)·Σ_{i=1..n} x_i (21)
standard deviation:
σ = [(1/n)·Σ_{i=1..n} (x_i - x̄)²]^(1/2) (22)
in equations (21) and (22), x_i denotes a sample value, n the number of samples, x̄ the mean, and σ the standard deviation;
finally, an 8-dimensional feature vector is obtained to characterize the texture features of an image, expressed as follows:
V_W = [W̄_1, σ_W1, W̄_2, σ_W2, …] (23)
in formula (23), W̄_b and σ_Wb denote the mean and standard deviation of the 5 texture features, b = 1, 2, 3, 4, 5.
6. The method for identifying weeds in rice fields according to claim 5, wherein in the step 5, the color features and the texture features extracted in the step 4 are integrated into a 17-dimensional comprehensive feature vector, which is expressed as follows:
V = [V_Y, V_W] (24)
normalizing the feature vector:
MATLAB provides the normalization function mapminmax, which is called directly to complete the normalization of the feature vectors; mapminmax has several calling forms, and the following one is adopted:
[Y,PS]=mapminmax(X,YMIN,YMAX)
in the invention, YMIN takes the value -1 and YMAX takes the value 1, i.e., the sample data are normalized to [-1, 1].
7. The method according to claim 6, wherein the step 6 of establishing a BP neural network by using MATLAB comprises:
step SS1, determining training data;
step SS2, establishing a BP neural network of a back propagation algorithm;
step SS3, setting network parameters;
step SS4, training and simulating operation;
step SS5, when the expected error or the maximum number of training iterations is reached, ending the training and storing the trained model for later calls and for evaluating the effect;
in the invention, the input layer has 17 neurons, the hidden layer has 32 neurons, and the output layer has 5 neurons corresponding to the 5 plants rice, barnyard grass, goosegrass, alternanthera philoxeroides, and stephania japonica; the transfer function of the hidden-layer neurons is tansig and that of the output layer is purelin; the training function is trainlm, and the sim function is used for simulation.
8. The method for identifying weeds in a rice field according to claim 7, wherein step 7 specifically comprises randomly extracting data from the sample library as input data, normalizing the input data, feeding the normalized data into the trained model, and de-normalizing the resulting output to obtain the identification result.
CN202210167579.5A 2022-02-23 2022-02-23 Rice field weed identification method Pending CN114596562A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210167579.5A CN114596562A (en) 2022-02-23 2022-02-23 Rice field weed identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210167579.5A CN114596562A (en) 2022-02-23 2022-02-23 Rice field weed identification method

Publications (1)

Publication Number Publication Date
CN114596562A true CN114596562A (en) 2022-06-07

Family

ID=81806616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210167579.5A Pending CN114596562A (en) 2022-02-23 2022-02-23 Rice field weed identification method

Country Status (1)

Country Link
CN (1) CN114596562A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315493A (en) * 2023-11-29 2023-12-29 浙江天演维真网络科技股份有限公司 Identification and resolution method, device, equipment and medium for field weeds
CN117315493B (en) * 2023-11-29 2024-02-20 浙江天演维真网络科技股份有限公司 Identification and resolution method, device, equipment and medium for field weeds


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination