High-precision crop disease and insect pest image identification method
Technical Field
The invention relates to the technical field of image recognition, in particular to a high-precision crop disease and insect pest image recognition method.
Background
In the 1950s, computer vision technology was used mainly for the recognition and analysis of two-dimensional images; in the 1960s, scientists began to study the recognition of three-dimensional images with computer vision. It was not until the 1980s that scholars put forward many new theories and research methods in computer vision, which laid a foundation for the application and research of computer vision in agriculture. In the early days, computer vision technology could not be well applied to intelligent agriculture, especially to the identification of crop diseases and insect pests. Early agricultural pest identification relied mainly on manual recording and photographing, which severely delayed the timely treatment of crop pests.
In agricultural applications, early image pattern recognition technology was mainly applied to crop quality monitoring, crop growth environment control, crop classification and the like, while little technology and scientific research addressed the identification and classification of crop pests in smart agriculture. On this basis, some scientists abroad began relatively early to research and experiment on the identification and classification of crop diseases and insect pests with computer vision technology. In China, research and application of computer vision started later because the early technology was immature; the conditions of crop diseases and insect pests were mainly monitored and recorded on site by agricultural experts, so the application of intelligent pest and disease identification technology was not widespread.
The most important part of intelligent pest and disease identification is extracting the features of each image. Traditional pest and disease image identification mainly uses a convolutional neural network for classification and identification: the collected image features are classified and identified by exploiting the hierarchical architecture of the convolutional neural network and its learning capability, and the image is classified by a softmax function (also called multinomial logistic regression). However, when the image data set is particularly large, the prediction performance of the softmax-based classification method is low. Obtaining higher image recognition accuracy with a convolutional neural network requires more learning parameters and more training data, which increases the recognition complexity and the image data classification complexity. Moreover, with the image pixels and size unchanged, continuously increasing the depth of the convolutional neural network structure does not keep improving the accuracy of image recognition.
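As a minimal illustration of the softmax classification step mentioned above (a generic sketch, not code from the invention), the function maps the class scores produced by a network's last layer to a probability distribution over the classes:

```python
import math

def softmax(logits):
    # Subtract the maximum score for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical class scores from the last layer of a CNN
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)
print(probs)                      # probabilities summing to 1
print(probs.index(max(probs)))   # predicted class index
```

The cost noted in the text follows from this form: every class contributes a term to the normalizing sum, so prediction over very many classes and very large data sets becomes expensive.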
At present, the main existing approaches are an image feature extraction method based on a Convolutional Neural Network (CNN) and a Biomimetic Pattern Recognition (BPR) method.
The image feature extraction method based on the convolutional neural network mainly comprises forward propagation and backward propagation through the network. It uses an architecture of alternating convolutional layers with an output layer in which each character class is represented by a single node. After the convolutional neural network is trained, only the parameters in the fully connected layers are retained, the feature vectors in those layers are extracted with these parameters, and the feature vectors are then classified and identified by a classifier. This method mainly exploits the learning capability of the neural network and must be divided into several layers, each with its own training method. However, the image data must be centered and normalized, so images of different sizes cannot be trained together and must be split. In addition, the convolutional neural network has a learning function but no memory function; it handles common two-dimensional images well, but its processing capability for videos or natural language is not ideal.
The biomimetic pattern recognition model method is a model method based on cognition of objects that simulates the cognitive function of human beings, classifying images first and then recognizing them according to the classification. Its classification process mainly constructs complex geometric figures in a pixel space, covers the figures with neurons, finds the minimum distance to each base point in the figures, and finally calculates the coverage in each dimension for classification and subsequent recognition. The biomimetic pattern recognition model method focuses on classification recognition, constructing complex geometric figures in the pixel space and covering them with neurons. In the covering process, the coverage rate decreases as the space dimension increases, and when the data set is too large, covering the space figures becomes troublesome, so the efficiency decreases and the recognition accuracy decreases.
In view of the foregoing, there is a need for further improvements and innovations in the prior art.
Disclosure of Invention
The invention aims to provide a high-precision crop pest and disease image identification method that is reasonable in conception. On the premise of the original CNN (convolutional neural network) model and BPR (biomimetic pattern recognition) model, it covers each single shape with multi-dimensional neural nodes, thereby removing the limitation on image identification dimensionality and improving the identification dimensionality. It can extract the features of an image well from a large number of heterogeneous data sets and perform classification and identification, does not suffer from identification precision decreasing as the amount of image data increases, and obviously improves the image identification precision.
The technical scheme of the invention is as follows:
the method for identifying a high-precision crop disease and insect pest image specifically comprises: firstly, extracting refined features of the image and forming a feature set from the extracted features; constructing different graphs from the features in the feature set; then sequentially covering each graph with multi-dimensional neural nodes and stripping the features within the coverage range of the multi-dimensional neural nodes from the feature set; covering each graph constructed in the feature space set one by one with the multi-dimensional neural nodes according to this process until all the features in the feature set are stripped and the set is empty; and deriving the discontinuous fall coverage rate of image identification, namely the identification accuracy, from the obtained final coverage range.
The high-precision crop disease and insect pest image identification method specifically comprises the following steps:
(1) Construct a training set H = {H1, H2, …, HL}, where the training set contains N classes; HK is the K-th class and contains N sampling points, HK = {L1, L2, …, LN};
(2) Calculate the distance between any two sampling points in HK, and find two sampling points M11 and M12 in HK such that ρ(M11, M12) = min{ρ(Hi, Hj)}, where Hi, Hj ∈ HK and Hi ≠ Hj;
(3) Find a third sampling point M13, M13 ∈ HK − {M11, M12}, that does not lie on the straight line formed by sampling points M11 and M12; then connect the three sampling points M13, M11 and M12 to form a plane triangle A1;
(4) Cover the triangular pixel region A1 with a neuron; the size of the covered space is P1 = {Y | ρ(Y, F1) < Fh, Y ∈ Rn}, where ρ(Y, F1) denotes the distance between Y and F1;
(5) Judge whether each sampling point in H lies within the coverage area P1; if a sampling point is in the coverage area, strip it from H, letting HK = HK − {Li | Li ∈ P1};
(6) From the set HK, find a new sampling point M21 such that the sum of the distances between M21 and the three sampling points M13, M11 and M12 is minimum;
(7) Rename two of the three sampling points {M13, M11, M12} as M22 and M23, where M22 and M23 are the two sampling points with the shortest distance to sampling point M21; then join M22, M23 and M21 to form a second plane triangle A2;
(8) Cover the triangular pixel region A2 with a neuron to obtain the covered space P2; the set HK becomes HK = HK − {M21};
(9) Repeat steps (5) to (7) to find another sampling point Mi, Mi ∈ HK; mark the newly found sampling point as Mi1 and, as in step (7), mark the two sampling points nearest to Mi1 as Mi2 and Mi3;
(10) Connect the three sampling points Mi3, Mi1 and Mi2 to form a plane triangle Ai; cover it with a neuron to obtain the covered space Pi; the set HK becomes HK = HK − {Mi};
(11) Finally, judge whether the set HK is empty; if not, repeat steps (9)–(10) until HK is empty; once it is empty, derive the discontinuous fall coverage rate of image recognition, namely the recognition accuracy, from the final coverage range obtained for the K classes.
In the high-precision crop disease and insect pest image identification method, each single shape is covered by multi-dimensional neural nodes; the multi-dimensional neural nodes and single shapes are defined as follows:
① Let A0, A1, …, AS (S ≤ N) be points in an N-dimensional feature space VS that are mutually uncorrelated, i.e. the vectors between them are linearly independent; then the set of pixel points ΩS is the S-dimensional single shape with A0, A1, …, AS as vertices;
② Let Q be a polyhedron in the feature space VS, and let y ∈ VS, y ∉ Q. The distance between y and the polyhedron Q satisfies L(y, Q) = Lmin, where Lmin = min(L(x, y)) over x ∈ Q. If there exists a set R satisfying R = {y | L(y, Q) < Ah} with Ah > 0, then R is called a probability coverage of the polyhedron;
When Q in definitions ① and ② is a line segment, R is a straight-through neuron; when Q is a plane triangle, R is a three-dimensional neuron; and when Q is a tetrahedron, R is a four-dimensional neuron.
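The linear-independence condition in definition ① can be checked numerically. The sketch below (illustrative helper names, not part of the claimed method) tests whether vertices A0, …, AS actually span an S-dimensional single shape by computing the rank of the edge vectors A1 − A0, …, AS − A0:

```python
def rank(vectors, eps=1e-10):
    # Rank of a list of row vectors via Gaussian elimination.
    m = [list(v) for v in vectors]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > eps), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > eps:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def vertices_form_single_shape(points, eps=1e-10):
    # A0..AS span an S-dimensional single shape iff the edge vectors
    # A1-A0, ..., AS-A0 are linearly independent.
    a0 = points[0]
    edges = [[x - y for x, y in zip(p, a0)] for p in points[1:]]
    return rank(edges, eps) == len(edges)

# Triangle in 3-D: two independent edge vectors, a valid 2-D single shape
print(vertices_form_single_shape([(0, 0, 0), (1, 0, 0), (0, 1, 0)]))   # True
# Three collinear points: dependent edges, not a valid single shape
print(vertices_form_single_shape([(0, 0, 0), (1, 0, 0), (2, 0, 0)]))   # False
```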
Has the advantages that:
the high-precision crop disease and insect pest image identification method does not reduce the identification precision along with the increase of the image data quantity. In the method, the image data set is classified, N triangular areas are constructed in multiple dimensions, and the triangular areas constructed by the N triangular areas are covered by the neural network, so that the limitation on the image identification dimension is removed, and the identification dimension is effectively improved. Meanwhile, the characteristics of the image can be well extracted from a large number of heterogeneous data sets and classified and identified, and the accuracy of image identification is greatly improved. Because the image is divided into different blocks in image recognition, each block contains as little pixel information as possible, the invention can extract the image features as accurately as possible in the process of splitting and extracting the image features, and can perform coverage classification as comprehensively as possible in the process of coverage classification, thereby saving the complexity of the recognition work and more importantly improving the accuracy of the image recognition.
Drawings
FIG. 1 is a flow chart of the high-precision crop pest image identification method of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described below clearly and completely, and it is obvious that the described embodiments are some, not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In smart agriculture, pest and disease recognition of crops plays a crucial role. To ensure identification accuracy, the high-precision crop disease and insect pest image identification method uses multi-dimensional neural nodes to cover each single shape; the multi-dimensional neural nodes and the single shape are defined as follows:
(1) Let A0, A1, …, AS (S ≤ N) be points in an N-dimensional feature space VS that are mutually uncorrelated, i.e. the vectors between them are linearly independent. Then the set of pixel points ΩS is the S-dimensional single shape with A0, A1, …, AS as vertices. That is, a line segment, a plane figure and a polyhedron are regarded as one-dimensional, two-dimensional and multi-dimensional single shapes in the multi-dimensional space, respectively.
(2) Let Q be a polyhedron in the feature space VS, and let y ∈ VS, y ∉ Q. The distance between y and the polyhedron Q satisfies L(y, Q) = Lmin, where Lmin = min(L(x, y)) over x ∈ Q. If there exists a set R satisfying R = {y | L(y, Q) < Ah} with Ah > 0, then this R is called a probability coverage of the polyhedron.
When Q within the above definition is a line segment, R is a straight-through neuron; when Q is a plane triangle, R is a three-dimensional neuron; and when Q is a tetrahedron, R is a four-dimensional neuron.
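For the simplest neuron above (Q a line segment, R a straight-through neuron), the probability coverage can be sketched as the set of points whose distance to the segment is below the threshold Ah. This is an illustrative sketch; the function names and the 2-D setting are assumptions, not taken from the invention:

```python
import math

def dist_point_to_segment(p, a, b):
    # Euclidean distance from point p to the segment with endpoints a and b.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:          # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    # Project p onto the line through a and b, clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def covers(p, a, b, Ah):
    # p lies in the probability coverage R of segment Q = [a, b] iff L(p, Q) < Ah.
    return dist_point_to_segment(p, a, b) < Ah

print(covers((1.0, 0.5), (0.0, 0.0), (2.0, 0.0), 1.0))   # True: distance 0.5 < 1.0
print(covers((1.0, 2.0), (0.0, 0.0), (2.0, 0.0), 1.0))   # False: distance 2.0 >= 1.0
```

A plane-triangle or tetrahedron neuron follows the same pattern with the corresponding point-to-figure distance in place of the segment distance.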
To measure whether an identification method has good practicability, both the identification complexity when the image data set is large and the accuracy of the identified result must be considered. On this basis, the image can first undergo refined feature extraction, a feature space can be constructed from the extracted features, several different graphs can then be constructed in the feature space and covered, and accurate classification and identification can be performed according to the feature coverage, so that the identification efficiency and the accuracy are both high.
As shown in fig. 1, the method for identifying high-precision crop pest and disease images of the present invention specifically comprises: performing refined feature extraction on an image and forming a feature set from the extracted features; constructing different patterns from the features in the feature set; sequentially covering each pattern with multi-dimensional neural nodes and stripping the features within the coverage range of the multi-dimensional neural nodes from the feature set; covering each pattern constructed in the feature space set one by one with the multi-dimensional neural nodes according to this process until all the features in the feature set are stripped and the set is empty; and deriving the discontinuous fall coverage rate of image identification, namely the identification precision, from the obtained final coverage range.
The invention relates to a high-precision crop disease and insect pest image identification method, which specifically comprises the following steps:
(1) Construct a training set H = {H1, H2, …, HL}, where the training set contains N classes; HK is the K-th class and contains N sampling points, HK = {L1, L2, …, LN};
(2) Calculate the distance between any two sampling points in HK, and find two sampling points M11 and M12 in HK such that ρ(M11, M12) = min{ρ(Hi, Hj)}, where Hi, Hj ∈ HK and Hi ≠ Hj;
(3) Find a third sampling point M13, M13 ∈ HK − {M11, M12}, that does not lie on the straight line formed by sampling points M11 and M12; then connect the three sampling points M13, M11 and M12 to form a plane triangle A1;
(4) Cover the triangular pixel region A1 with a neuron; the size of the covered space is P1 = {Y | ρ(Y, F1) < Fh, Y ∈ Rn}, where ρ(Y, F1) denotes the distance between Y and F1;
(5) Judge whether each sampling point in H lies within the coverage area P1; if a sampling point is in the coverage area, strip it from H, letting HK = HK − {Li | Li ∈ P1};
(6) From the set HK, find a new sampling point M21 such that the sum of the distances between M21 and the three sampling points M13, M11 and M12 is minimum;
(7) Rename two of the three sampling points {M13, M11, M12} as M22 and M23, where M22 and M23 are the two sampling points with the shortest distance to sampling point M21; then join M22, M23 and M21 to form a second plane triangle A2;
(8) Cover the triangular pixel region A2 with a neuron to obtain the covered space P2; the set HK becomes HK = HK − {M21};
(9) Repeat steps (5) to (7) to find another sampling point Mi, Mi ∈ HK; mark the newly found sampling point as Mi1 and, as in step (7), mark the two sampling points nearest to Mi1 as Mi2 and Mi3;
(10) Connect the three sampling points Mi3, Mi1 and Mi2 to form a plane triangle Ai; cover it with a neuron to obtain the covered space Pi; the set HK becomes HK = HK − {Mi};
(11) Finally, judge whether the set HK is empty; if not, repeat steps (9)–(10) until HK is empty; once it is empty, derive the discontinuous fall coverage rate of image recognition, namely the recognition accuracy, from the final coverage range obtained for the K classes.
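The steps above can be sketched in code for a single class in a 2-D feature space. This is a simplified illustration under stated assumptions: the third vertex is chosen as the nearest non-collinear point, every point within threshold Fh of a triangle is stripped (steps (5) and (8) merged), and degenerate tails of one or two points are covered directly; none of these choices is fixed by the invention itself:

```python
import math

def _cross(o, u, v):
    # 2-D cross product of (u - o) and (v - o); zero means collinear.
    return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])

def _dist(p, q):
    return math.hypot(p[0]-q[0], p[1]-q[1])

def _seg_dist(p, a, b):
    # Distance from point p to segment ab (handles the degenerate case a == b).
    dx, dy = b[0]-a[0], b[1]-a[1]
    L2 = dx*dx + dy*dy
    if L2 == 0.0:
        return _dist(p, a)
    t = max(0.0, min(1.0, ((p[0]-a[0])*dx + (p[1]-a[1])*dy) / L2))
    return _dist(p, (a[0] + t*dx, a[1] + t*dy))

def tri_dist(p, a, b, c):
    # Distance from p to the filled triangle abc: 0 if p lies inside,
    # otherwise the minimum distance to the three edges.
    edge = min(_seg_dist(p, a, b), _seg_dist(p, b, c), _seg_dist(p, c, a))
    if abs(_cross(a, b, c)) < 1e-12:
        return edge                       # degenerate triangle
    d = (_cross(a, b, p), _cross(b, c, p), _cross(c, a, p))
    if min(d) >= 0 or max(d) <= 0:
        return 0.0                        # p is inside (or on) the triangle
    return edge

def cover_class(points, Fh):
    # Steps (1)-(11) for one class HK: greedily build plane triangles and
    # strip every sampling point inside the neuron's coverage, until empty.
    remaining = list(points)
    triangles = []
    while remaining:
        if len(remaining) <= 2:
            # Degenerate tail (assumption): cover the leftovers directly.
            tri = (remaining[0], remaining[-1], remaining[0])
        else:
            # Steps (2)-(3): closest pair, then the nearest non-collinear point.
            pairs = [(i, j) for i in range(len(remaining))
                            for j in range(i + 1, len(remaining))]
            i, j = min(pairs, key=lambda ij: _dist(remaining[ij[0]], remaining[ij[1]]))
            a, b = remaining[i], remaining[j]
            cands = [p for k, p in enumerate(remaining)
                     if k not in (i, j) and abs(_cross(a, b, p)) > 1e-12]
            if not cands:
                tri = (a, b, a)           # all points collinear with the pair
            else:
                c = min(cands, key=lambda p: _dist(p, a) + _dist(p, b))
                tri = (a, b, c)
        triangles.append(tri)
        # Step (5): strip everything inside this neuron's coverage.
        remaining = [p for p in remaining if tri_dist(p, *tri) >= Fh]
    return triangles

pts = [(0, 0), (1, 0), (0, 1), (5, 5), (5, 6), (6, 5)]
tris = cover_class(pts, 0.5)
# Every sampling point now lies inside the coverage of some triangle.
print(all(min(tri_dist(p, *t) for t in tris) < 0.5 for p in pts))   # True
```

The fraction of test points falling inside these triangle coverages would then play the role of the coverage rate, i.e. the recognition accuracy, derived in step (11).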
The method has a reasonable conception: it covers each single shape with multi-dimensional neural nodes so as to remove the limitation on the dimension of image recognition and improve the recognition dimension; it extracts the features of an image well from a large number of heterogeneous data sets and performs classification and recognition; it avoids the problem of recognition precision decreasing as the amount of image data increases; and it obviously improves the image recognition precision.