CN111160414A - High-precision crop disease and insect pest image identification method - Google Patents
- Publication number
- CN111160414A (application number CN201911271371.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- identification
- feature
- coverage
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/02—Agriculture; Fishing; Forestry; Mining
Abstract
The invention relates to a high-precision crop disease and insect pest image identification method. The method first performs refined feature extraction on an image and forms a feature set from the extracted features; different graphs are constructed from the features in the feature set; each graph is then covered in turn by multi-dimensional neural nodes, and the features within the coverage of the multi-dimensional neural nodes are stripped from the feature set; this process is repeated, covering each graph constructed in the feature space set one by one, until all features in the feature set are stripped and the set is empty, at which point the discontinuous fall coverage rate of image identification, namely the identification accuracy, is derived from the final coverage obtained. The method is reasonably conceived: it raises the identification dimensionality, extracts and classifies image features well across large heterogeneous data sets, does not suffer falling identification accuracy as the amount of image data grows, and markedly improves image identification accuracy.
Description
Technical Field
The invention relates to the technical field of image recognition, in particular to a high-precision crop disease and insect pest image recognition method.
Background
In the 1950s, computer vision technology was used mainly for the recognition and analysis of two-dimensional images, and in the 1960s scientists began to study the recognition of three-dimensional images with it. It was not until the 1980s that more scholars put forward new theories and research methods in computer vision, which laid the foundation for the application and research of computer vision technology in agriculture. Early on, computer vision technology could not be applied well to smart agriculture, especially to the identification of crop diseases and insect pests. Early agricultural pest identification relied mainly on manual recording and photographing, which severely delayed the timely treatment of crop diseases and pests.
In agriculture, early image pattern recognition technology was applied mainly to crop quality monitoring, crop growth environment control, crop classification and the like, with little technology or scientific research devoted to the identification and classification of crop diseases and insect pests in smart agriculture. On this basis, some scientists abroad began researching and experimenting with computer vision for crop disease and pest identification and classification relatively early, whereas in China such research and application started late because the early technology was immature; crop disease and pest conditions were mainly monitored and recorded on site by agricultural experts, so intelligent pest identification technology was not widely applied.
The most important part of intelligent disease and pest identification is extracting the features of each image, and traditional disease and pest image identification mainly performs classification and identification with a convolutional neural network. The collected image features are classified using the hierarchical structure and learning characteristics of the convolutional neural network architecture, and the image is classified with a softmax function (also known as multinomial logistic regression). However, when the image data set is particularly large, the predictive performance of softmax-based classification is low. Obtaining higher image recognition accuracy from a convolutional neural network requires more learnable parameters and more training data, which increases the complexity of both recognition and image data classification. Moreover, with image pixels and size unchanged, continually deepening the convolutional neural network structure does not keep improving recognition accuracy.
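As context for the softmax classifier mentioned above: it maps a vector of class scores to a probability distribution, and the predicted class is the one with the highest probability. A minimal sketch follows; the score values are illustrative and not taken from the patent:

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical class scores for one image over 3 pest classes.
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)            # probabilities summing to 1
pred = int(np.argmax(probs))       # index of the most probable class
```

This illustrates why prediction degrades gracefully but training cost grows with data: the softmax itself is cheap, while fitting the scores that feed it requires ever more parameters and examples.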
At present, there are mainly the image feature extraction method based on the Convolutional Neural Network (CNN) and the Bionic Pattern Recognition (BPR) method.
The CNN-based image feature extraction method relies mainly on the forward and backward propagation of the convolutional neural network. The architecture alternates convolutional layers with pooling layers and ends in an output layer in which each class is represented by a single node. After the network is trained, only the parameters of the fully connected layers are retained; these parameters are used to extract feature vectors, which are then classified and identified by a classifier. The method exploits the learning characteristic of neural networks but must be divided into several layers, each with its own training procedure, and the image data must be centered and normalized, so images of different sizes cannot be trained together and must be split. Furthermore, a convolutional neural network has a learning function but no memory function, so it handles ordinary two-dimensional images well but is not ideal for processing video or natural language.
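The two-stage pipeline described above (convolve to extract a feature vector, then hand that vector to a separate classifier) can be illustrated with a toy extractor. Everything below is an illustrative stand-in, not the patent's actual network: the kernels are random, and global average pooling after ReLU plays the role of the retained fully connected stage:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation, used as a stand-in conv layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def extract_features(img, kernels):
    # One feature value per kernel: convolve, ReLU, global-average-pool.
    return np.array([np.maximum(conv2d_valid(img, k), 0).mean()
                     for k in kernels])

kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
img = rng.standard_normal((8, 8))   # a stand-in 8x8 grayscale image
feat = extract_features(img, kernels)  # 4-dimensional feature vector
```

The resulting `feat` vector is what a downstream classifier would consume; note the fixed input size the sketch assumes, mirroring the normalization constraint discussed above.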
The bionic pattern recognition model method simulates human cognition of objects in order to classify images and then recognize them according to the classes. Its classification process mainly constructs complex geometric figures in the pixel space, covers those figures with convolutional neurons, finds the minimum distance to each base point in the figures, and finally computes the covered images for classification and recognition. It is a method focused on classification and recognition; however, in the covering process the coverage rate drops as the spatial dimension increases, and when the data set is too large, covering the spatial figures becomes cumbersome, so efficiency falls and recognition accuracy declines.
In view of the foregoing, there is a need for further improvements and innovations in the prior art.
Disclosure of Invention
The invention aims to provide a high-precision crop disease and insect pest image identification method that, building on the original CNN (convolutional neural network) and BPR (bionic pattern recognition) models, is reasonably conceived: it covers each single shape with multi-dimensional neural nodes so as to remove the limitation on image identification dimensionality, raises the identification dimensionality, extracts and classifies image features well across large heterogeneous data sets, does not suffer falling identification accuracy as the amount of image data grows, and markedly improves image identification accuracy.
The technical scheme of the invention is as follows:
the method for identifying the high-precision crop disease and insect pest image specifically comprises the steps of firstly extracting the refined features of the image, forming a feature set by the extracted features, constructing different graphs by the features in the feature set, then sequentially covering each graph through multi-dimensional nerve nodes, stripping the features in the coverage range of the multi-dimensional nerve nodes from the feature set, then covering each graph constructed in a feature space set one by one through the multi-dimensional nerve nodes according to the process until all the features in the feature set are stripped to be empty, and deriving the discontinuous fall coverage rate of image identification, namely the identification accuracy according to the obtained final coverage range.
The high-precision crop disease and insect pest image identification method specifically comprises the following steps:
(1) Construct a training set H = {H_1, H_2, …, H_L}, where the training set contains N classes; H_K is the K-th class and contains N sampling points: H_K = {L_1, L_2, …, L_N};
(2) Calculate the distance between every two sampling points in H_K, and find the two sampling points M_11 and M_12 in H_K such that ρ(M_11, M_12) = min{ρ(H_i, H_j) | H_i, H_j ∈ H_K, H_i ≠ H_j};
(3) Find a third sampling point M_13 ∈ H_K − {M_11, M_12} that does not lie on the straight line through M_11 and M_12, then connect M_13, M_11 and M_12 so that the three sampling points form a plane triangle A_1;
(4) Cover the triangular pixel region A_1 with a neuron; the covered space is P_1 = {Y | ρ(Y, F_1) < F_h, Y ∈ R^n}, where ρ(Y, F_1) denotes the distance between Y and F_1;
(5) Judge whether each sampling point in H lies within the coverage P_1; if so, strip it from H: H_K = H_K − {L_i | L_i ∈ P_1};
(6) From the set H_K, find a new sampling point M_21 such that the sum of the distances from M_21 to the three sampling points M_13, M_11 and M_12 is minimum;
(7) Rename the two sampling points of {M_13, M_11, M_12} that are closest to M_21 as M_22 and M_23, then join M_22, M_23 and M_21 to form a second plane triangle A_2;
(8) Cover the pixel triangle region A_2 with a neuron; the covered space is P_2, and the set H_K becomes H_K = H_K − {M_21};
(9) Repeat steps (5)–(7) to find another sampling point M_i ∈ H_K; mark the newly found sampling point as M_i1, and, as in step (7), mark the two sampling points closest to M_i1 as M_i2 and M_i3;
(10) Connect M_i3, M_i1 and M_i2 so that the three sampling points form a plane triangle A_i; cover it with a neuron, obtaining the covered space P_i, and the set H_K becomes H_K = H_K − {M_i};
(11) Finally, judge whether the set H_K is empty; if not, repeat steps (9)–(10) until it is empty; once empty, derive the discontinuous fall coverage rate of image identification, namely the identification accuracy, from the obtained final coverage of the K classes.
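The steps above can be sketched in two dimensions. This is a simplified, hypothetical illustration, not the patent's implementation: the distance threshold, the point set, and all names are assumptions, and the coverage test approximates distance to a triangle by distance to its nearest vertex rather than to the whole region:

```python
import numpy as np
from itertools import combinations

def point_triangle_distance(p, tri):
    # Simplification: distance to the nearest vertex of the triangle,
    # standing in for the true point-to-region distance rho(Y, F).
    return min(np.linalg.norm(p - v) for v in tri)

def cover_class(points, threshold):
    """Greedy triangle coverage of one class's sampling points (steps 2-11)."""
    remaining = [np.asarray(p, float) for p in points]
    triangles = []
    while len(remaining) >= 3:
        # Step (2): find the closest pair of sampling points.
        i, j = min(combinations(range(len(remaining)), 2),
                   key=lambda ij: np.linalg.norm(remaining[ij[0]] -
                                                 remaining[ij[1]]))
        a, b = remaining[i], remaining[j]
        # Steps (3)/(6): third point minimising the summed distance to a, b.
        rest = [k for k in range(len(remaining)) if k not in (i, j)]
        k = min(rest, key=lambda k: np.linalg.norm(remaining[k] - a) +
                                    np.linalg.norm(remaining[k] - b))
        tri = (a, b, remaining[k])
        triangles.append(tri)
        # Step (5): strip every point inside this triangle's coverage.
        remaining = [p for p in remaining
                     if point_triangle_distance(p, tri) >= threshold]
    return triangles, remaining

# Illustrative sampling points for one class.
points = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
triangles, remaining = cover_class(points, threshold=2.0)
```

With this generous threshold one triangle's coverage strips the entire class, terminating the loop; the list of triangles is the final coverage from which step (11) would derive the coverage rate.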
In the high-precision crop disease and insect pest image identification method, multi-dimensional neural nodes are used to cover each single shape; the multi-dimensional neural nodes and the single shape are defined as follows:
① Let A_0, A_1, …, A_S (S ≤ N) be mutually uncorrelated points in an N-dimensional feature space V^S, so that the vectors A_0A_1, A_0A_2, …, A_0A_S are linearly independent; the set of pixel points Ω_S with A_0, A_1, …, A_S as vertices is the S-dimensional single shape;
② Let Q be a polyhedron in the feature space, with y ∈ V^S and y ∉ Q; the distance between y and the polyhedron Q satisfies L(y, Q) = L_min = min_{x ∈ Q} L(x, y); if there exists a set R = {y | L(y, Q) ≤ Ah, y ∈ V^S} with Ah > 0, then R is called the probability coverage of the polyhedron;
When Q in definitions ① and ② is a line segment, R is a straight-through neuron; when Q is a plane triangle, R is a three-dimensional neuron; and when Q is a tetrahedron, R is a four-dimensional neuron.
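The probability coverage R = {y | L(y, Q) ≤ Ah} can be checked directly in the simplest case, where Q is a line segment (the straight-through neuron). The sketch below assumes Euclidean distance and illustrative coordinates; none of it comes from the patent itself:

```python
import numpy as np

def dist_to_segment(y, a, b):
    """Euclidean distance from point y to segment ab (the case Q = a segment)."""
    y, a, b = map(np.asarray, (y, a, b))
    ab = b - a
    # Project y onto the segment, clamping to the endpoints.
    t = np.clip(np.dot(y - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(y - (a + t * ab))

def covered(y, a, b, Ah):
    # Membership test for R = {y | L(y, Q) <= Ah}: probability coverage of Q.
    return dist_to_segment(y, a, b) <= Ah
```

Replacing the segment with a plane triangle or a tetrahedron, with the corresponding distance function, yields the three- and four-dimensional neurons of the definition.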
Beneficial effects:
The high-precision crop disease and insect pest image identification method does not lose identification accuracy as the amount of image data grows. The method classifies the image data set, constructs N triangular regions in multiple dimensions, and covers those triangular regions with the neural network, thereby removing the limitation on the image identification dimensionality and effectively raising it. At the same time, image features can be extracted well from large heterogeneous data sets and classified, greatly improving the accuracy of image identification. Because the image is divided into blocks during identification, each containing as little pixel information as possible, the invention can extract image features as accurately as possible during splitting and extraction, and cover and classify them as comprehensively as possible during coverage classification, which reduces the complexity of the identification work and, more importantly, improves the accuracy of image identification.
Drawings
FIG. 1 is a flow chart of the high-precision crop pest image identification method of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described below clearly and completely, and it is obvious that the described embodiments are some, not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In smart agriculture, pest recognition of crops plays a crucial role. In order to ensure the accuracy of identification, the identification method of the high-precision crop disease and insect pest image uses multidimensional neural nodes to cover each single shape; wherein the multidimensional neural nodes and the single shape are defined as follows:
(1) Let A_0, A_1, …, A_S (S ≤ N) be mutually uncorrelated points in an N-dimensional feature space V^S, so that the vectors A_0A_1, A_0A_2, …, A_0A_S are linearly independent. The set of pixel points Ω_S with A_0, A_1, …, A_S as vertices is the S-dimensional single shape. That is, a line segment, a plane figure and a polyhedron are regarded respectively as one-dimensional, two-dimensional and multi-dimensional single shapes in the multi-dimensional space.
(2) Let Q be a polyhedron in the feature space, with y ∈ V^S and y ∉ Q. The distance between y and the polyhedron Q satisfies L(y, Q) = L_min = min_{x ∈ Q} L(x, y). If there exists a set R = {y | L(y, Q) ≤ Ah, y ∈ V^S} with Ah > 0, then this R is called the probability coverage of the polyhedron.
When Q in the above definitions is a line segment, R is a straight-through neuron; when Q is a plane triangle, R is a three-dimensional neuron; when Q is a tetrahedron, R is a four-dimensional neuron.
Whether an identification method is practical must be measured by both its identification complexity on large image data sets and the accuracy of its results. On this basis, the image first undergoes refined feature extraction, a feature space is constructed from the extracted features, several different graphs are then constructed in the feature space and covered, and accurate classification and identification are carried out according to the feature coverage, giving both high identification efficiency and high accuracy.
As shown in fig. 1, the method for identifying high-precision crop pest and disease images of the present invention specifically includes the steps of performing refined feature extraction on an image, forming a feature set by using the extracted features, constructing different patterns by using the features in the feature set, sequentially covering each pattern by using multidimensional neural nodes, stripping the features in the coverage range of the multidimensional neural nodes from the feature set, covering each pattern constructed in a feature space set one by using the multidimensional neural nodes according to the process until all the features in the feature set are stripped to be empty, and deriving the discontinuous fall coverage rate of image identification, namely the identification precision, according to the obtained final coverage range.
The invention relates to a high-precision crop disease and insect pest image identification method, which specifically comprises the following steps:
(1) Construct a training set H = {H_1, H_2, …, H_L}, where the training set contains N classes; H_K is the K-th class and contains N sampling points: H_K = {L_1, L_2, …, L_N};
(2) Calculate the distance between every two sampling points in H_K, and find the two sampling points M_11 and M_12 in H_K such that ρ(M_11, M_12) = min{ρ(H_i, H_j) | H_i, H_j ∈ H_K, H_i ≠ H_j};
(3) Find a third sampling point M_13 ∈ H_K − {M_11, M_12} that does not lie on the straight line through M_11 and M_12, then connect M_13, M_11 and M_12 so that the three sampling points form a plane triangle A_1;
(4) Cover the triangular pixel region A_1 with a neuron; the covered space is P_1 = {Y | ρ(Y, F_1) < F_h, Y ∈ R^n}, where ρ(Y, F_1) denotes the distance between Y and F_1;
(5) Judge whether each sampling point in H lies within the coverage P_1; if so, strip it from H: H_K = H_K − {L_i | L_i ∈ P_1};
(6) From the set H_K, find a new sampling point M_21 such that the sum of the distances from M_21 to the three sampling points M_13, M_11 and M_12 is minimum;
(7) Rename the two sampling points of {M_13, M_11, M_12} that are closest to M_21 as M_22 and M_23, then join M_22, M_23 and M_21 to form a second plane triangle A_2;
(8) Cover the pixel triangle region A_2 with a neuron; the covered space is P_2, and the set H_K becomes H_K = H_K − {M_21};
(9) Repeat steps (5)–(7) to find another sampling point M_i ∈ H_K; mark the newly found sampling point as M_i1, and, as in step (7), mark the two sampling points closest to M_i1 as M_i2 and M_i3;
(10) Connect M_i3, M_i1 and M_i2 so that the three sampling points form a plane triangle A_i; cover it with a neuron, obtaining the covered space P_i, and the set H_K becomes H_K = H_K − {M_i};
(11) Finally, judge whether the set H_K is empty; if not, repeat steps (9)–(10) until it is empty; once empty, derive the discontinuous fall coverage rate of image identification, namely the identification accuracy, from the obtained final coverage of the K classes.
The method of the invention is reasonably conceived: it covers each single shape with multi-dimensional neural nodes so as to remove the limitation on the dimensionality of image identification, raises the identification dimensionality, extracts and classifies image features well across large heterogeneous data sets, avoids falling identification accuracy as the amount of image data grows, and markedly improves image identification accuracy.
Claims (3)
1. A high-precision crop pest and disease image identification method is characterized in that firstly, refined feature extraction is carried out on an image, the extracted features form a feature set, then, the features in the feature set are constructed into different graphs, then, each graph is sequentially covered through multi-dimensional neural nodes, the features in the coverage range of the multi-dimensional neural nodes are stripped out of the feature set, then, each graph constructed in a feature space set is covered one by one through the multi-dimensional neural nodes according to the process until all the features in the feature set are stripped to be empty, and at the moment, the discontinuous fall coverage rate of image identification, namely the identification precision is deduced according to the obtained final coverage range.
2. The method for identifying the high-precision crop pest image according to claim 1, which is characterized by comprising the following steps:
(1) Construct a training set H = {H_1, H_2, …, H_L}, where the training set contains N classes; H_K is the K-th class and contains N sampling points: H_K = {L_1, L_2, …, L_N};
(2) Calculate the distance between every two sampling points in H_K, and find the two sampling points M_11 and M_12 in H_K such that ρ(M_11, M_12) = min{ρ(H_i, H_j) | H_i, H_j ∈ H_K, H_i ≠ H_j};
(3) Find a third sampling point M_13 ∈ H_K − {M_11, M_12} that does not lie on the straight line through M_11 and M_12, then connect M_13, M_11 and M_12 so that the three sampling points form a plane triangle A_1;
(4) Cover the triangular pixel region A_1 with a neuron; the covered space is P_1 = {Y | ρ(Y, F_1) < F_h, Y ∈ R^n}, where ρ(Y, F_1) denotes the distance between Y and F_1;
(5) Judge whether each sampling point in H lies within the coverage P_1; if so, strip it from H: H_K = H_K − {L_i | L_i ∈ P_1};
(6) From the set H_K, find a new sampling point M_21 such that the sum of the distances from M_21 to the three sampling points M_13, M_11 and M_12 is minimum;
(7) Rename the two sampling points of {M_13, M_11, M_12} that are closest to M_21 as M_22 and M_23, then join M_22, M_23 and M_21 to form a second plane triangle A_2;
(8) Cover the pixel triangle region A_2 with a neuron; the covered space is P_2, and the set H_K becomes H_K = H_K − {M_21};
(9) Repeat steps (5)–(7) to find another sampling point M_i ∈ H_K; mark the newly found sampling point as M_i1, and, as in step (7), mark the two sampling points closest to M_i1 as M_i2 and M_i3;
(10) Connect M_i3, M_i1 and M_i2 so that the three sampling points form a plane triangle A_i; cover it with a neuron, obtaining the covered space P_i, and the set H_K becomes H_K = H_K − {M_i};
(11) Finally, judge whether the set H_K is empty; if not, repeat steps (9)–(10) until it is empty; once empty, derive the discontinuous fall coverage rate of image identification, namely the identification accuracy, from the obtained final coverage of the K classes.
3. The method for identifying high-precision crop pest images according to claim 1, characterized by comprising the following steps: the image recognition method uses multi-dimensional neural nodes to overlay each single shape; the multidimensional neural nodes and single shapes are defined as follows:
① Let A_0, A_1, …, A_S (S ≤ N) be mutually uncorrelated points in an N-dimensional feature space V^S, so that the vectors A_0A_1, A_0A_2, …, A_0A_S (i = 1, 2, …, S) are linearly independent; the set of pixel points Ω_S with A_0, A_1, …, A_S as vertices is the S-dimensional single shape;
② Let Q be a polyhedron in the feature space, with y ∈ V^S and y ∉ Q; the distance between y and the polyhedron Q satisfies L(y, Q) = L_min = min_{x ∈ Q} L(x, y); if there exists a set R = {y | L(y, Q) ≤ Ah, y ∈ V^S} with Ah > 0, then R is called the probability coverage of the polyhedron;
When Q in definitions ① and ② is a line segment, R is a straight-through neuron; when Q is a plane triangle, R is a three-dimensional neuron; and when Q is a tetrahedron, R is a four-dimensional neuron.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911271371.2A CN111160414B (en) | 2019-12-12 | 2019-12-12 | High-precision crop disease and insect pest image identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911271371.2A CN111160414B (en) | 2019-12-12 | 2019-12-12 | High-precision crop disease and insect pest image identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111160414A true CN111160414A (en) | 2020-05-15 |
CN111160414B CN111160414B (en) | 2021-06-04 |
Family
ID=70557079
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911271371.2A Active CN111160414B (en) | 2019-12-12 | 2019-12-12 | High-precision crop disease and insect pest image identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111160414B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080244769A1 (en) * | 2007-03-26 | 2008-10-02 | Eby William H | Soybean cultivar s060295 |
CN107067043A (en) * | 2017-05-25 | 2017-08-18 | 哈尔滨工业大学 | A kind of diseases and pests of agronomic crop detection method |
WO2019106638A1 (en) * | 2017-12-03 | 2019-06-06 | Seedx Technologies Inc. | Systems and methods for sorting of seeds |
CN110084790A (en) * | 2019-04-17 | 2019-08-02 | 电子科技大学成都学院 | Bionic pattern identifies the algorithm improvement differentiated in iconography pneumonia |
CN110517311A (en) * | 2019-08-30 | 2019-11-29 | 北京麦飞科技有限公司 | Pest and disease monitoring method based on leaf spot lesion area |
- 2019-12-12: CN application CN201911271371.2A granted as patent CN111160414B (status: Active)
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115205194A * | 2022-04-20 | 2022-10-18 | 浙江托普云农科技股份有限公司 | Method, system and device for detecting coverage rate of sticky trap based on image processing |
CN117557914A * | 2024-01-08 | 2024-02-13 | 成都大学 | Crop pest identification method based on deep learning |
CN117557914B * | 2024-01-08 | 2024-04-02 | 成都大学 | Crop pest identification method based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN111160414B | 2021-06-04 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN112308158B | Multi-source field self-adaptive model and method based on partial feature alignment | |
AU2020102885A4 | Disease recognition method of winter jujube based on deep convolutional neural network and disease image | |
CN108182441B | Parallel multichannel convolutional neural network, construction method and image feature extraction method | |
Gulhane et al. | Diagnosis of diseases on cotton leaves using principal component analysis classifier | |
CN105975932B | Gait recognition classification method based on time-series shapelets | |
Sabrol et al. | Fuzzy and neural network based tomato plant disease classification using natural outdoor images | |
CN109241995B | Image identification method based on improved ArcFace loss function | |
CN107451565B | Semi-supervised small-sample deep learning image pattern classification and identification method | |
CN108509920B | CNN-based face recognition method with multi-patch multi-channel joint feature selection learning | |
CN104517122A | Image target recognition method based on optimized convolution architecture | |
CN108734719A | Automatic foreground-background segmentation method for lepidopteran insect images based on fully convolutional neural networks | |
CN109344699A | Winter jujube disease recognition method based on depthwise separable convolutional neural networks | |
CN111127423B | Rice pest and disease identification method based on CNN-BP neural network algorithm | |
CN111160414B | High-precision crop disease and insect pest image identification method | |
De Ocampo et al. | Mobile platform implementation of lightweight neural network model for plant disease detection and recognition | |
CN113435254A | Farmland deep-learning extraction method based on Sentinel-2 imagery | |
Ma et al. | Research on fish image classification based on transfer learning and convolutional neural network model | |
CN115965819A | Lightweight pest identification method based on Transformer structure | |
CN114972208A | YOLOv4-based lightweight wheat scab detection method | |
Hu et al. | Lightweight multi-scale network with attention for facial expression recognition | |
Miao et al. | Crop weed identification system based on convolutional neural network | |
CN112597907A | Citrus red spider insect pest identification method based on deep learning | |
CN111191510B | Relation network-based remote sensing image small sample target identification method in complex scene | |
Treboux et al. | Towards retraining of machine learning algorithms: an efficiency analysis applied to smart agriculture | |
Yang et al. | 3D convolutional neural network for hyperspectral image classification using generative adversarial network | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||