CN116664944A - Vineyard pest identification method based on attribute feature knowledge graph - Google Patents
- Publication number
- CN116664944A (application number CN202310689138.6A)
- Authority
- CN
- China
- Prior art keywords
- pest
- vineyard
- features
- feature
- attribute
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V10/764—Image or video recognition using machine-learning classification, e.g. of video objects
- G06F16/367—Ontology; creation of semantic tools
- G06N3/0442—Recurrent networks with memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
- G06N3/045—Combinations of networks
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/084—Learning by backpropagation, e.g. using gradient descent
- G06N5/02—Knowledge representation; symbolic representation
- G06V10/751—Template matching; comparing pixel or feature values with positional relevance
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V10/806—Fusion of extracted features at the sensor, preprocessing, feature-extraction or classification level
- G06V10/82—Image or video recognition using neural networks
Abstract
The application relates to a vineyard pest identification method based on an attribute feature knowledge graph, comprising the following steps: constructing a vineyard pest attribute feature knowledge graph; collecting vineyard pest images and building a data set; extracting MF features and ViT features from the vineyard pest images; and fusing the attribute features, MF features and ViT features, then inputting the fused features into a ViT model for training to obtain the predicted pest category. The beneficial effects of the application are as follows: the application improves the accuracy of vineyard pest image classification, introduces knowledge-graph constraints, and uses depth feature template graph extraction and matching, thereby enhancing the ability to identify small-sized pest images.
Description
Technical Field
The application relates to the technical field of pest identification, and in particular to a vineyard pest identification method based on attribute feature knowledge graphs.
Background
With the large-scale and intensive development of vineyards, effective identification and management of vineyard pests has become an important problem. Currently, methods that identify pest species using deep learning techniques are widely used. Such methods typically include the steps of data collection, data preprocessing, model training, model testing and optimization. However, they face several technical problems. First, pest species are numerous, and each differs in morphology, color, size and so on, so a large and varied set of pest image data must be collected, which can be difficult in practice. Second, labeling pest images requires much manual work and expertise, making data labeling a time-consuming and costly process. Finally, because of the variety of pest shapes, colors and sizes and the complexity of image backgrounds, existing deep learning models may generalize poorly and recognize badly in some complex scenes.
Disclosure of Invention
The application aims to overcome the defects of the prior art and provides a vineyard pest identification method based on an attribute feature knowledge graph, comprising the following steps:
S1, constructing a vineyard pest attribute feature knowledge graph;
S2, collecting vineyard pest images and building a data set;
S3, extracting MF features and ViT features of the vineyard pest images;
S4, fusing the attribute features, MF features and ViT features, and inputting the fused features into a ViT model for training to obtain the predicted pest category.
Preferably, S1 includes:
S101, collecting vineyard pest data;
S102, converting and normalizing the vineyard pest data;
S103, constructing a vineyard pest knowledge graph (Graph of Pest Knowledge, GPKG) from the converted and normalized vineyard pest data; instances are defined as triplets of <pest category, relation, attribute feature>, with vineyard pest entities as the nodes of the graph and the relations between entities as its edges.
Preferably, in S102, when the vineyard pest data is semi-structured, entity information is extracted directly from the text, the entity information including pest names and feature descriptions; when the vineyard pest data is unstructured, entity extraction is performed with a deep learning model and pest-related entities are labeled.
Preferably, S2 includes:
S201, collecting image data of various vineyard pests and building a data set;
S202, dividing the data set into a training set and a test set at a ratio of 3:1.
Preferably, S3 includes:
S301, inputting a vineyard pest image and calculating its color moments as the color feature f_c;
S302, extracting texture features of the vineyard pest image using the local binary pattern and the gray-level co-occurrence matrix, and concatenating the two to form the global texture feature f_t;
S303, extracting the contour feature f_o of the image using the Canny edge detection algorithm;
S304, concatenating the color feature f_c, the texture feature f_t and the contour feature f_o to obtain the traditional feature vector f_MF, expressed as:

f_MF = f_c ⊕ f_t ⊕ f_o

where ⊕ denotes the concatenation operation;
S305, extracting the high-level semantic characterization feature f_SF from the image using the ViT model.
Preferably, S4 includes:
S401, extracting depth feature template graphs of the concept nodes in the knowledge graph using a graph convolutional neural network, and indexing into the template graphs by the label of the input image to obtain the attribute feature vector f_CF of the node corresponding to the pest in the knowledge graph; then calculating the cosine similarity with the traditional feature vector f_MF to obtain the similarity loss L_sim:

L_sim = 1 − (f_CF · f_MF) / (‖f_CF‖ ‖f_MF‖)

then calculating the cosine similarity between the manual feature vector of each image and the feature vectors of all nodes representing pest categories in the ACKG, and combining the results into the attribute similarity feature vector f̃_CF; with l_k denoting the index of the k-th pest category node, f̃_CF is expressed as:

f̃_CF = [cos(f_MF, f_{l_1}), cos(f_MF, f_{l_2}), …, cos(f_MF, f_{l_n})]

where n denotes the dimension of the feature vector and equals the total number of pest categories;
S402, fusing the attribute feature vector f_CF, the traditional feature vector f_MF and the high-level semantic characterization feature f_SF to obtain the training pest image feature f_train, expressed as:

f_train = f_CF + f_SF + f_MF

and fusing the attribute similarity feature vector f̃_CF with the high-level semantic characterization feature f_SF to obtain the test pest image feature f_test, expressed as:

f_test = f̃_CF + f_SF

S403, inputting the training pest image feature f_train or the test pest image feature f_test into a classifier to obtain the predicted pest category; the model loss L is expressed by the cross-entropy loss function L_CE and the cosine loss function L_sim as:

L = L_CE + L_sim,  L_CE = −Σ_i y_i log(p̂_i)

where y_i and ŷ_i denote the real label and the predicted label of the input pest image respectively, and p̂_i denotes the prediction probability of ŷ_i.
In a second aspect, there is provided a vineyard pest identification device based on an attribute feature knowledge graph, for performing any one of the vineyard pest identification methods based on the attribute feature knowledge graph of the first aspect, comprising:
a construction module, used for constructing a vineyard pest attribute feature knowledge graph;
an acquisition module, used for acquiring vineyard pest images and building a data set;
an extraction module, used for extracting MF features and ViT features of the vineyard pest images;
and a fusion module, used for fusing the attribute features, MF features and ViT features, and inputting the fused features into the ViT model for training to obtain the predicted pest category.
In a third aspect, there is provided a computer storage medium having a computer program stored therein; when run on a computer, the computer program causes the computer to perform the vineyard pest identification method based on the attribute feature knowledge graph of any one of claims 1 to 6.
The beneficial effects of the application are as follows:
1. The application improves the accuracy of vineyard pest image classification: by using multiple feature extraction methods and a hybrid feature representation, the color, texture, contour and other feature information of vineyard pest images can be captured more comprehensively, improving the accuracy and robustness of the classification model.
2. The application introduces knowledge-graph constraints: by constructing the vineyard pest attribute feature knowledge graph and associating pest images with concept nodes in the graph, the classification model is constrained to learn specific pest categories, preventing it from over-attending to background features and improving its focus on pests and its classification performance.
3. The application uses depth feature template graph extraction and matching: a GCN extracts depth feature template graphs of the concept nodes in the knowledge graph, and similarity matching with the input image generates new feature vectors. This feature extraction and matching approach represents pest images better, reduces interference from background noise, and improves the discriminative ability of the classification model.
4. The application enhances the identification of small-sized pest images: by introducing traditional features and knowledge-graph constraints, small-sized pest images can be identified effectively, avoiding the recognition difficulty and misclassification caused by small image sizes.
Drawings
FIG. 1 is a diagram of a knowledge graph constructed in accordance with the present application;
FIG. 2 is a diagram of a network model structure provided by the present application;
FIG. 3 is a working schematic diagram of a vineyard pest identification method based on attribute feature knowledge graph provided by the application;
FIG. 4 is a flow chart of a vineyard pest identification method based on attribute feature knowledge graph provided by the application;
FIG. 5 is a comparison chart of the visualization effect provided by the application.
Detailed Description
The application is further described below with reference to examples. The following examples are presented only to aid understanding of the application. It should be noted that those skilled in the art can make modifications to the application without departing from its principles, and such modifications and adaptations fall within the scope of the application as defined by the appended claims.
Example 1:
a vineyard pest identification method based on attribute feature knowledge graph comprises the following steps:
S1, constructing a vineyard pest attribute feature knowledge graph.
S1 comprises the following steps:
s101, collecting vineyard pest data.
For example, data relating to vineyard pests is crawled from knowledge bases such as professional agriculture websites, entomology websites, Wikipedia and Baidu Baike using the Scrapy framework. Specifically, the vineyard pest data is collected by writing a crawler script and specifying start URLs and data extraction rules. Through systematic searching and collection, 1264 data records related to common vineyard pests and diseases (including the green plant bug, dyer's woad leafhopper, grape leafhopper and the like) were successfully obtained.
S102, converting and normalizing the vineyard pest data.
In S102, the vineyard pest data is processed and converted into a normalized knowledge corpus using regular expressions and a deep learning model (such as Bi-LSTM-CRF). Specifically, when the vineyard pest data is semi-structured, entity information is extracted directly from the text, the entity information including pest names and feature descriptions; when the vineyard pest data is unstructured, entity extraction is performed with the deep learning model and pest-related entities are labeled.
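For the semi-structured case, the rule-based extraction described above can be sketched with regular expressions. The patterns, field labels and sample text below are purely illustrative assumptions, not the patent's actual corpus; a Bi-LSTM-CRF model would take over for unstructured text:

```python
# Illustrative-only sketch: regex-based entity extraction from a
# semi-structured pest record. Field names and text are hypothetical.
import re

TEXT = "Pest name: grape leafhopper; Features: yellow-green body, about 3 mm"

def extract_entities(text):
    # Pull the pest name (up to the first ';') and the feature list.
    name = re.search(r"Pest name:\s*([^;]+)", text)
    feats = re.search(r"Features:\s*(.+)$", text)
    return {
        "name": name.group(1).strip() if name else None,
        "features": [f.strip() for f in feats.group(1).split(",")] if feats else [],
    }

print(extract_entities(TEXT))
```

A real pipeline would apply many such patterns per source site, then normalize the extracted strings before writing them into the knowledge corpus.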
S103, constructing a knowledge graph of the vineyard diseases and insect pests according to the converted and normalized vineyard insect pest data.
Specifically, the knowledge graph organizes vineyard pest information as triplets (pest category, relation, attribute feature), with vineyard pest entities as the nodes of the graph and the relations between entities as its edges; the graph database Neo4j is used for storage. This construction provides more accurate and comprehensive pest attribute feature support and more reliable information for subsequent analysis and decision-making.
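The triplet organization described above can be sketched as a minimal in-memory graph. The relation names and attribute strings below are hypothetical placeholders; a real deployment would persist the triples in Neo4j as the text describes:

```python
# Minimal sketch (not the patent's actual code): storing
# <pest category, relation, attribute feature> triples as an
# adjacency structure keyed on the head entity.
from collections import defaultdict

class PestKG:
    def __init__(self):
        self.edges = defaultdict(list)  # head -> [(relation, tail)]

    def add_triple(self, head, relation, tail):
        self.edges[head].append((relation, tail))

    def attributes_of(self, pest):
        # All attribute-feature tails linked to a pest category node.
        return [t for r, t in self.edges[pest] if r == "has_attribute"]

kg = PestKG()
kg.add_triple("grape leafhopper", "has_attribute", "yellow-green body")
kg.add_triple("grape leafhopper", "has_attribute", "about 3 mm long")
kg.add_triple("grape leafhopper", "feeds_on", "grape leaf")

print(kg.attributes_of("grape leafhopper"))
```

Pest entities act as nodes and relations as edges, mirroring the graph layout stated above.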
As shown in FIG. 1, the knowledge graph GPKG is mapped into a trainable deep learning network using a GAT network; the nodes of the knowledge graph are of two types, pest categories N_l and pest features N_f:

N_l = {n_{l_1}, …, n_{l_n}},  N_f = {n_{f_1}, …, n_{f_m}}

where n and m denote the total number of pest categories and the number of attribute nodes in the graph, respectively.
S2, collecting vineyard pest images and building a data set.
S2 comprises the following steps:
S201, collecting image data of various vineyard pests and building a data set.
In the embodiment of the application, image data of 8 in-season vineyard pests, including the green plant bug (Apolygus lucorum), dyer's woad leafhopper and wheat aphid, were collected at the plant factory base of Zhejiang University City College in Hangzhou using the iMETOS iSCOUT remote visual automatic pest monitoring system. Under the guidance of 3 agricultural experts, the images were manually screened and labeled, and the species and number of pests in each image were determined.
S202, the collected GP8 data set is divided into a training set and a test set at a ratio of 3:1, with 1023 samples in the training set and 342 samples in the test set. A reasonable distribution of sample numbers between the training and test sets is ensured for model training and performance evaluation.
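The 3:1 split can be sketched as follows; the file names are placeholders, and `train_ratio=0.75` over 1365 images reproduces the 1023/342 split stated above:

```python
# Sketch of a seeded 3:1 train/test split; names are illustrative.
import random

def split_dataset(samples, train_ratio=0.75, seed=42):
    rng = random.Random(seed)
    shuffled = samples[:]          # copy so the input stays untouched
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]

images = [f"img_{i:04d}.jpg" for i in range(1365)]  # 1365 total samples
train, test = split_dataset(images)
print(len(train), len(test))  # 1023 342
```

Fixing the seed keeps the split reproducible between training runs and evaluation.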
And S3, extracting MF features and ViT features of the vineyard pest images.
S3 comprises the following steps:
S301, input a vineyard pest image and calculate its color moments as the color feature f_c; color moments are statistics of the color distribution of an image and are used to characterize its color information.
S302, extract texture features of the vineyard pest image using the local binary pattern and the gray-level co-occurrence matrix, and concatenate the two to form the global texture feature f_t; the local binary pattern captures local texture information of the image, and the gray-level co-occurrence matrix describes the spatial relationship between pixel gray values.
S303, extract the contour feature f_o of the image using the Canny edge detection algorithm; the Canny algorithm identifies edge portions of the image to capture the contour information of the pest image.
S304, concatenate the color feature f_c, the texture feature f_t and the contour feature f_o to obtain the traditional feature vector f_MF, expressed as:

f_MF = f_c ⊕ f_t ⊕ f_o

where ⊕ denotes the concatenation operation.
S305, extract the high-level semantic characterization feature f_SF from the image using the ViT model. The ViT model performs feature extraction by segmenting the image into fixed-size patches, linearly embedding them into a feature space, and then applying a Transformer structure. In this way, ViT can capture the global context information of the image and generate the feature f_SF with high-level semantics.
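The hand-crafted (MF) feature pipeline of S301–S304 can be sketched with NumPy alone. Only the color moments are computed for real here; the texture and contour features are stand-in placeholders, since real LBP/GLCM and Canny extraction would come from a library such as scikit-image or OpenCV:

```python
# NumPy-only sketch of the MF features: color moments per channel,
# placeholder texture/contour vectors, and the concatenation
# f_MF = f_c (+) f_t (+) f_o.
import numpy as np

def color_moments(img):
    # img: H x W x 3 array; 1st-3rd moments (mean, std dev,
    # cube root of the third central moment) per channel.
    feats = []
    for c in range(img.shape[2]):
        ch = img[..., c].astype(float).ravel()
        mean = ch.mean()
        std = ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())
        feats += [mean, std, skew]
    return np.array(feats)  # f_c, 9-dimensional

img = np.zeros((8, 8, 3))
img[..., 0] = 100.0               # constant first channel, for illustration
f_c = color_moments(img)
f_t = np.array([0.0, 0.0])        # placeholder for LBP + GLCM texture features
f_o = np.array([0.0])             # placeholder for Canny contour feature
f_MF = np.concatenate([f_c, f_t, f_o])
print(f_MF.shape)  # (12,)
```

In the full method, f_t and f_o would have the dimensionalities of the real texture and contour descriptors; the concatenation step itself is unchanged.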
S4, fusing the attribute features, MF features and ViT features, and inputting the fused features into a ViT model for training to obtain the predicted pest category.
S4 comprises the following steps:
S401, extract depth feature template graphs of the concept nodes in the knowledge graph using a graph convolutional neural network, and index into the template graphs by the label of the input image to obtain the attribute feature vector f_CF of the node corresponding to the pest in the knowledge graph; then calculate the cosine similarity with the traditional feature vector f_MF to obtain the similarity loss L_sim:

L_sim = 1 − (f_CF · f_MF) / (‖f_CF‖ ‖f_MF‖)

where the dimension n of the feature vectors equals the total number of pest categories;
S402, fuse the attribute feature vector f_CF, the traditional feature vector f_MF and the high-level semantic characterization feature f_SF to obtain the training pest image feature f_train, expressed as:

f_train = f_CF + f_SF + f_MF
S403, input the training pest image feature f_train or the test pest image feature f_test into a classifier to obtain the predicted pest category; the model loss L is expressed by the cross-entropy loss function L_CE and the cosine loss function L_sim as:

L = L_CE + L_sim,  L_CE = −Σ_i y_i log(p̂_i)

where y_i and ŷ_i denote the real label and the predicted label of the input pest image respectively, and p̂_i denotes the prediction probability of ŷ_i.
To verify the effect of the method, the embodiment of the application collected a data set at the intelligent plant factory laboratory of Zhejiang University City College, using the iMETOS iSCOUT remote visual automatic pest monitoring system to collect image data of 8 in-season vineyard pests including the green plant bug, dyer's woad leafhopper and Myzus persicae, as shown in Table 1.
Table 1 GP dataset specific data
Two comparison schemes were designed.
The first scheme compares different methods on the GP data set to verify the basic classification accuracy of the overall model; because the fine-grained pest identification method of this embodiment relies on the ViT model, the overall model should outperform ViT. The results are shown in Table 2 below:
TABLE 2 Comparison of different model performance

| Model | Accuracy/% | F1 score/% | Precision/% | Recall/% |
| --- | --- | --- | --- | --- |
| VGG-16 | 88.30 | 86.75 | 87.21 | 86.29 |
| ResNet-152 | 91.23 | 89.33 | 90.11 | 88.56 |
| Inception-V3 | 90.64 | 88.71 | 89.92 | 87.53 |
| Xception | 87.72 | 85.34 | 87.66 | 83.14 |
| MobileNet | 88.89 | 87.01 | 88.96 | 85.14 |
| SqueezeNet | 80.12 | 78.26 | 83.39 | 73.72 |
| ViT | 93.86 | 92.05 | 94.28 | 89.92 |
| ACKGViT | 95.03 | 93.98 | 95.17 | 92.82 |
Table 2 lists the performance of the pretrained networks VGG-16, ResNet-152, Inception-V3, Xception, MobileNet, SqueezeNet and ViT on the GP8 test set. As can be seen from Table 2, the ViT model is significantly better than the other models on both the Accuracy and F1 indices. Compared with ResNet-152, one of the most frequently used models in current visual tasks, the Accuracy and F1 values of ViT are improved, indicating that the high-level characterization extracted by ViT integrates the global and local information of pest images more finely; the method therefore uses ViT as the backbone network to construct the GPKG-ViT model.
The performance of ACKGViT is shown in the last row of Table 2. Compared with ViT, the Accuracy and F1 values of ACKGViT increased by 1.17 and 1.93 percentage points respectively, because ViT is less capable of identifying similarly shaped objects, while the knowledge graph provides detailed information on the differences between pest categories, thereby helping ViT distinguish pest types.
To further analyze the improvement that the knowledge graph brings to vineyard pest classification performance, 3 groups of ablation tests were performed; the results are shown in Table 3.
Table 3 ablation test results
| Model | Accuracy | F1 | Precision | Recall |
| --- | --- | --- | --- | --- |
| GPKG-ViT | 91.21 | 85.95 | 87.52 | 84.99 |
| w/o MF | 89.86 (−1.35) | 83.63 (−2.32) | 86.24 | 81.84 |
| w/o KG | 89.66 (−1.55) | 83.59 (−2.36) | 86.00 | 81.85 |
| w/o MF∪KG | 89.57 (−1.64) | 83.05 (−2.90) | 84.98 | 81.70 |

where "w/o" denotes a removal operation, MF denotes the manual features, and KG denotes the knowledge graph.
As can be seen from the table, removing the branch containing the knowledge graph (w/o MF∪KG) reduces the model's Accuracy and F1 by 1.64 and 2.90 percentage points, respectively. Removing the manual features (w/o MF) and removing the knowledge graph (w/o KG) reduce the Accuracy by 1.35 and 1.55 percentage points, and F1 by 2.32 and 2.36 percentage points, respectively. These results indicate that: 1) introducing the knowledge graph to help ViT acquire more accurate pest information is effective; 2) the traditional features or the knowledge graph alone are less effective in improving model performance, mainly because traditional feature extraction methods are limited in expressing the high-level semantic information of an image, and the graph convolutional network cannot be trained effectively using the knowledge graph alone, leaving the node feature vectors insufficiently characterized.
Experimental results show that the present application achieves good performance in identifying vineyard pests.
Example 2:
On the basis of Embodiment 1, Embodiment 2 of the present application provides a vineyard pest identification apparatus based on an attribute feature knowledge graph, comprising:
the construction module, used for constructing the vineyard pest attribute feature knowledge graph;
the acquisition module, used for acquiring vineyard pest images and producing a data set;
the extraction module, used for extracting the MF features and ViT features of the vineyard pest images;
and the fusion module, used for fusing the attribute features, MF features and ViT features, and inputting the fused features into the ViT model for training to obtain the predicted pest category.
Specifically, the apparatus provided in this embodiment corresponds to the method provided in Embodiment 1, so the portions of this embodiment that are the same as or similar to Embodiment 1 may be cross-referenced and are not described again here.
In summary, the application exploits the advantages of the knowledge graph in describing the attribute features of pest entities and the associations between them, and applies the fine-grained attribute features and pest-entity association information provided by the knowledge graph to vineyard pest classification, thereby realizing accurate identification of vineyard pests.
Claims (8)
1. The vineyard pest identification method based on the attribute feature knowledge graph is characterized by comprising the following steps of:
s1, constructing a vineyard pest attribute characteristic knowledge graph;
s2, collecting vineyard pest images and producing a data set;
s3, extracting MF features and ViT features of the vineyard pest images;
and S4, carrying out feature fusion on the attribute features, the MF features and the ViT features, and inputting the features into a ViT model for training to obtain the pest prediction category.
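As a non-authoritative sketch, the S1-S4 flow can be wired together as below. The feature extractors here are toy stand-ins and the pest names are invented; only the element-wise additive fusion of S4 is taken from the application.

```python
import numpy as np

# Toy stand-ins for the three feature branches; dimensions and pest names
# are illustrative assumptions, not part of the claimed method.
def manual_features(img):                 # f_MF (S3, manual-feature branch)
    return img.reshape(-1, 3).mean(axis=0)

def vit_features(img):                    # f_SF (S3, ViT branch)
    return img.reshape(-1, 3).std(axis=0)

def kg_attribute_features(label):         # f_CF, indexed by the image label (S1)
    table = {"grape_phylloxera": np.array([1.0, 0.0, 0.0]),
             "grape_leafhopper": np.array([0.0, 1.0, 0.0])}
    return table[label]

def fuse(f_cf, f_sf, f_mf):
    # S4: element-wise additive fusion, f_train = f_CF + f_SF + f_MF
    return f_cf + f_sf + f_mf

img = np.ones((4, 4, 3))
f_train = fuse(kg_attribute_features("grape_phylloxera"),
               vit_features(img), manual_features(img))
```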
2. The method for identifying a vineyard pest based on the attribute-feature knowledge graph according to claim 1, wherein S1 comprises:
s101, collecting vineyard pest data;
s102, converting and normalizing the vineyard pest data;
s103, constructing the vineyard pest knowledge graph from the converted and normalized vineyard pest data; each instance is defined as a triple of &lt;pest category, relation, attribute feature&gt;, with the vineyard pest entities taken as nodes of the graph and the relations between entities taken as its edges.
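A minimal sketch of the triple store behind S103; the pest names, relations and attribute values below are invented examples, not data from the application.

```python
# Each instance is a <pest category, relation, attribute feature> triple;
# pest entities become graph nodes and relations become edges (S103).
triples = [
    ("grape phylloxera", "has_color", "yellow-brown"),
    ("grape phylloxera", "damages_part", "root"),
    ("grape leafhopper", "has_color", "pale yellow"),
]

nodes, edges = set(), []
for head, relation, attribute in triples:
    nodes.update({head, attribute})       # entity and attribute nodes
    edges.append((head, relation, attribute))

neighbors = {}                            # adjacency view of the graph
for head, relation, attribute in edges:
    neighbors.setdefault(head, []).append((relation, attribute))
```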
3. The method for identifying a vineyard pest based on the attribute-feature knowledge graph according to claim 2, wherein in S102, when the vineyard pest data is semi-structured data, entity information is directly extracted from text, and the entity information includes a pest name and a feature description; and when the vineyard pest data is structured data, entity extraction is performed by using a deep learning model, and entities related to the pests are marked.
4. The method for identifying a vineyard pest based on the attribute-feature knowledge graph according to claim 3, wherein S2 comprises:
s201, collecting image data of various vineyard pests and producing a data set;
s202, dividing the data set into a training set and a testing set according to the proportion of 3:1.
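The 3:1 split of S202 can be sketched as follows; the seeded shuffle is an implementation assumption, not part of the claim.

```python
import random

def split_3_to_1(samples, seed=42):
    """Shuffle a dataset and split it 3:1 into training and test sets (S202)."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    cut = (3 * len(items)) // 4           # 3/4 of the data for training
    return items[:cut], items[cut:]

train_set, test_set = split_3_to_1(range(100))
```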
5. The method for identifying a vineyard pest based on the attribute-feature knowledge graph according to claim 4, wherein S3 comprises:
s301, inputting a vineyard pest image and computing its color moments as the color feature f_c;
S302, extracting texture features of the vineyard pest image using the local binary pattern and the gray-level co-occurrence matrix, and concatenating the two to form the global texture feature f_t;
S303, extracting the contour feature f_o of the image using the Canny edge detection algorithm;
S304, concatenating the color feature f_c, texture feature f_t and contour feature f_o to obtain the traditional feature vector f_MF, expressed as:
f_MF = f_c ⊕ f_t ⊕ f_o
where ⊕ represents the concatenation (splicing) operation;
s305, extracting the high-level semantic characterization feature f_SF from the image using the ViT model.
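A simplified sketch of the manual-feature pipeline of S301-S304: color moments plus a tiny 8-neighbour LBP histogram, concatenated into f_MF. The GLCM and Canny contour steps are omitted for brevity, and this LBP is a bare-bones stand-in, not the exact descriptor used in the application.

```python
import numpy as np

def color_moments(img):
    """First three color moments per channel (mean, std, skewness) - f_c (S301)."""
    feats = []
    for ch in range(img.shape[2]):
        x = img[:, :, ch].astype(float).ravel()
        mu, sigma = x.mean(), x.std()
        skew = np.cbrt(((x - mu) ** 3).mean())  # cube root of third central moment
        feats += [mu, sigma, skew]
    return np.array(feats)

def lbp_histogram(gray):
    """Tiny 8-neighbour LBP histogram as a texture stand-in (S302)."""
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neigh >= center).astype(np.uint8) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

# S304: concatenate the manual descriptors into f_MF
img = (np.arange(5 * 5 * 3) % 255).reshape(5, 5, 3).astype(np.uint8)
f_c = color_moments(img)
f_t = lbp_histogram(img.mean(axis=2))
f_mf = np.concatenate([f_c, f_t])
```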
6. The method for identifying a vineyard pest based on the attribute-feature knowledge graph of claim 5, wherein S4 comprises:
s401, extracting a depth feature template map of the concept nodes in the knowledge graph using a graph convolutional neural network, and indexing the template map by the label of the input image to obtain the attribute feature vector f_CF of the pest at the corresponding node of the knowledge graph; then carrying out cosine similarity calculation with the traditional feature vector f_MF to obtain the similarity loss L_cos:
L_cos = 1 − (Σ_{i=1}^{n} f_CF,i · f_MF,i) / (sqrt(Σ_{i=1}^{n} f_CF,i²) · sqrt(Σ_{i=1}^{n} f_MF,i²))
wherein n represents the dimension of the feature vector and is equal to the total number of pest categories;
then, carrying out cosine similarity calculation between the manual feature vector of each image and the feature vectors of all nodes representing pest categories in the ACKG, and combining the results to obtain the attribute similarity feature vector f_AS; with l_k representing the index of the k-th pest category node, f_AS is expressed as:
f_AS = [cos(f_MF, f_CF^{l_1}), cos(f_MF, f_CF^{l_2}), …, cos(f_MF, f_CF^{l_n})]
s402, carrying out feature fusion on the attribute feature vector f_CF, the traditional feature vector f_MF and the high-level semantic characterization feature f_SF to obtain the training pest image feature f_train, expressed as:
f_train = f_CF + f_SF + f_MF
and carrying out feature fusion on the attribute similarity feature vector f_AS and the high-level semantic characterization feature f_SF to obtain the test pest image feature f_test, expressed as:
f_test = f_AS + f_SF
s403, inputting the training pest image feature f_train or the test pest image feature f_test into a classifier to obtain the predicted pest category; the model loss L is represented by the cross-entropy loss function L_CE and the cosine loss function L_cos:
L = L_CE + L_cos, with L_CE = −Σ_i y_i log p(ŷ_i)
in the above, y_i and ŷ_i respectively represent the true label and predicted label of the input pest image, and p(ŷ_i) represents the prediction probability of ŷ_i.
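A numerical sketch of the two loss terms in S401/S403. Combining them by plain addition is an assumption here, since the application does not show a weighting; the vectors and probabilities are toy values.

```python
import numpy as np

def cosine_loss(a, b):
    """L_cos = 1 - cos(a, b), the similarity loss between f_CF and f_MF (S401)."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cross_entropy(probs, true_idx):
    """L_CE = -log p(y) for a one-hot true label (S403)."""
    return -float(np.log(probs[true_idx]))

f_cf = np.array([1.0, 0.0, 0.0])      # toy attribute feature vector
f_mf = np.array([0.6, 0.8, 0.0])      # toy manual feature vector
probs = np.array([0.7, 0.2, 0.1])     # toy classifier output

loss = cross_entropy(probs, 0) + cosine_loss(f_cf, f_mf)
```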
7. A vineyard pest identification apparatus based on an attribute-feature knowledge graph, characterized by being configured to perform the vineyard pest identification method based on an attribute-feature knowledge graph as set forth in any one of claims 1 to 6, comprising:
the construction module, used for constructing the vineyard pest attribute feature knowledge graph;
the acquisition module, used for acquiring vineyard pest images and producing a data set;
the extraction module, used for extracting the MF features and ViT features of the vineyard pest images;
and the fusion module, used for fusing the attribute features, MF features and ViT features, and inputting the fused features into the ViT model for training to obtain the predicted pest category.
8. A computer storage medium, wherein a computer program is stored in the computer storage medium; the computer program, when run on a computer, causes the computer to perform the vineyard pest identification method based on the attribute feature knowledge graph of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310689138.6A CN116664944A (en) | 2023-06-12 | 2023-06-12 | Vineyard pest identification method based on attribute feature knowledge graph |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116664944A true CN116664944A (en) | 2023-08-29 |
Family
ID=87713524
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310689138.6A Pending CN116664944A (en) | 2023-06-12 | 2023-06-12 | Vineyard pest identification method based on attribute feature knowledge graph |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116664944A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117151342A (en) * | 2023-10-24 | 2023-12-01 | 广东省农业科学院植物保护研究所 | Litchi insect pest identification and resistance detection method, litchi insect pest identification and resistance detection system and storage medium |
CN117151342B (en) * | 2023-10-24 | 2024-01-26 | 广东省农业科学院植物保护研究所 | Litchi insect pest identification and resistance detection method, litchi insect pest identification and resistance detection system and storage medium |
CN117392470A (en) * | 2023-12-11 | 2024-01-12 | 安徽中医药大学 | Fundus image multi-label classification model generation method and system based on knowledge graph |
CN117392470B (en) * | 2023-12-11 | 2024-03-01 | 安徽中医药大学 | Fundus image multi-label classification model generation method and system based on knowledge graph |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||