CN111832642A - Image identification method based on VGG16 in insect taxonomy - Google Patents

Image identification method based on VGG16 in insect taxonomy

Info

Publication number
CN111832642A
Authority
CN
China
Prior art keywords
image
insect
vgg16
data set
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010643798.7A
Other languages
Chinese (zh)
Inventor
吴开华
张赫
张竞成
陈冬梅
李凯强
李欣恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202010643798.7A priority Critical patent/CN111832642A/en
Publication of CN111832642A publication Critical patent/CN111832642A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image identification method based on VGG16 in insect taxonomy, comprising the following steps: S1, establishing an image data set; S2, processing the insect images of the data set to obtain a training data set; S3, training the VGG16 model on the training data set; S4, extracting part of the images from the data set as reference images and images to be identified, and performing corner detection to correct the reference images; S5, processing the image to be identified and the corrected reference images, inputting them into the trained VGG16 model, and extracting image features; S6, visualizing the extracted image features to obtain feature maps; and S7, computing the feature-map image similarity (SSIM) between the image to be identified and all reference images of each insect order, averaging the values per order, and assigning the image to the order with the largest mean. The invention improves the accuracy and efficiency of insect classification.

Description

Image identification method based on VGG16 in insect taxonomy
Technical Field
The invention belongs to the technical field of insect image identification and classification, and particularly relates to an image identification method based on VGG16 in insect taxonomy.
Background
In conventional insect classification, whether traditional or numerical taxonomy, insects are classified mainly by their physical characteristics, including colour, markings, body appendages (such as tubercles, spines and cilia) and size (such as body length and body width). However, these characteristics cannot fully capture the morphological differences between insect groups, and classifying insects by these characteristics alone does not achieve good accuracy.
With the rapid development and wide application of computer science, it has become possible to extract and analyse features of images of living organisms, including insects, by means of computer vision, and to classify species using the features extracted by the computer. Different insects differ in body size and shape, and their mathematical-morphological characteristics include, besides body length and width, quantities such as body area, perimeter, eccentricity and roundness. These quantified characteristics, which describe the body forms of different insect groups, are likely to reflect the differences between groups more accurately and comprehensively. Before deep convolutional neural networks appeared, feature extraction in pattern recognition relied mainly on manual design and therefore carried a degree of subjectivity. In addition, most existing deep-learning image recognition methods pay little attention to the intermediate results produced inside the neural network. The fully connected layers also destroy the spatial structure of the image, which may reduce recognition accuracy to some extent. Moreover, little research has so far been carried out on order-level image identification in insect taxonomy based on convolutional neural networks.
Disclosure of Invention
In view of the above shortcomings of the prior art, it is an object of the present invention to solve at least one of the above problems, that is, to provide an image recognition method based on VGG16 in insect taxonomy that satisfies one or more of the above needs.
In order to achieve the purpose, the invention adopts the following technical scheme:
an image identification method based on VGG16 in insect taxonomy comprises the following steps:
S1, collecting insect images covering various insect orders, classifying the images by order, and establishing an image data set;
S2, performing target-background segmentation and size normalization on the insect images of the data set to obtain a training data set;
S3, training the VGG16 model on the training data set by transfer learning to obtain a trained VGG16 model;
S4, extracting part of the images from the data set as reference images and images to be identified, performing corner detection on the images to be identified and the reference images, computing the corner mean values to obtain an offset, and correcting the reference images according to the offset;
S5, performing target-background segmentation and size normalization on the image to be identified and the corrected reference images, inputting them into the trained VGG16 model, and extracting image features;
S6, visualizing the extracted image features with TensorBoard to obtain feature maps and saving them;
and S7, computing the feature-map image similarity (SSIM) between the image to be identified and all reference images of each insect order, averaging the values per order, and assigning the image to be identified to the order with the largest mean, which is the order to which it belongs.
Preferably, the insect orders include Lepidoptera, Orthoptera, Hemiptera, Coleoptera and Homoptera.
Preferably, the image data set satisfies the following conditions:
each insect order contains no fewer than 10 pest species, and each species has at least 30 original images.
Preferably, the target-background segmentation uses a fully convolutional network (FCN) and sets the background to black.
Preferably, the size normalization normalizes the image size to 224 x 224.
Preferably, in step S2, data enhancement processing is further performed after the size normalization.
Preferably, the data enhancement processing includes one or more of flipping, rotating, brightness adjusting, and color adjusting.
Preferably, the VGG16 model comprises 13 convolutional layers, 3 fully-connected layers and 5 pooling layers.
Preferably, in step S7, feature maps obtained by convolution of the first to fifth convolutional layers of the VGG16 model are used as input for calculating the image similarity.
Preferably, in step S7, the image similarity SSIM is measured from three aspects of brightness, contrast, and structure:
Figure BDA0002572372810000031
wherein x and y respectively represent the image to be identified and the reference image, muxIs the mean value of x, μyIs the mean value of y, σxIs the variance of x, σyIs the variance of y, σxyIs the covariance of x and y; c1 ═ k1L)2、c2=(k2L)2A constant for maintaining stability, L being the dynamic range of image pixel values 0-255, k1=0.01,k2=0.03。
Compared with the prior art, the invention has the beneficial effects that:
the image identification method for insect classification based on VGG16 is convenient for classifying insects and improves the accuracy and efficiency of insect classification.
The method extracts insect image features with a deep convolutional neural network, avoiding the subjectivity of traditional hand-crafted feature extraction and yielding more comprehensive features.
Research in insect taxonomy currently focuses mainly on traditional mathematical-morphological characteristics, and deep convolutional neural networks have seen little application in this field, so the invention represents a new attempt to apply deep learning to insect taxonomy.
Drawings
FIG. 1 is a flowchart of an image recognition method based on VGG16 in insect taxonomy according to an embodiment of the present invention;
FIG. 2 is a network architecture diagram of a VGG16 in an embodiment of the invention;
FIG. 3 shows feature maps of an insect image after convolution by the layers of VGG16 according to an embodiment of the present invention.
Detailed Description
To illustrate the embodiments of the present invention more clearly, they are described below with reference to the accompanying drawings. Obviously, the drawings in the following description are only some examples of the invention, and a person skilled in the art can derive other drawings and embodiments from them without inventive effort.
As shown in fig. 1, the image recognition method based on VGG16 in insect taxonomy according to the embodiment of the present invention includes the following steps:
S1, collecting insect images covering various insect orders, classifying the images by order, and establishing an image data set;
Specifically, insect images of various orders can be collected from public data sets, data sets obtained by web crawlers, and images gathered in the field. The collected images are classified by order; the orders comprise the five major categories Lepidoptera, Orthoptera, Hemiptera, Coleoptera and Homoptera, each order contains no fewer than 10 pest species, and each species has at least 30 original images.
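For illustration only, the following minimal Python sketch shows one possible way to organize such a data set; the directory layout, order names and the helper build_index are assumptions made for this example and are not specified by the method itself.

```python
from pathlib import Path

# Assumed layout: dataset/<order_name>/<species>/<image>.jpg
ORDERS = ["lepidoptera", "orthoptera", "hemiptera", "coleoptera", "homoptera"]

def build_index(root="dataset"):
    """Collect (image_path, order_label) pairs, one integer label per insect order."""
    samples = []
    for label, order in enumerate(ORDERS):
        for img_path in Path(root, order).rglob("*.jpg"):
            samples.append((img_path, label))
    return samples

samples = build_index()
print(f"{len(samples)} images across {len(ORDERS)} orders")
```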
S2, performing target-background segmentation and size normalization on the insect images of the data set to obtain a training data set;
Specifically, the insect images in the data set undergo target-background segmentation with a fully convolutional network (FCN), and the background is set to black so that insect features can be extracted more effectively.
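The description does not prescribe a particular FCN implementation; assuming a segmentation network that outputs a per-pixel foreground mask, the step that blacks out the background could look like the following OpenCV sketch (the mask-producing network itself is omitted).

```python
import numpy as np
import cv2

def black_out_background(image_bgr, foreground_mask):
    """Keep foreground pixels and set everything else to black.

    `foreground_mask` is assumed to be the FCN output thresholded to {0, 1},
    with the same height/width as the image; producing that mask is not shown here.
    """
    mask = (foreground_mask > 0).astype(np.uint8)
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
```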
Since the forward propagation of the VGG16 fully connected layer is the product of the current layer's weights and the previous layer's outputs, as follows:
a_k = σ(z_k) = σ(W_k · a_(k−1) + b_k)
where k denotes the k-th layer, W_k is the weight matrix of the k-th layer, b_k is its bias, a_(k−1) is the output of layer k−1 (the previous layer) and thus the input of layer k, a_k is the output of the k-th layer (the current layer), and σ is the activation function.
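A tiny NumPy sketch of this forward pass illustrates why a fixed weight shape forces a fixed input size; the shapes below are illustrative only (the first fully connected layer of VGG16 actually expects a 7 × 7 × 512 = 25088-dimensional input).

```python
import numpy as np

def fc_forward(a_prev, W, b, sigma=lambda z: np.maximum(z, 0.0)):
    """One fully connected layer: a_k = sigma(W_k @ a_{k-1} + b_k), with ReLU as sigma."""
    return sigma(W @ a_prev + b)

W = np.random.randn(8, 6)                    # weight shape is fixed when the model is built
b = np.zeros(8)
a_k = fc_forward(np.random.randn(6), W, b)   # works: input length matches W
# fc_forward(np.random.randn(7), W, b)       # would fail: shapes (8, 6) and (7,) do not align
```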
The shape of each VGG16 weight matrix is fixed, so if the input size is not fixed the input cannot be propagated forward, the computation fails, and model training is interrupted. For training to proceed normally, all images are therefore resized with a Python script so that every image is normalized to 224 x 224.
In addition, the deep convolutional VGG16 model requires a relatively large training set to achieve good feature extraction and classification results and to give the trained model good generalization ability. When the number of images in the data set is small, the data set can be expanded by data enhancement (sample expansion); common methods include flipping, rotation, brightness adjustment and colour adjustment.
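As a sketch of the resizing and data-enhancement step (using Pillow; the exact parameter ranges are illustrative assumptions, not values fixed by the method):

```python
from PIL import Image, ImageEnhance
import random

def preprocess(path, size=(224, 224)):
    """Resize a segmented insect image to the fixed VGG16 input size."""
    return Image.open(path).convert("RGB").resize(size, Image.BILINEAR)

def augment(img):
    """Randomly flip, rotate and adjust brightness/colour to expand the data set."""
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
    img = img.rotate(random.uniform(-15, 15))
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))
    img = ImageEnhance.Color(img).enhance(random.uniform(0.8, 1.2))
    return img
```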
S3, training the VGG16 model on the training data set by transfer learning to obtain a trained VGG16 model;
specifically, as shown in fig. 2, the deep learning convolutional neural network VGG16 includes:
13 convolutional layers, denoted convx-x, comprising in order conv1-1, conv1-2, conv2-1, conv2-2, conv3-1, conv3-2, conv3-3, conv4-1, conv4-2, conv4-3, conv5-1, conv5-2 and conv5-3;
5 pooling layers, denoted poolx: pool1 between conv1-2 and conv2-1, pool2 between conv2-2 and conv3-1, pool3 between conv3-3 and conv4-1, pool4 between conv4-3 and conv5-1, and pool5 after conv5-3;
the 3 fully connected layers, denoted fcxxx respectively, include fc4096, fc4096 and fc1000 in sequence after pool 5. Among them, the convolutional layer and the fully-connected layer have a weight coefficient and are also called as weight layers, and the total number is 13+3 — 16, which is the source of VGG 16.
Because the ImageNet image data set contains insects, a VGG16 network trained on ImageNet already extracts insect features well. A transfer learning method is therefore adopted: the VGG16 model pre-trained on ImageNet is taken as the starting point, the training data set obtained by image segmentation, size normalization and data enhancement is fed into it, and the network is fine-tuned to obtain the insect image training model, i.e. the trained VGG16 model.
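A hedged sketch of such transfer learning with the torchvision implementation of VGG16 is shown below; the choice of frozen layers, optimizer and hyperparameters is illustrative and would be tuned in practice.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_ORDERS = 5  # Lepidoptera, Orthoptera, Hemiptera, Coleoptera, Homoptera

model = models.vgg16(pretrained=True)               # ImageNet weights as the starting point
for p in model.features.parameters():               # freeze the convolutional layers
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, NUM_ORDERS)   # replace the final fc1000 head

optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One fine-tuning step on a batch of 224x224 insect images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```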
S4, extracting part of the images from the data set as reference images and an image to be identified, performing corner detection on the image to be identified and the reference images, obtaining an offset by computing the corner mean values, and correcting the reference images according to the offset, specifically correcting the relative position of the insect in the image;
Specifically, a picture is randomly selected from the image data set as the image to be identified. The reference images are chosen to cover, as far as possible, different insect species under each order, and each insect image should include different postures of the insect. Corner detection is then performed on the image to be identified and on each reference image, the mean corner position of each image is computed, the offset is obtained by comparing the two means, and the reference image is corrected according to this offset so that the relative positions of the insects in the two images are roughly consistent.
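The corner detector and alignment scheme are not fixed by the description; a possible sketch uses Shi-Tomasi corners in OpenCV and translates the reference image so that its corner centroid coincides with that of the image to be identified.

```python
import cv2
import numpy as np

def corner_mean(gray):
    """Mean (x, y) position of Shi-Tomasi corners in a grayscale image."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=100, qualityLevel=0.01, minDistance=5)
    return corners.reshape(-1, 2).mean(axis=0)

def align_reference(ref_bgr, query_bgr):
    """Shift the reference image so its corner centroid matches that of the query image."""
    ref_gray = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2GRAY)
    qry_gray = cv2.cvtColor(query_bgr, cv2.COLOR_BGR2GRAY)
    dx, dy = corner_mean(qry_gray) - corner_mean(ref_gray)   # offset between the two centroids
    M = np.float32([[1, 0, dx], [0, 1, dy]])                 # pure translation
    h, w = ref_bgr.shape[:2]
    return cv2.warpAffine(ref_bgr, M, (w, h))
```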
S5, preprocessing the image to be identified and the corrected reference images (target-background segmentation and size normalization), inputting the preprocessed images into the trained VGG16 model, and extracting image features;
Specifically, the preprocessed image to be identified and the corrected reference images are input into the trained VGG16 model, and their image features are extracted.
S6, visualizing the extracted image features with TensorBoard to obtain feature maps and saving them;
Specifically, the visualization tool TensorBoard is used to view the feature maps produced by the convolutional layers; as shown in FIG. 3, panels 1-13 correspond in order to the feature maps produced by convolutional layers 1-13. The feature maps are saved as the basis for the next step of analysing and selecting convolutional feature maps.
The shallow layers of VGG16 extract texture and detail features; they contain more features and are still able to capture key features. The deep layers extract contours, shapes and the strongest features, but as the number of convolutional layers increases the extracted features become high-dimensional and abstract, and some useless features may also be extracted. Therefore the feature maps produced by the first to fifth convolutional layers are taken to form a convolutional feature set that serves as the basis for the image similarity analysis in the next step.
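One possible way to capture the feature maps of the first five convolutional layers and write them to TensorBoard is sketched below using PyTorch forward hooks; the layer indices refer to torchvision's vgg16.features, and the log directory name is an assumption for this example.

```python
import torch
import torchvision
from torchvision import models
from torch.utils.tensorboard import SummaryWriter

vgg = models.vgg16(pretrained=True).eval()
# Indices of the first five conv layers in vgg.features
# (conv1-1, conv1-2, conv2-1, conv2-2, conv3-1)
CONV_IDX = [0, 2, 5, 7, 10]

feature_maps = {}
def make_hook(idx):
    def _hook(module, inputs, output):
        feature_maps[idx] = output.detach()
    return _hook

for idx in CONV_IDX:
    vgg.features[idx].register_forward_hook(make_hook(idx))

def extract_and_log(image_tensor, tag, logdir="runs/insect_features"):
    """image_tensor: (1, 3, 224, 224). Captures the five feature maps and logs them."""
    writer = SummaryWriter(logdir)
    with torch.no_grad():
        vgg(image_tensor)
    for idx, fmap in feature_maps.items():
        grid = torchvision.utils.make_grid(fmap[0].unsqueeze(1), normalize=True)
        writer.add_image(f"{tag}/conv_layer_{idx}", grid)
    writer.close()
    return dict(feature_maps)
```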
The first two convolutional layers each convolve the input image with 64 filters of size 3 x 3 and stride 1 (padding set to 'same'), producing 224 x 224 x 64 feature maps; after ReLU activation, a max-pooling layer with a 2 x 2 filter and stride 2 reduces the size to 112 x 112 x 64. The third and fourth convolutional layers each use 128 filters of size 3 x 3 and stride 1 ('same' padding) to produce 112 x 112 x 128 feature maps; after ReLU, a 2 x 2 max-pooling layer with stride 2 reduces the size to 56 x 56 x 128. The fifth convolutional layer convolves the input with 256 filters of size 3 x 3 and stride 1 ('same' padding) to obtain 56 x 56 x 256 feature maps, followed by ReLU and a 2 x 2 max-pooling layer with stride 2, giving 28 x 28 x 256. The parameters of the remaining convolutional layers follow the prior art and are not repeated here.
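These sizes can be checked directly by pushing a dummy 224 x 224 input through torchvision's VGG16 feature extractor (a quick verification sketch, not part of the method itself):

```python
import torch
from torch import nn
from torchvision import models

vgg = models.vgg16(pretrained=False)   # architecture only; weights are not needed for shape checks
x = torch.randn(1, 3, 224, 224)
for i, layer in enumerate(vgg.features):
    x = layer(x)
    if isinstance(layer, (nn.Conv2d, nn.MaxPool2d)):
        print(f"features[{i:2d}] {layer.__class__.__name__:9s} -> {tuple(x.shape[1:])}")
# Early lines include (64, 224, 224), (64, 112, 112), (128, 112, 112), (128, 56, 56), (256, 56, 56), ...
```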
S7, computing the feature-map image similarity (SSIM) between the image to be identified and all reference images of each insect order, averaging the values per order, and assigning the image to be identified to the order with the largest mean, which is the order to which it belongs.
Specifically, the convolutional feature set constructed in the previous step is used for image structural similarity analysis. SSIM is a full-reference image quality metric that measures image similarity in terms of luminance, contrast and structure; its value lies in the range (0, 1), and it is computed as follows:
SSIM(x, y) = ((2·μx·μy + c1)·(2·σxy + c2)) / ((μx² + μy² + c1)·(σx² + σy² + c2))
where x and y denote the image to be identified and the reference image respectively, μx is the mean of x, μy is the mean of y, σx² is the variance of x, σy² is the variance of y, and σxy is the covariance of x and y; c1 = (k1·L)² and c2 = (k2·L)² are constants that keep the expression numerically stable, L is the dynamic range of the pixel values (0-255), k1 = 0.01 and k2 = 0.03.
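A direct NumPy transcription of this global formula is sketched below for illustration; note that standard implementations such as skimage.metrics.structural_similarity compute SSIM over local sliding windows rather than over the whole image.

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Global SSIM of two equally sized images or feature maps, following the formula above."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```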
The SSIM values between the image to be identified and all reference images of each order are computed, and the average SSIM value per order is taken; the order corresponding to the largest average SSIM is the order to which the image to be identified belongs.
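Building on the ssim_global sketch above, the final decision step could be expressed as follows; the data structures (a list of query feature maps and a dictionary of reference feature maps per order) are assumptions made for this example.

```python
import numpy as np

def classify_by_mean_ssim(query_fmaps, reference_fmaps_by_order):
    """Return (predicted_order, mean_ssim_per_order).

    query_fmaps: list of 2-D feature maps of the image to be identified.
    reference_fmaps_by_order: {order: [list of matching feature-map lists, one per reference image]}
    """
    mean_ssim = {}
    for order, refs in reference_fmaps_by_order.items():
        per_reference = [
            np.mean([ssim_global(q, r) for q, r in zip(query_fmaps, ref_fmaps)])
            for ref_fmaps in refs
        ]
        mean_ssim[order] = float(np.mean(per_reference))
    predicted = max(mean_ssim, key=mean_ssim.get)
    return predicted, mean_ssim
```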
The foregoing has outlined rather broadly the preferred embodiments and principles of the present invention and it will be appreciated that those skilled in the art may devise variations of the present invention that are within the spirit and scope of the appended claims.

Claims (10)

1. An image identification method based on VGG16 in insect taxonomy is characterized by comprising the following steps:
S1, collecting insect images covering various insect orders, classifying the images by order, and establishing an image data set;
S2, performing target-background segmentation and size normalization on the insect images of the data set to obtain a training data set;
S3, training the VGG16 model on the training data set by transfer learning to obtain a trained VGG16 model;
S4, extracting part of the images from the data set as reference images and images to be identified, performing corner detection on the images to be identified and the reference images, computing the corner mean values to obtain an offset, and correcting the reference images according to the offset;
S5, performing target-background segmentation and size normalization on the image to be identified and the corrected reference images, inputting them into the trained VGG16 model, and extracting image features;
S6, visualizing the extracted image features with TensorBoard to obtain feature maps and saving them;
and S7, computing the feature-map image similarity (SSIM) between the image to be identified and all reference images of each insect order, averaging the values per order, and assigning the image to be identified to the order with the largest mean, which is the order to which it belongs.
2. The image identification method based on VGG16 in insect taxonomy according to claim 1, wherein the insect orders include Lepidoptera, Orthoptera, Hemiptera, Coleoptera and Homoptera.
3. The image identification method based on VGG16 in insect taxonomy according to claim 2, wherein the image data set satisfies the following condition:
each insect order contains no fewer than 10 pest species, and each species has at least 30 original images.
4. The image identification method based on VGG16 in insect taxonomy according to claim 1, wherein the target-background segmentation uses a fully convolutional network (FCN) to set the background to black.
5. The image identification method based on VGG16 in insect taxonomy according to claim 1, wherein the size normalization normalizes the image size to 224 x 224.
6. The image identification method based on VGG16 in insect taxonomy according to claim 1, wherein in step S2, data enhancement processing is further performed after the size normalization.
7. The image identification method based on VGG16 in insect taxonomy according to claim 6, wherein the data enhancement processing comprises one or more of flipping, rotating, brightness adjusting and color adjusting.
8. The image identification method based on VGG16 in insect taxonomy according to claim 1, wherein the VGG16 model comprises 13 convolutional layers, 3 fully-connected layers and 5 pooling layers.
9. The image identification method based on VGG16 in insect taxonomy according to claim 8, wherein in step S7, feature maps obtained by convolution of the first to fifth convolutional layers of the VGG16 model are used as input for calculating image similarity.
10. The image recognition method based on VGG16 in insect taxonomy according to claim 1, wherein in step S7, the similarity SSIM of the image is measured from three aspects of brightness, contrast and structure:
SSIM(x, y) = ((2·μx·μy + c1)·(2·σxy + c2)) / ((μx² + μy² + c1)·(σx² + σy² + c2))
wherein x and y denote the image to be identified and the reference image respectively, μx is the mean of x, μy is the mean of y, σx² is the variance of x, σy² is the variance of y, and σxy is the covariance of x and y; c1 = (k1·L)² and c2 = (k2·L)² are constants that keep the expression numerically stable, L is the dynamic range of the pixel values (0-255), k1 = 0.01 and k2 = 0.03.
CN202010643798.7A 2020-07-07 2020-07-07 Image identification method based on VGG16 in insect taxonomy Pending CN111832642A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010643798.7A CN111832642A (en) 2020-07-07 2020-07-07 Image identification method based on VGG16 in insect taxonomy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010643798.7A CN111832642A (en) 2020-07-07 2020-07-07 Image identification method based on VGG16 in insect taxonomy

Publications (1)

Publication Number Publication Date
CN111832642A true CN111832642A (en) 2020-10-27

Family

ID=72900220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010643798.7A Pending CN111832642A (en) 2020-07-07 2020-07-07 Image identification method based on VGG16 in insect taxonomy

Country Status (1)

Country Link
CN (1) CN111832642A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309841A (en) * 2018-09-28 2019-10-08 浙江农林大学 A kind of hickory nut common insect pests recognition methods based on deep learning
CN110135231A (en) * 2018-12-25 2019-08-16 杭州慧牧科技有限公司 Animal face recognition methods, device, computer equipment and storage medium
CN110146869A (en) * 2019-05-21 2019-08-20 北京百度网讯科技有限公司 Determine method, apparatus, electronic equipment and the storage medium of coordinate system conversion parameter
CN110766041A (en) * 2019-09-04 2020-02-07 江苏大学 Deep learning-based pest detection method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113159075A (en) * 2021-05-10 2021-07-23 北京虫警科技有限公司 Insect identification method and device
CN113140012A (en) * 2021-05-14 2021-07-20 北京字节跳动网络技术有限公司 Image processing method, image processing apparatus, image processing medium, and electronic device
CN113140012B (en) * 2021-05-14 2024-05-31 北京字节跳动网络技术有限公司 Image processing method, device, medium and electronic equipment
CN113313752A (en) * 2021-05-27 2021-08-27 哈尔滨工业大学 Leaf area index identification method based on machine vision
CN113298023A (en) * 2021-06-11 2021-08-24 长江大学 Insect dynamic behavior identification method based on deep learning and image technology
CN114022714A (en) * 2021-11-11 2022-02-08 哈尔滨工程大学 Harris-based data enhanced image classification method and system
CN114022714B (en) * 2021-11-11 2024-04-16 哈尔滨工程大学 Harris-based data enhanced image classification method and system
CN117077004A (en) * 2023-08-18 2023-11-17 中国科学院华南植物园 Species identification method, system, device and storage medium
CN117077004B (en) * 2023-08-18 2024-02-23 中国科学院华南植物园 Species identification method, system, device and storage medium
CN117934962A (en) * 2024-02-06 2024-04-26 青岛兴牧畜牧科技发展有限公司 Pork quality classification method based on reference color card image correction

Similar Documents

Publication Publication Date Title
CN111832642A (en) Image identification method based on VGG16 in insect taxonomy
CN107610087B (en) Tongue coating automatic segmentation method based on deep learning
CN107066559B (en) Three-dimensional model retrieval method based on deep learning
CN110348399B (en) Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual error network
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN111179216B (en) Crop disease identification method based on image processing and convolutional neural network
CN109978848B (en) Method for detecting hard exudation in fundus image based on multi-light-source color constancy model
CN111369605B (en) Infrared and visible light image registration method and system based on edge features
CN110232387B (en) Different-source image matching method based on KAZE-HOG algorithm
CN108363970A (en) A kind of recognition methods of fingerling class and system
CN106295124A (en) Utilize the method that multiple image detecting technique comprehensively analyzes gene polyadenylation signal figure likelihood probability amount
CN111178121B (en) Pest image positioning and identifying method based on spatial feature and depth feature enhancement technology
Wang et al. GKFC-CNN: Modified Gaussian kernel fuzzy C-means and convolutional neural network for apple segmentation and recognition
CN109815923B (en) Needle mushroom head sorting and identifying method based on LBP (local binary pattern) features and deep learning
CN108765427A (en) A kind of prostate image partition method
CN112949725B (en) Wheat seed classification method based on multi-scale feature extraction
CN112329818B (en) Hyperspectral image non-supervision classification method based on graph convolution network embedded characterization
CN116630960B (en) Corn disease identification method based on texture-color multi-scale residual shrinkage network
CN112364747B (en) Target detection method under limited sample
CN108876776B (en) Classification model generation method, fundus image classification method and device
CN112559791A (en) Cloth classification retrieval method based on deep learning
CN113011506B (en) Texture image classification method based on deep fractal spectrum network
CN113378620B (en) Cross-camera pedestrian re-identification method in surveillance video noise environment
Pushpa et al. Deep learning model for plant species classification using leaf vein features
CN116309477A (en) Neural network-based urban wall multispectral imaging disease nondestructive detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination