CN113887505A - Cattle image classification method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113887505A
CN113887505A (application CN202111234047.0A)
Authority
CN
China
Prior art keywords
image
result
feature extraction
training image
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111234047.0A
Other languages
Chinese (zh)
Inventor
潘元志
Current Assignee
Zhenjiang Hongxiang Automation Technology Co ltd
Original Assignee
Zhenjiang Hongxiang Automation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhenjiang Hongxiang Automation Technology Co ltd filed Critical Zhenjiang Hongxiang Automation Technology Co ltd
Priority to CN202111234047.0A
Publication of CN113887505A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a cattle image classification method and device, electronic equipment, and a storage medium. The classification method comprises the following steps: performing feature extraction on an image to be classified, which is a cattle image, by using the feature extraction module of a cattle classification model, to obtain a feature extraction result of the image to be classified; and classifying that feature extraction result by using the classification module of the cattle classification model to obtain prediction classification information of the image to be classified, the prediction classification information indicating the predicted fine-grained classification of the cattle. With this method, features are extracted from the cattle image to be classified and the extracted feature extraction result is then classified to obtain prediction classification information indicating the predicted fine-grained classification; extracting features first and classifying afterwards enables accurate identification of finely subdivided cattle breeds.

Description

Cattle image classification method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of deep learning, and in particular to a cattle image classification method and device, electronic equipment, and a storage medium.
Background
In the field of livestock breeding, and cattle breeding in particular, accurate identification of superior breeds is crucial for developing new breeds, increasing returns, improving the environmental adaptability of livestock, and reducing rural poverty. For example, Pakistan has several world-class buffalo breeds such as Nili Ravi and Kundi, and its dairy industry has developed rapidly over the last two decades, with many dairy farms established. However, despite the large production capacity of these buffalo breeds, domestic buffalo milk output remains far below potential: the lack of accurate breed identification makes selection and progeny testing difficult, resulting in long calving intervals, silent estrus, and late maturity in the raised buffalo.
Given this need for accurate breed identification, the traditional approach classifies local cattle by their regional differences, but such differences are insufficient to distinguish finely subdivided breeds. To improve and protect cattle breeds effectively and meaningfully, detailed characterization and evaluation of inter-breed differences therefore requires modern technical tools.
With the development of deep learning and image recognition technology, more and more livestock breeds are being identified from images. For example, Kumar et al. (2018) proposed a deep-learning method that identifies individual cattle from image sequences of the muzzle point pattern; their approach addresses lost or exchanged livestock and insurance claims. As another example, Rauf et al. (2019) proposed an artificial-intelligence-driven fish species identification system based on a CNN framework.
However, while the prior art can identify the general breed of various livestock, it cannot accurately identify finely subdivided cattle breeds.
Disclosure of Invention
The application aims to provide a cattle image classification method and device, an electronic device, and a storage medium, solving the technical problem that the prior art cannot accurately identify finely subdivided cattle breeds.
The purpose of the application is achieved by the following technical solutions:
In a first aspect, the present application provides a cattle image classification method, the method comprising: performing feature extraction on an image to be classified by using the feature extraction module of a cattle classification model to obtain a feature extraction result of the image to be classified, wherein the image to be classified is a cattle image; and classifying the feature extraction result of the image to be classified by using the classification module of the cattle classification model to obtain prediction classification information of the image to be classified, wherein the prediction classification information indicates the predicted fine-grained classification of the cattle.
The advantage of this technical scheme is that feature extraction is performed on the cattle image to be classified and the extracted feature extraction result is then classified to obtain prediction classification information indicating the predicted fine-grained classification of the cattle; extracting features first and classifying afterwards enables accurate identification of finely subdivided cattle breeds.
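As a minimal illustration, the two-module pipeline of this first aspect can be sketched as two plain functions. The per-channel statistic below is an illustrative stand-in for the model's actual feature extraction module, and the class names are taken from the embodiments described later:

```python
import numpy as np

def extract_features(image):
    """Stand-in for the feature extraction module
    (per-channel mean intensities instead of a CNN)."""
    return image.mean(axis=(0, 1))

def classify_features(features, classes):
    """Stand-in for the classification module: highest-scoring class wins."""
    return classes[int(np.argmax(features))]

image = np.zeros((224, 224, 3))   # a cattle image to be classified (dummy data)
image[..., 2] = 1.0               # make the third channel dominant
breed = classify_features(extract_features(image),
                          ["Khundi", "Mix", "Neli Ravi"])
print(breed)  # Neli Ravi
```

A real implementation would replace `extract_features` with the convolutional feature extractor and `classify_features` with the fully connected classifier described in the embodiments below.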
In some alternative embodiments, the cattle classification model is trained as follows: performing feature extraction on a training image by using the feature extraction module of a model to be trained to obtain a feature extraction result of the training image, wherein the training image is a cattle image used for model training; classifying the feature extraction result of the training image by using the classification module of the model to be trained to obtain prediction classification information of the training image, wherein the prediction classification information indicates the predicted fine-grained classification of the cattle; and training the model to be trained by using the prediction classification information and the labeled classification information of the training image to obtain the cattle classification model, wherein the labeled classification information indicates the labeled fine-grained classification of the cattle.
The advantage of this technical scheme is that the model to be trained is trained with cattle images and labeled fine-grained classifications: features are first extracted from the cattle image, the feature extraction result is classified to obtain prediction classification information, and the model is trained by comparing the prediction classification information with the labels. Because the cattle classification model is trained on cattle images with fine-grained labels, it can accurately identify the fine-grained breed of cattle when applied to cattle image classification.
In some optional embodiments, the performing, by using the feature extraction module of the model to be trained, feature extraction on the training image to obtain a feature extraction result of the training image includes: processing the training image by using a first convolution block of the feature extraction module to obtain a first processing result of the training image; and acquiring a feature extraction result of the training image based on the first processing result of the training image.
The advantage of this technical scheme is that the first convolution block processes the training image to obtain the first processing result, and the feature extraction result is obtained based on that result. Processing by the convolution block makes the result reflect the features of the training image, improving the accuracy of the feature extraction result and, in turn, of the trained cattle classification model when applied to cattle image classification.
In some optional embodiments, the obtaining a feature extraction result of the training image based on the first processing result of the training image includes: pooling a first processing result of the training image by using a first pooling layer of the feature extraction module to obtain a first pooling result of the training image; processing the first pooling result of the training image by using a second convolution block of the feature extraction module to obtain a second processing result of the training image; and acquiring a feature extraction result of the training image based on a second processing result of the training image.
The advantage of this technical scheme is that the added first pooling layer pools the first processing result into a first pooling result, the added second convolution block further processes that pooling result into the second processing result, and the feature extraction result is obtained based on the second processing result. With the added pooling layer and convolution block, the second processing result reflects the detailed features of the training image better than the first processing result does, improving the accuracy of the feature extraction result and, in turn, of the trained cattle classification model when applied to cattle image classification.
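The pooling step in this embodiment can be illustrated numerically. A 2x2 window with stride 2 is assumed here for concreteness; the text above does not fix the window size:

```python
import numpy as np

def max_pool2d(x, k=2, s=2):
    """Max-pool a 2-D feature map with a k x k window and stride s."""
    h, w = x.shape
    out = np.empty(((h - k) // s + 1, (w - k) // s + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = x[i * s:i * s + k, j * s:j * s + k].max()
    return out

fmap = np.array([[1, 3, 2, 4],
                 [5, 6, 1, 2],
                 [7, 2, 8, 1],
                 [3, 4, 5, 9]], dtype=float)   # a toy 4x4 processing result
pooled = max_pool2d(fmap)
print(pooled)  # [[6. 4.]
               #  [7. 9.]]
```

Each output value keeps only the strongest activation in its window, which halves the spatial resolution while retaining the most salient local features.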
In some optional embodiments, the obtaining a feature extraction result of the training image based on the second processing result of the training image includes: pooling a second processing result of the training image by using a second pooling layer of the feature extraction module to obtain a second pooling result of the training image; processing the second pooling result of the training image by using a third convolution block of the feature extraction module to obtain a third processing result of the training image; and acquiring a feature extraction result of the training image based on a third processing result of the training image.
The advantage of this technical scheme is that the added second pooling layer pools the second processing result into a second pooling result, the added third convolution block further processes that pooling result into the third processing result, and the feature extraction result is then obtained based on the third processing result. With the added pooling layer and convolution block, the third processing result reflects the detailed features of the training image better than the second processing result does, improving the accuracy of the feature extraction result and, in turn, of the trained cattle classification model when applied to cattle image classification.
In some optional embodiments, the obtaining a feature extraction result of the training image based on the third processing result of the training image includes: pooling a third processing result of the training image by using a third pooling layer of the feature extraction module to obtain a third pooling result of the training image; processing the third pooling result of the training image by using a fourth convolution block of the feature extraction module to obtain a fourth processing result of the training image; and acquiring a feature extraction result of the training image based on a fourth processing result of the training image.
The advantage of this technical scheme is that the added third pooling layer pools the third processing result into a third pooling result, the added fourth convolution block further processes that pooling result into the fourth processing result, and the feature extraction result is then obtained based on the fourth processing result. With the added pooling layer and convolution block, the fourth processing result reflects the detailed features of the training image better than the third processing result does, improving the accuracy of the feature extraction result and, in turn, of the trained cattle classification model when applied to cattle image classification.
In some optional embodiments, the obtaining a feature extraction result of the training image based on the fourth processing result of the training image includes: pooling a fourth processing result of the training image by using a fourth pooling layer of the feature extraction module to obtain a fourth pooling result of the training image; processing a fourth pooling result of the training image by using a fifth convolution block of the feature extraction module to obtain a fifth processing result of the training image; and taking the fifth processing result of the training image as the feature extraction result of the training image.
The advantage of this technical scheme is that the added fourth pooling layer pools the fourth processing result into a fourth pooling result, the added fifth convolution block further processes that pooling result into the fifth processing result, and the fifth processing result is taken as the feature extraction result. With the added pooling layer and convolution block, the fifth processing result reflects the detailed features of the training image better than the fourth processing result does, improving the accuracy of the feature extraction result and, in turn, of the trained cattle classification model when applied to cattle image classification.
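Taken together, the five convolution blocks with pooling layers between them form a VGG-style feature extractor. The sketch below traces only the tensor shapes through such a stack, assuming 3x3 convolutions with padding 1 (which preserve spatial size), 2x2/stride-2 pooling, and hypothetical channel widths; none of these hyperparameters are fixed by the text above:

```python
def feature_extractor_shapes(h, w, channels=(64, 128, 256, 512, 512)):
    """Trace (height, width, channels) through five convolution blocks,
    with a 2x2/stride-2 pooling layer between consecutive blocks."""
    shapes = []
    for i, c in enumerate(channels):
        if i > 0:                 # pooling layers sit between the blocks
            h, w = h // 2, w // 2
        shapes.append((h, w, c))  # 3x3 conv with padding 1 keeps h and w
    return shapes

stages = feature_extractor_shapes(224, 224)
print(stages)
# [(224, 224, 64), (112, 112, 128), (56, 56, 256), (28, 28, 512), (14, 14, 512)]
```

The progressive halving of resolution with growing channel depth is what lets later blocks capture increasingly detailed, abstract features of the training image.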
In some optional embodiments, the processing procedure of each of the first to fifth convolution blocks is as follows: convolving an input image by using the convolution layer of the convolution block to obtain a convolution result of the input image; performing batch normalization on the convolution result by using the batch normalization layer of the convolution block to obtain a batch normalization result of the input image; and activating the batch normalization result by using the activation layer of the convolution block, the activation result serving as the processing result of the input image.
The beneficial effect of this technical scheme is that the input image is processed by the convolution layer, the batch normalization layer, and the activation layer in sequence, making the resulting processing result of the input image more accurate.
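A minimal single-channel sketch of the convolution, batch normalization, and activation sequence inside one convolution block; the kernel, input, and choice of ReLU as the activation are illustrative assumptions, not parameters fixed by the patent:

```python
import numpy as np

def conv_block(x, kernel, eps=1e-5):
    """One convolution block: 'valid' convolution -> batch norm -> ReLU."""
    kh, kw = kernel.shape
    h, w = x.shape
    conv = np.empty((h - kh + 1, w - kw + 1))
    for i in range(conv.shape[0]):
        for j in range(conv.shape[1]):
            conv[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    # batch normalization layer: normalize to zero mean and unit variance
    bn = (conv - conv.mean()) / np.sqrt(conv.var() + eps)
    return np.maximum(bn, 0.0)   # activation layer (ReLU)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))  # a toy single-channel input image
k = rng.standard_normal((3, 3))  # a toy 3x3 convolution kernel
out = conv_block(x, k)           # 3x3 activated feature map
```

Normalizing before activation keeps the activations well scaled, which is one reason the conv/BN/ReLU ordering is a common design choice.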
In some optional embodiments, the classifying the feature extraction result of the training image by using the classification module of the model to be trained to obtain the prediction classification information of the training image includes: fully connecting the feature extraction results of the training images by using a full connection layer of the classification module to obtain a full connection result of the training images; and classifying the full-connection result of the training image by using a classifier of the classification module to obtain the prediction classification information of the training image.
The advantage of this technical scheme is that the feature extraction result is first passed through a fully connected layer and the full-connection result is then classified by the classifier to obtain the prediction classification information, so the prediction classification information comprehensively reflects the features of the training image and the trained cattle classification model is more accurate when applied to cattle image classification.
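The classification module described above can be sketched as a fully connected layer followed by a softmax classifier over the three breed classes named in the embodiments; the weights here are random placeholders rather than trained parameters:

```python
import numpy as np

def classify(features, W, b, classes):
    """Fully connected layer followed by a softmax classifier."""
    logits = features @ W + b             # the full-connection result
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    probs = exp / exp.sum()
    return classes[int(np.argmax(probs))], probs

rng = np.random.default_rng(42)
features = rng.standard_normal(512)          # a flattened feature extraction result
W = 0.01 * rng.standard_normal((512, 3))     # placeholder fully connected weights
b = np.zeros(3)
breed, probs = classify(features, W, b, ["Khundi", "Mix", "Neli Ravi"])
```

The softmax output is a probability distribution over the breed classes, so the prediction classification information can carry a confidence alongside the predicted breed.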
In some optional embodiments, training the model to be trained by using the prediction classification information and the labeled classification information of the training image to obtain the cattle classification model includes: updating the parameters of the feature extraction module of the model to be trained by using the prediction classification information and the labeled classification information of the training image to obtain a semi-trained model; inputting the training image into the semi-trained model and outputting a plurality of feature vectors of the training image from the fully connected layer of the classification module of the semi-trained model; performing feature classification on the plurality of feature vectors to obtain a plurality of predicted feature categories of the training image; and updating all parameters of the semi-trained model by using the plurality of predicted feature categories and the plurality of labeled feature categories of the training image to obtain the cattle classification model.
The beneficial effect of this technical scheme is that model training is divided into two stages: a semi-trained model is obtained first and then trained further, so the final model is reached quickly, with less training data and higher efficiency. The fully connected layer of the semi-trained model outputs a plurality of feature vectors, these vectors are classified into predicted feature categories, and the predicted categories are matched against the labeled feature categories to train the semi-trained model further. This improves training accuracy and yields a cattle classification model that can accurately identify finely subdivided cattle breeds.
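The two-stage schedule described above (first update only the feature extraction parameters, then update all parameters of the semi-trained model) can be sketched on a toy least-squares model; the data, loss, and learning rate are illustrative assumptions, not the patent's training setup:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((32, 8))   # stand-in for training images
y = rng.standard_normal(32)        # stand-in for labels

w_feat = rng.standard_normal(8)    # "feature extraction module" parameters
w_cls = 1.0                        # "classification module" parameter
lr = 0.01

def loss(w_feat, w_cls):
    pred = (X @ w_feat) * w_cls
    return ((pred - y) ** 2).mean()

# Stage 1: update only the feature-extraction parameters -> semi-trained model
for _ in range(200):
    err = (X @ w_feat) * w_cls - y
    w_feat -= lr * (2 * w_cls * X.T @ err / len(y))

stage1 = loss(w_feat, w_cls)

# Stage 2: update all parameters of the semi-trained model
for _ in range(200):
    err = (X @ w_feat) * w_cls - y
    w_feat -= lr * (2 * w_cls * X.T @ err / len(y))
    w_cls -= lr * (2 * (X @ w_feat) @ ((X @ w_feat) * w_cls - y) / len(y))
```

Freezing part of the parameters in stage 1 mirrors how the feature extractor is settled first, and the stage 2 fine-tune over all parameters can only maintain or improve the fit reached by the semi-trained model.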
In some alternative embodiments, the fine classification of cattle comprises Khundi, Mix, and Neli Ravi; the plurality of predicted feature classes and the plurality of annotated feature classes each include one or more of a speckle class, an eye class, a body color class, a horn class, a tail class, a body class, a neck class, and a breast class.
The advantage of this technical scheme is that the fine-grained cattle categories are defined in a targeted manner, which reduces the amount of training data needed while still accurately identifying the subdivided breeds of a specific region, improving training efficiency. Providing multiple per-part feature categories of the cattle increases the accuracy of cattle image classification and the ability to discriminate fine differences, in particular between two or more subdivided breeds with extremely high overall similarity.
In a second aspect, the present application provides an apparatus for classifying cattle, the apparatus comprising: the feature extraction module is used for extracting features of an image to be classified by using the feature extraction module of the cattle classification model to obtain a feature extraction result of the image to be classified, wherein the image to be classified is a cattle image to be classified; and the image classification module is used for classifying the feature extraction result of the image to be classified by utilizing the classification module of the cattle classification model to obtain the prediction classification information of the image to be classified, and the prediction classification information is used for indicating the prediction detail classification of cattle.
In some alternative embodiments, the training process of the cattle classification model is as follows: performing feature extraction on a training image by using a feature extraction module of a model to be trained to obtain a feature extraction result of the training image, wherein the training image is a cattle image used for model training; classifying the feature extraction result of the training image by utilizing a classification module of the model to be trained to obtain prediction classification information of the training image, wherein the prediction classification information is used for indicating the prediction detailed classification of the cattle; and training the model to be trained by utilizing the prediction classification information and the labeling classification information of the training image to obtain the cattle classification model, wherein the labeling classification information is used for indicating the labeling detail classification of the cattle.
In some optional embodiments, the performing, by using the feature extraction module of the model to be trained, feature extraction on the training image to obtain a feature extraction result of the training image includes: processing the training image by using a first convolution block of the feature extraction module to obtain a first processing result of the training image; and acquiring a feature extraction result of the training image based on the first processing result of the training image.
In some optional embodiments, the obtaining a feature extraction result of the training image based on the first processing result of the training image includes: pooling a first processing result of the training image by using a first pooling layer of the feature extraction module to obtain a first pooling result of the training image; processing the first pooling result of the training image by using a second convolution block of the feature extraction module to obtain a second processing result of the training image; and acquiring a feature extraction result of the training image based on a second processing result of the training image.
In some optional embodiments, the obtaining a feature extraction result of the training image based on the second processing result of the training image includes: pooling a second processing result of the training image by using a second pooling layer of the feature extraction module to obtain a second pooling result of the training image; processing the second pooling result of the training image by using a third convolution block of the feature extraction module to obtain a third processing result of the training image; and acquiring a feature extraction result of the training image based on a third processing result of the training image.
In some optional embodiments, the obtaining a feature extraction result of the training image based on the third processing result of the training image includes: pooling a third processing result of the training image by using a third pooling layer of the feature extraction module to obtain a third pooling result of the training image; processing the third pooling result of the training image by using a fourth convolution block of the feature extraction module to obtain a fourth processing result of the training image; and acquiring a feature extraction result of the training image based on a fourth processing result of the training image.
In some optional embodiments, the obtaining a feature extraction result of the training image based on the fourth processing result of the training image includes: pooling a fourth processing result of the training image by using a fourth pooling layer of the feature extraction module to obtain a fourth pooling result of the training image; processing a fourth pooling result of the training image by using a fifth convolution block of the feature extraction module to obtain a fifth processing result of the training image; and taking the fifth processing result of the training image as the feature extraction result of the training image.
In some optional embodiments, the processing procedure of each of the first to fifth convolution blocks is as follows: convolving an input image by using the convolution layer of the convolution block to obtain a convolution result of the input image; performing batch normalization on the convolution result by using the batch normalization layer of the convolution block to obtain a batch normalization result of the input image; and activating the batch normalization result by using the activation layer of the convolution block, the activation result serving as the processing result of the input image.
In some optional embodiments, the classifying the feature extraction result of the training image by using the classification module of the model to be trained to obtain the prediction classification information of the training image includes: fully connecting the feature extraction results of the training images by using a full connection layer of the classification module to obtain a full connection result of the training images; and classifying the full-connection result of the training image by using a classifier of the classification module to obtain the prediction classification information of the training image.
In some optional embodiments, training the model to be trained by using the prediction classification information and the labeled classification information of the training image to obtain the cattle classification model includes: updating the parameters of the feature extraction module of the model to be trained by using the prediction classification information and the labeled classification information of the training image to obtain a semi-trained model; inputting the training image into the semi-trained model and outputting a plurality of feature vectors of the training image from the fully connected layer of the classification module of the semi-trained model; performing feature classification on the plurality of feature vectors to obtain a plurality of predicted feature categories of the training image; and updating all parameters of the semi-trained model by using the plurality of predicted feature categories and the plurality of labeled feature categories of the training image to obtain the cattle classification model.
In some alternative embodiments, the fine classification of cattle comprises Khundi, Mix, and Neli Ravi; the plurality of predicted feature classes and the plurality of annotated feature classes each include one or more of a speckle class, an eye class, a body color class, a horn class, a tail class, a body class, a neck class, and a breast class.
In a third aspect, the present application provides an electronic device comprising a memory and a processor, the memory storing a computer program; the processor, when executing the computer program, implements the steps of any of the above cattle image classification methods.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the above-described bovine image classification methods.
The above is an overview of the technical solutions of the present application, and the detailed embodiments of the present application are described below in order to make those skilled in the art fully understand the present application.
Drawings
The present application is further described below with reference to the drawings and examples.
Fig. 1 is a schematic flowchart of a method for classifying cattle images according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a training process of a cattle classification model according to an embodiment of the present application;
fig. 3 is a schematic processing procedure diagram of data augmentation processing according to an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of feature extraction performed on a training image according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a process for obtaining a feature extraction result based on a first processing result according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a process for obtaining a feature extraction result based on a second processing result according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a process of obtaining a feature extraction result based on a third processing result according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a process for obtaining a feature extraction result based on a fourth processing result according to an embodiment of the present application;
FIG. 9 is a flowchart illustrating the processing of each convolution block according to an embodiment of the present disclosure;
FIG. 10 is a schematic flowchart of obtaining the predictive classification information of the training image according to an embodiment of the present application;
fig. 11 is a schematic flowchart of a process of training a model to be trained to obtain a cattle classification model according to an embodiment of the present application;
fig. 12 is an overall framework schematic diagram of a cattle image classification method provided in an embodiment of the present application;
fig. 13 is an algorithm framework diagram of a cattle image classification method according to an embodiment of the present application;
FIG. 14 is a schematic diagram of the max-pooling calculation for each convolution block according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a cattle image classification device according to an embodiment of the present application;
fig. 16 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of a program product for implementing a method for classifying cattle images according to an embodiment of the present application.
Detailed Description
The present application is further described with reference to the accompanying drawings and the detailed description. It should be noted that, in the present application, the embodiments or technical features described below may be arbitrarily combined to form new embodiments provided there is no conflict.
Referring to fig. 1, an embodiment of the present application provides a method for classifying cattle images, where the method includes steps S101 to S102.
In the embodiment of the present application, the kind of cattle is, for example, beef cattle or dairy cows.
Wherein the cow and/or beef cattle can be further classified into yellow cattle, buffalo, yak, etc.
Step S101: and performing feature extraction on the image to be classified by using a feature extraction module of the cattle classification model to obtain a feature extraction result of the image to be classified, wherein the image to be classified is the cattle image to be classified.
Step S102: and classifying the feature extraction result of the image to be classified by utilizing a classification module of the cattle classification model to obtain the prediction classification information of the image to be classified, wherein the prediction classification information is used for indicating the prediction detail classification of cattle.
Wherein the fine classification refers to a fine branch within the breed classification of cattle. For example, Pakistan is the country with the second-largest number of buffalo breeds, among which the Neli-Ravi breed predominates. The widespread demand for the Neli and Ravi varieties has led, since the 1960s, to the emergence of a new hybrid, "Neli-Ravi". The Neli-Ravi variety differs only slightly from other varieties, and its identification and classification has become a problem of great concern to Pakistan's dairy production centers.
Therefore, feature extraction is performed on the cattle image to be classified, and the extracted feature extraction result is classified to obtain prediction classification information indicating the predicted fine classification of the cattle; this extract-then-classify approach enables accurate identification of finely subdivided cattle breeds.
Referring to fig. 2, in some embodiments, the training process of the cattle classification model may include steps S201 to S203.
Step S201: and performing feature extraction on a training image by using a feature extraction module of the model to be trained to obtain a feature extraction result of the training image, wherein the training image is a cattle image for model training.
Step S202: and classifying the feature extraction result of the training image by utilizing the classification module of the model to be trained to obtain the prediction classification information of the training image, wherein the prediction classification information is used for indicating the prediction detail classification of the cattle.
Step S203: and training the model to be trained by utilizing the prediction classification information and the labeling classification information of the training image to obtain the cattle classification model, wherein the labeling classification information is used for indicating the labeling detail classification of the cattle.
The training images can be shot manually on site and then input, or imported from an existing database. In some application scenarios, the training images are collected from the Pakistan buffalo research center, and the resulting dataset is publicly available in a dataset repository named Mendeley. The entire dataset, including all class labels, is normalized and preprocessed for training the self-activated CNN.
Therefore, the model to be trained is trained using the cattle images and the labeled fine categories: feature extraction is first performed on the cattle image, the feature extraction result is then classified to obtain prediction classification information, and the model is trained using the labeled fine categories together with the prediction classification information. Because the cattle classification model is trained with cattle images and labeled fine categories, it can accurately identify the subdivided breed of the cattle when applied to cattle image classification.
In some embodiments, the training images may be subjected to data augmentation to strengthen model training, and various augmentation techniques may be used. For example, in some application scenarios, the image may be reflected at randomly chosen positions in the X and Y directions; the X and Y transforms specify left, right, up, and down reflections of the image. The random X and Y translation is set to [-4, 4], which defines a random shift of the image in the X and Y dimensions within that pixel range, i.e. from -4 to 4 pixels. In addition, data augmentation may also be achieved by applying random cropping to perform hide-and-seek style operations, including cropping and randomly erasing parts of a given image. Fig. 3 shows training image results after partial data augmentation.
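The augmentation steps above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions (single-channel images, zero fill for pixels vacated by the shift, a fixed 8x8 erasing patch), not the patent's actual implementation:

```python
import numpy as np

def augment(image, rng):
    """One random augmentation pass, as sketched in the text: random X/Y
    reflection, random translation in [-4, 4] pixels, and random erasing
    of a patch. All parameter choices here are illustrative."""
    img = image.copy()
    # Random left-right / up-down reflection
    if rng.random() < 0.5:
        img = img[:, ::-1]
    if rng.random() < 0.5:
        img = img[::-1, :]
    # Random shift in X and Y within [-4, 4]; vacated pixels are zero-filled
    dy, dx = rng.integers(-4, 5, size=2)
    img = np.roll(img, (dy, dx), axis=(0, 1))
    if dy > 0:
        img[:dy] = 0
    elif dy < 0:
        img[dy:] = 0
    if dx > 0:
        img[:, :dx] = 0
    elif dx < 0:
        img[:, dx:] = 0
    # Random erasing: zero out a random 8x8 patch (size chosen for illustration)
    h, w = img.shape[:2]
    y0 = rng.integers(0, h - 8)
    x0 = rng.integers(0, w - 8)
    img[y0:y0 + 8, x0:x0 + 8] = 0
    return img
```

In practice each training image would pass through this function once per epoch, producing a slightly different view every time.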
In some embodiments, the training images are augmented with data primarily prior to training and testing, in order to avoid overfitting and other deep learning problems.
Referring to fig. 4, in some embodiments, the step S201 may include steps S301 to S302.
Step S301: and processing the training image by using the first convolution block of the feature extraction module to obtain a first processing result of the training image.
Step S302: and acquiring a feature extraction result of the training image based on the first processing result of the training image.
Therefore, the training image is processed by the first convolution block to obtain a first processing result, and the feature extraction result is then obtained based on the first processing result. Through the processing of the convolution block, the obtained result reflects the features of the training image, which improves the accuracy of the feature extraction result and, in turn, the accuracy of the trained cattle classification model when applied to cattle image classification.
Referring to fig. 5, in some embodiments, the step S302 may include steps S401 to S403.
Step S401: and pooling the first processing result of the training image by using a first pooling layer of the feature extraction module to obtain a first pooling result of the training image.
Step S402: and processing the first pooling result of the training image by using a second convolution block of the feature extraction module to obtain a second processing result of the training image.
Step S403: and acquiring a feature extraction result of the training image based on a second processing result of the training image.
Therefore, a first pooling layer is added to pool the first processing result into a first pooling result, and a second convolution block is added to further process the first pooling result into a second processing result, from which the feature extraction result is obtained. Through the added pooling layer and convolution block, the second processing result reflects the detailed features of the training image better than the first processing result, which improves the accuracy of the feature extraction result and, in turn, the accuracy of the trained cattle classification model when applied to cattle image classification.
Referring to fig. 6, in some embodiments, the step S403 may include steps S501 to S503.
Step S501: and pooling a second processing result of the training image by using a second pooling layer of the feature extraction module to obtain a second pooling result of the training image.
Step S502: and processing the second pooling result of the training image by using a third convolution block of the feature extraction module to obtain a third processing result of the training image.
Step S503: and acquiring a feature extraction result of the training image based on a third processing result of the training image.
Therefore, a second pooling layer is added to pool the second processing result into a second pooling result, and a third convolution block is added to further process the second pooling result into a third processing result, from which the feature extraction result is obtained. Through the added pooling layer and convolution block, the third processing result reflects the detailed features of the training image better than the second processing result, which improves the accuracy of the feature extraction result and, in turn, the accuracy of the trained cattle classification model when applied to cattle image classification.
Referring to fig. 7, in some embodiments, the step S503 may include steps S601 to S603.
Step S601: and pooling a third processing result of the training image by using a third pooling layer of the feature extraction module to obtain a third pooling result of the training image.
Step S602: and processing the third pooling result of the training image by using a fourth convolution block of the feature extraction module to obtain a fourth processing result of the training image.
Step S603: and acquiring a feature extraction result of the training image based on a fourth processing result of the training image.
Therefore, a third pooling layer is added to pool the third processing result into a third pooling result, and a fourth convolution block is added to further process the third pooling result into a fourth processing result, from which the feature extraction result is obtained. Through the added pooling layer and convolution block, the fourth processing result reflects the detailed features of the training image better than the third processing result, which improves the accuracy of the feature extraction result and, in turn, the accuracy of the trained cattle classification model when applied to cattle image classification.
Referring to fig. 8, in some embodiments, the step S603 may include steps S701 to S703.
Step S701: and pooling a fourth processing result of the training image by using a fourth pooling layer of the feature extraction module to obtain a fourth pooling result of the training image.
Step S702: and processing the fourth pooling result of the training image by using a fifth convolution block of the feature extraction module to obtain a fifth processing result of the training image.
Step S703: and taking the fifth processing result of the training image as the feature extraction result of the training image.
Therefore, a fourth pooling layer is added to pool the fourth processing result into a fourth pooling result, and a fifth convolution block is added to further process the fourth pooling result into a fifth processing result, which is taken as the feature extraction result. Through the added pooling layer and convolution block, the fifth processing result reflects the detailed features of the training image better than the fourth processing result, which improves the accuracy of the feature extraction result and, in turn, the accuracy of the trained cattle classification model when applied to cattle image classification.
Referring to fig. 9, in some embodiments, the processing of each of the first to fifth convolution blocks may include steps S801 to S803. The input image is the image input to the convolution block: for example, when a pooling result is input into a convolution block, that pooling result is the input image; likewise, when a feature extraction result is input to a convolution block, that feature extraction result is the input image.
Step S801: and performing convolution on the input image by utilizing the convolution layer of the convolution block to obtain a convolution result of the input image.
Step S802: and carrying out batch normalization on the convolution result of the input image by utilizing the batch normalization layer of the convolution block to obtain a batch normalization result of the input image.
Step S803: and activating the batch normalization result of the input image by using the activation layer of the convolution block to obtain the activation result of the input image as the processing result of the input image.
Therefore, the input image is sequentially processed by the convolution layer, the batch normalization layer and the activation layer, and the obtained processing result of the input image is more accurate.
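As a rough single-channel sketch of steps S801 to S803 (NumPy, valid convolution, normalization over the whole feature map; all names, shapes, and parameters are illustrative assumptions, not the patent's actual implementation):

```python
import numpy as np

def conv_block(x, kernel, gamma=1.0, beta=0.0, eps=1e-5):
    """Minimal convolution block: convolution -> batch normalization -> ReLU,
    mirroring steps S801-S803 for one channel."""
    kh, kw = kernel.shape
    H, W = x.shape
    # S801: 'valid' 2D convolution (cross-correlation), stride 1
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    # S802: batch normalization of the convolution result
    out = gamma * (out - out.mean()) / np.sqrt(out.var() + eps) + beta
    # S803: ReLU activation yields the processing result
    return np.maximum(out, 0.0)
```

A real implementation would operate on multi-channel batches with learned per-channel statistics; this sketch only shows the ordering of the three layers.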
Referring to fig. 10, in some embodiments, the step S202 may include steps S901 to S902.
Step S901: and fully connecting the feature extraction results of the training images by using a fully-connected layer of the classification module to obtain a fully-connected result of the training images.
Step S902: and classifying the full-connection result of the training image by using a classifier of the classification module to obtain the prediction classification information of the training image.
Therefore, the feature extraction result is fully connected, and the full-connection result is then classified by the classifier to obtain the prediction classification information. This allows the prediction classification information to comprehensively reflect the features of the training image, so that the trained cattle classification model is more accurate when applied to cattle image classification.
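A minimal sketch of steps S901 and S902, assuming a flattened feature vector and a softmax classifier; the weight matrix W and bias b are hypothetical learned parameters, not values from the patent:

```python
import numpy as np

def classify(features, W, b):
    """Fully-connected layer (S901) followed by a softmax classifier (S902).
    `features` is the flattened feature extraction result."""
    logits = features @ W + b       # full-connection result
    z = logits - np.max(logits)     # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return probs                    # prediction classification information
```

The predicted fine category is then `probs.argmax()`, the class with the highest probability.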
Referring to fig. 11, in some embodiments, the step S203 may include steps S1001 to S1004.
Step S1001: and updating the parameters of the feature extraction module of the model to be trained by utilizing the prediction classification information and the labeling classification information of the training image to obtain a semi-training model.
Step S1002: and inputting the training image into the semi-training model, and outputting a plurality of feature vectors of the training image by using a full connection layer of a classification module of the semi-training model.
Step S1003: and carrying out feature classification on the plurality of feature vectors of the training image to obtain a plurality of prediction feature categories of the training image.
Step S1004: and updating all parameters of the semi-training model by using the plurality of predicted characteristic categories and the plurality of labeled characteristic categories of the training images to obtain the cattle classification model.
In some application scenarios, a traditional pre-trained CNN (convolutional neural network) is trained on large-scale data; its generic features, carried over by transfer learning, can benefit many tasks in medical and other computer vision fields, and learning these extracted features can help the small dataset of the proposed study. However, when large amounts of new data are applied to a pre-trained CNN, the results may become less accurate and less reliable. Therefore, continuing to train the pre-trained CNN on a specific dataset, regardless of the data size, allows the dataset- and domain-specific features to provide more reliable results.
Therefore, model training is divided into two stages: a semi-trained model is trained first and then further trained, so the final model can be obtained quickly, with little training data and high efficiency. A plurality of feature vectors are output by the fully-connected layer of the semi-trained model, the feature vectors are classified, and the resulting predicted feature categories are matched against the plurality of labeled feature categories to further train the semi-trained model. This improves the accuracy of model training and yields a cattle classification model that can accurately identify finely subdivided cattle breeds.
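The two-stage idea of steps S1001 to S1004 can be illustrated with a toy model; here a bilinear least-squares problem stands in for the CNN, and every name and number is illustrative, not the patent's actual training procedure:

```python
import numpy as np

def two_stage_train(X, y, lr=0.1, steps=200):
    """Toy two-stage training mirroring S1001-S1004: stage 1 ('semi-training')
    updates only the feature-extraction weights W with the classifier head v
    frozen; stage 2 then updates all parameters together."""
    n = len(y)
    W = np.full((X.shape[1], 4), 0.1)   # stand-in feature extractor
    v = np.full(4, 0.5)                 # stand-in classifier head

    def grads(W, v):
        err = X @ W @ v - y
        gW = X.T @ np.outer(err, v) / n
        gv = (X @ W).T @ err / n
        return (err ** 2).mean(), gW, gv

    for _ in range(steps):              # stage 1: feature extractor only
        _, gW, _ = grads(W, v)
        W -= lr * gW
    for _ in range(steps):              # stage 2: all parameters
        _, gW, gv = grads(W, v)
        W -= lr * gW
        v -= lr * gv
    return grads(W, v)[0]               # final training loss
```

The same freeze-then-finetune pattern is what deep learning frameworks express by marking parameter groups as trainable or frozen between the two stages.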
In some embodiments, the fine classification of cattle may include Khundi, Mix, and Neli Ravi; the plurality of predicted feature classes and the plurality of annotated feature classes each include one or more of a speckle class, an eye class, a body color class, a horn class, a tail class, a body class, a neck class, and a breast class. In other embodiments, the fine classification of cattle may include buffalo, cattle, and yak.
Therefore, the subdivided categories of cattle are defined in a targeted manner, which reduces the amount of training data while still accurately identifying the subdivided breeds of a specific region, improving training efficiency. Providing multiple feature classes of the cattle increases the accuracy of cattle image classification and improves the ability to discriminate fine details, in particular between two or more subdivided cattle breeds with extremely high overall similarity.
Referring to fig. 12 and 13, in some application scenarios, intelligent identification of image features may be achieved by employing a self-activating convolutional neural network. The identification method is as follows:
The overall architecture is divided into 23 layers, containing 5 convolution blocks with different numbers and sizes of filters. The first layer is the image input layer (img = [r, c]), with an input size of 256x256x3 for each given image instance. The second layer is a convolutional layer whose number of filters is half the image size, each filter with a kernel size of 3x3. Padding and stride of 1 mean that the kernel slides over the given image pixel by pixel, so that each 3x3 kernel convolves over the entire image. The 2D convolution takes formula (1) as input and is calculated as shown in formulas (2) and (3).
input image = I = img[r, c]  (1)
y(i, j) = Σ_m Σ_n K(m, n) · I(i + m, j + n), m, n = 0, 1, 2  (2)
Y = [y(i, j)]  (3)
In formula (2), the 3x3 kernel mask K is multiplied element-wise with the pixels of the input image, with i and j representing the rows and columns, respectively, of the input matrix; at the end of the convolution operation the output matrix is calculated from the y(i, j) values.
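Formula (2) can be checked directly. The sketch below assumes zero padding of 1 and stride 1 as stated above, so the output keeps the input's rows and columns; it is an illustrative NumPy rendering, not the patent's implementation:

```python
import numpy as np

def conv2d_same(I, K):
    """Direct evaluation of formula (2): each 3x3 kernel mask K is multiplied
    element-wise with the pixels of I and summed, with padding and stride 1,
    so the output y has the same rows and columns as the input."""
    P = np.pad(I, 1)                       # zero padding of 1 pixel
    y = np.zeros(I.shape, dtype=float)
    for i in range(I.shape[0]):
        for j in range(I.shape[1]):
            y[i, j] = np.sum(K * P[i:i + 3, j:j + 3])
    return y
```

At the image border the padded zeros contribute nothing, which is why corner outputs are smaller than interior ones for a non-negative kernel.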
A batch normalization layer is used in every convolution block; batch normalization normalizes the input by calculating the variance σ_B² and the mean μ_B of each color channel.
gx_i = (χ_i − μ_B) / √(σ_B² + ε)  (4)
In formula (4), gx_i is the normalized activation, obtained by dividing the difference between the input variable χ_i and the mean μ_B by the square root of the variance σ_B² plus a constant value ε; the constant is added to stabilize the minimum variance. These activations, now with zero mean and unit variance, are then reconsidered as gy_i through scaling, which is shown in formula (5).
gy_i = γ · gx_i + β  (5)
In formula (5), the learnable factors γ and β are added to adjust the activation output: the multiplication and shift convert the previously normalized activations into new values governed by learnable parameters.
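Formulas (4) and (5) amount to the following sketch, with μ_B and σ_B² computed over the batch and γ, β as the learnable factors; ε is the stabilizing constant mentioned above:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization per formulas (4) and (5): normalize by the batch
    mean and variance, then scale and shift with learnable gamma and beta."""
    mu_B = x.mean()
    var_B = x.var()
    gx = (x - mu_B) / np.sqrt(var_B + eps)   # formula (4)
    return gamma * gx + beta                 # formula (5)
```

In a CNN the statistics are taken per color channel across the batch; here a single 1-D array stands in for one channel.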
The max-pooling layer downsamples the input from the layer above, keeping the higher intensity values. The max-pooling calculation for each convolution block is shown in fig. 14.
Iterating over the 2D image matrix, a window w is selected from the input image as shown in formula (6).
w = I_selected(2x2)  (6)
The maximum value of each window w is taken while sliding from left to right across the image; then the next row is started, incrementing in the y direction, so that one max-pooled value is collected for each window of the upper layer's activated input. The max-pooling operation is shown in formula (7).
Max-Pooling = max(w(x, y))  (7)
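Formulas (6) and (7) correspond to the following sketch (2x2 window, stride 2, NumPy; illustrative only):

```python
import numpy as np

def max_pool_2x2(I):
    """Formulas (6) and (7): slide a 2x2 window w over the image with
    stride 2, left to right then down, keeping the maximum of each window."""
    H, W = I.shape
    out = np.zeros((H // 2, W // 2))
    for i in range(0, H - 1, 2):
        for j in range(0, W - 1, 2):
            w = I[i:i + 2, j:j + 2]        # formula (6): selected window
            out[i // 2, j // 2] = w.max()  # formula (7)
    return out
```

Each pooling pass halves both spatial dimensions, which is how the five blocks progressively shrink the 256x256 input.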
After the convolution output is obtained, a batch normalization layer normalizes it, so the variation of the input batch data is reduced. After normalization, a ReLU (linear rectification function) activation is performed. The ReLU function works like a threshold: it assigns zero to negative values and leaves positive intensity values unchanged, as calculated in formula (8).
f(χ) = max(0, χ)  (8)
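The thresholding behavior described above, per formula (8), is one line of code (NumPy, for illustration):

```python
import numpy as np

def relu(x):
    """Formula (8): the linear rectification function assigns zero to
    negative values and passes positive intensity values through unchanged."""
    return np.maximum(x, 0.0)
```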
The normalized feature vector is then passed to the max-pooling layer with kernel size 2; the stride is also kept at 2, so the window slides 2 pixels to form the next input pool. At this point all layer parameters are summarized into a data table. Since only the convolutional layers change the number of filters across the network, batch normalization and ReLU (linear rectification function) activation are performed in each convolution block. The weights of each layer are shown in the CNN architecture, as shown in fig. 2. The pooling effect in each architecture has been examined by using the trained network to activate the weights of each layer. Accordingly, in the proposed architecture diagram, each convolution together with its batch normalization and ReLU activation is shown as one stack. In the convolution showing the color input, the following batch normalization shows how the color is activated and how the input of the next layer is updated. Finally, in the fifth convolution block, the convolution weights and batch normalization highlight more decision strength. The final activation weights are passed to the fully-connected layer as three classes, and a softmax activation is applied.
The proposed deep CNN architecture derives spatial information from the cattle images and combines it into rich feature vectors. In addition, with a transfer-learning-based approach, the rich features are transferred to the activation layer for better classification. Several ML-based classifiers are employed to classify instances into the relevant target classes. Different data splits are used to verify the trustworthiness of the datasets and features, which do not change abruptly. All results are standardized and reliable, making the proposed method an accurate way to identify different cattle breeds.
Therefore, based on the above principle, the proposed cattle image classification method reaches a maximum accuracy of 93% and can guarantee an accuracy of more than 85%.
Referring to fig. 15, an embodiment of the present application further provides a cattle image classification device, where the device includes: a feature extraction module 101, configured to perform feature extraction on an image to be classified by using the feature extraction module of a cattle classification model to obtain a feature extraction result of the image to be classified, where the image to be classified is a cattle image to be classified; and an image classification module 102, configured to classify the feature extraction result of the image to be classified by using the classification module of the cattle classification model to obtain prediction classification information of the image to be classified, where the prediction classification information is used to indicate the predicted fine classification of the cattle. The specific implementation manner is consistent with the implementation manner and technical effects described in the embodiments of the cattle image classification method above, and repeated contents are omitted.
In some embodiments, the training process of the cattle classification model may be as follows: performing feature extraction on a training image by using a feature extraction module of a model to be trained to obtain a feature extraction result of the training image, wherein the training image is a cattle image used for model training; classifying the feature extraction result of the training image by utilizing a classification module of the model to be trained to obtain prediction classification information of the training image, wherein the prediction classification information is used for indicating the prediction detailed classification of the cattle; and training the model to be trained by utilizing the prediction classification information and the labeling classification information of the training image to obtain the cattle classification model, wherein the labeling classification information is used for indicating the labeling detail classification of the cattle.
In some embodiments, the performing, by using a feature extraction module of the model to be trained, feature extraction on the training image to obtain a feature extraction result of the training image may include: processing the training image by using a first convolution block of the feature extraction module to obtain a first processing result of the training image; and acquiring a feature extraction result of the training image based on the first processing result of the training image.
In some embodiments, the obtaining a feature extraction result of the training image based on the first processing result of the training image may include: pooling a first processing result of the training image by using a first pooling layer of the feature extraction module to obtain a first pooling result of the training image; processing the first pooling result of the training image by using a second convolution block of the feature extraction module to obtain a second processing result of the training image; and acquiring a feature extraction result of the training image based on a second processing result of the training image.
In some embodiments, the obtaining the feature extraction result of the training image based on the second processing result of the training image may include: pooling a second processing result of the training image by using a second pooling layer of the feature extraction module to obtain a second pooling result of the training image; processing the second pooling result of the training image by using a third convolution block of the feature extraction module to obtain a third processing result of the training image; and acquiring a feature extraction result of the training image based on a third processing result of the training image.
In some embodiments, the obtaining a feature extraction result of the training image based on the third processing result of the training image may include: pooling a third processing result of the training image by using a third pooling layer of the feature extraction module to obtain a third pooling result of the training image; processing the third pooling result of the training image by using a fourth convolution block of the feature extraction module to obtain a fourth processing result of the training image; and acquiring a feature extraction result of the training image based on a fourth processing result of the training image.
In some embodiments, the obtaining a feature extraction result of the training image based on the fourth processing result of the training image may include: pooling a fourth processing result of the training image by using a fourth pooling layer of the feature extraction module to obtain a fourth pooling result of the training image; processing a fourth pooling result of the training image by using a fifth convolution block of the feature extraction module to obtain a fifth processing result of the training image; and taking the fifth processing result of the training image as the feature extraction result of the training image.
In some embodiments, the processing procedure of each of the first to fifth convolution blocks may be as follows: convolving an input image by utilizing the convolution layer of the convolution block to obtain a convolution result of the input image; carrying out batch normalization on the convolution result of the input image by using the batch normalization layer of the convolution block to obtain a batch normalization result of the input image; and activating the batch normalization result of the input image by using the activation layer of the convolution block to obtain the activation result of the input image as the processing result of the input image.
In some embodiments, the classifying, by the classification module of the model to be trained, the feature extraction result of the training image to obtain the predicted classification information of the training image may include: fully connecting the feature extraction results of the training images by using a full connection layer of the classification module to obtain a full connection result of the training images; and classifying the full-connection result of the training image by using a classifier of the classification module to obtain the prediction classification information of the training image.
In some embodiments, the training the model to be trained by using the prediction classification information and the labeled classification information of the training image to obtain the cattle classification model may include: updating parameters of a feature extraction module of the model to be trained by using the prediction classification information and the labeling classification information of the training image to obtain a semi-training model; inputting the training image into the semi-training model, and outputting a plurality of feature vectors of the training image by using a full connection layer of a classification module of the semi-training model; performing feature classification on the plurality of feature vectors of the training image to obtain a plurality of prediction feature categories of the training image; and updating all parameters of the semi-training model by using the plurality of predicted characteristic categories and the plurality of labeled characteristic categories of the training images to obtain the cattle classification model.
In some embodiments, the fine classification of cattle may include Khundi, Mix, and Neli Ravi; the plurality of predicted feature classes and the plurality of annotated feature classes may each include one or more of a speckle class, an eye class, a body color class, a horn class, a tail class, a body class, a neck class, and a breast class. In other embodiments, the fine classification of cattle may include buffalo, cattle and yak.
Referring to fig. 16, an embodiment of the present application further provides an electronic device 200, where the electronic device 200 includes at least one memory 210, at least one processor 220, and a bus 230 connecting different platform systems.
The memory 210 may include readable media in the form of volatile memory, such as a random access memory (RAM) 211 and/or a cache memory 212, and may further include a read-only memory (ROM) 213.
The memory 210 further stores a computer program executable by the processor 220, causing the processor 220 to perform the steps of the cattle image classification method in this embodiment. The specific implementation is consistent with the implementations and technical effects described in the cattle image classification method embodiments, and the repeated content is not described again here.
Memory 210 may also include a utility 214 having at least one program module 215, such program modules 215 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Accordingly, the processor 220 may execute the computer program described above and may execute the utility 214.
Bus 230 may represent one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor bus, or a local bus using any of a variety of bus architectures.
The electronic device 200 may also communicate with one or more external devices 240, such as a keyboard, pointing device, bluetooth device, etc., and may also communicate with one or more devices capable of interacting with the electronic device 200, and/or with any devices (e.g., routers, modems, etc.) that enable the electronic device 200 to communicate with one or more other computing devices. Such communication may be through input-output interface 250. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 260. The network adapter 260 may communicate with other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
The embodiments of the present application further provide a computer-readable storage medium for storing a computer program which, when executed, implements the steps of the cattle image classification method in the embodiments of the present application. The specific implementation is consistent with the implementations and technical effects described in the cattle image classification method embodiments, and the repeated content is not described again here.
Fig. 17 shows a program product 300 provided by this embodiment for implementing the above cattle image classification method. The program product 300 may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product 300 of the present invention is not so limited; in this application, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Program product 300 may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages, such as the C language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
While the present application is described in terms of various aspects, including exemplary embodiments, the principles of the invention should not be limited to the disclosed embodiments, but are also intended to cover various modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

Claims (14)

1. A method for classifying cattle images, the method comprising:
performing feature extraction on an image to be classified by using a feature extraction module of a cattle classification model to obtain a feature extraction result of the image to be classified, wherein the image to be classified is a cattle image to be classified;
and classifying the feature extraction result of the image to be classified by utilizing a classification module of the cattle classification model to obtain the prediction classification information of the image to be classified, wherein the prediction classification information is used for indicating the prediction detail classification of cattle.
2. The method for classifying cattle images according to claim 1, wherein the cattle classification model is trained as follows:
performing feature extraction on a training image by using a feature extraction module of a model to be trained to obtain a feature extraction result of the training image, wherein the training image is a cattle image used for model training;
classifying the feature extraction result of the training image by utilizing a classification module of the model to be trained to obtain prediction classification information of the training image, wherein the prediction classification information is used for indicating the prediction detailed classification of the cattle;
and training the model to be trained by utilizing the prediction classification information and the labeling classification information of the training image to obtain the cattle classification model, wherein the labeling classification information is used for indicating the labeling detail classification of the cattle.
3. The method for classifying cattle images according to claim 2, wherein the extracting features of the training images by using the feature extraction module of the model to be trained to obtain the feature extraction result of the training images comprises:
processing the training image by using a first convolution block of the feature extraction module to obtain a first processing result of the training image;
and acquiring a feature extraction result of the training image based on the first processing result of the training image.
4. The method according to claim 3, wherein the obtaining a feature extraction result of the training image based on the first processing result of the training image comprises:
pooling a first processing result of the training image by using a first pooling layer of the feature extraction module to obtain a first pooling result of the training image;
processing the first pooling result of the training image by using a second convolution block of the feature extraction module to obtain a second processing result of the training image;
and acquiring a feature extraction result of the training image based on a second processing result of the training image.
5. The method according to claim 4, wherein the obtaining of the feature extraction result of the training image based on the second processing result of the training image comprises:
pooling a second processing result of the training image by using a second pooling layer of the feature extraction module to obtain a second pooling result of the training image;
processing the second pooling result of the training image by using a third convolution block of the feature extraction module to obtain a third processing result of the training image;
and acquiring a feature extraction result of the training image based on a third processing result of the training image.
6. The method according to claim 5, wherein the obtaining of the feature extraction result of the training image based on the third processing result of the training image comprises:
pooling a third processing result of the training image by using a third pooling layer of the feature extraction module to obtain a third pooling result of the training image;
processing the third pooling result of the training image by using a fourth convolution block of the feature extraction module to obtain a fourth processing result of the training image;
and acquiring a feature extraction result of the training image based on a fourth processing result of the training image.
7. The method according to claim 6, wherein the obtaining a feature extraction result of the training image based on a fourth processing result of the training image includes:
pooling a fourth processing result of the training image by using a fourth pooling layer of the feature extraction module to obtain a fourth pooling result of the training image;
processing a fourth pooling result of the training image by using a fifth convolution block of the feature extraction module to obtain a fifth processing result of the training image;
and taking the fifth processing result of the training image as the feature extraction result of the training image.
8. The method according to any one of claims 3 to 7, wherein each of the first to fifth convolution blocks processes an input image as follows:
convolving the input image by utilizing the convolution layer of the convolution block to obtain a convolution result of the input image;
carrying out batch normalization on the convolution result of the input image by using the batch normalization layer of the convolution block to obtain a batch normalization result of the input image;
and activating the batch normalization result of the input image by using the activation layer of the convolution block to obtain the activation result of the input image as the processing result of the input image.
9. The method for classifying cattle images according to claim 2, wherein the classifying module of the model to be trained is used for classifying the feature extraction results of the training images to obtain the predicted classification information of the training images, and the method comprises the following steps:
fully connecting the feature extraction results of the training images by using a full connection layer of the classification module to obtain a full connection result of the training images;
and classifying the full-connection result of the training image by using a classifier of the classification module to obtain the prediction classification information of the training image.
10. The method for classifying cattle images according to claim 9, wherein the training the model to be trained by using the prediction classification information and the labeling classification information of the training images to obtain the cattle classification model comprises:
updating parameters of a feature extraction module of the model to be trained by using the prediction classification information and the labeling classification information of the training image to obtain a semi-training model;
inputting the training image into the semi-training model, and outputting a plurality of feature vectors of the training image by using a full connection layer of a classification module of the semi-training model;
performing feature classification on the plurality of feature vectors of the training image to obtain a plurality of prediction feature categories of the training image;
and updating all parameters of the semi-training model by using the plurality of predicted characteristic categories and the plurality of labeled characteristic categories of the training images to obtain the cattle classification model.
11. The method of classifying bovine images according to claim 10, wherein the fine classification of bovine comprises Khundi, Mix and Neli Ravi;
the plurality of predicted feature classes and the plurality of annotated feature classes each include one or more of a speckle class, an eye class, a body color class, a horn class, a tail class, a body class, a neck class, and a breast class.
12. An ox image classification apparatus, characterized in that the apparatus comprises:
the feature extraction module is used for extracting features of an image to be classified by using the feature extraction module of the cattle classification model to obtain a feature extraction result of the image to be classified, wherein the image to be classified is a cattle image to be classified;
and the image classification module is used for classifying the feature extraction result of the image to be classified by utilizing the classification module of the cattle classification model to obtain the prediction classification information of the image to be classified, and the prediction classification information is used for indicating the prediction detail classification of cattle.
13. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory stores a computer program, and the processor realizes the steps of the cattle image classification method according to any one of claims 1-11 when executing the computer program.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when being executed by a processor, carries out the steps of the method for classifying bovine images according to any one of claims 1 to 11.
CN202111234047.0A 2021-10-22 2021-10-22 Cattle image classification method and device, electronic equipment and storage medium Pending CN113887505A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111234047.0A CN113887505A (en) 2021-10-22 2021-10-22 Cattle image classification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111234047.0A CN113887505A (en) 2021-10-22 2021-10-22 Cattle image classification method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113887505A true CN113887505A (en) 2022-01-04

Family

ID=79004385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111234047.0A Pending CN113887505A (en) 2021-10-22 2021-10-22 Cattle image classification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113887505A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114973332A (en) * 2022-06-21 2022-08-30 河北农业大学 Weight measuring method and device, electronic equipment and living livestock measuring system

Similar Documents

Publication Publication Date Title
WO2022042002A1 (en) Training method for semi-supervised learning model, image processing method, and device
US10635979B2 (en) Category learning neural networks
WO2020238293A1 (en) Image classification method, and neural network training method and apparatus
US11482022B2 (en) Systems and methods for image classification
US10410353B2 (en) Multi-label semantic boundary detection system
CN110135231B (en) Animal face recognition method and device, computer equipment and storage medium
US10650286B2 (en) Classifying medical images using deep convolution neural network (CNN) architecture
CN111767954A (en) Vehicle fine-grained identification model generation method, system, equipment and storage medium
CN110349147B (en) Model training method, fundus macular region lesion recognition method, device and equipment
US20220148291A1 (en) Image classification method and apparatus, and image classification model training method and apparatus
CN115601602A (en) Cancer tissue pathology image classification method, system, medium, equipment and terminal
WO2024060684A1 (en) Model training method, image processing method, device, and storage medium
CN112287957A (en) Target matching method and device
CN111667474A (en) Fracture identification method, apparatus, device and computer readable storage medium
US20230401838A1 (en) Image processing method and related apparatus
CN109982088B (en) Image processing method and device
Tanwar et al. Deep learning-based hybrid model for severity prediction of leaf smut rice infection
CN113887505A (en) Cattle image classification method and device, electronic equipment and storage medium
Han Multimodal brain image analysis and survival prediction using neuromorphic attention-based neural networks
CN115641317B (en) Pathological image-oriented dynamic knowledge backtracking multi-example learning and image classification method
CN113408546B (en) Single-sample target detection method based on mutual global context attention mechanism
CN115984179A (en) Nasal bone fracture identification method and device, terminal and storage medium
CN115359511A (en) Pig abnormal behavior detection method
CN109308936B (en) Grain crop production area identification method, grain crop production area identification device and terminal identification equipment
CN113688264B (en) Method and device for identifying organism weight, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination