CN115311689A - Cattle face identification feature extraction model construction method and cattle face identification method - Google Patents

Cattle face identification feature extraction model construction method and cattle face identification method

Info

Publication number
CN115311689A
CN115311689A (application CN202211064047.5A)
Authority
CN
China
Prior art keywords: feature, local, global, features, cattle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211064047.5A
Other languages
Chinese (zh)
Inventor
赵建敏 (Zhao Jianmin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inner Mongolia University of Science and Technology
Original Assignee
Inner Mongolia University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inner Mongolia University of Science and Technology filed Critical Inner Mongolia University of Science and Technology
Priority to CN202211064047.5A priority Critical patent/CN115311689A/en
Publication of CN115311689A publication Critical patent/CN115311689A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cattle face identification feature extraction model construction method and a cattle face identification method, relating to the technical field of image processing and comprising the following steps: preprocessing a cattle face detection data set, inputting it into a neural network model, extracting global features and local features, and normalizing them; computing the same-class global feature center and the same-class local feature centers from the normalized global and local features, constructing global feature triples and local feature triples, computing the global and local triplet loss functions, and computing the total loss with a fused metric loss function; and iteratively updating the model parameters by backpropagating the total loss until the model converges, yielding the feature extraction model used for cattle face identity recognition. On one hand, the method constructs effective local hard examples from multi-part images and combines them with global hard examples, improving learning efficiency; on the other hand, it provides a feature fusion learning mechanism that extracts image features and identifies the individual cattle.

Description

Cattle face identification feature extraction model construction method and cattle face identification method
Technical Field
The invention relates to the technical field of image processing, in particular to a method for constructing a cattle face identity recognition feature extraction model and a cattle face identity recognition method.
Background
The identification of individual cattle is the first link of cattle archiving and tracing in precision animal husbandry production, livestock finance, and livestock insurance claim settlement. When the breed, number, and breeding environment of the cattle on a ranch change dynamically, updating the identity information of the herd with traditional identification methods consumes a large amount of manpower.
In recent years, non-contact recognition methods based on computer vision have attracted wide attention from scholars at home and abroad; taking animal biological characteristics as identity marks, they recognize an individual through the cattle lip print, eye iris, eye retina, and the like. These methods have moved individual cattle identification from semi-automatic to automatic recognition, and recognition accuracy based on the eye retina, iris, and the like is high; however, problems such as high sampling difficulty, demanding sampling conditions, and low image quality greatly affect the recognition effect, and poor convenience limits their use in the daily archiving of precision feeding.
Selecting the cattle face image as the biometric mark offers low image sampling requirements, convenient acquisition, little constraint on the cattle, low labor risk, and other advantages. In the prior art, a triplet loss function is used to minimize the distance between an anchor and a positive sample of the same identity and maximize the distance to a negative sample by directly enlarging the inter-class margin of the feature space, achieving a very notable effect. However, the number of triples grows as O(n^2) to O(n^3) in the number of samples (where n is the number of training samples), and a large share of them are invalid triples, which hurts learning efficiency.
Therefore, how to improve the learning capability and learning efficiency of the neural network, improve the feature extraction capability, and realize cattle face identification in a real breeding environment is a technical problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a cattle face identification feature extraction model construction method and a cattle face identification method: on one hand, effective local hard examples are constructed from multi-part images and combined with global hard examples to improve learning efficiency; on the other hand, a feature fusion learning mechanism is provided to extract image features and identify the individual cattle.
In order to achieve the above purpose, the invention provides the following technical scheme:
a method for constructing a cow face identity recognition feature extraction model comprises the following steps:
acquiring a cattle face detection data set, and preprocessing the cattle face detection data set to obtain a training data set;
inputting the training data set into a neural network model to extract global features and local features, and performing normalization processing on the global features and the local features;
respectively calculating a similar global feature center and a similar local feature center based on the global features and the local features after normalization processing, and constructing global feature triples and local feature triples;
calculating a global feature triplet loss function and a local feature triplet loss function with the global feature triples and the local feature triples, and calculating the total loss with a fused metric loss function;
and continuously iterating and updating the model parameters in the reverse direction according to the total loss until the model converges to obtain a feature extraction model.
The technical effects achieved by the above technical scheme: the construction method of the cattle face identification feature extraction model is disclosed, and cattle face features can be extracted from the cattle face image, so that the identity of the cattle can be confirmed and the learning capability and feature extraction capability of the model improved.
Optionally, the obtaining of the training data set specifically includes the following steps:
randomly sampling m classes from the cattle face detection data set each time with a class-balanced sampling method, and randomly sampling n samples within each class, so that the batch size is m × n images;
performing image scaling and standardization processing on each sample to obtain the batch of samples as the training data set.
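The class-balanced sampling step above can be sketched as follows; the dataset layout and helper name are illustrative, not from the patent.

```python
import random

def sample_balanced_batch(dataset_by_class, m, n, rng=random):
    """Draw m random classes, then n random samples per class (batch = m * n)."""
    classes = rng.sample(sorted(dataset_by_class), m)
    batch = []
    for c in classes:
        batch.extend((c, s) for s in rng.sample(dataset_by_class[c], n))
    return batch

# toy dataset: class id -> list of image identifiers
data = {c: [f"cow{c}_img{i}" for i in range(10)] for c in range(8)}
batch = sample_balanced_batch(data, m=4, n=3)
```

Each batch then contains exactly n samples of each of the m drawn classes, which is what makes the per-class centers in the later formulas well defined.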
Optionally, extracting the global features and local features specifically includes the following steps:
selecting a classification model, removing its fully connected layers, and keeping the final convolutional output layer as the initial feature extraction model;
inputting the training data set into the initial feature extraction model, and globally pooling the final convolutional output features with average pooling to obtain the global feature g_il;
dividing the feature map into several regions, and pooling each region of the convolutional output to obtain the local feature p_il^k of region k (the local feature of part k of the l-th sample of class i).
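As a rough sketch of the pooling above, in pure Python and with a 4 × 4 × 8 feature map so the 2 × 2 region split is even (the patent's VGG16 example uses a 7 × 7 × 512 map; the even split here is an assumption for illustration):

```python
def avg_pool(cells):
    """Channel-wise mean over a list of C-dimensional vectors."""
    dim = len(cells[0])
    return [sum(v[ch] for v in cells) / len(cells) for ch in range(dim)]

def global_and_local_features(fmap):
    """fmap: H x W x C nested lists. Returns the global feature g_il and the
    four region features p_il^k from an even 2 x 2 split of the map."""
    h, w = len(fmap), len(fmap[0])
    g = avg_pool([fmap[i][j] for i in range(h) for j in range(w)])
    parts = []
    for bi in range(2):
        for bj in range(2):
            cells = [fmap[i][j]
                     for i in range(bi * h // 2, (bi + 1) * h // 2)
                     for j in range(bj * w // 2, (bj + 1) * w // 2)]
            parts.append(avg_pool(cells))
    return g, parts

# toy 4 x 4 x 8 map; cell (i, j) holds the constant vector [i + j] * 8
fmap = [[[float(i + j)] * 8 for j in range(4)] for i in range(4)]
g, parts = global_and_local_features(fmap)
```

The global feature averages the whole map, while each p_il^k averages only its quadrant, so the four region features retain coarse spatial information that the global feature discards.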
Optionally, the classification model includes VGG16, VGG19, Inception V1, Inception V2, Inception V3, ResNet50, and ResNet101.
Optionally, the constructing a global feature triple and a local feature triple specifically includes the following steps:
carrying out normalization processing on the global features and the local features with the normalization method of formula (1), the normalized global features forming the global feature set G, g_il ∈ G with G = (g_11, g_12, ..., g_1n, ..., g_il, ..., g_m1, g_m2, ..., g_mn), and the normalized local features forming the local feature set P, p_il^k ∈ P; formula (1) normalizes a vector (x_1, x_2, ..., x_i, ..., x_n) to unit length:
x_i' = x_i / sqrt(x_1^2 + x_2^2 + ... + x_n^2)  (1)
calculating the class-i global feature center g_i with the formula:
g_i = (1/n) * Σ_{l=1..n} g_il  (2)
in the formula: g_il represents the feature of the l-th sample of class i, and n is the number of samples drawn per class;
calculating the k-th local feature center p_i^k within class i with the formula:
p_i^k = (1/n) * Σ_{l=1..n} p_il^k  (3)
in the formula: p_il^k represents the local feature of the k-th part of the l-th sample of class i; averaging yields the class local feature center p_i^k;
constructing the local feature triples from the class local feature center p_i^k, the local positive sample p_il^k, and the local negative sample p_jl^k; and constructing the global feature triples from the class center g_i, the positive sample g_il, and the negative sample g_jl.
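A minimal sketch of the center computation and triple assembly above, under the assumption that formula (1) is an L2 normalization (the formula is rendered as an image in the original; L2 is the choice consistent with the Euclidean metric used later):

```python
import math

def l2_normalize(v, eps=1e-12):
    """Assumed form of formula (1): scale the vector to unit Euclidean length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / (norm + eps) for x in v]

def class_center(feats):
    """Mean of the n same-class feature vectors (formulas (2) and (3))."""
    dim = len(feats[0])
    return [sum(f[ch] for f in feats) / len(feats) for ch in range(dim)]

# toy features: two classes (i and j), three samples each (n = 3)
feats = {
    "i": [l2_normalize([1.0, 0.1]), l2_normalize([0.9, 0.2]), l2_normalize([1.1, 0.0])],
    "j": [l2_normalize([0.1, 1.0]), l2_normalize([0.0, 0.9]), l2_normalize([0.2, 1.1])],
}
centers = {c: class_center(f) for c, f in feats.items()}
# global triple for class i: (anchor g_i, positive g_il, negative g_jl)
triple = (centers["i"], feats["i"][0], feats["j"][0])
```

The same assembly applies per region for the local triples, with p_i^k as the anchor in place of g_i.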
Optionally, calculating the total loss specifically includes the following steps:
in the global feature set G, calculating the global feature loss l_g with the global feature triplet loss function:
l_g = (1/N) * Σ_i Σ_l max(0, d(g_i, g_il) - d(g_i, g_jl) + Δg)  (4)
in the formula: N is the total number of samples; g_jl represents the feature of the l-th sample of class j (j ≠ i); Δg is the global inter-class distance margin, a hyperparameter tuned during training; d(g_i, g_j) represents the metric distance between feature vectors g_i and g_j, expressed as the Euclidean distance:
d(g_i, g_j) = sqrt(Σ_c (g_ic - g_jc)^2), where c runs over the feature dimensions  (5)
in the local feature set P, calculating the local metric loss l_p with the local triplet loss function based on local feature metrics:
l_p = (1/N) * Σ_i Σ_l Σ_k max(0, d(p_i^k, p_il^k) - d(p_i^k, p_jl^k) + Δp)  (6)
in the formula: Δp denotes the distance margin between region features of different classes; p_jl^k denotes the local feature of the k-th part of the l-th sample of class j; l_p computes the class triplet loss of each region of each class in turn, continuously optimizing the distances between features;
smoothly weighting the global feature loss l_g and the local metric loss l_p, and calculating the total loss l with the fused metric loss function:
l = (1 - λ) * l_g + λ * l_p  (7)
in the formula: λ is a smoothing coefficient, adjusted during training.
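The fused loss can be sketched as below. The hinge form of the triplet terms is the standard class-center triplet loss; treating formulas (4) and (6) as exactly this form is an assumption, since they are rendered as images in the original.

```python
import math

def euclid(a, b):
    """Formula (5): Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_hinge(anchor, pos, neg, margin):
    """Assumed triplet term: max(0, d(anchor, pos) - d(anchor, neg) + margin)."""
    return max(0.0, euclid(anchor, pos) - euclid(anchor, neg) + margin)

def fused_loss(global_triples, local_triples, delta_g, delta_p, lam):
    """Formula (7): l = (1 - lambda) * l_g + lambda * l_p."""
    l_g = sum(triplet_hinge(a, p, n, delta_g)
              for a, p, n in global_triples) / len(global_triples)
    l_p = sum(triplet_hinge(a, p, n, delta_p)
              for a, p, n in local_triples) / len(local_triples)
    return (1 - lam) * l_g + lam * l_p

gt = [([0.0, 0.0], [0.1, 0.0], [1.0, 0.0])]   # (g_i, g_il, g_jl)
lt = [([0.0, 0.0], [0.2, 0.0], [0.9, 0.0])]   # (p_i^k, p_il^k, p_jl^k)
loss = fused_loss(gt, lt, delta_g=1.2, delta_p=1.2, lam=0.5)
```

With λ = 0 the training reduces to pure global triplet learning; increasing λ shifts weight onto the region-level terms.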
The invention also discloses a cattle face identity recognition method based on local feature metrics and feature fusion learning, which utilizes the feature extraction model and comprises the following steps:
acquiring a cattle face object detection data set, and processing it to obtain cattle face region images;
inputting the cattle face region image into the cattle face identification feature extraction model, and extracting global features;
and carrying out normalization processing on the global features, and classifying through a k-NN classifier to identify the identity of the cow.
Optionally, the cattle face object detection data set is processed as follows:
inputting the cattle face object detection data set into an object detection model for training, obtaining a cattle face detection model, recognizing the cattle face region, and obtaining the segmented cattle face region image.
Optionally, the object detection model is an object detection model such as Faster R-CNN, the YOLO series, or the SSD series.
Optionally, the classifying is performed by a k-NN classifier, which specifically includes:
preprocessing images in a cattle face image library, inputting the preprocessed cattle face images into the cattle face identification feature extraction model, extracting global features and performing normalization processing to form a cattle face feature library;
and performing k-NN classification by using the global features extracted from the image to be detected and the cattle face features in the cattle face feature library to determine the identity of the cattle.
According to the technical scheme, compared with the prior art, the invention discloses a method for constructing a cattle face identification feature extraction model and a cattle face identification method, and the method has the following beneficial effects:
(1) The invention uses the class local feature center p_i^k, the local positive sample p_il^k, and the local negative sample p_jl^k to construct local feature triples, designs a local triplet loss function based on local feature metrics, calculates the triplet loss with the local feature triples, and supervises in a targeted way the local-region feature learning capability of the deep convolutional neural network;
(2) The invention uses the class center g_i, the positive sample g_il, and the negative sample g_jl to construct global feature triples, designs a global triplet loss function, designs a fused loss function combining local and global feature losses, uses region-level fine-grained feature learning to improve the expressive power of the global features, improves the feature extraction capability, and enhances the feature representation performance of the model;
(3) The invention provides a complete cattle face identification solution integrating object detection, feature extraction, and feature classification; the model does not need retraining when the ranch scale or herd scale changes, making it suitable for real breeding environments; moreover, selecting the cattle face image as the biometric identifier offers low image sampling requirements, convenient acquisition, little constraint on the cattle, low labor risk, and other advantages.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a training schematic diagram of the cattle face identification feature extraction model;
FIG. 2 is a detailed training flowchart of the cattle face identification feature extraction model;
FIG. 3 is a schematic diagram of the cattle face identification steps.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Example 1
The embodiment of the invention discloses a method for constructing a cattle face identification feature extraction model, which comprises the following steps as shown in figure 1:
acquiring a cattle face detection data set, and preprocessing the cattle face detection data set to obtain a training data set;
inputting the training data set into a neural network model to extract global features and local features, and performing normalization processing on the global features and the local features;
respectively calculating a similar global feature center and a similar local feature center based on the global features and the local features after normalization processing, and constructing global feature triples and local feature triples;
calculating a global feature triplet loss function and a local feature triplet loss function with the global feature triples and the local feature triples, and calculating the total loss with a fused metric loss function;
and continuously iterating and updating the model parameters in the reverse direction according to the total loss until the model converges to obtain the feature extraction model.
The technology takes the cattle face image as a biological characteristic identifier, has the advantages of low image sampling requirement, convenience in acquisition, small constraint on cattle, low labor risk and the like, provides a complete solution for cattle face identification integrating target detection, characteristic extraction and characteristic classification, and is suitable for real breeding environments.
In order to achieve the above objective, in the method for identifying an identity of a cow face based on multi-part triple mining and feature fusion learning provided in this embodiment, a core model training step is shown in fig. 2.
(1) Sample sampling. Using the class-balanced sampling method, m classes are randomly sampled from the cattle face detection data set each time, and n samples are randomly sampled within each class, ensuring balanced samples in every draw; the image batch size is m × n.
(2) Image preprocessing. Image scaling and standardization are performed on each sample to obtain the batch of samples. Specifically, the cattle face image is scaled to 224 × 224 pixels, and standardization is the usual operation, i.e., subtracting the channel mean from each channel's pixel values and then dividing by the corresponding channel's standard deviation, where the R, G, B channel means are 0.485, 0.456, and 0.406 and the deviations are 0.229, 0.224, and 0.225, respectively.
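The per-pixel standardization described above can be sketched as follows (the resize step itself needs an image library such as PIL and is omitted; the channel statistics are the values given in the text):

```python
MEAN = (0.485, 0.456, 0.406)  # R, G, B channel means from the text
STD = (0.229, 0.224, 0.225)   # corresponding per-channel deviations

def standardize_pixel(rgb):
    """rgb: channel values already scaled to [0, 1]; subtract the channel
    mean and divide by the channel deviation."""
    return tuple((v - m) / s for v, m, s in zip(rgb, MEAN, STD))

zero = standardize_pixel((0.485, 0.456, 0.406))  # the mean pixel maps to (0, 0, 0)
```

In practice the same operation is applied to every pixel of the 224 × 224 image before it enters the network.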
(3) Input the images into the neural network to extract global and local features. The specific method is as follows:
The neural network model can adopt mainstream classification models such as VGG16, VGG19, Inception V1, Inception V2, Inception V3, ResNet50, or ResNet101, with the fully connected layers removed and the final convolutional output layer kept. After an image x_il is fed into the neural network model, the final convolutional output features are globally pooled with average pooling to obtain the global feature g_il; the feature map is divided into several regions, and each region of the convolutional output is pooled to obtain the local feature p_il^k of that region.
In this embodiment, taking the VGG16 model as an example (structure shown in Table 1), the input image x_il has dimensions 224 × 224 × 3; with the fully connected layers removed, the final convolution outputs a 7 × 7 × 512 feature map. Global pooling yields the 512-dimensional global feature g_il; the feature layer is divided 2 × 2, giving four regions, and within each region the 512-dimensional local feature p_il^k is obtained.
TABLE 1. VGG16 feature extraction model structure composition table (rendered as images in the original; the standard VGG16 convolutional stack is 2 × conv3-64, 2 × conv3-128, 3 × conv3-256, 3 × conv3-512, 3 × conv3-512, each stage followed by 2 × 2 max pooling, so a 224 × 224 × 3 input yields a 7 × 7 × 512 output)
(4) Feature normalization. A vector (x_1, x_2, ..., x_i, ..., x_n) is normalized as
x_i' = x_i / sqrt(x_1^2 + x_2^2 + ... + x_n^2)  (1)
The global and local features are normalized accordingly; the normalized global features form the global feature set G, g_il ∈ G with G = (g_11, g_12, ..., g_1n, ..., g_il, ..., g_m1, g_m2, ..., g_mn), and the normalized local features form the local feature set P, p_il^k ∈ P.
(5) Compute the class-i global feature center g_i with the formula:
g_i = (1/n) * Σ_{l=1..n} g_il  (2)
in the formula: g_il represents the feature of the l-th sample of class i, and n is the number of samples drawn per class.
Compute the k-th local feature center p_i^k within class i with the formula:
p_i^k = (1/n) * Σ_{l=1..n} p_il^k  (3)
in the formula: p_il^k represents the local feature of the k-th part of the l-th sample of class i; averaging yields the class local feature center p_i^k.
(6) In the global feature set G, calculate the global feature loss l_g with the global feature triplet loss function:
l_g = (1/N) * Σ_i Σ_l max(0, d(g_i, g_il) - d(g_i, g_jl) + Δg)  (4)
in the formula: N is the total number of samples; g_jl represents the feature of the l-th sample of class j (j ≠ i); Δg is the global inter-class distance margin, a hyperparameter tuned during training; d(g_i, g_j) represents the metric distance between feature vectors g_i and g_j, expressed as the Euclidean distance:
d(g_i, g_j) = sqrt(Σ_c (g_ic - g_jc)^2), where c runs over the feature dimensions  (5)
Constructing the triple from the class center g_i, the positive sample g_il, and the negative sample g_jl is an innovation of this technique's triple construction.
(7) In the local feature set P, calculate the local metric loss l_p with the local triplet loss function based on local feature metrics:
l_p = (1/N) * Σ_i Σ_l Σ_k max(0, d(p_i^k, p_il^k) - d(p_i^k, p_jl^k) + Δp)  (6)
in the formula: d(p_i, p_j) is the Euclidean metric distance between vectors p_i and p_j, as in formula (5); Δp denotes the distance margin between region features of different classes; p_jl^k denotes the local feature of the k-th part of the l-th sample of class j. l_p computes the class triplet loss of each region of each class in turn, continuously optimizing the distances between features.
Constructing the local triple from the class local feature center p_i^k, the local positive sample p_il^k, and the local negative sample p_jl^k is an innovation of this technique's local triple construction.
(8) Smoothly weight the global feature loss l_g and the local metric loss l_p, and calculate the total loss l with the fused metric loss function:
l = (1 - λ) * l_g + λ * l_p  (7)
in the formula: λ is a smoothing coefficient; Δg, Δp, and λ are hyperparameters of the algorithm, adjusted during training.
(9) After the total loss l is computed, the model parameters are updated by continuous backward iteration using a conventional deep convolutional neural network optimization method until the model converges, yielding the feature extraction model.
Example 2
The embodiment discloses a cattle face identity recognition method based on local feature metrics and feature fusion learning, which utilizes the feature extraction model of Embodiment 1 and comprises the following steps:
acquiring a cattle face object detection data set, and processing it to obtain cattle face region images;
inputting the cattle face region image into the cattle face identification feature extraction model, and extracting global features;
and carrying out normalization processing on the global features, and classifying through a k-NN classifier to identify the identity of the cow.
In this embodiment, an object detection model such as Faster R-CNN, the YOLO series, or the SSD series may be selected; the cattle face object detection data set is input into the object detection model for training to obtain a cattle face detection model, which recognizes the cattle face region and outputs the segmented cattle face region image. For example, a YOLOv5s object detection model is trained on the cattle face object detection data set; an image to be recognized is input, and the cattle face region image within it is output.
Further, the classifying by the k-NN classifier specifically includes:
in the cattle filing stage, cattle face images are collected, global features are extracted by using the feature extraction model in the embodiment and stored in a cattle face feature library for identity recognition, and the detailed steps are shown in the right side of fig. 3 and specifically include: firstly, preprocessing images in an ox face image library, scaling the ox face image into an image with the width of 224 pixels, normalizing the image into a conventional operation, namely subtracting a channel mean value from a pixel value of each channel, and dividing the normalized image by a variance of a corresponding channel, wherein the average values of R, G, B three channels are respectively 0.485, 0.456 and 0.406, and the variances are respectively 0.229, 0.224 and 0.225; secondly, the preprocessed cow face images are sent into a trained feature extraction model to extract global features, and then normalization processing is carried out on the features to form a cow face feature library.
And performing k-NN classification by using the global features extracted from the image to be detected and the cattle face features in the cattle face feature library to determine the identity of the cattle.
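The k-NN matching step can be sketched as below; the identifiers and the gallery layout are illustrative, not from the patent.

```python
import math
from collections import Counter

def knn_identify(query, gallery, k=3):
    """gallery: list of (cow_id, feature) pairs; returns the majority id among
    the k nearest neighbors (Euclidean distance on normalized features)."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(gallery, key=lambda item: dist(query, item[1]))[:k]
    return Counter(cid for cid, _ in nearest).most_common(1)[0][0]

# toy feature library with two registered cattle
gallery = [("cow_a", [1.0, 0.0]), ("cow_a", [0.9, 0.1]),
           ("cow_b", [0.0, 1.0]), ("cow_b", [0.1, 0.9])]
```

Because classification is a nearest-neighbor lookup against the feature library, registering a new animal only requires adding its features to the gallery, with no retraining, which is the property the patent highlights for changing herd scales.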
Fig. 3 shows the steps of this technique for identifying the cattle face, analyzed in detail as follows:
The first step: input the image to be recognized into the cattle face detection model;
The second step: the cattle face detection model detects the cattle face region and segments it from the image;
The third step: preprocess the segmented cattle face region image, including image scaling and standardization;
The fourth step: feed the preprocessed cattle face region image into the cattle face identity recognition feature extraction model and output the global features;
The fifth step: normalize the global features;
The sixth step: input the extracted global features and the cattle face feature library features into the k-NN classifier for classification, and output the cattle identity.
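The six steps can be chained as a single pipeline; every stage implementation below is a stand-in stub (the names and the returned identity are invented), only the data flow follows FIG. 3:

```python
def identify(image, detect, preprocess, extract, normalize, classify):
    """Chains the six steps of FIG. 3; each argument is a callable stage."""
    face = detect(image)            # steps 1-2: detect and segment the face region
    face = preprocess(face)         # step 3: scale and standardize
    feat = normalize(extract(face)) # steps 4-5: global features, then normalization
    return classify(feat)           # step 6: k-NN classification

# stub stages just to show the data flow end to end
result = identify(
    "raw_image",
    detect=lambda img: f"face({img})",
    preprocess=lambda f: f"pre({f})",
    extract=lambda f: [2.0, 0.0],
    normalize=lambda v: [x / 2.0 for x in v],
    classify=lambda v: "cow_007",
)
```

Keeping the stages as separate callables mirrors the patent's modular design: the detector, the feature extractor, and the classifier can each be swapped independently.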
The invention uses the class local feature center p_i^k, the local positive sample p_il^k, and the local negative sample p_jl^k to construct local feature triples, designs a local triplet loss function based on local feature metrics, calculates the triplet loss with the local feature triples, and supervises in a targeted way the local-region feature learning capability of the deep convolutional neural network. It uses the class center g_i, the positive sample g_il, and the negative sample g_jl to construct global feature triples, designs a global triplet loss function, designs a fused loss function combining local and global feature losses, uses region-level fine-grained feature learning to improve the expressive power of the global features, improves the feature extraction capability, and enhances the feature representation performance of the model. In conclusion, this technique provides a complete cattle face recognition solution integrating object detection, feature extraction, and feature classification; the model does not need retraining when the ranch scale or herd scale changes, making it suitable for real breeding environments.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A construction method of a cow face identity recognition feature extraction model is characterized by comprising the following steps:
acquiring a cattle face detection data set, and preprocessing the cattle face detection data set to obtain a training data set;
inputting the training data set into a neural network model to extract global features and local features, and performing normalization processing on the global features and the local features;
respectively calculating the class global feature centers and the class local feature centers based on the normalized global features and local features, and constructing global feature triples and local feature triples;
calculating a global characteristic ternary loss function and a local characteristic ternary loss function by using the global characteristic triple and the local characteristic triple, and calculating total loss by using a fusion measurement loss function;
and iteratively updating the model parameters through back-propagation according to the total loss until the model converges, so as to obtain the feature extraction model.
2. The method for constructing the bovine face identification feature extraction model according to claim 1, wherein the obtaining of the training data set specifically includes the following steps:
randomly sampling m classes from the cattle face detection data set with a class-balanced sampling method, and randomly sampling n samples from each class, giving a batch size of m×n images;
and carrying out image scaling and standardization on each sample to obtain the batch of samples as the training data set.
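The class-balanced sampling of claim 2 can be sketched as follows; `labels` (a list mapping sample index to class id) and the sampling-with-replacement choice are illustrative assumptions.

```python
import random
from collections import defaultdict

def sample_batch(labels, m, n, rng=random.Random(0)):
    """Draw m distinct classes, then n samples per class: batch size m*n."""
    by_class = defaultdict(list)
    for idx, c in enumerate(labels):
        by_class[c].append(idx)
    classes = rng.sample(sorted(by_class), m)        # m distinct classes
    batch = []
    for c in classes:
        # sampling with replacement keeps the sketch valid even when a
        # class has fewer than n images
        batch.extend(rng.choices(by_class[c], k=n))  # n samples per class
    return batch

labels = [i // 10 for i in range(100)]               # 10 classes x 10 samples
batch = sample_batch(labels, m=4, n=8)               # batch of 4*8 = 32
```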
3. The method for constructing the bovine face identification feature extraction model according to claim 1, wherein the extracting of the global features and the local features specifically comprises the following steps:
selecting a classification model, removing its fully connected layer, and keeping the final convolutional output layer as the initial feature extraction model;
inputting the training data set into the initial feature extraction model, and globally pooling the features of the final convolutional output layer with average pooling to obtain the global feature g_il;
dividing the image equally into several regions, and average-pooling the corresponding part of the convolutional layer output to obtain the local feature p_il^k of each region.
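The two pooling operations of claim 3 can be sketched as follows: global average pooling over the final conv feature map yields the global feature, and average pooling over K equal horizontal bands yields the per-region local features. The shapes (C=256, H=W=8, K=4) and the horizontal split direction are illustrative assumptions.

```python
import numpy as np

def pool_features(fmap, k=4):
    """fmap: (C, H, W) conv output; returns (global (C,), locals (k, C))."""
    g = fmap.mean(axis=(1, 2))                          # global average pool
    bands = np.array_split(fmap, k, axis=1)             # k equal regions (rows)
    p = np.stack([b.mean(axis=(1, 2)) for b in bands])  # per-region average pool
    return g, p

fmap = np.random.rand(256, 8, 8)
g, p = pool_features(fmap)
```

With equal-sized bands, the mean of the region features recovers the global feature, which is why the two views stay consistent.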
4. The method for constructing the cow face identification feature extraction model according to claim 3, wherein the classification model comprises VGG16, VGG19, Inception V1, Inception V2, Inception V3, ResNet50 and ResNet101.
5. The method for constructing the cow face identification feature extraction model according to claim 1, wherein the constructing of the global feature triplet and the local feature triplet specifically includes the following steps:
normalizing the global features and the local features with the normalization method of formula (1); the normalized global features form a global feature set G(g_11, g_12, ..., g_1n, ..., g_il, ..., g_m1, g_m2, ..., g_mn), g_il ∈ G, and the normalized local features form a local feature set P,

g_il = g_il / ||g_il||_2,  p_il^k = p_il^k / ||p_il^k||_2    (1)
computing the i-th class global feature center g_i, with the formula:

g_i = (1/n) * Σ_{l=1..n} g_il    (2)

in the formula: g_il represents the feature of the l-th sample of the i-th class, and n is the number of samples drawn per class;
computing the k-th local feature center p̂_i^k within class i, with the formula:

p̂_i^k = (1/n) * Σ_{l=1..n} p_il^k    (3)

in the formula: p_il^k represents the local feature of the k-th part of the l-th sample of the i-th class; averaging over the n samples gives the class local feature center p̂_i^k of the k-th part;
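Formulas (1)-(3) above can be sketched together: L2-normalize each feature, then average the n normalized features of a class to get its global center g_i and, per region k, its local center p̂_i^k. The dimensions (n=5 samples, C=128, K=4 regions) are illustrative.

```python
import numpy as np

def l2norm(x, axis=-1):
    """Formula (1): divide each feature vector by its Euclidean norm."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

g_class = l2norm(np.random.rand(5, 128))      # n global features of class i
p_class = l2norm(np.random.rand(5, 4, 128))   # n x K per-region local features

g_i = g_class.mean(axis=0)                    # formula (2): class global center
p_hat_i = p_class.mean(axis=0)                # formula (3): one center per region
```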
constructing the local feature triples from the class local feature center p̂_i^k, the local positive sample p_il^k and the local negative sample p_jl^k; and constructing the global feature triples from the class center g_i, the positive sample g_il and the negative sample g_jl.
6. The method for constructing the cow face identification feature extraction model according to claim 5, wherein the calculating of the total loss specifically includes the following steps:
in the global feature set G, calculating the global feature loss l_g with the global ternary loss function, the formula being:

l_g = (1/N) * Σ_i Σ_l max( d(g_i, g_il) − d(g_i, g_jl) + Δg, 0 )    (4)

in the formula: N is the total number of samples; g_jl represents the feature of the l-th sample of the j-th class (j ≠ i); Δg is the inter-class distance margin of the global features, a hyper-parameter set during training; d(g_i, g_j) represents the metric distance between the feature vectors g_i and g_j, expressed as the Euclidean distance:

d(g_i, g_j) = ||g_i − g_j||_2 = sqrt( Σ_t (g_it − g_jt)^2 )    (5)
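A minimal sketch of the center-anchored global triplet term of formulas (4)-(5), for a single (center, positive, negative) triple; the margin value 0.3 and the feature dimension are illustrative assumptions.

```python
import numpy as np

def d(a, b):
    """Formula (5): Euclidean distance between feature vectors."""
    return np.linalg.norm(a - b)

def global_triplet_loss(g_i, g_il, g_jl, delta_g=0.3):
    # Formula (4), one term: hinge on (positive dist - negative dist + margin)
    return max(d(g_i, g_il) - d(g_i, g_jl) + delta_g, 0.0)

rng = np.random.default_rng(0)
anchor = rng.normal(size=64)
# positive very close to the center, negative far away -> zero loss
loss = global_triplet_loss(anchor, anchor + 0.01, anchor + 1.0)
```

The full loss of formula (4) averages this term over all samples of all classes.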
in the local feature set P, calculating the local metric loss l_p with the local ternary loss function based on the local feature metric, the formula being:

l_p = (1/N) * Σ_i Σ_k Σ_l max( d(p̂_i^k, p_il^k) − d(p̂_i^k, p_jl^k) + Δp, 0 )    (6)

in the formula: Δp denotes the distance margin between region features of different classes, and p_jl^k represents the local feature of the k-th part of the l-th sample of the j-th class; l_p computes the class ternary loss of every region of every class in turn, so as to continuously optimize the distances between features;
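The per-region accumulation of formula (6) can be sketched as follows, again for a single class with one positive and one negative sample; the margin 0.3 and shapes (K=4 regions, C=64) are illustrative.

```python
import numpy as np

def local_triplet_loss(p_hat_i, p_il, p_jl, delta_p=0.3):
    """p_hat_i, p_il, p_jl: (K, C) arrays of per-region features."""
    loss = 0.0
    for k in range(p_hat_i.shape[0]):           # one hinge term per region k
        pos = np.linalg.norm(p_hat_i[k] - p_il[k])
        neg = np.linalg.norm(p_hat_i[k] - p_jl[k])
        loss += max(pos - neg + delta_p, 0.0)
    return loss

rng = np.random.default_rng(1)
center = rng.normal(size=(4, 64))
# positive regions near their centers, negative regions far -> zero loss
lp = local_triplet_loss(center, center + 0.01, center + 1.0)
```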
smoothly weighting the global feature loss l_g and the local metric loss l_p, and calculating the total loss l with the fused metric loss function, the formula being:

l = (1 − λ) * l_g + λ * l_p    (7)

in the formula: λ is the smoothing coefficient, adjusted during training.
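Formula (7) itself is a one-line convex combination; the λ value below is only a placeholder, since the patent leaves it to be tuned during training.

```python
def fused_loss(l_g, l_p, lam=0.5):
    """Formula (7): smooth-weighted fusion of global and local losses."""
    return (1 - lam) * l_g + lam * l_p

total = fused_loss(l_g=0.8, l_p=0.2, lam=0.25)  # 0.75*0.8 + 0.25*0.2
```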
7. A cattle face identity recognition method based on local feature metric and feature fusion learning, characterized in that it uses a feature extraction model constructed according to any one of claims 1-6 and comprises the following steps:
acquiring a cattle face target detection data set, and processing the cattle face target detection data set to obtain cattle face region images;
inputting the cattle face region images into the cattle face identity recognition feature extraction model, and extracting the global features;
and normalizing the global features and classifying them with a k-NN classifier to identify the cattle.
8. The cattle face identity recognition method based on local feature metric and feature fusion learning according to claim 7, wherein processing the cattle face target detection data set specifically comprises:
inputting the cattle face target detection data set into a target detection model for training to obtain the cattle face detection model, identifying the cattle face region, and obtaining the segmented cattle face region image.
9. The method of claim 8, wherein the target detection model is one of Fast RCNN, the YOLO series and the SSD series.
10. The cattle face identity recognition method based on local feature metric and feature fusion learning according to claim 7, wherein classifying with the k-NN classifier specifically comprises:
preprocessing the images in the cattle face image library, inputting the preprocessed cattle face images into the cattle face identity recognition feature extraction model, extracting the global features and normalizing them to form the cattle face feature library;
and performing k-NN classification with the global features extracted from the image to be identified against the cattle face features in the feature library to determine the identity of the cattle.
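The k-NN identification of claims 7 and 10 can be sketched as follows: compare the query's normalized global feature against the cattle face feature library by Euclidean distance and take a majority vote among the k nearest entries. The synthetic library, k=3, and the majority-vote tie-breaking are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def knn_identify(query, library_feats, library_ids, k=3):
    """Return the identity receiving the most votes among the k nearest features."""
    dists = np.linalg.norm(library_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(library_ids[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Synthetic feature library: 3 cattle, 4 stored features each.
rng = np.random.default_rng(2)
centers = rng.normal(size=(3, 32))
feats = np.vstack([c + 0.01 * rng.normal(size=(4, 32)) for c in centers])
ids = [i // 4 for i in range(12)]

cow_id = knn_identify(centers[1] + 0.01 * rng.normal(size=32), feats, ids)
```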
CN202211064047.5A 2022-09-01 2022-09-01 Cattle face identification feature extraction model construction method and cattle face identification method Pending CN115311689A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211064047.5A CN115311689A (en) 2022-09-01 2022-09-01 Cattle face identification feature extraction model construction method and cattle face identification method


Publications (1)

Publication Number Publication Date
CN115311689A true CN115311689A (en) 2022-11-08

Family

ID=83865286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211064047.5A Pending CN115311689A (en) 2022-09-01 2022-09-01 Cattle face identification feature extraction model construction method and cattle face identification method

Country Status (1)

Country Link
CN (1) CN115311689A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117711027A (en) * 2024-01-17 2024-03-15 内蒙古工业大学 Cow individual identity recognition method based on nose line
CN118038498A (en) * 2024-04-10 2024-05-14 四川农业大学 Fine granularity-based bee and monkey identity recognition method


Similar Documents

Publication Publication Date Title
CN106778902B (en) Dairy cow individual identification method based on deep convolutional neural network
AU2020102885A4 (en) Disease recognition method of winter jujube based on deep convolutional neural network and disease image
CN110120040B (en) Slice image processing method, slice image processing device, computer equipment and storage medium
CN111985536B (en) Based on weak supervised learning gastroscopic pathology image Classification method
CN109902590B (en) Pedestrian re-identification method for deep multi-view characteristic distance learning
CN115311689A (en) Cattle face identification feature extraction model construction method and cattle face identification method
WO2021238455A1 (en) Data processing method and device, and computer-readable storage medium
CN109325431B (en) Method and device for detecting vegetation coverage in feeding path of grassland grazing sheep
CN110120042B (en) Crop image pest and disease damage area extraction method based on SLIC super-pixel and automatic threshold segmentation
Li et al. Image recognition of grape downy mildew and grape powdery mildew based on support vector machine
Qing et al. Automated detection and identification of white-backed planthoppers in paddy fields using image processing
CN113674252A (en) Histopathology image diagnosis system based on graph neural network
CN107256398A (en) The milk cow individual discrimination method of feature based fusion
CN112101333A (en) Smart cattle farm monitoring and identifying method and device based on deep learning
CN110874835B (en) Crop leaf disease resistance identification method and system, electronic equipment and storage medium
CN101216886B (en) A shot clustering method based on spectral segmentation theory
CN116386120A (en) Noninductive monitoring management system
CN108520539B (en) Image target detection method based on sparse learning variable model
CN116206208B (en) Forestry plant diseases and insect pests rapid analysis system based on artificial intelligence
Poonguzhali et al. Crop condition assessment using machine learning
CN111259913A (en) Cell spectral image classification method based on bag-of-word model and textural features
CN113947780B (en) Sika face recognition method based on improved convolutional neural network
CN115909401A (en) Cattle face identification method and device integrating deep learning, electronic equipment and medium
DE202022105962U1 (en) A blade damage detection system based on shape
CN112241954B (en) Full-view self-adaptive segmentation network configuration method based on lump differentiation classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination