CN114283366A - Method and device for identifying individual identity of dairy cow and storage medium - Google Patents

Method and device for identifying individual identity of dairy cow and storage medium

Info

Publication number
CN114283366A
Authority
CN
China
Prior art keywords
cow
feature
image
individual
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111597589.4A
Other languages
Chinese (zh)
Inventor
戴百生
沈维政
张哲�
严士超
李洋
孙雨坤
孙佳
张永根
熊本海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Agricultural University
Original Assignee
Northeast Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Agricultural University filed Critical Northeast Agricultural University
Priority to CN202111597589.4A priority Critical patent/CN114283366A/en
Publication of CN114283366A publication Critical patent/CN114283366A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for identifying the individual identity of a dairy cow, and a storage medium, comprising the following steps: performing deep feature extraction on historical cow images through a deep convolutional neural network to generate a feature template; acquiring a cow image to be identified; and matching the cow image to be identified with the feature template to identify the individual identity of the cow. By adopting the technical scheme of the invention, the cow identification accuracy can be improved, and when a new cow joins the herd, its features can be stored in the template feature library relatively quickly to complete registration, thereby avoiding repeated training of the feature extraction network.

Description

Method and device for identifying individual identity of dairy cow and storage medium
Technical Field
The invention belongs to the technical field of deep learning, and particularly relates to a method and a device for identifying the identity of a dairy cow individual based on deep feature extraction and matching, and a storage medium.
Background
Individual identification of dairy cows is not only a prerequisite for automatic behavior analysis, body-condition assessment, and weight estimation of dairy cows, but also an important component of the large-scale, information-driven, and precision development of animal husbandry. At present, Radio Frequency Identification (RFID) is widely used in farms as a technical means for distinguishing individual cows, mainly for locating or identifying them. However, applying RFID requires that each cow wear an ear tag. Ear tags are invasive and can affect animal welfare. Furthermore, the complex farm environment leads to severe loss and corrosion damage of ear tags, increasing economic costs. The Local Position Measurement (LPM) system based on radar technology is expensive and cannot be widely used in production. WLAN tracking systems are less expensive but still require the cow to wear a transponder; such systems are also contact-based and likewise subject to loss or damage. Experiments show that computer-based non-contact systems have lower cost and benefit animal welfare. Therefore, vision-based non-contact individual identification of cows has clear practical value.
With the rapid development of computer vision technology in the field of intelligent animal husbandry, research on and application of vision-based individual identification of dairy cows are increasing. The body regions commonly used for individual identification in current research include the face, muzzle, back, and tail. The facial features and the texture of the muzzle region of a cow contain abundant individual information. Research based on cow face and muzzle images has achieved certain results: the face or muzzle regions of cows are classified with traditional algorithms, most of which extract features with a hand-crafted feature extraction algorithm and then identify the cows with a classifier. However, traditional algorithms are complex in feature extraction and image preprocessing and have low identification accuracy, so they cannot meet practical requirements well. Individual identification using convolutional neural networks performs particularly well. Nevertheless, some problems remain in cow individual identification based on convolutional neural networks: the trained model stores the information of the cows already present on the farm, and the model must be retrained whenever new cows are added. Training a convolutional neural network, however, is time-consuming and labor-intensive.
Disclosure of Invention
The invention aims to provide a method and a device for identifying the individual identity of a dairy cow, and a storage medium, which can improve cow identification accuracy. For newly added cow data, registration can be completed quickly without retraining the feature extraction network, so repeated training is avoided and time is saved.
In order to achieve the purpose, the invention adopts the following technical scheme:
an individual identification method for dairy cows comprises the following steps:
Step S1, performing deep feature extraction on historical cow images through a deep convolutional neural network to generate a feature template;
Step S2, acquiring a cow image to be identified;
Step S3, matching the cow image to be identified with the feature template to identify the individual identity of the cow.
Preferably, step S1 is specifically: the deep feature of an image is extracted with the VGG-16 network according to the following formula:
Y(P) = x ∈ R^d
wherein P is a cow image, x is the feature matrix of the cow image P, and R denotes the real numbers; P is projected into a d-dimensional feature space to obtain the feature Y(P) in that space;
feature extraction is performed on each cow image P in the data set with the feature extraction function Y to obtain N feature vectors Y(P), and the cow image IDs together with the corresponding Y(P) are stored in the template feature library to generate the feature template.
Preferably, the VGG-16 network consists of 13 convolutional layers, 4 max-pooling layers, 3 fully-connected layers, and 1 softmax layer; each convolutional layer is built from 3 × 3 convolution kernels with a stride of 1, and each max-pooling layer is built from 2 × 2 pooling kernels with a stride of 2; the VGG-16 network serves as a deep feature extractor, and after the convolution and max-pooling operations the data is flattened with a Flatten() function to obtain a feature vector of dimension 1 × n.
Preferably, step S3 is specifically: according to the feature extraction function Y, the feature Y(I) of the cow image I to be identified is matched against the N features Y(P) in the feature template by calculating the similarity between the feature vectors mapped into the d-dimensional space.
Preferably, the similarity between the feature vectors mapped into the d-dimensional space is calculated by the Euclidean distance:
d_Euj = sqrt( Σ_{i=1}^{n} ( Y(I)_i - Y(P_j)_i )^2 )
wherein I denotes the cow image to be identified, P_j denotes the j-th cow image in the feature template, i indexes the n dimensions of the feature vector (I and P_j have the same feature dimension), and d_Euj denotes the similarity distance obtained by matching the deep feature Y(I) of the cow image I to be identified with the j-th deep feature Y(P_j) in the feature template.
Preferably, the similarity between the feature vectors mapped into the d-dimensional space is calculated by the cosine similarity:
d_Cosj = Σ_{i=1}^{n} Y(I)_i · Y(P_j)_i / ( sqrt( Σ_{i=1}^{n} Y(I)_i^2 ) · sqrt( Σ_{i=1}^{n} Y(P_j)_i^2 ) )
wherein I denotes the cow image to be identified, P_j denotes the j-th cow image in the feature template, i indexes the n dimensions of the feature vector (I and P_j have the same feature dimension), and d_Cosj denotes the cosine similarity obtained by matching the deep feature Y(I) of the cow image I to be identified with the j-th deep feature Y(P_j) in the feature template.
The invention also provides a device for identifying the individual identity of the dairy cow, which comprises:
the extraction module is used for performing deep feature extraction on historical cow images through a deep convolutional neural network to generate a feature template;
the acquisition module is used for acquiring a cow image to be identified;
and the identification module is used for matching the cow image to be identified with the feature template to identify the individual identity of the cow.
Preferably, deep feature extraction is performed on the image with the VGG-16 network; for newly added cow data, the extraction module does not need to be retrained, so repeated training is avoided; the VGG-16 network consists of 13 convolutional layers, 4 max-pooling layers, 3 fully-connected layers, and 1 softmax layer; each convolutional layer is built from 3 × 3 convolution kernels with a stride of 1, and each max-pooling layer is built from 2 × 2 pooling kernels with a stride of 2; the VGG-16 network serves as a deep feature extractor, and after the convolution and max-pooling operations the data is flattened with a Flatten() function to obtain a feature vector of dimension 1 × n.
Preferably, the identification module matches the features of the cow image to be identified with the features in the feature template by calculating the similarity between the feature vectors mapped in the multidimensional space so as to identify the individual identity of the cow.
The present invention also provides a storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a dairy cow individual identification method.
The method performs deep feature extraction on historical cow images through a deep convolutional neural network model to generate a feature template, and matches the cow image to be identified with the feature template to identify the individual identity of the cow; when a new cow joins the herd, registration can be completed quickly. By adopting the technical scheme of the invention, the individual identification accuracy for dairy cows is improved and the registration time for new cows is saved.
Drawings
FIG. 1 is a flow chart of the method for identifying the individual identity of a cow according to the present invention;
FIG. 2 is a schematic diagram of the architecture of a VGG-16 network;
FIG. 3 is a schematic diagram of Euclidean distance feature matching;
FIG. 4 is a schematic diagram of cosine similarity feature matching;
fig. 5 is a schematic structural diagram of the individual cow identity recognition device of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Example 1:
As shown in FIG. 1, the invention provides a method for identifying the individual identity of a dairy cow, which comprises the following steps:
Step S1, performing deep feature extraction on historical cow images through a deep convolutional neural network to generate a feature template;
Step S2, acquiring a cow image to be identified;
Step S3, matching the cow image to be identified with the feature template to identify the individual identity of the cow.
As an embodiment of the present invention, step S1 is specifically: the deep feature of an image is extracted with the VGG-16 network according to the following formula:
Y(P) = x ∈ R^d    (1)
wherein P is a cow image, x is the feature matrix of the cow image P, and R denotes the real numbers; P is projected into a d-dimensional feature space to obtain the feature Y(P) in that space;
feature extraction is performed on each cow image P in the data set with the feature extraction function Y to obtain N feature vectors Y(P), and the cow image IDs together with the corresponding Y(P) are stored in the template feature library to generate the feature template.
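To make step S1 concrete, the following Python sketch (an illustration only, not code from the patent) builds the template feature library as a plain dictionary from cow ID to deep feature vector; extract_feature is a hypothetical callable standing in for the VGG-16 extractor described below, and the inputs are assumed to be preprocessed images.

```python
import numpy as np

def build_feature_template(images_by_id, extract_feature):
    """Build the template feature library: cow ID -> deep feature Y(P)."""
    template = {}
    for cow_id, image in images_by_id.items():
        template[cow_id] = np.asarray(extract_feature(image))  # Y(P) = x in R^d
    return template

def register_new_cow(template, cow_id, image, extract_feature):
    """Register a newly added cow by storing its feature; no retraining needed."""
    template[cow_id] = np.asarray(extract_feature(image))
```

Registering a newly added cow is then a single feature extraction plus one insertion into the library, which is why the template-matching scheme avoids retraining the network.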
As an embodiment of the present invention, as shown in FIG. 2, the VGG-16 network consists of 13 convolutional layers, 4 max-pooling layers, 3 fully-connected layers, and 1 softmax layer; each convolutional layer is built from 3 × 3 convolution kernels with a stride of 1, and each max-pooling layer is built from 2 × 2 pooling kernels with a stride of 2. Stacking convolutional layers with small 3 × 3 filters improves network performance by deepening the network structure. The ELU activation function is used; the ELU combines characteristics of the sigmoid and ReLU functions, including soft saturation and non-saturation. Compared with ReLU, the ELU can output negative values, so the feedback information is richer, and it pushes the mean output of the activation units toward 0, achieving an effect similar to batch normalization. Therefore, the ELU returns richer information for input changes and keeps the mean of the outputs close to 0, which gives robustness. Meanwhile, the padding parameter is set to 'same', so the input to each convolution is padded at its borders, more edge information can be extracted, and the extracted features are richer. The VGG-16 network serves as a deep feature extractor, and after the convolution and max-pooling operations the data is flattened with a Flatten() function to obtain a feature vector of dimension 1 × n.
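The patent gives no source code; the following PyTorch sketch only mirrors the layer counts stated above (13 convolutional layers with 3 × 3 kernels, stride 1 and 'same' padding, ELU activations, 4 max-pooling layers with 2 × 2 kernels and stride 2, then Flatten). The channel widths are assumed from the standard VGG-16 configuration, and the fully-connected/softmax head is omitted because only the flattened feature vector is needed for matching.

```python
import torch
import torch.nn as nn

# 13 convolutional layers and 4 max-pooling stages ("M"), per the description;
# channel widths are assumed from the standard VGG-16 configuration.
CFG = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
       512, 512, 512, "M", 512, 512, 512]

def make_extractor(in_channels: int = 3) -> nn.Sequential:
    layers, c_in = [], in_channels
    for v in CFG:
        if v == "M":
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            layers.append(nn.Conv2d(c_in, v, kernel_size=3, stride=1, padding="same"))
            layers.append(nn.ELU(inplace=True))
            c_in = v
    layers.append(nn.Flatten())  # yields the 1 x n feature vector
    return nn.Sequential(*layers)

extractor = make_extractor()
y = extractor(torch.randn(1, 3, 224, 224))  # one cow image -> feature of shape (1, n)
```

In practice one could equally start from a pretrained torchvision VGG-16 trunk and adapt it; the hand-built version above is only meant to match the layer counts given in the description.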
As an embodiment of the present invention, step S3 is specifically: according to the feature extraction function Y, the feature Y(I) of the cow image I to be identified is matched against the N features Y(P) in the feature template by calculating the similarity between the feature vectors mapped into the d-dimensional space.
As an embodiment of the present invention, the Euclidean distance measures the distance between any two points in Euclidean space, and the same applies to the similarity of two n-dimensional vectors. As shown in FIG. 3, the similarity between the feature vectors mapped into the d-dimensional space is calculated by the Euclidean distance:
d_Euj = sqrt( Σ_{i=1}^{n} ( Y(I)_i - Y(P_j)_i )^2 )    (2)
wherein I denotes the cow image to be identified, P_j denotes the j-th cow image in the feature template, i indexes the n dimensions of the feature vector (I and P_j have the same feature dimension), and d_Euj denotes the similarity distance obtained by matching the deep feature Y(I) of the cow image I to be identified with the j-th deep feature Y(P_j) in the feature template.
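A minimal NumPy sketch of formula (2), matching the query feature Y(I) against every template entry; the dictionary layout follows the hypothetical template library sketched earlier.

```python
import numpy as np

def euclidean_match(query_feat, template):
    """Return {cow_id: d_Eu} between Y(I) and every Y(P_j) in the template."""
    q = np.asarray(query_feat, dtype=float)
    return {cow_id: float(np.sqrt(np.sum((q - np.asarray(feat)) ** 2)))
            for cow_id, feat in template.items()}
```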
As an embodiment of the present invention, as shown in FIG. 4, the similarity between the feature vectors mapped into the d-dimensional space is calculated by the cosine similarity:
d_Cosj = Σ_{i=1}^{n} Y(I)_i · Y(P_j)_i / ( sqrt( Σ_{i=1}^{n} Y(I)_i^2 ) · sqrt( Σ_{i=1}^{n} Y(P_j)_i^2 ) )    (3)
wherein I denotes the cow image to be identified, P_j denotes the j-th cow image in the feature template, i indexes the n dimensions of the feature vector (I and P_j have the same feature dimension), and d_Cosj denotes the cosine similarity obtained by matching the deep feature Y(I) of the cow image I to be identified with the j-th deep feature Y(P_j) in the feature template.
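The corresponding sketch of formula (3); here larger values mean more similar features, the opposite ordering to the Euclidean distance.

```python
import numpy as np

def cosine_match(query_feat, template):
    """Return {cow_id: d_Cos} between Y(I) and every Y(P_j) in the template."""
    q = np.asarray(query_feat, dtype=float)
    q_norm = np.linalg.norm(q)
    return {cow_id: float(np.dot(q, np.asarray(feat)) /
                          (q_norm * np.linalg.norm(np.asarray(feat))))
            for cow_id, feat in template.items()}
```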
Since d_Euj and d_Cosj are both used to measure the similarity of feature matching, they are denoted uniformly by the feature similarity fs_j. Each fs_j and the ID_j corresponding to P_j are stored in a Python list FL_x, as shown in the following formula:
FL_x = { (ID_j, fs_j) | j = 1, ..., N },  x = 1, ..., m    (4)
In this way, m lists are obtained (m being the number of categories of predicted images), and each feature ID in a list is a candidate cow that may be a successful match. After the two candidate-cow lists are obtained through formula (2) and formula (3), the candidate lists are sorted to improve identification accuracy, and the k (k << N) nearest similar features C_k(x) are returned from the sorted candidate list, as shown in the following formula:
C_k(x) = Rank_k( Sort( FL_x | x = 1, ..., m ) )    (5)
wherein Sort is a sorting function. Because fs_j covers two similarity metrics, Euclidean distance and cosine similarity, Sort is set to ascending order when the Euclidean distance is used to calculate the similarity of the feature vectors (the smaller the Euclidean distance, the more similar the two features), and to descending order when the cosine similarity is used (the larger the cosine value, the more similar the two features). The Rank function takes the first k values of the sorted list. Identification accuracy is then judged by whether the feature IDs of the first k candidates in the sorted candidate list are consistent with the feature ID corresponding to the predicted image I.
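A sketch of formulas (4) and (5) under the same assumptions: the scores returned by either matcher are sorted in the appropriate direction and the top k candidate IDs are returned. The metric flag is introduced here purely for illustration and is not named in the patent.

```python
def top_k_candidates(scores, k, metric="euclidean"):
    """scores: {cow_id: fs_j}. Return the k best (cow_id, fs_j) pairs, i.e. C_k(x)."""
    descending = (metric == "cosine")   # larger cosine value = more similar
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=descending)
    return ranked[:k]                   # Rank_k(Sort(FL_x))

# Usage: the predicted identity is the ID of the top candidate, and top-k accuracy
# checks whether the true ID appears among the k returned candidates.
```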
The method adopts the VGG-16 deep convolutional neural network as the feature extractor to extract deep features, and adopts the Euclidean distance as the feature-matching metric to determine the identity of the cow. By adopting the technical scheme of the invention, the identification accuracy for individual cows is improved; for newly added cow data, registration can be completed quickly without retraining the feature extraction network, so repeated training is avoided and time is saved.
Example 2:
As shown in FIG. 5, the present invention also provides a device for identifying the individual identity of a dairy cow, comprising:
the extraction module is used for performing deep feature extraction on historical cow images through a deep convolutional neural network model to generate a feature template;
the acquisition module is used for acquiring a cow image to be identified;
and the identification module is used for matching the cow image to be identified with the feature template to identify the individual identity of the cow.
Further, deep feature extraction is performed on the image with the VGG-16 network; the VGG-16 network consists of 13 convolutional layers, 4 max-pooling layers, 3 fully-connected layers, and 1 softmax layer; each convolutional layer is built from 3 × 3 convolution kernels with a stride of 1, and each max-pooling layer is built from 2 × 2 pooling kernels with a stride of 2; the VGG-16 network serves as a deep feature extractor, and after the convolution and max-pooling operations the data is flattened with a Flatten() function to obtain a feature vector of dimension 1 × n.
Further, the identification module matches the features of the cow image to be identified with the features in the feature template by calculating the similarity between the feature vectors mapped in the multidimensional space so as to identify the individual identity of the cow.
The present invention also provides a storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a dairy cow individual identification method.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for identifying the individual identity of a dairy cow, characterized by comprising the following steps:
step S1, performing deep feature extraction on historical cow images through a deep convolutional neural network to generate a feature template;
step S2, acquiring a cow image to be identified;
and step S3, matching the cow image to be identified with the feature template to identify the individual identity of the cow.
2. The method for identifying the individual identity of a dairy cow according to claim 1, wherein step S1 is specifically: the deep feature of an image is extracted with the VGG-16 network according to the following formula:
Y(P) = x ∈ R^d
wherein P is a cow image, x is the feature matrix of the cow image P, and R denotes the real numbers; P is projected into a d-dimensional feature space to obtain the feature Y(P) in that space;
and feature extraction is performed on each cow image P in the data set with the feature extraction function Y to obtain N feature vectors Y(P), and the cow image IDs together with the corresponding Y(P) are stored in the template feature library to generate the feature template.
3. The method for identifying the individual identity of a dairy cow according to claim 2, wherein the VGG-16 network consists of 13 convolutional layers, 4 max-pooling layers, 3 fully-connected layers, and 1 softmax layer; each convolutional layer is built from 3 × 3 convolution kernels with a stride of 1, and each max-pooling layer is built from 2 × 2 pooling kernels with a stride of 2; the VGG-16 network serves as a deep feature extractor, and after the convolution and max-pooling operations the data is flattened with a Flatten() function to obtain a feature vector of dimension 1 × n.
4. The method for identifying the individual identity of a dairy cow according to claim 3, wherein step S3 is specifically: according to the feature extraction function Y, the feature Y(I) of the cow image I to be identified is matched against the N features Y(P) in the feature template by calculating the similarity between the feature vectors mapped into the d-dimensional space.
5. The method for identifying the individual identity of a dairy cow according to claim 4, wherein the similarity between the feature vectors mapped into the d-dimensional space is calculated by the Euclidean distance:
d_Euj = sqrt( Σ_{i=1}^{n} ( Y(I)_i - Y(P_j)_i )^2 )
wherein I denotes the cow image to be identified, P_j denotes the j-th cow image in the feature template, i indexes the n dimensions of the feature vector (I and P_j have the same feature dimension), and d_Euj denotes the similarity distance obtained by matching the deep feature Y(I) of the cow image I to be identified with the j-th deep feature Y(P_j) in the feature template.
6. The method for identifying the individual identity of a dairy cow according to claim 4, wherein the similarity between the feature vectors mapped into the d-dimensional space is calculated by the cosine similarity:
d_Cosj = Σ_{i=1}^{n} Y(I)_i · Y(P_j)_i / ( sqrt( Σ_{i=1}^{n} Y(I)_i^2 ) · sqrt( Σ_{i=1}^{n} Y(P_j)_i^2 ) )
wherein I denotes the cow image to be identified, P_j denotes the j-th cow image in the feature template, i indexes the n dimensions of the feature vector (I and P_j have the same feature dimension), and d_Cosj denotes the cosine similarity obtained by matching the deep feature Y(I) of the cow image I to be identified with the j-th deep feature Y(P_j) in the feature template.
7. A device for identifying the individual identity of a dairy cow, characterized by comprising:
the extraction module is used for performing deep feature extraction on historical cow images through a deep convolutional neural network to generate a feature template;
the acquisition module is used for acquiring a cow image to be identified;
and the identification module is used for matching the cow image to be identified with the feature template to identify the individual identity of the cow.
8. The device for identifying the individual identity of a dairy cow according to claim 7, wherein deep feature extraction is performed on the image with the VGG-16 network; for newly added cow data, the extraction module does not need to be retrained, so repeated training is avoided; the VGG-16 network consists of 13 convolutional layers, 4 max-pooling layers, 3 fully-connected layers, and 1 softmax layer; each convolutional layer is built from 3 × 3 convolution kernels with a stride of 1, and each max-pooling layer is built from 2 × 2 pooling kernels with a stride of 2; the VGG-16 network serves as a deep feature extractor, and after the convolution and max-pooling operations the data is flattened with a Flatten() function to obtain a feature vector of dimension 1 × n.
9. The device for identifying the individual identity of a dairy cow according to claim 7, wherein the identification module matches the features of the cow image to be identified with the features in the feature template by calculating the similarity between the feature vectors mapped in the multidimensional space, so as to identify the individual identity of the cow.
10. A storage medium storing machine executable instructions to cause a processor to implement the method of identifying an individual cow according to any one of claims 1 to 6.
CN202111597589.4A 2021-12-24 2021-12-24 Method and device for identifying individual identity of dairy cow and storage medium Pending CN114283366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111597589.4A CN114283366A (en) 2021-12-24 2021-12-24 Method and device for identifying individual identity of dairy cow and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111597589.4A CN114283366A (en) 2021-12-24 2021-12-24 Method and device for identifying individual identity of dairy cow and storage medium

Publications (1)

Publication Number Publication Date
CN114283366A true CN114283366A (en) 2022-04-05

Family

ID=80874824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111597589.4A Pending CN114283366A (en) 2021-12-24 2021-12-24 Method and device for identifying individual identity of dairy cow and storage medium

Country Status (1)

Country Link
CN (1) CN114283366A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457593A (en) * 2022-07-26 2022-12-09 南京清湛人工智能研究院有限公司 Cow face identification method, system, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197605A (en) * 2018-01-31 2018-06-22 电子科技大学 Yak personal identification method based on deep learning
CN108549860A (en) * 2018-04-09 2018-09-18 深源恒际科技有限公司 A kind of ox face recognition method based on deep neural network
CN109815869A (en) * 2019-01-16 2019-05-28 浙江理工大学 A kind of finger vein identification method based on the full convolutional network of FCN
CN112784822A (en) * 2021-03-08 2021-05-11 口碑(上海)信息技术有限公司 Object recognition method, object recognition device, electronic device, storage medium, and program product

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197605A (en) * 2018-01-31 2018-06-22 电子科技大学 Yak personal identification method based on deep learning
CN108549860A (en) * 2018-04-09 2018-09-18 深源恒际科技有限公司 A kind of ox face recognition method based on deep neural network
CN109815869A (en) * 2019-01-16 2019-05-28 浙江理工大学 A kind of finger vein identification method based on the full convolutional network of FCN
CN112784822A (en) * 2021-03-08 2021-05-11 口碑(上海)信息技术有限公司 Object recognition method, object recognition device, electronic device, storage medium, and program product

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Yu Yong: "Artificial Intelligence in Practice: Build Your Own AI", 31 August 2019, Shanghai Scientific and Technological Education Publishing House *
Shang Ronghua et al.: "Introduction to Computational Intelligence", 30 September 2019, Xidian University *
Fang Junlong et al.: "Detection of group-housed pig targets using an improved CenterNet model", Transactions of the Chinese Society of Agricultural Engineering *
Gao Hongbo: "Design and Implementation of Typical Algorithm Applications in Engineering Practice", 31 December 2018, Beijing University of Posts and Telecommunications Press *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457593A (en) * 2022-07-26 2022-12-09 南京清湛人工智能研究院有限公司 Cow face identification method, system, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN111814584B (en) Vehicle re-identification method based on multi-center measurement loss under multi-view environment
CN107292298B (en) Ox face recognition method based on convolutional neural networks and sorter model
JP6935377B2 (en) Systems and methods for automatic inference of changes in spatiotemporal images
CN106778902B (en) Dairy cow individual identification method based on deep convolutional neural network
Oquab et al. Weakly supervised object recognition with convolutional neural networks
CN110909618B (en) Method and device for identifying identity of pet
Weng et al. Cattle face recognition based on a Two-Branch convolutional neural network
Bello et al. Image-based individual cow recognition using body patterns
Eitzinger et al. Assessment of the influence of adaptive components in trainable surface inspection systems
CN110942091A (en) Semi-supervised few-sample image classification method for searching reliable abnormal data center
CN113420709A (en) Cattle face feature extraction model training method and system and cattle insurance method and system
Rodrigues et al. Evaluating cluster detection algorithms and feature extraction techniques in automatic classification of fish species
CN113705596A (en) Image recognition method and device, computer equipment and storage medium
Van Zyl et al. Unique animal identification using deep transfer learning for data fusion in siamese networks
Wang et al. Pig face recognition model based on a cascaded network
Bello et al. Deep belief network approach for recognition of cow using cow nose image pattern
Zarbakhsh et al. Low-rank sparse coding and region of interest pooling for dynamic 3D facial expression recognition
CN114283366A (en) Method and device for identifying individual identity of dairy cow and storage medium
CN113704534A (en) Image processing method and device and computer equipment
CN116884067A (en) Micro-expression recognition method based on improved implicit semantic data enhancement
CN113947780B (en) Sika face recognition method based on improved convolutional neural network
Campos et al. Global localization with non-quantized local image features
CN111079617A (en) Poultry identification method and device, readable storage medium and electronic equipment
CN111160398A (en) Missing label multi-label classification method based on example level and label level association
CN114758356A (en) Method and system for recognizing cow lip prints based on local invariant features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220405