CN117894037A - Method and computing device for identity authentication of cattle - Google Patents

Method and computing device for identity authentication of cattle

Info

Publication number
CN117894037A
CN117894037A (application CN202311810864.5A)
Authority
CN
China
Prior art keywords
face
cow
image
cattle
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311810864.5A
Other languages
Chinese (zh)
Inventor
盖珂珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Muguo Technology Co ltd
Original Assignee
Beijing Muguo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Muguo Technology Co ltd filed Critical Beijing Muguo Technology Co ltd
Priority to CN202311810864.5A priority Critical patent/CN117894037A/en
Publication of CN117894037A publication Critical patent/CN117894037A/en
Pending legal-status Critical Current

Landscapes

  • Collating Specific Patterns (AREA)

Abstract

The invention provides a method and a computing device for identity authentication of cattle. The method comprises the following steps: acquiring a cow face image to be authenticated; inputting the cow face image to be authenticated into a cow face detection model for detection, and segmenting the cow face in the image from the background to obtain a first cow face image; inputting the first cow face image into an orientation detection model, and identifying the face orientation category of the cow face in the first cow face image; inputting the first cow face image into a pre-trained cow face feature extractor to obtain a feature vector of the first cow face image; and calculating the similarity between the feature vector of the first cow face image and the cow face feature vectors of the same orientation category in a pre-established cow face feature database, and determining that the cow corresponding to the feature vector with the highest similarity is the cow to be authenticated. The technical scheme provided by the invention reduces the cost of cattle identity authentication and improves authentication efficiency.

Description

Method and computing device for identity authentication of cattle
Technical Field
The invention relates to the technical fields of artificial intelligence and pasture informatization, and in particular to a method and a computing device for identity authentication of cattle.
Background
With the progress of society and the continuous expansion of cattle farming, traditional farming methods can no longer care for and manage herds scientifically and effectively. In a large-scale cattle farm, automated and informatized daily fine management of individual cattle, tracking of each animal's health, and traceability of milk and meat products all require an established and complete quality tracing system, and the key to such a system is the identification of individual cattle. Traditional identification methods mark a part of the animal's body with an external tool or attach a wearable marking device; such methods are invasive, seriously disturb the animals' daily behavior, and may cause potential harm.
Therefore, a technical solution is needed that can reduce the cost of cattle identity authentication and improve authentication efficiency.
Disclosure of Invention
The invention aims to provide a method and a computing device for identity authentication of cattle, which can reduce the cost of cattle identity authentication, improve authentication efficiency, reduce consumables, and improve usability.
According to an aspect of the present invention, there is provided a method for identity authentication of a cow, comprising:
acquiring a cow face image to be authenticated;
inputting the cow face image to be authenticated into a cow face detection model for detection, and segmenting the cow face in the image from the background to obtain a first cow face image;
inputting the first cow face image into an orientation detection model, and identifying the face orientation category of the cow face in the first cow face image;
inputting the first cow face image into a pre-trained cow face feature extractor to obtain a feature vector of the first cow face image;
and calculating the similarity between the feature vector of the first cow face image and the cow face feature vectors of the same orientation category in a pre-established cow face feature database, and determining that the cow corresponding to the feature vector with the highest similarity is the cow to be authenticated.
According to some embodiments, obtaining the cow face image to be authenticated includes:
video acquisition is carried out on the cattle face, wherein the video comprises videos of the front face, the left face and the right face of the cattle;
and performing frame extraction on the video to form image data.
According to some embodiments, inputting the to-be-authenticated cow face image into a cow face detection model for detection, including:
detecting the image quality of the cow face image to be authenticated, and performing the cow face/background segmentation operation on images of acceptable quality.
According to some embodiments, identifying a face orientation category of a cow face in the first cow face image comprises:
And recognizing the cow face in the first cow face image as a front face, a left face or a right face.
According to some embodiments, inputting the first face image into a pre-trained face feature extractor to obtain feature vectors of the first face image, including:
and obtaining the characteristic vector of the cow face in the first cow face image by using a pre-trained cow face classification model.
According to some embodiments, the cow face classification model is trained with an existing cow face data set comprising front face, left face and right face data sets, and the trained classification model is converted into the cow face feature extractor;
and the front face image, the left face image and the right face image of the target cow are input into the cow face feature extractor to obtain the corresponding cow face feature vectors of each cow, which are stored in the cow face feature database.
According to some embodiments, the cow face feature database is established in advance as follows:
acquiring cow face images of the faces of all cattle in a predetermined herd, including the front face, the left face and the right face of each cow;
inputting the cow face images into the cow face detection model and performing the cow face/background segmentation operation to obtain segmented cow face images;
inputting the cow face images into the orientation detection model and identifying the face orientation category in each cow face image;
inputting the cow face images into the corresponding cow face feature extractor to obtain cow face feature vectors, marking the orientation categories, and storing them in the cow face feature database.
According to some embodiments, calculating the similarity between the feature vector of the first cow face image and the cow face feature vectors of the same orientation category in the pre-established cow face feature database includes:
calculating the similarity between the cow face feature vector to be authenticated and the cow face feature vectors of the same orientation category in the cow face feature database using a cosine similarity measurement algorithm.
According to some embodiments, calculating the similarity between the feature vector of the first cow face image and the cow face feature vectors of the same orientation category in the pre-established cow face feature database, and determining that the cow corresponding to the feature vector with the highest similarity is the cow to be authenticated, includes:
acquiring a plurality of cow face images to be authenticated, determining the face orientation categories of the cow face images to be authenticated, and calculating a plurality of corresponding feature vectors;
calculating, for each face orientation, the similarity of the feature vectors to the cow face feature vectors of the same orientation category in the pre-established cow face feature database;
and determining the cow corresponding to the feature vector with the highest similarity as the cow to be authenticated.
According to another aspect of the present invention, there is provided a computing device comprising:
A processor; and
A memory storing a computer program which, when executed by the processor, causes the processor to perform the method of any one of the preceding claims.
According to another aspect of the invention there is provided a non-transitory computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, cause the processor to perform the method of any of the above.
According to the embodiments of the invention, the cow face image is detected, the cow face is classified according to face orientation, the cow face feature vector is extracted with the cow face feature extractor, and similarity is computed against a pre-established cow face feature database, thereby authenticating the cow. The invention improves cattle authentication efficiency: unlike prior-art ear tag authentication, cow face recognition only requires capturing a video of the cow; filming the cow face is convenient, does not disturb the animal excessively, and is easier to use.
According to some embodiments, in terms of cost, ear tags are no longer used for identity authentication, so the substantial ear-tag costs caused by continual damage and loss are saved, the steps of manually attaching ear tags and disinfecting are eliminated, and the cost of medicine for disinfecting the cattle's ears is saved, effectively reducing cost, reducing consumables, and protecting the environment.
According to some embodiments, different identity authentication strategies are adopted according to the number of available face photos of the target cow, so that different scene requirements can be met at the same time. In addition, the cow face is divided into three categories, front face, left face and right face, and the similarity is calculated for each category separately, taking the highest value; the resulting recognition accuracy is higher than that of mixed, orientation-agnostic recognition.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the description of the embodiments will be briefly described below.
Fig. 1 shows a flow chart of a method of identity authentication of a cow in accordance with an example embodiment.
Fig. 2 shows a flow chart of a method of face feature extraction according to an example embodiment.
Fig. 3 shows a flow chart of a method of face feature database creation according to an example embodiment.
Fig. 4 shows a flow chart of a method of cow image acquisition, processing and authentication in accordance with an exemplary embodiment.
Fig. 5 shows a schematic diagram of a bovine face classification store according to an example embodiment.
FIG. 6 illustrates a block diagram of a computing device in accordance with an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another element. Accordingly, a first component discussed below could be termed a second component without departing from the teachings of the present inventive concept. As used herein, the term "and/or" includes any one of the associated listed items and all combinations of one or more.
The user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present invention are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of related data is required to comply with the relevant laws and regulations and standards of the relevant country and region, and is provided with corresponding operation entries for the user to select authorization or rejection.
Those skilled in the art will appreciate that the drawings are schematic representations of example embodiments and that the modules or flows in the drawings are not necessarily required to practice the invention and therefore should not be taken to limit the scope of the invention.
With the growing population, accelerated urbanization, climate change and increasing demands on food quality and safety standards, traditional farming approaches face a number of challenges. Conventional bovine identification methods typically rely on physical markers, such as ear tags, collars, or foot rings, which contain specific identifiers, such as bar codes or RFID tags. These methods improve the recognition efficiency to some extent, but may cause animal discomfort and require manual intervention to read and analyze the data.
The cattle identity authentication technology is an agricultural application technology which is rising with the development of the fields of computer vision and machine learning in recent years. It is mainly used for the management of animal husbandry, including tracking individual animal health, breeding conditions, feeding schedules, etc. By automating the identification system, farmers and administrators can manage and optimize their operations more effectively.
To this end, the invention proposes a method for identity authentication of cattle, which automatically extracts and analyzes the facial features of cattle from photographs or videos using deep learning algorithms and image processing techniques. The method is non-invasive and requires no physical handling of the animals; it enables remote monitoring and real-time tracking, improves authentication efficiency, and reduces labor and material costs.
According to some embodiments, in current practice some ranches use models that take the face orientation of the cow into account to improve recognition accuracy; although the accuracy can reach 95.91%, the recognition cost is high and the requirements on equipment and hardware are demanding. Other ranches, to save cost, use a model for orientation-agnostic, mixed face recognition, with a lower accuracy of only 90.18%. The cattle identification and authentication method provided by the invention both preserves recognition accuracy and effectively reduces cost; in practical application the recognition accuracy can reach 96.91%.
Before describing embodiments of the present invention, some terms or concepts related to the embodiments of the present invention are explained.
EID: electronic IDentity, an electronic identification code.
RFID ear tag: radio Frequency Identification, radio frequency identification, or radio frequency identification technology.
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 shows a flow chart of a method of identity authentication of a cow in accordance with an example embodiment.
Referring to fig. 1, the method of the present invention comprises the following steps. A plurality of cow face images to be authenticated are acquired: a video of the cow is shot, and cow face images of the cow to be authenticated are obtained by extracting frames from the video. The face orientation categories of the cow face images to be authenticated, namely the front face, the left face and the right face, are determined, and the corresponding feature vectors are calculated. The similarity of the feature vectors to the cow face feature vectors of the same orientation category in a pre-established cow face feature database is then calculated per face orientation: the front face feature vector is compared with the front face feature vectors in the database, the left face feature vector with the left face feature vectors, and the right face feature vector with the right face feature vectors. After the similarities are calculated, the cow corresponding to the feature vector with the highest similarity is selected as the cow to be authenticated.
In S101, a cow face image to be authenticated is acquired.
According to some embodiments, video acquisition is performed on the cow face, including videos of the front face, the left face and the right face, and the videos are frame-extracted to form image data.
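The patent does not specify the frame-extraction tooling; the following is a minimal sketch assuming OpenCV is used, sampling one frame every frame_step frames (frame_step is a hypothetical parameter):

```python
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, frame_step: int = 10) -> int:
    """Sample every frame_step-th frame of a cow face video into JPEG images."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_step == 0:
            cv2.imwrite(f"{out_dir}/frame_{index:06d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```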
According to some embodiments, for example, in a scenario of identity authentication of a batch of cattle in a pasture, photographs of all aspects of the batch of cattle, including a front face, a left face, and a right face, are obtained.
In S103, the cow face image to be authenticated is input into a cow face detection model for detection, and the cow face in the image is segmented from the background to be used as a first cow face image.
According to some embodiments, the image quality of the cow face image to be authenticated is detected, and only images of acceptable quality are used for the cow face/background segmentation operation.
In S105, the first face image is input into the orientation detection model, and the face orientation type of the face in the first face image is identified.
According to some embodiments, the cow face in the first cow face image is identified as a front face, a left face, or a right face. For example, the orientation of the cow face image (front face, left face, right face) may be detected using a pre-trained ResNet neural network. According to other embodiments, the orientation of the cow face may be determined based on the eye positions.
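As a minimal sketch of such an orientation detector (assuming PyTorch/torchvision and a ResNet-18 backbone, neither of which the patent mandates), the model is simply a three-class classifier over the front, left and right categories:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

ORIENTATIONS = ["front", "left", "right"]  # assumed label order

def build_orientation_model(weights_path=None):
    """Three-class ResNet-18 classifier for cow face orientation."""
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, len(ORIENTATIONS))
    if weights_path:
        model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    return model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def predict_orientation(model, image_path):
    """Return the predicted orientation category for one cow face crop."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return ORIENTATIONS[int(logits.argmax(dim=1))]
```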
At S107, the first face image is input to a pre-trained face feature extractor to obtain feature vectors of the first face image.
According to some embodiments, a pre-constructed face feature extractor is utilized to obtain feature vectors of the faces in the first face image.
According to some embodiments, the cow face classification model is adapted into the cow face feature extractor, so that the output of the feature extractor is the feature vector extracted from the cow face photograph. For example, after the cow face classification model based on the residual neural network ResNet has been trained, the model structure may be rewritten by removing its last fully connected output layer; the feature vector produced when a photograph passes through the rewritten model (for example, a 512-dimensional vector) is then taken as the feature vector of the cow face image. Using the residual neural network ResNet to train on cow face images has the advantage of being fast and requiring fewer cow photos. ResNet solves the problem of network performance degradation through residual learning: for a stack of layers with input x, the learned feature is denoted H(x), and the stack is made to learn the residual F(x) = H(x) - x, so that the feature actually learned is F(x) + x. This works because learning the residual is easier than learning the original feature directly. When the residual is 0, the stacked layers simply perform an identity mapping, so at least the network performance does not degrade; in practice the residual is not 0, which allows the stacked layers to learn new features on top of the input features and thus achieve better performance. ResNet therefore also alleviates the problem of vanishing and even exploding gradients in deep networks. The exemplary embodiments of the present application use a residual network for training and recognition, but it is easy to understand that the technical solution of the present application is not limited thereto; other suitable neural network structures may also be employed based on the teachings of the present application.

In S109, the similarity between the feature vector of the first cow face image and the cow face feature vectors of the same orientation category in the pre-established cow face feature database is calculated, and the cow corresponding to the feature vector with the highest similarity is determined to be the cow to be authenticated.
According to some embodiments, a cosine similarity measurement algorithm is used to calculate the similarity between the cow face feature vector to be authenticated and the cow face feature vectors of the same orientation category in the cow face feature database.
According to some embodiments, cosine similarity is a method for measuring whether two non-zero vectors point in similar directions in space. It evaluates the similarity between two vectors by calculating the cosine of the angle between them; that is, the cosine of the angle between two vectors in the vector space is used as the measure of the difference between the two individuals. The closer the computed value is to 1, the more similar the two vectors are, while smaller values indicate a greater difference between them. Given two vectors A and B, the cosine similarity is calculated as follows:

cos(A, B) = (A · B) / (||A|| × ||B||) = (Σ_i A_i B_i) / ( sqrt(Σ_i A_i²) × sqrt(Σ_i B_i²) )

where A_i and B_i denote the respective components of the vectors A and B.
The result range of cosine similarity is between-1 and 1. When the two vectors are identical or point in the same direction, their cosine similarity is 1; when the two vectors are orthogonal (perpendicular to each other), the cosine similarity is 0; when the two vectors point in opposite directions, the cosine similarity is-1.
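A minimal sketch of this measurement (assuming NumPy; the patent does not prescribe a particular implementation):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two non-zero feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```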
According to some embodiments, driving an ear tag into a cow's ear may cause a stress reaction and ear inflammation, affect the animal's health and growth, and in severe cases even cause death. The invention uses cow face recognition technology and only needs to photograph or film the cow at a distance, which has no effect on its health.
In terms of usability, tagging a cow requires professional equipment, post-operation disinfection and similar steps, and there is a risk of the cow injuring the handler. In contrast, when acquiring cow face images it is convenient to film the cow face without disturbing the animal excessively; likewise, during later reading, the cow face can be recognized simply by filming the cow from a short distance with an ordinary smartphone, without a professional ear tag reader. In terms of energy consumption, the cow face recognition scheme requires no hardware such as ear tags, so it is environmentally friendly and low-consumption.
According to some embodiments, the invention solves the problems of damage, loss and the like of hardware equipment by storing and backing up the cow face images and videos on a server.
In terms of cost, since ear tags are no longer used to identify cattle, the substantial ear-tag costs caused by continual damage and loss, as well as the cost of medicine for disinfecting the cattle's ears, are saved. In terms of authentication efficiency, on the one hand the steps of manually attaching ear tags and disinfecting are eliminated, and filming the cow face is quick and convenient; on the other hand, when authenticating a cow's identity there is no need for a person to approach its ears with a professional reader to read the ear tag, and cow face recognition can be performed simply by filming the cow with a mobile phone, which is more efficient.
According to some embodiments, whereas ear tags may lack authority and can be counterfeited, the cow face recognition technology identifies all cattle uniformly, is authoritative, and cannot be faked.
Fig. 2 shows a flow chart of a method of face feature extraction according to an example embodiment.
Referring to fig. 2, a face feature extractor is first constructed in accordance with some embodiments. For example, the following procedure may be followed.
Data collection is carried out: a large number of cow face images are collected, and various illumination conditions, angles and expression changes are ensured to be covered.
Data preprocessing and labeling: the image is preprocessed, including scaling, cropping, normalization, etc., to reduce the effects of illumination, background, and other environmental factors.
Selecting a model and training the model: a suitable deep learning architecture, such as VGGFace, FaceNet or ResNet, is selected as the base model, a labeled training data set is prepared in which each sample corresponds to a specific cow, and the model is trained to learn to extract the facial features of the cow from the input image.
Performing model fine-tuning and optimization: the model is fine-tuned and its hyperparameters are adjusted to improve performance; the performance of the model on different data subsets is evaluated using methods such as cross-validation and improved as needed.
By constructing the cow face feature extractor in advance, the subsequent cow identity recognition process can be obviously simplified, and the overall efficiency and accuracy are improved. At the same time, such a system may also support large scale animal husbandry management and decisions, such as tracking animal health, optimizing feeding strategies, etc.
Referring to fig. 2, in S201, a face feature extractor is constructed in advance.
For example, the trained model structure is rewritten, the last fully connected output layer of the model is removed, and then the feature vector is generated by the photograph through the rewritten model as output.
According to some embodiments, the cow face feature extractor is utilized to obtain cow face feature vectors of the target cow, and the cow face feature vectors are stored in the database to obtain pre-stored cow face feature vectors.
According to some embodiments, the output of the cow face feature extractor is made to be the feature vector extracted from the cow face photograph.
At S203, the cow face feature extractor is trained using existing cow face data sets, including front face, left face and right face data sets.
According to some embodiments, the hyper-parameters of the model may be adjusted by cross-validation or the like to optimize model performance.
At S205, the front face image, the left face image and the right face image of the target cow are input into the cow face feature extractor, so as to obtain the corresponding cow face feature vectors of each cow, and the cow face feature vectors are stored in the cow face feature database.
According to some embodiments, the output of the cow face feature extractor is the feature vector extracted from a cow face photograph.
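A minimal sketch of converting a trained classification model into such a feature extractor (assuming PyTorch and a ResNet-18 backbone; the 512-dimensional output matches the example given above):

```python
import torch
import torch.nn as nn

def to_feature_extractor(classifier: nn.Module) -> nn.Module:
    """Drop the final fully connected layer so the ResNet outputs its 512-d feature vector."""
    classifier.fc = nn.Identity()
    return classifier.eval()

@torch.no_grad()
def extract_feature(extractor: nn.Module, image_tensor: torch.Tensor) -> torch.Tensor:
    """image_tensor: preprocessed (1, 3, 224, 224) cow face crop -> (512,) feature vector."""
    return extractor(image_tensor).squeeze(0)
```

Each extracted vector can then be stored in the feature database keyed by cow number and orientation category.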
According to this embodiment, a plurality of feature vectors are calculated separately according to the face orientation: front face feature vectors are extracted from the front face image, left face feature vectors from the left face image, and right face feature vectors from the right face image, and the vectors are then stored in the database by category.
Fig. 3 shows a flow chart of a method of face feature database creation according to an example embodiment.
Referring to fig. 3, at S301, cow face image acquisition is performed on cow faces of all cow of a predetermined cow group, including a front face, a left face, and a right face of a cow.
According to some embodiments, three orientations of the cow face, namely images of the front face, the left face and the right face, and video are acquired, and the video is extracted into images.
In S303, the cow face image is input into the cow face detection model, and the cow face/background segmentation operation is performed to obtain the segmented cow face image.
According to some embodiments, the acquired images are filtered based on key point detection and image quality detection to obtain initial images, and the cow face image is obtained by segmenting the cow face from the background of each image.
According to some embodiments, the cow face images are labeled by category so that it can be distinguished whether two samples come from the same cow, see fig. 5.
In S305, the face image is input into the orientation detection model, and the face orientation type in the face image is identified.
According to some embodiments, the orientation detection model identifies a bovine face in the bovine face image and classifies the orientation of the bovine face, i.e., the frontal face, the left face, or the right face.
In S307, the cow face image is input into the cow face feature extractor, the cow face feature vector is obtained, the orientation category is marked, and both are stored in the cow face feature database.
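A minimal sketch of populating such a database in memory (the dictionary layout is an assumption; the patent does not specify a storage format), where the two callables wrap an orientation classifier and a feature extractor such as those sketched earlier:

```python
import numpy as np
from collections import defaultdict

def build_feature_db(samples, classify_orientation, extract_vector):
    """
    samples: iterable of (cow_id, image_path) pairs covering front, left and right faces.
    classify_orientation: callable image_path -> "front" | "left" | "right".
    extract_vector: callable image_path -> 1-D feature vector (e.g. 512-d).
    Returns a dict mapping orientation category -> list of (cow_id, feature vector).
    """
    feature_db = defaultdict(list)
    for cow_id, image_path in samples:
        orientation = classify_orientation(image_path)
        feature_db[orientation].append((cow_id, np.asarray(extract_vector(image_path))))
    return feature_db
```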
Fig. 4 shows a flow chart of a method for cow image acquisition, model training and authentication according to an example embodiment.
Referring to the example embodiment of fig. 4, at S401, a cow face video is acquired using an applet or app.
According to an example embodiment, a worker uses an applet or app on a smartphone to perform a cow face video acquisition, including a cow's front face video, left face video, and right face video.
At S403, the video is decimated into images.
According to an example embodiment, a face video is uploaded to the background for video frame extraction into a large number of images.
In S405, the background performs image quality detection.
According to an example embodiment, the background performs image quality detection on the extracted frames; if the quality is not acceptable, cow face video acquisition is performed again.
At S407, the image is stored to the cloud server.
According to an example embodiment, image quality detection is performed in the background, and the cow face images whose quality is acceptable are stored in the cloud server.
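The patent does not specify the quality metric; one common choice is a blur check based on the variance of the Laplacian, sketched below under that assumption (the threshold value is hypothetical and would be tuned on real footage):

```python
import cv2

BLUR_THRESHOLD = 100.0  # hypothetical threshold; tune on real ranch footage

def is_quality_acceptable(image_path: str, threshold: float = BLUR_THRESHOLD) -> bool:
    """Reject overly blurry frames using the variance of the Laplacian."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold
```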
At S409, the image is fed into a YOLO detection model, and the cow face is segmented from the background.
According to some embodiments, a target detection algorithm model is used to segment the face of the cow in the image from the background, preserving the face image.
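A minimal sketch of this step, assuming the Ultralytics YOLO API and a custom model trained with a single cow-face class (both assumptions; the patent only states that a YOLO-family detector is used to separate the cow face from the background):

```python
import cv2
from ultralytics import YOLO

def crop_cow_face(model_path: str, image_path: str, out_path: str) -> bool:
    """Detect the cow face with a YOLO model and save the cropped region."""
    model = YOLO(model_path)            # e.g. a custom cow-face detector (assumed)
    result = model(image_path)[0]
    if len(result.boxes) == 0:
        return False
    # Take the highest-confidence detection.
    box = max(result.boxes, key=lambda b: float(b.conf))
    x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
    image = cv2.imread(image_path)
    cv2.imwrite(out_path, image[y1:y2, x1:x2])
    return True
```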
At S411, the images are stored in folders and labeled with category labels.
According to some embodiments, the images are classified by identifying the face orientation of the cow and are stored in the corresponding folders respectively, see fig. 5.
At S413, the images are sent to the cow face classification network for training, to obtain the cow face feature extractor.
According to some embodiments, after a large number of cow face images have been acquired, they are sent to the cow face recognition network for training until the network can distinguish the images of different cows in the sample cow face data set from one another; training then ends, yielding a trained cow face classification model. The cow face classification model may then be modified into the cow face feature extractor, for example by removing the final fully connected output layer of the model.
The training of the bovine face classification model is exemplarily described below.
Cow face photos are collected. For example, 30 photographs are taken of each cow: 10 each of the front face, the left face and the right face. When acquiring the photos, the cow face should fill most of the photo as far as possible, so that the model can recognize the cow face better.
The collected photos are respectively stored in the corresponding classified folders, and the names of the folders are the serial numbers of the cattle.
The acquired cow face data set is preprocessed and divided into a training set and a test set. After image acquisition is completed, a random partition into training and test sets may be performed, and the ratio of the training set to the test set may be configured. For example, in this example the ratio is set to 4:1, i.e. 8 of every 10 photos are used for training and 2 for testing.
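A minimal sketch of the split, assuming the per-cow folder layout described above and torchvision's ImageFolder (an implementation choice, not mandated by the patent; the dataset path is hypothetical):

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Folder names are the cow numbers, so ImageFolder assigns one class per cow.
dataset = datasets.ImageFolder("cow_faces/", transform=transform)  # hypothetical path

train_size = int(0.8 * len(dataset))            # 4:1 train/test split
test_size = len(dataset) - train_size
train_set, test_set = random_split(
    dataset, [train_size, test_size],
    generator=torch.Generator().manual_seed(0),  # reproducible split
)
```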
The model is trained using a residual neural network and the trained model is saved. After the cow face data set has been divided into the corresponding training set and test set, the model can be trained. Model training uses the residual neural network ResNet and mainly involves convolution operations, pooling operations, activation functions, a fully connected layer, and an objective function. After an original training photo is input, multiple convolution and pooling operations are performed until all features are extracted. The extracted features are then fed into the fully connected layer, which finally outputs a number, namely the predicted value of the cow number; the real number corresponding to that cow is the target value. Finally, the model reduces the difference between the predicted value and the target value, and training is repeated until a good training effect is achieved.
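A minimal training sketch under the same assumptions (PyTorch, ResNet-18 backbone, cross-entropy objective; the hyperparameters and the saved file name are illustrative only):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models

def train_classifier(train_set, num_cows: int, epochs: int = 20, lr: float = 1e-3) -> nn.Module:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, num_cows)   # one output per cow number
    model = model.to(device)

    loader = DataLoader(train_set, batch_size=32, shuffle=True)
    criterion = nn.CrossEntropyLoss()                       # objective function
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for epoch in range(epochs):
        model.train()
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)         # predicted vs. target cow number
            loss.backward()
            optimizer.step()

    torch.save(model.state_dict(), "cow_face_classifier.pt")  # hypothetical file name
    return model
```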
According to some embodiments, when training on the data set, the folder paths are configured as the paths of the training set and the test set; after training is completed, the generated model file can be saved automatically in the folder, and the generated model file can also be named in the training code.
The accuracy of the trained model is evaluated on the test set. After training is completed and the model has been saved, predictions are made with the cow face photos of the test set and the prediction accuracy is counted. According to an embodiment, the test set folder path is configured when predicting with the test set. When computing the accuracy, the cow number is encoded in the file name of each photo in this embodiment, so it can be compared directly with the output of the final fully connected layer of the model: if they are consistent, the recognition is correct, otherwise it is wrong. The results show that the accuracy reaches 99.1%.
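A minimal accuracy-evaluation sketch for the held-out test set, under the same assumptions as the training sketch above:

```python
import torch
from torch.utils.data import DataLoader

@torch.no_grad()
def evaluate(model, test_set, device: str = "cpu") -> float:
    """Return top-1 accuracy of the trained classifier on the test set."""
    model.eval().to(device)
    loader = DataLoader(test_set, batch_size=32)
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += int((preds == labels).sum())
        total += labels.size(0)
    return correct / total
```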
At S415, identity information of the cow is authenticated using a cow face feature extractor.
According to some embodiments, a plurality of cow face images to be authenticated are acquired and a plurality of corresponding feature vectors are calculated; the similarity of the feature vectors to the cow face feature vectors of the same orientation category in the pre-established cow face feature database is calculated separately for each face orientation; and the cow corresponding to the feature vector with the highest similarity is determined to be the cow to be authenticated.
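A minimal sketch of this matching step, assuming the feature database is held as an in-memory mapping from orientation category to (cow ID, vector) pairs, as in the earlier database sketch (a simplification; the patent does not define the storage schema):

```python
import numpy as np

def authenticate(query_vectors, feature_db):
    """
    query_vectors: orientation -> feature vector of the cow to be authenticated,
                   e.g. {"front": v1, "left": v2, "right": v3}.
    feature_db:    orientation -> list of (cow_id, stored feature vector).
    Returns the cow ID whose stored vector has the highest cosine similarity.
    """
    best_id, best_sim = None, -1.0
    for orientation, query in query_vectors.items():
        for cow_id, stored in feature_db.get(orientation, []):
            sim = float(np.dot(query, stored) /
                        (np.linalg.norm(query) * np.linalg.norm(stored)))
            if sim > best_sim:
                best_id, best_sim = cow_id, sim
    return best_id
```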
According to some embodiments, the identification process for a cow records not only the cow's face information but also a great deal of related information, such as the ranch the cow belongs to, owner information, the information of the RFID ear tag worn by the cow, and the like; therefore each cow needs to have its video acquired individually.
In accordance with some embodiments, in addition to bovine face recognition, other biometric techniques are being studied for livestock identification, such as iris scan and gait analysis. Overall, the goal of bovine identification technology is to increase the efficiency of animal husbandry, reduce human error, improve animal welfare, and support sustainable agricultural practices.
FIG. 6 illustrates a block diagram of a computing device according to an example embodiment of the invention.
As shown in fig. 6, computing device 30 includes processor 12 and memory 14. Computing device 30 may also include a bus 22, a network interface 16, and an I/O interface 18. The processor 12, memory 14, network interface 16, and I/O interface 18 may communicate with each other via a bus 22.
The processor 12 may include one or more general purpose CPUs (central processing units), microprocessors, or application specific integrated circuits, etc., for executing associated program instructions. According to some embodiments, computing device 30 may also include a high performance graphics processing unit (GPU) 20 that accelerates processor 12.
Memory 14 may include machine-readable media in the form of volatile memory, such as random access memory (RAM), read only memory (ROM), and/or cache memory. Memory 14 is used to store one or more programs including instructions as well as data. The processor 12 may read instructions stored in the memory 14 to perform the methods according to embodiments of the invention described above.
Computing device 30 may also communicate with one or more networks through network interface 16. The network interface 16 may be a wireless network interface.
Bus 22 may be a bus including an address bus, a data bus, a control bus, etc. Bus 22 provides a path for exchanging information between the components.
It should be noted that, in the implementation, the computing device 30 may further include other components necessary to achieve normal operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above method. The computer readable storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, DVDs, CD-ROMs, microdrives, and magneto-optical disks, ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), network storage devices, cloud storage devices, or any type of media or device suitable for storing instructions and/or data.
Embodiments of the present invention also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above.
It will be clear to a person skilled in the art that the solution according to the invention can be implemented by means of software and/or hardware. "Unit" and "module" in this specification refer to software and/or hardware capable of performing a specific function, either alone or in combination with other components, where the hardware may be, for example, a field programmable gate array, an integrated circuit, or the like.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, such as a division of units, merely a division of logic functions, and there may be additional divisions in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some service interface, device or unit indirect coupling or communication connection, electrical or otherwise.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in whole or in part in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method of the various embodiments of the present invention.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
The exemplary embodiments of the present invention have been particularly shown and described above. It is to be understood that the invention is not limited to the precise arrangements and instrumentalities described herein; on the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A method for identity authentication of a bovine, comprising:
acquiring a cow face image to be authenticated;
inputting the cow face image to be authenticated into a cow face detection model for detection, and segmenting the cow face in the image from the background to obtain a first cow face image;
inputting the first cow face image into an orientation detection model, and identifying the face orientation category of the cow face in the first cow face image;
inputting the first cow face image into a pre-trained cow face feature extractor to obtain a feature vector of the first cow face image;
and calculating the similarity between the feature vector of the first cow face image and the cow face feature vectors of the same orientation category in a pre-established cow face feature database, and determining that the cow corresponding to the feature vector with the highest similarity is the cow to be authenticated.
2. The method of claim 1, wherein acquiring the image of the cow face to be authenticated comprises:
video acquisition is carried out on the cattle face, wherein the video comprises videos of the front face, the left face and the right face of the cattle;
and performing frame extraction on the video to form image data.
3. The method of claim 1, wherein inputting the face image to be authenticated into a face detection model for detection, comprises:
detecting the image quality of the cow face image to be authenticated, and performing the cow face/background segmentation operation on images of acceptable quality.
4. The method of claim 1, wherein identifying the face orientation category of the face in the first face image comprises:
And recognizing the cow face in the first cow face image as a front face, a left face or a right face.
5. The method of claim 1, wherein inputting the first face image into a pre-trained face feature extractor to obtain feature vectors for the first face image comprises:
and obtaining the characteristic vector of the cow face in the first cow face image by using a pre-trained cow face classification model.
6. The method as recited in claim 5, further comprising:
training the cow face classification model by using an existing cow face data set, and modifying the cow face classification model into the cow face feature extractor, wherein the cow face data set comprises a front face data set, a left face data set and a right face data set;
And inputting the front face image, the left face image and the right face image of the target cow into the cow face feature extractor to obtain corresponding cow face feature vectors of each cow and storing the cow face feature vectors into the cow face feature database.
7. The method of claim 1, further comprising establishing the cow face feature database in advance by:
acquiring cow face images of the faces of all cattle in a predetermined herd, including the front face, the left face and the right face of each cow;
inputting the cow face images into a cow face detection model and performing the cow face/background segmentation operation to obtain segmented cow face images;
inputting the cow face images into an orientation detection model and identifying the face orientation category in each cow face image;
inputting the cow face images into the corresponding cow face feature extractor to obtain cow face feature vectors, marking the orientation categories, and storing them in the cow face feature database.
8. The method of claim 1, wherein calculating the similarity between the feature vector of the first cow face image and the cow face feature vectors of the same orientation category in the pre-established cow face feature database comprises:
calculating the similarity between the cow face feature vector to be authenticated and the cow face feature vectors of the same orientation category in the cow face feature database using a cosine similarity measurement algorithm.
9. The method of claim 8, wherein calculating the similarity between the feature vector of the first cow face image and the cow face feature vectors of the same orientation category in the pre-established cow face feature database, and determining that the cow corresponding to the feature vector with the highest similarity is the cow to be authenticated, comprises:
acquiring a plurality of cow face images to be authenticated, determining the face orientation categories of the cow face images to be authenticated, and calculating a plurality of corresponding feature vectors;
calculating, for each face orientation, the similarity of the feature vectors to the cow face feature vectors of the same orientation category in the pre-established cow face feature database;
And determining the cow corresponding to the feature vector with the highest similarity as the cow to be authenticated.
10. A computing device, comprising:
A processor; and
A memory storing a computer program which, when executed by the processor, causes the processor to perform the method of any one of claims 1-9.
CN202311810864.5A 2023-12-26 2023-12-26 Method and computing device for identity authentication of cattle Pending CN117894037A (en)

Priority Applications (1)

Application Number: CN202311810864.5A; Publication: CN117894037A (en); Priority Date: 2023-12-26; Filing Date: 2023-12-26; Title: Method and computing device for identity authentication of cattle

Applications Claiming Priority (1)

Application Number: CN202311810864.5A; Publication: CN117894037A (en); Priority Date: 2023-12-26; Filing Date: 2023-12-26; Title: Method and computing device for identity authentication of cattle

Publications (1)

Publication Number: CN117894037A (en); Publication Date: 2024-04-16

Family

ID=90648023

Family Applications (1)

Application Number: CN202311810864.5A; Status: Pending (CN117894037A, en); Priority Date: 2023-12-26; Filing Date: 2023-12-26; Title: Method and computing device for identity authentication of cattle

Country Status (1)

Country: CN; Publication: CN117894037A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination