CN112132026A - Animal identification method and device - Google Patents

Publication number
CN112132026A
CN112132026A
Authority
CN
China
Prior art keywords
animal
images
target
image
video frame
Prior art date
Legal status
Pending
Application number
CN202011006350.0A
Other languages
Chinese (zh)
Inventor
米谷禾
Current Assignee
Shenzhen Saiante Technology Service Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN202011006350.0A
Publication of CN112132026A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The application relates to the technical field of artificial intelligence, and provides an animal identification method and device. The animal identification method comprises the following steps: detecting animal parts contained in a plurality of animal images of a target animal to determine animal part areas in the animal images, wherein the animal images are obtained by image acquisition of the target animal at different angles; generating animal part images corresponding to the animal part areas in the animal images to obtain a plurality of animal part images; dividing the plurality of animal part images into at least one class according to the types of animal parts contained in the plurality of animal part images, and determining the comparison result of each class of animal part images; and determining the type of the target animal based on the comparison results of the classes of animal part images. The technical scheme of the embodiments of the application can identify an animal accurately and quickly.

Description

Animal identification method and device
Technical Field
The application relates to the field of artificial intelligence, in particular to an animal identification method and device.
Background
In daily life, some animals are seen frequently, yet their species and sex cannot be distinguished, even though species and sex determine differences in growth speed, form, flavor and the like. In animal farming, single-sex (monosex) rearing is sometimes practiced to improve yield and profit, which requires correctly identifying the sex of the animals; likewise, in artificial breeding, the sex of the animals must be identified accurately so that parents can be selected, purchased and matched. Identification of animals is therefore of great significance. However, the prior art lacks effective techniques for distinguishing the species and sex of animals.
Disclosure of Invention
The embodiments of the application provide an animal identification method and device, so that animal identification can be completed accurately and rapidly, at least to a certain extent.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided an animal identification method including: detecting animal parts contained in a plurality of animal images of a target animal to determine animal part areas in the animal images, wherein the animal images are obtained by image acquisition of the target animal at different angles; generating animal part images corresponding to the animal part areas in the animal images to obtain a plurality of animal part images; dividing the plurality of animal part images into at least one class according to the types of animal parts contained in the plurality of animal part images, and determining the comparison result of each class of animal part images; and determining the type of the target animal based on the comparison results of the classes of animal part images.
According to an aspect of an embodiment of the present application, there is provided an animal recognition apparatus including: a first detection unit configured to detect animal parts contained in a plurality of animal images of a target animal to determine an animal part region in each animal image, the plurality of animal images being obtained by image-capturing the target animal from different angles; a generating unit configured to generate animal part images corresponding to the animal part areas in the respective animal images, to obtain a plurality of animal part images; a dividing unit configured to divide the plurality of animal part images into at least one class according to the types of the animal parts they contain and determine the comparison results of the classes of animal part images; and a determining unit configured to determine the type of the target animal based on the comparison results of the classes of animal part images.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes: an extraction unit configured to acquire video streams of the target animal from different angles and extract a plurality of video frame images from the video streams; a second detection unit configured to detect the target animal contained in each extracted video frame image to obtain the posture form of the target animal contained in each extracted video frame image; and a first selection unit configured to select the plurality of animal images from the plurality of video frame images according to the posture form of the target animal contained in each video frame image.
In some embodiments of the present application, based on the foregoing solution, the second detection unit includes: a detection subunit configured to perform target animal detection in the respective video frame images to determine an animal detection frame containing the target animal in each video frame image; and a first determining subunit configured to determine the posture form of the target animal contained in each video frame image according to the adjacent side length ratio of the animal detection frame in each video frame image.
In some embodiments of the present application, based on the foregoing scheme, the detecting subunit is configured to: and detecting each video frame image by using an image detection model, wherein training samples of the image detection model comprise video frame image samples marked with animal detection frames and enhanced images obtained by performing image enhancement on the video frame image samples.
In some embodiments of the present application, based on the foregoing scheme, the dividing unit is configured to: calculate the similarity between each class of animal part images and a plurality of animal images in an animal database, respectively, to obtain a plurality of similarity comparison results; and take the plurality of similarity comparison results as the comparison results of the classes of animal part images; the determination unit is configured to: determine, according to the plurality of similarity comparison results, the matching degree between the target animal and each of the plurality of animal images; and determine the type of the target animal according to the matching degree.
In some embodiments of the present application, based on the foregoing scheme, the dividing unit includes: a generating subunit configured to generate hash codes of the classes of animal part images and determine a target hash code according to the hash codes of the classes of animal part images; and a comparison subunit configured to compare the target hash code with hash codes of a plurality of animal images in an animal database, respectively, to obtain the comparison results of the classes of animal part images; the determination unit is configured to: take the animal type of the animal image whose hash code is identical to the target hash code as the type of the target animal.
In some embodiments of the present application, based on the foregoing scheme, the generating subunit is configured to: extracting the features of the various animal part images to obtain feature vectors corresponding to the various animal part images; calculating radial basis function mapping matrixes corresponding to the various animal part images based on the feature vectors; and generating hash codes corresponding to the various animal part images based on the radial basis function mapping matrix.
In some embodiments of the present application, based on the foregoing solution, the apparatus further includes: a judging unit configured to judge, according to the kind of the target animal, whether the target animal belongs to animals whose sex can be discriminated from external features; an obtaining unit configured to, if so, obtain the external features of the target animal and determine the sex of the target animal according to the external features; and a second selection unit configured to, if not, select a target animal part image from the plurality of animal part images and determine the sex of the target animal according to the target animal part image.
In some embodiments of the present application, based on the foregoing scheme, the second selecting unit is configured to: determining a sex recognition part corresponding to the type according to the type of the target animal; and selecting the target animal part image from the plurality of animal part images according to the sex recognition part.
In some embodiments of the present application, after a plurality of animal images of a target animal are obtained, the animal parts contained in the plurality of animal images are detected to determine animal part areas, and animal part images corresponding to the animal part areas are generated; the plurality of animal part images are then classified, the comparison results of the classes of animal part images are determined, and the type of the target animal is determined from these results. The technical scheme in the embodiments of the application therefore needs no special animal identification tool or dedicated animal identification model; a plurality of animal images suffice to complete the type identification of the target animal. Meanwhile, a plurality of animal part images are obtained for the target animal and its type is determined from the comparison results of all of them, which guarantees the accuracy of the identification result compared with using a single animal part from a single animal image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a diagram illustrating an exemplary system architecture to which aspects of embodiments of the present application may be applied;
fig. 2 shows a flow diagram of an animal identification method according to an embodiment of the present application;
fig. 3 shows a flow diagram of an animal identification method according to an embodiment of the present application;
fig. 4 shows a flow diagram of an animal identification method according to an embodiment of the present application;
fig. 5 shows a flow diagram of an animal identification method according to an embodiment of the present application;
fig. 6 shows a flow diagram of an animal identification method according to an embodiment of the present application;
fig. 7 shows a flow diagram of an animal identification method according to an embodiment of the present application;
fig. 8 shows a flow diagram of an animal identification method according to an embodiment of the present application;
fig. 9 shows a flow diagram of an animal identification method according to an embodiment of the present application;
fig. 10 shows a block diagram of an animal identification apparatus according to an embodiment of the present application;
FIG. 11 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
It is to be noted that the terms used in the specification and claims of the present application and the above-described drawings are only for describing the embodiments and are not intended to limit the scope of the present application. It will be understood that the terms "comprises," "comprising," "includes," "including," "has," "having," and the like, when used herein, specify the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be further understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element without departing from the scope of the present invention. Similarly, a second element may be termed a first element. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired communication links, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, portable computers, desktop computers, and the like.
In an embodiment of the present application, the terminal 101 may perform image capture on a target animal to be identified, and then transmit a plurality of animal images obtained by image capture on the target animal from different angles to the server 105 through the network 104. After receiving the plurality of animal images, the server 105 detects animal parts included in each animal image to determine animal part areas in each animal image, then the server 105 generates animal part images corresponding to the animal part areas in each animal image to obtain a plurality of animal part images, divides the plurality of animal part images into at least one class according to types of the animal parts included in the plurality of animal part images, determines comparison results of each class of animal part images, and determines the type of the target animal based on the comparison results of each class of animal part images. After identifying the type of the target animal, the server 105 may return the identification result to the terminal 101 through the network 104, and the terminal 101 may present the identification result of the target animal to the user.
The animal identification method provided by the embodiment of the present application is generally executed by the server 105, and accordingly, the animal identification apparatus is generally disposed in the server 105. However, it is easily understood by those skilled in the art that the animal identification method provided in the embodiment of the present application may also be executed by the terminal devices 101, 102, and 103, and accordingly, the animal identification apparatus may also be disposed in the terminal devices 101, 102, and 103, which is not particularly limited in the exemplary embodiment. For example, in an exemplary embodiment, the user may upload a plurality of animal images of the target animal to the server 105 through the terminal devices 101, 102, 103, and the server 105 processes the plurality of animal images of the target animal by using the animal identification method provided in the embodiment of the present application and sends the obtained identification result of the target animal to the terminal devices 101, 102, 103.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
fig. 2 shows a flow chart of an animal identification method according to an embodiment of the present application, which may be performed by a server, which may be the server 105 shown in fig. 1, but which may also be performed by a terminal device, such as the terminal 101 shown in fig. 1. Referring to fig. 2, the method includes:
step S210, detecting animal parts contained in a plurality of animal images of a target animal to determine animal part areas in each animal image, wherein the animal images are obtained by carrying out image acquisition on the target animal at different angles;
step S220, generating animal part images corresponding to the animal part areas in the animal images to obtain a plurality of animal part images;
step S230, dividing the plurality of animal position images into at least one type according to the types of animal positions contained in the plurality of animal position images, and determining comparison results of various types of animal position images;
and step S240, determining the type of the target animal based on the comparison results of the classes of animal part images.
These steps are described in detail below.
In step S210, animal parts included in a plurality of animal images of a target animal, which are acquired by image-capturing the target animal from different angles, are detected to determine animal part regions in the respective animal images.
The target animal may be any animal. An animal part refers to one of the parts constituting the whole animal, and the parts differ between kinds of animals; for example, the parts of a bird may include the beak, feathers, head, tail, wings, claws, legs, eyes, and so on.
Specifically, after detecting an animal image acquisition request, the terminal device can start its camera to capture images of the target animal from different angles, obtaining a plurality of animal images acquired from different angles. Acquiring a plurality of animal images from different angles makes it easier to detect the animal parts in the animal images.
In one embodiment, the animal parts contained in the plurality of animal images of the target animal may be detected with an animal part detection model, whose training samples include video frame image samples labeled with animal parts and enhanced images obtained by performing image enhancement processing on those samples.
It is understood that detecting the animal parts contained in the plurality of animal images of the target animal may mean detecting all animal parts of the target animal, or only the key animal parts of the target animal. For example, for some animals, feathers are the important part for sex differentiation, so only the feathers need to be detected; compared with detecting all animal parts, detecting only the key parts saves computing resources and improves detection speed.
It will also be appreciated that the accuracy of animal identification can be ensured by capturing multiple images of the target animal from different angles, as compared to just one image of the animal.
Step S220 is to generate animal part images corresponding to the animal part regions in the respective animal images, and obtain a plurality of animal part images.
Specifically, the animal part region may be cropped out of each animal image to obtain an animal part image containing that region.
For example, suppose four animal images P1, P2, P3 and P4 of the target animal are obtained, and animal part regions A, B, C and D are detected in each of the four images. For the animal image P1, animal part images P11, P12, P13 and P14 corresponding to the regions A, B, C and D are generated; likewise, P21, P22, P23 and P24 are generated for P2, P31, P32, P33 and P34 for P3, and P41, P42, P43 and P44 for P4. Finally, sixteen animal part images P11 through P44 are obtained.
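This cropping step can be sketched as follows (a minimal illustration only; it assumes the part detector of step S210 has already returned pixel bounding boxes, and the function and variable names are chosen for this example rather than taken from the patent):

```python
from PIL import Image

def crop_part_images(animal_images, part_boxes):
    """Crop every detected animal part region into its own animal part image.

    animal_images: dict mapping an image id such as "P1" to a PIL.Image
    part_boxes:    dict mapping an image id to {part id: (left, top, right, bottom)},
                   assumed to be the regions detected in step S210
    """
    part_images = {}
    for img_id, image in animal_images.items():
        for part_id, box in part_boxes[img_id].items():
            # e.g. image "P1" and region "A" give the part image P11 of the text
            part_images[(img_id, part_id)] = image.crop(box)
    return part_images
```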
Step S230, dividing the plurality of animal part images into at least one class according to the types of the animal parts contained in the plurality of animal part images, and determining the comparison results of the classes of animal part images.
Specifically, after the plurality of animal part images are obtained in step S220, they can be classified according to the types of animal parts they contain, obtaining one or more classes of animal part images.
Continuing the example of step S220, suppose P11, P21, P31 and P41 all contain eyes, so they form one class; P12, P22, P32 and P42 all contain heads, so they form one class; P13, P23, P33 and P43 all contain noses, so they form one class; and P14, P24, P34 and P44 all contain the body, so they form one class. Four classes of animal part images (eyes, head, nose, body) are thereby obtained.
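A sketch of this grouping, assuming each animal part image carries the part type predicted by the detector (the dictionary layout is illustrative):

```python
from collections import defaultdict

def group_by_part_type(part_images, part_types):
    """Group animal part images into classes by the type of part they contain.

    part_images: dict mapping a part-image id (e.g. ("P1", "A")) to an image
    part_types:  dict mapping the same ids to a part type such as "eyes"
    """
    classes = defaultdict(list)
    for image_id, image in part_images.items():
        classes[part_types[image_id]].append(image)
    # e.g. {"eyes": [P11, P21, P31, P41], "head": [...], "nose": [...], "body": [...]}
    return dict(classes)
```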
After the classes of animal part images are obtained, they can be compared with the plurality of animal images in the animal database to obtain the comparison results of the classes of animal part images. It will be appreciated that the animal database holds a vast amount of animal image data, so the classes of animal part images can be compared against its image resources.
And step S240, determining the type of the target animal based on the comparison results of the classes of animal part images.
Specifically, the comparison result of each class of animal part images may be that they are the same or similar, or different or dissimilar; more specifically, the comparison result may give the similarity as a concrete value, for example 50% or 80%. The type of the target animal can then be determined from the animal types found to be the same or similar in the comparison results.
In an embodiment of the present application, the comparison result of each class of animal part images may be a similarity comparison between each class of animal part images and a plurality of animal images in an animal database. In this embodiment, as shown in fig. 5, the method may specifically include steps S510 to S540, which are described in detail below:
and step S510, calculating the similarity between each type of animal position image and a plurality of animal images in an animal database respectively to obtain a plurality of similarity comparison results.
Specifically, in order to obtain the comparison result of each type of animal part image, the similarity between each type of animal part image and a plurality of animal images in the animal database may be calculated, so as to obtain the similarity comparison result of each type of animal part image.
It should be noted that the picture similarity may be calculated with existing methods such as computing the Euclidean distance between features or comparing hash values.
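For instance, one common way to turn the Euclidean distance between two feature vectors into a similarity score in [0, 1] is the following (the 1/(1 + d) conversion is one conventional choice; the patent does not fix a particular formula):

```python
import numpy as np

def euclidean_similarity(feat_a, feat_b):
    """Similarity score derived from the Euclidean distance of two feature vectors."""
    d = np.linalg.norm(np.asarray(feat_a, dtype=float) - np.asarray(feat_b, dtype=float))
    return 1.0 / (1.0 + d)  # distance 0 -> similarity 1; large distance -> near 0
```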
And step S520, taking the plurality of similarity comparison results as the comparison results of the classes of animal part images.
Step S530, according to the multiple similarity comparison results, determining the matching degree between the target animal and the multiple animal images respectively.
After the similarity comparison results of the classes of animal part images are obtained, that is, after a plurality of similarities are obtained, a plurality of matching degrees between the target animal and the animals in the animal database can be determined from those similarities, and the type of the target animal is then determined from the matching degrees.
In some embodiments, the plurality of matching degrees between the target animal and the animals in the animal database may be determined from the similarities by calculating the average of the similarities, by assigning a weight to each animal part and calculating a weighted sum, or by removing the maximum and minimum similarities and averaging the rest.
Continuing the example of step S230, suppose the animal database contains 5 animal images whose animal types are lion, tiger, crocodile, monkey and rabbit. The similarities between the eye-class animal part images and the 5 animal images are calculated as 10%, 20%, 30%, 40% and 50%; between the head-class animal part images and the 5 animal images as 15%, 25%, 35%, 45% and 55%; between the nose-class animal part images and the 5 animal images as 18%, 28%, 38%, 42% and 55%; and between the body-class animal part images and the 5 animal images as 11%, 21%, 31%, 42% and 51%.
After the similarities between each class of animal part images and the animal images in the database are obtained, the matching degrees can be calculated by the average method. From the similarities of 10%, 15%, 18% and 11% between the eye, head, nose and body part images and the lion, the matching degree between the target animal and the lion is (10% + 15% + 18% + 11%)/4 = 13.5%; from the similarities of 20%, 25%, 28% and 21% with the tiger, the matching degree between the target animal and the tiger is (20% + 25% + 28% + 21%)/4 = 23.5%; from the similarities of 30%, 35%, 38% and 31% with the crocodile, the matching degree between the target animal and the crocodile is (30% + 35% + 38% + 31%)/4 = 33.5%; from the similarities of 40%, 45%, 42% and 42% with the monkey, the matching degree between the target animal and the monkey is (40% + 45% + 42% + 42%)/4 = 42.25%; and from the similarities of 50%, 55%, 55% and 51% with the rabbit, the matching degree between the target animal and the rabbit is (50% + 55% + 55% + 51%)/4 = 52.75%.
With continued reference to fig. 5, in step S540, the type of the target animal is determined according to the matching degree.
After the matching degrees are obtained, they can be sorted and the animals whose matching degree exceeds a preset threshold taken as the type of the target animal, or the animal type with the maximum matching degree can be used directly as the type of the target animal. In the example above, the rabbit has the maximum matching degree of 52.75%, so the target animal is identified as a rabbit.
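The aggregation of step S530 and the selection of step S540 can be sketched together as follows (function names and the dictionary layout are illustrative; the three aggregation methods mirror the options listed above):

```python
import numpy as np

def matching_degree(similarities, method="mean", weights=None):
    """Aggregate per-part similarities against one database animal (step S530)."""
    s = np.asarray(similarities, dtype=float)
    if method == "weighted":                # weight per animal part, then weighted sum
        return float(np.average(s, weights=weights))
    if method == "trimmed" and len(s) > 2:  # drop the maximum and minimum, average the rest
        return float(np.mean(np.sort(s)[1:-1]))
    return float(np.mean(s))                # plain average, as in the worked example

def identify(per_animal_similarities, threshold=None):
    """Step S540: return animals above a preset threshold, or the single best match."""
    degrees = {a: matching_degree(s) for a, s in per_animal_similarities.items()}
    if threshold is not None:
        return {a: d for a, d in degrees.items() if d > threshold}
    return max(degrees, key=degrees.get)

# With the worked numbers above, the best match is the rabbit (52.75%):
sims = {"lion": [0.10, 0.15, 0.18, 0.11], "tiger": [0.20, 0.25, 0.28, 0.21],
        "crocodile": [0.30, 0.35, 0.38, 0.31], "monkey": [0.40, 0.45, 0.42, 0.42],
        "rabbit": [0.50, 0.55, 0.55, 0.51]}
assert identify(sims) == "rabbit"
```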
In an embodiment of the present application, the comparison result of each class of animal part images may also be based on comparing hash codes of the classes of animal part images; in this embodiment, as shown in fig. 6, the method specifically includes steps S610 to S630, described in detail as follows:
and S610, generating hash codes of the various animal part images, and determining target hash codes according to the hash codes of the various animal part images.
Specifically, a hash code can be generated for each class of animal part images, and once the hash codes of the classes are obtained, the target hash code can be determined from them.
The target hash code may be determined by calculating an average value of the hash codes of the various animal part images, taking the calculated average value as the target hash code, or taking a maximum value of the hash codes of the various animal part images as the target hash code, or obtaining the target hash code in other manners.
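As a sketch of these combination options (assuming each class hash code is a ±1 vector; the per-bit majority vote after averaging is an added assumption, since an averaged code must be re-binarized to remain a hash code):

```python
import numpy as np

def target_hash(per_class_codes, mode="mean"):
    """Combine the per-class hash codes into the target hash code.

    per_class_codes: array of shape (num_classes, code_length) with +/-1 entries
    """
    h = np.asarray(per_class_codes, dtype=float)
    if mode == "max":
        return h.max(axis=0)       # element-wise maximum of the class codes
    return np.sign(h.sum(axis=0))  # average then binarize = per-bit majority vote
```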
In an embodiment of the present application, the hash codes may be generated by calculating radial basis function mapping matrices corresponding to the classes of animal part images; as shown in fig. 7, this may specifically include:
and step S710, performing feature extraction on the various animal part images to obtain feature vectors corresponding to the various animal part images.
And S720, calculating radial basis function mapping matrixes corresponding to the various animal part images based on the feature vectors.
And step S730, generating hash codes corresponding to the various animal part images based on the radial basis function mapping matrix.
After the feature vectors corresponding to the classes of animal part images are obtained in step S710, the radial basis function mapping matrix corresponding to each class of animal part images can be calculated according to formula one (the original formula image is not reproduced here; the standard radial basis function mapping consistent with the surrounding description is):

φ(x) = [exp(−‖x − a1‖² / (2δ²)), exp(−‖x − a2‖² / (2δ²)), …, exp(−‖x − am‖² / (2δ²))]

where x is the feature vector, a1, a2, …, am are the m preset feature vectors, and δ is a first preset constant.

After the radial basis function mapping matrices corresponding to the classes of animal part images are obtained in step S720, the hash codes corresponding to the classes of animal part images can be calculated through formula two:

f(x) = sgn(P · φ(x))

where φ(x) is the radial basis function mapping matrix, P is a preset coefficient mapping matrix, sgn(·) binarizes each component (an assumption, since the original formula image is unavailable), and f(x) is the hash code corresponding to the picture.
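A direct transcription of these two formulas (under the same assumptions as the reconstruction above: a1…am are the preset feature vectors, δ the first preset constant, P the preset coefficient mapping matrix, and the sign binarization is assumed):

```python
import numpy as np

def rbf_mapping(x, anchors, delta):
    """Formula one: radial basis function mapping of feature vector x."""
    x = np.asarray(x, dtype=float)
    a = np.asarray(anchors, dtype=float)     # shape (m, feature_dim)
    d2 = np.sum((a - x) ** 2, axis=1)        # squared distance to each preset vector
    return np.exp(-d2 / (2.0 * delta ** 2))  # shape (m,)

def hash_code(x, anchors, delta, P):
    """Formula two: project with the preset coefficient matrix P and binarize."""
    phi = rbf_mapping(x, anchors, delta)
    return np.sign(P @ phi)                  # +/-1 code of length P.shape[0]
```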
Continuing to refer to fig. 6, in step S620, comparing the target hash code with hash codes of a plurality of animal images in an animal database, respectively, to obtain comparison results of the various types of animal part images.
Step S630, taking the animal type of the animal image whose hash code is identical to the target hash code as the type of the target animal.
It can be understood that the animal database contains a plurality of animal images on which feature extraction can be performed to generate a hash code for each animal image; therefore, after the target hash code is obtained, the animal database can be searched for the animal image whose hash code is identical to the target hash code, and the animal type of that image is taken as the type of the target animal.
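The search itself reduces to an exact match over the stored codes, for example (a linear scan over an in-memory table; a production system would index the codes):

```python
import numpy as np

def lookup_type(target_code, database):
    """Step S630: find the animal type whose stored hash code equals the target code.

    database: dict mapping an animal type to its precomputed hash code vector
    """
    for animal_type, code in database.items():
        if np.array_equal(code, target_code):
            return animal_type
    return None  # no animal image with an identical hash code was found
```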
Based on the technical scheme of the above embodiments, after a plurality of animal images of a target animal are obtained, the animal parts contained in them are detected to determine the animal part areas, animal part images corresponding to those areas are generated, the plurality of animal part images are classified, and the comparison results of the classes of animal part images are determined, so that the type of the target animal is determined. The technical scheme in the embodiments of the application needs no special recognition tool or model; a plurality of animal images suffice to complete the type recognition of the target animal, and comparing a plurality of animal part images guarantees the accuracy of the recognition result. Some implementations of the steps in fig. 2 are explained as follows:
In an embodiment of the present application, the plurality of animal images of the target animal may be acquired by collecting video streams of the target animal from different angles and then obtaining the animal images from the collected video streams; as shown in fig. 3, this specifically includes steps S310 to S330, described in detail as follows:
step S310, collecting video streams aiming at the target animal through different angles, and extracting a plurality of video frame images from the video streams.
Specifically, after detecting an animal image acquisition request, the terminal device acquires video streams for a target animal from different angles by using the camera device, and extracts a plurality of video frame images from the video streams of the target animal.
The plurality of video frame images may be extracted randomly from the video stream of the target animal, or according to a preset rule; the preset rule may be, for example, sampling frames at a preset fixed step, such as one frame every 5 frames, to obtain the plurality of video frame images.
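A minimal sketch of fixed-step frame extraction with OpenCV (the 5-frame interval follows the example above; the function name is illustrative):

```python
import cv2

def extract_frames(video_path, step=5):
    """Extract video frame images from a video stream at a fixed sampling step."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:             # end of stream
            break
        if index % step == 0:  # keep one frame every `step` frames
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```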
Step S320, detecting the target animal contained in each extracted video frame image to obtain the posture form of the target animal contained in each video frame image.
Since the video streams are captured from different angles, the target animal contained in the plurality of video frame images extracted from them may appear in different posture forms relative to the frontal direction; therefore, the target animal contained in each extracted video frame image can be detected to obtain its posture form.
Step S330 is to select the plurality of animal images from the plurality of video frame images according to the posture form of the target animal contained in each video frame image.
Specifically, the plurality of animal images are selected from the plurality of video frame images according to the posture form of the target animal contained in each video frame image. Since the selected animal images are subsequently used for detecting the animal parts they contain, the selection may follow the principle of choosing, based on the posture form of the target animal, those images that favor detection of the animal parts to be detected.
In an embodiment of the present application, as shown in fig. 4, step S320 may specifically include step S410 to step S420, which are described in detail as follows:
and S410, detecting the target animal in each video frame image so as to determine an animal detection frame containing the target animal in each video frame image.
In specific implementation, after a plurality of video frame images are extracted, target animal detection can be performed in each video frame image, and the target animal detection can be realized by determining an animal detection frame containing a target animal in each video frame image.
In an embodiment, the detection of the target animal may be performed by an image detection model. In this embodiment, step S410 specifically includes:
and detecting each video frame image by using an image detection model, wherein training samples of the image detection model comprise video frame image samples marked with animal detection frames and enhanced images obtained by performing image enhancement on the video frame image samples.
Specifically, after a plurality of video frame images are extracted and obtained, the plurality of video frame images are input into an image detection model, and each video frame image is detected by using the image detection model to obtain an animal detection frame containing the target animal in each video frame image.
The image detection model may be PVANet, which achieves faster detection while maintaining detection accuracy. In the training process of PVANet, the selected training samples include video frame image samples labeled with animal detection frames, and enhanced images obtained by performing image enhancement processing on those video frame image samples.
The image enhancement processing enhances the video frame image samples with methods such as rotation, brightness and contrast adjustment, and noise addition; the video frame image samples themselves are obtained by collecting video streams of sample animals from different angles and extracting frames from those streams.
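The enhancement operations named above can be sketched as follows (a minimal illustration with Pillow and NumPy; the parameter ranges are assumptions, and re-labelling of the detection frame after rotation is omitted):

```python
import random
import numpy as np
from PIL import Image, ImageEnhance

def enhance(sample: Image.Image) -> Image.Image:
    """Produce one enhanced image from a labeled video frame image sample."""
    img = sample.rotate(random.uniform(-15, 15))                  # rotation
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.7, 1.3))
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.7, 1.3))
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, 8.0, arr.shape)                  # additive noise
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```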
Continuing to refer to fig. 4, step S420 determines the posture form of the target animal contained in each video frame image according to the adjacent side length ratio of the animal detection frame in each video frame image.
Specifically, the posture form of the target animal contained in each video frame image may be determined from a pre-established correspondence between adjacent side length ratios and postures; for example, the posture may be a standing posture when the adjacent side length ratio is 1, or a lying posture when the adjacent side length ratio is 0.8.
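A sketch of this lookup (the ratio convention, tolerance and table entries are illustrative, following the two examples in the text):

```python
def posture_from_box(width, height, table=((1.0, "standing"), (0.8, "lying")), tol=0.05):
    """Map the adjacent side length ratio of an animal detection frame to a posture."""
    ratio = min(width, height) / max(width, height)  # adjacent-side ratio in (0, 1]
    for target_ratio, posture in table:
        if abs(ratio - target_ratio) <= tol:
            return posture
    return "unknown"                                 # no entry in the correspondence
```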
In one embodiment of the application, after the type of the target animal is obtained, manual intervention on the final result is possible: if the user, based on his or her own analysis, considers the final result clearly wrong, the user can intervene, the animal identification process is repeated, the analysis is performed again, and a proper result is given.
In one embodiment of the present application, after the type of the target animal is obtained, a picture of the target animal and a detailed description of it can be viewed from the animal database. The advantage of this embodiment is that it helps people quickly look up the living habits of an animal, conveniently and without constraints of time and place, which in turn facilitates artificial breeding and improves yield and income.
In an embodiment of the present application, after identifying the type of the target animal, the gender of the target animal can be determined according to the type of the target animal, and in this embodiment, as shown in fig. 8, the method specifically includes steps S810 to S830, which are described in detail as follows:
and step S810, judging whether the target animal belongs to an animal with the sex capable of being identified according to the external characteristics according to the type of the target animal.
It is understood that the sex of some animals can be discriminated directly from their external features; for example, a male lion and a female lion can be distinguished by the mane and body type, and a cock and a hen by the comb. Therefore, before the sex of the target animal is identified, it can first be judged, according to the kind of the target animal, whether the target animal belongs to animals whose sex can be discriminated from external features.
And S820, if yes, acquiring external features of the target animal, and determining the sex of the target animal according to the external features.
If the determination result in step S810 is yes, the external characteristics of the target animal can be directly obtained, and the sex of the target animal can be determined according to the external characteristics.
And step S830, if not, selecting a target animal part image from the plurality of animal part images, and determining the sex of the target animal according to the target animal part image.
Conversely, if the determination result in step S810 is negative, a target animal part image can be selected from the plurality of animal part images, and the sex of the target animal determined from that target animal part image.
In specific implementation, the target animal part image can be input into a recognition model to obtain the sex recognition result of the target animal output by the model. The recognition model is trained on animal sample images and their labels, where the label of an animal sample image is its sex, i.e. male or female, predetermined for each sample: the sex of each animal sample is distinguished manually, and the manual result is used as the label of the animal sample image.
The recognition model may be a model built on a Convolutional Neural Network (CNN) or an improved convolutional neural network; the improved convolutional neural network includes at least R-CNN (Regions with CNN features), Fast R-CNN or MobileNet, which is not limited in this embodiment.
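As one possible realization (a sketch only: the text names MobileNet but fixes no architecture, so the two-class head, input size and inference code here are assumptions):

```python
import torch
import torch.nn as nn
from torchvision import models

def build_sex_classifier():
    """MobileNetV2 backbone with a two-class (male/female) output head."""
    model = models.mobilenet_v2(weights=None)  # to be trained on animal sample images
    model.classifier[1] = nn.Linear(model.last_channel, 2)
    return model

# inference on one target animal part image, resized to a 3x224x224 tensor
model = build_sex_classifier()
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # placeholder input
    sex = ("male", "female")[int(logits.argmax(dim=1))]
```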
In one embodiment, different sex recognition parts can be adopted for different types of animals, so that when identifying the sex of an animal, the sex recognition part corresponding to its type can be determined according to the type of the animal; in this embodiment, as shown in fig. 9, the method specifically includes:
step S910, determining a sex recognition part corresponding to the type according to the type of the target animal;
step S920, selecting the target animal site image from the plurality of animal site images according to the sex identification site.
In this embodiment, a correspondence table between animal types and sex recognition parts may be established in advance; the table specifies that different sex recognition parts may be used for different types of animals. For example, some animals can be distinguished by body type, some by feathers, and some by reproductive organs.
Therefore, the sex recognition part corresponding to the type of the target animal can be determined according to that type, and the target animal part image then selected from the plurality of animal part images according to the sex recognition part.
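The correspondence table and the selection it drives can be sketched as follows (the species and parts in the table are illustrative examples only, not taken from the patent):

```python
# pre-established correspondence between animal types and sex recognition parts
SEX_RECOGNITION_PART = {
    "lion": "mane",        # distinguishable by body-type/external feature
    "peacock": "feathers",
    "duck": "reproductive organ",
}

def select_target_part_image(animal_type, part_images_by_type):
    """Steps S910-S920: pick the animal part image used for sex recognition."""
    part = SEX_RECOGNITION_PART.get(animal_type)
    return part_images_by_type.get(part) if part is not None else None
```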
Embodiments of the apparatus of the present application are described below, which may be used to perform the animal identification methods of the above-described embodiments of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the animal identification method described above in the present application.
Fig. 10 shows a block diagram of an animal recognition apparatus according to an embodiment of the present application, and referring to fig. 10, an animal recognition apparatus 1000 according to an embodiment of the present application includes: a first detection unit 1002, a generation unit 1004, a division unit 1006, and a determination unit 1008.
The first detecting unit 1002 is configured to detect animal parts contained in a plurality of animal images of a target animal to determine the animal part areas in the respective animal images, wherein the plurality of animal images are obtained by image-capturing the target animal from different angles; the generating unit 1004 is configured to generate animal part images corresponding to the animal part regions in each animal image, obtaining a plurality of animal part images; the dividing unit 1006 is configured to divide the plurality of animal part images into at least one class according to the types of animal parts they contain, and determine the comparison results of the classes of animal part images; and the determining unit 1008 is configured to determine the type of the target animal based on the comparison results of the classes of animal part images.
In some embodiments of the present application, the apparatus further includes: an extraction unit configured to acquire video streams of the target animal from different angles and extract a plurality of video frame images from the video streams; a second detection unit configured to detect the target animal contained in each extracted video frame image to obtain the posture form of the target animal contained in each extracted video frame image; and a first selection unit configured to select the plurality of animal images from the plurality of video frame images according to the posture form of the target animal contained in each video frame image.
In some embodiments of the present application, the second detection unit includes: a detection subunit configured to perform target animal detection in the respective video frame images to determine an animal detection frame containing the target animal in each video frame image; and a first determining subunit configured to determine the posture form of the target animal contained in each video frame image according to the adjacent side length ratio of the animal detection frame in each video frame image.
In some embodiments of the present application, the detection subunit is configured to: and detecting each video frame image by using an image detection model, wherein training samples of the image detection model comprise video frame image samples marked with animal detection frames and enhanced images obtained by performing image enhancement on the video frame image samples.
In some embodiments of the present application, the dividing unit 1006 is configured to: calculate the similarity between each class of animal part images and a plurality of animal images in an animal database, respectively, to obtain a plurality of similarity comparison results; and take the plurality of similarity comparison results as the comparison results of the classes of animal part images; the determining unit 1008 is configured to: determine, according to the plurality of similarity comparison results, the matching degree between the target animal and each of the plurality of animal images; and determine the type of the target animal according to the matching degree.
In some embodiments of the present application, the dividing unit 1006 includes: a generating subunit configured to generate hash codes of the classes of animal part images and determine a target hash code according to the hash codes of the classes of animal part images; and a comparison subunit configured to compare the target hash code with hash codes of a plurality of animal images in an animal database, respectively, to obtain the comparison results of the classes of animal part images; the determining unit 1008 is configured to: take the animal type of the animal image whose hash code is identical to the target hash code as the type of the target animal.
In some embodiments of the present application, the generating subunit is configured to: extracting the features of the various animal part images to obtain feature vectors corresponding to the various animal part images; calculating radial basis function mapping matrixes corresponding to the various animal part images based on the feature vectors; and generating hash codes corresponding to the various animal part images based on the radial basis function mapping matrix.
In some embodiments of the present application, the apparatus further includes: a judging unit configured to judge, according to the kind of the target animal, whether the target animal belongs to animals whose sex can be discriminated from external features; an obtaining unit configured to, if so, obtain the external features of the target animal and determine the sex of the target animal according to the external features; and a second selection unit configured to, if not, select a target animal part image from the plurality of animal part images and determine the sex of the target animal according to the target animal part image.
In some embodiments of the present application, the second selecting unit is configured to: determining a sex recognition part corresponding to the type according to the type of the target animal; and selecting the target animal part image from the plurality of animal part images according to the sex recognition part.
FIG. 11 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system 1100 of the electronic device shown in fig. 11 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 11, the computer system 1100 includes a Central Processing Unit (CPU) 1101, which can perform various appropriate actions and processes, such as the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1102 or a program loaded from a storage section 1108 into a Random Access Memory (RAM) 1103. The RAM 1103 also stores various programs and data necessary for system operation. The CPU 1101, the ROM 1102 and the RAM 1103 are connected to one another by a bus 1104. An Input/Output (I/O) interface 1105 is also connected to the bus 1104.
The following components are connected to the I/O interface 1105: an input section 1106 including a keyboard, a mouse, and the like; an output section 1107 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication section 1109 performs communication processing via a network such as the Internet. A drive 1110 is also connected to the I/O interface 1105 as necessary. A removable medium 1111, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1110 as necessary, so that a computer program read out therefrom is installed into the storage section 1108 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1109 and/or installed from the removable medium 1111. When the computer program is executed by the Central Processing Unit (CPU) 1101, the various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not, in some cases, constitute a limitation on the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with the necessary hardware. Therefore, the technical solution according to the embodiments of the present application may be embodied in the form of a software product: computer-readable instructions stored in a storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like, and may be volatile or non-volatile) or on a network, the instructions causing a computing device (which may be a personal computer, a server, a touch terminal, or a network device) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations that follow the general principles of the application, including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of identifying an animal, the method comprising:
detecting animal parts contained in a plurality of animal images of a target animal to determine animal part areas in the animal images, wherein the animal images are obtained by image acquisition of the target animal at different angles;
generating animal part images corresponding to the animal part areas in the animal images to obtain a plurality of animal part images;
dividing the plurality of animal part images into at least one class according to the types of animal parts contained in the plurality of animal part images, and determining a comparison result for each class of animal part images;
and determining the type of the target animal based on the comparison results of the respective classes of animal part images.
2. The method of claim 1, further comprising:
acquiring video streams of the target animal from different angles, and extracting a plurality of video frame images from the video streams;
detecting the target animal contained in each extracted video frame image to obtain the posture form of the target animal contained in each video frame image;
and selecting the plurality of animal images from the plurality of video frame images according to the posture form of the target animal contained in each video frame image.
3. The method according to claim 2, wherein detecting the target animal contained in each extracted video frame image to obtain the posture form of the target animal contained in each video frame image comprises:
carrying out target animal detection in each video frame image so as to determine an animal detection frame containing the target animal in each video frame image;
and determining the posture form of the target animal contained in each video frame image according to the ratio of adjacent side lengths of the animal detection frame in that video frame image.
4. The method of claim 3, wherein performing a target animal detection in each of the video frame images to determine an animal detection box containing the target animal in each of the video frame images comprises:
and detecting each video frame image by using an image detection model, wherein training samples of the image detection model comprise video frame image samples marked with animal detection frames and enhanced images obtained by performing image enhancement on the video frame image samples.
5. The method of claim 1, wherein determining the comparison result for each class of animal part images comprises: calculating the similarity between each class of animal part images and a plurality of animal images in an animal database to obtain a plurality of similarity comparison results;
and taking the plurality of similarity comparison results as the comparison results of the respective classes of animal part images;
wherein determining the type of the target animal based on the comparison results of the respective classes of animal part images comprises: determining the matching degree between the target animal and the plurality of animal images according to the plurality of similarity comparison results;
and determining the type of the target animal according to the matching degree.
6. The method of claim 1, wherein determining the comparison result for each class of animal part images comprises: generating hash codes of the respective classes of animal part images, and determining a target hash code from the hash codes of the respective classes of animal part images;
comparing the target hash code with the hash codes of a plurality of animal images in an animal database to obtain the comparison results of the respective classes of animal part images;
wherein determining the type of the target animal based on the comparison results of the respective classes of animal part images comprises: taking the type of the animal image whose hash code matches the target hash code as the type of the target animal.
7. The method of claim 6, wherein generating the hash codes of the respective classes of animal part images comprises:
extracting features from the respective classes of animal part images to obtain feature vectors corresponding to the respective classes of animal part images;
calculating radial basis function mapping matrices corresponding to the respective classes of animal part images based on the feature vectors;
and generating the hash codes corresponding to the respective classes of animal part images based on the radial basis function mapping matrices.
8. The method of claim 1, further comprising:
judging, according to the type of the target animal, whether the sex of the target animal can be identified from external features;
if so, acquiring external features of the target animal, and determining the sex of the target animal according to the external features;
and if not, selecting a target animal part image from the plurality of animal part images, and determining the sex of the target animal according to the target animal part image.
9. The method of claim 8, wherein selecting a target animal part image from the plurality of animal part images comprises:
determining, according to the type of the target animal, the sex recognition part corresponding to that type;
and selecting the target animal part image from the plurality of animal part images according to the sex recognition part.
10. An animal identification device, characterized in that the device comprises:
a first detection unit configured to detect an animal part included in a plurality of animal images of a target animal to determine an animal part region in each animal image, the plurality of animal images being obtained by image-capturing the target animal from different angles;
a generating unit configured to generate animal part images corresponding to the animal part areas in the respective animal images, to obtain a plurality of animal part images;
a dividing unit configured to divide the plurality of animal part images into at least one class according to the types of animal parts contained in the plurality of animal part images, and to determine a comparison result for each class of animal part images;
and a determining unit configured to determine the type of the target animal based on the comparison results of the respective classes of animal part images.
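Claims 2 to 4 above select animal images according to a posture form inferred from the ratio of adjacent side lengths of the animal detection frame. The sketch below gives a minimal reading of that idea; the thresholds and posture labels are illustrative assumptions rather than values disclosed in the claims.

def posture_from_box(x1: float, y1: float, x2: float, y2: float) -> str:
    # Infer a coarse posture form from the width-to-height ratio of the
    # animal detection frame; thresholds and labels are assumptions.
    width, height = abs(x2 - x1), abs(y2 - y1)
    ratio = width / max(height, 1e-6)
    if ratio > 1.3:
        return "standing"    # wide frame: body roughly horizontal
    if ratio < 0.8:
        return "upright"     # tall frame
    return "crouching"       # near-square frame

def select_animal_images(frames_with_boxes, wanted=("standing",)):
    # Keep video frame images whose detected posture form suits part detection.
    return [frame for frame, box in frames_with_boxes
            if posture_from_box(*box) in wanted]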
CN202011006350.0A 2020-09-22 2020-09-22 Animal identification method and device Pending CN112132026A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011006350.0A CN112132026A (en) 2020-09-22 2020-09-22 Animal identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011006350.0A CN112132026A (en) 2020-09-22 2020-09-22 Animal identification method and device

Publications (1)

Publication Number Publication Date
CN112132026A true CN112132026A (en) 2020-12-25

Family

ID=73842574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011006350.0A Pending CN112132026A (en) 2020-09-22 2020-09-22 Animal identification method and device

Country Status (1)

Country Link
CN (1) CN112132026A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150078626A1 (en) * 2013-09-17 2015-03-19 William Brian Kinard Animal / pet identification system and method based on biometrics
US20200143157A1 (en) * 2015-07-01 2020-05-07 Viking Genetics Fmba System and method for identification of individual animals based on images of the back
WO2018094892A1 (en) * 2016-11-22 2018-05-31 深圳市沃特沃德股份有限公司 Pet type recognition method and device, and terminal
CN107480591A (en) * 2017-07-10 2017-12-15 北京航空航天大学 Flying bird detection method and device
CN108416269A (en) * 2017-09-14 2018-08-17 翔创科技(北京)有限公司 Livestock information acquisition system, Database and recognition methods, program, medium, equipment
JP2019159588A (en) * 2018-03-09 2019-09-19 東芝ライテック株式会社 Determination device, determination method and determination system
CN108921026A (en) * 2018-06-01 2018-11-30 平安科技(深圳)有限公司 Recognition methods, device, computer equipment and the storage medium of animal identification
KR20200042379A (en) * 2018-10-15 2020-04-23 심준원 Animal Identification Method Combining Multiple Object Identification Techniques, Method and Apparatus for Providing Animal Insurance Services Using the Same
CN111523479A (en) * 2020-04-24 2020-08-11 中国农业科学院农业信息研究所 Biological feature recognition method and device for animal, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUNG NGUYEN et al.: "Animal Recognition and Identification with Deep Convolutional Neural Networks for Automated Wildlife Monitoring", IEEE International Conference on Data Science and Advanced Analytics (DSAA 2017), 18 January 2018, pages 40-49 *
拉毛杰 et al.: "Research on Image Recognition of Livestock Animals Based on Convolutional Neural Network" (基于卷积神经网络的畜牧业动物图像识别研究), Software (软件), vol. 41, no. 8, 31 August 2020, pages 43-45 *

Similar Documents

Publication Publication Date Title
CN109117808B (en) Face recognition method and device, electronic equipment and computer readable medium
CN110532996B (en) Video classification method, information processing method and server
CN111787356B (en) Target video clip extraction method and device
CN108288051B (en) Pedestrian re-recognition model training method and device, electronic equipment and storage medium
JP6994588B2 (en) Face feature extraction model training method, face feature extraction method, equipment, equipment and storage medium
CN108230291B (en) Object recognition system training method, object recognition method, device and electronic equipment
CN108229375B (en) Method and device for detecting face image
CN109426831B (en) Image similarity matching and model training method and device and computer equipment
CN107679070B (en) Intelligent reading recommendation method and device and electronic equipment
CN111400548B (en) Recommendation method and device based on deep learning and Markov chain
CN109241890B (en) Face image correction method, apparatus and storage medium
CN110851641A (en) Cross-modal retrieval method and device and readable storage medium
CN108388889B (en) Method and device for analyzing face image
WO2022103684A1 (en) Face-aware person re-identification system
CN111401343B (en) Method for identifying attributes of people in image and training method and device for identification model
CN112188306A (en) Label generation method, device, equipment and storage medium
CN110728188A (en) Image processing method, device, system and storage medium
CN111353429A (en) Interest degree method and system based on eyeball turning
CN115600013B (en) Data processing method and device for matching recommendation among multiple subjects
CN111967383A (en) Age estimation method, and training method and device of age estimation model
CN112132026A (en) Animal identification method and device
CN115546845A (en) Multi-view cow face identification method and device, computer equipment and storage medium
CN114821424A (en) Video analysis method, video analysis device, computer device, and storage medium
CN110674342B (en) Method and device for inquiring target image
CN114092746A (en) Multi-attribute identification method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20210126

Address after: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant after: Shenzhen saiante Technology Service Co.,Ltd.

Address before: 1-34 / F, Qianhai free trade building, 3048 Xinghai Avenue, Mawan, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong 518000

Applicant before: Ping An International Smart City Technology Co.,Ltd.

SE01 Entry into force of request for substantive examination