CN112819885A - Animal identification method, device and equipment based on deep learning and storage medium

Animal identification method, device and equipment based on deep learning and storage medium

Info

Publication number
CN112819885A
Authority
CN
China
Prior art keywords
target
animal
image
detected
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110194977.1A
Other languages
Chinese (zh)
Inventor
刘露
蔺昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Inveno Technology Co ltd
Original Assignee
Shenzhen Inveno Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Inveno Technology Co ltd
Priority to CN202110194977.1A
Publication of CN112819885A
Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Abstract

The invention belongs to the technical field of deep learning and discloses an animal identification method, device, equipment and storage medium based on deep learning. The method comprises the following steps: acquiring an image to be detected, and detecting, through a target detection model, the type and corresponding position area of each animal in the image to be detected; screening out a target animal according to the animal types, and determining the target detection position area to which the target animal belongs; adjusting the image to be detected according to the target detection position area to obtain a target subgraph; performing key point identification on the target subgraph through a key point detection model to obtain target key points and their reference position coordinates within the target subgraph; and determining, from the reference position coordinates and the target subgraph, the target position coordinates of the target key points in the image to be detected, so that the animal can be identified through the target position coordinates. In this way, detection and identification of animal key points can be achieved in a general environment.

Description

Animal identification method, device and equipment based on deep learning and storage medium
Technical Field
The invention relates to the technical field of deep learning, in particular to an animal identification method, device, equipment and storage medium based on deep learning.
Background
At present, the key points of an object can serve many purposes, such as estimating the behavior of a subject, generating virtual animation, and judging how accurately a motion is performed, so detecting the key points of an object is of considerable importance. However, current industry work only covers animal key point detection in laboratory scenes: the algorithms are designed for a single, low-complexity laboratory setting, cannot be applied to real-life scenes, and no scheme for animal identification and key point detection in general scenes has been provided.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide an animal identification method, an animal identification device, animal identification equipment and a storage medium based on deep learning, and aims to solve the technical problem that the prior art cannot identify animal key points in a general scene.
In order to achieve the above object, the present invention provides an animal identification method based on deep learning, the method comprising the steps of:
acquiring an image to be detected, and detecting through a target detection model according to the image to be detected to obtain the type and the corresponding position area of each animal in the image to be detected;
screening out a target animal according to the type of the animal, and determining a target detection position area to which the target animal belongs according to the target animal;
adjusting the image to be detected according to the target detection position area to obtain a target subgraph;
performing key point identification on the target subgraph through a key point detection model to obtain a target key point and a reference position coordinate of the target key point in the target subgraph;
and determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the target subgraph so as to identify the animal through the target position coordinates.
Optionally, before the obtaining of the image to be detected and the obtaining of the type and the corresponding position area of each animal in the image to be detected through the target detection model according to the image to be detected, the method further includes:
acquiring a sampled animal image of a marked position and a type in a preset database;
obtaining preset target detection training data according to the marked positions and the sampled animal images of the types;
constructing an initial target detection model;
and training the initial target detection model according to the preset target detection training data to obtain a target detection model.
Optionally, the screening out a target animal according to the type of the animal, and determining a target detection position region to which the target animal belongs according to the target animal includes:
screening out a target animal according to the type of the animal, and determining the proportion of the target animal in the image to be detected;
determining the position coordinate of the upper left corner of the target animal in the image to be detected according to the target animal and the image to be detected;
and determining a target detection position area to which the target animal belongs according to the ratio of the target animal in the image to be detected and the position coordinate of the upper left corner.
Optionally, the adjusting the image to be detected according to the target detection position region to obtain a target subgraph includes:
cutting the image to be detected according to the target detection position area and the target animal;
and obtaining a target subgraph only containing the target animal according to the cut image.
Optionally, the performing, by using a keypoint detection model, keypoint identification on the target sub-graph to obtain a target keypoint and a reference position coordinate of the target keypoint in the target sub-graph further includes:
acquiring a sampled animal image marked with key point coordinates in a preset database;
obtaining preset key point detection training data according to the sampled animal image marked with the key point coordinates;
constructing a key point detection base model;
and training the key point detection base model according to the preset key point detection training data to obtain a key point detection model.
Optionally, the determining, according to the reference position coordinates and the target sub-image, target position coordinates of the target key points in the image to be detected includes:
obtaining the offset position of the target subgraph relative to the image to be detected according to the position coordinate of the upper left corner;
and determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the offset position.
Optionally, before the obtaining of the image to be detected and the obtaining of the type and the corresponding position area of each animal in the image to be detected through the target detection model according to the image to be detected, the method further includes:
acquiring an original image, and judging whether the original image contains an animal image;
and when the original image contains an animal image, taking the original image as an image to be detected.
In addition, in order to achieve the above object, the present invention also provides an animal recognition device based on deep learning, including:
the acquisition module is used for acquiring an image to be detected and detecting the image to be detected through a target detection model according to the image to be detected to obtain the types and the corresponding position areas of all animals in the image to be detected;
the screening module is used for screening a target animal according to the type of the animal and determining a target detection position area to which the target animal belongs according to the target animal;
the adjusting module is used for adjusting the image to be detected according to the target detection position area to obtain a target subgraph;
the identification module is used for carrying out key point identification on the target subgraph through a key point detection model to obtain a target key point and a reference position coordinate of the target key point in the target subgraph;
and the determining module is used for determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the target subgraph so as to identify the animal through the target position coordinates.
Further, to achieve the above object, the present invention also proposes a deep learning based animal recognition apparatus including: a memory, a processor and a deep learning based animal identification program stored on the memory and executable on the processor, the deep learning based animal identification program being configured to implement the steps of the deep learning based animal identification method as described above.
Furthermore, in order to achieve the above object, the present invention also provides a storage medium having stored thereon a deep learning based animal identification program, which when executed by a processor implements the steps of the deep learning based animal identification method as described above.
The method comprises the steps of obtaining an image to be detected, and detecting through a target detection model according to the image to be detected to obtain the types and corresponding position areas of all animals in the image to be detected; screening out a target animal according to the type of the animal, and determining a target detection position area to which the target animal belongs according to the target animal; adjusting the image to be detected according to the target detection position area to obtain a target subgraph; performing key point identification on the target subgraph through a key point detection model to obtain a target key point and a reference position coordinate of the target key point in the target subgraph; and determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the target subgraph so as to identify the animal through the target position coordinates. According to the method, the type and the position of the animal are detected by the target detection model, the key point and the key point coordinate of the target animal are detected by the key point detection model, and the key point coordinate of the target animal in the scene image is obtained finally, so that the detection and the identification of the animal and the key point thereof can be realized in a general environment, and the key point detection complexity in the general environment is reduced.
Drawings
Fig. 1 is a schematic structural diagram of a deep learning-based animal recognition device of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of the deep learning-based animal identification method according to the present invention;
FIG. 3 is a schematic overall flowchart of an embodiment of the deep learning-based animal identification method according to the present invention;
FIG. 4 is a schematic diagram of the deep learning-based animal identification method according to an embodiment of the present invention;
FIG. 5 is a flowchart of a key point detection algorithm model training process according to an embodiment of the deep learning-based animal identification method of the present invention;
FIG. 6 is a diagram of a detection result of an embodiment of the deep learning-based animal identification method of the present invention;
FIG. 7 is a flowchart illustrating a second embodiment of the deep learning-based animal identification method according to the present invention;
FIG. 8 is a flowchart of a target detection algorithm model training process according to an embodiment of the deep learning-based animal identification method of the present invention;
fig. 9 is a block diagram showing the structure of a first embodiment of the deep learning-based animal recognition apparatus according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an animal recognition device based on deep learning in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the deep learning based animal recognition apparatus may include: a processor 1001 such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to implement connection and communication among these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of deep learning based animal identification devices and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and an animal recognition program based on deep learning.
In the deep learning based animal recognition apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The apparatus calls the deep learning based animal identification program stored in the memory 1005 through the processor 1001 and executes the deep learning based animal identification method provided by the embodiments of the present invention.
The embodiment of the invention provides an animal identification method based on deep learning, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of the animal identification method based on deep learning.
In this embodiment, the animal identification method based on deep learning includes the following steps:
step S10: and acquiring an image to be detected, and detecting the type and the corresponding position area of each animal in the image to be detected through a target detection model according to the image to be detected.
It should be noted that the execution subject of this embodiment may be a terminal device, such as a desktop computer or a notebook computer, and the method may also be applied to a web application; this embodiment is not limited in this respect.
It is understood that the image to be detected refers to an image which needs to be identified by an animal, and the image to be detected includes an animal image.
It can be understood that the target detection model is a Faster RCNN (target detection algorithm) model based on deep learning and comprises a feature extraction network and a target detection network. The feature extraction network is a deep neural network built from convolutional layers and is trained with the ImageNet classification data set; the features it extracts are input into the target detection network, which can then identify the objects in the picture based on these features.
In a specific implementation, as shown in fig. 3 and 4, fig. 3 is a schematic overall flow diagram of this embodiment and fig. 4 is a schematic working diagram of this embodiment. An image to be detected is input into the terminal device, the type and corresponding position area of each animal in the image are obtained through the target detection model in the animal recognition software installed on the terminal device, and the key point detection model in the same software is then used to determine the key point coordinates of the target animal. For example, the image to be detected is passed through the Faster RCNN target detection algorithm model, which reports that the image contains a dog and a cat; if the target animal is the dog, the position area where the dog is located is determined, and the key point detection model is then used to obtain the key point coordinates of the dog. As shown in FIG. 6, the boxes represent the position areas of the dogs output by the Faster RCNN target detection algorithm, and the small circles represent the key point positions of each dog detected by the key point detection algorithm.
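To make the flow above concrete, the following is a minimal Python sketch of the detection step; the trained Faster RCNN detector is replaced by a stand-in function, and the function name, box format and example values are illustrative assumptions rather than details given by the patent.

    import numpy as np

    def detect_animals(image):
        # Stand-in for the trained Faster RCNN target detection model.
        # For each animal found it returns the predicted species and the
        # position area as (x_top_left, y_top_left, width, height).
        return [
            {"species": "dog", "box": (40, 60, 200, 180)},   # hypothetical detection
            {"species": "cat", "box": (300, 80, 150, 140)},  # hypothetical detection
        ]

    # Stand-in for the image to be detected (a real picture would be loaded here).
    image_to_detect = np.zeros((480, 640, 3), dtype=np.uint8)
    detections = detect_animals(image_to_detect)
    for detection in detections:
        print(detection["species"], detection["box"])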
Step S20: and screening out a target animal according to the type of the animal, and determining a target detection position area to which the target animal belongs according to the target animal.
It should be noted that the target animal refers to an animal whose key points need to be detected and identified, for example, a puppy whose key points need to be detected is screened out, other types of objects are removed, a position area where the puppy is located is determined, and the position area where the puppy is located is used as a target detection position area.
Further, the screening out a target animal according to the type of the animal and determining a target detection position region to which the target animal belongs according to the target animal includes: screening out a target animal according to the type of the animal, and determining the proportion of the target animal in the image to be detected; determining the position coordinate of the upper left corner of the target animal in the image to be detected according to the target animal and the image to be detected; and determining a target detection position area to which the target animal belongs according to the ratio of the target animal in the image to be detected and the position coordinate of the upper left corner.
It should be noted that the proportion of the target animal in the image to be detected refers to the width and height occupied by the target animal in the image to be detected.
In a specific implementation, based on the features obtained by the feature extraction network, the target detection network locates the upper left corner coordinates and the width and height of each object contained in the picture to be detected, and thereby determines the position area where the target animal is located.
Step S30: and adjusting the image to be detected according to the target detection position area to obtain a target subgraph.
It should be noted that after the target animal and the target detection position where the target animal is located are obtained, the image to be detected is adjusted to obtain a target subgraph.
Further, the adjusting the image to be detected according to the target detection position area to obtain a target subgraph includes: cutting the image to be detected according to the target detection position area and the target animal; and obtaining a target subgraph only containing the target animal according to the cut image.
In a specific implementation, after the target detection position area and the target animal are detected by the target detection model, the image to be detected is cropped to obtain a target subgraph containing only the target animal. For example, if the key points of a puppy are to be detected and identified, only the puppy is screened out from the image to be detected and other types of objects are removed; the region where the selected puppy is located is then cut out to obtain a target sub-picture containing only that region. One or more target subgraphs may be obtained by cropping the image to be detected.
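Continuing the sketch above, and under the same illustrative assumptions about the detection output format, the screening and cropping of steps S20 and S30 might look as follows; the offset of each subgraph is kept because it is needed later to map the key points back to the original image.

    def crop_target_subgraphs(image, detections, target_species="dog"):
        # Keep only detections of the target species and cut out one target
        # subgraph per detection from the image to be detected.
        subgraphs = []
        for detection in detections:
            if detection["species"] != target_species:    # screen out other objects
                continue
            x, y, w, h = detection["box"]                  # upper-left corner plus width and height
            subgraphs.append({
                "subgraph": image[y:y + h, x:x + w],       # region containing only the target animal
                "offset": (x, y),                          # offset of the subgraph in the original image
            })
        return subgraphs

    targets = crop_target_subgraphs(image_to_detect, detections)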
Step S40: and carrying out key point identification on the target subgraph through a key point detection model to obtain a target key point and a reference position coordinate of the target key point in the target subgraph.
It should be noted that the key point detection model is also a deep learning based algorithm model, which can detect the key point coordinates of a specific class of animal in a picture; which key points need to be detected is determined by the application. For example, if the positions of the eyes, ears, nose, paws and so on of a puppy are to be detected, these positions can be defined as the key points the algorithm must detect. The key point detection model comprises a feature extraction network and a key point positioning network: the feature extraction network is likewise trained on ImageNet classification data and is used to extract picture features, while the key point positioning network is used to locate the position coordinates of the key points to be detected.
It can be understood that the reference position coordinates, i.e. the coordinates of the animal key points within each target subgraph, can be obtained by inputting the obtained target subgraphs into the key point detection model for key point identification.
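For illustration only, assuming a trained keypoint_model that takes a fixed-size crop and returns a flat list of (x, y) coordinates in the resized crop's pixel space (an output convention assumed for this sketch, not specified by the patent), step S40 could be reduced to:

    import numpy as np
    import tensorflow as tf

    def predict_reference_points(keypoint_model, subgraph, input_size=(128, 128)):
        # Resize the target subgraph to the model input size, run the key point
        # detection model, and scale the predictions back to subgraph pixels.
        h, w = subgraph.shape[:2]
        resized = tf.image.resize(subgraph, input_size) / 255.0
        flat = keypoint_model(resized[None, ...])[0]          # shape: (num_keypoints * 2,)
        points = np.array(flat).reshape(-1, 2)                # (x, y) pairs in resized-crop pixels
        scale = np.array([w / input_size[1], h / input_size[0]])
        return [tuple(p) for p in points * scale]             # reference position coordinates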
Further, the performing, by the keypoint detection model, keypoint identification on the target sub-graph to obtain a target keypoint and a reference position coordinate of the target keypoint in the target sub-graph further includes: acquiring a sampled animal image marked with key point coordinates in a preset database; obtaining preset key point detection training data according to the sampled animal image marked with the key point coordinates; constructing a key point detection base model; and training the key point detection base model according to the preset key point detection training data to obtain a key point detection model.
It should be noted that the preset database refers to a database built in the terminal device.
In a specific implementation, as shown in fig. 5, before performing key point identification on the target subgraph through the key point detection model, an appropriate number of sampled animal images labelled with key point coordinates are acquired from the preset database as the preset key point detection training data. The key point detection base model is constructed by designing the structure of the key point positioning network and then writing training code with the TensorFlow framework. The preset key point detection training data are fed into the key point detection base model for training until the detection precision of the key point detection network reaches the required standard; training is then stopped, and the model file of the trained base model is used as the final key point detection algorithm model in actual detection scenes.
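As an illustration of this training procedure, the following is a much-simplified TensorFlow sketch of a key point detection base model; the network structure, the 128x128 input size and the choice of five key points are assumptions made for the example, not details specified by the patent.

    import tensorflow as tf

    NUM_KEYPOINTS = 5   # e.g. eyes, ears, nose, paws; which key points are needed depends on the application

    def build_keypoint_base_model():
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=(128, 128, 3)),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),    # simplified feature extraction network
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(256, activation="relu"),       # simplified key point positioning network
            tf.keras.layers.Dense(NUM_KEYPOINTS * 2),            # (x, y) coordinates for every key point
        ])

    model = build_keypoint_base_model()
    model.compile(optimizer="adam", loss="mse")   # regress coordinates against the labelled training data
    # model.fit(train_subgraphs, train_keypoint_coords, epochs=..., validation_data=...)
    # Training stops once detection precision reaches the required standard, and the saved
    # model file is then used as the key point detection algorithm model.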
In the embodiment, the identification process of the animal key points in the animal identification process can be more accurate by training the key point detection model in advance.
Step S50: and determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the target subgraph so as to identify the animal through the target position coordinates.
It should be noted that the target position coordinates refer to the coordinates, in the image to be detected, of the key points of the target animal contained in the target subgraph.
Further, the determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the target subgraph includes: obtaining the offset position of the target subgraph relative to the image to be detected according to the position coordinate of the upper left corner; and determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the offset position.
It should be noted that the offset position of each target sub-image with respect to the image to be detected is the position output by the target detection model, and this position is the position coordinate of the upper left corner of the target animal on the image to be detected.
In a specific implementation, the offset position of each target subgraph is added to the reference position coordinates of the key points within that subgraph, giving the key point coordinates of each target animal that needs to be detected and identified in the image to be detected, that is, the target position coordinates of the target animal in the image to be detected.
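A short sketch of this coordinate conversion (step S50), using illustrative offsets and reference coordinates rather than real model output:

    def to_image_coordinates(reference_points, offset):
        # Add the subgraph offset (its upper-left corner in the original image)
        # to every reference coordinate returned by the key point detection model.
        x_off, y_off = offset
        return [(x + x_off, y + y_off) for (x, y) in reference_points]

    offset = (40, 60)                                   # upper-left corner of the dog subgraph (illustrative)
    reference_points = [(35, 20), (70, 22), (52, 48)]   # reference coordinates inside the subgraph (illustrative)
    target_points = to_image_coordinates(reference_points, offset)
    print(target_points)   # key point coordinates in the image to be detected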
In the embodiment, the type and the corresponding position area of each animal in the image to be detected are obtained by obtaining the image to be detected and detecting the image to be detected through a target detection model; screening out a target animal according to the type of the animal, and determining a target detection position area to which the target animal belongs according to the target animal; adjusting the image to be detected according to the target detection position area to obtain a target subgraph; performing key point identification on the target subgraph through a key point detection model to obtain a target key point and a reference position coordinate of the target key point in the target subgraph; and determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the target subgraph, and identifying the animal through the target position coordinates.
Referring to fig. 7, fig. 7 is a flowchart illustrating a second embodiment of the deep learning-based animal identification method according to the present invention.
Based on the first embodiment, the animal identification method based on deep learning of this embodiment further includes, before step S10:
step S101: and acquiring a sampling animal image of the marked position and the type in a preset database.
It should be noted that the preset database refers to a database built in the terminal device.
It can be understood that, before step S10, images containing animals are obtained from the preset database built into the terminal device, an appropriate number of them are screened out, the positions and categories of the animals in these images are marked, and the marked images are used as the sampled animal images.
Step S102: and obtaining preset target detection training data according to the marked positions and the sampled animal images of the types.
It should be noted that the sampled animal image with the animal position and the animal type marked is used as the preset target detection training data.
Step S103: and constructing an initial target detection model.
It should be noted that the initial target detection model is constructed by writing algorithm training code based on the TensorFlow framework according to the algorithm principle of Faster RCNN.
Step S104: and training the initial target detection model according to the preset target detection training data to obtain a target detection model.
In a specific implementation, as shown in fig. 8, an initial target detection model is trained by preset target detection training data, and when the algorithm accuracy reaches the standard, the training is terminated, and the trained initial target detection model is stored to obtain a target detection model.
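As a rough illustration only, a TensorFlow training loop for this step could be skeletonised as below; detection_loss and evaluate_accuracy are hypothetical helpers standing in for the Faster RCNN loss computation and accuracy evaluation, which the patent does not spell out.

    import tensorflow as tf

    def train_detector(model, dataset, detection_loss, evaluate_accuracy,
                       target_accuracy=0.9, max_epochs=100):
        # model is assumed to be a tf.keras.Model; dataset yields (images, labels)
        # pairs built from the preset target detection training data.
        optimizer = tf.keras.optimizers.Adam(1e-4)
        for _ in range(max_epochs):
            for images, labels in dataset:
                with tf.GradientTape() as tape:
                    loss = detection_loss(model, images, labels)
                grads = tape.gradient(loss, model.trainable_variables)
                optimizer.apply_gradients(zip(grads, model.trainable_variables))
            if evaluate_accuracy(model, dataset) >= target_accuracy:
                break                                       # accuracy reaches the standard, stop training
        model.save_weights("target_detection_model.ckpt")    # store the trained model
        return model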
Further, before the obtaining of the image to be detected and the obtaining of the species of each animal and the corresponding position area in the image to be detected through the target detection model according to the image to be detected, the method further comprises: acquiring an original image, and judging whether the original image contains an animal image; and when the original image contains an animal image, taking the original image as an image to be detected.
It should be noted that, before the animal identification process, the collected original image is screened, whether the collected original image contains an animal image is judged, and if the collected original image contains an animal image, the original image is input into the identification software as an image to be detected to perform the animal identification operation.
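A minimal sketch of this pre-screening step, where contains_animal is a hypothetical lightweight classifier (for example a small ImageNet-based binary classifier) that the patent does not further specify:

    def screen_original_images(original_images, contains_animal):
        # Only pictures that actually contain an animal are passed on as images to be detected.
        return [image for image in original_images if contains_animal(image)]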
In this embodiment, original images that do not contain an animal image are excluded, and only original images containing an animal image are detected, which improves the efficiency of animal identification.
In the embodiment, a sampled animal image of a marked position and a type in a preset database is obtained; obtaining preset target detection training data according to the marked positions and the sampled animal images of the types; constructing an initial target detection model; training the initial target detection model according to the preset target detection training data to obtain a target detection model, and training to obtain the target detection model before animal identification operation, so that the type identification and positioning of the target animal in the animal identification process are more accurate.
In addition, referring to fig. 9, an embodiment of the present invention further provides an animal recognition device based on deep learning, including:
the acquisition module 10 is configured to acquire an image to be detected, and obtain the type and the corresponding position area of each animal in the image to be detected through detection of a target detection model according to the image to be detected;
the screening module 20 is configured to screen out a target animal according to the type of the animal, and determine a target detection position region to which the target animal belongs according to the target animal;
the adjusting module 30 is configured to adjust the image to be detected according to the target detection position region to obtain a target sub-image;
the identification module 40 is used for carrying out key point identification on the target subgraph through a key point detection model to obtain a target key point and a reference position coordinate of the target key point in the target subgraph;
and the determining module 50 is configured to determine a target position coordinate of the target key point in the image to be detected according to the reference position coordinate and the target subgraph, so as to identify the animal through the target position coordinate.
In the embodiment, the type and the corresponding position area of each animal in the image to be detected are obtained by obtaining the image to be detected and detecting the image to be detected through a target detection model; screening out a target animal according to the type of the animal, and determining a target detection position area to which the target animal belongs according to the target animal; adjusting the image to be detected according to the target detection position area to obtain a target subgraph; performing key point identification on the target subgraph through a key point detection model to obtain a target key point and a reference position coordinate of the target key point in the target subgraph; and determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the target subgraph, and identifying the animal through the target position coordinates.
In an embodiment, the obtaining module 10 is further configured to obtain a sampled animal image of a labeled position and a type in a preset database;
obtaining preset target detection training data according to the marked positions and the sampled animal images of the types;
constructing an initial target detection model;
and training the initial target detection model according to the preset target detection training data to obtain a target detection model.
In an embodiment, the screening module 20 is further configured to screen out a target animal according to the type of the animal, and determine a ratio of the target animal in the image to be detected;
determining the position coordinate of the upper left corner of the target animal in the image to be detected according to the target animal and the image to be detected;
and determining a target detection position area to which the target animal belongs according to the ratio of the target animal in the image to be detected and the position coordinate of the upper left corner.
In an embodiment, the adjusting module 30 is further configured to crop the image to be detected according to the target detection position area and the target animal;
and obtaining a target subgraph only containing the target animal according to the cut image.
In an embodiment, the obtaining module 10 is further configured to obtain a sampled animal image labeled with the coordinates of the key points in a preset database;
obtaining preset key point detection training data according to the sampled animal image marked with the key point coordinates;
constructing a key point detection base model;
and training the key point detection base model according to the preset key point detection training data to obtain a key point detection model.
In an embodiment, the determining module 50 is further configured to obtain an offset position of the target sub-image relative to the image to be detected according to the upper left corner position coordinate;
and determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the offset position.
In an embodiment, the obtaining module 10 is further configured to obtain an original image, and determine whether the original image contains an animal image;
and when the original image contains an animal image, taking the original image as an image to be detected.
Furthermore, an embodiment of the present invention further provides a storage medium, on which a deep learning based animal identification program is stored, and when being executed by a processor, the deep learning based animal identification program implements the steps of the deep learning based animal identification method as described above.
Since the storage medium adopts all technical solutions of all the embodiments, at least all the beneficial effects brought by the technical solutions of the embodiments are achieved, and no further description is given here.
It should be noted that the above-described work flows are only exemplary, and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of them to achieve the purpose of the solution of the embodiment according to actual needs, and the present invention is not limited herein.
In addition, the technical details that are not described in detail in this embodiment can be referred to the animal identification method based on deep learning provided in any embodiment of the present invention, and are not described herein again.
Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A deep learning-based animal identification method is characterized by comprising the following steps:
acquiring an image to be detected, and detecting through a target detection model according to the image to be detected to obtain the type and the corresponding position area of each animal in the image to be detected;
screening out a target animal according to the type of the animal, and determining a target detection position area to which the target animal belongs according to the target animal;
adjusting the image to be detected according to the target detection position area to obtain a target subgraph;
performing key point identification on the target subgraph through a key point detection model to obtain a target key point and a reference position coordinate of the target key point in the target subgraph;
and determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the target subgraph so as to identify the animal through the target position coordinates.
2. The animal recognition method based on deep learning of claim 1, wherein before the obtaining of the image to be detected and the detection of the target detection model according to the image to be detected to obtain the type and the corresponding position region of each animal in the image to be detected, the method further comprises:
acquiring a sampled animal image of a marked position and a type in a preset database;
obtaining preset target detection training data according to the marked positions and the sampled animal images of the types;
constructing an initial target detection model;
and training the initial target detection model according to the preset target detection training data to obtain a target detection model.
3. The deep learning-based animal recognition method according to claim 1, wherein the screening out a target animal according to the type of the animal and determining a target detection position region to which the target animal belongs according to the target animal comprises:
screening out a target animal according to the type of the animal, and determining the proportion of the target animal in the image to be detected;
determining the position coordinate of the upper left corner of the target animal in the image to be detected according to the target animal and the image to be detected;
and determining a target detection position area to which the target animal belongs according to the ratio of the target animal in the image to be detected and the position coordinate of the upper left corner.
4. The animal recognition method based on deep learning of claim 1, wherein the adjusting the image to be detected according to the target detection position region to obtain a target subgraph comprises:
cutting the image to be detected according to the target detection position area and the target animal;
and obtaining a target subgraph only containing the target animal according to the cut image.
5. The deep learning-based animal recognition method of claim 1, wherein the performing of the keypoint recognition on the target sub-graph by the keypoint detection model to obtain target keypoints and reference position coordinates of the target keypoints in the target sub-graph further comprises:
acquiring a sampled animal image marked with key point coordinates in a preset database;
obtaining preset key point detection training data according to the sampled animal image marked with the key point coordinates;
constructing a key point detection base model;
and training the key point detection base model according to the preset key point detection training data to obtain a key point detection model.
6. The animal recognition method based on deep learning of claim 3, wherein the determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the target subgraph comprises:
obtaining the offset position of the target subgraph relative to the image to be detected according to the position coordinate of the upper left corner;
and determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the offset position.
7. The animal recognition method based on deep learning of any one of claims 1 to 6, wherein before the obtaining of the image to be detected and the detection of the object detection model according to the image to be detected to obtain the type and the corresponding position region of each animal in the image to be detected, the method further comprises:
acquiring an original image, and judging whether the original image contains an animal image;
and when the original image contains an animal image, taking the original image as an image to be detected.
8. A deep learning based animal recognition apparatus, characterized in that the deep learning based animal recognition apparatus comprises:
the acquisition module is used for acquiring an image to be detected and detecting the image to be detected through a target detection model according to the image to be detected to obtain the types and the corresponding position areas of all animals in the image to be detected;
the screening module is used for screening a target animal according to the type of the animal and determining a target detection position area to which the target animal belongs according to the target animal;
the adjusting module is used for adjusting the image to be detected according to the target detection position area to obtain a target subgraph;
the identification module is used for carrying out key point identification on the target subgraph through a key point detection model to obtain a target key point and a reference position coordinate of the target key point in the target subgraph;
and the determining module is used for determining the target position coordinates of the target key points in the image to be detected according to the reference position coordinates and the target subgraph so as to identify the animal through the target position coordinates.
9. An animal recognition device based on deep learning, the device comprising: a memory, a processor and a deep learning based animal identification program stored on the memory and executable on the processor, the deep learning based animal identification program being configured to implement the steps of the deep learning based animal identification method as claimed in any one of claims 1 to 7.
10. A storage medium having stored thereon a deep learning based animal identification program which, when executed by a processor, implements the steps of the deep learning based animal identification method according to any one of claims 1 to 7.
CN202110194977.1A 2021-02-20 2021-02-20 Animal identification method, device and equipment based on deep learning and storage medium Pending CN112819885A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110194977.1A CN112819885A (en) 2021-02-20 2021-02-20 Animal identification method, device and equipment based on deep learning and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110194977.1A CN112819885A (en) 2021-02-20 2021-02-20 Animal identification method, device and equipment based on deep learning and storage medium

Publications (1)

Publication Number Publication Date
CN112819885A true CN112819885A (en) 2021-05-18

Family

ID=75864475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110194977.1A Pending CN112819885A (en) 2021-02-20 2021-02-20 Animal identification method, device and equipment based on deep learning and storage medium

Country Status (1)

Country Link
CN (1) CN112819885A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549876A (en) * 2018-04-20 2018-09-18 重庆邮电大学 The sitting posture detecting method estimated based on target detection and human body attitude
CN111507134A (en) * 2019-01-31 2020-08-07 北京奇虎科技有限公司 Human-shaped posture detection method and device, computer equipment and storage medium
CN111626086A (en) * 2019-02-28 2020-09-04 北京市商汤科技开发有限公司 Living body detection method, living body detection device, living body detection system, electronic device, and storage medium
CN110705520A (en) * 2019-10-22 2020-01-17 上海眼控科技股份有限公司 Object detection method, device, computer equipment and computer readable storage medium
CN110826476A (en) * 2019-11-02 2020-02-21 国网浙江省电力有限公司杭州供电公司 Image detection method and device for identifying target object, electronic equipment and storage medium
CN111444928A (en) * 2020-03-30 2020-07-24 北京市商汤科技开发有限公司 Key point detection method and device, electronic equipment and storage medium
CN111552837A (en) * 2020-05-08 2020-08-18 深圳市英威诺科技有限公司 Animal video tag automatic generation method based on deep learning, terminal and medium
CN111476211A (en) * 2020-05-15 2020-07-31 深圳市英威诺科技有限公司 Tensorflow frame-based face positioning method and system
CN111914664A (en) * 2020-07-06 2020-11-10 同济大学 Vehicle multi-target detection and track tracking method based on re-identification
CN112131935A (en) * 2020-08-13 2020-12-25 浙江大华技术股份有限公司 Motor vehicle carriage manned identification method and device and computer equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387658A (en) * 2022-03-24 2022-04-22 浪潮云信息技术股份公司 Image target attribute detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111476227B (en) Target field identification method and device based on OCR and storage medium
CN109961009B (en) Pedestrian detection method, system, device and storage medium based on deep learning
CN111898411B (en) Text image labeling system, method, computer device and storage medium
CN109886928B (en) Target cell marking method, device, storage medium and terminal equipment
CN109117760B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN112070076B (en) Text paragraph structure reduction method, device, equipment and computer storage medium
CN106326802B (en) Quick Response Code bearing calibration, device and terminal device
CN114155244B (en) Defect detection method, device, equipment and storage medium
CN110197238B (en) Font type identification method, system and terminal equipment
CN112101317A (en) Page direction identification method, device, equipment and computer readable storage medium
CN112183307A (en) Text recognition method, computer device, and storage medium
CN112381092A (en) Tracking method, device and computer readable storage medium
CN113673500A (en) Certificate image recognition method and device, electronic equipment and storage medium
CN112308069A (en) Click test method, device, equipment and storage medium for software interface
JP7337937B2 (en) Magnified Image Acquisition and Storage
CN111768405A (en) Method, device, equipment and storage medium for processing annotated image
CN112819885A (en) Animal identification method, device and equipment based on deep learning and storage medium
CN110490022A (en) A kind of bar code method and device in identification picture
CN117115823A (en) Tamper identification method and device, computer equipment and storage medium
CN111401465A (en) Training sample optimization method, device, equipment and storage medium
CN111401158A (en) Difficult sample discovery method and device and computer equipment
CN116610304A (en) Page code generation method, device, equipment and storage medium
CN111539390A (en) Small target image identification method, equipment and system based on Yolov3
CN114758384A (en) Face detection method, device, equipment and storage medium
CN113780116A (en) Invoice classification method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination