CN117809212A - Multi-view animal identity recognition method and recognition device thereof - Google Patents


Info

Publication number
CN117809212A
CN117809212A (publication) · CN202311557839.0A (application)
Authority
CN
China
Prior art keywords: animal, segmentation, instance, video, deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311557839.0A
Other languages
Chinese (zh)
Inventor
韩亚宁
陈可
蔚鹏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202311557839.0A
Publication of CN117809212A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a multi-view animal identity recognition method and a recognition device thereof. The method comprises: acquiring single-animal behavior videos captured at the same moment from at least two viewing angles, and multi-animal behavior videos captured at the same moment from at least two viewing angles. The method uses a deep-learning model reuse strategy: a deep-learning instance segmentation model and a deep-learning identity recognition model are trained in sequence, the animal instance segmentation task and the animal identity recognition task are designed as complementary tasks, and video acquisition from multiple viewing angles is combined with them. This directly addresses the problems that identity data must be manually annotated, that an animal identity set is difficult to annotate, and that animal identity recognition is difficult and error-prone, and further remedies the defects that robust identity features are hard to acquire in sufficient quantity and that recognition accuracy is low. Compared with the prior art, the multi-view animal identity recognition method and recognition device can acquire robust identity features with ease and improve identity recognition accuracy.

Description

Multi-view animal identity recognition method and recognition device thereof
Technical Field
The invention relates to the technical field of visual recognition, and in particular to a multi-view animal identity recognition method and a recognition device thereof.
Background
In drug development for social-deficit disorders such as autism, anxiety and phobia, identifying differences in animal behavior before and after administration is an important indicator of drug efficacy.
However, limited by animal tracking technology, existing methods cannot identify each individual animal during natural social interaction: only one animal is allowed to move freely, while the movement of the other animals must be restrained in some way to reduce interference with identity assignment, as in the three-chamber social test. The root of this limitation is that the animals' appearances are too similar, so the error of traditional identity recognition algorithms is hard to reduce, the animal identity set is difficult to annotate, and identity recognition is difficult and error-prone; in other words, robust identity features are hard to acquire and recognition accuracy is low.
Disclosure of Invention
The invention provides a multi-view animal identity recognition method and a recognition device thereof, aiming to remedy the defects that an animal identity set is difficult to annotate and animal identity recognition is difficult and error-prone, so that robust identity features are hard to acquire in sufficient quantity and recognition accuracy is low.
The technical solution adopted by the invention is a multi-view animal identity recognition method, comprising the following steps:
acquiring single-animal behavior videos captured at the same moment from at least two viewing angles, and multi-animal behavior videos captured at the same moment from at least two viewing angles;
extracting animal instance contours from the single-animal behavior videos and the multi-animal behavior videos, and using the animal instance contours to train a deep-learning instance segmentation model;
performing instance segmentation on the single-animal behavior videos and the multi-animal behavior videos with the deep-learning instance segmentation model, obtaining single-animal segmentation instances from the single-animal behavior videos and multi-animal segmentation instances from the multi-animal behavior videos;
stitching the single-animal segmentation instances from different viewing angles, taking the stitched images as input patterns, and using the input patterns to train a deep-learning identity recognition model;
and stitching the multi-animal segmentation instances from different viewing angles to obtain partial or complete patterns of each single animal, and recognizing and matching those patterns with the deep-learning identity recognition model to obtain the identity information of that animal.
Preferably, the step of extracting animal instance contours from the single-animal behavior videos and the multi-animal behavior videos includes:
extracting the background regions occluded by the animals at different times from the single-animal behavior video to form a single-animal background video, and extracting the background regions occluded by the animals at different times from the multi-animal behavior video to form a multi-animal background video;
and subtracting the single-animal background video and the multi-animal background video from the single-animal behavior video and the multi-animal behavior video respectively, thereby obtaining the animal instance contours.
Preferably, after the animal instance contours are obtained, the method further comprises compositing the animal instance contours with different backgrounds to generate images containing animal-instance contour annotations.
Preferably, the deep-learning instance segmentation model is trained based on YOLACT++.
Preferably, the deep-learning identity recognition model is trained based on the EfficientNet deep-learning image classification model.
Preferably, the step of stitching the multi-animal segmentation instances from different viewing angles to obtain partial or complete patterns of a single animal includes:
obtaining partial or complete single-animal segmentation instances of the single animal from the multi-animal segmentation instances, and then stitching those partial or complete single-animal segmentation instances across the different viewing angles to obtain the partial or complete patterns of the single animal.
Preferably, after partial or complete single-animal segmentation instances are obtained from the multi-animal segmentation instances, the projection relations within the multi-animal segmentation instances are derived from the video shooting parameters, the projections are resized, and the projections from different viewing angles are then stitched to obtain the partial or complete patterns of the single animal.
Preferably, after the projection relations within the multi-animal segmentation instances are obtained, the projections from different viewing angles are compared and matched against single-animal segmentation instances in a database, the matched projections are resized, and the projections from different viewing angles are then stitched.
Preferably, the single-animal segmentation instances from different viewing angles are stitched, and the stitched image is a single picture.
The invention also provides a multi-view animal identity recognition device, comprising:
a multi-camera-array behavior acquisition unit, used to acquire single-animal behavior videos captured at the same moment from at least two viewing angles, and multi-animal behavior videos captured at the same moment from at least two viewing angles;
an instance segmentation model training unit, used to extract animal instance contours from the single-animal behavior videos and the multi-animal behavior videos, and to train the deep-learning instance segmentation model with the animal instance contours;
an instance segmentation unit, used to perform instance segmentation on the single-animal behavior videos and the multi-animal behavior videos with the deep-learning instance segmentation model, obtaining single-animal segmentation instances from the single-animal behavior videos and multi-animal segmentation instances from the multi-animal behavior videos;
a single-animal identity recognition unit, used to stitch the single-animal segmentation instances from different viewing angles, take the stitched images as input patterns, and use the input patterns to train the deep-learning identity recognition model;
and a multi-animal identity recognition unit, used to stitch the multi-animal segmentation instances from different viewing angles to obtain partial or complete patterns of each single animal, and to recognize and match those patterns with the deep-learning identity recognition model to obtain the identity information of that animal.
Compared with the prior art, the invention has the following beneficial effects:
the application discloses a multi-view animal identity recognition method, which comprises the following steps: acquiring single animal behavior videos at the same moment under at least two visual angles and multiple animal behavior videos at the same moment under at least two visual angles; extracting animal instance outlines in the single-animal behavior video and the multi-animal behavior video, and using the animal instance outlines for training a deep learning instance segmentation model; performing instance segmentation on the single-animal behavior video and the multi-animal behavior video based on a deep learning instance segmentation model, obtaining a single-animal segmentation instance from the single-animal behavior video, and obtaining a multi-animal segmentation instance from the multi-animal behavior video; splicing single-animal segmentation examples under different visual angles, taking the spliced images as input patterns, and using the input patterns for training a deep learning identity recognition model; and splicing the multiple animal segmentation examples under different visual angles to obtain part or all patterns of the single animal, and identifying and matching the part or all patterns of the single animal based on the deep learning identity identification model so as to obtain the identity information of the single animal. 
According to the method, a deep learning model reuse strategy is used, a deep learning instance segmentation model and a deep learning identity recognition model are trained firstly and then respectively, an animal instance segmentation task and an animal identity recognition task are designed to be complementary tasks, and video acquisition under multiple view angles is combined, so that the problems that identity data are required to be manually marked, an animal identity set is difficult to mark, animal identity recognition is difficult and error is large are directly solved, and the defects that the acquisition amount of robust identity features is difficult to obtain and the accuracy of the identity recognition is low are further solved.
The application also discloses a multi-view animal identification device, include: the multi-camera array behavior acquisition unit is used for acquiring single-animal behavior videos at the same moment under at least two visual angles and multi-animal behavior videos at the same moment under at least two visual angles; the example segmentation model training unit is used for extracting animal example outlines in the single-animal behavior video and the multi-animal behavior video and training the deep learning example segmentation model by using the animal example outlines; the instance segmentation unit is used for carrying out instance segmentation on the single-animal behavior video and the multi-animal behavior video based on the deep learning instance segmentation model, obtaining a single-animal segmentation instance from the single-animal behavior video and obtaining a multi-animal segmentation instance from the multi-animal behavior video; the single-animal identification unit is used for splicing the single-animal segmentation examples under different visual angles, taking the spliced image as an input pattern, and using the input pattern for training the deep learning identification model; and the multi-animal identification unit is used for splicing the multi-animal segmentation examples under different visual angles to obtain part or all patterns of the single animal, and identifying and matching the part or all patterns of the single animal based on the deep learning identification model so as to obtain the identification information of the single animal. The problems that identity data are required to be manually marked, an animal identity set is difficult to mark, animal identity identification is difficult and errors are large are directly solved, and the defects that the acquired quantity of robust identity features is difficult to acquire and the accuracy of the identity identification is low are further solved.
Compared with the prior art, the multi-view animal identity recognition method and the recognition device thereof can achieve the purposes of easily obtaining the steady identity characteristic acquisition quantity and improving the identity recognition accuracy.
Drawings
The invention is described in detail below with reference to examples and figures, wherein:
fig. 1 shows a schematic flow chart of a multi-view animal identification method according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings. Examples of the embodiments are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements throughout, or elements having like or similar functionality. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
The invention discloses a multi-view animal identity recognition method which, referring to fig. 1, comprises the following steps:
S10: acquiring single-animal behavior videos captured at the same moment from at least two viewing angles, and multi-animal behavior videos captured at the same moment from at least two viewing angles;
S20: extracting animal instance contours from the single-animal behavior videos and the multi-animal behavior videos, and using the animal instance contours to train a deep-learning instance segmentation model;
S30: performing instance segmentation on the single-animal behavior videos and the multi-animal behavior videos with the deep-learning instance segmentation model, obtaining single-animal segmentation instances from the single-animal behavior videos and multi-animal segmentation instances from the multi-animal behavior videos;
S40: stitching the single-animal segmentation instances from different viewing angles, taking the stitched images as input patterns, and using the input patterns to train a deep-learning identity recognition model;
S50: stitching the multi-animal segmentation instances from different viewing angles to obtain partial or complete patterns of each single animal, and recognizing and matching those patterns with the deep-learning identity recognition model to obtain the identity information of that animal.
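As a rough sketch of how steps S10-S50 fit together (not the patent's actual implementation; all function and class names here are hypothetical), the method can be wired as a pipeline of pluggable stages, with the two trained models supplied as callables:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical stand-ins for the two trained models (outputs of S20 and S40).
Segmenter = Callable[[List[str]], List[dict]]        # video paths -> segmentation instances
Identifier = Callable[[List[dict]], Dict[str, str]]  # stitched patterns -> identity map

@dataclass
class MultiViewIDPipeline:
    segment: Segmenter    # deep-learning instance segmentation model (S30)
    identify: Identifier  # deep-learning identity recognition model (S50)

    def run(self, multi_animal_videos: List[str]) -> Dict[str, str]:
        # S30: instance segmentation on the synchronized multi-view videos.
        instances = self.segment(multi_animal_videos)
        # S50: stitch per-animal instances across views and recognize identities.
        return self.identify(instances)

# Dummy stages standing in for the trained models.
def dummy_segment(videos):
    return [{"view": v, "mask": None} for v in videos]

def dummy_identify(instances):
    return {"animal_0": f"stitched from {len(instances)} views"}

pipeline = MultiViewIDPipeline(segment=dummy_segment, identify=dummy_identify)
result = pipeline.run(["view_a.mp4", "view_b.mp4"])
```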
The method uses a deep-learning model reuse strategy: the deep-learning instance segmentation model and the deep-learning identity recognition model are trained in sequence, the animal instance segmentation task and the animal identity recognition task are designed as complementary tasks, and video acquisition from multiple viewing angles is combined with them. This directly addresses the problems that identity data must be manually annotated, that an animal identity set is difficult to annotate, and that animal identity recognition is difficult and error-prone, and further remedies the defects that robust identity features are hard to acquire in sufficient quantity and that recognition accuracy is low.
It should be noted that attaching physical tags to animals is an effective way to identify them; however, physical tags affect the animals' behavior in social situations, and the overall equipment cost grows with the number of animals. Directly identifying an animal from images is an alternative to physical tags, using biological appearance features such as nose prints; however, such features exist only in some species, such as cats and dogs, and are not applicable to the mice commonly used in drug development.
Unlike the above approaches, the present application identifies animals through deep learning. The deep-learning instance segmentation model is trained on a large number of animal instance contours obtained from the single-animal and multi-animal behavior videos, ensuring that it has seen a wide variety of animal instance contours. Single-animal segmentation instances from multiple viewing angles are then stitched to produce a large number of input patterns, and the deep-learning identity recognition model is trained on these input patterns, so that it too has seen a wide variety of input patterns and can still tell animals apart even when their appearances are very similar.
In addition, the application solves the difficulty of acquiring enough training data when using deep learning for multi-animal identity recognition. Moreover, when a new animal is added, subsequent recognition of the new animal is achieved automatically, simply by acquiring single-animal behavior videos of the new animal captured at the same moment from at least two viewing angles.
It should further be noted that the purpose of acquiring videos (single-animal and multi-animal behavior videos) at the same moment from at least two viewing angles is to solve the feature-loss and viewing-angle-bias problems of image-based animal identity recognition, and to comprehensively capture stable and complete animal appearance features across viewing angles. The application requires videos from at least two viewing angles at the same moment; obviously, more viewing angles can be captured according to actual needs. Correspondingly, the recognition result becomes more accurate and animal instance contours are collected faster, at the cost of a larger amount of computation.
An animal instance contour is an image obtained by matting the animal out of a video frame. The same animal presents different instance contours in different postures and at different distances from the camera; therefore, to train the deep-learning instance segmentation model in a later step, animal instance contours must be collected in advance to serve the subsequent instance segmentation.
During instance segmentation, segmentation instances are extracted from the video by the trained deep-learning instance segmentation model; concretely, the pixels are classified and the contour region recognized by this computer-vision task is obtained.
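For illustration only (this is not the patent's code), once a segmentation model has classified the pixels into a binary mask, the instance's contour region can be recovered from those pixels; here it is reduced to a bounding box plus pixel area:

```python
import numpy as np

def contour_region(mask: np.ndarray):
    """Return the (top, left, bottom, right) bounding box and the pixel
    area of the foreground region in a binary segmentation mask."""
    ys, xs = np.nonzero(mask)          # coordinates of classified animal pixels
    if ys.size == 0:
        return None, 0
    bbox = (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))
    return bbox, int(mask.sum())

mask = np.zeros((6, 8), dtype=np.uint8)
mask[2:5, 3:7] = 1                     # a 3x4 "animal" blob
bbox, area = contour_region(mask)
```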
The stitched image used as an input pattern includes not only static pictures but also continuous video. For example, a continuous video of a particular action or behavior of an animal can be used as an input pattern, which yields higher recognition accuracy.
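A minimal sketch of forming one static input pattern (the patent does not specify the stitching layout; this assumes a simple side-by-side arrangement with zero-padding to a common height):

```python
import numpy as np

def stitch_views(crops):
    """Stitch per-view instance crops horizontally into a single input
    pattern, padding each crop with zeros to the tallest crop's height."""
    h = max(c.shape[0] for c in crops)
    padded = [np.pad(c, ((0, h - c.shape[0]), (0, 0))) for c in crops]
    return np.hstack(padded)

view_a = np.ones((4, 3), dtype=np.uint8)       # crop from viewing angle A
view_b = np.full((2, 5), 2, dtype=np.uint8)    # shorter crop from angle B
pattern = stitch_views([view_a, view_b])
```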
In some embodiments, the step of extracting animal instance contours from the single-animal behavior videos and the multi-animal behavior videos comprises:
extracting the background regions occluded by the animals at different times from the single-animal behavior video to form a single-animal background video, and extracting the background regions occluded by the animals at different times from the multi-animal behavior video to form a multi-animal background video;
and subtracting the single-animal background video and the multi-animal background video from the single-animal behavior video and the multi-animal behavior video respectively, thereby obtaining the animal instance contours.
Specifically, the step of extracting animal instance contours from the videos first extracts a single-animal background video and a multi-animal background video from the single-animal and multi-animal behavior videos. The purpose of extracting these background videos is to obtain animal instance contours free of background interference, and this step is also the key to eliminating the background from the behavior videos. It should be noted that, because the animals are not motionless at all times, the background region an animal occludes becomes visible once the animal walks away; the background can therefore be recovered by combining the regions occluded by the animals at different times.
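One common way to realize this idea (an assumption on our part, not necessarily the patent's exact procedure) is a per-pixel temporal median: a pixel occluded by the animal at one moment is visible in most other frames, so the median over time recovers the background, and subtracting it exposes the animal:

```python
import numpy as np

def recover_background(frames: np.ndarray) -> np.ndarray:
    """Per-pixel temporal median over a stack of frames (T, H, W)."""
    return np.median(frames, axis=0)

def foreground_mask(frame, background, thresh=10):
    """Subtract the recovered background to expose the animal contour."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).astype(np.uint8)

# Five synthetic frames: static background (value 100) with a one-pixel
# "animal" (value 200) that moves each frame, occluding different spots.
frames = np.full((5, 4, 4), 100, dtype=np.uint8)
for t in range(5):
    frames[t, 0, t % 4] = 200
bg = recover_background(frames)
mask = foreground_mask(frames[0], bg)
```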
In some embodiments, after the animal instance contours are obtained, the method further comprises compositing the animal instance contours with different backgrounds to generate images containing animal-instance contour annotations.
After the animal instance contours are extracted, different contours can be composited with different backgrounds. For example, if an animal needs to be recognized in a particular background environment, that background can be composited with the animal instance contours and the composited images annotated, further improving the accuracy of subsequent recognition.
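The compositing step can be sketched as masked pasting (illustrative only; the patent does not give an implementation):

```python
import numpy as np

def composite(contour: np.ndarray, mask: np.ndarray, background: np.ndarray):
    """Paste the matted animal onto a new background: the binary mask
    selects animal pixels, everything else keeps the background pixel."""
    out = background.copy()
    sel = mask.astype(bool)
    out[sel] = contour[sel]
    return out

background = np.zeros((3, 3), dtype=np.uint8)       # new scene
contour = np.full((3, 3), 7, dtype=np.uint8)        # matted animal pixels
mask = np.eye(3, dtype=np.uint8)                    # where the animal is
img = composite(contour, mask, background)
```

The mask that selects the pasted pixels doubles as the contour annotation for the synthesized training image.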
In some embodiments, the deep-learning instance segmentation model is trained based on YOLACT++.
Specifically, YOLACT++ offers near-real-time instance segmentation, processing images at nearly 40 frames per second while maintaining high accuracy. Furthermore, YOLACT++ is an improved version of YOLACT, featuring a deeper, more practical backbone network, a larger feature pyramid, and a prototype-mask (ProtoNet) mechanism that better segments and represents the shape of animal instance contours.
In some embodiments, the deep-learning identity recognition model is trained based on the EfficientNet deep-learning image classification model.
Specifically, EfficientNet is a convolutional neural network architecture proposed in recent years whose main goal is to improve model efficiency: maintaining or improving model performance while reducing model complexity and computational cost as much as possible. Its compound scaling scales depth, width and resolution simultaneously, making effective use of the model's parameters and improving its performance.
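For reference, the compound scaling rule from the original EfficientNet paper (Tan & Le, 2019) scales depth, width and input resolution by a single coefficient φ, with base coefficients found by grid search:

```python
# EfficientNet compound scaling: depth = alpha^phi, width = beta^phi,
# resolution = gamma^phi, with alpha * beta^2 * gamma^2 ~= 2 so that
# FLOPs grow roughly by 2^phi (base values from the original paper).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi: int):
    """Return the (depth, width, resolution) multipliers for coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

d, w, r = compound_scale(2)            # roughly an EfficientNet-B2-sized model
flops_factor = ALPHA * BETA ** 2 * GAMMA ** 2
```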
In some embodiments, the step of stitching the multi-animal segmentation instances from different viewing angles to obtain partial or complete patterns of a single animal comprises:
obtaining partial or complete single-animal segmentation instances of the single animal from the multi-animal segmentation instances, and then stitching those instances across the different viewing angles to obtain the partial or complete patterns of the single animal.
Alternatively, the multi-animal segmentation instances from different viewing angles could be stitched first, and the partial or complete patterns of a single animal then obtained from the stitched multi-animal patterns.
Specifically, obtaining partial or complete single-animal segmentation instances of a single animal from the multi-animal segmentation instances keeps the amount of computation small: in some recognition scenarios only certain specific individuals need to be recognized, not every individual, and stitching all multi-animal segmentation instances across viewing angles would introduce computational redundancy into the stitching process and slow the response.
In some specific embodiments, partial or complete single-animal segmentation instances of a single animal are obtained from the multi-animal segmentation instances; the projection relations within the multi-animal segmentation instances are derived from the video shooting parameters, the projections are resized, and the projections from different viewing angles are stitched to obtain the partial or complete patterns of the single animal.
Specifically, the video shooting parameters include the distance of an individual animal from the camera, the spacing between different cameras, the field of view of each camera, the focal length (zoom) of each camera, and so on. The projection relations within the multi-animal segmentation instances are derived from these parameters, and after stitching, partial or complete patterns of the single animal are obtained. It should be noted that, because animals are occluded by other animals or by the environment, a complete single-animal segmentation instance sometimes cannot be obtained from the multi-animal segmentation instances, only a partial one; in that case stitching the partial instances yields only a partial pattern of the animal.
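How shooting parameters determine a projection relation can be illustrated with the standard pinhole camera model (the intrinsic matrix encodes focal length and principal point, the pose encodes camera spacing); the numeric values below are hypothetical:

```python
import numpy as np

def project(point_3d, K, R, t):
    """Project a 3-D world point into pixel coordinates using the
    pinhole model: p ~ K (R X + t)."""
    p_cam = R @ np.asarray(point_3d, dtype=float) + t   # world -> camera frame
    p_img = K @ p_cam                                   # camera -> image plane
    return p_img[:2] / p_img[2]                         # homogeneous -> pixels

# Hypothetical camera: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                       # camera axes aligned with the world frame
t = np.array([0.0, 0.0, 2.0])       # camera 2 m from the world origin
uv = project([0.0, 0.0, 0.0], K, R, t)
```

A point on the optical axis projects to the principal point, as expected; with each camera's (K, R, t) known from calibration, the same animal's projections in different views can be related.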
In further embodiments, after the projection relations within the multi-animal segmentation instances are obtained, the projections from different viewing angles are compared and matched against single-animal segmentation instances in a database, the matched projections are resized, and the projections from different viewing angles are then stitched.
Specifically, after the projections are obtained and before stitching, they are compared and matched against the single-animal segmentation instances stored in the database, to judge whether the projections of the single animal at the different viewing angles fall within those stored instances. Once the match succeeds, the matched projections are resized and then stitched. This improves recognition accuracy on the one hand, and avoids incorrect stitching on the other.
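The compare-and-match step can be sketched, under the simplifying assumption that projections and stored instances are reduced to bounding boxes matched by intersection-over-union (the patent does not fix a matching metric):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_projection(projection, database, thresh=0.5):
    """Return the key of the stored single-animal instance that best
    overlaps the projected box, or None if no overlap beats the threshold."""
    best_key, best_score = None, thresh
    for key, box in database.items():
        score = iou(projection, box)
        if score > best_score:
            best_key, best_score = key, score
    return best_key

db = {"mouse_a": (0, 0, 10, 10), "mouse_b": (20, 20, 30, 30)}
match = match_projection((1, 1, 11, 11), db)
```

Rejecting projections that match no stored instance is what prevents the incorrect stitching mentioned above.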
In some embodiments, the single-animal segmentation instances from different viewing angles are stitched, and the stitched image is a single picture.
Specifically, to reduce the amount of computation and increase recognition speed, the stitched image is output as a single picture.
The invention also discloses a multi-view animal identity recognition device, comprising:
a multi-camera-array behavior acquisition unit, used to acquire single-animal behavior videos captured at the same moment from at least two viewing angles, and multi-animal behavior videos captured at the same moment from at least two viewing angles;
an instance segmentation model training unit, used to extract animal instance contours from the single-animal behavior videos and the multi-animal behavior videos, and to train the deep-learning instance segmentation model with the animal instance contours;
an instance segmentation unit, used to perform instance segmentation on the single-animal behavior videos and the multi-animal behavior videos with the deep-learning instance segmentation model, obtaining single-animal segmentation instances from the single-animal behavior videos and multi-animal segmentation instances from the multi-animal behavior videos;
a single-animal identity recognition unit, used to stitch the single-animal segmentation instances from different viewing angles, take the stitched images as input patterns, and use the input patterns to train the deep-learning identity recognition model;
and a multi-animal identity recognition unit, used to stitch the multi-animal segmentation instances from different viewing angles to obtain partial or complete patterns of each single animal, and to recognize and match those patterns with the deep-learning identity recognition model to obtain the identity information of that animal.
The device directly addresses the problems that identity data must be manually annotated, that an animal identity set is difficult to annotate, and that animal identity recognition is difficult and error-prone, and further remedies the defects that robust identity features are hard to acquire in sufficient quantity and that recognition accuracy is low.
Specifically, the multi-view animal identity recognition device includes a multi-camera-array behavior acquisition unit for shooting single-animal and multi-animal behavior videos from different viewing angles.
Preferably, the multi-camera-array behavior acquisition unit is the multi-camera calibration system of CN112862900B, "Multi-view camera calibration device, calibration method and storage medium".
The method first calibrates the cameras to obtain their intrinsic and extrinsic parameters; computes, from these camera parameters, the projection relations among the multi-animal segmentation instances across viewing angles; matches the single-animal segmentation instances belonging to the same animal by projection distance; resizes and stitches them; and feeds the result into the deep-learning identity recognition model to infer the identity of each animal.
In the description of the present specification, the terms "embodiment," "present embodiment," "in one embodiment," and the like, if used, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples; furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In the description of the present specification, the terms "connected," "mounted," "secured," "disposed," "having," and the like are to be construed broadly, e.g., as being either fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art as the case may be.
In the description of this specification, relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The embodiments above are described to help a person of ordinary skill in the art understand and apply the present technology. It will be apparent to those skilled in the art that various modifications can be made to these examples, and that the general principles described herein can be applied to other embodiments without undue burden. Therefore, the present application is not limited to the above embodiments, and modifications in the following cases shall fall within the scope of protection of the present application: (1) a new technical solution implemented on the basis of the technical solution of the invention in combination with common general knowledge, where the technical effect produced does not go beyond that of the invention; (2) equivalent replacement of part of the features of the technical solution of the invention with known techniques, producing the same technical effect as the invention; (3) an extension based on the technical solution of the invention, where the substance of the extended technical solution does not go beyond that of the invention; (4) equivalent transformations of the content of the specification and drawings of the invention, applied directly or indirectly to other related technical fields.

Claims (10)

1. A multi-view animal identification method, comprising:
acquiring single animal behavior videos at the same moment under at least two visual angles and multiple animal behavior videos at the same moment under at least two visual angles;
extracting animal instance outlines in the single-animal behavior video and the multi-animal behavior video, and using the animal instance outlines for training a deep learning instance segmentation model;
performing instance segmentation on the single-animal behavior video and the multi-animal behavior video based on a deep learning instance segmentation model, obtaining a single-animal segmentation instance from the single-animal behavior video, and obtaining a multi-animal segmentation instance from the multi-animal behavior video;
splicing single-animal segmentation examples under different visual angles, taking the spliced images as input patterns, and using the input patterns for training a deep learning identity recognition model;
and splicing the multiple animal segmentation examples under different visual angles to obtain part or all patterns of the single animal, and identifying and matching the part or all patterns of the single animal based on the deep learning identity identification model so as to obtain the identity information of the single animal.
2. The method of claim 1, wherein the step of extracting animal instance outlines from the single animal behavioral video and the multiple animal behavioral video comprises:
extracting the overlapping portions occluded by the animal at different times in the single-animal behavior video as a single-animal background video, and extracting the overlapping portions occluded by animals at different times in the multi-animal behavior video as a multi-animal background video;
and removing the single-animal background video from the single-animal behavior video and the multi-animal background video from the multi-animal behavior video, respectively, thereby obtaining the animal instance outlines.
3. The method of claim 2, further comprising, after obtaining the animal instance outline, synthesizing the animal instance outline with different backgrounds to generate images containing animal instance outline annotations.
4. The multi-view animal identification method of claim 1, wherein the deep learning instance segmentation model is trained based on YOLACT++.
5. The multi-view animal identification method of claim 1, wherein the deep learning identity recognition model is trained based on an EfficientNet deep learning image classification model.
6. The multi-view animal identification method according to claim 1, wherein the step of splicing the multi-animal segmentation instances under different view angles to obtain part or all of the patterns of a single animal comprises:
obtaining part or all of the single-animal segmentation instances of the single animal from the multi-animal segmentation instances, and then splicing the part or all of the single-animal segmentation instances under different view angles to obtain part or all of the patterns of the single animal.
7. The multi-view animal identification method according to claim 6, wherein, when part or all of the single-animal segmentation instances of the single animal are obtained from the multi-animal segmentation instances, the projection relations among the multi-animal segmentation instances are obtained from the video shooting parameters, and the projections under different view angles are spliced after their sizes are adjusted, so as to obtain part or all of the patterns of the single animal.
8. The multi-view animal identification method of claim 7, wherein, after the projection relations among the multi-animal segmentation instances are obtained, the projections under different view angles are compared and matched against the single-animal segmentation instances in the database, and the matched projections are spliced after their sizes are adjusted.
9. The multi-view animal identification method according to claim 1, wherein the single-animal segmentation instances under different view angles are spliced, and the spliced result is a single image.
10. A multi-view animal identification device, comprising:
the multi-camera array behavior acquisition unit is used for acquiring single-animal behavior videos at the same moment under at least two visual angles and multi-animal behavior videos at the same moment under at least two visual angles;
the example segmentation model training unit is used for extracting animal example outlines in the single-animal behavior video and the multi-animal behavior video and training the deep learning example segmentation model by using the animal example outlines;
the instance segmentation unit is used for carrying out instance segmentation on the single-animal behavior video and the multi-animal behavior video based on the deep learning instance segmentation model, obtaining a single-animal segmentation instance from the single-animal behavior video and obtaining a multi-animal segmentation instance from the multi-animal behavior video;
the single-animal identity recognition unit is used for splicing the single-animal segmentation instances under different visual angles, taking the spliced image as an input pattern, and using the input pattern for training the deep learning identity recognition model;
and the multi-animal identity recognition unit is used for splicing the multi-animal segmentation instances under different visual angles to obtain part or all of the patterns of the single animal, and recognizing and matching the part or all of the patterns of the single animal based on the deep learning identity recognition model, so as to obtain the identity information of the single animal.
CN202311557839.0A 2023-11-21 2023-11-21 Multi-view animal identity recognition method and recognition device thereof Pending CN117809212A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311557839.0A CN117809212A (en) 2023-11-21 2023-11-21 Multi-view animal identity recognition method and recognition device thereof


Publications (1)

Publication Number Publication Date
CN117809212A true CN117809212A (en) 2024-04-02

Family

ID=90432591



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination