CN111666881A - Giant panda pacing, bamboo eating and oestrus behavior tracking analysis method - Google Patents

Giant panda pacing, bamboo eating and oestrus behavior tracking analysis method

Info

Publication number
CN111666881A
CN111666881A (application CN202010510090.4A)
Authority
CN
China
Prior art keywords
background
pixel
sample set
current pixel
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010510090.4A
Other languages
Chinese (zh)
Other versions
CN111666881B (en)
Inventor
张名岳
汪子君
刘玉良
蔡志刚
侯蓉
张晓卉
安俊辉
张瑛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHENGDU RESEARCH BASE OF GIANT PANDA BREEDING
Original Assignee
CHENGDU RESEARCH BASE OF GIANT PANDA BREEDING
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHENGDU RESEARCH BASE OF GIANT PANDA BREEDING filed Critical CHENGDU RESEARCH BASE OF GIANT PANDA BREEDING
Priority to CN202010510090.4A priority Critical patent/CN111666881B/en
Publication of CN111666881A publication Critical patent/CN111666881A/en
Application granted granted Critical
Publication of CN111666881B publication Critical patent/CN111666881B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • G06V40/25Recognition of walking or running movements, e.g. gait recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of information and provides a tracking analysis method for the pacing, bamboo eating, and oestrus behaviors of giant pandas. The method addresses the problem that the panda's motion background is complex, so traditional background extraction algorithms have difficulty achieving an ideal foreground target extraction effect. The main scheme comprises: step 1, inputting a panda video image and extracting the foreground object from each video frame with an improved ViBe method; step 2, performing morphological erosion and dilation on the extracted foreground template; step 3, taking the minimum circumscribed rectangle of the contour with the largest connected-region area as the target region, and taking the centroid of the target region as the position of the target; and step 4, performing the same operations as steps 1 to 3 on each frame of image, outputting the motion trajectory and motion speed of the panda, and analyzing the behaviors.

Description

Giant panda pacing, bamboo eating and oestrus behavior tracking analysis method
Technical Field
The invention relates to the technical field of information, and provides a giant panda pacing, bamboo eating and oestrus behavior tracking analysis method.
Background
Pandas are rare and endangered wild animals unique to China. Over the years, panda populations have faced the survival pressures of habitat loss and fragmentation caused by human activities such as large-scale logging, land clearing, poaching, and the construction of large infrastructure such as roads and railways. At present, only 1,864 wild giant pandas remain, distributed across the Minshan, Qionglai, Qinling, and other mountain ranges. Ex-situ protection, namely artificial breeding and propagation, is one of the basic approaches to protecting endangered species; it supplements and extends in-situ protection, namely habitat protection, and plays an important role in increasing population numbers, maintaining the breeding population of existing captive pandas, and sustaining the continuation of the species. However, the currently captive panda population suffers from problems such as high morbidity, low birth rate, poor health, and behavioral degeneration.
The protection of endangered wild animals comprises two important means: in-situ protection and ex-situ protection. Ex-situ protection serves as an important supplement to the in-situ protection of pandas and has achieved major breakthroughs in recent years. Captive keeping of pandas started in 1936: in November 1936, Ruth Harkness obtained a male panda cub a little over two months old at Caopo, Wenchuan, Sichuan, and named it "Su Lin" (panda international studbook No. 1). "Su Lin" was the first living panda brought out of the country and was exhibited at a zoo in Chicago in February 1937. After the founding of the People's Republic of China, pandas were first bred at the Chengdu Zoo in 1953, beginning China's history of raising pandas. Although panda keeping now spans more than 70 years, its history has been exceptionally tortuous: progress was slow from 1936 to the 1990s, and breeding pandas in captivity proved very difficult. Since the 1990s, and especially since the 2000s, the husbandry, breeding, and disease prevention and control technologies for captive pandas have advanced greatly, but some directions still remain to be studied.
Most video-based behavior recognition technology targets human behavior; behavior recognition for animals is scarce, with only a few researchers studying animals such as pigs and chickens. Research on pandas lies mainly in biological fields such as genetics and heredity; there is a small amount of work on detecting pandas in static images, while behavior recognition of pandas in video is currently blank. Researching and analyzing the behavior recognition and tracking of pandas in video makes it possible to monitor their physiological, mental, and health states and their breeding status, contributing to improving the health and population size of the panda population.
To grasp the physiological health, mental state, and oestrus state of pandas, their bamboo eating, pacing, and oestrus behaviors need to be detected, recorded, and analyzed; if an abnormality occurs, reasonable measures must be taken in time to ensure that the pandas remain healthy.
Disclosure of Invention
The invention aims to solve the problem that the motion background of the panda is complex, so traditional background extraction algorithms have difficulty achieving an ideal foreground target extraction effect.
In order to solve the technical problems, the invention adopts the following technical scheme:
a giant panda pacing behavior tracking analysis method is characterized by comprising the following steps:
step 1, inputting a panda video image, and performing foreground object extraction on a video frame by using an improved vibe method;
step 2, performing morphological corrosion expansion on the extracted foreground template;
step 3, taking the minimum circumscribed rectangle of the outline with the maximum area of the communicated region as a target region, and taking the centroid of the target region as the position of the target;
and 4, performing the same operation as in the step 1-3 on each frame of image, outputting the motion track and the motion speed of the pandas, and analyzing the behaviors.
In the above technical solution, the improved ViBe method comprises the following steps:
Step 1.1: initialize the background by constructing an initial background B0 from the first n frames of the video with the multi-frame averaging method.
Step 1.2: for each pixel (x, y) of the initial background B0, establish a sample set M(x, y) = {v_1, v_2, ..., v_N}, where v_i is a random sample from the 8-neighborhood of (x, y), i = 1, 2, ..., N.
Step 1.3: for the ith frame image f_i (i = 2, 3, ..., n), compute:

TB_i = F_OTSU(abs(f_i - B_rd))

Inf_i(x, y) = 1 if abs(f_i(x, y) - B_rd(x, y)) > TB_i, otherwise 0

TF_i = F_OTSU(abs(f_i - f_{i-1}))

R_i = α·TF_i + (1 - α)·TB_i

where B_rd denotes the background composed of the rd-th sample of each sample set, rd being a value randomly selected from {1, 2, ..., N}; F_OTSU(·) denotes the segmentation threshold computed with the OTSU method; TB_i denotes the threshold obtained by applying the OTSU method to the background-difference result; Inf_i(x, y) denotes the binarization result of the ith frame image at (x, y); TF_i denotes the threshold obtained by applying the OTSU method to the frame-difference result; R_i denotes the value of the radius threshold R for the ith frame; and α is a weighting coefficient, generally a few tenths (a value between 0 and 1).

If Inf_i(x, y) = 1, the following processing is performed:

Step 1.3.1: judge whether the current pixel (x, y) is background by computing its degree of similarity to the corresponding sample set, calculated as:

cnt_j = 1 if dis(f_i(x, y), v_j) < R_i, otherwise 0

where cnt_j denotes the result of comparing the current pixel (x, y) with the jth background sample pixel in the background sample set; if the sum of the comparison results between the current pixel and all background sample pixels in the set is greater than or equal to a threshold T, the current pixel is judged to be a background pixel, otherwise a foreground pixel; f_i denotes the ith (current) video frame; dis denotes the Euclidean distance between two pixel values; and v_j denotes the jth sample in the background sample set;

DB_i(x, y) = 0 (background) if Σ_{j=1..N} cnt_j ≥ T, otherwise 1 (foreground)

where DB_i(x, y) denotes the judgment of whether pixel (x, y) in the ith frame image is a foreground or background pixel; DB_i(x, y) = 1 means the current pixel is a foreground pixel.

When the current pixel (x, y) is a background pixel, i.e., DB_i(x, y) = 0, the background is updated with probability 1/θ. The update comprises two parts, current sample set update and neighborhood update; θ is a time sampling factor, generally taken as 16. It is unnecessary to update the background model at every new video frame: when a pixel is classified as a background point, it updates the background model with probability 1/θ.

First, sample set update: the pixel value f_i(x, y) of the current pixel (x, y) replaces one randomly selected sample v_rd in the corresponding background sample set M(x, y), i.e., v_rd = f_i(x, y).

Second, neighborhood update: a pixel (x_1, y_1) is randomly selected from the 8-neighborhood of the current pixel (x, y); one sample is then randomly selected from its corresponding background sample set M(x_1, y_1) and replaced with the current pixel value, i.e., set to f_i(x, y).
The invention provides a method for identifying the bamboo eating and oestrus behaviors of pandas, comprising the following steps:
Step 1: input a panda video image and extract the foreground target from each video frame with the improved ViBe method to obtain a foreground target image.
Step 2: construct a multi-scale spatial pyramid in the foreground target image, obtain candidate points for dense trajectories by dense sampling, and extract dense trajectories at different spatial scales.
Step 3: let u_t denote the horizontal component and v_t the vertical component of the optical flow field, and let ω_t = (u_t, v_t) denote the dense optical flow field between frame t and frame t+1. For a feature point P_t = (x_t, y_t) on frame t, the optical flow field ω_t is smoothed with a median filter M, and the position on frame t+1 corresponding to the point after smoothing is defined as:

P_{t+1} = (x_{t+1}, y_{t+1}) = (x_t, y_t) + (M ∗ ω_t)|_{(x̄_t, ȳ_t)}

where (x̄_t, ȳ_t) is the rounded position of (x_t, y_t), ω_t is the optical flow field, and M is the median filter kernel. The feature points tracked in subsequent frames are concatenated to form a motion trajectory (P_t, P_{t+1}, ...).
Step 4: tracking the feature points in the optical flow field forms motion trajectories. To avoid tracking drift caused by long-term tracking, the tracking length is constrained to L. Feature descriptors are constructed along the dense trajectories: HOG and the trajectory shape are collected as shape descriptors, and HOF and MBH are used as motion descriptors.
Step 5: apply Principal Component Analysis (PCA) to reduce the dimensionality of the collected feature descriptors, mapping the data from a high-dimensional space to a low-dimensional space while preserving as much of the principal information as possible, and obtaining reduced feature descriptors of dimension d.
Step 6: for Fisher Vector feature coding and classification, model the local features with a Gaussian mixture model (GMM) with K Gaussian components, training the GMM on the local feature set with the expectation-maximization (EM) algorithm; then encode the dimension-reduced feature descriptors with a Fisher Vector, giving an encoded feature dimension of 2 × d × K.
Step 7: finally, feed the encoded feature descriptors into an SVM classifier for classification.
In the above technical solution, the improved ViBe method comprises the following steps:
Step 1.1: initialize the background by constructing an initial background B0 from the first n frames of the video with the multi-frame averaging method.
Step 1.2: for each pixel (x, y) of the initial background B0, establish a sample set M(x, y) = {v_1, v_2, ..., v_N}, where v_i is a random sample from the 8-neighborhood of (x, y), i = 1, 2, ..., N.
Step 1.3: for the ith frame image f_i (i = 2, 3, ..., n), compute:

TB_i = F_OTSU(abs(f_i - B_rd))

Inf_i(x, y) = 1 if abs(f_i(x, y) - B_rd(x, y)) > TB_i, otherwise 0

TF_i = F_OTSU(abs(f_i - f_{i-1}))

R_i = α·TF_i + (1 - α)·TB_i

where B_rd denotes the background composed of the rd-th sample of each sample set, rd being a value randomly selected from {1, 2, ..., N}; F_OTSU(·) denotes the segmentation threshold computed with the OTSU method; TB_i denotes the threshold obtained by applying the OTSU method to the background-difference result; Inf_i(x, y) denotes the binarization result of the ith frame image at (x, y); TF_i denotes the threshold obtained by applying the OTSU method to the frame-difference result; R_i denotes the value of the radius threshold R for the ith frame; and α is a weighting coefficient, generally a few tenths (a value between 0 and 1).

If Inf_i(x, y) = 1, the following processing is performed:

Step 1.3.1: judge whether the current pixel (x, y) is background by computing its degree of similarity to the corresponding sample set, calculated as:

cnt_j = 1 if dis(f_i(x, y), v_j) < R_i, otherwise 0

where cnt_j denotes the result of comparing the current pixel (x, y) with the jth background sample pixel in the background sample set; if the sum of the comparison results between the current pixel and all background sample pixels in the set is greater than or equal to a threshold T, the current pixel is judged to be a background pixel, otherwise a foreground pixel; f_i denotes the ith (current) video frame; dis denotes the Euclidean distance between two pixel values; and v_j denotes the jth sample in the background sample set;

DB_i(x, y) = 0 (background) if Σ_{j=1..N} cnt_j ≥ T, otherwise 1 (foreground)

where DB_i(x, y) denotes the judgment of whether pixel (x, y) in the ith frame image is a foreground or background pixel; DB_i(x, y) = 1 means the current pixel is a foreground pixel.

When the current pixel (x, y) is a background pixel, i.e., DB_i(x, y) = 0, the background is updated with probability 1/θ; the update comprises two parts, current sample set update and neighborhood update, where θ is a time sampling factor.

First, sample set update: the pixel value f_i(x, y) of the current pixel (x, y) replaces one randomly selected sample v_rd in the corresponding background sample set M(x, y), i.e., v_rd = f_i(x, y).

Second, neighborhood update: a pixel (x_1, y_1) is randomly selected from the 8-neighborhood of the current pixel (x, y); one sample is then randomly selected from its corresponding background sample set M(x_1, y_1) and replaced with the current pixel value, i.e., set to f_i(x, y).
Because the invention adopts the above technical scheme, it has the following beneficial effects:
Pandas live in artificially constructed environments whose activity space is limited compared with the wild and whose surroundings are not rich enough; some pandas may become mentally bored after a period of time, which causes mental problems that manifest as repetitive actions, such as repeatedly walking along a closed route, also called stereotyped behavior. To identify the stereotyped pacing behavior of giant pandas, the following method is adopted:
1) Because the motion background of the panda is complex, traditional background extraction algorithms have difficulty achieving an ideal foreground target extraction effect, so the improved ViBe algorithm is used to extract the panda foreground target. By constructing an initial background with the multi-frame averaging method and then modeling the background, the problems that the traditional ViBe algorithm cannot reflect scene changes in time and that the extracted foreground targets are of low quality are solved, effectively improving the accuracy of foreground target extraction.
2) Morphological erosion and dilation and connected-component analysis are performed on the extracted image; the contour with the largest connected-region area is taken as the giant panda activity region, its minimum circumscribed rectangle is computed, and the centroid of the region is taken as the panda's position. The same tracking operation is performed on each frame of image to obtain the panda's motion trajectory. Compared with traditional methods, the proposed method maintains higher tracking accuracy while reducing computational complexity.
3) By analyzing the repeatability of the motion trajectory, it is judged whether the motion is stereotyped behavior and whether the panda's movement habits are abnormal; if they are abnormal, mental-state problems are indicated, and corresponding treatment measures are taken in time.
Drawings
FIG. 1 is a schematic diagram of the process steps of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In the process of identifying the behavior of pandas, the following key problems mainly exist:
the first key problem is how to accurately track the pandas and record the motion tracks and the motion duration of the pandas.
The second key problem is how to accurately identify panda behavior. The appearance, behavior, and actions of pandas differ greatly from those of humans, and most existing video-based behavior recognition targets human bodies; a reasonable algorithm must therefore be designed for the irregular actions and variable appearance of pandas to achieve accurate recognition and recording.
Pacing behavior of giant pandas is tracked and analyzed. A giant panda repeatedly walking along the same route over the same section more than three times is called pacing behavior. Its occurrence may mean that the panda's mental condition is abnormal to some degree, with unconscious repetitive behavior appearing, and certain means of intervention must be adopted in time to improve the mental condition.
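For illustration only, the "same route more than three times" rule could be checked on a centroid trajectory roughly as follows; the grid size and segment length are assumed values, since the text does not fix a concrete repeatability criterion:

```python
import numpy as np

def count_route_repeats(track, cell=50, seg_len=8):
    """Count recurrences of the most frequent discretized route segment.

    track:   (T, 2) array of per-frame centroid positions in pixels.
    cell:    grid size for discretizing positions (assumed value).
    seg_len: route-segment length in grid cells (assumed value).
    A route is approximated by the sequence of grid cells visited;
    pacing would be flagged when the same segment recurs more than
    three times, per the criterion stated above.
    """
    cells = [tuple(c) for c in (np.asarray(track) // cell).astype(int)]
    # collapse consecutive duplicates so dwell time does not inflate the path
    path = [c for k, c in enumerate(cells) if k == 0 or c != cells[k - 1]]
    counts = {}
    for k in range(len(path) - seg_len + 1):
        seg = tuple(path[k:k + seg_len])
        counts[seg] = counts.get(seg, 0) + 1
    return max(counts.values(), default=0)

# e.g. pacing suspected if count_route_repeats(track) > 3
```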
The panda's feeding behavior is identified and analyzed. Pandas mainly eat bamboo; the feeding duration is identified through research and compared with the average duration. If the feeding duration is abnormal, it is analyzed whether this relates to an abnormal health condition of the panda, such as a dental problem or a digestive problem, and treatment must be undertaken in time.
The panda's oestrus behavior is identified. The frequency of certain special behaviors of pandas rises sharply during the oestrus period, such as genital rubbing in a handstand posture, lateral genital rubbing, tail raising, and the like. By identifying and counting the occurrences of these special behaviors, panda oestrus can be monitored effectively, preparing for panda breeding.
The invention provides a giant panda pacing behavior tracking analysis method, comprising the following steps:
step 1, inputting a panda video image and extracting the foreground object from the video frame with an improved ViBe method;
step 2, performing morphological erosion and dilation on the extracted foreground template;
step 3, taking the minimum circumscribed rectangle of the contour with the largest connected-region area as the target region, and taking the centroid of the target region as the position of the target;
step 4, performing the same operations as steps 1 to 3 on each frame of image, outputting the motion trajectory and motion speed of the panda, and analyzing the behaviors (steps 2 and 3 are illustrated in the sketch following this list).
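As a minimal sketch of steps 2 and 3 under stated assumptions (OpenCV, a binary uint8 foreground mask from step 1, and an illustrative 5×5 elliptical kernel; none of these specifics are fixed by the text):

```python
import cv2

def locate_panda(fg_mask):
    """Steps 2-3: clean the binary foreground mask with erosion and
    dilation, take the largest-area contour, and return its minimum
    circumscribed rectangle and centroid (the target position)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.erode(fg_mask, kernel)
    mask = cv2.dilate(mask, kernel, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    box = cv2.minAreaRect(largest)          # minimum circumscribed rectangle
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return box, centroid
```

Collecting the returned centroid frame by frame gives the motion trajectory of step 4, and dividing displacement by the frame interval gives the motion speed.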
In the above technical solution, the improved ViBe method comprises the following steps:
Step 1.1: initialize the background by constructing an initial background B0 from the first n frames of the video with the multi-frame averaging method.
Step 1.2: for each pixel (x, y) of the initial background B0, establish a sample set M(x, y) = {v_1, v_2, ..., v_N}, where v_i is a random sample from the 8-neighborhood of (x, y), i = 1, 2, ..., N.
Step 1.3: for the ith frame image f_i (i = 2, 3, ..., n), compute:

TB_i = F_OTSU(abs(f_i - B_rd))

Inf_i(x, y) = 1 if abs(f_i(x, y) - B_rd(x, y)) > TB_i, otherwise 0

TF_i = F_OTSU(abs(f_i - f_{i-1}))

R_i = α·TF_i + (1 - α)·TB_i

where B_rd denotes the background composed of the rd-th sample of each sample set, rd being a value randomly selected from {1, 2, ..., N}; F_OTSU(·) denotes the segmentation threshold computed with the OTSU method; TB_i denotes the threshold obtained by applying the OTSU method to the background-difference result; Inf_i(x, y) denotes the binarization result of the ith frame image at (x, y); TF_i denotes the threshold obtained by applying the OTSU method to the frame-difference result; R_i denotes the value of the radius threshold R for the ith frame; and α is a weighting coefficient, generally a few tenths (a value between 0 and 1).

If Inf_i(x, y) = 1, the following processing is performed:

Step 1.3.1: judge whether the current pixel (x, y) is background by computing its degree of similarity to the corresponding sample set, calculated as:

cnt_j = 1 if dis(f_i(x, y), v_j) < R_i, otherwise 0

where cnt_j denotes the result of comparing the current pixel (x, y) with the jth background sample pixel in the background sample set; if the sum of the comparison results between the current pixel and all background sample pixels in the set is greater than or equal to a threshold T, the current pixel is judged to be a background pixel, otherwise a foreground pixel; f_i denotes the ith (current) video frame; dis denotes the Euclidean distance between two pixel values; and v_j denotes the jth sample in the background sample set;

DB_i(x, y) = 0 (background) if Σ_{j=1..N} cnt_j ≥ T, otherwise 1 (foreground)

where DB_i(x, y) denotes the judgment of whether pixel (x, y) in the ith frame image is a foreground or background pixel; DB_i(x, y) = 1 means the current pixel is a foreground pixel.

When the current pixel (x, y) is a background pixel, i.e., DB_i(x, y) = 0, the background is updated with probability 1/θ. The update comprises two parts, current sample set update and neighborhood update; θ is a time sampling factor, generally taken as 16. It is unnecessary to update the background model at every new video frame: when a pixel is classified as a background point, it updates the background model with probability 1/θ.

First, sample set update: the pixel value f_i(x, y) of the current pixel (x, y) replaces one randomly selected sample v_rd in the corresponding background sample set M(x, y), i.e., v_rd = f_i(x, y).

Second, neighborhood update: a pixel (x_1, y_1) is randomly selected from the 8-neighborhood of the current pixel (x, y); one sample is then randomly selected from its corresponding background sample set M(x_1, y_1) and replaced with the current pixel value, i.e., set to f_i(x, y).
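A minimal sketch of this loop follows, assuming grayscale uint8 frames; the parameter values (N = 20, T = 2, θ = 16, α = 0.5) are illustrative assumptions, and the Inf_i gating and the neighborhood update are omitted for brevity:

```python
import cv2
import numpy as np

N, T, THETA, ALPHA = 20, 2, 16, 0.5  # samples/pixel, match threshold, time factor, weight (assumed)

def otsu_thresh(img):
    """Return the OTSU segmentation threshold of a uint8 image (F_OTSU)."""
    t, _ = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return t

def init_background(frames):
    """Steps 1.1-1.2: multi-frame average -> B0, then N samples per pixel
    drawn from random 8-neighborhood positions of B0."""
    b0 = np.mean(np.stack(frames), axis=0).astype(np.uint8)
    h, w = b0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    samples = np.empty((N, h, w), np.uint8)
    for k in range(N):
        dy = np.random.randint(-1, 2, (h, w))
        dx = np.random.randint(-1, 2, (h, w))
        samples[k] = b0[np.clip(ys + dy, 0, h - 1), np.clip(xs + dx, 0, w - 1)]
    return samples

def vibe_step(samples, prev, cur):
    """Step 1.3 for one grayscale frame: adaptive radius R_i from the
    OTSU thresholds of the background and frame differences, per-pixel
    matching against the sample set, and the stochastic sample-set
    update (Inf_i gating and neighborhood update omitted)."""
    rd = np.random.randint(N)
    tb = otsu_thresh(cv2.absdiff(cur, samples[rd]))      # TB_i
    tf = otsu_thresh(cv2.absdiff(cur, prev))             # TF_i
    r = ALPHA * tf + (1 - ALPHA) * tb                    # R_i
    dist = np.abs(samples.astype(np.int16) - cur.astype(np.int16))
    cnt = (dist < r).sum(axis=0)                         # matches within R_i
    fg = (cnt < T).astype(np.uint8) * 255                # too few matches -> foreground
    update = (fg == 0) & (np.random.randint(THETA, size=cur.shape) == 0)
    samples[np.random.randint(N)][update] = cur[update]  # update with prob 1/theta
    return fg
```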
The bamboo eating behavior of pandas is recognized and analyzed. The bamboo-eating duration often indicates the panda's health state: if the feeding duration suddenly increases or decreases greatly, and this persists several times in a row, it must be checked whether the panda has a dental or digestive problem, and treatment must be given in time. Recognizing and recording the bamboo eating behavior of pandas comprises two steps: first, identifying the bamboo eating behavior; second, recording the bamboo eating duration, comparing it with historical data, and analyzing whether an abnormal condition exists (a minimal sketch of such a comparison follows).
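The comparison with historical data could, for example, be a simple k-sigma test; the k-sigma rule is an assumption here, as the text only calls for comparison with the average duration:

```python
import numpy as np

def feeding_abnormal(history_minutes, today_minutes, k=2.0):
    """Flag an abnormal daily bamboo-eating duration by comparing today's
    total against the historical mean; the k-sigma rule is an assumed
    illustration of 'comparison with the average duration'."""
    h = np.asarray(history_minutes, dtype=float)
    return abs(today_minutes - h.mean()) > k * h.std()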
The panda's oestrus behavior is identified and analyzed. Reproduction is an important factor affecting the panda population size, and the oestrus period of pandas is short; monitoring oestrus behavior in advance and accurately grasping the oestrus period can therefore promote panda reproduction to a great extent and expand the panda population, which is of great significance.
The oestrus behavior of pandas is marked by scent marking and genital rubbing, together with rapid movement, tail raising, and the like. Giant panda oestrus behavior identification and analysis mainly comprises two steps: first, classifying and identifying the oestrus behaviors; second, recording the number of times each behavior occurs, to facilitate subsequent analysis of the panda's oestrus period.
Panda behavior recognition based on foreground object extraction and dense trajectories: the panda moves in an artificially constructed activity area, and the background is relatively complex. If dense trajectories are extracted directly from the original image, the feature dimensionality becomes too high, the computation is heavy, and a large amount of redundant background information is included. To address this, a behavior recognition method based on foreground object extraction and dense trajectories is proposed: first, the target region of the video frame is extracted; then dense trajectories are extracted within the target region and feature descriptors are constructed along the trajectories; Principal Component Analysis (PCA) reduces the feature dimensionality and the amount of computation; a Gaussian mixture model models the local features; Fisher vectors encode the features; and finally an SVM is trained for classification.
The invention provides a method for identifying the bamboo eating and oestrus behaviors of pandas, comprising the following steps:
Step 1: input a panda video image and extract the foreground target from each video frame with the improved ViBe method to obtain a foreground target image.
Step 2: construct a multi-scale spatial pyramid in the foreground target image, obtain candidate points for dense trajectories by dense sampling, and extract dense trajectories at different spatial scales.
Step 3: let u_t denote the horizontal component and v_t the vertical component of the optical flow field, and let ω_t = (u_t, v_t) denote the dense optical flow field between frame t and frame t+1. For a feature point P_t = (x_t, y_t) on frame t, the optical flow field ω_t is smoothed with a median filter M, and the position on frame t+1 corresponding to the point after smoothing is defined as:

P_{t+1} = (x_{t+1}, y_{t+1}) = (x_t, y_t) + (M ∗ ω_t)|_{(x̄_t, ȳ_t)}

where (x̄_t, ȳ_t) is the rounded position of (x_t, y_t), ω_t is the optical flow field, and M is the median filter kernel. The feature points tracked in subsequent frames are concatenated to form a motion trajectory (P_t, P_{t+1}, ...) (an illustrative sketch of this tracking rule follows these steps).
Step 4: tracking the feature points in the optical flow field forms motion trajectories. To avoid tracking drift caused by long-term tracking, the tracking length is constrained to L. Feature descriptors are constructed along the dense trajectories: HOG and the trajectory shape are collected as shape descriptors, and HOF and MBH are used as motion descriptors.
Step 5: apply Principal Component Analysis (PCA) to reduce the dimensionality of the collected feature descriptors, mapping the data from a high-dimensional space to a low-dimensional space while preserving as much of the principal information as possible, and obtaining reduced feature descriptors of dimension d.
Step 6: for Fisher Vector feature coding and classification, model the local features with a Gaussian mixture model (GMM) with K Gaussian components, training the GMM on the local feature set with the expectation-maximization (EM) algorithm; then encode the dimension-reduced feature descriptors with a Fisher Vector, giving an encoded feature dimension of 2 × d × K.
Step 7: finally, feed the encoded feature descriptors into an SVM classifier for classification (a sketch of the encoding and classification pipeline also follows).
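A sketch of the step-3 tracking rule under stated assumptions: Farnebäck dense optical flow as the flow estimator and a 3×3 median filter as the kernel M, neither of which is fixed by the text:

```python
import cv2
import numpy as np

def track_points(prev_gray, next_gray, points):
    """Advance feature points P_t to P_{t+1} via median-filtered dense
    optical flow: P_{t+1} = P_t + (M * omega_t) evaluated at the rounded
    position of P_t. Farneback flow and a 3x3 median kernel are assumed."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u = cv2.medianBlur(np.ascontiguousarray(flow[..., 0]), 3)  # smoothed u_t
    v = cv2.medianBlur(np.ascontiguousarray(flow[..., 1]), 3)  # smoothed v_t
    h, w = prev_gray.shape
    advanced = []
    for x, y in points:
        xi, yi = int(round(x)), int(round(y))  # rounded position of P_t
        if 0 <= xi < w and 0 <= yi < h:
            advanced.append((x + u[yi, xi], y + v[yi, xi]))
    return advanced
```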
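And a sketch of the step 5 to step 7 pipeline, assuming scikit-learn for the PCA, the GMM (fitted by EM), and the SVM. The Fisher vector keeps the gradients with respect to the Gaussian means and variances, which yields the 2 × d × K dimensionality stated above; the power and L2 normalizations are customary additions, not required by the text:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

d, K = 64, 32  # reduced descriptor dimension and Gaussian count (assumed values)

def fisher_vector(X, gmm):
    """Encode local descriptors X (n, d) as a 2*d*K Fisher vector from the
    gradients w.r.t. the GMM means and (diagonal) variances."""
    q = gmm.predict_proba(X)                     # (n, K) soft assignments
    n = X.shape[0]
    parts = []
    for k in range(K):
        diff = (X - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
        g_mu = (q[:, k:k + 1] * diff).sum(0) / (n * np.sqrt(gmm.weights_[k]))
        g_var = (q[:, k:k + 1] * (diff ** 2 - 1)).sum(0) / (n * np.sqrt(2 * gmm.weights_[k]))
        parts.extend([g_mu, g_var])
    fv = np.concatenate(parts)                   # length 2*d*K
    fv = np.sign(fv) * np.sqrt(np.abs(fv))       # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)     # L2 normalization

def train(desc_sets, labels):
    """desc_sets: one (n_i, D) raw descriptor array per video; labels: classes."""
    pca = PCA(n_components=d).fit(np.vstack(desc_sets))
    reduced = [pca.transform(s) for s in desc_sets]
    gmm = GaussianMixture(n_components=K, covariance_type="diag")
    gmm.fit(np.vstack(reduced))                  # EM training of the GMM
    fvs = np.stack([fisher_vector(r, gmm) for r in reduced])
    svm = LinearSVC().fit(fvs, labels)           # final SVM classifier
    return pca, gmm, svm
```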
In the above technical solution, the improved vibe method includes the following steps:
step 1.1, initializing a background, selecting the first n frames of a video by using a multi-frame averaging method to construct an initial background B0;
step 1.2, establishing a sample set M (x, y) { v1, v2,. once.vn }, of which vi is an 8-neighborhood random sample value of (x, y), and i is 1, 2, …, N, for each pixel (x, y) of the initial background B0;
step 1.3, calculating an ith frame image fi (i ═ 2, 3.. n):
TBi=FOTSU(abs(fi-Brd))
Figure BDA0002527913740000121
TFi=FOTSU(abs(fi-fi-1))
Ri=TFi+(1-a)·TBi
wherein B isrdDenotes the background of the rd sample in each sample set, rd being a randomly selected value from {1, 2.. N }, FOTSU(. to) represents the background segmentation threshold, TB, after foreground segmentation calculated using the OTSU methodiThe segmentation threshold, Inf, representing the calculation of the background difference result by the OTSU methodi(x, y) represents the binarization result of the ith frame image at (x, y), TFiSegmentation threshold, R, representing the result of calculating a frame difference by the OTSU methodiRepresenting the value of the ith frame radius threshold value R, α is a weighting coefficient, which is generally a few tenths of a day;
such as Infi(x,y) is 1, the following processing is performed:
step 1.3.1, judging whether the current pixel (x, y) is a background, and judging whether the current pixel is the background by calculating the similarity degree of the current pixel (x, y) and a corresponding sample set, wherein the specific calculation is as follows:
Figure BDA0002527913740000122
cntjthe judgment result of the similarity degree of the current pixel (x, y) and the jth background sample pixel in the background sample set is represented, and if the sum of the comparison results of the current pixel and all background pixel points in the background sample set is more than or equal to a threshold value T, the current pixel point is judged to be the background pixel; otherwise, it is a foreground pixel; f. ofiShowing the ith frame video frame to refer to the current video frame; dis represents solving the Euclidean distance between two pixels; v. ofjRepresenting the jth pixel point in the background sample set;
Figure BDA0002527913740000131
DBi(x, y) represents a judgment result that a pixel point (x, y) in the ith frame image is a foreground or background pixel point, and the current pixel is a foreground pixel, namely DBi (x, y) is 1;
the current pixel (x, y) is a background pixel, i.e., DBiWhen (x, y) is 0, performing background updating with the probability of 1/theta, wherein the background updating is divided into two parts, namely current sample set updating and neighborhood updating, and theta is a time sampling factor;
one is sample set update, with the pixel value f of the current pixel (x, y)i(x, y) replacing one randomly selected sample v in the corresponding background sample set M (x, y)id is vid=fi(x,y);
Secondly, neighborhood updating, namely randomly selecting a current pixel (x) at a position in 8 neighborhoods of the current pixel (x, y)1,y1) Then, the corresponding background sample set M (x) is obtained1,y1) Selecting a sample v for Chinese shorthand1Using the current imageSubstitution of the element is vi=fi(x,y)。

Claims (4)

1. A giant panda pacing behavior tracking analysis method, characterized by comprising the following steps:
step 1, inputting a panda video image and extracting the foreground object from the video frame with an improved ViBe method;
step 2, performing morphological erosion and dilation on the extracted foreground template;
step 3, taking the minimum circumscribed rectangle of the contour with the largest connected-region area as the target region, and taking the centroid of the target region as the position of the target;
step 4, performing the same operations as steps 1 to 3 on each frame of image, outputting the motion trajectory and motion speed of the panda, and analyzing the behaviors.
2. The giant panda pacing behavior tracking analysis method according to claim 1, wherein the improved ViBe method comprises the following steps:
step 1.1, initializing the background by constructing an initial background B0 from the first n frames of the video with the multi-frame averaging method;
step 1.2, for each pixel (x, y) of the initial background B0, establishing a sample set M(x, y) = {v_1, v_2, ..., v_N}, where v_i is a random sample from the 8-neighborhood of (x, y), i = 1, 2, ..., N;
step 1.3, for the ith frame image f_i (i = 2, 3, ..., n), computing:

TB_i = F_OTSU(abs(f_i - B_rd))

Inf_i(x, y) = 1 if abs(f_i(x, y) - B_rd(x, y)) > TB_i, otherwise 0

TF_i = F_OTSU(abs(f_i - f_{i-1}))

R_i = α·TF_i + (1 - α)·TB_i

where B_rd denotes the background composed of the rd-th sample of each sample set, rd being a value randomly selected from {1, 2, ..., N}; F_OTSU(·) denotes the segmentation threshold computed with the OTSU method; TB_i denotes the threshold obtained by applying the OTSU method to the background-difference result; Inf_i(x, y) denotes the binarization result of the ith frame image at (x, y); TF_i denotes the threshold obtained by applying the OTSU method to the frame-difference result; R_i denotes the value of the radius threshold R for the ith frame; and α is a weighting coefficient, generally a few tenths (a value between 0 and 1);

if Inf_i(x, y) = 1, the following processing is performed:

step 1.3.1, judging whether the current pixel (x, y) is background by computing its degree of similarity to the corresponding sample set, calculated as:

cnt_j = 1 if dis(f_i(x, y), v_j) < R_i, otherwise 0

where cnt_j denotes the result of comparing the current pixel (x, y) with the jth background sample pixel in the background sample set; if the sum of the comparison results between the current pixel and all background sample pixels in the set is greater than or equal to a threshold T, the current pixel is judged to be a background pixel, otherwise a foreground pixel; f_i denotes the ith (current) video frame; dis denotes the Euclidean distance between two pixel values; and v_j denotes the jth sample in the background sample set;

DB_i(x, y) = 0 (background) if Σ_{j=1..N} cnt_j ≥ T, otherwise 1 (foreground)

where DB_i(x, y) denotes the judgment of whether pixel (x, y) in the ith frame image is a foreground or background pixel; DB_i(x, y) = 1 means the current pixel is a foreground pixel;

when the current pixel (x, y) is a background pixel, i.e., DB_i(x, y) = 0, the background is updated with probability 1/θ; the update comprises two parts, current sample set update and neighborhood update, where θ is a time sampling factor;

first, sample set update: the pixel value f_i(x, y) of the current pixel (x, y) replaces one randomly selected sample v_rd in the corresponding background sample set M(x, y), i.e., v_rd = f_i(x, y);

second, neighborhood update: a pixel (x_1, y_1) is randomly selected from the 8-neighborhood of the current pixel (x, y); one sample is then randomly selected from its corresponding background sample set M(x_1, y_1) and replaced with the current pixel value, i.e., set to f_i(x, y).
3. A method for identifying the bamboo eating and oestrus behaviors of pandas, characterized by comprising the following steps:
step 1, inputting a panda video image and extracting the foreground target from each video frame with an improved ViBe method to obtain a foreground target image;
step 2, constructing a multi-scale spatial pyramid in the foreground target image, obtaining candidate points for dense trajectories by dense sampling, and extracting dense trajectories at different spatial scales;
step 3, letting u_t denote the horizontal component and v_t the vertical component of the optical flow field, and ω_t = (u_t, v_t) the dense optical flow field between frame t and frame t+1; for a feature point P_t = (x_t, y_t) on frame t, the optical flow field ω_t is smoothed with a median filter M, and the position on frame t+1 corresponding to the point after smoothing is defined as:

P_{t+1} = (x_{t+1}, y_{t+1}) = (x_t, y_t) + (M ∗ ω_t)|_{(x̄_t, ȳ_t)}

where (x̄_t, ȳ_t) is the rounded position of (x_t, y_t), ω_t is the optical flow field, and M is the median filter kernel; the feature points tracked in subsequent frames are concatenated to form a motion trajectory (P_t, P_{t+1}, ...);
step 4, tracking the feature points in the optical flow field to form motion trajectories; to avoid tracking drift caused by long-term tracking, the tracking length is constrained to L; feature descriptors are constructed along the dense trajectories, with HOG and the trajectory shape collected as shape descriptors and HOF and MBH used as motion descriptors;
step 5, applying Principal Component Analysis (PCA) to reduce the dimensionality of the collected feature descriptors, mapping the data from a high-dimensional space to a low-dimensional space while preserving as much of the principal information as possible, and obtaining reduced feature descriptors of dimension d;
step 6, based on Fisher Vector feature coding and classification, modeling the local features with a Gaussian mixture model (GMM) with K Gaussian components and training the GMM on the local feature set with the expectation-maximization (EM) algorithm; then encoding the dimension-reduced feature descriptors with a Fisher Vector, giving an encoded feature dimension of 2 × d × K;
step 7, finally, feeding the encoded feature descriptors into an SVM classifier for classification.
4. The panda bamboo eating and oestrus behavior recognition method according to claim 3, wherein the improved ViBe method comprises the following steps:
step 1.1, initializing the background by constructing an initial background B0 from the first n frames of the video with the multi-frame averaging method;
step 1.2, for each pixel (x, y) of the initial background B0, establishing a sample set M(x, y) = {v_1, v_2, ..., v_N}, where v_i is a random sample from the 8-neighborhood of (x, y), i = 1, 2, ..., N;
step 1.3, for the ith frame image f_i (i = 2, 3, ..., n), computing:

TB_i = F_OTSU(abs(f_i - B_rd))

Inf_i(x, y) = 1 if abs(f_i(x, y) - B_rd(x, y)) > TB_i, otherwise 0

TF_i = F_OTSU(abs(f_i - f_{i-1}))

R_i = α·TF_i + (1 - α)·TB_i

where B_rd denotes the background composed of the rd-th sample of each sample set, rd being a value randomly selected from {1, 2, ..., N}; F_OTSU(·) denotes the segmentation threshold computed with the OTSU method; TB_i denotes the threshold obtained by applying the OTSU method to the background-difference result; Inf_i(x, y) denotes the binarization result of the ith frame image at (x, y); TF_i denotes the threshold obtained by applying the OTSU method to the frame-difference result; R_i denotes the value of the radius threshold R for the ith frame; and α is a weighting coefficient, generally a few tenths (a value between 0 and 1);

if Inf_i(x, y) = 1, the following processing is performed:

step 1.3.1, judging whether the current pixel (x, y) is background by computing its degree of similarity to the corresponding sample set, calculated as:

cnt_j = 1 if dis(f_i(x, y), v_j) < R_i, otherwise 0

where cnt_j denotes the result of comparing the current pixel (x, y) with the jth background sample pixel in the background sample set; if the sum of the comparison results between the current pixel and all background sample pixels in the set is greater than or equal to a threshold T, the current pixel is judged to be a background pixel, otherwise a foreground pixel; f_i denotes the ith (current) video frame; dis denotes the Euclidean distance between two pixel values; and v_j denotes the jth sample in the background sample set;

DB_i(x, y) = 0 (background) if Σ_{j=1..N} cnt_j ≥ T, otherwise 1 (foreground)

where DB_i(x, y) denotes the judgment of whether pixel (x, y) in the ith frame image is a foreground or background pixel; DB_i(x, y) = 1 means the current pixel is a foreground pixel;

when the current pixel (x, y) is a background pixel, i.e., DB_i(x, y) = 0, the background is updated with probability 1/θ; the update comprises two parts, current sample set update and neighborhood update, where θ is a time sampling factor;

first, sample set update: the pixel value f_i(x, y) of the current pixel (x, y) replaces one randomly selected sample v_rd in the corresponding background sample set M(x, y), i.e., v_rd = f_i(x, y);

second, neighborhood update: a pixel (x_1, y_1) is randomly selected from the 8-neighborhood of the current pixel (x, y); one sample is then randomly selected from its corresponding background sample set M(x_1, y_1) and replaced with the current pixel value, i.e., set to f_i(x, y).
CN202010510090.4A 2020-06-08 2020-06-08 Giant panda pacing, bamboo eating and estrus behavior tracking analysis method Active CN111666881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010510090.4A CN111666881B (en) 2020-06-08 2020-06-08 Giant panda pacing, bamboo eating and estrus behavior tracking analysis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010510090.4A CN111666881B (en) 2020-06-08 2020-06-08 Giant panda pacing, bamboo eating and estrus behavior tracking analysis method

Publications (2)

Publication Number Publication Date
CN111666881A true CN111666881A (en) 2020-09-15
CN111666881B CN111666881B (en) 2023-04-28

Family

ID=72386859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010510090.4A Active CN111666881B (en) 2020-06-08 2020-06-08 Giant panda pacing, bamboo eating and estrus behavior tracking analysis method

Country Status (1)

Country Link
CN (1) CN111666881B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016527A (en) * 2020-10-19 2020-12-01 成都大熊猫繁育研究基地 Panda behavior recognition method, system, terminal and medium based on deep learning
CN113963298A (en) * 2021-10-25 2022-01-21 东北林业大学 Wild animal identification tracking and behavior detection system, method, equipment and storage medium based on computer vision

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2015252A1 (en) * 2007-07-08 2009-01-14 Université de Liège Visual background extractor
WO2013149966A1 (en) * 2012-04-02 2013-10-10 Thomson Licensing Method for calibration free gaze tracking using low cost camera
CN103125443A (en) * 2013-03-06 2013-06-05 成都大熊猫繁育研究基地 Method for timely releasing panda pairs to allow natural mating
CN104331905A (en) * 2014-10-31 2015-02-04 浙江大学 Surveillance video abstraction extraction method based on moving object detection
CA3001063A1 (en) * 2015-10-14 2017-04-20 President And Fellows Of Harvard College A method for analyzing motion of a subject representative of behaviour, and classifying animal behaviour
CN105741319A (en) * 2016-01-22 2016-07-06 浙江工业大学 Improved visual background extraction method based on blind updating strategy and foreground model
CN106571014A (en) * 2016-10-24 2017-04-19 上海伟赛智能科技有限公司 Method for identifying abnormal motion in video and system thereof
US20190197696A1 (en) * 2017-08-04 2019-06-27 Université de Liège Foreground and background detection method
CN108198207A (en) * 2017-12-22 2018-06-22 湖南源信光电科技股份有限公司 Multiple mobile object tracking based on improved Vibe models and BP neural network
CN108346160A (en) * 2017-12-22 2018-07-31 湖南源信光电科技股份有限公司 The multiple mobile object tracking combined based on disparity map Background difference and Meanshift
CN108230364A (en) * 2018-01-12 2018-06-29 东南大学 A kind of foreground object motion state analysis method based on neural network
CN109377517A (en) * 2018-10-18 2019-02-22 哈尔滨工程大学 A kind of animal individual identifying system based on video frequency tracking technology
CN109614928A (en) * 2018-12-07 2019-04-12 成都大熊猫繁育研究基地 Panda recognition algorithms based on limited training data
CN109670440A (en) * 2018-12-14 2019-04-23 央视国际网络无锡有限公司 The recognition methods of giant panda face and device
CN110060278A (en) * 2019-04-22 2019-07-26 新疆大学 The detection method and device of moving target based on background subtraction
CN111144236A (en) * 2019-12-10 2020-05-12 华南师范大学 Method, system and storage medium for analyzing mating behavior of cockroach
CN110931024A (en) * 2020-02-18 2020-03-27 成都大熊猫繁育研究基地 Audio-based prediction method and system for natural mating result of captive pandas

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016527A (en) * 2020-10-19 2020-12-01 成都大熊猫繁育研究基地 Panda behavior recognition method, system, terminal and medium based on deep learning
CN112016527B (en) * 2020-10-19 2022-02-01 成都大熊猫繁育研究基地 Panda behavior recognition method, system, terminal and medium based on deep learning
CN113963298A (en) * 2021-10-25 2022-01-21 东北林业大学 Wild animal identification tracking and behavior detection system, method, equipment and storage medium based on computer vision

Also Published As

Publication number Publication date
CN111666881B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN111178197B (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
Yin et al. Using an EfficientNet-LSTM for the recognition of single Cow’s motion behaviours in a complicated environment
Chen et al. Recognition of feeding behaviour of pigs and determination of feeding time of each pig by a video-based deep learning method
Han et al. Comprehensive machine learning analysis of Hydra behavior reveals a stable basal behavioral repertoire
CN107145862B (en) Multi-feature matching multi-target tracking method based on Hough forest
Song et al. Unsupervised Alignment of Actions in Video with Text Descriptions.
CN111738218B (en) Human body abnormal behavior recognition system and method
Hu et al. Dual attention-guided feature pyramid network for instance segmentation of group pigs
CN111666881A (en) Giant panda pacing, bamboo eating and oestrus behavior tracking analysis method
CN115830078B (en) Multi-target pig tracking and behavior recognition method, computer equipment and storage medium
CN109902564A (en) A kind of accident detection method based on the sparse autoencoder network of structural similarity
CN110490055A (en) A kind of Weakly supervised Activity recognition localization method and device recoded based on three
CN115830490A (en) Multi-target tracking and behavior statistical method for herd health pigs
Li et al. Y-BGD: Broiler counting based on multi-object tracking
Lin et al. Bird posture recognition based on target keypoints estimation in dual-task convolutional neural networks
Murari Recurrent 3D convolutional network for rodent behavior recognition
Zhang et al. Detecting kangaroos in the wild: the first step towards automated animal surveillance
Perez et al. CNN-based action recognition and pose estimation for classifying animal behavior from videos: A survey
Modolo et al. Learning semantic part-based models from google images
Jiang et al. Detecting and tracking of multiple mice using part proposal networks
El-Henawy et al. A new muzzle classification model using decision tree classifier
Beddiar et al. Vision based abnormal human activities recognition: An overview
Li et al. Recognition of fine-grained sow nursing behavior based on the SlowFast and hidden Markov models
Zhang et al. EORNet: An improved rotating box detection model for counting juvenile fish under occlusion and overlap
CN110751034B (en) Pedestrian behavior recognition method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant