CN113610892A - Real-time imaging system for detecting abnormal activities of livestock based on artificial intelligence - Google Patents


Info

Publication number: CN113610892A
Application number: CN202110882433.4A
Authority: CN (China)
Prior art keywords: outlier, individual, rumination, time, frame
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Inventors: 张丽娇, 潘磊, 廖晓君, 逯勇, 魏小霜, 陈磊, 杨映红, 宋伟伟, 王姜飞, 姚巧珍
Original and current assignee: Individual
Application filed by Individual; priority to CN202110882433.4A; published as CN113610892A

Classifications

    • G06T 7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2415: Pattern recognition; classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus false rejection rate
    • G06N 3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Neural networks; learning methods
    • G06T 3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 7/10: Image analysis; segmentation; edge detection
    • G06T 7/187: Segmentation or edge detection involving region growing, region merging or connected component labelling
    • G06T 2207/20081: Indexing scheme for image analysis or image enhancement; training; learning

Abstract

The invention provides a real-time imaging system, based on artificial intelligence, for detecting abnormal activities of livestock. The system comprises a rumination duration statistics module, a rumination abnormality detection module and a real-time imaging module. The rumination duration statistics module indirectly acquires the actual rumination duration of an outlier individual in a ruminant herd from the grazing duration and the durations of all behaviors other than rumination; the rumination abnormality detection module compares the predicted rumination duration with the actual rumination duration to detect whether the outlier individual's rumination is abnormal; and the real-time imaging module tracks and images, in real time, outlier individuals whose rumination is abnormal. Because the rumination duration is derived indirectly by counting the durations of the other behaviors, and abnormal activity is judged by combining the predicted and counted rumination durations, the animals need not wear sensors, which saves cost.

Description

Real-time imaging system for detecting abnormal activities of livestock based on artificial intelligence
Technical Field
The invention relates to the field of livestock raising, in particular to a real-time imaging system for detecting abnormal activities of livestock based on artificial intelligence.
Background
Existing methods that detect abnormal livestock activity from rumination duration generally place a sensor near the animal's mouth to collect a sound signal, then extract the rumination segments from that signal according to acoustic features so as to obtain the rumination duration. This approach requires every animal to wear a sensor, is costly, and can reduce the animal's willingness to eat.
Disclosure of Invention
In order to solve the above problems, the present invention provides an artificial-intelligence-based real-time imaging system for detecting abnormal activities of livestock, comprising:
a rumination duration statistics module, which indirectly acquires the actual rumination duration of an outlier individual in a ruminant herd from the grazing duration and the durations of all behaviors other than rumination; the other behaviors include eating, and the eating duration is obtained as follows: an eating image sequence of the outlier individual is intercepted from the outlier individual's image sequence based on the occurrence probability of the individual's head state, and the eating duration is computed from that eating image sequence;
a rumination abnormality detection module, which analyzes the predicted rumination duration against the actual rumination duration to detect whether the outlier individual's rumination is abnormal;
and a real-time imaging module, which tracks and images, in real time, outlier individuals whose rumination is abnormal.
Preferably, the system further identifies the outlier individual by:
analyzing the movement tracks of the ruminant individuals to obtain a main connected domain and discrete connected domains;
and identifying the outlier individual corresponding to a discrete connected domain according to how far the moving direction of that discrete connected domain deviates from the moving direction of the main connected domain.
Preferably, the predicted rumination duration of the outlier individual is obtained by fitting its age and the pasture quality with a neural network.
Preferably, the other behaviors further include normal standing, normal lying, normal walking and drinking.
Preferably, intercepting the eating image sequence of the outlier individual from its image sequence specifically comprises:
determining a head-down starting frame according to the head state of the outlier individual;
and, starting from the head-down starting frame, taking images one by one from the outlier individual's image sequence and adding them to an analysis subsequence; after each addition, the occurrence probability of the analysis subsequence is calculated from the occurrence probability of the head state in each frame, and when that probability meets a preset condition, the eating image sequence is intercepted from the analysis subsequence.
Preferably, the occurrence probability of the head state is obtained as follows:
an occurrence-probability calculation model computes the probability of the head state in the current outlier individual image from the presence of pasture around the individual in that image, the accumulated moving distance of the individual up to that image, and the accumulated time difference between the current frame and the end frame of the previous eating image sequence.
Preferably, whether the outlier individual is in the eating state is judged from its head state; based on the current frame and the head-down starting frame of the analysis subsequence, the accumulated actual moving distance of the individual at the current frame is acquired, along with the accumulated actual time difference between the current frame and the end frame of the previous eating image sequence;
the accumulated moving distance is then obtained by offsetting the accumulated actual moving distance according to the number of frames in the eating state before the current frame in the analysis subsequence and the distance that the energy gained from one frame of eating can offset;
and the accumulated time difference is obtained by offsetting the accumulated actual time difference according to the same frame count and the time difference that the energy gained from one frame of eating can offset.
Preferably, the head state of the outlier individual in a frame is judged from the head-and-neck length and the torso length of the individual in the image.
Preferably, the normal standing duration of the outlier individual is predicted by a standing-duration prediction model from the pasture proportion of the grazing area, the pasture variety, the temperature, the grazing intensity, the distance between the pasture and the captive area, and the individual's amount of exercise within a preset time period;
and the normal lying duration of the outlier individual is predicted by a lying-duration prediction model from the pasture proportion, the pasture distribution and the individual's amount of exercise within the preset time period.
The invention has the beneficial effects that:
1. The rumination duration is obtained indirectly by counting the durations of all behaviors other than rumination, and abnormal livestock activity is judged by combining the predicted and counted rumination durations; therefore the animals need not wear sensors, which saves cost.
2. The eating image sequence of an outlier individual is obtained from the occurrence probability of the individual's head state across consecutive frames, and the eating duration is counted from the length of that sequence together with the sampling interval; because this follows the natural rhythm of livestock eating, the obtained eating duration is more accurate.
3. Based on objective factors such as the pasture proportion of the grazing area, together with the subjective factor of the outlier individual's amount of exercise, a neural network accurately predicts the individual's normal standing and lying durations, which reduces the error in the subsequent acquisition of the actual rumination duration.
Drawings
Fig. 1 is a system configuration diagram in an embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, a detailed description is given below with reference to the accompanying examples. In the following description, different occurrences of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
The purpose of the invention is to detect abnormal rumination behavior of ruminants during grazing and to image it in real time.
The specific scenario addressed by the invention is as follows: a ruminant grazing scene in which the grazing area and pasture type are known. Vision sensors and a temperature sensor are deployed in the grazing area; the vision sensors perform real-time detection and imaging, while the temperature sensor acquires the area's temperature. The multiple vision sensors are carried by unmanned aerial vehicles and collect bird's-eye-view images of the grazing area; they work cooperatively with identical poses, at the same height but at different positions, so that their images can be stitched into a real-time panoramic image of the area. Image processing is carried out on a deployed server: the vision sensors collect images and transmit them to the server, which performs the analysis and calculation.
Example (b):
This embodiment provides an artificial-intelligence-based real-time imaging system for detecting abnormal livestock activity. As shown in Figure 1, the system comprises an outlier individual identification module, a rumination duration prediction module, a rumination duration statistics module, a rumination abnormality detection module and a real-time imaging module, where:
The outlier individual identification module identifies outlier individuals in a ruminant herd during grazing. Specifically:
a) Acquire the movement tracks of the ruminant herd and of individual ruminants:
the images collected by the vision sensors are subjected to image splicing to obtain a real-time panoramic image of a grazing area, the pose of each vision sensor is known, so that a homography matrix between any two vision sensors is easily obtained, the panoramic image splicing is realized through projection transformation, the realization of feature point matching is not needed, the initial pose of each vision sensor is required to be fixed, and uncontrollable factors such as self vibration and abnormal inclination of the unmanned aerial vehicle are not considered.
The real-time panoramic image is input to a keypoint extraction network, which outputs a keypoint thermodynamic diagram of the same size as the panoramic image. In this embodiment the keypoint extraction network adopts an encoder-decoder structure. It is trained as follows: a training data set is built from multiple grazing-area panoramic images; the centre point of each ruminant's torso is manually annotated as a keypoint; hotspots generated from the keypoints by Gaussian kernel convolution serve as labels; and a mean-square-error loss function is used.
The successive keypoint thermodynamic diagrams are then superposed at one frame per second using a forgetting coefficient: Y = α·Y′ + (1 − α)·y, where Y is the superposition result after the current frame is added, Y′ is the superposition result before the current frame, y is the current frame's thermodynamic diagram, and 1 − α is the forgetting coefficient; in this application α = 0.95.
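The forgetting-coefficient superposition above is an exponential moving average over heatmap frames. A minimal sketch, assuming heatmaps are stored as nested lists of floats (the function names are illustrative, not from the patent):

```python
def ema_update(prev, frame, alpha=0.95):
    """Superpose one keypoint heatmap onto the running result.

    Implements Y = alpha * Y_prev + (1 - alpha) * y, where 1 - alpha is
    the forgetting coefficient and alpha = 0.95 as in the embodiment.
    """
    return [[alpha * p + (1.0 - alpha) * f for p, f in zip(prow, frow)]
            for prow, frow in zip(prev, frame)]


def superpose(frames, alpha=0.95):
    """Fold a one-frame-per-second heatmap sequence into one accumulation."""
    result = frames[0]
    for frame in frames[1:]:
        result = ema_update(result, frame, alpha)
    return result
```

With alpha = 0.95, each new frame contributes only 5% of its heat, so pixels where animals stay accumulate high values while transient track pixels stay low.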
Double-threshold processing is then applied to the superposition result, with the first heat threshold set to 0.04 and the second to 0.8: pixels whose values fall in the interval [0.04, 0.8) are track pixels, and pixels whose values fall in [0.8, 1] are stay pixels.
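The double-threshold rule can be sketched directly; note the source gives two closed intervals that meet at 0.8, so this sketch assigns the boundary value 0.8 to the stay class (an assumption):

```python
def classify_pixel(value, t1=0.04, t2=0.8):
    """Label one superposed-heatmap pixel as track, stay, or background.

    [t1, t2) -> track pixel; [t2, 1] -> stay pixel; below t1 -> background.
    """
    if t1 <= value < t2:
        return "track"
    if value >= t2:
        return "stay"
    return "background"
```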
Connected-domain analysis is performed on the track pixels: the connected domain with the most pixels is the main connected domain, and the remaining ones are discrete connected domains. The track line segment of each connected domain is obtained by the least-squares method and mapped onto the thermodynamic superposition result; the heat values of the segment's two endpoints are read, and the direction from the endpoint with the smaller heat value towards the endpoint with the larger one is taken as the moving direction. Combining each track line segment with its moving direction yields the connected domain's track vector: the main connected domain's track vector represents the overall movement track of the ruminant herd, while the track vectors of the discrete connected domains represent the movement tracks of individual ruminants.
b) The outlier individual corresponding to a discrete connected domain is identified from how far the domain's moving direction deviates from the main connected domain's moving direction; here an outlier individual means one that strays because of its own condition, such as physical discomfort.
The included angle between the discrete connected domain's track vector and the main connected domain's track vector is calculated; this angle characterizes the degree of deviation. When the obtained track angle is smaller than a preset angle threshold, the ruminant individual corresponding to the discrete connected domain is judged to be an outlier individual.
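The angle test can be sketched with plain vector arithmetic; the 30-degree threshold is an assumed value, and, following the embodiment's stated criterion, a small included angle (moving with the herd yet spatially separated) flags the individual:

```python
import math


def track_angle(v_main, v_disc):
    """Included angle (degrees) between the herd's track vector and an individual's."""
    dot = v_main[0] * v_disc[0] + v_main[1] * v_disc[1]
    n1 = math.hypot(v_main[0], v_main[1])
    n2 = math.hypot(v_disc[0], v_disc[1])
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.degrees(math.acos(cos_a))


def is_outlier(v_main, v_disc, threshold_deg=30.0):
    """Outlier when the track angle is below the preset threshold, per the embodiment."""
    return track_angle(v_main, v_disc) < threshold_deg
```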
The rumination duration prediction module obtains the predicted rumination duration of the outlier individual from its age and the pasture quality:
a) Acquiring the age of the outlier individual: the individual's bounding box is obtained; the body width w is taken from the box width, and the torso length l is acquired (various methods exist for obtaining the torso length and are not enumerated here). To keep the estimated age accurate, the unstitched image from the vision sensor closest to the outlier individual is used, and the distance c between the individual and the origin of the image coordinate system is read from the image coordinates; the moving speed v is calculated from the individual's positions across multiple unstitched frames. The prediction vector [l, w, c, v] is then fed to a trained fully connected network for feature mapping, and a softmax function over the mapped features yields the individual's age bracket.
The fully connected network is trained on a data set built from many groups of prediction vectors for different types of ruminants, with the corresponding age brackets as labels, using a cross-entropy loss function.
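The trained network's weights are not given, but the final softmax step that turns the network's output logits into an age bracket can be sketched; the bracket labels and function names below are illustrative, not from the patent:

```python
import math


def softmax(logits):
    """Numerically stable softmax over the fully connected network's logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]


def predict_age_bracket(logits, brackets):
    """Return the age bracket with the highest softmax probability."""
    probs = softmax(logits)
    return brackets[probs.index(max(probs))]
```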
b) Obtaining the pasture quality: in this embodiment the pasture quality is fuzzily graded into four levels; pasture-quality grading is a known technique and can be realized with an image classification model, for example a neural network classifier.
c) The age bracket and pasture quality corresponding to the outlier individual are input to a rumination duration prediction network, which outputs the predicted rumination duration. This network is a regression network; for training, each (age bracket, pasture quality) pair forms one sample, a training data set is built from many such pairs, the corresponding rumination duration of the ruminant (in minutes) is the label, and a mean-square-error loss function is used.
The rumination duration statistics module acquires the actual rumination duration of the outlier individual after it leaves the herd, from the grazing duration and the durations of the other behaviors, where the other behaviors are normal standing, normal lying, normal walking, drinking and eating. Note that the grazing duration in the present invention means the grazing duration after the ruminant left the herd: the total grazing duration is obtained in real time, the pre-departure portion is subtracted from it, and the resulting grazing durations may differ between outlier individuals.
Specifically, the method for acquiring the duration of each behavior in other behaviors comprises the following steps:
a) Normal standing duration: the normal standing duration of the outlier individual is predicted from influence features such as the pasture proportion, pasture variety, temperature, grazing intensity, distance between the pasture and the captive area, and the individual's amount of exercise within a preset time period. Specifically, all influence features form a first one-dimensional vector, which is fed to a standing-duration prediction network (a fully connected network) that outputs the predicted normal standing duration. For training, a data set is built from many first one-dimensional vectors, the labels are the corresponding manually observed normal standing durations, and the loss function is mean square error.
The method for acquiring the influence characteristics comprises the following steps:
and processing the real-time panoramic image by utilizing the first semantic segmentation network to obtain a pasture segmentation map, counting the number of pixels of which the pixel categories are pasture in the pasture segmentation map, wherein the ratio of the counted number of the pixels of the pasture to the total number of the pixels in the pasture segmentation map is the pasture proportion.
The pasture variety, the grazing intensity and the distance between the pasture and the captive area are obtained by manual statistics; the temperature is acquired in real time by the temperature sensor.
In this embodiment the preset time period is the total grazing duration, acquired in real time. The amount of exercise is obtained from the outlier individual's moving distance and moving speed within that duration. Specifically, the individual's complete image sequence over the total grazing duration is acquired; the individual exhibits multiple movement behaviors in that time, each running from the moment it starts moving to the moment it stops, and these behaviors are detected from the image sequence (image-based detection of an animal starting and stopping movement is well known). Each movement behavior corresponds to one movement time period, and each period is further subdivided according to the individual's speed changes: the difference between the speed at the current moment and the initial speed of the period is computed, and when this difference exceeds a preset speed-difference threshold the period is split at the current moment; the speed at the next moment then becomes the new initial speed, and the comparison against the threshold continues. This finally yields one or more sub-movement periods, and the amount of exercise is obtained from the movement distance and speed of all sub-movement periods, using the following exercise-amount model:
M = β · Σ_{i=1}^{n} (u_i / U) · L_i
where M is the exercise-amount characterization value; u_i is the outlier individual's average moving speed in the i-th sub-movement period; U is the statistically obtained average moving speed of the ruminants; n is the number of sub-movement periods obtained during grazing; β is a fitting coefficient; and L_i is the individual's moving distance in the i-th sub-movement period. The moving distance L_i is obtained from the individual's image coordinates in the start and end frames of the i-th sub-movement period, and the average moving speed u_i from L_i together with the image sampling interval and the number of frames in that period. Note that, when fitting the exercise-amount model, this embodiment uses the average moving speeds and moving distances of all sub-movement periods of the ruminants over three days.
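The original patent gives the exercise-amount formula only as an embedded image, so the model below is a plausible reading that combines the described quantities (speed ratio u_i/U, distance L_i, fitting coefficient β); the period-splitting rule follows the text above. Both function names are illustrative:

```python
def split_by_speed(speeds, threshold):
    """Split one movement period into sub-periods by speed drift.

    A new sub-period starts whenever the speed differs from the current
    sub-period's initial speed by more than the threshold. Returns a list
    of (start_index, end_index) half-open ranges.
    """
    periods, start = [], 0
    for i in range(1, len(speeds)):
        if abs(speeds[i] - speeds[start]) > threshold:
            periods.append((start, i))
            start = i
    periods.append((start, len(speeds)))
    return periods


def motion_amount(sub_periods, herd_avg_speed, beta=1.0):
    """Assumed exercise-amount model: M = beta * sum_i (u_i / U) * L_i.

    sub_periods: list of (u_i, L_i) pairs, i.e. average speed and moving
    distance of each sub-movement period; herd_avg_speed: U.
    """
    return beta * sum((u / herd_avg_speed) * dist for u, dist in sub_periods)
```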
b) Normal lying duration: the normal lying duration of the outlier individual is predicted from the pasture proportion of the grazing area, the pasture distribution, and the individual's amount of exercise within the preset time period. Specifically, these three features form a second one-dimensional vector, which is fed to a lying-duration prediction network (a fully connected network) that outputs the predicted normal lying duration. For training, a data set is built from many second one-dimensional vectors, the labels are the corresponding manually observed normal lying durations, and the loss function is mean square error.
The pasture-distribution influence feature is acquired as follows: the mean of the horizontal and vertical coordinates of all pasture pixels in the pasture segmentation map gives a mean coordinate point, and the sum of the Euclidean distances from all pasture pixels to this mean point characterizes the pasture distribution.
It should be noted that, to keep the normal standing and lying durations reliable, this embodiment also sets an adjustment coefficient, suggested to be greater than 1 and set here to 1.2; multiplying the predicted normal standing duration and the predicted normal lying duration by this coefficient enlarges both.
It should be noted that normal standing and normal lying in the present invention mean that the outlier individual is only standing or only lying, with no other behavior occurring at the same time.
c) Drinking duration: the real-time panoramic image is processed by the first semantic segmentation network to obtain a water-area segmentation map. The first semantic segmentation network is trained as follows: panoramic images of several grazing areas, all of the same size, are collected to build a training data set; the pixels are annotated with classes including pasture, water area and ground; and a cross-entropy loss function is used.
After the outlier individual identification module identifies an outlier individual, the individual is tracked; in this embodiment tracking is realized by computing IoU between bounding boxes.
When the outlier individual's keypoint lies in the area beside the water and stays there for a while, the individual is judged to be drinking, and its dwell time in that area is the drinking duration; the waterside area is obtained by extracting the water-area pixels from the segmentation map and applying a limited dilation.
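The bounding-box IoU used for tracking in this embodiment is standard; a minimal sketch with boxes given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) bounding boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Tracking then associates each detection in the current frame with the previous-frame box of highest IoU; the patent does not detail the matching strategy, so that step is left out here.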
d) The eating duration is acquired as follows:
i) acquiring the head state of an outlier individual corresponding to each frame of outlier individual image in the outlier individual image sequence:
The multiple real-time panoramic images are semantically segmented by a second semantic segmentation network, and the outlier-individual pixels are extracted to obtain an outlier-individual segmentation map. From each segmentation map, the first minimum bounding rectangle of each outlier individual is obtained, along with the number of individual pixels on each of the rectangle's two short sides. A line equation y = ax + b is formed, with slope a equal to that of the line containing the short side; starting from the short side with the larger number of individual pixels, the offset b is adjusted so that the line translates gradually towards the other side, and the b value at the first abrupt change in the number of individual pixels on the line during this translation is recorded. The line at that abrupt change is the dividing line: the distance from the starting line to the dividing line is the individual's torso length f1, and the distance from the stopping line to the dividing line is the individual's head-and-neck length f2, which varies as the individual moves its head.
The head and neck length f2 is affected by the lateral head movement of an outlier, so the calculated head and neck length f2 needs to be corrected, and the specific correction process is as follows:
For the region between the terminating line and the segmentation line in the outlier-individual segmentation map, edge extraction is performed, and a second minimum circumscribed rectangle is generated from the outlier-individual pixel points in the region; this second minimum circumscribed rectangle is the minimum circumscribed rectangle of the head and neck of the outlier individual. Taking the straight line on which the long side of the first minimum circumscribed rectangle lies as the reference line, the minimum included angle θ between a side of the second minimum circumscribed rectangle and the reference line is obtained, and the head-and-neck length is corrected as

    f2' = f2 / cos θ
In the invention, the default lateral head angle of the outlier is not more than 45 degrees.
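A minimal sketch of the head-and-neck correction; the cos-based form f2' = f2/cos θ is an assumption standing in for the source's unrendered formula, while the 45° cap follows the stated default:

```python
import math

# Sketch of the head-and-neck correction.  The cos-based form
# f2' = f2 / cos(theta) is an assumption standing in for the source's
# unrendered formula; the 45-degree cap follows the stated default.
def corrected_neck_length(f2, theta_deg):
    theta = math.radians(min(theta_deg, 45.0))
    return f2 / math.cos(theta)
```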
The head state of the outlier individual in each frame is judged from the torso length and the corrected head-and-neck length: when

    f2' / f1

is lower than the preset ratio threshold, the head state of the outlier individual in that frame is judged to be the head-down state, i.e. the outlier individual is in the eating state.
In addition, the method considers that judging the eating state of the outlier individual from its head state alone may cause misjudgment when the ground of the grazing area is uneven; in particular, when the body of the outlier individual is at a concave position and the head is at a convex position while eating, the calculated ratio

    f2' / f1

may be greater than or equal to the preset ratio threshold. Therefore, when the ratio calculated from a frame of the outlier-individual image is greater than or equal to the preset ratio threshold, a further judgment is made: the union region of the second minimum circumscribed rectangles over n consecutive frames of outlier-individual images starting from that frame is obtained as the head activity region of the outlier individual, and similarly the union region of the third minimum circumscribed rectangles over the same n consecutive frames is obtained as the torso activity region; n is 5 in the embodiment. If the area ratio of the head activity region to the torso activity region is greater than a preset area threshold, the head state of the outlier individual in that frame is judged to be the head-down state, i.e. the outlier individual is in the eating state.
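The activity-region judgment can be sketched with axis-aligned boxes; the source uses minimum circumscribed rectangles, which may be rotated, and the 0.5 default area threshold here is an assumption:

```python
# Sketch of the activity-region judgment with axis-aligned boxes (x0, y0,
# x1, y1).  The source uses minimum circumscribed rectangles, which may be
# rotated; the 0.5 default area threshold is an assumption.
def union_rect(rects):
    """Bounding box of the per-frame rectangles (the 'union region')."""
    xs0, ys0, xs1, ys1 = zip(*rects)
    return min(xs0), min(ys0), max(xs1), max(ys1)

def area(rect):
    x0, y0, x1, y1 = rect
    return max(0, x1 - x0) * max(0, y1 - y0)

def is_head_down(head_rects, torso_rects, area_thresh=0.5):
    """Head moves while the torso stays put -> head-down (eating)."""
    return area(union_rect(head_rects)) / area(union_rect(torso_rects)) > area_thresh
```

The intuition: while grazing on uneven ground the head sweeps over a wide region but the torso barely moves, so a large head-to-torso activity-area ratio still indicates eating.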
ii) intercepting the outlier individual eating image sequence in the outlier individual image sequence based on the head state of the outlier individual, and obtaining the eating time length according to the length of the outlier individual eating image sequence and the image sampling interval.
Specifically, the process of intercepting the outlier-individual eating image sequence is as follows. A head-down start frame is determined from the head state of the outlier individual. Starting from the head-down start frame, outlier-individual images are taken from the outlier-individual image sequence in order and added to an analysis subsequence. After each addition, the probability that the analysis subsequence occurs is calculated from the occurrence probability of the head state of the outlier individual in each frame; specifically, the product of the occurrence probabilities of the head states over all frames of the analysis subsequence is taken as the probability that the subsequence occurs. When this probability meets the preset condition, i.e. the probability product is less than or equal to the probability threshold, the corresponding outlier-individual image is taken as the joining-end frame, and the image sequence from the head-down start frame to the frame before the joining-end frame is taken as the analysis subsequence. The outlier-individual eating image sequence is then intercepted from the analysis subsequence: the last frame in the analysis subsequence whose head state is the head-down state is taken as the eating end frame, and the outlier-individual eating image sequence is obtained from the head-down start frame and the eating end frame of the analysis subsequence. In the embodiment the probability threshold is set to 3×10⁻⁴; the implementer can adjust the probability threshold according to the actual situation.
Combining the product of the occurrence probabilities with the probability threshold to obtain the outlier-individual eating image sequence conforms to the natural law of livestock eating: the longer the eating has lasted, the smaller the possibility of continuing to eat.
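The interception logic — grow the analysis subsequence until the probability product drops to the threshold, then cut at the last head-down frame — can be sketched as follows, with head states reduced to 'down'/'up' strings:

```python
# Sketch of the interception step: grow the analysis subsequence from the
# head-down start frame, multiply per-frame head-state probabilities, stop
# when the product reaches the threshold, then cut at the last head-down
# frame.  Head states are reduced to 'down'/'up' strings here.
def intercept_feeding(frames, p_head_down, threshold=3e-4):
    prod = 1.0
    end = len(frames)
    for i, state in enumerate(frames):
        p = p_head_down(i) if state == 'down' else 1.0 - p_head_down(i)
        prod *= p
        if prod <= threshold:
            end = i  # joining-end frame (excluded from the subsequence)
            break
    sub = frames[:end]
    last_down = max((i for i, s in enumerate(sub) if s == 'down'), default=None)
    return sub[:last_down + 1] if last_down is not None else []
```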
Preferably, the occurrence probability of the head state of the outlier individual in each frame of the outlier-individual image is calculated as follows: the occurrence probability of the head state in the current outlier-individual image is calculated with an occurrence probability calculation model from the presence of pasture around the outlier individual in the current outlier-individual image, the accumulated moving distance of the outlier individual corresponding to the current outlier-individual image, and the accumulated time difference between the current frame and the end frame of the previous outlier-individual eating image sequence. The occurrence probability calculation model is specifically:
    P1(x) = f1(x) · f2(g(x)) · f3(h(x))
f1(x) denotes the presence of pasture around the outlier individual in the x-th frame outlier-individual image: if pasture pixel points exist among the 8 pixel points adjacent to the key point of the outlier individual in the current frame, f1(x) is 1; otherwise f1(x) is 0.
    f2(g(x)) = e^(−σ1·g(x)),   f3(h(x)) = e^(−σ2·h(x))
σ1 and σ2 are adjustment coefficients; in the embodiment σ1 takes the value 1/256 and σ2 takes the value 1/15. g(x) denotes the accumulated moving distance of the outlier individual corresponding to the current x-th frame outlier-individual image, obtained by offsetting the accumulated actual moving distance G(x) according to the number r of outlier-individual image frames in the eating state before the current x-th frame in the analysis subsequence and the moving distance k1 that the energy obtained from one frame of eating behavior can offset; specifically, g(x) = G(x) − r·k1. h(x) denotes the accumulated time difference, obtained by offsetting the accumulated actual time difference H(x) according to the number r of outlier-individual image frames in the eating state before the current frame in the analysis subsequence and the time difference k2 that the energy obtained from one frame of eating behavior can offset; specifically, h(x) = H(x) − r·k2. The units of g(x) and G(x) are pixels.
k1 is determined from the resolution of the vision sensor and the flight height of the unmanned aerial vehicle, and k2 from the sampling interval of the vision sensor. In the embodiment, the resolution of the vision sensor is 42.4 megapixels, the sampling interval is one frame per second, and the flight height of the unmanned aerial vehicle is 200 meters. In images acquired at this height, one pixel represents 6 centimeters, and the actual moving distance offset by one minute of eating is 6 meters, i.e. 100 pixels; at one frame per second, 100 pixels per minute corresponds to 5/3 pixels per frame, so k1 takes the value 5/3, and k2 takes the value 1/3. The value of k1 is variable, and the implementer can determine the specific value of k1 according to the resolution of the vision sensor actually used and the flight height of the unmanned aerial vehicle. The accumulated actual moving distance G(x) of the outlier individual in the current frame is obtained from the current frame and the head-down start frame of the analysis subsequence, in units of pixel distance; the accumulated actual time difference H(x) between the two frames is obtained from the current frame and the end frame of the previous outlier-individual eating image sequence, in seconds. For each outlier-individual eating image sequence, g(x) and h(x) need to be reset to 0.
It should be noted that the occurrence probabilities of all head states in each frame of the outlier-individual image sum to 1: P1(x) represents the occurrence probability of the head-down state in the current x-th frame outlier-individual image, and 1 − P1(x) represents the occurrence probability of the non-head-down state.
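A sketch of the occurrence-probability model. f1 follows the source (pasture in the key point's 8-neighbourhood → 1); the exact forms of f2 and f3 are not rendered in the source, so a decaying exponential with the stated coefficients σ1 = 1/256 and σ2 = 1/15 is assumed here, matching the rule that a longer bout is less likely to continue:

```python
import math

# Sketch of the occurrence-probability model P1(x) = f1(x)*f2(g(x))*f3(h(x)).
# f1 follows the source (pasture in the key point's 8-neighbourhood -> 1).
# The exact forms of f2 and f3 are not rendered in the source; a decaying
# exponential with the stated coefficients sigma1 = 1/256 and sigma2 = 1/15
# is assumed, matching the rule that a longer bout is less likely to continue.
def p_head_down(pasture_nearby, G, H, r, k1=5/3, k2=1/3,
                sigma1=1/256, sigma2=1/15):
    f1 = 1.0 if pasture_nearby else 0.0
    g = max(0.0, G - r * k1)   # g(x) = G(x) - r*k1, distance offset (pixels)
    h = max(0.0, H - r * k2)   # h(x) = H(x) - r*k2, time offset (seconds)
    return f1 * math.exp(-sigma1 * g) * math.exp(-sigma2 * h)
```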
After the outlier-individual eating image sequence is obtained each time, it needs to be verified to judge whether it really is an image sequence corresponding to eating behavior. The specific verification method is as follows:
acquiring the actual eating trajectory of the outlier individual during eating according to the outlier-individual eating image sequence;
obtaining the predicted eating trajectories: the image coordinates of the outlier individual in the start frame image of the outlier-individual eating image sequence are acquired and taken as the initial circle center. With the average torso length of the ruminant as radius, an initial semicircle is drawn along the orientation direction of the outlier individual, the orientation direction being taken as the direction in which the starting line points toward the segmentation line. Several search points are uniformly selected on the initial semicircle (5 search points in the embodiment). Each of the 5 search points is then taken in turn as a search circle center, and with the average torso length of the ruminant as radius, a search semicircle is drawn along the perpendicular to the tangent at the search circle center, in the direction away from the initial circle center. The step of uniformly selecting search points on each search semicircle and taking them as new search circle centers is repeated, yielding multiple search semicircles. Multiple predicted eating trajectories are then obtained from the initial circle center and the search circle centers: with the initial circle center as starting point, one search point is selected on each search semicircle in the order in which the semicircles appear, and a predicted eating trajectory is obtained from the starting point and the selected search points.
It should be noted that in the embodiment the visual field range of the outlier individual is assumed to be 180°, which is why semicircles are generated in the process of acquiring the predicted eating trajectories; the implementer may set the visual field range of the outlier individual according to the actual situation, for example to 320°.
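The search-semicircle construction can be sketched as follows; the `fov` parameter generalises the 180° visual-field assumption, and taking the heading "away from the previous centre" stands in for the perpendicular-to-the-tangent rule in the source:

```python
import math

# Sketch of the predicted-trajectory search.  `fov` generalises the
# 180-degree visual-field assumption; taking the heading away from the
# previous centre stands in for the perpendicular-to-the-tangent rule.
def semicircle_points(cx, cy, heading, radius, n=5, fov=math.pi):
    """n points spread over a fov-wide arc centred on `heading` (radians)."""
    return [(cx + radius * math.cos(heading - fov / 2 + fov * i / (n - 1)),
             cy + radius * math.sin(heading - fov / 2 + fov * i / (n - 1)))
            for i in range(n)]

def predicted_tracks(start, heading, radius, depth=3, n=5):
    """Enumerate candidate feeding tracks, one search point per semicircle."""
    tracks = [[start]]
    for _ in range(depth):
        grown = []
        for tr in tracks:
            cx, cy = tr[-1]
            px, py = tr[-2] if len(tr) > 1 else (cx - math.cos(heading),
                                                 cy - math.sin(heading))
            h = math.atan2(cy - py, cx - px)  # move away from previous centre
            for pt in semicircle_points(cx, cy, h, radius, n):
                grown.append(tr + [pt])
        tracks = grown
    return tracks
```

With n = 5 points per semicircle and depth d, this enumerates 5^d candidate trajectories to match against the actual one.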
The actual eating trajectory is matched against the predicted eating trajectories. If the matching succeeds, the outlier-individual eating image sequence really is an image sequence corresponding to eating behavior and is retained; otherwise, the outlier-individual eating image sequence is removed.
The eating duration is then calculated according to the total length of the retained outlier-individual eating image sequences and the sampling interval at which the vision sensor acquires images.
e) Normal walking duration: the torso center point of the outlier individual is obtained with the key-point extraction network. When the torso center point of the outlier individual is in a moving state not caused by eating behavior, the outlier individual is judged to be in the normal walking state, and the normal walking duration is calculated according to the number of such outlier-individual image frames and the sampling interval.
At this point, the normal walking duration, drinking duration, eating duration, normal standing duration and normal lying-rest duration, each adjusted by its adjustment coefficient, are subtracted from the grazing duration to obtain the actual rumination duration.
The rumination abnormality detection module is used for detecting whether the rumination of the outlier individual is abnormal according to the predicted rumination duration and the actual rumination duration. Specifically, the absolute difference between the predicted rumination duration and the actual rumination duration is obtained; if the ratio of this absolute difference to the predicted rumination duration is greater than or equal to a preset abnormality judgment threshold, the rumination of the outlier individual is abnormal.
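The final bookkeeping can be sketched in two small functions; the single adjustment coefficient and the 0.3 abnormality threshold here are placeholders for the per-behavior coefficients and the preset threshold of the source:

```python
# Sketch of the final bookkeeping.  A single adjustment coefficient and the
# 0.3 abnormality threshold stand in for the per-behavior coefficients and
# the preset threshold of the source.
def actual_rumination(grazing, walking, drinking, eating, standing, lying,
                      coef=1.0):
    """Grazing duration minus the coefficient-adjusted other behaviours."""
    return grazing - coef * (walking + drinking + eating + standing + lying)

def rumination_abnormal(predicted, actual, thresh=0.3):
    """Abnormal when |predicted - actual| / predicted >= thresh."""
    return abs(predicted - actual) / predicted >= thresh
```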
The real-time imaging module is used for real-time tracking imaging of outlier individuals with abnormal rumination. Since the rumination duration is directly related to the health condition of the livestock, the method judges whether a ruminant is abnormal in activity according to its rumination duration. When abnormal rumination of an outlier individual is detected, abnormality prompt information is generated, and the server controls the vision sensor to continuously acquire images of that outlier individual, so as to acquire its activity state in real time.
The foregoing is intended to provide those skilled in the art with a better understanding of the invention, and is not intended to limit the invention to the particular forms disclosed, since modifications and variations can be made without departing from the spirit and scope of the invention.

Claims (9)

1. An artificial intelligence based livestock abnormal activity detection real-time imaging system, characterized in that the system comprises:
the actual rumination duration acquisition module, which is used for indirectly acquiring the actual rumination duration of an outlier individual in a ruminant herd by utilizing the grazing duration and the durations of behaviors other than the rumination behavior; the other behaviors comprise eating behavior, and the eating duration acquisition step comprises: intercepting an outlier individual eating image sequence from the outlier individual image sequence based on the occurrence probability of the head state of the outlier individual, and acquiring the eating duration according to the outlier individual eating image sequence;
the rumination abnormality detection module, which is used for analyzing the predicted rumination duration and the actual rumination duration to detect whether the rumination of the outlier individual is abnormal;
and the real-time imaging module is used for carrying out real-time tracking imaging on the outlier individuals with the rumination abnormality.
2. The system of claim 1, further comprising identifying the outlier individual:
analyzing a moving track of the ruminant individual to obtain a main connected domain and a discrete connected domain;
and identifying the outlier corresponding to the discrete connected domain according to the deviation degree of the moving direction of the discrete connected domain from the moving direction of the main connected domain.
3. The system of claim 1, wherein the predicted rumination duration of the outlier individual is obtained by fitting the age of the outlier individual and the pasture quality with a neural network.
4. The system of claim 1, wherein the other behaviors further include normal standing behavior, normal lying and resting behavior, normal walking behavior, drinking behavior.
5. The system of claim 1, wherein the outlier individual eating image sequence is intercepted from the outlier individual image sequence by:
determining a low head initial frame according to the head state of the outlier;
and sequentially taking the outlier individual images from the outlier individual image sequence from a low-head starting frame, adding the outlier individual images into an analysis subsequence, calculating the probability of appearance of the analysis subsequence according to the probability of appearance of the head state of the outlier individual in each frame of the outlier individual image after each addition, and intercepting the sequence of the outlier individual food images from the analysis subsequence when the probability meets a preset condition.
6. The system of claim 5, wherein the probability of occurrence of the head state is obtained by:
and calculating the occurrence probability of the head state in the current outlier individual image by using an occurrence probability calculation model according to the existing condition of pasture around the outlier individual in the current outlier individual image, the accumulated moving distance of the outlier individual corresponding to the current outlier individual image, and the accumulated time difference between the current frame and the end frame of the previous outlier individual feeding image sequence.
7. The system of claim 6, wherein determining whether an outlier is in a fed state is based on a head state of the outlier; acquiring the accumulated actual moving distance of an outlier individual in the current frame and the accumulated actual time difference between the current frame and the ending frame of the previous outlier individual feeding image sequence based on the current frame and the head-down starting frame of the analyzed subsequence;
according to the number of outlier individual image frames in a feeding state before the current frame in the analysis subsequence and the moving distance which can be offset by the energy obtained by a frame of feeding behavior, carrying out distance offset on the accumulated actual moving distance to obtain an accumulated moving distance;
and according to the number of the outlier individual image frames in the eating state before the current frame in the analysis subsequence and the time difference which can be offset by the energy obtained by one frame of eating behavior, carrying out time difference offset on the accumulated actual time difference to obtain the accumulated time difference.
8. The system of claim 7, wherein the head status of an outlier in the frame is determined based on the head and neck and torso lengths of the outlier in the image.
9. The system of claim 4, wherein the standing time prediction model is used to predict the normal standing time of an outlier individual based on pasture proportions, pasture type, temperature, grazing intensity, distance between the pasture and the captive area, amount of motion of the outlier individual over a preset time period in the pasturing area;
and predicting the normal lying time of the outlier individual by using a lying time prediction model according to the pasture proportion, the pasture distribution and the motion amount of the outlier individual in a preset time period.
CN202110882433.4A 2021-08-02 2021-08-02 Real-time imaging system for detecting abnormal activities of livestock based on artificial intelligence Pending CN113610892A (en)


Publications (1)

Publication Number Publication Date
CN113610892A 2021-11-05

Family

ID=78306525


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination