CN114677614A - Single sow lactation time length calculation method based on computer vision - Google Patents

Single sow lactation time length calculation method based on computer vision

Info

Publication number
CN114677614A
Authority
CN
China
Prior art keywords
lactation
sow
calculating
computer vision
piglets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210099732.5A
Other languages
Chinese (zh)
Inventor
Li Bo (李泊)
Xu Weijie (徐伟杰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Agricultural University
Original Assignee
Nanjing Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Agricultural University
Publication of CN114677614A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/18 Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Pure & Applied Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Operations Research (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a computer vision-based method for calculating the single lactation duration of a sow, comprising the following steps: (1) collecting top-view videos of sows and piglets during the lactation period; (2) establishing the data sets; (3) training a target detection model for the whole bodies and key parts of sows and piglets; (4) performing part matching and head-tail discrimination for the sows and piglets in each video frame; (5) obtaining the sow lactation region of interest; (6) marking frames that meet the conditions as lactation frames; (7) extracting the video segments that meet the conditions and calculating the duration of the nursing behavior. The method offers high accuracy, strong robustness and few constraints, adapts well to the complex pigsty environment, is applicable both to farrowing-crate housing and to welfare (loose) housing, consumes few computing resources, and is suitable for video monitoring systems in actual farming environments.

Description

Single sow lactation time length calculation method based on computer vision
Technical Field
The invention relates to image processing, computer vision and pattern recognition, and in particular to a computer vision-based method for calculating the single lactation duration of a sow.
Background
In large-scale pig farming, management of sows during the lactation period is one of the important production links and directly affects pig output and economic benefit. Within lactation-period management, lactation capacity requires particular attention: it directly affects the growth, development and health of the piglets, and it is also an important index of the sow's reproductive performance. The single lactation duration of each nursing bout, that is, the time from when the piglets begin massaging the sow's udder before milk letdown to the end of nursing, is an important behavioral parameter related to lactation capacity. From this index, data such as the number of daily nursing bouts can be derived, helping managers assess the sow's lactation capacity. Manually observing the nursing behavior of lactating sows is labor-intensive, and continuous long-term monitoring is impractical. Therefore, an accurate, reliable and continuous monitoring method free of human intervention is needed to collect single-lactation-duration data, assist farm staff in making objective and standardized assessments of sow lactation capacity, and realize intelligent management of sows during the lactation period.
Compared with wearable sensors, ultrasound, audio analysis and similar approaches, computer vision has become the most commonly used modality in pig behavior monitoring research thanks to its unique advantages: no contact, low equipment installation cost, easily interpretable data, and adaptability to various environments. The core of calculating the single nursing-bout duration is recognizing the nursing behavior in video. A search of the prior-art literature shows that video-based recognition of sow nursing behavior is still scarce at the present stage. A typical method is the nursing-behavior recognition algorithm proposed by Aqing Yang et al., which segments the lactating-sow region with a fully convolutional network and fuses temporal motion features; the results were published in the international journals Biosystems Engineering and Computers and Electronics in Agriculture, and a patent was filed for a convolutional-network recognition method for sow nursing behavior (publication No. CN110598658A). Although that method can recognize sow nursing behavior in surveillance video, it performs behavior recognition by classifying short video-clip samples and cannot extract nursing segments from long surveillance videos. Moreover, it places high demands on manual annotation and hardware.
Disclosure of Invention
The invention aims to provide a computer vision-based method for calculating the single lactation duration of a sow, applied to video monitoring systems of welfare-housing or farrowing-crate sow pens, offering a non-contact scheme for acquiring sow nursing-behavior data.
The technical scheme is as follows. The principle of the computer vision-based single-sow lactation duration calculation method is: first, a deep-learning-based target detection model detects the whole bodies and key parts of the sows and piglets frame by frame, and the positions of key parts missed by the detector are estimated from the spatial relationships among the sow's key parts. On this basis, a graph structure model (pictorial structure model) is built for each pig target from the detected and estimated positions, so as to determine the key parts matched to each sow and piglet. Next, the lactation region of interest is determined from the sow's key-part information. Finally, the start frame and end frame of the nursing behavior are determined from the number of piglet heads inside the lactation region of interest. The method comprises the following steps:
(1) collecting top-view videos of sows and piglets during the lactation period;
(2) establishing the data sets;
(3) training a target detection model for the whole bodies and key parts of the sows and piglets;
(4) performing part matching and head-tail discrimination for the sows and piglets in the video frames;
(5) obtaining the sow lactation region of interest;
(6) marking frames that meet the conditions as lactation frames;
(7) extracting the video segments that meet the conditions and calculating the duration of the nursing behavior.
The data set in step (2) comprises a training image data set for target detection and a data set describing the geometric relationships between the pig parts and the whole body.
Step (4) is specifically as follows:
(4.1) processing the surveillance video frame by frame with the trained target detection model to obtain rectangular bounding boxes of the whole pigs and their key parts;
(4.2) when the sow is in a side-lying posture, constructing a graph structure model based on the key parts inside its bounding box;
(4.3) calculating the score of the graph structure model from the geometric constraints between key parts, thereby determining the parts matched to each pig target.
Step (5) is specifically as follows: estimating sow-lactation-related data from the sow's matched parts and determining the sow lactation region of interest; the lactation-related data comprise the sow's body length, body direction and nursing-region size.
Step (6) is specifically as follows: counting the number of piglet heads inside the lactation region of interest and marking video frames in which the count exceeds a given number as lactation frames.
A computer storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the computer vision-based single-sow lactation duration calculation method described above.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the computer vision-based single-sow lactation duration calculation method described above.
Beneficial effects: compared with the prior art, the invention has the following advantages:
1. the nursing behavior is analyzed on top of the results of a mature and stable target detection model, giving high robustness and adaptability to interference factors such as illumination changes and occlusion in real farming environments, and suiting nursing sows both in welfare housing and in farrowing-crate environments;
2. the core techniques of target detection and graph-structure-model construction cost less than image segmentation and video recognition in annotation effort and computational resources, place no high demands on hardware, and are suitable for deployment in actual farming environments.
Drawings
FIG. 1 is a flow chart of the steps of the present invention;
FIG. 2 is a schematic diagram of the graph structure model in this embodiment;
FIG. 3 is a schematic diagram of the extraction of the sow lactation region of interest in this embodiment.
Detailed Description
The technical scheme of the invention is further explained below with reference to the drawings.
As shown in FIG. 1, the invention discloses a computer vision-based method for calculating the single nursing-bout duration of a sow, which specifically comprises the following steps:
Step S1, collect top-view videos of the sows and piglets during the lactation period. A camera is installed directly above the pen at a height of 2-3 m to obtain top-view video containing one lactating sow and 6-10 piglets. In one embodiment of the invention, the recorded video has a resolution of 2048 × 1536 pixels at 10 frames/second, the pen measures 2.4 m × 3 m, the sow is a Landrace × Large White (Changbai × Dabai) two-way crossbred, and the piglets are 2-21 days old.
Step S2, establish the data sets
Step S2.1, establish training image data set A for target detection. One image is selected from the collected video every 500 frames to form the training set. LabelImg software is used to manually annotate rectangular bounding boxes for 6 target classes in the training-set images: the whole sow (in side-lying and non-side-lying postures), the sow's udder (breast) region, the whole piglet, the heads of the sow/piglets, and the tails of the sow/piglets. The training set is augmented by vertical flipping, horizontal flipping and 180° rotation, and all augmented data form training data set A for the target detection model. The 6 object classes are defined in Table 1; a code sketch of the augmentation step follows the table.
TABLE 1 Class definitions for object detection

Class 1: whole sow, side-lying posture
Class 2: whole sow, non-side-lying posture
Class 3: sow udder region
Class 4: whole piglet
Class 5: head (sow or piglet)
Class 6: tail (sow or piglet)
Step S2.2, establish data set B of the geometric relationships between pig parts and the whole body. Using the center coordinates of the rectangular bounding boxes annotated in training set A, the geometric feature values between parts are calculated; the feature types are listed in Table 2. Distance-related features are normalized by the head-tail distance of the pig target so that the feature values are unaffected by differences in pig size. The mean and standard deviation of each feature are recorded and taken as data set B. A sketch of the feature computation follows the table.
TABLE 2 Feature definitions in data set B

d_ph: normalized distance between the whole-pig center and the head center
d_pt: normalized distance between the whole-pig center and the tail center
d_pb: normalized distance between the whole-pig center and the udder-region center (sow only)
θ_ht: angle between the head and the tail relative to the whole-pig center
θ_hb: angle between the head and the udder region relative to the whole-pig center (sow only)
θ_tb: angle between the tail and the udder region relative to the whole-pig center (sow only)
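A minimal sketch of the data-set-B feature computation for one pig under the definitions above; treating each angle as subtended at the whole-pig center is an assumption consistent with Table 2 and the score formulas of step S5.8:

```python
import math

def part_features(center, head, tail, udder=None):
    """Compute the data-set-B features from part center points (step S2.2).
    Distances are normalized by the head-tail distance."""
    norm = math.dist(head, tail)  # head-tail distance used for normalization

    def angle_at_center(a, b):
        # angle between the directions from the whole-pig center to parts a and b
        va = (a[0] - center[0], a[1] - center[1])
        vb = (b[0] - center[0], b[1] - center[1])
        dot = va[0] * vb[0] + va[1] * vb[1]
        return math.acos(dot / (math.hypot(*va) * math.hypot(*vb)))

    feats = {"d_ph": math.dist(center, head) / norm,
             "d_pt": math.dist(center, tail) / norm,
             "theta_ht": angle_at_center(head, tail)}
    if udder is not None:  # sow targets only
        feats["d_pb"] = math.dist(center, udder) / norm
        feats["theta_hb"] = angle_at_center(head, udder)
        feats["theta_tb"] = angle_at_center(tail, udder)
    return feats
```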
Step S3, train the target detection model for the whole pigs and their key parts on training set A to detect the 6 target classes. Weighing detection accuracy against running speed, one embodiment of the invention selects the deep neural network YOLOv5 (You Only Look Once, version 5) as the target detection model.
Step S4, take the next unprocessed frame from the input video.
Step S5, perform part matching and head-tail discrimination for the sow and piglet targets in the image.
Step S5.1, detect the 6 classes of whole-pig and key-part targets in the image with the trained target detection model. Each detection result is represented by the width and height of a rectangular bounding box together with the coordinates of its top-left vertex.
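Detection in step S5.1 can be sketched with the public YOLOv5 torch.hub interface; the weights file name best.pt is a hypothetical placeholder for the model trained in step S3:

```python
import torch

# Load custom-trained YOLOv5 weights through the public torch.hub interface.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

def detect(frame):
    """Run detection on one RGB video frame and return a list of
    (x_topleft, y_topleft, width, height, confidence, class_id)."""
    results = model(frame)
    dets = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        dets.append((x1, y1, x2 - x1, y2 - y1, conf, int(cls)))
    return dets
```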
Step S5.2, judge from the detection results whether the sow's posture is side-lying. If so, proceed to step S5.3; otherwise return to step S4 and process a new frame.
Step S5.3, take a whole-pig target for which no graph structure model has yet been constructed, and determine the types and number of parts inside its rectangular frame. Let A denote the whole-pig rectangular bounding box and B a part bounding box; a part is considered to lie inside the whole-pig frame when area(A ∩ B) > 0.5 × area(B).
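A minimal sketch of this containment test for axis-aligned boxes given as (x, y, w, h):

```python
def part_inside_whole(whole, part, thresh=0.5):
    """Return True if the overlap area exceeds `thresh` of the part box's
    area, i.e. the area(A ∩ B) > 0.5 × area(B) test of step S5.3."""
    ax, ay, aw, ah = whole
    bx, by, bw, bh = part
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    return ix * iy > thresh * bw * bh
```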
Step S5.4, construct the graph structure model shown in FIG. 2, where V_p, V_h, V_t and V_b denote the whole-pig node, the head node, the tail node and the udder-region node, respectively. If several detection results of the same part type exist inside the whole-pig rectangular frame, a separate graph structure model is constructed for each.
Step S5.5, judge whether the graph structure model is complete. If all nodes are present, go to step S5.7; otherwise go to step S5.6.
Step S5.6, predict the positions of the missing part nodes. For piglet and sow targets, if only one of the head and tail nodes is present in the graph structure model, the position of the missing part is predicted from the known nodes under the simplifying assumption that the pig's head, tail and whole-body center lie on a straight line. Taking a missing head node as an example, let (x_p, y_p), (x_h, y_h) and (x_t, y_t) denote the image coordinates of the whole-pig node, the head node and the tail node, and let μ_dph and μ_dpt be the normalized mean distances in data set B between the whole-pig node and the head node and between the whole-pig node and the tail node, respectively. The predicted center of the head node is given by the following formula.
x_h = x_p + (μ_dph / μ_dpt) · (x_p − x_t),    y_h = y_p + (μ_dph / μ_dpt) · (y_p − y_t)
For sow targets, if the udder-region node, or both the head and tail nodes, are missing from the graph structure model, the detection result of the same node type nearest to the current frame is searched for frame by frame going backwards and used as the estimated part node.
Step S5.7, update the existing graph structure model by adding the predicted nodes according to the structure shown in FIG. 2.
Step S5.8, compute the scores of the candidate graph structure models inside the current whole-pig rectangular frame one by one. The matching degree is calculated from the coordinates of the nodes in each graph structure model and taken as the model score.
For sow and piglet targets, the graph-structure-model score is calculated with equations (1) and (2), respectively:

S_sow = w_1·f(d_ph) + w_2·f(d_pt) + w_3·f(d_pb) + w_4·f(θ_ht) + w_5·f(θ_hb) + w_6·f(θ_tb)    (1)

S_piglet = w_1·f(d_ph) + w_2·f(d_pt) + w_3·f(θ_ht)    (2)

where f(x) = (1 / (√(2π)·σ)) · exp(−(x − μ)² / (2σ²)) is the Gaussian probability density function; d_ph, d_pt and d_pb are the normalized distances of the pig's head, tail and udder region from the center point; θ_ht, θ_hb and θ_tb are the angles of the head and tail, the head and udder region, and the tail and udder region relative to the pig's central node, as defined in Table 2; μ and σ are the mean and standard deviation of the corresponding variable in data set B; and w denotes the weight of each variable's score. The parameter values used in one embodiment of the invention are shown in Table 3; a code sketch of the score computation follows the table.
TABLE 3 Key parameter values in an embodiment (given as an image in the original publication)
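A sketch of the score computation in the weighted-sum form of equations (1) and (2); the per-variable weights and the (μ, σ) statistics come from Table 3 and data set B, so the dictionary contents below are placeholders:

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian probability density f(x) used as the per-variable score."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def model_score(feats, stats, weights):
    """Weighted sum of Gaussian scores over the model's feature variables.
    feats:   {'d_ph': value, ...} from one candidate graph structure model
    stats:   {'d_ph': (mu, sigma), ...} statistics from data set B
    weights: {'d_ph': w, ...} per-variable weights (Table 3; placeholders)"""
    return sum(w * gaussian(feats[k], *stats[k])
               for k, w in weights.items() if k in feats)
```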
Step S5.9, determine the positions of the key parts of the current pig target. The graph structure model with the highest score within the current pig's detection rectangle is selected as the final part-matching result, which fixes the position of each part.
Step S5.10, judge whether all whole-pig targets detected in the current image have been part-matched. If so, proceed to step S6; otherwise return to step S5.3.
Step S6, obtain the sow lactation region of interest
Step S6.1, determine the sow's body length, body direction and nursing-region size. As shown in FIG. 3, the sow body length d_sow is defined as the Euclidean distance between the head and tail nodes; the nursing-region size d_b is the distance from the sow's central node to the line joining the head and tail nodes; and the sow body direction θ_sow is the angle between the head-tail line and the horizontal direction. These three quantities are calculated in the current image frame.
Step S6.2, taking the origin at the top-left corner of the image as the rotation center, rotate the current image frame counterclockwise by the angle θ_sow to obtain image I_rot. If the udder-region node has coordinates (x_b, y_b) in the original image, its coordinates after rotation are (x_b_rot, y_b_rot):

x_b_rot = x_b·cos θ_sow − y_b·sin θ_sow
y_b_rot = x_b·sin θ_sow + y_b·cos θ_sow
Step S6.3, extract the rectangular nursing region of interest from the rotated image. As shown in FIG. 3, the top-left and top-right corners of the rectangular region of interest correspond to the head node and the tail node, respectively; the width of the rectangle is d_sow, and its height is set to 2·d_b.
Step S6.4, rotate the current image clockwise by the angle θ_sow, rotating the corner coordinates of the rectangular nursing region of interest clockwise by θ_sow as well, to obtain the four vertex coordinates of the key lactation region; the area enclosed by the four points is defined as A_b.
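Steps S6.2 to S6.4 can be sketched without rotating the image itself, by constructing the four ROI corners directly in original-image coordinates; extending the rectangle toward the side of the sow's central node is an assumption made for illustration:

```python
import math

def lactation_roi(head, tail, center):
    """Return the four vertices of the lactation region A_b in original-image
    coordinates (steps S6.2-S6.4). Arguments are sow node (x, y) centers."""
    d_sow = math.dist(head, tail)                  # sow body length
    ux, uy = (tail[0] - head[0]) / d_sow, (tail[1] - head[1]) / d_sow
    px, py = -uy, ux                               # unit normal to the head-tail line
    # signed distance of the central node from the head-tail line gives d_b
    s = (center[0] - head[0]) * px + (center[1] - head[1]) * py
    d_b = abs(s)
    sign = 1.0 if s >= 0 else -1.0                 # side toward the sow's body
    h = 2.0 * d_b                                  # ROI height (step S6.3)
    return [head,
            tail,
            (tail[0] + sign * h * px, tail[1] + sign * h * py),
            (head[0] + sign * h * px, head[1] + sign * h * py)]
```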
Step S7, mark frames that meet the conditions as lactation frames
Step S7.1, count the piglet heads inside the lactation region of interest. Judge one by one whether each piglet's head-node coordinates lie inside region A_b, and finally obtain the number N_piglet of qualifying piglet heads in the current frame.
Step S7.2, if N_piglet of the current frame is greater than half of the total number of piglets, mark the current frame as a lactation frame (1); otherwise mark it as a non-lactation frame (0).
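A sketch of the step-S7 test; OpenCV's pointPolygonTest handles the point-in-quadrilateral check for the rotated region A_b:

```python
import cv2
import numpy as np

def mark_frame(roi_vertices, piglet_heads, total_piglets):
    """Return 1 (lactation frame) if more than half of the piglets' head
    nodes lie inside the lactation region A_b, else 0 (step S7)."""
    contour = np.array(roi_vertices, dtype=np.float32).reshape(-1, 1, 2)
    n_inside = sum(
        cv2.pointPolygonTest(contour, (float(x), float(y)), False) >= 0
        for x, y in piglet_heads)
    return 1 if n_inside > total_piglets / 2 else 0
```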
Step S8, judge whether the end condition of a single nursing bout is met. If the following three conditions hold simultaneously: (i) the current frame is marked 0; (ii) some earlier frame is marked 1; and (iii) the number of consecutive frames marked 0 exceeds a threshold T_gap, then go to step S9. Otherwise return to step S4 and process a new frame.
Step S9, search back through the marked frames for the first and the last frame marked 1, and record them as the lactation start frame and the lactation end frame, respectively.
Step S10, calculate the duration between the lactation start frame and the lactation end frame. If the duration exceeds a threshold T_time, it is recorded as the duration of one nursing bout.
Step S11, reset all frame marks to 0 and return to step S4 to process a new frame.
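The bout logic of steps S8 to S11 can be sketched as a single pass over the per-frame marks; T_gap (in frames) and T_time (in seconds) are the thresholds named above, with embodiment-specific values left as parameters:

```python
def extract_bouts(labels, fps, t_gap, t_time):
    """Extract nursing-bout durations in seconds from per-frame 0/1 marks.
    A bout ends once more than `t_gap` consecutive 0-frames follow a run
    containing 1-frames; bouts of at most `t_time` seconds are discarded."""
    bouts, ones, zeros = [], [], 0
    for i, lab in enumerate(labels):
        if lab == 1:
            ones.append(i)                    # remember frames marked 1
            zeros = 0
        elif ones:
            zeros += 1
            if zeros > t_gap:                 # end-of-bout condition (step S8)
                duration = (ones[-1] - ones[0] + 1) / fps  # steps S9-S10
                if duration > t_time:
                    bouts.append(duration)
                ones, zeros = [], 0           # reset all marks (step S11)
    return bouts
```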

Claims (7)

1. A computer vision-based single-sow lactation duration calculation method, characterized by comprising the following steps:
(1) collecting top-view videos of sows and piglets during the lactation period;
(2) establishing the data sets;
(3) training a target detection model for the whole bodies and key parts of the sows and piglets;
(4) performing part matching and head-tail discrimination for the sows and piglets in the video frames;
(5) obtaining the sow lactation region of interest;
(6) marking frames that meet the conditions as lactation frames;
(7) extracting the video segments that meet the conditions and calculating the duration of the nursing behavior.
2. The computer vision-based single-sow lactation duration calculation method according to claim 1, characterized in that the data set in step (2) comprises a training image data set for target detection and a data set describing the geometric relationships between the pig parts and the whole body.
3. The computer vision-based single-sow lactation duration calculation method according to claim 1, characterized in that step (4) is specifically:
(4.1) processing the surveillance video frame by frame with the trained target detection model to obtain rectangular bounding boxes of the whole pigs and their key parts;
(4.2) when the sow is in a side-lying posture, constructing a graph structure model based on the key parts inside its bounding box;
(4.3) calculating the score of the graph structure model from the geometric constraints between key parts, thereby determining the parts matched to each pig target.
4. The computer vision-based single-sow lactation duration calculation method according to claim 1, characterized in that step (5) is specifically: estimating sow-lactation-related data from the sow's matched parts and determining the sow lactation region of interest, the lactation-related data comprising the sow's body length, body direction and nursing-region size.
5. The computer vision-based single-sow lactation duration calculation method according to claim 1, characterized in that step (6) is specifically: counting the number of piglet heads inside the lactation region of interest and marking video frames in which the count exceeds a given number as lactation frames.
6. A computer storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the computer vision-based single-sow lactation duration calculation method according to any one of claims 1-5.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the computer vision-based single-sow lactation duration calculation method according to any one of claims 1-5.
CN202210099732.5A 2022-01-24 2022-01-27 Single sow lactation time length calculation method based on computer vision Pending CN114677614A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022100798961 2022-01-24
CN202210079896 2022-01-24

Publications (1)

Publication Number Publication Date
CN114677614A 2022-06-28

Family

ID=82071747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210099732.5A Pending CN114677614A (en) 2022-01-24 2022-01-27 Single sow lactation time length calculation method based on computer vision

Country Status (1)

Country Link
CN (1) CN114677614A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116114610A (en) * 2023-02-22 2023-05-16 四川农业大学 Piglet fostering device and evaluation method


Similar Documents

Publication Publication Date Title
Chen et al. Behaviour recognition of pigs and cattle: Journey from computer vision to deep learning
Yang et al. A review of video-based pig behavior recognition
Jiang et al. Automatic behavior recognition of group-housed goats using deep learning
Hu et al. Cow identification based on fusion of deep parts features
Zin et al. Image technology based cow identification system using deep learning
Mohamed et al. Msr-yolo: Method to enhance fish detection and tracking in fish farms
Zhu et al. Recognition and drinking behaviour analysis of individual pigs based on machine vision
CN109492535B (en) Computer vision sow lactation behavior identification method
Gan et al. Fast and accurate detection of lactating sow nursing behavior with CNN-based optical flow and features
Noe et al. Automatic detection and tracking of mounting behavior in cattle using a deep learning-based instance segmentation model
Phyo et al. A hybrid rolling skew histogram-neural network approach to dairy cow identification system
Gan et al. Spatiotemporal graph convolutional network for automated detection and analysis of social behaviours among pre-weaning piglets
CN112883915A (en) Automatic wheat ear identification method and system based on transfer learning
Isa et al. CNN transfer learning of shrimp detection for underwater vision system
CN114677614A (en) Single sow lactation time length calculation method based on computer vision
CN112528823B (en) Method and system for analyzing batcharybus movement behavior based on key frame detection and semantic component segmentation
Yang et al. A defencing algorithm based on deep learning improves the detection accuracy of caged chickens
Bello et al. Mask YOLOv7-based drone vision system for automated cattle detection and counting
CN115830078B (en) Multi-target pig tracking and behavior recognition method, computer equipment and storage medium
Bello et al. Behavior recognition of group-ranched cattle from video sequences using deep learning
CN115984959A (en) Method and system for detecting abnormal behavior of cattle based on neural network and centroid tracking
Xingshi et al. Light-weight recognition network for dairy cows based on the fusion of YOLOv5s and channel pruning algorithm.
Li et al. Recognition of fine-grained sow nursing behavior based on the SlowFast and hidden Markov models
Gu et al. A two-stage recognition method based on deep learning for sheep behavior
Li et al. Lameness detection system for dairy cows based on instance segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination