CN115497021A - Method for identifying fine granularity of sow lactation behavior based on computer vision - Google Patents

Info

Publication number
CN115497021A
CN115497021A (application CN202211121705.XA)
Authority
CN
China
Prior art keywords
lactation
fine
behavior
sow
grained
Prior art date
Legal status: Pending
Application number
CN202211121705.XA
Other languages
Chinese (zh)
Inventor
李泊 (Li Bo)
徐伟杰 (Xu Weijie)
陈天明 (Chen Tianming)
Current Assignee
Nanjing Agricultural University
Original Assignee
Nanjing Agricultural University
Priority date
Filing date
Publication date
Application filed by Nanjing Agricultural University filed Critical Nanjing Agricultural University
Publication of CN115497021A (legal status: pending)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/84 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks
    • G06V10/85 Markov-related models; Markov random fields
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a computer-vision-based fine-grained recognition method for sow lactation behavior, comprising the following steps: collecting top-view video of sows and piglets during lactation; establishing a fine-grained classification data set of sow lactation behavior; training a behavior recognition model for fine-grained classification of sow lactation behavior; and inputting long monitoring videos into the behavior recognition model to generate a sequence of fine-grained behavior category labels. Using prior knowledge of sow lactation behavior, the fine-grained classification problem is modeled with a hidden Markov model, the label sequence is error-corrected with the Viterbi algorithm, and classification results for four classes are output: piglet arching for milk, nursing, interrupted nursing, and non-nursing. The method largely avoids occlusion by the rails of the farrowing crate, is robust, consumes few computing resources, runs fast, and is suitable for video monitoring systems in real breeding environments.

Description

Computer vision-based fine-grained identification method for sow lactation behavior
Technical Field
The invention relates to image processing, computer vision and interactive behavior recognition, and in particular to a computer-vision-based method for fine-grained recognition of sow lactation behavior.
Background
Statistics show that under large-scale commercial breeding management, pre-weaning piglet mortality is as high as 10-15%. The average number of healthy piglets each sow can raise is an important factor in a pig farm's income, so enterprises intervene manually in sow breeding and lactation as much as possible, to fully exploit sows' reproductive and nursing potential and to raise litter sizes. Effective manual intervention therefore depends on statistics and analysis of sow lactation behavior. However, recording a sow's nursing state, nursing duration and similar information by manual monitoring requires a large labor input, and manual records are affected by subjective factors. Automatic recognition of lactation behavior has therefore become an important topic in pig farming. Current research, however, focuses mainly on recognizing whether or not a sow is nursing, ignoring information about how nursing bouts start and end. In actual production, the manner and duration of the start and end of nursing largely reflect a sow's nursing habits and maternal quality. A careful classification of the nursing process is therefore significant for behavior analysis of sows of different breeds and for selection and culling decisions in pig farming.
In animal behavior recognition, much research has been devoted to behavior classification based on wearable sensors, but sensors worn by pigs are prone to abrasion damage and falling off. As the technology has advanced, non-contact computer vision has begun to be used for sow behavior and posture recognition. A typical approach extracts specific parameters from an image and formulates decision criteria from them: Zhu Weixing et al. obtained pig contours by Otsu threshold segmentation and judged pig behavior by computing contour similarity, and applied for a patent on a contour-based method for recognizing pig drinking behavior (publication No. CN 107437069A). Although such methods recognize simple behaviors and postures, they do not consider the pigs' temporal motion features, which limits their ability to recognize complex behaviors. In sow behavior recognition, researchers including Aqing Yang et al. combined deep neural networks with optical flow to judge sow nursing behavior, published the results in the international journals Biosystems Engineering and Computers and Electronics in Agriculture, and applied for a patent on a computer-vision sow lactation behavior recognition method (publication No. CN 109492535A), disclosing a method that realizes nursing-behavior recognition with optical flow, a convolutional neural network and a support vector machine. These methods, however, only decide whether nursing occurs and cannot extract detailed information about the nursing process.
Therefore, a method that achieves a refined, i.e. fine-grained, classification of sow lactation behavior is an urgent problem to be solved by those skilled in the art.
Disclosure of Invention
Purpose of the invention: the invention aims to provide a computer-vision-based fine-grained recognition method for sow lactation behavior, in which a two-stream neural network model first produces a preliminary fine-grained classification of nursing behavior, a hidden Markov model then models the fine-grained classification problem, and the Viterbi algorithm corrects the label sequence to obtain the final state sequence, thereby realizing fine-grained recognition of lactation behavior.
Technical scheme: the computer-vision-based fine-grained recognition method for sow lactation behavior of the invention comprises the following steps:
(1) Collecting top-view videos of sows and piglets during lactation.
(2) Establishing a fine-grained classification data set of the lactation behavior of the sows.
(3) Constructing and training a two-stream behavior recognition model for fine-grained classification of sow lactation behavior.
(4) Inputting the monitoring video into the trained model to generate a category label sequence for fine-grained classification of the lactation behavior of the sow.
(5) And preprocessing the behavior class label sequence.
(6) And setting hidden Markov model parameters, correcting errors by using a Viterbi algorithm, and outputting a final fine-grained classification result of the sow lactation behavior.
The data set in the step (2) comprises a behavior recognition model training data set and a sow lactation video clip data set for verifying the feasibility of the method.
The step (3) is specifically as follows:
and (3.1) constructing a neural network behavior recognition model with a double-flow structure, and realizing feature extraction of videos with different frame rates.
And (3.2) fusing the extracted features, and classifying based on the fused feature result.
And (3.3) inputting the data set into the constructed behavior recognition model, and performing model training.
The step (4) is specifically as follows:
and (4.1) segmenting the long video segments into short video segments, inputting the short video segments into the trained behavior recognition model, outputting and storing the behavior category label sequences of the short video segments one by one.
And (4.2) when the whole long video is segmented to the end, obtaining a behavior category label sequence of the whole video.
And (5) the preprocessing mode comprises filtering and binarization processing of the class label sequence.
The step (6) is specifically as follows:
and (6.1) setting parameters of a hidden Markov model according to the logical relation among the four categories of sow lactation fine-grained classification, and modeling the sow lactation behavior fine-grained classification problem by using the hidden Markov model.
And (6.2) inputting an observation sequence consisting of the observation values into a Viterbi algorithm, correcting errors by using the Viterbi algorithm and outputting a final fine-grained behavior classification label result.
A computer storage medium having stored thereon a computer program which, when executed by a processor, implements a computer vision based fine-grained identification method of sow lactation performance as described above.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a method for fine-grained identification of sow lactation behavior based on computer vision as described above when executing the computer program.
Beneficial effects: compared with the prior art, the invention has the following advantages:
1. Built on a behavior recognition framework, the fine-grained classification method achieves a fine classification of the sow's nursing process and thereby extracts richer information;
2. The core two-stream neural network behavior recognition technique, combined with label-sequence correction based on prior knowledge of behavior transitions and a hidden Markov model, requires less sample annotation and fewer computing resources than image segmentation and object detection approaches; it is robust, computationally light and fast, and is suitable for deployment in real breeding environments.
Drawings
FIG. 1 is a flow chart of the steps of the present invention;
FIG. 2 is a diagram of a SlowFast model neural network architecture for use in one embodiment herein;
fig. 3 is a schematic diagram of a method for modifying a tag sequence of sow lactation behavior by using a hidden markov model according to an embodiment of the present disclosure.
Detailed Description
The technical scheme of the invention is further explained by combining the attached drawings.
As shown in fig. 1, a method for fine-grained recognition of sow lactation behavior based on spatio-temporal feature fusion comprises the following steps:
s1, collecting a prostrate video of a sow and a piglet in a lactation period; the method comprises the steps of installing a camera right above a pigsty, obtaining a top-down video of a suckling period sow and a piglet containing 1 sow and 8-12 piglets at the installation height of 2.2-2.3m, wherein the resolution of a video frame collected in the embodiment is 2048 multiplied by 1536 pixels, the frame rate is 30 frames/second, the specification of a obstetric table is 2.37m long, 1.75m wide and 0.5m high, the sow variety is a Huang-Huai-Hai black pig, and the day age of the piglet after the birth is 1-28 days.
S2, establishing a fine-grained classification video data set of the sow lactation behavior.
S21, dividing sow nursing behavior into fine-grained classes. According to their behavioral characteristics, nursing-related behaviors are divided into four states: piglet arching for milk, nursing, interrupted nursing, and non-nursing. The detailed definitions are given in table 1.
TABLE 1 Fine-grained behavior class definitions for sow nursing behavior
Category (abbreviation)          Description
Piglet arching for milk (PAM)    Fewer than half of the piglets suckle at the sow's udder
Nursing (SBF)                    Most piglets suckle continuously in the sow's udder region
Interrupted nursing (EBF)        A large swing of the sow's trunk interrupts the nursing
Non-nursing (NBF)                Behavior unrelated to nursing
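As a minimal illustration of how these four classes become per-second labels for the downstream sequence processing (the integer values and names here are my own choice, not specified by the patent):

```python
# Illustrative encoding of the four fine-grained nursing states of Table 1.
# The integer values are an assumption of this sketch, not from the patent.
from enum import IntEnum

class NursingState(IntEnum):
    PAM = 0  # piglet arching for milk: fewer than half of piglets at the udder
    SBF = 1  # nursing: most piglets suckle continuously
    EBF = 2  # interrupted nursing: a large trunk swing stops nursing
    NBF = 3  # non-nursing: behavior unrelated to nursing

# A per-second label sequence from the recognizer is then simply a list of states.
sequence = [NursingState.NBF, NursingState.PAM, NursingState.SBF]
```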
S22, establishing the fine-grained classification video data set of sow nursing behavior. The original videos are clipped and organized into a training set, a validation set and a test set. In this embodiment the image size is kept at 2048×1536; the videos used for test evaluation include both 30-second short videos and 15-30 min long videos; and two data sets, P and Q, are created. Data set P comprises the training, validation and test sets. The data composition is shown in table 2.
TABLE 2 categorical definition dataset of fine-grained behavior of sow lactation
S3, constructing and training the two-stream behavior recognition model for fine-grained classification of sow nursing behavior. The two-stream neural network used in this embodiment is the SlowFast model, shown schematically in fig. 2.
S31, constructing the main pathways of the two-stream network. The backbone comprises a slow pathway and a fast pathway with the same structure, in the following order: an input layer, a convolution layer, a pooling layer, and residual blocks 1 to 4. The channel counts of the slow pathway's convolution layer and four residual blocks are 64, 128, 256 and 512 in sequence, and each fast-pathway layer has 1/8 as many channels as the corresponding slow-pathway layer.
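The fast pathway's channel widths follow directly from the slow pathway's, as stated above; a trivial bookkeeping sketch:

```python
# Channel widths of the slow pathway per the description in S31.
slow_channels = [64, 128, 256, 512]

# The fast pathway uses 1/8 of the slow pathway's channels at each stage.
fast_channels = [c // 8 for c in slow_channels]
print(fast_channels)  # → [8, 16, 32, 64]
```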
S32, feature fusion in the two-stream network. The slow and fast pathways fuse features of different spatio-temporal resolutions through lateral connections, placed after the pooling layer and after residual blocks 2, 3 and 4. The fast-pathway feature maps are passed through a 3D convolution with kernel size 5×1×1 followed by a normalization operation, with ReLU as the convolution layer's activation function, and the processed fast-pathway features are concatenated with the slow-pathway features of the corresponding dimensions.
S33, classification based on the fused features. Global average pooling converts the final outputs of the two pathways into vectors of length 512 and 64 respectively; the two vectors are fed together into a classification layer, and classification is realized with a softmax activation function.
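The classification head of S33 — global average pooling of each pathway's features, concatenation, and a softmax layer — can be sketched with NumPy. The array shapes and random weights below are illustrative only; a trained linear layer would replace `W`:

```python
import numpy as np

rng = np.random.default_rng(0)

def gap(features):
    """Global average pooling over all axes except the channel axis (last)."""
    return features.reshape(-1, features.shape[-1]).mean(axis=0)

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Illustrative feature maps in (time, height, width, channels) layout:
slow_feat = rng.normal(size=(4, 7, 7, 512))   # slow pathway output
fast_feat = rng.normal(size=(32, 7, 7, 64))   # fast pathway output

pooled = np.concatenate([gap(slow_feat), gap(fast_feat)])  # length 512 + 64
W = rng.normal(size=(pooled.size, 4))  # 4 fine-grained classes (untrained)
probs = softmax(pooled @ W)            # class probabilities, sums to 1
```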
S34, training the SlowFast model with data set P. The weight file produced in each training epoch is evaluated on the validation set. In this embodiment the batch size is set to 8, training runs for 110 epochs, and the weights that perform best on the validation set are selected for subsequent recognition.
And S4, inputting the data set Q into a trained SlowFast model, and identifying and generating a category label sequence for fine-grained primary classification of the lactation behavior of the sow.
S41, cutting the long video with the ffmpeg tool. In this embodiment the long video is cut sequentially, in chronological order, into 30-second segments.
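Segmenting a long video into fixed-length clips, as in S41, might use ffmpeg's segment muxer. The function below only builds the command line (the output filename pattern is my own choice, and the patent does not specify the exact ffmpeg invocation):

```python
import subprocess

def ffmpeg_segment_cmd(src, out_pattern="clip_%04d.mp4", seconds=30):
    """Build an ffmpeg command that splits `src` into fixed-length clips.

    Uses stream copy (-c copy), so clip boundaries snap to keyframes;
    re-encoding would be needed for exact 30 s cuts.
    """
    return [
        "ffmpeg", "-i", src,
        "-c", "copy",
        "-f", "segment",
        "-segment_time", str(seconds),
        "-reset_timestamps", "1",
        out_pattern,
    ]

cmd = ffmpeg_segment_cmd("sow_pen_camera.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```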
And S42, inputting the segmented short videos into a SlowFast model one by one to identify the video content.
S43, recording the model's recognition results on the sow nursing video. The model outputs a class label and a confidence for each clip; in this embodiment, label results with confidence greater than 0.3 are kept and stored, recorded one by one at a rate of one label per second.
S44, storing the category label sequence of the whole video. When the entire monitoring video has been segmented to its end, the category label sequence of the whole video is stored and output.
And S5, preprocessing the behavior category tag sequence to obtain a preprocessed fine-grained category tag sequence of the lactation behavior of the sow.
S51, median filtering of the piglet-arching, nursing and non-nursing states. In this embodiment these three categories in the obtained label sequence are processed with a median filter of length 60, and the filtered result is recorded.
S52, binarizing the interrupted-nursing class in the label sequence. In this embodiment, the confidence of the interrupted-nursing class is thresholded at 0.5: values greater than 0.5 are set to 1 and values less than 0.5 are set to 0. The processed result is recorded, yielding the preprocessed fine-grained label sequence of sow nursing behavior.
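The preprocessing of S51-S52 — median filtering of the per-second labels and thresholding of the interrupted-nursing confidence — could look like the sketch below. The window length (60) and threshold (0.5) follow the embodiment; the helper names are mine:

```python
import statistics

def median_filter(labels, window=60):
    """Median-filter an integer label sequence with a sliding window."""
    half = window // 2
    out = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        out.append(int(statistics.median(labels[lo:hi])))
    return out

def binarize_ebf(confidences, threshold=0.5):
    """Map interrupted-nursing confidences to 0/1 as in S52."""
    return [1 if c > threshold else 0 for c in confidences]

labels = [0, 0, 3, 0, 0, 0]             # a lone spurious label...
print(median_filter(labels, window=5))  # → [0, 0, 0, 0, 0, 0] (...smoothed away)
```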
S6, modeling the fine-grained classification problem of sow nursing behavior with a hidden Markov model and correcting the classification label sequence with the Viterbi algorithm; a schematic diagram of the correction is shown in fig. 3. The main parameters used in the correction are denoted {A, B, o_t, i_t}, where A and B are the observation probability matrix and the state transition probability matrix respectively, and o_t and i_t denote the observed value and the state value at time t.
s61, parameter setting is carried out on the hidden Markov model; the principle of setting is that two types of wrong judgment are easy to occur, a larger observation probability is set, and an extremely low transition probability is set between two behaviors which do not accord with the breast-feeding behavior rule. In the present embodiment, the observation probability matrix is shown in table 3, and the state transition probability matrix is shown in table 4.
TABLE 3 Observation State probability matrix A
TABLE 4 State transition probability distribution matrix B
S62, dividing the preprocessed category label sequence into observation sequences. In this embodiment the preprocessed fine-grained label sequence of sow nursing behavior is treated as the observation sequence, divided into 30-min segments {o_1, o_2, o_3, o_4, ..., o_t}.
S63, inputting the observation sequences into the Viterbi algorithm; the text content of each 30-min observation sequence is read in.
S64, outputting and recording the state sequence. The output state sequence is denoted {i_1, i_2, i_3, i_4, ..., i_t} and stored as the corrected classification result of the sow's nursing behavior.
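A minimal Viterbi decoder over four states can illustrate the correction step. The probability matrices below are illustrative stand-ins for the patent's Tables 3-4 (which are given only as images in the source): strong self-transitions make the decoder suppress isolated mislabels, which is the behavior S6 relies on.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for an observation sequence.

    pi: initial state distribution, shape (S,)
    A:  state transition matrix, A[i, j] = P(next state j | state i)
    B:  observation matrix,      B[i, o] = P(observe o | state i)
    """
    S, T = len(pi), len(obs)
    logp = np.zeros((T, S))              # best log-probability ending in each state
    back = np.zeros((T, S), dtype=int)   # backpointers
    logp[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        for j in range(S):
            scores = logp[t - 1] + np.log(A[:, j])
            back[t, j] = int(np.argmax(scores))
            logp[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):        # follow backpointers to recover the path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Illustrative 4-state parameters (NOT the patent's Tables 3-4):
# strong self-transitions, mild observation noise.
A = np.full((4, 4), 0.1 / 3) + np.eye(4) * (0.9 - 0.1 / 3)  # rows sum to 1
B = np.full((4, 4), 0.1) + np.eye(4) * 0.6                  # rows sum to 1
pi = np.full(4, 0.25)

# A lone spurious state-1 label inside a run of state 3 is corrected:
print(viterbi([3, 3, 1, 3, 3], pi, A, B))  # → [3, 3, 3, 3, 3]
```

With these parameters, keeping a single deviant label costs two low-probability transitions, which outweighs the one-step emission penalty of relabeling it, so isolated errors are smoothed while genuine sustained state changes survive.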
All or part of the processes in the methods of the above embodiments can be implemented by computer program instructions; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid state drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kinds described above. In conclusion, the scheme provides a new idea and method for fine-grained recognition of sow nursing behavior in the farrowing-room environment: it effectively overcomes the time- and labor-intensive nature of traditional manual monitoring, introduces fine-grained classification of nursing behavior on top of conventional sow nursing recognition, and thereby extracts more useful information from the sow's nursing process.

Claims (8)

1. A fine-grained identification method of sow lactation behavior based on computer vision is characterized by comprising the following steps:
(1) Collecting top-down videos of sows and piglets in the lactation period;
(2) Establishing a fine-grained classification data set of the sow lactation behavior;
(3) Constructing and training a two-stream behavior recognition model for fine-grained classification of sow lactation behavior;
(4) Inputting the video into the trained model to generate a category label sequence for fine-grained classification of the lactation behavior of the sow;
(5) Preprocessing the behavior category label sequence;
(6) And setting hidden Markov model parameters, correcting errors by using a Viterbi algorithm, and outputting a final fine-grained classification result of the sow lactation behavior.
2. The computer vision-based fine-grained identification method for sow lactation behaviors as claimed in claim 1, wherein the data set in the step (2) comprises a behavior recognition model training data set and a sow lactation video segment data set for verifying feasibility of the method.
3. The method for identifying the fine granularity of the lactation behavior of the sow based on the computer vision as claimed in claim 1, wherein the step (3) is specifically as follows:
(3.1) constructing a two-stream neural network behavior recognition model that extracts features from video at two different input frame rates;
(3.2) fusing the extracted features;
and (3.3) inputting the data set into the constructed behavior recognition model, and performing model training.
4. The fine-grained identification method for sow lactation behaviors based on computer vision as claimed in claim 1, wherein the step (4) is specifically as follows:
(4.1) segmenting the long video into short video segments, inputting them into the trained behavior recognition model, and storing the behavior category label sequences of the short segments one by one;
and (4.2) when the whole long video is segmented to the end, obtaining a behavior class label sequence of the whole video.
5. The method for fine-grained identification of sow lactation behavior based on computer vision as claimed in claim 1, wherein the preprocessing mode in step (5) comprises filtering and binarization processing of class label sequences.
6. The method for identifying the fine granularity of the lactation behavior of the sow based on the computer vision as claimed in claim 1, wherein the step (6) is specifically as follows:
(6.1) setting hidden Markov model parameters by combining the logic relation among the four types of sow lactation fine-grained classification, and modeling the sow lactation fine-grained classification problem by using the hidden Markov model;
and (6.2) inputting an observation sequence consisting of the observation values into a Viterbi algorithm, correcting errors by using the Viterbi algorithm, and outputting a final fine-grained behavior classification label result.
7. A computer storage medium on which a computer program is stored which, when being executed by a processor, carries out a computer vision-based fine-grained identification method of sow lactation performance as claimed in any one of claims 1 to 6.
8. Computer arrangement comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor when executing the computer program implements a computer vision based fine-grained identification method of sow lactation performance as claimed in any one of claims 1-6.
CN202211121705.XA 2022-09-09 2022-09-15 Method for identifying fine granularity of sow lactation behavior based on computer vision Pending CN115497021A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211102073 2022-09-09
CN2022111020732 2022-09-09

Publications (1)

Publication Number Publication Date
CN115497021A true CN115497021A (en) 2022-12-20

Family

ID=84467927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211121705.XA Pending CN115497021A (en) 2022-09-09 2022-09-15 Method for identifying fine granularity of sow lactation behavior based on computer vision

Country Status (1)

Country Link
CN (1) CN115497021A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116110586B (en) * 2023-04-13 2023-11-21 南京市红山森林动物园管理处 Elephant health management system based on YOLOv5 and SlowFast


Similar Documents

Publication Publication Date Title
Dong et al. PGA-Net: Pyramid feature fusion and global context attention network for automated surface defect detection
Shi et al. Multiscale multitask deep NetVLAD for crowd counting
CN112613428B (en) Resnet-3D convolution cattle video target detection method based on balance loss
CN113592896B (en) Fish feeding method, system, equipment and storage medium based on image processing
CN115497021A (en) Method for identifying fine granularity of sow lactation behavior based on computer vision
Zin et al. Cow identification system using ear tag recognition
Lu et al. Toward good practices for fine-grained maize cultivar identification with filter-specific convolutional activations
Li et al. Chicken image segmentation via multi-scale attention-based deep convolutional neural network
Wang et al. Pig face recognition model based on a cascaded network
CN114092699A (en) Method and system for cluster pig image segmentation based on transfer learning
CN115830078B (en) Multi-target pig tracking and behavior recognition method, computer equipment and storage medium
CN113470076A (en) Multi-target tracking method for yellow-feather chickens in flat-breeding henhouse
CN117197459A (en) Weak supervision semantic segmentation method based on saliency map and attention mechanism
Bello et al. Behavior recognition of group-ranched cattle from video sequences using deep learning
CN116886869A (en) Video monitoring system and video tracing method based on AI
CN115359511A (en) Pig abnormal behavior detection method
CN114677614A (en) Single sow lactation time length calculation method based on computer vision
CN115240647A (en) Sound event detection method and device, electronic equipment and storage medium
CN114092746A (en) Multi-attribute identification method and device, storage medium and electronic equipment
CN113159049A (en) Training method and device of weak supervision semantic segmentation model, storage medium and terminal
Qi et al. Deep Learning Based Image Recognition In Animal Husbandry
Zhai et al. Instance segmentation method of adherent targets in pig images based on improved mask R-CNN
CN111488891A (en) Image identification processing method, device, equipment and computer readable storage medium
Yanwen et al. ANOMALY DETECTION FOR HERD PIGS BASED ON YOLOX.
CN112733883B (en) Point supervision target detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination