CN113516070A - Pig counting method - Google Patents

Pig counting method

Info

Publication number
CN113516070A
Authority
CN
China
Prior art keywords
pig
pigs
detection
image
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110772938.5A
Other languages
Chinese (zh)
Inventor
柯海滨
刘云明
刘聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xiwei Intelligent Technology Co ltd
Original Assignee
Shenzhen Xiwei Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xiwei Intelligent Technology Co ltd filed Critical Shenzhen Xiwei Intelligent Technology Co ltd
Priority to CN202110772938.5A priority Critical patent/CN113516070A/en
Publication of CN113516070A publication Critical patent/CN113516070A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30242Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of deep learning and discloses a pig counting method comprising the following steps: establishing a pig counting model based on deep learning; acquiring a real-time video of pigs; inputting the real-time video of pigs into the pig counting model for detection to obtain detection result images; and counting the pigs in the detection result images to obtain the number of pigs. The invention solves the problems of high labor cost, heavy workload, low efficiency, high hardware cost and low accuracy in the prior art.

Description

Pig counting method
Technical Field
The invention belongs to the technical field of deep learning, and particularly relates to a pig counting method.
Background
With economic development, demand for pork keeps rising and so does pork production. The most common rearing mode at present is large-pen pig farming, which marks the pig industry's shift from free-range rearing to large-scale intensive production. This mode features high pig density and large herd sizes; it saves land, increases the number of head per unit, suits intensive rearing, and is convenient for statistics, since the status of the pigs in a large pen is clearly visible and head counts are less prone to error.
In traditional practice, pigs are counted manually, which incurs high labor cost, a heavy workload for the workers, low efficiency, and low accuracy because errors are easy to make. With the introduction of modern equipment into the farming industry, the prior art counts pigs with fixed cameras; however, installing cameras in a pig farm requires substantial construction changes and a high hardware investment, and since pigs are living animals that easily hide in shooting dead angles, the counting accuracy remains low.
Disclosure of Invention
In order to solve the problems of high labor cost, heavy workload, low efficiency, high hardware cost and low accuracy in the prior art, the invention aims to provide a pig counting method with low cost, small workload, high efficiency and high accuracy.
The technical scheme adopted by the invention is as follows:
A pig counting method comprises the following steps:
establishing a pig counting model based on deep learning;
acquiring a real-time video of pigs;
inputting the real-time video of pigs into the pig counting model for detection to obtain detection result images; and
counting the pigs in the detection result images to obtain the number of pigs.
Further, the pig counting method is based on a pig counting system comprising a shooting device and a data processing center, the shooting device being in communication connection with the data processing center;
the shooting device is used for shooting a real-time video of the pigs while moving progressively, sending the real-time video to the data processing center for processing, and receiving and displaying the detection result images;
the data processing center is used for receiving the real-time video of the pigs, establishing the pig counting model to process and detect the video, obtaining the detection result images, counting the pigs, and sending the detection result images back to the shooting device for display.
Further, the shooting device comprises a main control module, a touch display screen, a communication module and a camera; the main control module is in communication connection with the touch display screen, the communication module and the camera respectively, and the communication module is in communication connection with the data processing center.
Further, establishing the pig counting model based on deep learning comprises the following steps:
acquiring an initial pig image data set and preprocessing it to obtain a preprocessed pig image data set; and
establishing a YOLO v5 model based on deep learning and inputting the preprocessed pig image data set into the YOLO v5 model for training to obtain the pig counting model.
Further, the preprocessing comprises geometric transformation, noise addition, optical transformation and normalization applied to each image in the initial pig image data set.
Further, the YOLO v5 model comprises an input end, a Backbone module, a Neck module and a Prediction module, which are connected in sequence.
Further, the input end processes the input preprocessed pig images using the Mosaic data enhancement method;
the Backbone module comprises a Focus structure and a CSP structure;
the Neck module uses an FPN + PAN structure; and
the Prediction module performs loss calculation using the GIOU_Loss function.
Further, inputting the real-time video of pigs into the pig counting model for detection to obtain detection result images comprises the following steps:
extracting frames from the real-time video of pigs and preprocessing them to obtain consecutive preprocessed real-time pig images;
inputting the preprocessed real-time pig images into the pig counting model for detection to obtain pig detection images containing preselection boxes;
screening the preselection boxes of the current pig detection image using the soft-NMS algorithm to obtain the current pig detection image containing detection boxes;
tracking the pigs in the current pig detection image containing detection boxes using the SORT algorithm to obtain the pig IDs in all detection boxes of the current pig detection image; and
traversing all the preprocessed real-time pig images to obtain consecutive pig detection images containing pig IDs, which serve as the detection result images.
Further, the SORT (simple online and real-time tracking) algorithm comprises a Kalman filtering algorithm and a Hungarian matching algorithm.
Further, counting the pigs in the detection result images according to the pig IDs to obtain the number of pigs comprises the following steps:
establishing a two-dimensional coordinate system in the pig detection image based on the size of the pig detection image and the direction of the moving progressive shooting;
acquiring the pig coordinates corresponding to the current pig ID based on the two-dimensional coordinate system;
if the coordinates of the pig with the current pig ID reach the maximum value, incrementing the number of pigs by one; and
traversing the pig detection images to obtain the final number of pigs.
The invention has the beneficial effects that:
1) The invention provides a pig counting method based on deep learning, which detects and identifies the photographed pigs in real time and conveniently extracts the number of pigs with the pig counting model, avoiding manual counting, reducing labor cost, improving counting efficiency and avoiding errors.
2) The moving progressive shooting method avoids the dead angles of a fixed camera and improves the counting accuracy, while the portable shooting device used to acquire the real-time video of pigs reduces the hardware investment cost.
Other advantageous effects of the present invention will be further described in the detailed description.
Drawings
FIG. 1 is a flow chart of the pig counting method of the present invention.
Fig. 2 is a network structure diagram of the YOLO v5 model.
FIG. 3 is a detection result image exhibiting false identification and missed identification of pigs.
Fig. 4 is a complete detection result image.
FIG. 5 is a schematic diagram of the pig counting process.
Fig. 6 is a block diagram of the pig counting system.
Detailed Description
The invention is further explained below with reference to the drawings and the specific embodiments.
Example 1:
As shown in fig. 1, the present embodiment provides a pig counting method, which comprises the following steps:
establishing a pig counting model based on deep learning, which comprises the following steps:
acquiring an initial pig image data set and preprocessing it to obtain a preprocessed pig image data set;
the preprocessing comprises geometric transformation, noise addition, optical transformation and normalization applied to each image in the initial pig image data set;
the geometric transformations (translation, flipping, scaling, cropping and so on) enrich the positions and scales at which objects appear in the images, so that the model acquires translation and scale invariance; the optical transformations add images under different illumination and scenes, typical operations being random perturbation of brightness, contrast, hue and saturation and transformations between channel color gamuts; the noise addition adds a certain disturbance, such as Gaussian noise, to the original image, so that the model resists the noise it may encounter and its generalization ability improves; after normalization, the image is cropped and scaled to a fixed size;
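The four preprocessing operations above can be sketched in NumPy as follows. This is an illustrative sketch only: the flip probability, noise level, brightness range and output size of 640 are assumed parameters, none of which are specified in the patent.

```python
import numpy as np

def augment(img, rng, out_size=640):
    """Apply the four preprocessing steps named in the patent to one image
    (geometric transform, optical transform, noise addition, normalization).
    All parameter values are illustrative assumptions."""
    # Geometric transform: random horizontal flip (translation/scale invariance).
    if rng.random() < 0.5:
        img = img[:, ::-1, :]
    # Optical transform: random brightness/contrast perturbation.
    img = img * rng.uniform(0.8, 1.2) + rng.uniform(-10, 10)
    # Noise addition: Gaussian disturbance for robustness.
    img = img + rng.normal(0.0, 5.0, img.shape)
    # Normalization to [0, 1], then scaling to a fixed size (nearest neighbour).
    img = np.clip(img, 0, 255) / 255.0
    h, w = img.shape[:2]
    ys = np.arange(out_size) * h // out_size
    xs = np.arange(out_size) * w // out_size
    return img[ys][:, xs]

rng = np.random.default_rng(0)
out = augment(rng.uniform(0, 255, (480, 640, 3)), rng)
print(out.shape)  # (640, 640, 3)
```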
establishing a YOLO v5 model based on deep learning and inputting the preprocessed pig image data set into the YOLO v5 model for training to obtain the pig counting model;
the network structure of the YOLO v5 model is shown in fig. 2 and comprises an input end, a Backbone module, a Neck module and a Prediction module, which are connected in sequence;
the input end processes the input preprocessed pig images using the Mosaic data enhancement method, which takes 4 pictures and splices them by random scaling, random cropping and random arrangement, with the following advantages:
1) enriching the data set: 4 pictures are randomly selected, randomly scaled and then spliced in a random layout, which greatly enriches the detection data set; in particular, random scaling adds many small targets, making the network more robust;
2) reducing GPU requirements: random scaling during data enhancement normally demands more GPUs, but Mosaic realizes the random scaling on a single GPU, lowering the hardware requirement;
therefore, during Mosaic-enhanced training, the data of 4 pictures is computed directly, so the mini-batch size does not need to be large and a single GPU can achieve a good result;
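The Mosaic stitching idea can be sketched as below. This is a simplified illustration under assumptions: YOLO v5 also remaps the bounding-box labels onto the stitched canvas (omitted here), and the range of the random split point is an assumed choice.

```python
import numpy as np

def mosaic4(imgs, rng, size=608):
    """Stitch 4 images into one Mosaic training sample: each image is
    crudely rescaled and placed in one quadrant around a random centre.
    Label remapping, which a real pipeline needs, is omitted."""
    canvas = np.zeros((size, size, 3), dtype=np.float32)
    cx, cy = rng.integers(size // 4, 3 * size // 4, 2)  # random split point
    corners = [(0, 0, cx, cy), (cx, 0, size, cy),
               (0, cy, cx, size), (cx, cy, size, size)]
    for img, (x1, y1, x2, y2) in zip(imgs, corners):
        h, w = y2 - y1, x2 - x1
        ys = np.arange(h) * img.shape[0] // h  # nearest-neighbour scaling
        xs = np.arange(w) * img.shape[1] // w
        canvas[y1:y2, x1:x2] = img[ys][:, xs]
    return canvas

rng = np.random.default_rng(1)
imgs = [rng.uniform(0, 1, (240, 320, 3)) for _ in range(4)]
m = mosaic4(imgs, rng)
print(m.shape)  # (608, 608, 3)
```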
the Backbone module comprises a Focus structure and a CSP structure; the Focus structure does not appear in YOLO v3 or YOLO v4, and its key is a slicing operation: for example, a 4 × 4 × 3 image is sliced into a 2 × 2 × 12 feature map; when an original 608 × 608 × 3 image is input into the Focus structure, the slicing operation first turns it into a 304 × 304 × 12 feature map, which then passes through a convolution with 32 convolution kernels to finally become a 304 × 304 × 32 feature map;
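The Focus slicing operation described above can be reproduced directly with array indexing; the sketch below shows only the 608 × 608 × 3 → 304 × 304 × 12 slicing step (the subsequent 32-kernel convolution is not included).

```python
import numpy as np

def focus_slice(img):
    """YOLO v5 Focus slicing: take every second pixel at the four phase
    offsets and stack them on the channel axis, halving the spatial size
    and quadrupling the channels (e.g. 608x608x3 -> 304x304x12)."""
    return np.concatenate([img[0::2, 0::2], img[1::2, 0::2],
                           img[0::2, 1::2], img[1::2, 1::2]], axis=-1)

out_f = focus_slice(np.zeros((608, 608, 3)))
print(out_f.shape)  # (304, 304, 12)
```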
the Neck module uses an FPN + PAN structure; YOLO v5 designs two CSP structures, the CSP1_X structure applied in the Backbone module and the CSP2_X structure applied in the Neck module; the FPN builds a feature pyramid on the three basic feature layers output by the backbone network, adjusting the channel number with 1 × 1 convolutions and then fusing features by upsampling, finally outputting three effective feature layers for training and prediction;
a bottom-up feature pyramid containing two PAN structures is added after the FPN layer, forming the FPN + PAN structure of this scheme; in combination, the FPN layer conveys strong semantic features from top to bottom while the bottom-up pyramid conveys strong localization features from bottom to top, and different detection layers aggregate features from different backbone layers, further improving the feature extraction capability;
the Prediction module performs loss calculation using the GIOU_Loss function; by comparison, the CIOU_Loss function takes all three important geometric factors of bounding-box regression into account: the overlap area, the center-point distance and the aspect ratio;
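A minimal generalized-IoU computation, from which GIOU_Loss is obtained as 1 − GIoU, can be sketched as follows (boxes are in (x1, y1, x2, y2) form; this is an illustrative sketch, not the patent's implementation):

```python
def giou(a, b):
    """Generalized IoU of two boxes: IoU minus the fraction of the
    smallest enclosing box that is not covered by the union."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    # Smallest box enclosing both a and b.
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    enclose = cw * ch
    return inter / union - (enclose - union) / enclose

# Identical boxes give GIoU = 1 (loss 0); disjoint boxes give GIoU < 0.
print(giou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0
```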
acquiring a real-time video of the pig;
inputting the real-time video of pigs into the pig counting model for detection to obtain detection result images, which comprises the following steps:
extracting frames from the real-time video of pigs and preprocessing them to obtain consecutive preprocessed real-time pig images;
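The frame-interception step might be sketched as simple stride sampling. The target rate of 5 frames per second is an assumption (the patent does not specify a sampling rate), and in practice the frames would come from a video decoder such as cv2.VideoCapture; a plain list stands in here.

```python
def sample_frames(frames, fps, target_fps=5):
    """Keep every (fps // target_fps)-th frame of a video so the detector
    runs at a manageable rate. `frames` is any sequence of decoded frames."""
    step = max(1, fps // target_fps)
    return frames[::step]

kept_frames = sample_frames(list(range(30)), fps=30)
print(len(kept_frames))  # 5 of 30 frames kept
```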
inputting the preprocessed real-time pig images into the pig counting model for detection to obtain pig detection images containing preselection boxes;
screening the preselection boxes of the current pig detection image using the soft-NMS (softened non-maximum suppression) algorithm to obtain the current pig detection image containing detection boxes;
most target detection methods post-process with the non-maximum suppression (NMS) algorithm, whose usual procedure is to sort the detection boxes by score, keep the highest-scoring box, and delete every other box whose overlap with it exceeds a certain proportion; this greedy method has problems: if boxes No. 1 and No. 2 are the current detection results with scores of 0.95 and 0.80 respectively, traditional NMS first keeps box No. 1 with the highest score and then deletes box No. 2 because its overlap with box No. 1 is too large, so the pig count comes out low; on the other hand, the NMS threshold is hard to choose: a small threshold causes the problem above, while too high a threshold easily increases false detections, so the pig count comes out high;
this scheme adopts the soft-NMS algorithm with confidence ranking instead; suppose a box No. 3 is also detected and the goal is to keep boxes No. 1 and No. 2 while rejecting box No. 3: the original NMS algorithm can only keep box No. 1, rejecting both No. 2 and No. 3, whereas soft-NMS ranks the confidences of boxes No. 1, 2 and 3, and because overlapping boxes are only penalized rather than deleted, the ordering 1 > 2 > 3 is preserved; with a suitable confidence threshold, boxes No. 1 and No. 2 are kept and box No. 3 is rejected, realizing the screening of preselection boxes into detection boxes; this solves the stacking problem of pigs occluding one another and prevents the missed detections caused by overlapping pigs, resolving the false identification of a trough as pig ID 121 and the missed identification of the pig flank with ID 119 shown in fig. 3 and yielding the complete detection result image shown in fig. 4;
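A Gaussian-decay soft-NMS sketch illustrating the behaviour described above; the sigma, IoU and score thresholds are assumed values, not taken from the patent.

```python
import math

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.2):
    """Gaussian soft-NMS: instead of deleting boxes that overlap the
    current best box, decay their confidence, so heavily stacked pigs
    can keep a second detection. Boxes are (x1, y1, x2, y2)."""
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        ua = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
        return inter / ua if ua > 0 else 0.0

    items = sorted(zip(boxes, scores), key=lambda t: -t[1])
    kept = []
    while items:
        best, s = items.pop(0)
        kept.append((best, s))
        # Decay (rather than delete) the scores of overlapping boxes.
        items = [(b, sc * math.exp(-iou(best, b) ** 2 / sigma))
                 for b, sc in items]
        items = [(b, sc) for b, sc in items if sc >= score_thresh]
        items.sort(key=lambda t: -t[1])
    return kept

# Two stacked pigs plus one spurious box: both real boxes survive.
boxes = [(0, 0, 10, 10), (4, 0, 14, 10), (1, 1, 10, 10)]
scores = [0.95, 0.80, 0.30]
kept = soft_nms(boxes, scores)
print(len(kept))  # 2
```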
tracking the pigs in the current pig detection image containing detection boxes using the simple online and real-time tracking (SORT) algorithm to obtain the pig IDs in all detection boxes of the current pig detection image;
the SORT algorithm serves as the overall framework, its core being two algorithms, a Kalman filtering algorithm and a Hungarian matching algorithm; the main task is, given an image sequence, to find the moving objects in it and to identify the same moving object across different frames, i.e. to assign it a definite, accurate ID; the objects can be arbitrary, such as pedestrians, vehicles or various animals;
the Kalman filtering algorithm consists of two processes, prediction and update: in the prediction process, when the pig moves and both its initial position and its motion are Gaussian distributions, the estimated position distribution becomes more dispersed, i.e. less accurate; in the update process, when the pig's position is additionally observed by the sensor and both the initial position and the observation are Gaussian distributions, the resulting position distribution becomes more concentrated, i.e. more accurate; the Hungarian matching algorithm solves the assignment problem;
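The assignment step of SORT can be illustrated on a small case. The sketch below maximises total IoU by brute force over permutations, which yields the same assignment as the Hungarian algorithm for the small box counts shown; the pig IDs 119 and 121 echo fig. 3 and are otherwise arbitrary.

```python
from itertools import permutations

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    ua = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / ua if ua > 0 else 0.0

def assign(tracks, detections, min_iou=0.3):
    """Associate predicted track boxes with new detections by maximising
    total IoU. Real SORT uses the Hungarian algorithm; brute force over
    permutations is equivalent when there are only a few boxes.
    Assumes len(detections) >= len(tracks) for this small illustration."""
    ids = list(tracks)
    best, best_score = {}, -1.0
    for perm in permutations(range(len(detections)), len(ids)):
        score = sum(iou(tracks[t], detections[d]) for t, d in zip(ids, perm))
        if score > best_score:
            best_score = score
            best = {t: d for t, d in zip(ids, perm)}
    # Drop pairings whose overlap is too small to be the same pig.
    return {t: d for t, d in best.items()
            if iou(tracks[t], detections[d]) >= min_iou}

# Two tracked pigs move slightly between frames; their IDs carry over.
tracks = {119: (0, 0, 10, 10), 121: (20, 0, 30, 10)}
detections = [(21, 1, 31, 11), (1, 1, 11, 11)]
matched = assign(tracks, detections)
print(matched)  # {119: 1, 121: 0}
```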
traversing all the preprocessed real-time pig images to obtain consecutive pig detection images containing pig IDs, which serve as the detection result images;
counting the pigs in the detection result images to obtain the number of pigs, which comprises the following steps:
establishing a two-dimensional coordinate system in the pig detection image based on the size of the pig detection image and the direction of the moving progressive shooting;
in this embodiment, the lower border of the frame is used as the counting line: if a pig gradually moves from the far end to the near end and disappears across the lower border, the count is incremented by one; this counting rule guarantees accuracy provided the camera pose is kept such that the pigs move from top to bottom in the frame (if they moved from left to right, the right border would be chosen as the counting line instead); the picture's two-dimensional coordinate system has its origin, the (0,0) point, at the top-left corner, and the bottom-right corner at (image width on the horizontal axis, image height on the vertical axis);
acquiring the pig coordinates corresponding to the current pig ID based on the two-dimensional coordinate system, where the y value is the position of the pig (taken as the midpoint of the pig's detection box) along the vertical axis, measured from the upper boundary of the image;
as shown in (a), (b) and (c) of fig. 5, when the pig coordinates of the current pig ID reach the maximum value, i.e. the y value of the pig coordinates gradually increases to the maximum y value (the height of the picture), the object with the current pig ID disappears across the lower border and the number of pigs is incremented by one;
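The counting rule above can be sketched as follows; the margin parameter and the example box coordinates are illustrative assumptions, not values from the patent.

```python
def count_exits(tracks_per_frame, frame_height, margin=5):
    """Count a pig once when its box midpoint y reaches the lower image
    boundary (the counting line for top-to-bottom camera motion).
    tracks_per_frame: list of {pig_id: (x1, y1, x2, y2)} per video frame."""
    counted, total = set(), 0
    for tracks in tracks_per_frame:
        for pig_id, (x1, y1, x2, y2) in tracks.items():
            y_mid = (y1 + y2) / 2
            # Each pig ID is counted at most once, on first boundary contact.
            if y_mid >= frame_height - margin and pig_id not in counted:
                counted.add(pig_id)
                total += 1
    return total

# Pig 7 drifts down across three frames and crosses the lower border once.
frames = [{7: (100, 300, 160, 360)},
          {7: (100, 380, 160, 440)},
          {7: (100, 470, 160, 480)}]
n = count_exits(frames, frame_height=480)
print(n)  # 1
```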
and traversing the pig detection images to obtain the final number of pigs.
The invention provides a pig counting method based on deep learning, which detects and identifies the photographed pigs in real time and conveniently extracts the number of pigs with the pig counting model, avoiding manual counting, reducing labor cost, improving counting efficiency and avoiding errors.
Example 2:
As shown in fig. 6, this embodiment provides a pig counting system based on embodiment 1 and applied to the pig counting method of embodiment 1; the pig counting system comprises a shooting device and a data processing center, the shooting device being in communication connection with the data processing center;
the shooting device may be a smartphone or a tablet, counting a large pen by shooting video with the moving camera of the smartphone or tablet; it is used for shooting a real-time video of the pigs while moving progressively, sending the real-time video to the data processing center for processing, and receiving and displaying the detection result images;
the data processing center is used for receiving the real-time video of the pigs, establishing the pig counting model to process and detect the video, obtaining the detection result images, counting the pigs, and sending the detection result images back to the shooting device for display.
Preferably, the shooting device comprises a main control module, a touch display screen, a communication module and a camera; the main control module is in communication connection with the touch display screen, the communication module and the camera respectively, and the communication module is in communication connection with the data processing center;
the camera is used for capturing the real-time video of the pigs and sending it to the main control module for processing;
the main control module is used for receiving the captured real-time video of the pigs and sending it to the data processing center through the communication module;
the touch display screen is used for operating the shooting device and displaying the pig detection images.
The moving progressive shooting method avoids the dead angles of a fixed camera and improves the counting accuracy, while the portable shooting device used to acquire the real-time video of pigs reduces the hardware investment cost.
The present invention is not limited to the above-described alternative embodiments, and anyone may derive products in various other forms in light of the present invention. The above detailed description should not be taken as limiting the scope of the invention, which is defined in the claims; the description may be used to interpret the claims.

Claims (10)

1. A pig counting method, characterized in that it comprises the following steps:
establishing a pig counting model based on deep learning;
acquiring a real-time video of pigs;
inputting the real-time video of pigs into the pig counting model for detection to obtain detection result images; and
counting the pigs in the detection result images to obtain the number of pigs.
2. The pig counting method according to claim 1, characterized in that: the pig counting method is based on a pig counting system comprising a shooting device and a data processing center, the shooting device being in communication connection with the data processing center;
the shooting device is used for shooting a real-time video of the pigs while moving progressively, sending the real-time video to the data processing center for processing, and receiving and displaying the detection result images;
the data processing center is used for receiving the real-time video of the pigs, establishing the pig counting model to process and detect the video, obtaining the detection result images, counting the pigs, and sending the detection result images back to the shooting device for display.
3. The pig counting method according to claim 2, characterized in that: the shooting device comprises a main control module, a touch display screen, a communication module and a camera; the main control module is in communication connection with the touch display screen, the communication module and the camera respectively, and the communication module is in communication connection with the data processing center.
4. The pig counting method according to claim 1, characterized in that: establishing the pig counting model based on deep learning comprises the following steps:
acquiring an initial pig image data set and preprocessing it to obtain a preprocessed pig image data set; and
establishing a YOLO v5 model based on deep learning and inputting the preprocessed pig image data set into the YOLO v5 model for training to obtain the pig counting model.
5. The pig counting method according to claim 4, characterized in that: the preprocessing comprises geometric transformation, noise addition, optical transformation and normalization applied to each image in the initial pig image data set.
6. The pig counting method according to claim 4, characterized in that: the YOLO v5 model comprises an input end, a Backbone module, a Neck module and a Prediction module, which are connected in sequence.
7. The pig counting method according to claim 6, characterized in that: the input end processes the input preprocessed pig images using the Mosaic data enhancement method;
the Backbone module comprises a Focus structure and a CSP structure;
the Neck module uses an FPN + PAN structure; and
the Prediction module performs loss calculation using the GIOU_Loss function.
8. The pig counting method according to claim 1, characterized in that: inputting the real-time video of pigs into the pig counting model for detection to obtain detection result images comprises the following steps:
extracting frames from the real-time video of pigs and preprocessing them to obtain consecutive preprocessed real-time pig images;
inputting the preprocessed real-time pig images into the pig counting model for detection to obtain pig detection images containing preselection boxes;
screening the preselection boxes of the current pig detection image using the soft-NMS algorithm to obtain the current pig detection image containing detection boxes;
tracking the pigs in the current pig detection image containing detection boxes using the SORT algorithm to obtain the pig IDs in all detection boxes of the current pig detection image; and
traversing all the preprocessed real-time pig images to obtain consecutive pig detection images containing pig IDs, which serve as the detection result images.
9. The pig counting method according to claim 8, characterized in that: the SORT algorithm comprises a Kalman filtering algorithm and a Hungarian matching algorithm.
10. The pig counting method according to claim 1, characterized in that: counting the pigs in the detection result images according to the pig IDs to obtain the number of pigs comprises the following steps:
establishing a two-dimensional coordinate system in the pig detection image based on the size of the pig detection image and the direction of the moving progressive shooting;
acquiring the pig coordinates corresponding to the current pig ID based on the two-dimensional coordinate system;
if the coordinates of the pig with the current pig ID reach the maximum value, incrementing the number of pigs by one; and
traversing the pig detection images to obtain the final number of pigs.
CN202110772938.5A 2021-07-08 2021-07-08 Pig counting method Pending CN113516070A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110772938.5A CN113516070A (en) 2021-07-08 2021-07-08 Pig counting method

Publications (1)

Publication Number Publication Date
CN113516070A true CN113516070A (en) 2021-10-19

Family

ID=78067128

Country Status (1)

Country Link
CN (1) CN113516070A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119648A (en) * 2021-11-12 2022-03-01 史缔纳农业科技(广东)有限公司 Pig counting method for fixed channel
CN114467795A (en) * 2021-11-24 2022-05-13 河南牧原智能科技有限公司 Pig checking system and method
CN115126999A (en) * 2022-05-19 2022-09-30 合肥拉塞特机器人科技有限公司 Pig counting algorithm based on pictures and videos
CN115126999B (en) * 2022-05-19 2023-06-23 合肥拉塞特机器人科技有限公司 Pig counting algorithm based on pictures and videos

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108508782A (en) * 2018-01-29 2018-09-07 江苏大学 Pig behavior tracking identification monitoring device based on ARM and method
CN111507179A (en) * 2020-03-04 2020-08-07 杭州电子科技大学 Live pig feeding behavior analysis method
CN112597877A (en) * 2020-12-21 2021-04-02 中船重工(武汉)凌久高科有限公司 Factory personnel abnormal behavior detection method based on deep learning
CN113033376A (en) * 2021-03-22 2021-06-25 陕西科技大学 Captive goat counting method based on deep learning

Similar Documents

Publication Publication Date Title
CN113516070A (en) Pig counting method
CN106709436B (en) Track traffic panoramic monitoring-oriented cross-camera suspicious pedestrian target tracking system
CN109345547B (en) Traffic lane line detection method and device based on deep learning multitask network
CN110084241B (en) Automatic ammeter reading method based on image recognition
CN106971185B (en) License plate positioning method and device based on full convolution network
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN105262991B (en) A kind of substation equipment object identifying method based on Quick Response Code
CN108154102A (en) A kind of traffic sign recognition method
CN110287907B (en) Object detection method and device
CN108985170A (en) Transmission line of electricity hanger recognition methods based on Three image difference and deep learning
CN106228548A (en) The detection method of a kind of screen slight crack and device
CN113537049B (en) Ground point cloud data processing method and device, terminal equipment and storage medium
CN106817677A (en) A kind of indoor objects information identifying method, apparatus and system based on multisensor
CN112686172A (en) Method and device for detecting foreign matters on airport runway and storage medium
CN116052222A (en) Cattle face recognition method for naturally collecting cattle face image
CN111062331A (en) Mosaic detection method and device for image, electronic equipment and storage medium
CN111950345A (en) Camera identification method and device, electronic equipment and storage medium
CN111461076A (en) Smoke detection method and smoke detection system combining frame difference method and neural network
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
CN112070792B (en) Edge growth connection method and device for image segmentation
CN109409214A (en) The method and apparatus that the target object of a kind of pair of movement is classified
CN114550069B (en) Piglet nipple counting method based on deep learning
CN115690778A (en) Method for detecting, tracking and counting mature fruits based on deep neural network
Wang et al. Deep learning-based human activity analysis for aerial images
CN108967246B (en) Shrimp larvae positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination