CN114170673A - Method for identifying pig feeding behavior in video based on convolutional neural network - Google Patents

Method for identifying pig feeding behavior in video based on convolutional neural network

Info

Publication number
CN114170673A
Authority
CN
China
Prior art keywords
pig
image
neural network
convolutional neural
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111286420.7A
Other languages
Chinese (zh)
Inventor
郭杰
陈丽
肖德琴
熊本海
黄晓宁
伍晓仪
王凯
汤钦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wens Foodstuff Group Co Ltd
Original Assignee
Wens Foodstuff Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wens Foodstuff Group Co Ltd filed Critical Wens Foodstuff Group Co Ltd
Priority to CN202111286420.7A priority Critical patent/CN114170673A/en
Publication of CN114170673A publication Critical patent/CN114170673A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for identifying pig feeding behavior in video based on convolutional neural networks. The method first obtains video of pigs and collects images from the video, then uses an image selection algorithm to select images for training the neural networks from the collected images, inputs these images into the convolutional neural networks for parameter training, and finally, after parameter training is complete, uses the three convolutional neural networks to identify pig feeding behavior in the images and to identify the type of food in the pig trough. The method identifies pig feeding behavior by collecting and selecting images from videos of the pigs, causing no external interference to the animals, and determines whether a pig at the feed trough is in a standing state or a feeding state by identifying the orientation of the pig's head.

Description

Method for identifying pig feeding behavior in video based on convolutional neural network
Technical Field
The invention relates to the technical field of target detection and recognition in video images, and in particular to a method for identifying the feeding behavior of pigs in video based on a convolutional neural network.
Background
In smart animal husbandry, analyzing and identifying the feeding behavior of pigs is very important, as it strongly affects productivity optimization and health monitoring. Because disease has a significant impact on feeding behavior, identification of feeding behavior is also often used to predict disease in pigs. Pigs differ in feeding time, feeding frequency, feeding amount, and so on. To monitor feeding behavior, real-time positioning systems and ultra-wideband systems can be used to estimate the time a pig spends at the feed trough, but they cannot confirm that the pig is actually feeding there. Sensors such as RFID tags and accelerometers are commonly used to measure multiple parameters of an individual pig, but they are invasive, cause discomfort and stress to the pig, and are costly. In addition, existing feeding-behavior methods are generally based on detecting the presence of a pig in the feeding area, judging the pig to be feeding and computing feeding time from the time spent in that area; however, the presence of a pig in the feeding area does not mean that the pig is feeding. Moreover, in such detection setups the pigs share a whole trough and tend to crowd together when feeding, so they easily bump into the detection devices and damage them, or move in unison and cause missed detections.
Therefore, a method for identifying the feeding behavior of the pigs in the video based on the convolutional neural network is provided.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art, and provides a method for identifying the feeding behavior of pigs in a video based on a convolutional neural network.
In order to achieve this purpose, the technical solution provided by the invention is as follows: a method for identifying pig feeding behavior in video based on a convolutional neural network, comprising the following steps:
Step 1, capturing pig activity: acquire a video of the pigs and collect images from the video;
Step 2, image selection: use an image selection algorithm to select images for training the neural networks from the collected images, where the selected data set contains three kinds of images: images with no pig, images with a pig whose head is up, and images with a pig whose head is down;
Step 3, training the neural network parameters: input all three kinds of images (pig with head up, pig with head down, and no pig) into a first convolutional neural network that detects whether a pig is present in the image, a second convolutional neural network that identifies the pig's feeding behavior in the image, and a third convolutional neural network that identifies the food type in the pig trough, for parameter training of the networks;
Step 4, identifying pig behavior: after parameter training is completed, the three trained convolutional neural networks can identify the pig feeding behavior in the images and, at the same time, identify the food type in the pig trough.
Step 5, setting up the feeding positions: measure the body widths of a number of adult pigs to obtain adult-pig body-width data, divide the feed trough according to this data, and install partitions to form individual pig feeding positions; the partitions are vertical, higher than the pig's legs but lower than its back, and no covering structure is placed above them.
Preferably, in step 1, the video of the pig is captured by a camera installed right above the pig feeding trough, images are collected from the video at a rate of one frame per second, and the collection time of the collected images is recorded.
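For illustration only, a minimal Python/OpenCV sketch of this one-frame-per-second sampling is given below; the video file name, start time and helper names are assumptions, not part of the patent.

```python
import cv2
from datetime import datetime, timedelta

def sample_frames(video_path, start_time):
    """Collect one frame per second and record each frame's capture time."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS metadata is missing
    step = int(round(fps))
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:                    # keep roughly one frame per second
            frames.append((start_time + timedelta(seconds=idx / fps), frame))
        idx += 1
    cap.release()
    return frames

# e.g. a recording that started at 08:00 on 1 Nov 2021 (assumed values)
frames = sample_frames("pig_trough.mp4", datetime(2021, 11, 1, 8, 0, 0))
```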
Preferably, in step 2, the image selection algorithm is a background extraction algorithm based on an adaptive Gaussian mixture model, which automatically adjusts the number of Gaussians used to model a given pixel. For a given number of Gaussians B, the probability that a new data sample $x_t$ is a background pixel is:

$$p(x_t) = \sum_{i=1}^{B} \omega_{i,t}\,\mathcal{N}\!\left(x_t;\ \mu_{i,t},\ \sigma_{i,t}^{2} I\right)$$

where $\mathcal{N}(x_t;\ \mu_{i,t},\ \sigma_{i,t}^{2} I)$ is the $i$-th Gaussian probability density function with weight $\omega_{i,t}$ at time $t$, $\sigma_{i,t}^{2}$ and $\mu_{i,t}$ are respectively the variance and mean of the $i$-th Gaussian at time $t$, $\sigma_{i,t}^{2} I$ is the covariance matrix, and $I$ is the identity matrix. The parameters $\omega_{i,t}$, $\mu_{i,t}$ and $\sigma_{i,t}$ are updated over time $t$ (with learning rate $\alpha$) as follows:

$$\omega_{i,t} = \omega_{i,t-1} + \alpha\,(O_{i,t} - \omega_{i,t-1})$$

$$\mu_{i,t} = \mu_{i,t-1} + O_{i,t}\,(\alpha/\omega_{i,t})\,\delta_{i,t}$$

$$\sigma_{i,t}^{2} = \sigma_{i,t-1}^{2} + O_{i,t}\,(\alpha/\omega_{i,t})\left(\delta_{i,t}^{\mathsf{T}}\delta_{i,t} - \sigma_{i,t-1}^{2}\right)$$

where $\delta_{i,t} = x_t - \mu_{i,t}$. For a data sample, if its Mahalanobis distance to the Gaussian with the largest $\omega_{i,t}$ is less than 3, the ownership $O_{i,t}$ of that Gaussian is set to 1 and the other $O_{i,t}$ are set to 0. The Gaussian distributions are sorted by weight $\omega_{i,t}$ from large to small, and the number of Gaussians B is updated as:

$$B = \operatorname*{arg\,min}_{b}\left(\sum_{i=1}^{b} \omega_{i,t} > 1 - c_f\right)$$

where $c_f$ is a measure of the maximum portion of the data that can belong to the foreground without influencing the background model. This background extraction algorithm is used to compute the area of pixel change between consecutive images; if the changed area between two consecutive images is small, the later image is discarded. By extracting the background in this way, any image reflecting a significant change in pig state is detected and selected. The selected image data set contains three kinds of images: images with no pig, images with a pig whose head is up, and images with a pig whose head is down, and each selected image is labeled with its type, where a pig with its head up is considered to be standing near the pig feed trough and a pig with its head down is considered to be feeding at the pig feed trough.
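As a hedged illustration of this selection step, the sketch below uses OpenCV's MOG2 background subtractor (an implementation of an adaptive Gaussian mixture model) as a stand-in for the algorithm described above; the changed-pixel threshold and the (timestamp, frame) input format are assumptions.

```python
import cv2

MIN_CHANGED_PIXELS = 5000  # assumed threshold for a "significant" state change

def select_images(frames):
    """Keep only frames whose changed-pixel (foreground) area is large."""
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    selected = []
    for timestamp, frame in frames:
        fg_mask = subtractor.apply(frame)      # per-pixel foreground mask
        changed = cv2.countNonZero(fg_mask)    # size of the changed region
        if changed >= MIN_CHANGED_PIXELS:      # small change -> discard the frame
            selected.append((timestamp, frame))
    return selected
```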
Preferably, in step 3, the first convolutional neural network, which detects pigs in the image, divides all three kinds of images into two classes: a pig is present in the image, or no pig is present. The second convolutional neural network, which identifies pig feeding behavior in the image, divides the images containing a pig into two classes: the pig's head is up, or the pig's head is down. The third convolutional neural network, which identifies the food type in the pig trough, divides the images containing no pig into four classes: the trough contains no food, the food is fattening feed, the food is sow feed, or the food is piglet feed. All three convolutional neural networks are based on the Caffe convolutional neural network framework; apart from changing the number of output classes according to the image data set used to train each network, the architecture is kept the same and consists of a feature extraction layer and a classification layer. The feature extraction layer consists of four convolutional layers, each followed by a pooling layer; the four convolutional layers have kernel size 3, stride 1 and padding 0, and the four pooling layers are max-pooling layers with kernel size 2, stride 2 and padding 0. The classification layer consists of three fully connected layers, and the activation function is the ReLU function, defined as f(x) = max(0, x). A dropout method is used to avoid overfitting of the neural networks. The loss function of the networks is the softmax cross-entropy loss; for a given sample with class label y, the softmax cross-entropy loss is computed as:

$$L = -\log\frac{e^{z_y}}{\sum_{c=1}^{C} e^{z_c}}$$

where C is the number of output classes of the convolutional neural network and z is the network's predicted output (logits) for all classes.
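The patent trains the networks in Caffe; purely as an illustrative sketch, an equivalent architecture in PyTorch is shown below. The channel counts, hidden-layer sizes, dropout rate and the 128x128 input size are assumptions; only the kernel size 3 / stride 1 / padding 0 convolutions, 2x2 max pooling, three fully connected layers, ReLU, dropout and softmax cross-entropy loss come from the description above.

```python
import torch
import torch.nn as nn

class PigBehaviorCNN(nn.Module):
    def __init__(self, num_classes):  # 2, 2 or 4 depending on which network
        super().__init__()
        # Four 3x3 / stride-1 / padding-0 convolutions, each followed by 2x2 max pooling
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=0), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=0), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=0), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=0), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),
        )
        # Three fully connected layers with ReLU and dropout; 6x6 feature maps
        # follow from the assumed 128x128 input size.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes),   # logits z; softmax happens inside the loss
        )

    def forward(self, x):                  # x: (N, 3, 128, 128)
        return self.classifier(self.features(x))

criterion = nn.CrossEntropyLoss()          # softmax cross-entropy: L = -log(e^{z_y} / sum_c e^{z_c})
```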
Preferably, in step 4, after the parameter training of the three convolutional neural networks is completed, an image is input into the first convolutional neural network, which detects the image and divides it into two classes. If a pig is present in the image, the image is input into the second convolutional neural network, which identifies whether the pig in the image is standing or feeding; if no pig is present, the image is input into the third convolutional neural network, which identifies the food type in the pig feed trough. The three trained convolutional neural networks can thus identify, in an image, a standing pig, a feeding pig and the food type in the trough, i.e. they identify the pig feeding behavior in the image. Combined with the method of collecting images from video and applying the image selection algorithm, the feeding behavior of the pigs in the video can be identified.
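A minimal sketch of this three-network cascade is given below, assuming net1, net2 and net3 are trained classifiers with 2, 2 and 4 outputs respectively (for example, instances of the PigBehaviorCNN sketched earlier); the label orderings are assumptions.

```python
import torch

PIG_LABELS  = ["no_pig", "pig_present"]
POSE_LABELS = ["head_up_standing", "head_down_feeding"]
FOOD_LABELS = ["empty_trough", "fattening_feed", "sow_feed", "piglet_feed"]

@torch.no_grad()
def identify(image_tensor, net1, net2, net3):
    """Return a behaviour / trough-content label for one image tensor of shape (1, 3, H, W)."""
    if PIG_LABELS[net1(image_tensor).argmax(1).item()] == "pig_present":
        # A pig is present: decide standing (head up) vs. feeding (head down).
        return POSE_LABELS[net2(image_tensor).argmax(1).item()]
    # No pig: identify what food, if any, is in the trough.
    return FOOD_LABELS[net3(image_tensor).argmax(1).item()]
```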
preferably, in the step 5, the baffle is arranged, so that the feeding of the pigs is not influenced, only one adult pig can be accommodated, and the shielding structure is not arranged at the top of the pig, so that the video recording work of the pigs is not influenced.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The method collects pig data using only a camera, so it is easy to deploy and causes no external interference to the pigs;
2. The method distinguishes the head-up and head-down states of the pigs, so a pig at the feed trough can be confirmed to be feeding, which improves the accuracy of identification and analysis;
3. The method uses a background extraction algorithm based on an adaptive Gaussian mixture model to purposefully select images from the video, which improves recognition efficiency;
4. The method detects and identifies pigs using only convolutional neural networks, and good results are obtained even when the images contain interference such as low light, so recognition accuracy remains high in the complex environment of a farm;
5. The method uses single-pig feeding positions; each position accommodates only one adult pig, and once an adult pig enters, no other adult pig can enter, which facilitates counting individual pigs; the feeding position does not occlude the pig's overall shape and does not hinder the raising or lowering of its head, which improves detection precision and avoids missed detections.
Drawings
FIG. 1 is a diagram of a convolutional neural network used in the method of the present invention;
FIG. 2 is a schematic flow chart of the method of the present invention.
Detailed Description
The present invention will be further described with reference to the following specific examples.
As shown in fig. 1 and fig. 2, the method for identifying pig feeding behavior in video based on a convolutional neural network in this embodiment uses an image selection algorithm to select the images collected from the video, and then uses three convolutional neural networks to identify the pig feeding behavior in the images; it comprises the following steps:
Step 1, capturing pig activity: acquire a video of the pigs and collect images from it. The video is captured by a camera installed directly above the pig feed trough, images are collected from the video at a rate of one frame per second, and the collection time of each collected image is recorded.
Step 2, image selection: use an image selection algorithm to select images for training the neural networks from the collected images, where the selected data set contains three kinds of images: images with no pig, images with a pig whose head is up, and images with a pig whose head is down. The image selection algorithm is a background extraction algorithm based on an adaptive Gaussian mixture model, which automatically adjusts the number of Gaussians used to model a given pixel. For a given number of Gaussians B, the probability that a new data sample $x_t$ is a background pixel is:

$$p(x_t) = \sum_{i=1}^{B} \omega_{i,t}\,\mathcal{N}\!\left(x_t;\ \mu_{i,t},\ \sigma_{i,t}^{2} I\right)$$

where $\mathcal{N}(x_t;\ \mu_{i,t},\ \sigma_{i,t}^{2} I)$ is the $i$-th Gaussian probability density function with weight $\omega_{i,t}$ at time $t$, $\sigma_{i,t}^{2}$ and $\mu_{i,t}$ are respectively the variance and mean of the $i$-th Gaussian at time $t$, $\sigma_{i,t}^{2} I$ is the covariance matrix, and $I$ is the identity matrix. The parameters $\omega_{i,t}$, $\mu_{i,t}$ and $\sigma_{i,t}$ are updated over time $t$ (with learning rate $\alpha$) as follows:

$$\omega_{i,t} = \omega_{i,t-1} + \alpha\,(O_{i,t} - \omega_{i,t-1})$$

$$\mu_{i,t} = \mu_{i,t-1} + O_{i,t}\,(\alpha/\omega_{i,t})\,\delta_{i,t}$$

$$\sigma_{i,t}^{2} = \sigma_{i,t-1}^{2} + O_{i,t}\,(\alpha/\omega_{i,t})\left(\delta_{i,t}^{\mathsf{T}}\delta_{i,t} - \sigma_{i,t-1}^{2}\right)$$

where $\delta_{i,t} = x_t - \mu_{i,t}$. For a data sample, if its Mahalanobis distance to the Gaussian with the largest $\omega_{i,t}$ is less than 3, the ownership $O_{i,t}$ of that Gaussian is set to 1 and the other $O_{i,t}$ are set to 0. The Gaussian distributions are sorted by weight $\omega_{i,t}$ from large to small, and the number of Gaussians B is updated as:

$$B = \operatorname*{arg\,min}_{b}\left(\sum_{i=1}^{b} \omega_{i,t} > 1 - c_f\right)$$

where $c_f$ is a measure of the maximum portion of the data that can belong to the foreground without influencing the background model. This background extraction algorithm is used to compute the area of pixel change between consecutive images; if the changed area between two consecutive images is small, the later image is discarded. By extracting the background in this way, any image reflecting a significant change in pig state is detected and selected. The selected image data set contains three kinds of images: images with no pig, images with a pig whose head is up, and images with a pig whose head is down, and each selected image is labeled with its type, where a pig with its head up is considered to be standing near the pig feed trough and a pig with its head down is considered to be feeding at the pig feed trough.
Step 3, training the neural network parameters: input all three kinds of images (pig with head up, pig with head down, and no pig) into a first convolutional neural network that detects whether a pig is present in the image, a second convolutional neural network that identifies the pig's feeding behavior in the image, and a third convolutional neural network that identifies the food type in the pig trough, for parameter training of the networks. The first convolutional neural network divides all three kinds of images into two classes: a pig is present in the image, or no pig is present. The second convolutional neural network divides the images containing a pig into two classes: the pig's head is up, or the pig's head is down. The third convolutional neural network divides the images containing no pig into four classes: the trough contains no food, the food is fattening feed, the food is sow feed, or the food is piglet feed. All three convolutional neural networks are based on the Caffe convolutional neural network framework; apart from changing the number of output classes according to the image data set used to train each network, the architecture is kept the same and consists of a feature extraction layer and a classification layer. The feature extraction layer consists of four convolutional layers, each followed by a pooling layer; the four convolutional layers have kernel size 3, stride 1 and padding 0, and the four pooling layers are max-pooling layers with kernel size 2, stride 2 and padding 0. The classification layer consists of three fully connected layers, and the activation function is the ReLU function, defined as f(x) = max(0, x). A dropout method is used to avoid overfitting of the neural networks. The loss function of the networks is the softmax cross-entropy loss; for a given sample with class label y, the softmax cross-entropy loss is computed as:

$$L = -\log\frac{e^{z_y}}{\sum_{c=1}^{C} e^{z_c}}$$

where C is the number of output classes of the convolutional neural network and z is the network's predicted output (logits) for all classes.
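As a sketch of this parameter-training step, a minimal PyTorch training loop is shown below; the optimiser, learning rate, batch size and number of epochs are illustrative choices, not taken from the patent.

```python
import torch
from torch.utils.data import DataLoader

def train(model, dataset, num_epochs=20, lr=1e-3, batch_size=32):
    """Train one of the three classifiers with softmax cross-entropy loss."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()    # softmax cross-entropy loss
    model.train()                              # enables the dropout layers
    for epoch in range(num_epochs):
        for images, labels in loader:          # labels are class indices y
            optimiser.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimiser.step()
    return model
```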
Step 4, identifying pig behavior: after parameter training is completed, the three trained convolutional neural networks can identify the pig feeding behavior in the images and, at the same time, identify the food type in the pig feed trough. An image is first input into the first convolutional neural network, which detects the image and divides it into two classes. If a pig is present in the image, the image is input into the second convolutional neural network, which identifies whether the pig is standing or feeding; if no pig is present, the image is input into the third convolutional neural network, which identifies the food type in the pig feed trough. The three trained networks can thus identify, in an image, a standing pig, a feeding pig and the food type in the trough, i.e. the pig feeding behavior in the image; combined with the method of collecting images from video and applying the image selection algorithm, the feeding behavior of the pigs in the video can be identified.
Step 5, setting up the feeding positions: measure the body widths of a number of adult pigs to obtain adult-pig body-width data, divide the feed trough according to this data, and install partitions to form individual pig feeding positions. The partitions are vertical, higher than the pig's legs but lower than its back, with no covering structure above them. The partitions are arranged so that they do not interfere with feeding, each feeding position can accommodate only one adult pig, and the absence of a covering structure on top means that video recording of the pigs is not affected.
The above embodiments are merely preferred embodiments of the present invention, and the scope of the invention is not limited thereto; changes made according to the shape and principle of the present invention shall also fall within its protection scope.

Claims (6)

1. A method for identifying pig feeding behavior in video based on a convolutional neural network, characterized by comprising the following steps:
Step 1, capturing pig activity: acquire a video of the pigs and collect images from the video;
Step 2, image selection: use an image selection algorithm to select images for training the neural networks from the collected images, where the selected data set contains three kinds of images: images with no pig, images with a pig whose head is up, and images with a pig whose head is down;
Step 3, training the neural network parameters: input all three kinds of images (pig with head up, pig with head down, and no pig) into a first convolutional neural network that detects whether a pig is present in the image, a second convolutional neural network that identifies the pig's feeding behavior in the image, and a third convolutional neural network that identifies the food type in the pig trough, for parameter training of the networks;
Step 4, identifying pig behavior: after parameter training is completed, the three trained convolutional neural networks can identify the pig feeding behavior in the images and, at the same time, identify the food type in the pig feed trough, and, combined with the method of collecting images from video and applying the image selection algorithm, the feeding behavior of the pigs in the video can be identified;
Step 5, setting up the feeding positions: measure the body widths of a number of adult pigs to obtain adult-pig body-width data, divide the feed trough according to this data, and install partitions to form individual pig feeding positions; the partitions are vertical, higher than the pig's legs but lower than its back, and no covering structure is placed above them.
2. The method for identifying pig feeding behavior in video based on a convolutional neural network as claimed in claim 1, wherein in step 1 the pig video is captured by a camera installed directly above the pig feed trough, images are collected from the video at a rate of one frame per second, and the collection time of each collected image is recorded.
3. The method for identifying pig feeding behavior in video based on a convolutional neural network as claimed in claim 1, wherein in step 2 the image selection algorithm is a background extraction algorithm based on an adaptive Gaussian mixture model, which automatically adjusts the number of Gaussians used to model a given pixel; for a given number of Gaussians B, the probability that a new data sample $x_t$ is a background pixel is:

$$p(x_t) = \sum_{i=1}^{B} \omega_{i,t}\,\mathcal{N}\!\left(x_t;\ \mu_{i,t},\ \sigma_{i,t}^{2} I\right)$$

where $\mathcal{N}(x_t;\ \mu_{i,t},\ \sigma_{i,t}^{2} I)$ is the $i$-th Gaussian probability density function with weight $\omega_{i,t}$ at time $t$, $\sigma_{i,t}^{2}$ and $\mu_{i,t}$ are respectively the variance and mean of the $i$-th Gaussian at time $t$, $\sigma_{i,t}^{2} I$ is the covariance matrix, and $I$ is the identity matrix; the parameters $\omega_{i,t}$, $\mu_{i,t}$ and $\sigma_{i,t}$ are updated over time $t$ (with learning rate $\alpha$) as follows:

$$\omega_{i,t} = \omega_{i,t-1} + \alpha\,(O_{i,t} - \omega_{i,t-1})$$

$$\mu_{i,t} = \mu_{i,t-1} + O_{i,t}\,(\alpha/\omega_{i,t})\,\delta_{i,t}$$

$$\sigma_{i,t}^{2} = \sigma_{i,t-1}^{2} + O_{i,t}\,(\alpha/\omega_{i,t})\left(\delta_{i,t}^{\mathsf{T}}\delta_{i,t} - \sigma_{i,t-1}^{2}\right)$$

where $\delta_{i,t} = x_t - \mu_{i,t}$; for a data sample, if its Mahalanobis distance to the Gaussian with the largest $\omega_{i,t}$ is less than 3, the ownership $O_{i,t}$ of that Gaussian is set to 1 and the other $O_{i,t}$ are set to 0; the Gaussian distributions are sorted by weight $\omega_{i,t}$ from large to small, and the number of Gaussians B is updated as:

$$B = \operatorname*{arg\,min}_{b}\left(\sum_{i=1}^{b} \omega_{i,t} > 1 - c_f\right)$$

where $c_f$ is a measure of the maximum portion of the data that can belong to the foreground without influencing the background model; this background extraction algorithm is used to compute the area of pixel change between consecutive images, and if the changed area between two consecutive images is small, the later image is discarded, so that any image reflecting a significant change in pig state is detected and selected by extracting the background; the selected image data set contains three kinds of images: images with no pig, images with a pig whose head is up, and images with a pig whose head is down, and each selected image is labeled with its type, where a pig with its head up is considered to be standing near the pig feed trough and a pig with its head down is considered to be feeding at the pig feed trough.
4. The method for identifying pig feeding behavior in video based on a convolutional neural network as claimed in claim 1, wherein in step 3 the first convolutional neural network, which detects pigs in the image, divides all three kinds of images into two classes: a pig is present in the image, or no pig is present; the second convolutional neural network, which identifies pig feeding behavior in the image, divides the images containing a pig into two classes: the pig's head is up, or the pig's head is down; the third convolutional neural network, which identifies the food type in the pig feed trough, divides the images containing no pig into four classes: the trough contains no food, the food is fattening feed, the food is sow feed, or the food is piglet feed; all three convolutional neural networks are based on the Caffe convolutional neural network framework and, apart from changing the number of output classes according to the image data set used to train each network, the architecture is kept the same and consists of a feature extraction layer and a classification layer; the feature extraction layer consists of four convolutional layers, each followed by a pooling layer, the four convolutional layers having kernel size 3, stride 1 and padding 0, and the four pooling layers being max-pooling layers with kernel size 2, stride 2 and padding 0; the classification layer consists of three fully connected layers, the activation function is the ReLU function, defined as f(x) = max(0, x), and a dropout method is used to avoid overfitting of the neural networks; the loss function of the networks is the softmax cross-entropy loss, and for a given sample with class label y the softmax cross-entropy loss is computed as:

$$L = -\log\frac{e^{z_y}}{\sum_{c=1}^{C} e^{z_c}}$$

where C is the number of output classes of the convolutional neural network and z is the network's predicted output (logits) for all classes.
5. The method for identifying pig feeding behavior in video based on a convolutional neural network as claimed in claim 1, wherein in step 4, after the parameter training of the three convolutional neural networks is completed, an image is input into the first convolutional neural network, which detects the image and divides it into two classes; if a pig is present in the image, the image is input into the second convolutional neural network, which identifies whether the pig is standing or feeding; if no pig is present, the image is input into the third convolutional neural network, which identifies the food type in the pig feed trough; the three trained convolutional neural networks can thus identify, in an image, a standing pig, a feeding pig and the food type in the pig feed trough, i.e. the pig feeding behavior in the image, and, combined with the method of collecting images from video and applying the image selection algorithm, the feeding behavior of the pigs in the video can be identified.
6. The method for identifying pig feeding behavior in video based on a convolutional neural network as claimed in claim 1, wherein in step 5 the partitions are arranged so that they do not interfere with feeding, each feeding position can accommodate only one adult pig, and no covering structure is placed on top, so that video recording of the pigs is not affected.
CN202111286420.7A 2021-11-01 2021-11-01 Method for identifying pig feeding behavior in video based on convolutional neural network Pending CN114170673A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111286420.7A CN114170673A (en) 2021-11-01 2021-11-01 Method for identifying pig feeding behavior in video based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111286420.7A CN114170673A (en) 2021-11-01 2021-11-01 Method for identifying pig feeding behavior in video based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN114170673A true CN114170673A (en) 2022-03-11

Family

ID=80477756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111286420.7A Pending CN114170673A (en) 2021-11-01 2021-11-01 Method for identifying pig feeding behavior in video based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN114170673A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036820A (en) * 2023-08-21 2023-11-10 青岛中沃兴牧食品科技有限公司 Pig classification model based on visual image and method thereof
CN117036820B (en) * 2023-08-21 2024-03-19 青岛中沃兴牧食品科技有限公司 Pig classification model based on visual image and method thereof

Similar Documents

Publication Publication Date Title
US9984183B2 (en) Method for automatic behavioral phenotyping
Wu et al. Detection and counting of banana bunches by integrating deep learning and classic image-processing algorithms
Zhou et al. Strawberry maturity classification from UAV and near-ground imaging using deep learning
Zhang et al. Real-time sow behavior detection based on deep learning
CN109325431B (en) Method and device for detecting vegetation coverage in feeding path of grassland grazing sheep
Jingqiu et al. Cow behavior recognition based on image analysis and activities
CN107330403B (en) Yak counting method based on video data
CN111985445B (en) Grassland insect pest monitoring system and method based on unmanned aerial vehicle multispectral remote sensing
CN114818909B (en) Weed detection method and device based on crop growth characteristics
CN107610122B (en) Micro-CT-based single-grain cereal internal insect pest detection method
KR102265809B1 (en) Method and apparatus for detecting behavior pattern of livestock using acceleration sensor
CN113298023B (en) Insect dynamic behavior identification method based on deep learning and image technology
Chen et al. A kinetic energy model based on machine vision for recognition of aggressive behaviours among group-housed pigs
CN111476119B (en) Insect behavior identification method and device based on space-time context
Hasan et al. Fish diseases detection using convolutional neural network (CNN)
CN114170673A (en) Method for identifying pig feeding behavior in video based on convolutional neural network
CN110490161B (en) Captive animal behavior analysis method based on deep learning
CN112883915A (en) Automatic wheat ear identification method and system based on transfer learning
CN111160422B (en) Analysis method for detecting attack behaviors of group-raised pigs by adopting convolutional neural network and long-term and short-term memory
Ban et al. A lightweight model based on YOLOv8n in wheat spike detection
CN115439789A (en) Intelligent identification method and identification system for life state of silkworm
Fang et al. Classification system study of soybean leaf disease based on deep learning
CN114022831A (en) Binocular vision-based livestock body condition monitoring method and system
Mishra et al. Convolutional Neural Network Method for Effective Plant Disease Prediction
CN108205652A (en) A kind of recognition methods of action of having a meal and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination