CN114358163A - Food intake monitoring method and system based on twin network and depth data - Google Patents

Food intake monitoring method and system based on twin network and depth data

Info

Publication number
CN114358163A
CN114358163A (application CN202111622223.8A)
Authority
CN
China
Prior art keywords
eating, image, network, images, residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111622223.8A
Other languages
Chinese (zh)
Inventor
魏晓莉
沈维政
王鑫杰
戴百生
严士超
李洋
张永根
熊本海
Current Assignee
Northeast Agricultural University
Original Assignee
Northeast Agricultural University
Priority date
Filing date
Publication date
Application filed by Northeast Agricultural University
Priority claimed from CN202111622223.8A
Publication of CN114358163A
Legal status: Pending


Abstract

The invention discloses a feed intake monitoring method and system based on a twin (Siamese) network and depth data, comprising the following steps: acquiring before-eating and after-eating images for several feeding bouts of the cows; inputting the before-eating image and the after-eating image into a twin network, in which a shared feature extraction network maps each image into the same vector space to obtain a before-eating multi-dimensional feature vector and an after-eating multi-dimensional feature vector, and tiling the two multi-dimensional feature vectors; subtracting the two tiled feature vectors to obtain a new feature vector; and passing the new feature vector through a single fully connected layer to obtain the feed intake. With this method, the single-bout feed intake of dairy cows can be predicted without preprocessing the feed-pile images taken before and after eating; the method is little affected by illumination, its prediction performance varies little across illumination conditions, and its stability and accuracy are improved. In addition, the method can be combined directly with other computer-vision-based methods to achieve fully non-contact monitoring of the single-bout feed intake of individual cows.

Description

Food intake monitoring method and system based on twin network and depth data
Technical Field
The invention belongs to the field of agriculture and livestock breeding systems, and particularly relates to a feed intake monitoring method and system based on a twin network and depth data.
Background
Feed intake is one of the main factors influencing the growth, development, and lactation performance of dairy cows; it is an important indicator of an individual cow's health and an important basis for evaluating feed utilization and feeding efficiency and for adjusting pasture management decisions. Feed intake monitoring is therefore of great significance for the precision feeding of dairy cows.
At present, the main methods for monitoring the feed intake of dairy cows include methods combining RFID with a feed trough equipped with a weighing sensor, feed intake estimation methods based on wearable devices, near-infrared spectroscopy, and the like. The RFID-plus-weighing-trough approach offers high precision and accuracy, but it is costly, requires frequent cleaning and maintenance, and is rarely used on pastures. Wearable-device methods collect the feeding behavior parameters of the cows through smart collars, foot rings, halters, and similar equipment to build a feed intake prediction model; this approach easily causes a stress response in the cows and yields only estimates, so its precision needs improvement. Near-infrared spectroscopy analyzes cow manure with a near-infrared spectrometer to determine feed composition, digestibility, and so on, and then estimates the feed intake through related calculations. For the problem of monitoring the feed intake of individual cows, further research is therefore needed to obtain a monitoring method that is cheaper, more precise, and more practical.
In recent years, with the continuous development of optical imaging technology, computer vision has been adopted to monitor the feed intake of cows; it neither requires expensive equipment nor causes the stress reaction associated with wearables. For example, one approach uses a three-dimensional camera to measure feed volume and derives the relationship between feed volume and weight through linear and quadratic least-squares regression analysis; over a 22.68 kg range, the system error was 0.5 kg. That work demonstrated the feasibility of computer vision, but its accuracy needed improvement. Another approach uses several high-resolution RGB cameras to photograph the monitored feed pile in a marked area from multiple angles, performs three-dimensional reconstruction, and predicts the feed intake from changes in the pile's shape and volume, with high accuracy: under laboratory conditions, for a feed pile below 7 kg, the mass estimation error was 0.483 kg, and under cowshed conditions the error was below 0.5 kg. The main limitations of this method are that, while a cow eats in a real scene, the feed is hard to keep concentrated inside the marked area, and the markers delimiting the feed range are easily contaminated. A further approach subtracts the RGB-D images of the feed pile taken before and after eating on each of the four channels (retaining negative values) and trains a convolutional neural network on the resulting tensor to monitor an individual cow's feed intake; the reported mean absolute error of intake prediction was 0.127 kg and the mean squared error 0.034 kg².
These techniques show the potential of computer vision for measuring feed intake, feed volume, and feed weight, but problems remain in the prior art: accuracy needs improvement, data processing is cumbersome, and stable operation in complex environments is difficult.
Disclosure of Invention
The invention aims to provide a feed intake monitoring method and system based on a twin network and depth data, so as to solve the problems of high cost, insufficient precision, and complex data processing in existing feed intake monitoring methods.
In one aspect, to achieve the above purpose, the invention provides a feed intake monitoring method based on a twin network and depth data, comprising the following steps:
acquiring before-eating images and after-eating images for several feeding bouts of the cows;
inputting the before-eating image and the after-eating image into a twin network, in which a shared feature extraction network maps each image into the same vector space to obtain a before-eating multi-dimensional feature vector and an after-eating multi-dimensional feature vector, and tiling the two multi-dimensional feature vectors;
subtracting the tiled after-eating multi-dimensional feature vector from the tiled before-eating multi-dimensional feature vector to obtain a new feature vector;
and passing the new feature vector through a single fully connected layer to obtain the feed intake.
Optionally, the process of acquiring the before-eating images and after-eating images for several feeding bouts of the cows includes:
the method comprises the steps of acquiring images before food intake and images after food intake under different illumination conditions through a depth camera, wherein the images before food intake and the images after food intake are depth images, and the different illumination conditions comprise weak light, strong light, indoor weak light and no illumination.
Optionally, the process of acquiring the before-eating images and after-eating images for several feeding bouts of the cows further includes:
when feeding starts and the cow's ear tag is detected by the RFID sensor, collecting and recording the cow's ID and the weight of the feed before eating;
and after feeding ends, synchronously collecting the weight of the feed after eating and the feeding duration while the after-eating image is acquired.
Optionally, before inputting the before-eating image and the after-eating image into the twin network, the method further comprises:
and carrying out data enhancement processing on the image, wherein the data enhancement processing comprises vertical turning, horizontal turning and vertical and horizontal turning.
Optionally, in the twin network, in the process of mapping the before-eating image and the after-eating image into the same vector space through the feature extraction network, the feature extraction network adopts the residual network ResNet101 structure, and the residual network uses skip connections.
Optionally, the process of extracting features through the residual network ResNet101 structure includes:
performing convolution in the first convolutional layer, then computing residuals through the four subsequent residual layers, wherein each residual layer comprises several residual blocks, each residual block comprises three convolutional layers, and the convolution kernel sizes of the three layers are 1x1, 3x3, and 1x1, respectively.
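A short sketch of why the 1x1 / 3x3 / 1x1 bottleneck layout described above is used: the two 1x1 layers squeeze and restore the channel width so the expensive 3x3 convolution runs at a narrow width. The channel sizes here (256 → 64 → 64 → 256) follow the standard ResNet design and are an assumption for illustration; the patent does not state them.

```python
# Parameter-count sketch for the 1x1 / 3x3 / 1x1 bottleneck residual block.
# Channel widths (256 -> 64 -> 64 -> 256) are the standard ResNet choice,
# an illustrative assumption rather than a value taken from the patent.

def conv_params(c_in, c_out, k):
    """Number of weights in a k x k convolution, ignoring bias/BatchNorm."""
    return c_in * c_out * k * k

# Bottleneck: 1x1 reduce, 3x3 transform at the narrow width, 1x1 expand.
bottleneck = (conv_params(256, 64, 1)
              + conv_params(64, 64, 3)
              + conv_params(64, 256, 1))

# A plain block with two 3x3 convolutions at full width, for comparison.
plain = 2 * conv_params(256, 256, 3)
```

The bottleneck needs about 70k weights against roughly 1.18M for the plain block, which is what makes a 101-layer network practical.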
In another aspect, the invention provides a feed intake monitoring system based on a twin network and depth data, comprising:
an acquisition module, used for acquiring depth data before and after each of several feeding bouts of the cows;
the database module is used for storing the depth data;
and the processing module is used for inputting the depth data acquired by the acquisition module into the twin network for processing to obtain the feed intake.
Optionally, the acquisition module includes a depth camera, an RFID sensor, and a weight sensor;
the depth camera is used for acquiring the before-eating images and after-eating images under different illumination conditions, wherein both images are depth images, and the different illumination conditions include weak light, strong light, indoor weak light, and no illumination;
the RFID sensor is used for collecting the cow's ID and feeding duration data after detecting the ear tag;
the weight sensor is used for respectively collecting the weight of the feed before and after eating.
Optionally, the processing module includes a feature extraction module and a feed intake calculation module;
the feature extraction module is used for performing two-branch feature extraction on the depth data to obtain two feature branches;
and the feed intake calculation module is used for subtracting the two feature branches and obtaining the feed intake through a fully connected calculation.
Optionally, the feature extraction module adopts a two-branch residual network ResNet101 structure with skip connections; the ResNet101 structure includes one convolutional layer and four residual layers, each residual layer is composed of several residual blocks, each residual block includes three convolutional layers, and the convolution kernel sizes are 1x1, 3x3, and 1x1, respectively.
The invention has the technical effects that:
the invention provides a method for predicting the feed intake of a cow based on depth data and a twin network, which comprises the steps of mapping two feed pile depth images before and after the cow eats to the same vector space through a feature extraction network shared by two weights, subtracting the two feed pile depth images, and sending the obtained feature vectors to a feed intake calculation layer for calculation to realize the prediction of the single feed intake of the cow. According to the method, the single-time feed intake prediction of the dairy cows can be realized without preprocessing the feed pile images before and after ingestion, the influence of illumination is small, the prediction performance difference is small under different illumination conditions, and the method has higher stability and accuracy compared with the prior art. In addition, the method can be directly combined with other computer vision-based methods to realize the complete non-contact monitoring of the single feed intake of the individual cows.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 is a schematic structural diagram of a feed intake monitoring model in the first embodiment of the invention;
FIG. 2 is a flow chart of data collection according to a first embodiment of the present invention;
fig. 3 is a schematic diagram of a feature extraction network structure in the first embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
Example one
As shown in fig. 1, the present embodiment provides a cow feed intake monitoring method based on a twin network and depth data, including:
1) inputting the two images taken before and after eating into the twin network, where each is mapped into the same vector space by the feature extraction network;
2) tiling the two resulting multi-dimensional feature vectors and subtracting them to generate a new feature vector;
3) passing the new feature vector through a single fully connected layer to obtain the feed intake.
The data acquisition process comprises the following steps:
the invention mainly comprises a depth camera, a feeding trough, a weighing sensor, a data transmission control terminal, an RFID (radio frequency identification) sensor, a computer and a set of software system developed by C + +.
Image data are collected by an ORBBEC Astra Mini depth camera (working distance 0.4-2 m, field of view 58.4° horizontal x 45.5° vertical, operating temperature 10-40 °C, accuracy 3 mm per meter, MX400 depth processing chip) mounted above the feed trough at a height of 97 cm from the ground; images are acquired at 480 x 640 pixels.
The experimental data were obtained through simulation experiments in the precision feeding technology and equipment laboratory of Northeast Agricultural University: the feeding scene of cows in a semi-open cowshed was simulated manually, feed-pile images were collected before and after eating under different illumination conditions (weak light, strong light, indoor weak light, and no illumination), and a total mixed ration (TMR) was used as the feed. As shown in fig. 2, the data acquisition process is as follows: after the ear tag is detected, the system is activated and the depth camera photographs the feed in the trough (simulated before eating); meanwhile, the RFID sensor begins to collect and record the cow's ID and feeding duration, and the weighing sensor records the weight of the feed pile before eating and saves it to the local database. After the simulated feeding, the ear tag leaves the feeding area, the weighing sensor collects the weight of the feed pile after eating, and the current feed intake is calculated and stored in the database; at the same time the depth camera again photographs the RGB and depth images of the trough (simulated after eating) and transmits them to the computer for storage.
Considering the influence of illumination on the depth camera's image acquisition, feed-pile images were collected at five illumination levels ranging from strong light to no light. In total, 483 groups of feed-pile depth and RGB images covering 0-31 kg were collected; the depth data are used by the model established in the invention, and the RGB data are used in a comparison model.
Training a model requires a large amount of sample data, and data diversity also determines the model's precision and generalization ability, so the experimental data were augmented. The data enhancement comprises vertical flipping, horizontal flipping, and combined vertical-and-horizontal flipping, yielding 1932 groups of depth and color images.
Establishing a sample set according to the acquired data:
in order to determine the food intake of the dairy cows through images, the difference between the images of the feed pile before and after eating must be determined, and the twin network needs to take two depth images as input, so that the depth images obtained through experiments need to be combined two by two to generate a new data set. In the experiment, sample data are combined pairwise to form a combined data 24150 group with the weight difference value between [0,8200] g, the combined data is used as input data, the weight difference value between two feed pile images in the combination (namely the feed intake) is used as a label to establish a data set, and the data set is established according to the weight difference value between 8: 2 into a training set and a test set.
The twin-network-based feed intake monitoring model provided by the invention mainly comprises a feature extraction module and a feed intake calculation module. The two branches adopt the same network structure and share weights, which reduces the number of model parameters and also guarantees the consistency of the mapping space. The specific calculation process is as follows:
1) inputting the two images taken before and after eating into the twin network, where each is mapped into the same vector space by the feature extraction network;
2) tiling the two resulting multi-dimensional feature vectors and subtracting them to generate a new feature vector;
3) passing the new feature vector through a single fully connected layer to obtain the feed intake.
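The three steps above can be sketched end to end in pure Python. This is a toy stand-in for the patented model: a tiny shared linear map replaces the ResNet101 feature extractor, and the image sizes, weights, and function names are illustrative assumptions.

```python
# Minimal sketch of the twin (Siamese) forward pass: two branches share one
# set of feature-extraction weights, the feature vectors are subtracted, and
# a single fully connected layer maps the difference to a feed intake value.
# All sizes and weights are toy assumptions, not the patented implementation.

def extract_features(image, shared_weights):
    """Map a flattened image to a feature vector using SHARED weights."""
    return [sum(w * p for w, p in zip(row, image)) for row in shared_weights]

def predict_intake(before_img, after_img, shared_weights, fc_weights, fc_bias=0.0):
    f_before = extract_features(before_img, shared_weights)  # branch 1
    f_after = extract_features(after_img, shared_weights)    # branch 2, same weights
    diff = [b - a for b, a in zip(f_before, f_after)]        # tiled vectors subtracted
    return sum(w * d for w, d in zip(fc_weights, diff)) + fc_bias  # one FC layer

# Illustrative 3-pixel "images" mapped into a 2-dimensional feature space.
W = [[0.5, 0.2, 0.1], [0.3, 0.4, 0.6]]
fc = [1.0, 1.0]
before = [9.0, 8.0, 7.0]   # larger pile before eating
after = [4.0, 3.0, 2.0]    # smaller pile after eating
intake = predict_intake(before, after, W, fc)
```

Because both branches share the same weights, identical before- and after-eating images produce a zero difference vector, so the prediction collapses to the bias, which is exactly the behavior the weight sharing is meant to guarantee.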
The feature extraction network uses the ResNet101 structure, and the residual network uses skip connections. That is, when the input of a block is x, the function mapping to be fitted (i.e., the output) is H(x), and F(x) is the residual mapping learned by the block, then when x and F(x) have the same dimension, the original function mapping H(x) is computed as:
H(x)=F(x)+x (1)
When x and F(x) differ in dimension, the original function mapping H(x) is computed as:
H(x)=F(x)+W(x) (2)
where W(x) denotes a convolution of x whose effect is to adjust the dimension of x.
The introduction of residual connections improves the correlation between input and output, which ensures good convergence in deep networks and effectively avoids vanishing or exploding gradients. The internal structure of the network is shown in fig. 3: it mainly comprises a single convolutional layer and 4 residual layers, each residual layer comprises several residual blocks, each residual block comprises 3 convolutional layers, and the convolution kernel sizes are 1x1, 3x3, and 1x1, respectively.
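Formulas (1) and (2) can be demonstrated directly. In this sketch plain Python lists stand in for feature maps, and `F` and `W` are illustrative stand-in functions, not the patented layers.

```python
# Sketch of the skip connection in formulas (1) and (2): the block output is
# the learned residual F(x) plus the input, with a projection W(x) when the
# dimensions differ. Lists stand in for feature maps; F and W are toy
# stand-ins for the real convolutional layers.

def residual_block(x, F, W=None):
    fx = F(x)
    if len(fx) == len(x):          # formula (1): H(x) = F(x) + x
        shortcut = x
    else:                          # formula (2): H(x) = F(x) + W(x)
        shortcut = W(x)            # W adjusts the dimension of x
    return [a + b for a, b in zip(fx, shortcut)]

# Same dimension: identity shortcut.
same_dim = residual_block([1.0, 2.0], lambda v: [0.5 * a for a in v])

# Dimension change (2 -> 1): a projection shortcut is needed.
proj = residual_block([1.0, 2.0],
                      lambda v: [v[0] + v[1]],   # F changes the dimension
                      W=lambda v: [sum(v)])      # projection shortcut
```

The identity shortcut adds no parameters; the projection path is only used when F changes the feature dimension, mirroring how ResNet applies a 1x1 convolution between stages.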
The loss function is the mean squared error (MSE) loss, and the loss value L_loss is computed as shown in formula (3):
L_loss = (1/n) * Σ_{i=1}^{n} (ŷ_i - y_i)²  (3)
where ŷ_i is the predicted value, y_i is the true value, and n is the number of training set samples.
Example two
In training the model, the invention uses stochastic gradient descent with a batch size of 32 and a weight decay of 0.1; the model is saved every time its loss on the validation set decreases, and the model with the lowest loss is finally selected. Training terminates after 500 epochs. Because a twin network does not converge easily during training, the feature extraction network is trained first; its weights are then frozen and the feed intake calculation layer is trained.
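The training schedule above — SGD-style updates with weight decay, saving a checkpoint whenever the loss improves, and stopping after 500 epochs — can be sketched with a toy one-parameter model. Everything below is an illustrative stand-in (the data, the single-weight "model", and the function name), not the patent's training code; the loss follows the MSE of formula (3).

```python
# Sketch of the training loop: gradient descent with weight decay, checkpoint
# on improvement, fixed 500-epoch budget. The one-parameter linear "model"
# is a toy stand-in for the twin network.

def mse_loss(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def train(epochs=500, lr=0.05, weight_decay=0.1):
    xs = [1.0, 2.0, 3.0]
    ys = [2.0, 4.0, 6.0]             # true scale factor is 2.0
    w = 0.0                           # single trainable parameter
    best = (float("inf"), w)          # (best validation loss, checkpoint)
    for _ in range(epochs):
        preds = [w * x for x in xs]
        grad = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
        w -= lr * (grad + weight_decay * w)   # SGD step with weight decay
        loss = mse_loss([w * x for x in xs], ys)
        if loss < best[0]:
            best = (loss, w)          # "save the model" on improvement
    return best

loss, w = train()
```

Note how the weight decay pulls the converged weight slightly below the ideal value of 2.0, which is the expected regularization effect.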
The model is evaluated using the mean absolute error (MAE) and the root mean square error (RMSE), defined as follows:
MAE = (1/n) * Σ_{i=1}^{n} |ŷ_i - y_i|  (4)
RMSE = sqrt( (1/n) * Σ_{i=1}^{n} (ŷ_i - y_i)² )  (5)
where y_i is the measured feed intake, ŷ_i is the predicted value, and n is the number of test samples.
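The evaluation metrics of formulas (4) and (5) translate directly into code. The sample intake values below are illustrative, not figures from the patent's experiments.

```python
# MAE and RMSE exactly as defined in formulas (4) and (5).
import math

def mae(measured, predicted):
    """Mean absolute error between measured and predicted intake."""
    return sum(abs(y - p) for y, p in zip(measured, predicted)) / len(measured)

def rmse(measured, predicted):
    """Root mean square error between measured and predicted intake."""
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(measured, predicted)) / len(measured))

measured = [1000.0, 2000.0, 3000.0]    # feed intake in grams (illustrative)
predicted = [900.0, 2100.0, 3050.0]
m = mae(measured, predicted)
r = rmse(measured, predicted)
```

RMSE is always at least as large as MAE and penalizes large individual errors more heavily, which is why the text reports both.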
Learning rates of 0.05, 0.1, and 0.5 were tried. In all cases the loss decreases more slowly as the number of iterations grows; with a high learning rate the loss drops quickly but does not converge easily, whereas training with a low learning rate reaches a lower loss. The algorithm's learning rate is therefore set to 0.05.
To determine a suitable network depth, the prediction performance of the twin network model was compared across 4 feature extraction network depths, as shown in Table 1. As the number of layers increases, the model's prediction error decreases and its stability improves; when the depth is increased from 50 to 101 layers, the MAE and RMSE decrease by 0.14% and 0.1%, respectively, and deepening the network further improves performance very little, so the number of feature extraction network layers is set to 101.
TABLE 1
[Table 1: prediction performance under the four feature extraction network depths; reproduced as an image in the original publication.]
Here the number of network layers refers to the number of convolutional plus fully connected layers, not the total number of layers in the network.
Three ways of obtaining the difference between the extracted feature vectors were tried: the first directly concatenates the two 512-dimensional vectors into one 1024-dimensional vector, the second subtracts the two vectors, and the third divides them. The comparison results are shown in Table 2; subtraction works best.
TABLE 2
[Table 2: comparison of the three feature-difference modes; reproduced as an image in the original publication.]
To determine the influence of illumination on model performance, the model was tested separately under the different illumination conditions (weak light, strong light, indoor weak light, and no illumination); the results are shown in Table 3. The influence of illumination on the model is small and the prediction performance differs little across conditions, with the smallest RMSE under no illumination and the smallest MAE under strong light.
TABLE 3
[Table 3: prediction performance under the different illumination conditions; reproduced as an image in the original publication.]
To compare the prediction performance of the twin-network-based feed intake model with other models, two further feed intake calculation models were trained, all with ResNet101 as the feature extraction structure. The first computes the feed weight separately from the single before-eating and after-eating feed-pile images and subtracts the two weights to obtain the current intake (hereinafter the weight-subtraction-based prediction model, WSNet). The second subtracts the before- and after-eating feed-pile images on the depth channel (retaining negative values) to obtain a new tensor and trains a residual network with this tensor as input and the intake as output (hereinafter the image-subtraction-based prediction model, ISNet; notably, its prediction precision is higher than that of a model trained on RGB-D four-channel subtraction data, so subtracting the RGB data contributes nothing positive to the calculation). A comparative analysis of the three models' prediction performance is shown in Table 4.
TABLE 4
[Table 4: comparative prediction performance of WSNet, ISNet, and the twin network model; reproduced as an image in the original publication.]
Comparing MAE and RMSE shows that the weight-subtraction model (WSNet) has the largest error and the worst stability, the image-subtraction model (ISNet) comes next, and the twin network model performs best. Over the 4860 groups of test data, the maximum error of WSNet reaches 1359.23 g, whereas the maximum error of the twin network model is 507.46 g; compared with WSNet and ISNet, the twin network's MAE decreases by 49.4% and 7.5%, and its RMSE by 51.9% and 4.2%, respectively. The twin network model therefore has higher precision and stronger stability than the other two models and can better calculate the feed intake of dairy cows: after feature extraction, the input images are stripped of some of the errors and redundant information in the raw data, and the effective information is extracted before the difference is taken, so the information used in the intake calculation is more accurate and the model's prediction performance improves.
The invention designs a data acquisition method and system and, taking the 24150 collected pairs of before- and after-eating feed-pile depth images as the data source, optimally trains the constructed twin-network-based feed intake monitoring model: the learning rate is set to 0.05, the feature extraction network has 101 layers, and the best performance is achieved with subtraction as the feature-vector difference mode. Over the range 0-8200 g, the mean absolute error (MAE) of the cow feed intake prediction is 100.6 g and the root mean square error (RMSE) is 128.02 g, which is superior to the prior art and demonstrates the effectiveness of the model in extracting features from the before- and after-eating images, computing differences in the high-dimensional image space, and quantifying the feed intake.
Model prediction performance differs little across illumination conditions, with the smallest RMSE under no illumination and the smallest MAE under strong light. Evidently, using depth images as the data source effectively avoids the influence of illumination on simulation precision and provides higher stability and accuracy than the prior art, while also verifying the feasibility of monitoring feed intake with a depth camera in a semi-open cattle farm.
The constructed twin-network and depth-data monitoring model can accurately reflect changes in the feed intake of dairy cows, and combining it with other computer vision methods enables fully non-contact monitoring of individual cows' feed intake. In future applications, fusing the depth data with the RGB data is expected to provide more effective information, enabling further research on identifying and classifying the cows' feeding behavior, activity areas, and other information.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A food intake monitoring method based on a twin network and depth data is characterized by comprising the following steps:
acquiring before-eating images and after-eating images for several feeding bouts of the cows;
inputting the before-eating image and the after-eating image into a twin network, in which a shared feature extraction network maps each image into the same vector space to obtain a before-eating multi-dimensional feature vector and an after-eating multi-dimensional feature vector, and tiling the two multi-dimensional feature vectors;
subtracting the tiled after-eating multi-dimensional feature vector from the tiled before-eating multi-dimensional feature vector to obtain a new feature vector;
and passing the new feature vector through a single fully connected layer to obtain the feed intake.
2. The method of claim 1, wherein the process of acquiring the before-eating images and after-eating images for several feeding bouts of the cows comprises:
the method comprises the steps of acquiring images before food intake and images after food intake under different illumination conditions through a depth camera, wherein the images before food intake and the images after food intake are depth images, and the different illumination conditions comprise weak light, strong light, indoor weak light and no illumination.
3. The method of claim 1, wherein the process of acquiring the before-eating images and after-eating images for several feeding bouts of the cows further comprises:
when feeding starts and the cow's ear tag is detected by the RFID sensor, collecting and recording the cow's ID and the weight of the feed before eating;
and after feeding ends, synchronously collecting the weight of the feed after eating and the feeding duration while the after-eating image is acquired.
4. The method of claim 3, further comprising, before inputting the before-eating image and the after-eating image into the twin network:
performing data enhancement on the images, the data enhancement comprising vertical flipping, horizontal flipping, and combined vertical and horizontal flipping.
5. The method according to claim 1, wherein, in the twin network, when the before-eating image and the after-eating image are respectively mapped to the same vector space through a feature extraction network, the feature extraction network adopts the residual network ResNet101 structure, and the residual network uses skip connections.
6. The method of claim 5, wherein performing feature extraction through the residual network ResNet101 structure comprises:
performing convolution in the first convolutional layer, then computing residuals through the four subsequent residual layers; each residual layer comprises a plurality of residual blocks, each residual block comprises three convolutional layers, and the convolution kernel sizes of the three convolutional layers are 1x1, 3x3 and 1x1, respectively.
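As a consistency check on the structure described in claim 6, the standard ResNet101 block counts (3, 4, 23 and 3 bottleneck blocks in the four residual layers — an assumption here, since the claim only says "a plurality of residual blocks") account for the network's 101 weighted layers:

```python
# Standard ResNet101 bottleneck-block counts per residual layer (assumed;
# the claim itself only recites "a plurality of residual blocks").
blocks_per_layer = [3, 4, 23, 3]
convs_per_block = 3          # the 1x1, 3x3, 1x1 bottleneck convolutions

weighted_layers = (
    1                                          # first convolutional layer
    + sum(blocks_per_layer) * convs_per_block  # four residual layers
    + 1                                        # final fully connected layer
)
print(weighted_layers)  # 101
```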
7. A food intake monitoring system based on a twin network and depth data, characterized by comprising:
an acquisition module, used for acquiring depth data of the cows over a plurality of feedings;
a database module, used for storing the depth data;
and a processing module, used for inputting the depth data acquired by the acquisition module into the twin network for processing to obtain the feed intake.
8. The system of claim 7, wherein the acquisition module comprises a depth camera, an RFID sensor and a weight sensor;
the depth camera is used for acquiring before-eating images and after-eating images under different illumination conditions, wherein both image types are depth images, and the different illumination conditions comprise weak light, strong light, indoor weak light and no illumination;
the RFID sensor is used for acquiring the cow's ID and eating duration data after sensing the ear tag;
and the weight sensor is used for collecting the weight of the feed before eating and after eating, respectively.
9. The system of claim 7, wherein the processing module comprises a feature extraction module and a feed intake calculation module;
the feature extraction module is used for performing two-branch feature extraction on the depth data to obtain two feature vectors;
and the feed intake calculation module is used for taking the difference between the two feature vectors and obtaining the feed intake through a fully connected calculation.
10. The system according to claim 9, wherein the feature extraction module adopts a two-branch residual network ResNet101 structure, the residual network uses skip connections, the residual network ResNet101 structure comprises one convolutional layer and four residual layers, each residual layer is composed of a number of residual blocks, each residual block comprises 3 convolutional layers, and the convolution kernel sizes are 1x1, 3x3 and 1x1, respectively.
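The skip connection recited in claims 5 and 10 can be illustrated on a single pixel's channel vector. In this view the bottleneck's 1x1 convolutions act as channel-mixing matrices; the middle 3x3 spatial convolution is collapsed to another channel mix for brevity (a simplification, not the actual layer), and the channel sizes and weights are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def bottleneck_1px(x, W_reduce, W_mid, W_expand):
    """Residual bottleneck block viewed at one pixel.
    The 1x1 convolutions are channel-mixing matrices; the 3x3
    convolution is simplified to a channel mix (single-pixel view)."""
    h = np.maximum(W_reduce @ x, 0.0)   # 1x1 conv: 256 -> 64 channels
    h = np.maximum(W_mid @ h, 0.0)      # stands in for the 3x3 conv
    h = W_expand @ h                    # 1x1 conv: 64 -> 256 channels
    return np.maximum(h + x, 0.0)       # skip connection: add the input back

C, Cmid = 256, 64                       # assumed channel widths
x = rng.standard_normal(C)
W_reduce = rng.standard_normal((Cmid, C)) * 0.05
W_mid = rng.standard_normal((Cmid, Cmid)) * 0.05
W_expand = rng.standard_normal((C, Cmid)) * 0.05

y = bottleneck_1px(x, W_reduce, W_mid, W_expand)
print(y.shape)  # (256,)
```

If the residual branch outputs zero, the skip connection passes the input straight through — this identity path is what lets a 101-layer network train without the signal vanishing.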
CN202111622223.8A 2021-12-28 2021-12-28 Food intake monitoring method and system based on twin network and depth data Pending CN114358163A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111622223.8A CN114358163A (en) 2021-12-28 2021-12-28 Food intake monitoring method and system based on twin network and depth data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111622223.8A CN114358163A (en) 2021-12-28 2021-12-28 Food intake monitoring method and system based on twin network and depth data

Publications (1)

Publication Number Publication Date
CN114358163A true CN114358163A (en) 2022-04-15

Family

ID=81103329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111622223.8A Pending CN114358163A (en) 2021-12-28 2021-12-28 Food intake monitoring method and system based on twin network and depth data

Country Status (1)

Country Link
CN (1) CN114358163A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033446A (en) * 2019-04-10 2019-07-19 西安电子科技大学 Enhancing image quality evaluating method based on twin network
CN110415209A (en) * 2019-06-12 2019-11-05 东北农业大学 A kind of cow feeding quantity monitoring method based on the estimation of light field space or depth perception
CN110839557A (en) * 2019-10-16 2020-02-28 北京海益同展信息科技有限公司 Sow oestrus monitoring method, device and system, electronic equipment and storage medium
CN110991222A (en) * 2019-10-16 2020-04-10 北京海益同展信息科技有限公司 Object state monitoring and sow oestrus monitoring method, device and system
CN111264405A (en) * 2020-02-19 2020-06-12 北京海益同展信息科技有限公司 Feeding method, system, device, equipment and computer readable storage medium
CN112931289A (en) * 2021-03-10 2021-06-11 中国农业大学 Pig feeding monitoring method and device
CN113516201A (en) * 2021-08-09 2021-10-19 中国农业大学 Estimation method of residual material amount in meat rabbit feed box based on deep neural network
CN113706482A (en) * 2021-08-16 2021-11-26 武汉大学 High-resolution remote sensing image change detection method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RAN BEZEN et al.: "Computer vision system for measuring individual cow feed intake using RGB-D camera and deep learning algorithms", Computers and Electronics in Agriculture *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116912767A (en) * 2023-07-03 2023-10-20 东北农业大学 Milk cow individual feed intake monitoring method based on machine vision and point cloud data
CN116912767B (en) * 2023-07-03 2023-12-22 东北农业大学 Milk cow individual feed intake monitoring method based on machine vision and point cloud data

Similar Documents

Publication Publication Date Title
Wu et al. Using a CNN-LSTM for basic behaviors detection of a single dairy cow in a complex environment
CN110426112B (en) Live pig weight measuring method and device
KR102062609B1 (en) A portable weighting system for livestock using 3D images
Jingqiu et al. Cow behavior recognition based on image analysis and activities
US11950576B2 (en) Multi-factorial biomass estimation
Zhang et al. Algorithm of sheep body dimension measurement and its applications based on image analysis
US11282199B2 (en) Methods and systems for identifying internal conditions in juvenile fish through non-invasive means
CN109784200B (en) Binocular vision-based cow behavior image acquisition and body condition intelligent monitoring system
TWI718572B (en) A computer-stereo-vision-based automatic measurement system and its approaches for aquatic creatures
CN112232978B (en) Aquatic product length and weight detection method, terminal equipment and storage medium
CN107077626A (en) Animal non-intrusion type multi-modal biological characteristic identification system
CN116778430B (en) Disease monitoring system and method for beef cattle cultivation
CN114898405B (en) Portable broiler chicken anomaly monitoring system based on edge calculation
CN115512215A (en) Underwater biological monitoring method and device and storage medium
Yu et al. An intelligent measurement scheme for basic characters of fish in smart aquaculture
CN114358163A (en) Food intake monitoring method and system based on twin network and depth data
Junior et al. Fingerlings mass estimation: A comparison between deep and shallow learning algorithms
Deng et al. Detection of behaviour and posture of sheep based on YOLOv3
Rosales et al. Oreochromis niloticus growth performance analysis using pixel transformation and pattern recognition
EP4183491A1 (en) Sorting animals based on non-invasive determination of animal characteristics
Wang et al. Vision-based measuring method for individual cow feed intake using depth images and a Siamese network
CN114022831A (en) Binocular vision-based livestock body condition monitoring method and system
CN113989745A (en) Non-contact monitoring method for feeding condition of ruminants
Yuan et al. Stress-free detection technologies for pig growth based on welfare farming: A review
CN116912767B (en) Milk cow individual feed intake monitoring method based on machine vision and point cloud data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination