CN114358163A — Method and system for monitoring feed intake based on a Siamese network and depth data
- Publication number: CN114358163A (application CN202111622223.8A)
- Authority: CN (China)
- Legal status: Pending
Description
Technical Field
The invention belongs to the field of agriculture and livestock breeding systems, and in particular relates to a method and system for monitoring feed intake based on a Siamese network and depth data.
Background Art
Feed intake is one of the main factors affecting the growth, development, and lactation performance of dairy cows, and it is an important indicator of the health status of individual cows. Feed intake is also an important basis for evaluating feed utilization and feeding efficiency and for adjusting farm management decisions. Feed intake monitoring is therefore of great significance for the precision feeding of dairy cows.
At present, the main methods for monitoring the feed intake of individual dairy cows are combining RFID with feeding troughs equipped with load cells, estimating intake from wearable devices, and near-infrared spectroscopy. The RFID-plus-load-cell approach offers high precision and accuracy, but it is costly and requires frequent cleaning and maintenance, so it is rarely used on farms. Wearable-device approaches collect feeding-behavior parameters through smart collars, leg bands, and halters to build predictive models of feed intake; this approach easily induces stress responses in cows, and since it only estimates intake, its accuracy needs improvement. Near-infrared spectroscopy analyzes cow feces with instruments such as near-infrared spectrometers to determine feed composition, digestibility, and so on, and then estimates feed intake through related calculations; although this approach is stress-free and relatively low in cost, cow feces are difficult to collect, hard to attribute to individual cows, and the resulting estimates have low accuracy. More research is therefore needed to obtain a feed intake monitoring method that is lower in cost, higher in accuracy, and more practical.
In recent years, with the continuous development of optical imaging technology, computer vision has also been used to monitor the feed intake of dairy cows; this approach requires no expensive equipment and avoids the stress responses caused by wearables. For example, one study measured feed volume with a 3D camera and derived the relationship between volume and feed weight through linear and quadratic least-squares regression with t-tests; the system's error within 22.68 kg was 0.5 kg. That work demonstrated the feasibility of computer vision, but its simulation accuracy needs improvement. Another study used multiple high-resolution RGB cameras to photograph a monitored feed pile from various angles and reconstruct it in 3D, predicting feed intake from changes in the pile's shape and volume with high simulation accuracy: under laboratory conditions, the error in computed mass was 0.483 kg for feed piles under 7 kg, and the estimation error under cowshed conditions was also below 0.5 kg. The main limitation of this method is that during real feeding, the feed pile rarely stays within the marked region, and the markers used to delimit the feed area are easily contaminated. A third approach subtracted the RGB-D images of the feed pile before and after feeding on each of the four channels, preserving negative values, and used the resulting tensor to train a convolutional neural network to monitor individual feed intake; the reported mean absolute error was 0.127 kg and the mean squared error 0.034 kg². However, because of illumination and other factors, the RGB values of even the same object and the same color differ between two photos taken at different times, so subtraction on the three RGB channels rarely yields meaningful results; moreover, the two color images may lose useful information in the subtraction, and large amounts of data must be preprocessed before training and using the model. These studies demonstrate the potential of computer vision for measuring feed intake, feed volume, and feed weight, but the existing techniques still have problems: accuracy needs improvement, data processing is cumbersome, and stable operation in complex environments is difficult.
Summary of the Invention
The purpose of the present invention is to provide a method and system for monitoring feed intake based on a Siamese network and depth data, in order to solve the problems of high cost, insufficient accuracy, and cumbersome data processing in traditional feed intake monitoring methods.
In one aspect, to achieve the above purpose, the present invention provides a method for monitoring feed intake based on a Siamese network and depth data, comprising:
collecting pre-feeding images and post-feeding images of several feedings of a dairy cow;
inputting the pre-feeding image and the post-feeding image into a Siamese network, mapping each through a feature extraction network into the same vector space to obtain a multidimensional feature vector of the pre-feeding image and a multidimensional feature vector of the post-feeding image, and flattening the two multidimensional feature vectors;
taking the difference between the flattened pre-feeding feature vector and the flattened post-feeding feature vector to obtain a new feature vector;
passing the new feature vector through one fully connected layer to obtain the feed intake.
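For illustration only, a minimal PyTorch sketch of this pipeline is given below. It is a non-authoritative reading of the claim: the class name, the single-channel depth-input adaptation, and the 512-dimensional feature size (taken from the embodiment described later) are assumptions, not details fixed by the claim itself.

```python
import torch.nn as nn
import torchvision.models as models

class SiameseIntakeNet(nn.Module):
    """One shared feature extractor maps both depth images into the same
    vector space; the difference of the flattened feature vectors passes
    through a single fully connected layer to give the intake."""
    def __init__(self, feat_dim=512):
        super().__init__()
        backbone = models.resnet101(weights=None)
        # adapt the first layer to single-channel depth images (assumption)
        backbone.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, feat_dim)
        self.extractor = backbone            # used for both branches: weights shared
        self.head = nn.Linear(feat_dim, 1)   # feed intake calculation layer

    def forward(self, img_before, img_after):
        f_before = self.extractor(img_before).flatten(1)
        f_after = self.extractor(img_after).flatten(1)
        return self.head(f_before - f_after)
```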
Optionally, collecting the pre-feeding and post-feeding images of several feedings comprises:
acquiring the pre-feeding image and the post-feeding image under different illumination conditions with a depth camera, where both are depth images and the illumination conditions include weak light, strong light, indoor light, weak indoor light, and no light.
Optionally, collecting the pre-feeding and post-feeding images of several feedings further comprises:
when feeding begins and the RFID sensor detects the cow's ear tag, starting to collect and record the cow ID and the pre-feeding feed weight;
after feeding ends, collecting the post-feeding feed weight simultaneously with the post-feeding image and obtaining the feeding duration.
Optionally, before the pre-feeding image and the post-feeding image are input into the Siamese network, the method further comprises:
performing data augmentation on the images, the data augmentation comprising vertical flipping, horizontal flipping, and combined vertical-horizontal flipping.
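As a minimal sketch of these three flips, assuming PyTorch tensors of shape (C, H, W) (the helper name is illustrative):

```python
import torch

def augment(depth_img: torch.Tensor) -> list:
    """Return the original image plus its vertical, horizontal, and
    combined vertical-horizontal flips."""
    return [
        depth_img,
        torch.flip(depth_img, dims=[-2]),      # vertical flip
        torch.flip(depth_img, dims=[-1]),      # horizontal flip
        torch.flip(depth_img, dims=[-2, -1]),  # both
    ]
```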
Optionally, in the Siamese network, in the process of mapping the pre-feeding image and the post-feeding image into the same vector space through the feature extraction network, the feature extraction network adopts the ResNet101 residual network structure, and the residual network uses skip connections.
Optionally, feature extraction through the ResNet101 structure comprises:
first performing convolution through a first convolutional layer, then computing residuals through four subsequent residual layers, where each residual layer comprises several residual blocks, each residual block comprises three convolutional layers, and the kernel sizes of the three convolutional layers are 1x1, 3x3, and 1x1, respectively.
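A hedged sketch of such a 1x1-3x3-1x1 residual block in PyTorch follows; the channel widths and the batch-norm/ReLU placement are conventional ResNet choices assumed here, not details recited in the claim:

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    """One residual block: three convolutions with kernel sizes 1x1, 3x3,
    1x1, plus a skip connection. When input and output dimensions differ,
    a 1x1 convolution (the W(x) of Eq. (2) below) projects the identity
    branch; otherwise the output is simply F(x) + x."""
    def __init__(self, in_ch, mid_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.proj = None
        if stride != 1 or in_ch != out_ch:  # dimensions differ: project x
            self.proj = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x if self.proj is None else self.proj(x)
        return self.relu(self.body(x) + identity)
```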
In another aspect, the present invention provides a feed intake monitoring system based on a Siamese network and depth data, comprising:
an acquisition module for acquiring depth data over several feedings of a dairy cow;
a database module for storing the depth data;
a processing module for inputting the depth data acquired by the acquisition module into a Siamese network for processing to obtain the feed intake.
Optionally, the acquisition module comprises a depth camera, an RFID sensor, and a weight sensor;
the depth camera is used to acquire pre-feeding and post-feeding images under different illumination conditions, both being depth images, where the illumination conditions include weak light, strong light, indoor light, weak indoor light, and no light;
the RFID sensor is used to collect the cow ID and feeding duration data once an ear tag is detected;
the weight sensor is used to collect the feed weight before and after feeding, respectively.
Optionally, the processing module comprises a feature extraction module and a feed intake calculation module;
the feature extraction module is used to perform two-branch feature extraction on the depth data to obtain two sets of features;
the feed intake calculation module is used to take the difference between the two sets of features and obtain the feed intake through a fully connected calculation.
Optionally, the feature extraction module adopts a two-branch ResNet101 residual network structure with skip connections; the ResNet101 structure comprises one convolutional layer and four residual layers, each residual layer is composed of several residual blocks, and each residual block comprises three convolutional layers with kernel sizes of 1x1, 3x3, and 1x1, respectively.
The technical effects of the present invention are as follows:
The present invention proposes a method for predicting the feed intake of dairy cows based on depth data and a Siamese network. The two depth images of the feed pile taken before and after a cow feeds are mapped into the same vector space by two weight-sharing feature extraction networks and then subtracted, and the resulting feature vector is fed into a feed intake calculation layer, thereby predicting the cow's intake for a single feeding. The method predicts single-feeding intake without preprocessing the before and after images of the feed pile, is little affected by illumination, and shows little performance variation under different lighting conditions, making it more stable and accurate than the prior art. In addition, it can be combined directly with other computer-vision-based methods to achieve completely contact-free monitoring of the single-feeding intake of individual cows.
Brief Description of the Drawings
The accompanying drawings, which form a part of this application, are provided for further understanding of the application; the illustrative embodiments and their descriptions explain the application and do not unduly limit it. In the drawings:
Figure 1 is a schematic structural diagram of the feed intake monitoring model in Embodiment 1 of the present invention;
Figure 2 is a flowchart of data acquisition in Embodiment 1 of the present invention;
Figure 3 is a schematic structural diagram of the feature extraction network in Embodiment 1 of the present invention.
Detailed Description of the Embodiments
It should be noted that, where no conflict arises, the embodiments of this application and the features within them may be combined with one another. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.
It should also be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
Embodiment 1
As shown in Figure 1, this embodiment provides a method for monitoring the feed intake of dairy cows based on a Siamese network and depth data processing, comprising:
1) inputting the pre-feeding and post-feeding images into the Siamese network and mapping each into the same vector space through the feature extraction network;
2) flattening the two resulting multidimensional feature vectors and taking their difference to generate a new feature vector;
3) computing the feed intake from the new feature vector through one fully connected layer.
Data acquisition process:
The system mainly consists of a depth camera, a feeding trough, load cells, a data transmission control terminal, an RFID sensor, a computer, and a software system developed in C++.
Image data were acquired with an ORBBEC Astra Mini depth camera (working distance 0.4-2 m, field of view 58.4° horizontal by 45.5° vertical, operating temperature 10-40 °C, accuracy 3 mm per meter, MX400 depth processing chip). The camera was mounted above the feed trough, 97 cm from the ground, and images were captured at 480*640 pixels.
The experimental data were obtained through simulation experiments at the Precision Feeding Technology and Equipment Laboratory of Northeast Agricultural University, artificially reproducing the feeding scene of dairy cows in a semi-open cowshed. Images of the feed pile before and after feeding were collected under different illumination conditions (weak light, strong light, indoor light, weak indoor light, no light), using a total mixed ration (TMR) as the feed. As shown in Figure 2, the acquisition flow is as follows: when the RFID sensor detects an ear tag, the system is activated and the depth camera captures an image of the feed in the trough (before the simulated feeding), while the RFID sensor begins collecting and recording the cow ID, feeding duration, and other data, and the weight sensor records the pre-feeding pile weight, which is stored in the local database. When the ear tag leaves the feeding area after the simulated feeding ends, the load cell records the post-feeding pile weight, the intake for this feeding is computed and stored in the database, and the depth camera again captures RGB and depth images of the trough (after the simulated feeding) and transmits them to the computer for storage.
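The following sketch mirrors this acquisition cycle for illustration only; read_rfid, capture_depth, read_weight, and db are hypothetical placeholders, not a real sensor or database API:

```python
import time

def acquisition_cycle(read_rfid, capture_depth, read_weight, db):
    """Illustrative event loop for the flow in Figure 2."""
    tag = read_rfid()                      # system activates when an ear tag is sensed
    if tag is None:
        return
    img_before = capture_depth()           # depth image of the trough before feeding
    w_before = read_weight()               # load cell: pre-feeding pile weight
    start = time.time()
    while read_rfid() == tag:              # the cow is still at the trough
        time.sleep(1)
    w_after = read_weight()                # post-feeding pile weight
    img_after = capture_depth()            # post-feeding depth image
    db.save(cow_id=tag,
            intake_g=w_before - w_after,   # intake for this feeding
            duration_s=time.time() - start,
            img_before=img_before, img_after=img_after)
```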
Considering the influence of illumination on depth-camera image acquisition, feed pile images were collected under five illumination levels ranging from strong light to no light. In total, 483 groups each of depth images and RGB images of feed piles of 0-31 kg were collected during the experiment. The model established by the present invention uses the depth data; the RGB data are used in the comparison models.
Training a model requires a large number of samples, and data diversity also determines the model's accuracy and generalization ability, so the experimental data were augmented. After augmentation by vertical flipping, horizontal flipping, and combined vertical-horizontal flipping, a total of 1932 groups of depth and color images were obtained.
Building the sample set from the collected data:
To determine a cow's intake from images, the difference between the pre-feeding and post-feeding images of the feed pile must be determined, and the Siamese network takes two depth images as input, so the depth images obtained in the experiment were combined pairwise to generate a new dataset. In this experiment the pairwise combination of the sample data produced 24150 combinations with weight differences in the range [0, 8200] g. Each combination serves as the input, and the weight difference between the two feed pile images in the combination (i.e., the intake for that feeding) serves as the label; the dataset was split 8:2 into training and test sets.
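A minimal sketch of this pairing step, assuming each sample is a (depth image, pile weight in grams) tuple (function and variable names are illustrative):

```python
import random
from itertools import combinations

def build_pair_dataset(samples, max_diff_g=8200, train_frac=0.8):
    """Every pair whose weight difference lies in [0, 8200] g becomes
    one example, ordered as (heavier/before, lighter/after), with the
    weight difference -- the intake -- as the label."""
    pairs = []
    for (img_a, w_a), (img_b, w_b) in combinations(samples, 2):
        (img_hi, w_hi), (img_lo, w_lo) = (
            ((img_a, w_a), (img_b, w_b)) if w_a >= w_b
            else ((img_b, w_b), (img_a, w_a)))
        if w_hi - w_lo <= max_diff_g:
            pairs.append((img_hi, img_lo, float(w_hi - w_lo)))
    random.shuffle(pairs)
    cut = int(train_frac * len(pairs))
    return pairs[:cut], pairs[cut:]        # 8:2 train/test split
```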
The Siamese-network-based feed intake monitoring model proposed by the present invention consists mainly of a feature extraction module and a feed intake calculation module. The two branches use the same network structure and share weights, which both reduces the number of model parameters and guarantees the consistency of the mapping space. The computation proceeds as follows:
1) the pre-feeding and post-feeding images are input into the Siamese network and mapped into the same vector space through the feature extraction network;
2) the two resulting multidimensional feature vectors are flattened and their difference is taken to generate a new feature vector;
3) the feed intake is computed from the new feature vector through one fully connected layer.
The feature extraction network uses the ResNet101 structure, and the residual network uses skip connections. That is, when the input to the neural network is x and the function mapping to be fitted (i.e., the output) is H(x), then when x and F(x) have the same dimensions, the original mapping H(x) is computed as:
H(x) = F(x) + x (1)
When x and F(x) have different dimensions, the original mapping H(x) is computed as:
H(x) = F(x) + W(x) (2)
where W(x) denotes a convolution operation whose role is to adjust the dimension of x.
The introduction of the residual network strengthens the correlation between input and output, ensuring good convergence in deep networks and effectively avoiding vanishing or exploding gradients. The internal structure of the network is shown in Figure 3: it consists of a single convolutional layer followed by four residual layers, each residual layer is composed of several residual blocks, and each residual block contains three convolutional layers with kernel sizes of 1x1, 3x3, and 1x1, respectively.
The MSE loss function is used; its loss value I_loss is computed as shown in Equation (3):
I_loss = (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)²  (3)
where ŷ_i is the predicted value, y_i is the actual value, and n is the number of samples in the training set.
Embodiment 2
During training, the present invention uses stochastic gradient descent with a batch size of 32 and a weight decay of 0.1. Whenever the loss on the validation set decreases, the model is saved, and the model with the lowest loss is finally selected. Training terminates after 500 epochs. Because Siamese networks do not converge easily during training, the present invention first trains the feature extraction network, then fixes its weights and trains the feed intake calculation layer.
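One plausible reading of this two-stage schedule is sketched below, reusing the SiameseIntakeNet sketch given earlier; the batch size of 32 is assumed to be set in the DataLoader, and all names are illustrative:

```python
import torch
import torch.nn as nn

def run_stage(model, params, train_loader, val_loader, epochs=500, lr=0.05):
    """One training stage: SGD with weight decay 0.1, MSE loss (Eq. (3)),
    checkpoint saved whenever the validation loss decreases."""
    criterion = nn.MSELoss()
    opt = torch.optim.SGD(params, lr=lr, weight_decay=0.1)
    best = float("inf")
    for _ in range(epochs):
        model.train()
        for before, after, intake in train_loader:
            opt.zero_grad()
            loss = criterion(model(before, after).squeeze(1), intake)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(criterion(model(b, a).squeeze(1), y).item()
                      for b, a, y in val_loader) / len(val_loader)
        if val < best:                     # save whenever validation loss drops
            best = val
            torch.save(model.state_dict(), "best.pt")

# Two-stage schedule: train the feature extractor first, then freeze its
# weights and train only the intake calculation (fully connected) layer.
# model = SiameseIntakeNet()
# run_stage(model, model.extractor.parameters(), train_loader, val_loader)
# for p in model.extractor.parameters():
#     p.requires_grad = False
# run_stage(model, model.head.parameters(), train_loader, val_loader)
```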
Model performance is evaluated with the mean absolute error (MAE) and the root mean square error (RMSE), computed as follows:
MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|  (4)
RMSE = √((1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²)  (5)
where y_i is the measured feed intake, ŷ_i is the predicted value, and n is the number of test samples.
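These two metrics can be computed directly, as in the following sketch (inputs assumed to be one-dimensional tensors):

```python
import torch

def mae_rmse(pred: torch.Tensor, target: torch.Tensor):
    """MAE and RMSE of Eqs. (4)-(5); pred and target hold predicted and
    measured intakes in grams."""
    err = pred - target
    return err.abs().mean().item(), err.pow(2).mean().sqrt().item()
```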
Learning rates of 0.05, 0.1, and 0.5 were tried. The rate at which the loss decreases slows as the iteration count grows; with a large learning rate the loss falls quickly but convergence is difficult, whereas training with a smaller learning rate drives the loss lower. The learning rate was therefore set to 0.05.
To determine a suitable network depth, the prediction performance of the Siamese model was compared for four depths of the feature extraction network, as shown in Table 1. As the number of layers increases, the prediction error keeps decreasing and the stability keeps improving; however, raising the depth from 50 to 101 layers reduced the MAE and RMSE by only 0.14% and 0.1%, respectively, showing that deepening the network further contributes little to performance. The depth of the feature extraction network was therefore set to 101 layers.
Table 1. Prediction performance of the Siamese model with feature extraction networks of different depths.
Here, the number of network layers refers to the number of convolutional and fully connected layers, not the total number of layers in the network.
Three ways of computing the difference between the extracted feature vectors were tried: the first directly concatenates the two 512-dimensional vectors into a 1024-dimensional vector, the second subtracts the two vectors, and the third divides one vector by the other. The comparison results are shown in Table 2; subtraction performs best.
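An illustrative sketch of the three combination strategies; the small epsilon guarding the division is an assumption added to keep the example well-defined:

```python
import torch

def combine(f1: torch.Tensor, f2: torch.Tensor, mode: str) -> torch.Tensor:
    """The three feature-combination strategies compared in Table 2."""
    if mode == "concat":         # fuse two 512-d vectors into one 1024-d vector
        return torch.cat([f1, f2], dim=-1)
    if mode == "subtract":       # the strategy found to work best
        return f1 - f2
    if mode == "divide":
        return f1 / (f2 + 1e-8)  # eps is an assumption to avoid division by zero
    raise ValueError(mode)
```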
Table 2. Comparison of the three feature-combination strategies.
To determine the influence of illumination on model performance, the model was tested under five conditions: weak light, strong light, indoor light, weak indoor light, and no light. The results are shown in Table 3: illumination has little effect on the model, prediction performance differs little across the five conditions, the RMSE is smallest with no light, and the MAE is smallest under strong light.
Table 3. Model prediction performance under the five illumination conditions.
To quantify the difference in prediction performance between the proposed Siamese-network intake prediction model and other models, two additional intake calculation models were trained with other methods, all using ResNet101 as the feature extraction structure. The first model computes the weight of the feed pile from each single image before and after feeding and subtracts the two to obtain the intake for that feeding (hereinafter the weight-subtraction intake prediction model, WSNet). The second method subtracts the pre- and post-feeding feed pile images on the depth channel, preserving negative values, to obtain a new tensor, which is used as the input variable with the intake as the output variable to train a residual network model that computes the intake (hereinafter the image-subtraction intake prediction model, ISNet). It is worth noting that the prediction accuracy of this second method is higher than that of a model trained on RGB-D four-channel subtraction data, confirming that subtracting the RGB data contributes nothing positive to the model's computation. A comparative analysis of the prediction performance of the three models is shown in Table 4.
Table 4. Comparative prediction performance of the three models.
Comparison of the MAE and RMSE shows that the weight-subtraction model has the largest error and the worst stability, the image-subtraction model comes second, and the Siamese network model is the best. Over the 4860 groups of test data, the maximum error of the weight-subtraction model reaches 1359.23 g, while the maximum error of the Siamese model is 507.46 g; the MAE of the Siamese network is 49.4% and 7.5% lower than that of the weight-subtraction ResNet model and the image-subtraction ResNet model, respectively, and its RMSE is 51.9% and 4.2% lower. The Siamese model is thus more accurate and more stable than the other two and better suited to computing the feed intake of dairy cows. This is because, in the Siamese model, feature extraction filters out some of the error and redundant information in the raw input images before the difference is taken over the extracted, informative features, making the information used in the intake calculation more accurate and improving prediction performance.
The present invention designs a data acquisition method and system. Using the 24150 collected groups of pre- and post-feeding feed pile depth images as the data source, the constructed Siamese-network intake monitoring model was trained and optimized. With the learning rate set to 0.05, the feature extraction network set to 101 layers, and subtraction used to compute the feature-vector difference, the model achieves its best performance: over the 0-8200 g range, the mean absolute error of the intake prediction is 100.6 g and the root mean square error is 128.02 g, which is superior to the prior art. This demonstrates the effectiveness of the monitoring model in extracting features from the pre- and post-feeding images, computing differences in the high-dimensional image space, and quantifying the intake.
Under different illumination conditions the model's prediction performance differs little; the RMSE is smallest with no light and the MAE smallest under strong light. Using depth images as the data source thus effectively avoids the influence of illumination on the model's accuracy, making the method more stable and accurate than the prior art, and it also verifies the feasibility of monitoring feed intake with a depth camera in a semi-open cattle farm.
The constructed Siamese-network and depth-data monitoring model reflects changes in cow feed intake fairly accurately; combined with other computer vision methods, it can achieve completely contact-free monitoring of individual cow intake. Future applications should consider fusing depth data with RGB data to provide more useful information and to further support the recognition and classification of cow feeding behavior, activity areas, and related information.
The above describes only preferred embodiments of the present application; the scope of protection of the application is not limited thereto. Any change or substitution readily conceivable to a person skilled in the art within the technical scope disclosed herein shall fall within the scope of protection of this application. The scope of protection of this application shall therefore be determined by the scope of the claims.
Claims (10)
Priority Application (1)
- CN202111622223.8A — priority and filing date 2021-12-28; published 2022-04-15 as CN114358163A; family ID 81103329; status: pending.
Patent Citations (8)
- CN110033446A (published 2019-07-19) — Enhanced image quality evaluation method based on a Siamese network
- CN110415209A (published 2019-11-05) — Method for monitoring feed intake of dairy cows based on light-field visual depth estimation
- CN110839557A (published 2020-02-28) — Sow oestrus monitoring method, device and system, electronic equipment and storage medium
- CN110991222A (published 2020-04-10) — Object state monitoring and sow oestrus monitoring method, device and system
- CN111264405A (published 2020-06-12) — Feeding method, system, device, equipment and computer-readable storage medium
- CN112931289A (published 2021-06-11) — Pig feeding monitoring method and device
- CN113516201A (published 2021-10-19) — Method for estimating the amount of feed remaining in a meat-rabbit feed box based on a deep neural network
- CN113706482A (published 2021-11-26) — High-resolution remote sensing image change detection method
Non-Patent Citations (1)
- Bezen, R., et al., "Computer vision system for measuring individual cow feed intake using RGB-D camera and deep learning algorithms," Computers and Electronics in Agriculture.
Cited By (4)
- CN116912767A (published 2023-10-20) and CN116912767B (granted 2023-12-22) — Method for monitoring individual feed intake of dairy cows based on machine vision and point cloud data
- CN119168425A (published 2024-12-20) and CN119168425B (granted 2025-01-28) — Method and system for predicting feed intake of periparturient dairy cows in pasture breeding scenarios
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination