CN113706512B - Live pig weight measurement method based on deep learning and depth camera - Google Patents

Live pig weight measurement method based on deep learning and depth camera

Info

Publication number
CN113706512B
CN113706512B
Authority
CN
China
Prior art keywords
point cloud
live
cloud data
data
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111014612.2A
Other languages
Chinese (zh)
Other versions
CN113706512A (en)
Inventor
王晓辰
郝云涛
武岩松
常虹飞
田茂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inner Mongolia University
Original Assignee
Inner Mongolia University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inner Mongolia University
Priority to CN202111014612.2A
Publication of CN113706512A
Application granted
Publication of CN113706512B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01GWEIGHING
    • G01G17/00Apparatus for or methods of weighing material of special form or property
    • G01G17/08Apparatus for or methods of weighing material of special form or property for weighing livestock
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P60/00Technologies relating to agriculture, livestock or agroalimentary industries
    • Y02P60/80Food processing, e.g. use of renewable energies or variable speed drives in handling, conveying or stacking
    • Y02P60/87Re-use of by-products of food processing for fodder production

Abstract

The invention discloses a live pig weight measurement method based on deep learning and a depth camera, which obtains a depth image of a live pig; converts the depth image into three-dimensional point cloud data; preprocesses the three-dimensional point cloud data to remove noise; feeds the denoised three-dimensional point cloud data into a PointNet++ point cloud convolutional neural network model for deep learning, removing the background three-dimensional point cloud data; projects the background-removed three-dimensional point cloud data into a three-dimensional coordinate system and solves for the positions of the corresponding extreme points to obtain the corresponding feature-point coordinates and thus the corresponding body size data; and, taking the live pig's body size data as independent variables and its weight as the dependent variable, obtains the weight of the live pig. The invention does not require large amounts of manpower and material to weigh live pigs one by one; instead, the weight is estimated accurately by collecting images of the live pigs and analyzing them, fulfilling the aim of batch weighing.

Description

Live pig weight measurement method based on deep learning and depth camera
Technical Field
The invention relates to the field of agriculture, in particular to a live pig weight measurement method based on deep learning and a depth camera.
Background
China is the world's largest pig-producing country, ranking first in both pig-raising scale and pork consumption. The weight of a live pig is an important index for evaluating its growth and development, and is also a characteristic basis for breeding evaluation of reserve sows. The weight index is an important basis for evaluating a sow's reproductive and feeding capacity: a sow of appropriate weight has a high farrowing rate and a high rate of healthy litters. Weight data also regulate how much the pigs are fed; in the course of raising and managing live pigs, feeding can be adjusted appropriately according to the weight data, so that the pigs' nutritional condition is known in time and decline in the sows' health is prevented.
However, weight measurement of live pigs has long been a cumbersome and troublesome task, and many small pig farms and household farms simply estimate weight visually or neglect it outright. In most current domestic practice, a breeder estimates the weight of live pigs visually from experience and assigns scores from those estimates; such scores tend to fluctuate greatly, so the existing visual assessment method carries a large error.
There are two more accurate approaches: body size estimation and direct measurement. Body size measurement and estimation mainly includes tape-measure estimation, the body measuring stick, and the PIC weigh tape. Manual measurement error is large, however, and the process consumes much manpower and is extremely inefficient; since most live pigs are large and hard to keep still, it can also threaten the safety of the measuring personnel. The direct-measurement approach, the platform scale (weighbridge) method, is more widely applied and more accurate than the former, but a platform scale consumes considerable material and money. The traditional measurement methods are clearly inefficient: weighing pigs one by one undoubtedly increases labor costs, wastes time and effort, and lengthens the weighing cycle, which is out of step with modern large-scale breeding with its huge herd sizes.
Disclosure of Invention
To solve these problems, the invention provides a live pig weight measurement method based on deep learning and a depth camera.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a live pig weight measurement method based on deep learning and a depth camera comprises the following steps:
obtaining a depth image of a live pig;
converting the depth image into three-dimensional point cloud data;
preprocessing the three-dimensional point cloud data to remove noise;
running a PointNet++ point cloud convolutional neural network model on the PyTorch deep learning framework, feeding the denoised three-dimensional point cloud data into the PointNet++ model for deep learning, and removing the background three-dimensional point cloud data;
projecting the three-dimensional point cloud data with the background removed to a three-dimensional coordinate system, solving the minimum point of the z value on each slice point cloud, merging the minimum point of each z value into a new point row, removing discrete points from the point row to obtain a corresponding two-dimensional fitting line, solving the position of a corresponding extreme point, namely obtaining the corresponding characteristic point coordinate, and obtaining corresponding volume rule data; y=a 0 +a 1 x 1 +a 2 x 2 +…+a n x n
Taking the body size data of the live pig as the independent variables and the weight of the live pig as the dependent variable, the weight of the live pig is obtained according to the following formula:
Y = α0 + α1·X1 + α2·X2 + … + αn·Xn + ε
where Y is the weight of the live pig, α0, α1, α2, …, αn are the coefficients corresponding to different types of pig, ε is the error term, and X1, X2, …, Xn are the body size data of the live pig.
Optionally, obtaining the depth image of the live pig includes: erecting a Kinect depth camera at the side of the feeder and capturing depth images of live pigs standing still while feeding, the distance range satisfying D ≥ L / (2·tan(a/2)),
where a is the viewing angle of the Kinect depth camera, L is the length of the pig's feeding area, and D is the distance from the Kinect depth camera to the pig.
Optionally, converting the depth image into three-dimensional point cloud data includes: converting the acquired depth image into three-dimensional point cloud data with the built-in point cloud acquisition function in the Kinect for Windows SDK 2.0 software development environment.
Optionally, preprocessing the three-dimensional point cloud data to remove noise includes: filtering and denoising the three-dimensional point cloud data set with a bilateral filtering method, whose expression is:
J0 = I, J(t+1) = f(Jt)
where J0 is the initial image, Jt is the result after t iterations, and f(Jt) is the filter.
Optionally, the feature points include the ground plane, the top of the pig's head, the pig's first caudal vertebra, the pig's scapula, and the pig's toe bone, where the equation of the ground plane is
ax + by + cz + d = 0
The corresponding body size data include body length X1, body height X2, oblique body length X3, and body width X4,
where body length X1 is the distance from the head top N(x1, y1, z1) to the first caudal vertebra M(x2, y2, z2), expressed as: X1 = |x1 − x2|;
body height X2 is the straight-line distance from the first caudal vertebra M(x2, y2, z2) to the ground plane ax + by + cz + d = 0, expressed as: X2 = |a·x2 + b·y2 + c·z2 + d| / √(a² + b² + c²);
oblique body length X3 is the distance from the scapula A(x3, y3, z3) to the pig's toe bone B(x4, y4, z4), expressed as: X3 = √((x3 − x4)² + (y3 − y4)² + (z3 − z4)²);
body width X4 is measured by the Kinect depth camera, expressed as: X4 = |zu − zd|.
Compared with the prior art, the invention offers the following technical advances:
according to the invention, the weight of the live pigs can be measured by sequentially passing the live pigs and erecting the shooting area of the depth camera in advance and obtaining the extraction parameters, and the weighing efficiency is greatly improved, the current and large-scale cultivation trend is complied with by adopting a emphasis mode similar to a pipelining operation
Compared with traditional methods of measuring live pig weight, measuring weight through deep learning is more convenient and safer. The method achieves non-contact measurement of live pig weight, reduces human intervention, largely preserves the pigs' natural growth requirements, and attains the goal of welfare breeding.
The invention focuses on a non-contact live pig weight estimation system: whole-body images of the pigs are collected and their weight is estimated by a deep learning algorithm, which is significant for promoting accurate, modern, and intelligent breeding. The invention aims to provide an accurate and efficient non-contact live pig weight detection system, solving the weight measurement problem through a deep learning algorithm and reducing unnecessary expenditure of manpower and material; it realizes informatized and standardized management of pig breeding, thereby helping farms enlarge their breeding scale and create higher returns.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and constitute a part of this specification; they illustrate the invention and, together with the embodiments, serve to explain it.
In the drawings:
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic structural view of the depth camera frame of the present invention.
Detailed Description
The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
As shown in fig. 1, the invention discloses a live pig weight measurement method based on deep learning and a depth camera, which comprises the following steps: s01: obtaining a depth image of a live pig;
as shown in fig. 2, obtaining a depth image of a live pig includes: erecting a Kinect depth camera on the side of the feeder, and capturing depth images of live pigs which are still during feeding, wherein the distance range is
The distance between the Kinect depth camera and the live pigs is adjustable by the formula, wherein a is the viewing angle of the Kinect depth camera, L is the length of a feeding area of the live pigs, D is the distance between the Kinect depth camera and the live pigs, and the Kinect depth camera is only 2.5-2.7 m away from the live pigs.
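The coverage relation above is a reconstruction from the variables defined in the text (the original formula image is not reproduced here), so it should be read as an assumption; a minimal numeric check, additionally assuming the Kinect v2's roughly 70° horizontal viewing angle and a 3.5 m feeding area:

```python
import math

def min_camera_distance(fov_deg: float, feeding_area_len_m: float) -> float:
    """Smallest distance D at which a camera with horizontal viewing
    angle `fov_deg` covers a feeding area of length L:
    D = L / (2 * tan(a / 2)) -- an assumed reconstruction of the
    patent's elided distance formula."""
    return feeding_area_len_m / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# ~70 degree horizontal viewing angle (Kinect v2) and an assumed 3.5 m
# feeding area give about 2.5 m, matching the 2.5-2.7 m range above.
print(round(min_camera_distance(70.0, 3.5), 2))  # -> 2.5
```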
The Kinect depth camera acquires depth information with TOF (Time of Flight) ranging: an infrared emitter actively emits continuous infrared light pulses, a depth sensor receives the pulses returned from an object, and the object's distance from the sensor is obtained from the round-trip flight time of the pulses. Feature identification of the measured object is therefore not limited by illumination, the requirement for external lighting is reduced, and smaller object details can be captured thanks to higher depth fidelity and a greatly improved noise floor.
Calibration of the Kinect depth camera is a necessary step for accurate measurement of the target object: it corrects lens distortion and yields the target's coordinates, in metric units, in the world coordinate system. During image acquisition the Kinect depth camera is controlled by an infrared trigger: the camera is installed beside the pigs' feeder, the system is activated each time the pigs feed collectively, and when a pig blocks the infrared device the system starts the Kinect depth camera to photograph it. The infrared trigger uses an E18-8MNK photoelectric sensor with a detection distance of 0-8 m, a rated current of 100 mA, and a rated voltage of 5 V.
The key difficulties of the live pig weight detection system lie in acquiring and processing the pig images, so the acquisition of the pigs' morphological information is particularly important, above all the erection and layout of the Kinect depth camera. To reduce the amount of computation and increase the weight-measurement speed, and because the body size data are approximately symmetric so the loss of precision is minimal, the dual Kinect camera scheme is abandoned and only a single Kinect depth camera is erected at the side.
The front end must reliably acquire and preprocess the live pig images: accurate, clear images with distinct features must be obtained in an unpredictable, rather chaotic farm environment. In this embodiment a Kinect depth camera is erected at the side of the feeder and captures images of live pigs standing still while feeding, preventing head turning from affecting acquisition; the data are then converted from the depth image into three-dimensional point cloud data.
In deep learning applications, because the early image acquisition task demands distinct features, the quantity and fineness of the data sets produced directly affect the training of the later model and thus greatly influence the accuracy of the pigs' body size measurements. A large number of depth images of live pigs are obtained, covering different body types from the same viewpoint (side view); the acquired depth image data are screened, converted into three-dimensional point cloud form, and stored to produce a live pig point cloud data set.
Image quality is the primary condition for ensuring accurate weight data. The image extraction environment is complex, and it is difficult to guarantee that the pigs remain absolutely still during extraction, which undermines the reliability and stability of image quality. Unwanted noise is also introduced while the images are transmitted and converted into the three-dimensional point cloud data set, affecting the acquisition quality of the depth images. The collected data are therefore preprocessed to ensure the accuracy of the point cloud data and improve its quality.
S02: converting the depth image into three-dimensional point cloud data;
the point cloud is divided into two kinds in composition characteristics, one is an ordered point cloud and the other is an unordered point cloud. The point cloud data restored by the depth image can be arranged according to three-dimensional data, so that the information of adjacent points can be easily found, invalid point cloud sequences in the point cloud data can be eliminated, and valid sequences in the point cloud data can be reserved. Compared with the depth image, the processing of the point cloud data is easier and faster, so that the depth image needs to be converted into the point cloud data to prepare a corresponding point cloud data set. And converting the acquired depth image into point cloud data by using a corresponding point cloud acquisition function of the depth image under the Kinectfor Windows SDK2.0 software development environment.
S03: preprocessing the three-dimensional point cloud data to remove noise;
Because of the unavoidable precision limits of the scanning device, slight shaking of the device during testing, and complex changes in the environment, the extracted point cloud data may contain many hash points and isolated points, and this noise must be filtered out while the data set is produced. Compared with other filtering methods, bilateral filtering is better at preserving the edge data of the point cloud; it is a compromise that combines the spatial proximity and the similarity of the point cloud data, considering both at once to achieve edge-preserving denoising. The bilateral filter (or another averaging-based edge-preserving filter) is applied iteratively, with the general expression J0 = I, J(t+1) = f(Jt), where J0 is the initial image, Jt is the result after t iterations, and f(Jt) is the filter. Repeatedly executing the bilateral filter does not remove small-scale structure; rather, it preserves the details of the data well while removing noise such as burrs on the pig's body. A sketch of this iteration follows.
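A minimal sketch of the iteration, assuming the filter f is applied to the depth image with OpenCV's bilateral filter before conversion to a point cloud; the iteration count and sigma values are illustrative choices, not values from the patent:

```python
import cv2
import numpy as np

def iterated_bilateral(depth_m: np.ndarray, iterations: int = 3,
                       d: int = 9, sigma_color: float = 0.05,
                       sigma_space: float = 5.0) -> np.ndarray:
    """J0 = I; J(t+1) = f(Jt), with f a bilateral filter. The range term
    (sigma_color) preserves depth discontinuities -- the pig's edges --
    while repeated application smooths burr-like noise."""
    j = depth_m.astype(np.float32)      # J0 = I
    for _ in range(iterations):         # J(t+1) = f(Jt)
        j = cv2.bilateralFilter(j, d, sigma_color, sigma_space)
    return j
```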
S04: running a PointNet++ point cloud convolutional neural network model on the PyTorch deep learning framework, feeding the denoised three-dimensional point cloud data into the PointNet++ model for deep learning, and removing the background three-dimensional point cloud data;
although noise reduction processing is performed, in the three-dimensional point cloud data acquisition, it is difficult to acquire the point cloud data of an individual live pig in the environment of a farm and neglect a complex background point cloud data set, so that in order to remove unnecessary background point cloud data and the like in a larger point cloud data amount, the point cloud data amount to be processed is reduced. Therefore, we need to perform segmentation processing on the point cloud data to obtain the point cloud data of live pigs separately.
The method runs the PointNet++ point cloud convolutional neural network model on the PyTorch deep learning framework and feeds the denoised three-dimensional point cloud data into the PointNet++ model for deep learning. Converting the three-dimensional data into depth images for image processing with a traditional convolutional neural network, or voxelizing it and applying a 3D convolutional neural network, generally loses part of the data or incurs excessive computational cost; processing the point cloud directly exploits the characteristics of the three-dimensional point cloud data to the greatest extent and obtains more feature information. The PointNet++ model extends the traditional two-dimensional convolutional neural network to three dimensions: it processes point cloud data directly, extending the traditional picture input format to point cloud input. Using the PointNet++ model to segment the three-dimensional point cloud directly and then process it gives more direct results and runs faster, while requiring a smaller data set and achieving higher precision; it also takes less time and energy than preprocessing depth images, saving cost and manpower.
The PyTorch deep learning framework is a combination of Caffe2 and Torch: the code bases of the two frameworks were reconstructed and unified, duplicate components removed, and upper-layer abstractions shared, yielding a unified framework that supports efficient graph-mode execution, mobile deployment, broad vendor integration, and more. Development is more flexible and code is easier to write. The PointNet++ model has high precision, processes point cloud data directly with simpler processing steps, and is more effective for weight collection and detection, making the measurement of subsequent data more accurate. A hedged sketch of the segmentation step follows.
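The sketch below assumes `model` is some pretrained PyTorch PointNet++ semantic-segmentation network that maps an N x 3 point cloud to per-point class logits; actual implementations differ in input layout and API, and the class indices (0 = background, 1 = pig) are assumptions for illustration:

```python
import torch

@torch.no_grad()
def remove_background(model: torch.nn.Module,
                      points: torch.Tensor) -> torch.Tensor:
    """points: (N, 3) float tensor. Returns only the points the
    network labels as pig (assumed class 1; class 0 = background)."""
    model.eval()
    logits = model(points.unsqueeze(0))        # (1, N, num_classes)
    labels = logits.argmax(dim=-1).squeeze(0)  # per-point class, (N,)
    return points[labels == 1]
```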
S05: Computing the body size data can be reduced to computing the distances between the corresponding feature points. The processed point cloud data are projected into a three-dimensional coordinate system; the minimum-z points on each slice of the point cloud are found first and merged into a new point row, and discrete points are then removed from the row to obtain a corresponding two-dimensional fitted line. To extract the feature points, only the x and z values of the point row need to be identified programmatically; after projecting in the corresponding direction and fitting a two-dimensional curve in the corresponding two-dimensional coordinate system, the positions of the corresponding extreme points are obtained through mathematical functions, giving the corresponding feature-point coordinates and thus the corresponding body size data (a sketch follows). Because the depth image captures the pig's body from one side, the length obtained from the data should be multiplied by 2 when calculating the pig's body width.
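A minimal sketch of the slicing and extreme-point search, under stated assumptions: slices are taken along x, discrete points are rejected with a simple 3-sigma rule, and the fitted line is a polynomial whose stationary points are the candidate extreme points; the slice count and polynomial degree are illustrative choices, not values from the patent:

```python
import numpy as np

def back_profile(points: np.ndarray, n_slices: int = 200) -> np.ndarray:
    """Slice the pig's point cloud along x, keep the minimum-z point of
    each slice, and reject discrete outliers (3-sigma on z)."""
    x, z = points[:, 0], points[:, 2]
    edges = np.linspace(x.min(), x.max(), n_slices + 1)
    row = []
    for i in range(n_slices):
        m = (x >= edges[i]) & (x < edges[i + 1])
        if m.any():
            j = np.argmin(z[m])
            row.append((x[m][j], z[m][j]))
    row = np.asarray(row)
    keep = np.abs(row[:, 1] - row[:, 1].mean()) < 3 * row[:, 1].std()
    return row[keep]

def extreme_points(profile: np.ndarray, degree: int = 8) -> np.ndarray:
    """Fit z(x) as a polynomial and return the x positions where
    dz/dx = 0 -- candidate feature points such as the head top and
    the first caudal vertebra."""
    coeffs = np.polyfit(profile[:, 0], profile[:, 1], degree)
    roots = np.roots(np.polyder(coeffs))
    xs = roots[np.isreal(roots)].real
    return xs[(xs > profile[:, 0].min()) & (xs < profile[:, 0].max())]
```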
Body measurement points are then extracted from the curve to obtain the feature points of the characteristic parts: the pig's head top, first caudal vertebra, scapula, and toe bone. The pixel lengths of the body size data are calculated from these feature points and the ground plane equation ax + by + cz + d = 0:
corresponding body length data, including body length X 1 High X 2 Oblique length X 3 Sum of body widths X 4
Wherein the body length X 1 Is the top of head N (x) 1 ,y 1 ,z 1 ) To M (x) 2 ,y 2 ,z 2 ) Is expressed as: x is X 1 =|x 1 -x 2 |;
High X 2 Is the first coccyx M (x) 2 ,y 2 ,z 2 ) The linear distance to the ground plane ax+by+cz+d=0 is expressed as:
oblique length X 3 Is the bone A (x) 3 ,y 3 ,z 3 ) To toe bones of live pigs B (x) 4 ,y 4 ,z 4 ) Is expressed as:
body width X 4 Measured by a Kinect depth camera, the expression is: x is X 4 =|z u -z d |。
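Collecting the four expressions, a short sketch of the measurement computation (point names follow the text; X4 is returned exactly as |zu − zd|, with the side-view doubling mentioned above left to the caller):

```python
import numpy as np

def body_measurements(N, M, A, B, plane, z_u, z_d):
    """N: head top, M: first caudal vertebra, A: scapula, B: toe bone,
    each an (x, y, z) triple; plane = (a, b, c, d) for ax+by+cz+d = 0."""
    a, b, c, d = plane
    X1 = abs(N[0] - M[0])                          # body length, |x1 - x2|
    X2 = abs(a*M[0] + b*M[1] + c*M[2] + d) / np.sqrt(a*a + b*b + c*c)  # body height
    X3 = float(np.linalg.norm(np.subtract(A, B)))  # oblique body length
    X4 = abs(z_u - z_d)  # body width; doubled elsewhere when only one side is imaged
    return X1, X2, X3, X4
```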
S06: performing correlation analysis on the weight of live pigs and body size data thereof, performing multiple linear regression analysis by using a least square method of multiple linear regression, and performing body length X of the live pigs 1 High X 2 Oblique length X 3 Sum of body widths X 4 The body size data is used as a multi-element independent variable of the least square method, and the body weight of the live pigs is used as the independent variable to be analyzed. The following model can be derived:
Y = α0 + α1·X1 + α2·X2 + … + αn·Xn + ε
where Y is the weight of the live pig, α0, α1, α2, …, αn are the coefficients corresponding to different types of pig, ε is the error term, and X1, X2, …, Xn are the body size data of the live pig.
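As an illustrative sketch (the patent does not prescribe an implementation), the least-squares fit and the resulting weight prediction can be written with NumPy:

```python
import numpy as np

def fit_weight_model(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Ordinary least squares for Y = α0 + α1·X1 + ... + αn·Xn + ε.
    X: (m, n) body measurements, one pig per row; y: (m,) weights.
    Returns the coefficient vector [α0, α1, ..., αn]."""
    design = np.hstack([np.ones((X.shape[0], 1)), X])  # intercept column
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

def predict_weight(coef: np.ndarray, x: np.ndarray) -> float:
    """Estimate one pig's weight from its measurements x = (X1, ..., Xn)."""
    return float(coef[0] + coef[1:] @ x)
```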
Finally, it should be noted that the foregoing is only a preferred embodiment of the invention and the invention is not limited thereto. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of the technical features. Any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be included in the scope of the claims of the invention.

Claims (4)

1. A live pig weight measurement method based on deep learning and a depth camera, characterized by comprising the following steps:
obtaining a depth image of a live pig;
converting the depth image into three-dimensional point cloud data;
preprocessing the three-dimensional point cloud data to remove noise;
running a PointNet++ point cloud convolutional neural network model on the PyTorch deep learning framework, feeding the denoised three-dimensional point cloud data into the PointNet++ model for deep learning, and removing the background three-dimensional point cloud data;
projecting the background-removed three-dimensional point cloud data into a three-dimensional coordinate system, finding the minimum-z point on each slice of the point cloud, merging these minima into a new point row, and removing discrete points from the row to obtain a corresponding two-dimensional fitted line; solving for the positions of the corresponding extreme points yields the corresponding feature-point coordinates, from which the corresponding body size data are obtained;
taking the body size data of the live pig as the independent variables and the weight of the live pig as the dependent variable, the weight of the live pig is obtained according to the following formula:
Y = α0 + α1·X1 + α2·X2 + … + αn·Xn + ε
where Y is the weight of the live pig, α0, α1, α2, …, αn are the coefficients corresponding to different types of pig, ε is the error term, and X1, X2, …, Xn are the body size data of the live pig;
the characteristic points comprise a ground plane, the top of the head of the live pig, the first tail vertebra of the live pig, the nail bone of the live pig and the toe bone of the live pig, wherein the equation of the ground plane is that
ax+by+cz+d=0
The corresponding body size data include body length X1, body height X2, oblique body length X3, and body width X4,
wherein body length X1 is the distance from the head top N(x1, y1, z1) to the first caudal vertebra M(x2, y2, z2), expressed as: X1 = |x1 − x2|;
body height X2 is the straight-line distance from the first caudal vertebra M(x2, y2, z2) to the ground plane ax + by + cz + d = 0, expressed as: X2 = |a·x2 + b·y2 + c·z2 + d| / √(a² + b² + c²);
oblique body length X3 is the distance from the scapula A(x3, y3, z3) to the pig's toe bone B(x4, y4, z4), expressed as: X3 = √((x3 − x4)² + (y3 − y4)² + (z3 − z4)²);
body width X4 is measured by the Kinect depth camera, expressed as: X4 = |zu − zd|.
2. The live pig weight measurement method based on deep learning and a depth camera according to claim 1, wherein obtaining the depth image of the live pig comprises: erecting a Kinect depth camera at the side of the feeder and capturing depth images of live pigs standing still while feeding, the distance range satisfying D ≥ L / (2·tan(a/2)),
where a is the viewing angle of the Kinect depth camera, L is the length of the pig's feeding area, and D is the distance from the Kinect depth camera to the pig.
3. The live pig weight measurement method based on deep learning and a depth camera according to claim 2, wherein converting the depth image into three-dimensional point cloud data comprises: converting the acquired depth image into three-dimensional point cloud data with the built-in point cloud acquisition function in the Kinect for Windows SDK 2.0 software development environment.
4. The live pig weight measurement method based on deep learning and a depth camera according to claim 3, wherein preprocessing the three-dimensional point cloud data to remove noise comprises: filtering and denoising the three-dimensional point cloud data set with a bilateral filtering method, whose expression is:
J0 = I, J(t+1) = f(Jt)
wherein J0 is the initial image, Jt is the result after t iterations, and f(Jt) is the filter.
CN202111014612.2A 2021-08-31 2021-08-31 Live pig weight measurement method based on deep learning and depth camera Active CN113706512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111014612.2A CN113706512B (en) 2021-08-31 2021-08-31 Live pig weight measurement method based on deep learning and depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111014612.2A CN113706512B (en) 2021-08-31 2021-08-31 Live pig weight measurement method based on deep learning and depth camera

Publications (2)

Publication Number Publication Date
CN113706512A CN113706512A (en) 2021-11-26
CN113706512B (en) 2023-08-11

Family

ID=78658182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111014612.2A Active CN113706512B (en) 2021-08-31 2021-08-31 Live pig weight measurement method based on deep learning and depth camera

Country Status (1)

Country Link
CN (1) CN113706512B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972165B (en) * 2022-03-24 2024-03-15 中山大学孙逸仙纪念医院 Method and device for measuring time average shearing force

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415282A (en) * 2019-07-31 2019-11-05 宁夏金宇智慧科技有限公司 A kind of milk cow weight forecasting system
CN110986788A (en) * 2019-11-15 2020-04-10 华南农业大学 Automatic measurement method based on three-dimensional point cloud livestock phenotype body size data
CN111612850A (en) * 2020-05-13 2020-09-01 河北工业大学 Pig body size parameter measuring method based on point cloud
CN112712590A (en) * 2021-01-15 2021-04-27 中国农业大学 Animal point cloud generation method and system
KR20210096448A (en) * 2020-01-28 2021-08-05 전북대학교산학협력단 A contactless mobile weighting system for livestock using asymmetric stereo cameras
CN113313833A (en) * 2021-06-29 2021-08-27 西藏新好科技有限公司 Pig body weight estimation method based on 3D vision technology

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415282A (en) * 2019-07-31 2019-11-05 宁夏金宇智慧科技有限公司 A kind of milk cow weight forecasting system
CN110986788A (en) * 2019-11-15 2020-04-10 华南农业大学 Automatic measurement method based on three-dimensional point cloud livestock phenotype body size data
KR20210096448A (en) * 2020-01-28 2021-08-05 전북대학교산학협력단 A contactless mobile weighting system for livestock using asymmetric stereo cameras
CN111612850A (en) * 2020-05-13 2020-09-01 河北工业大学 Pig body size parameter measuring method based on point cloud
CN112712590A (en) * 2021-01-15 2021-04-27 中国农业大学 Animal point cloud generation method and system
CN113313833A (en) * 2021-06-29 2021-08-27 西藏新好科技有限公司 Pig body weight estimation method based on 3D vision technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pig body three-dimensional point cloud reconstruction and body size measurement using multi-view depth cameras; Yin Ling et al.; Transactions of the Chinese Society of Agricultural Engineering; Vol. 35, No. 23; 201-208 *

Also Published As

Publication number Publication date
CN113706512A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
Kongsro Estimation of pig weight using a Microsoft Kinect prototype imaging system
AU2010219406B2 (en) Image analysis for making animal measurements
Shi et al. An automatic method of fish length estimation using underwater stereo system based on LabVIEW
Zhou et al. An integrated skeleton extraction and pruning method for spatial recognition of maize seedlings in MGV and UAV remote images
CN109141248B (en) Pig weight measuring and calculating method and system based on image
CN107437068B (en) Pig individual identification method based on Gabor direction histogram and pig body hair mode
CN109636779B (en) Method, apparatus and storage medium for recognizing integrated ruler of poultry body
CN112232978B (en) Aquatic product length and weight detection method, terminal equipment and storage medium
Liu et al. Automatic estimation of dairy cattle body condition score from depth image using ensemble model
CN107610122B (en) Micro-CT-based single-grain cereal internal insect pest detection method
CN111325217B (en) Data processing method, device, system and medium
CN111696150A (en) Method for measuring phenotypic data of channel catfish
CN110569735A (en) Analysis method and device based on back body condition of dairy cow
CN106846462B (en) insect recognition device and method based on three-dimensional simulation
AU2016327051A1 (en) Image analysis for making animal measurements including 3-D image analysis
CN113706512B (en) Live pig weight measurement method based on deep learning and depth camera
CN108305247B (en) Method for detecting tissue hardness based on CT image gray value
CN112825791A (en) Milk cow body condition scoring method based on deep learning and point cloud convex hull characteristics
CN109344917A (en) A kind of the species discrimination method and identification system of Euproctis insect
CN109492535B (en) Computer vision sow lactation behavior identification method
CN108682000B (en) Pig body length and body width detection method based on Kinect video
Güldenring et al. RumexWeeds: A grassland dataset for agricultural robotics
CN109166127B (en) Wearable plant phenotype sensing system
US10204405B2 (en) Apparatus and method for parameterizing a plant
Chen et al. Identification and detection of biological information on tiny biological targets based on subtle differences

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant