CN113989391A - Animal three-dimensional model reconstruction system and method based on RGB-D camera - Google Patents

Animal three-dimensional model reconstruction system and method based on RGB-D camera

Info

Publication number
CN113989391A
Authority
CN
China
Prior art keywords
depth
camera
point cloud
rgb
depth camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111333549.9A
Other languages
Chinese (zh)
Inventor
程曼
范才虎
袁洪波
张英杰
刘月琴
蔡振江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Agricultural University
Original Assignee
Hebei Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Agricultural University filed Critical Hebei Agricultural University
Priority to CN202111333549.9A priority Critical patent/CN113989391A/en
Publication of CN113989391A publication Critical patent/CN113989391A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K17/0029 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an animal body three-dimensional model reconstruction system based on an RGB-D camera. The system comprises an animal detection channel, and a detection mechanism and an ear tag reading mechanism arranged in sequence inside the animal detection channel along the advancing direction of the animal; both the detection mechanism and the ear tag reading mechanism communicate with a terminal. The detection mechanism comprises at least two depth cameras arranged in a staggered manner for collecting three-dimensional point cloud data, each depth camera comprising a color imaging part and a depth imaging part. With this structure, the invention uses the world coordinate system of one camera as the unified coordinate system and calculates the rotation matrices and translation vectors of the other cameras relative to that coordinate system to stitch the point clouds, so that the three-dimensional model of the target animal in the target area can be reconstructed quickly and accurately by a non-contact method, with a simple algorithm, a small amount of computation and high accuracy.

Description

Animal three-dimensional model reconstruction system and method based on RGB-D camera
Technical Field
The invention relates to animal breeding technology, and in particular to an animal body three-dimensional model reconstruction system and method based on an RGB-D camera.
Background
In the breeding process, the body size parameters of an animal are key indicators of its growth and development, production performance and genetic characteristics. At present, animal body size is mainly measured directly by staff using tools such as rulers, tape measures and calipers; this method is time-consuming and labor-intensive, highly subjective, easily causes a stress response in the measured animal, and presents potential danger and injury to both animals and people. Furthermore, such contact measurement methods pose potential risks to both the physiological and psychological health of the animal.
To avoid the above problems, non-contact measurement of animals based on three-dimensional models has quietly emerged in recent years; with a three-dimensional model of the animal, body size information can be acquired quickly, accurately and efficiently. Research on methods for constructing animal three-dimensional models therefore has important practical significance.
At present, there are two main approaches to reconstructing animal three-dimensional models: image-based reconstruction and reconstruction based on three-dimensional point clouds. Stereo vision (SV) and structure from motion (SfM) are relatively common methods for three-dimensional reconstruction from RGB images. SV photographs the same scene with two or more cameras at the same time and then recovers the three-dimensional structure using the principle of human binocular parallax; SfM reconstructs the three-dimensional structure with one camera by continuously changing the shooting angle and then applying the same principle as SV. Image-based three-dimensional reconstruction therefore places certain requirements on the content and quality of the images captured by the camera, and a certain degree of overlap between images is required to restore the target accurately; moreover, the captured images are easily affected by illumination, so the requirements on the shooting environment are high.
Another popular approach is to use a scanning device capable of generating a three-dimensional point cloud to achieve three-dimensional reconstruction of the target; commonly used scanning devices include LiDAR, TLS, ALS and RGB-D cameras. Three-dimensional scanning devices typically scan a target using the time-of-flight (TOF) principle or the phase-shift principle, digitizing the target and recording the scanning distance so that each point in the cloud is represented by three-dimensional coordinates. Compared with image-based methods, this approach is less affected by external light, has relatively strong anti-interference capability and is better suited to breeding environments. Therefore, in environments such as farms, a fast and accurate animal three-dimensional model reconstruction method and system based on RGB-D (depth) cameras is highly desirable.
Disclosure of Invention
The invention aims to provide an animal body three-dimensional model reconstruction system based on RGB-D cameras that uses the world coordinate system of one camera as the unified coordinate system and calculates the rotation matrices and translation vectors of the other cameras relative to that coordinate system to stitch the point clouds, so that the three-dimensional model of a target animal in the target area can be reconstructed quickly and accurately by a non-contact method, with a simple algorithm, a small amount of computation and high accuracy.
In order to achieve the above purpose, the invention provides an animal body three-dimensional model reconstruction system based on an RGB-D camera, comprising an animal detection channel, and a detection mechanism and an ear tag reading mechanism arranged in sequence inside the animal detection channel along the advancing direction of the animal, the detection mechanism and the ear tag reading mechanism both communicating with a terminal;
the detection mechanism comprises at least two depth cameras which are distributed in a staggered mode and used for collecting three-dimensional point cloud data, and each depth camera comprises a color imaging part and a depth imaging part.
Preferably, the detection mechanism comprises three of the depth cameras;
the three depth cameras are respectively a top depth camera arranged at the top end of the animal detection channel and two side depth cameras arranged at two sides of the animal detection channel.
Preferably, the ear tag reading mechanism is an RFID ear tag reader.
The method of the animal body three-dimensional model reconstruction system based on the RGB-D camera comprises the following steps:
S1, calibrating the depth camera;
S2, camera position estimation:
S20, obtaining the color intrinsic matrix H_rgb and the infrared (depth) intrinsic matrix H_ir of each depth camera, together with the rotation matrix R_ir and translation vector T_ir between its color and depth imaging parts;
S21, calculating the conversion relationships among the point clouds acquired by the three depth cameras;
S3, acquiring a color three-dimensional point cloud;
S4, point cloud fusion;
S5, filtering the point cloud;
S6, extracting the target animal point cloud;
and S7, obtaining the three-dimensional model of the target animal.
Preferably, step S1 specifically includes the following steps:
S10, adjusting the relative position of the depth camera and the checkerboard calibration board corresponding to that depth camera;
S11, after the ear tag reading mechanism detects that the target animal has completely entered the animal detection channel, triggering the three depth cameras to collect infrared information and color information of the target animal, and acquiring a plurality of infrared images and color images;
S12, calibrating the infrared images and the color images in MATLAB using the Camera Calibration software package;
S13, importing the infrared images and the color images into MATLAB separately, and obtaining the calibration error of each image and the average calibration error through Camera Calibration;
if the average calibration error is larger than the set error, deleting images one by one in descending order of calibration error, starting from the image with the largest calibration error, until the average calibration error is smaller than the set error.
Preferably, step S21 specifically includes the following steps:
S210, calculating the conversion relationship between the point clouds acquired by the two side depth cameras:
[Formula: the rotation matrix R1, expressed in terms of the angle θ1]
[Formula: the translation vector T1, expressed in terms of the distance L1]
where θ1 is the included angle between the planes of the two side depth cameras, namely 180°; L1 is the distance between the planes of the two side depth cameras; R1 is the rotation matrix between the two side depth cameras; and T1 is the translation vector between the two side depth cameras;
Substituting the values of θ1 and L1 gives the conversion relationship between the point clouds acquired by the two side depth cameras:
[Formula: R1 and T1 evaluated with θ1 = 180° and the measured value of L1]
S211, calculating the conversion relationship between the point clouds acquired by any one side depth camera and the top depth camera:
[Formula: the rotation matrix R2, expressed in terms of the angle θ2]
[Formula: the translation vector T2, expressed in terms of L2, h1 and h2]
where θ2 is the angle between the planes of the side depth camera and the top depth camera, i.e. 90°; L2 is the horizontal distance from the center of the side depth camera to the top depth camera; h1 is the height of the side depth camera; h2 is the height of the top depth camera; R2 is the rotation matrix between the top depth camera and the side depth camera; and T2 is the translation vector between the top depth camera and the side depth camera;
Substituting the values of θ2, L2, h1 and h2 gives the conversion relationship between the point clouds acquired by this side depth camera and the top depth camera:
[Formula: R2 and T2 evaluated with θ2 = 90° and the measured values of L2, h1 and h2]
preferably, step S3 specifically includes the following steps:
S30, converting the depth data acquired by the depth camera into a three-dimensional point cloud in the world coordinate system, using the intrinsic parameters of the depth camera as the constraint:
P_ir = [x_ir, y_ir, z_ir]^T = [H_ir]^(-1) * p_ir,  with p_ir = [x'*D, y'*D, D]^T    (5)
where [H_ir]^(-1) is the inverse of the depth camera intrinsic matrix H_ir; p_ir is the depth information of a pixel in the depth image acquired by the depth camera, D is its depth value, and x' and y' are the row and column positions of that depth value in the depth image; P_ir is the depth pixel converted into the world coordinate system, and x_ir, y_ir and z_ir are the three-dimensional spatial coordinates of the converted depth value in the world coordinate system;
S31, converting the depth three-dimensional point cloud data in the world coordinate system into the color camera coordinate system:
P_rgb = R * P_ir + T    (6)
where R is the rotation matrix and T is the translation vector; P_ir is the information of a depth pixel in the world coordinate system; P_rgb is the depth pixel information converted into the color camera coordinate system, and x_rgb, y_rgb and z_rgb are the three-dimensional spatial coordinates of the depth value in the color camera coordinate system;
S32, obtaining the color value of each pixel in the color image and matching it with the corresponding depth value in the depth image to obtain a color three-dimensional point cloud:
p_rgb = [x'', y'', 1]^T = (1 / z_rgb) * H_rgb * P_rgb    (7)
where P_rgb is the depth pixel information in the color camera coordinate system obtained in step S31; H_rgb is the color camera intrinsic matrix; p_rgb is the corresponding pixel in the color image acquired by the depth camera, x'' and y'' being its row and column positions in the color image; and C is the color value of the color image at that pixel, which is matched to the depth value;
S33, repeating the above operations for each depth value acquired by the depth camera to obtain a color three-dimensional point cloud of the whole reconstructed object;
and S34, limiting the range of the data acquired by each depth camera, and extracting only the color point cloud data whose coordinates along each axis fall within the detection channel (fence).
Preferably, step S4 specifically includes the following steps:
Taking one side depth camera as the reference, the point clouds acquired by the other side depth camera and by the top depth camera are converted into the coordinate system of the reference camera using the rotation matrices R1 and R2 and the translation vectors T1 and T2, respectively, completing the initial registration of the point clouds:
P4 = R1 * P1 + T1,  P5 = R2 * P3 + T2,  P6 = P2 ∪ P4 ∪ P5    (8)
where P1 is the point cloud acquired by the other side depth camera and P4 is its transformed result, P3 is the point cloud acquired by the top depth camera and P5 is its transformed result, P2 is the point cloud acquired by the reference side depth camera, and P6 is the point cloud after fusion is completed.
Preferably, step S5 specifically includes the following steps:
S50, calculating the point cloud density ρ of the fused point cloud P6:
dis(p, q) = sqrt((xp − xq)^2 + (yp − yq)^2 + (zp − zq)^2),  dp = min(dis(p, q))    (9)
ρ = (1/N) * Σ_{a=1..N} dp    (10)
where p(xp, yp, zp) is any point in the point cloud P6, q(xq, yq, zq) is any point in P6 other than p, dis(p, q) is the Euclidean distance between p and q, min(dis(p, q)) denotes taking the minimum of dis(p, q) over all q, dp is that minimum Euclidean distance for the point p, N is the number of points in the point cloud P6, and a is the count index of the summation;
S51, clustering the point cloud P6 with the point cloud density ρ as the threshold: for any point p in P6, a KD-tree neighborhood search is performed with p as the center and ρ as the search radius, and all points found within this radius are clustered into a set Q;
S52, repeating the operation of step S51 for the other points in the set Q, and adding the newly found points to Q, until no new points are added;
and S53, repeating steps S51 and S52 for the points not yet assigned to any set, obtaining the sets Q1, Q2, Q3, ....
Preferably, in step S6, since the animal is the largest target in the point cloud P6 and the remaining noise point clouds are small targets, the set containing the largest number of points corresponds to the target animal, and that set is extracted as the target animal point cloud.
Therefore, with the RGB-D camera based animal body three-dimensional model reconstruction system of this structure, the invention uses the world coordinate system of one camera as the unified coordinate system and calculates the rotation matrices and translation vectors of the other cameras relative to that coordinate system to stitch the point clouds. The three-dimensional model of the target animal in the target area can thus be reconstructed quickly and accurately by a non-contact method, with a simple algorithm, a small amount of computation and high accuracy.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
Fig. 1 is a schematic structural diagram of an animal body three-dimensional model reconstruction system based on an RGB-D camera according to an embodiment of the present invention.
Wherein: 1. an animal detection channel; 2. an ear tag reading mechanism; 3. a top depth camera; 4. a side depth camera.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that the terms "upper", "inner", "outer", etc. indicate orientations or positional relationships based on those shown in the drawings or orientations or positional relationships that the products of the present invention conventionally use, which are merely for convenience of description and simplification of description, but do not indicate or imply that the devices or elements referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," and "connected" are to be construed broadly, e.g., as meaning fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Fig. 1 is a schematic structural diagram of an animal body three-dimensional model reconstruction system based on an RGB-D camera according to an embodiment of the present invention. As shown in fig. 1, the invention comprises an animal detection channel 1, and a detection mechanism and an ear tag reading mechanism 2 arranged in sequence inside the animal detection channel 1 along the advancing direction of the animal, the detection mechanism and the ear tag reading mechanism 2 both communicating with a terminal;
the detection mechanism comprises at least two depth cameras which are distributed in a staggered mode and used for collecting three-dimensional point cloud data, and each depth camera comprises a color imaging part and a depth imaging part.
Preferably, the detection mechanism comprises three depth cameras;
the three depth cameras are respectively a top depth camera 3 arranged at the top end of the animal detection channel 1 and two side depth cameras 4 arranged at two sides of the animal detection channel 1.
Preferably, the ear tag reading means 2 is an RFID ear tag reader.
The method of the animal body three-dimensional model reconstruction system based on the RGB-D camera comprises the following steps:
S1, calibrating the depth camera;
S2, camera position estimation:
S20, obtaining the color intrinsic matrix H_rgb and the infrared (depth) intrinsic matrix H_ir of each depth camera, together with the rotation matrix R_ir and translation vector T_ir between its color and depth imaging parts;
S21, calculating the conversion relationships among the point clouds acquired by the three depth cameras;
S3, acquiring a color three-dimensional point cloud;
S4, point cloud fusion;
S5, filtering the point cloud;
S6, extracting the target animal point cloud;
and S7, obtaining the three-dimensional model of the target animal.
Preferably, step S1 specifically includes the following steps:
S10, adjusting the relative position of the depth camera and the checkerboard calibration board corresponding to that depth camera;
S11, after the ear tag reading mechanism 2 detects that the target animal has completely entered the animal detection channel 1, triggering the three depth cameras to collect infrared information and color information of the target animal, and acquiring a plurality of infrared images and color images;
In step S11, each infrared image and the corresponding color image frame form one calibration image; 20 such calibration images are photographed with the checkerboard calibration board at different positions, different angles and different postures.
S12, calibrating the infrared images and the color images in MATLAB (commercial mathematical software from MathWorks, USA) using the Camera Calibration software package;
S13, importing the infrared images and the color images into MATLAB separately, and obtaining the calibration error of each image and the average calibration error through Camera Calibration;
if the average calibration error is larger than the set error, deleting images one by one in descending order of calibration error, starting from the image with the largest calibration error, until the average calibration error is smaller than the set error. The set error in this embodiment is 0.15.
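The image-screening loop of steps S12-S13 can also be illustrated outside MATLAB. The sketch below is only an illustration of the same idea in Python with OpenCV (run once on the infrared images and once on the color images): detect the checkerboard corners, calibrate, compute the per-image reprojection error, and discard the worst image until the average error drops below the set error of 0.15. The function name, board size and square size are assumptions made for the example and are not taken from the patent.

import cv2
import numpy as np

# Illustrative sketch only: the patent performs this step with MATLAB's Camera
# Calibration package; this OpenCV version mimics the same screening of
# calibration images by reprojection error.
def calibrate_with_screening(images, board_size=(9, 6), square_size=0.03,
                             max_mean_error=0.15):
    # 3-D object points of the checkerboard corners (all on the Z = 0 plane)
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

    obj_pts, img_pts = [], []
    for img in images:
        gray = img if img.ndim == 2 else cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    while True:
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, gray.shape[::-1], None, None)
        errors = []
        for i in range(len(obj_pts)):
            proj, _ = cv2.projectPoints(obj_pts[i], rvecs[i], tvecs[i], K, dist)
            errors.append(cv2.norm(img_pts[i], proj, cv2.NORM_L2) / len(proj))
        if np.mean(errors) <= max_mean_error or len(obj_pts) <= 3:
            return K, dist, errors
        worst = int(np.argmax(errors))   # drop the image with the largest error and recalibrate
        obj_pts.pop(worst)
        img_pts.pop(worst)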
Preferably, step S21 specifically includes the following steps:
S210, calculating the conversion relationship between the point clouds acquired by the two side depth cameras 4:
[Formula: the rotation matrix R1, expressed in terms of the angle θ1]
[Formula: the translation vector T1, expressed in terms of the distance L1]
where θ1 is the included angle between the planes of the two side depth cameras 4, namely 180°; L1 is the distance between the planes of the two side depth cameras 4; R1 is the rotation matrix between the two side depth cameras 4; and T1 is the translation vector between the two side depth cameras 4;
Substituting the values of θ1 and L1 gives the conversion relationship between the point clouds acquired by the two side depth cameras 4:
[Formula: R1 and T1 evaluated with θ1 = 180° and the measured value of L1]
S211, calculating the conversion relationship between the point clouds acquired by any one side depth camera 4 and the top depth camera 3:
[Formula: the rotation matrix R2, expressed in terms of the angle θ2]
[Formula: the translation vector T2, expressed in terms of L2, h1 and h2]
where θ2 is the angle between the planes of the side depth camera 4 and the top depth camera 3, i.e. 90°; L2 is the horizontal distance from the center of the side depth camera 4 to the top depth camera 3; h1 is the height of the side depth camera 4; h2 is the height of the top depth camera 3; R2 is the rotation matrix between the top depth camera 3 and the side depth camera 4; and T2 is the translation vector between the top depth camera 3 and the side depth camera 4;
Substituting the values of θ2, L2, h1 and h2 gives the conversion relationship between the point clouds acquired by this side depth camera 4 and the top depth camera 3:
[Formula: R2 and T2 evaluated with θ2 = 90° and the measured values of L2, h1 and h2]
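The exact entries of R1, T1, R2 and T2 are published as images in the original document and depend on the axis conventions chosen for each camera, so they are not reproduced above. As a hedged illustration only, the sketch below constructs one plausible set of matrices under the assumption that every camera frame has x pointing right, y pointing down and z along the optical axis, so that θ1 = 180° becomes a rotation about the vertical axis and θ2 = 90° a rotation about the horizontal axis; the translation components and all numeric distances are example values, not values from the patent.

import numpy as np

# Illustrative sketch only: the patent gives R1, T1, R2, T2 as formula images;
# the axis conventions and translation components below are assumptions.
def rot_y(theta_deg):
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), 0, np.sin(t)],
                     [0, 1, 0],
                     [-np.sin(t), 0, np.cos(t)]])

def rot_x(theta_deg):
    t = np.radians(theta_deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t), np.cos(t)]])

theta1, L1 = 180.0, 1.2        # angle between the two side cameras, distance between their planes (m)
theta2, L2 = 90.0, 0.6         # angle between side and top cameras, horizontal offset (m)
h1, h2 = 0.8, 2.0              # example mounting heights of the side and top cameras (m)

R1 = rot_y(theta1)                       # other side camera -> reference side camera
T1 = np.array([0.0, 0.0, L1])            # the two side cameras face each other across the channel

R2 = rot_x(theta2)                       # top camera -> reference side camera
T2 = np.array([0.0, -(h2 - h1), L2])     # assumed: top camera (h2 - h1) higher and L2 in front

# A point P3 seen by the top camera would map into the reference frame as R2 @ P3 + T2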
preferably, step S3 specifically includes the following steps:
S30, converting the depth data acquired by the depth camera into a three-dimensional point cloud in the world coordinate system, using the intrinsic parameters of the depth camera as the constraint:
P_ir = [x_ir, y_ir, z_ir]^T = [H_ir]^(-1) * p_ir,  with p_ir = [x'*D, y'*D, D]^T    (5)
where [H_ir]^(-1) is the inverse of the depth camera intrinsic matrix H_ir; p_ir is the depth information of a pixel in the depth image acquired by the depth camera, D is its depth value, and x' and y' are the row and column positions of that depth value in the depth image; P_ir is the depth pixel converted into the world coordinate system, and x_ir, y_ir and z_ir are the three-dimensional spatial coordinates of the converted depth value in the world coordinate system;
S31, converting the depth three-dimensional point cloud data in the world coordinate system into the color camera coordinate system:
P_rgb = R * P_ir + T    (6)
where R is the rotation matrix and T is the translation vector; P_ir is the information of a depth pixel in the world coordinate system; P_rgb is the depth pixel information converted into the color camera coordinate system, and x_rgb, y_rgb and z_rgb are the three-dimensional spatial coordinates of the depth value in the color camera coordinate system;
S32, obtaining the color value of each pixel in the color image and matching it with the corresponding depth value in the depth image to obtain a color three-dimensional point cloud:
p_rgb = [x'', y'', 1]^T = (1 / z_rgb) * H_rgb * P_rgb    (7)
where P_rgb is the depth pixel information in the color camera coordinate system obtained in step S31; H_rgb is the color camera intrinsic matrix; p_rgb is the corresponding pixel in the color image acquired by the depth camera, x'' and y'' being its row and column positions in the color image; and C is the color value of the color image at that pixel, which is matched to the depth value;
S33, repeating the above operations for each depth value acquired by the depth camera to obtain a color three-dimensional point cloud of the whole reconstructed object;
and S34, limiting the range of the data acquired by each depth camera, and extracting only the color point cloud data whose coordinates along each axis fall within the detection channel (fence).
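Steps S30-S34 amount to back-projecting every depth pixel with H_ir, moving it into the color camera frame with R and T, sampling the color image through H_rgb, and cropping the result to the detection channel. The sketch below shows one way this pipeline could be written; the array layout, the row/column convention for x' and y'', and the region-of-interest format are assumptions made for the example, not requirements of the patent.

import numpy as np

# Illustrative sketch of steps S30-S34 under assumed variable names.
def depth_to_colored_cloud(depth, color, H_ir, H_rgb, R, T, roi):
    # S30: back-project every depth pixel, P_ir = [H_ir]^-1 * [x'*D, y'*D, D]^T (formula (5))
    H_ir_inv = np.linalg.inv(H_ir)
    rows, cols = np.indices(depth.shape)
    D = depth.reshape(-1).astype(float)
    pix = np.stack([cols.reshape(-1) * D, rows.reshape(-1) * D, D], axis=0)
    P_ir = (H_ir_inv @ pix).T                       # N x 3 points in the world frame

    # S31: transform into the color camera frame, P_rgb = R * P_ir + T (formula (6))
    P_rgb = P_ir @ R.T + T

    # S32: project into the color image and fetch the color value C (formula (7))
    proj = H_rgb @ P_rgb.T
    z = np.where(proj[2] != 0, proj[2], np.inf)     # avoid division by zero for empty pixels
    u = np.round(proj[0] / z).astype(int)           # column position in the color image
    v = np.round(proj[1] / z).astype(int)           # row position in the color image
    ok = (D > 0) & (u >= 0) & (u < color.shape[1]) & (v >= 0) & (v < color.shape[0])
    P, C = P_rgb[ok], color[v[ok], u[ok]]

    # S34: keep only points whose coordinates lie inside the detection channel (fence)
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = roi
    inside = ((P[:, 0] > xmin) & (P[:, 0] < xmax) &
              (P[:, 1] > ymin) & (P[:, 1] < ymax) &
              (P[:, 2] > zmin) & (P[:, 2] < zmax))
    return P[inside], C[inside]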
Preferably, step S4 specifically includes the following steps:
Taking one side depth camera 4 as the reference, the point clouds acquired by the other side depth camera 4 and by the top depth camera 3 are converted into the coordinate system of the reference camera using the rotation matrices R1 and R2 and the translation vectors T1 and T2, respectively, completing the initial registration of the point clouds:
P4 = R1 * P1 + T1,  P5 = R2 * P3 + T2,  P6 = P2 ∪ P4 ∪ P5    (8)
where P1 is the point cloud acquired by the other side depth camera 4 and P4 is its transformed result, P3 is the point cloud acquired by the top depth camera 3 and P5 is its transformed result, P2 is the point cloud acquired by the reference side depth camera 4, and P6 is the point cloud after fusion is completed.
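Step S4 then reduces to applying the two rigid transformations and concatenating the three clouds, as in the short sketch below; the variable names follow the description above, and the stacking order is an assumption.

import numpy as np

# Illustrative sketch of step S4 / formula (8): merge the three clouds in the
# reference side camera's coordinate system.
def fuse_clouds(P1, P2, P3, R1, T1, R2, T2):
    P4 = P1 @ R1.T + T1            # other side camera cloud -> reference frame
    P5 = P3 @ R2.T + T2            # top camera cloud -> reference frame
    P6 = np.vstack([P2, P4, P5])   # fused cloud: reference cloud plus the two transformed clouds
    return P6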
Preferably, step S5 specifically includes the following steps:
S50, calculating the point cloud density ρ of the fused point cloud P6:
dis(p, q) = sqrt((xp − xq)^2 + (yp − yq)^2 + (zp − zq)^2),  dp = min(dis(p, q))    (9)
ρ = (1/N) * Σ_{a=1..N} dp    (10)
where p(xp, yp, zp) is any point in the point cloud P6, q(xq, yq, zq) is any point in P6 other than p, dis(p, q) is the Euclidean distance between p and q, min(dis(p, q)) denotes taking the minimum of dis(p, q) over all q, dp is that minimum Euclidean distance for the point p, N is the number of points in the point cloud P6, and a is the count index of the summation;
S51, clustering the point cloud P6 with the point cloud density ρ as the threshold: for any point p in P6, a KD-tree neighborhood search is performed with p as the center and ρ as the search radius, and all points found within this radius are clustered into a set Q;
S52, repeating the operation of step S51 for the other points in the set Q, and adding the newly found points to Q, until no new points are added;
and S53, repeating steps S51 and S52 for the points not yet assigned to any set, obtaining the sets Q1, Q2, Q3, ....
Preferably, in step S6, since the animal is the largest target in the point cloud P6 and the remaining noise point clouds are small targets, the set containing the largest number of points corresponds to the target animal, and that set is extracted as the target animal point cloud.
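Steps S50-S53 and S6 together describe a density-driven region-growing clustering followed by selection of the largest cluster. One possible reading of this procedure is sketched below using SciPy's KD-tree; taking the search radius equal to the mean nearest-neighbor distance ρ is an assumption based on the wording of steps S50 and S51, and the function name is illustrative.

import numpy as np
from scipy.spatial import cKDTree

# Illustrative sketch of steps S50-S53 and S6.
def extract_animal(P6):
    tree = cKDTree(P6)
    # S50: point cloud density = mean distance of each point to its nearest neighbor
    dists, _ = tree.query(P6, k=2)              # k=2 because the first hit is the point itself
    rho = dists[:, 1].mean()

    # S51-S53: region growing with radius rho, yielding the sets Q1, Q2, Q3, ...
    unassigned = np.ones(len(P6), dtype=bool)
    clusters = []
    for seed in range(len(P6)):
        if not unassigned[seed]:
            continue
        queue, members = [seed], []
        unassigned[seed] = False
        while queue:
            idx = queue.pop()
            members.append(idx)
            for nb in tree.query_ball_point(P6[idx], rho):   # KD-tree radius search
                if unassigned[nb]:
                    unassigned[nb] = False
                    queue.append(nb)
        clusters.append(members)

    # S6: the target animal is the cluster containing the most points
    largest = max(clusters, key=len)
    return P6[largest]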
Therefore, with the RGB-D camera based animal body three-dimensional model reconstruction system of this structure, the invention uses the world coordinate system of one camera as the unified coordinate system and calculates the rotation matrices and translation vectors of the other cameras relative to that coordinate system to stitch the point clouds. The three-dimensional model of the target animal in the target area can thus be reconstructed quickly and accurately by a non-contact method, with a simple algorithm, a small amount of computation and high accuracy.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the invention without departing from the spirit and scope of the invention.

Claims (10)

1. An animal body three-dimensional model reconstruction system based on an RGB-D camera, characterized by comprising an animal detection channel, and a detection mechanism and an ear tag reading mechanism arranged in sequence inside the animal detection channel along the advancing direction of the animal, the detection mechanism and the ear tag reading mechanism both communicating with a terminal;
the detection mechanism comprises at least two depth cameras which are distributed in a staggered mode and used for collecting three-dimensional point cloud data, and each depth camera comprises a color imaging part and a depth imaging part.
2. The system for reconstructing an animal body three-dimensional model based on RGB-D camera as claimed in claim 1, wherein: the detection mechanism comprises three of the depth cameras;
the three depth cameras are respectively a top depth camera arranged at the top end of the animal detection channel and two side depth cameras arranged at two sides of the animal detection channel.
3. The system for reconstructing an animal body three-dimensional model based on RGB-D camera as claimed in claim 1, wherein: the ear tag reading mechanism is an RFID ear tag reader.
4. A method based on the system for reconstructing a three-dimensional model of an animal body based on RGB-D cameras as claimed in any one of claims 1 to 3, wherein: the method comprises the following steps:
S1, calibrating the depth camera;
S2, camera position estimation:
S20, obtaining the color intrinsic matrix H_rgb and the infrared (depth) intrinsic matrix H_ir of each depth camera, together with the rotation matrix R_ir and translation vector T_ir between its color and depth imaging parts;
S21, calculating the conversion relationships among the point clouds acquired by the three depth cameras;
S3, acquiring a color three-dimensional point cloud;
S4, point cloud fusion;
S5, filtering the point cloud;
S6, extracting the target animal point cloud;
and S7, obtaining the three-dimensional model of the target animal.
5. The method of the RGB-D camera based animal body three-dimensional model reconstruction system according to claim 4, wherein: step S1 specifically includes the following steps:
S10, adjusting the relative position of the depth camera and the checkerboard calibration board corresponding to that depth camera;
S11, after the ear tag reading mechanism detects that the target animal has completely entered the animal detection channel, triggering the three depth cameras to collect infrared information and color information of the target animal, and acquiring a plurality of infrared images and color images;
S12, calibrating the infrared images and the color images in MATLAB using the Camera Calibration software package;
S13, importing the infrared images and the color images into MATLAB separately, and obtaining the calibration error of each image and the average calibration error through Camera Calibration;
if the average calibration error is larger than the set error, deleting images one by one in descending order of calibration error, starting from the image with the largest calibration error, until the average calibration error is smaller than the set error.
6. The method of the system for reconstructing a three-dimensional model of an animal body based on RGB-D cameras as claimed in claim 5, wherein: step S21 specifically includes the following steps:
S210, calculating the conversion relationship between the point clouds acquired by the two side depth cameras:
[Formula: the rotation matrix R1, expressed in terms of the angle θ1]
[Formula: the translation vector T1, expressed in terms of the distance L1]
where θ1 is the included angle between the planes of the two side depth cameras, namely 180°; L1 is the distance between the planes of the two side depth cameras; R1 is the rotation matrix between the two side depth cameras; and T1 is the translation vector between the two side depth cameras;
Substituting the values of θ1 and L1 gives the conversion relationship between the point clouds acquired by the two side depth cameras:
[Formula: R1 and T1 evaluated with θ1 = 180° and the measured value of L1]
S211, calculating the conversion relationship between the point clouds acquired by any one side depth camera and the top depth camera:
[Formula: the rotation matrix R2, expressed in terms of the angle θ2]
[Formula: the translation vector T2, expressed in terms of L2, h1 and h2]
where θ2 is the angle between the planes of the side depth camera and the top depth camera, i.e. 90°; L2 is the horizontal distance from the center of the side depth camera to the top depth camera; h1 is the height of the side depth camera; h2 is the height of the top depth camera; R2 is the rotation matrix between the top depth camera and the side depth camera; and T2 is the translation vector between the top depth camera and the side depth camera;
Substituting the values of θ2, L2, h1 and h2 gives the conversion relationship between the point clouds acquired by this side depth camera and the top depth camera:
[Formula: R2 and T2 evaluated with θ2 = 90° and the measured values of L2, h1 and h2]
7. The method of the RGB-D camera based animal body three-dimensional model reconstruction system according to claim 6, wherein: step S3 specifically includes the following steps:
S30, converting the depth data acquired by the depth camera into a three-dimensional point cloud in the world coordinate system, using the intrinsic parameters of the depth camera as the constraint:
P_ir = [x_ir, y_ir, z_ir]^T = [H_ir]^(-1) * p_ir,  with p_ir = [x'*D, y'*D, D]^T    (5)
where [H_ir]^(-1) is the inverse of the depth camera intrinsic matrix H_ir; p_ir is the depth information of a pixel in the depth image acquired by the depth camera, D is its depth value, and x' and y' are the row and column positions of that depth value in the depth image; P_ir is the depth pixel converted into the world coordinate system, and x_ir, y_ir and z_ir are the three-dimensional spatial coordinates of the converted depth value in the world coordinate system;
S31, converting the depth three-dimensional point cloud data in the world coordinate system into the color camera coordinate system:
P_rgb = R * P_ir + T    (6)
where R is the rotation matrix and T is the translation vector; P_ir is the information of a depth pixel in the world coordinate system; P_rgb is the depth pixel information converted into the color camera coordinate system, and x_rgb, y_rgb and z_rgb are the three-dimensional spatial coordinates of the depth value in the color camera coordinate system;
S32, obtaining the color value of each pixel in the color image and matching it with the corresponding depth value in the depth image to obtain a color three-dimensional point cloud:
p_rgb = [x'', y'', 1]^T = (1 / z_rgb) * H_rgb * P_rgb    (7)
where P_rgb is the depth pixel information in the color camera coordinate system obtained in step S31; H_rgb is the color camera intrinsic matrix; p_rgb is the corresponding pixel in the color image acquired by the depth camera, x'' and y'' being its row and column positions in the color image; and C is the color value of the color image at that pixel, which is matched to the depth value;
S33, repeating the above operations for each depth value acquired by the depth camera to obtain a color three-dimensional point cloud of the whole reconstructed object;
and S34, limiting the range of the data acquired by each depth camera, and extracting only the color point cloud data whose coordinates along each axis fall within the detection channel (fence).
8. The method of the system for reconstructing a three-dimensional model of an animal body based on RGB-D cameras as set forth in claim 7, wherein: step S4 specifically includes the following steps:
Taking one side depth camera as the reference, the point clouds acquired by the other side depth camera and by the top depth camera are converted into the coordinate system of the reference camera using the rotation matrices R1 and R2 and the translation vectors T1 and T2, respectively, completing the initial registration of the point clouds:
P4 = R1 * P1 + T1,  P5 = R2 * P3 + T2,  P6 = P2 ∪ P4 ∪ P5    (8)
where P1 is the point cloud acquired by the other side depth camera and P4 is its transformed result, P3 is the point cloud acquired by the top depth camera and P5 is its transformed result, P2 is the point cloud acquired by the reference side depth camera, and P6 is the point cloud after fusion is completed.
9. The method of the system for reconstructing a three-dimensional model of an animal body based on RGB-D cameras as set forth in claim 8, wherein: step S5 specifically includes the following steps:
S50, calculating the point cloud density ρ of the fused point cloud P6:
dis(p, q) = sqrt((xp − xq)^2 + (yp − yq)^2 + (zp − zq)^2),  dp = min(dis(p, q))    (9)
ρ = (1/N) * Σ_{a=1..N} dp    (10)
where p(xp, yp, zp) is any point in the point cloud P6, q(xq, yq, zq) is any point in P6 other than p, dis(p, q) is the Euclidean distance between p and q, min(dis(p, q)) denotes taking the minimum of dis(p, q) over all q, dp is that minimum Euclidean distance for the point p, N is the number of points in the point cloud P6, and a is the count index of the summation;
S51, clustering the point cloud P6 with the point cloud density ρ as the threshold: for any point p in P6, a KD-tree neighborhood search is performed with p as the center and ρ as the search radius, and all points found within this radius are clustered into a set Q;
S52, repeating the operation of step S51 for the other points in the set Q, and adding the newly found points to Q, until no new points are added;
and S53, repeating steps S51 and S52 for the points not yet assigned to any set, obtaining the sets Q1, Q2, Q3, ....
10. The method of the system for reconstructing a three-dimensional model of an animal body based on RGB-D cameras as set forth in claim 9, wherein: in step S6, since the animal is the largest target in the point cloud P6 and the remaining noise point clouds are small targets, the set containing the largest number of points corresponds to the target animal, and that set is extracted as the target animal point cloud.
CN202111333549.9A 2021-11-11 2021-11-11 Animal three-dimensional model reconstruction system and method based on RGB-D camera Pending CN113989391A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111333549.9A CN113989391A (en) 2021-11-11 2021-11-11 Animal three-dimensional model reconstruction system and method based on RGB-D camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111333549.9A CN113989391A (en) 2021-11-11 2021-11-11 Animal three-dimensional model reconstruction system and method based on RGB-D camera

Publications (1)

Publication Number Publication Date
CN113989391A true CN113989391A (en) 2022-01-28

Family

ID=79747990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111333549.9A Pending CN113989391A (en) 2021-11-11 2021-11-11 Animal three-dimensional model reconstruction system and method based on RGB-D camera

Country Status (1)

Country Link
CN (1) CN113989391A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019100647A1 (en) * 2017-11-21 2019-05-31 江南大学 Rgb-d camera-based object symmetry axis detection method
CN110910454A (en) * 2019-10-11 2020-03-24 华南农业大学 Automatic calibration registration method of mobile livestock three-dimensional reconstruction equipment
CN113538666A (en) * 2021-07-22 2021-10-22 河北农业大学 Rapid reconstruction method for three-dimensional model of plant
CN113592862A (en) * 2021-09-27 2021-11-02 武汉科技大学 Point cloud data segmentation method, system, device and medium for steel plate surface defects

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071417A (en) * 2023-01-31 2023-05-05 河北农业大学 Sheep body ruler weight acquisition system and method based on Azure Kinect
CN116071417B (en) * 2023-01-31 2024-01-12 河北农业大学 Sheep body ruler weight acquisition system and method based on Azure Kinect


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220128