WO2016023075A1 - 3d imaging - Google Patents

3D imaging

Info

Publication number
WO2016023075A1
WO2016023075A1 (PCT Application No. PCT/AU2015/000490)
Authority
WO
WIPO (PCT)
Prior art keywords
animal
data
point
point cloud
camera
Application number
PCT/AU2015/000490
Other languages
French (fr)
Inventor
Alen ALEMPIJEVIC
Bradley SKINNER
Malcolm MCPHEE
Brad WALMSLEY
Original Assignee
Meat & Livestock Australia Limited
Department Of Primary Industries For And On Behalf Of The State Of New South Wales
University Of Technology Sydney
Priority claimed from AU2014903163A
Application filed by Meat & Livestock Australia Limited, Department Of Primary Industries For And On Behalf Of The State Of New South Wales and University Of Technology Sydney
Publication of WO2016023075A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K 29/00 Other apparatus for animal husbandry
    • A HUMAN NECESSITIES
    • A22 BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
    • A22B SLAUGHTERING
    • A22B 5/00 Accessories for use during or after slaughtering
    • A22B 5/0064 Accessories for use during or after slaughtering for classifying or grading carcasses; for measuring back fat
    • A22B 5/007 Non-invasive scanning of carcasses, e.g. using image recognition, tomography, X-rays, ultrasound
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30128 Food products

Definitions

  • Figure 2b shows an animal with average muscling.
  • the topline 212 is not as wide or as well rounded. Hip bones 218a 218b can be seen.
  • the stomach 220 is also visible.
  • the stance 216 is more narrow than the animal with good muscling.
  • Figure 2c shows an animal with poor muscling.
  • the topline 222 is narrow and the hip bones 228a 228b are prominent.
  • the animal tapers from the hip bones through the stifle region 224.
  • Stomach 230 is also clearly visible.
  • the stance is more narrow than both the animal with good muscling of Figure 2a and the animal with average muscling of Figure 2b.
  • the representations of Figure 2 show extreme examples of muscling.
  • Embodiments of the invention capture the curvature of the hindquarter region of the animal to estimate muscle score and fatness.
  • the shape of the hindquarter region is captured using RGB- D 3-D cameras.
  • the images from the cameras capture both hindquarters in order to gain sufficient information to model the regions.
  • An example of the position of the cameras is shown in Figures 3a and 3b.
  • Two 3-D cameras 310, 320 are positioned to capture images of the hindquarters of the animal.
  • More than one camera is used to capture a wider image of the animal to facilitate modelling over a wider area.
  • additional cameras are used to obtain additional image information of the relevant regions.
  • the camera images are combined to produce a single three dimensional representation of the hindquarters of the animal. Movement of the animal can result in changing curvature in the hindquarters due to muscle flexing and muscle, fat and hip movement.
  • the images from the different cameras must be time synchronised.
  • Cameras 310 320 operate at 30 Hz, taking 30 images per second, and are linked to clock 330.
  • Clock 330 is used to time stamp the images.
  • a 16.5 ms offset produces little error for the speed the animal is moving within the race.
  • cameras 310 320 are driven by a single trigger 340 to synchronise the exposures.
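  • A minimal sketch of how synchronised frames might be paired by timestamp is given below; the 16.5 ms tolerance corresponds to half the 33 ms frame period at 30 Hz. The function and variable names are illustrative and are not taken from the patent.

```python
from bisect import bisect_left

def pair_frames(times_a, times_b, tol_s=0.0165):
    """Pair each frame timestamp from camera A with the nearest frame from
    camera B, keeping only pairs closer than the tolerance (half a frame
    period at 30 Hz)."""
    pairs = []
    for i, t in enumerate(times_a):
        j = bisect_left(times_b, t)
        # candidate neighbours in camera B's sorted timestamp list
        candidates = [k for k in (j - 1, j) if 0 <= k < len(times_b)]
        if not candidates:
            continue
        k = min(candidates, key=lambda k: abs(times_b[k] - t))
        if abs(times_b[k] - t) <= tol_s:
            pairs.append((i, k))
    return pairs

# e.g. pair_frames([0.000, 0.033, 0.066], [0.010, 0.043, 0.080])
# -> [(0, 0), (1, 1), (2, 2)]
```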
  • Figure 4 is a photograph showing the cameras 410 420 430 and animal 440 positioned in a dedicated section of a cattle race 450 in an embodiment.
  • the cattle race is an apparatus used to direct cattle from one place to another and is useful as a location for imaging cattle since it constrains the animals' movements considerably enabling static images to be obtained.
  • RGB-D cameras 410 420 are positioned in elevated positions to the rear of the animal to provide images of the hindquarters.
  • RGB camera 430 is positioned to the side of the animal facing the stifle region to provide further image information of the left stifle region from a different viewpoint.
  • Elements of the cattle race 450 are positioned within the field of view of the RGB-D cameras.
  • Image data from RGB-D 3-D cameras provides the distance D of each pixel from the camera's reference point. Each three dimensional camera has a reference point from which all three dimensional coordinates of the image are mapped. Three dimensional coordinates of each pixel in the image with respect to the reference point associated with the camera can be computed.
  • the data output includes the x, y and z coordinates for each pixel in the reference plane, referred to as a collection of points or pointcloud.
  • Figure 5 shows RGB camera 510 with respect to animal 520. The positions of points 1 and 2 on the surface of the animal, represented by individual pixels in the image from camera 510, are provided in the reference plane of the camera. Point 1 is positioned at distance D1 from the camera and has reference coordinates x1, y1, z1 in the camera's reference plane.
  • Point 2 is positioned distance D2 from the camera and has coordinates x2, y2, z2 in the camera's reference plane. Point 1 and Point 2 are a subset of all points in the pointcloud. In a multi-camera system, equivalent pointcloud data is provided for each image from each camera in the reference plane of the respective camera.
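  • As a sketch of the mapping from depth pixels to 3D coordinates described above, a standard pinhole model can be assumed; fx, fy, cx and cy are assumed intrinsic calibration values and are not quantities given in the patent.

```python
import numpy as np

def depth_to_pointcloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an N x 3 array of XYZ
    points expressed in the camera's own reference frame, using a pinhole
    model.  Zero-depth pixels are discarded."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```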
  • the pointcloud data from each camera is translated into a common coordinate frame using a one off calibration process.
  • images should be selected with the same time stamp in order to provide a static pointcloud image of the animal at a particular point in time.
  • Figure 6 shows an apparatus 600 used for calibrating the cameras into a common reference plane.
  • Apparatus 600 has a vertical component 610 and a series of horizontal components 620, 630, 640, 650. Components 630, 640 and 650 are positioned perpendicular to the vertical component, and component 620 is perpendicular to the plane created by 610 and horizontal bars 630, 640 and 650.
  • the horizontal components are positioned at different heights along the vertical component.
  • the length of the vertical beam and the height of the horizontal components along the beam correlate to the height of cattle being imaged. In the embodiment of Figure 6, the length of vertical component 610 is 1.5 meters.
  • Beam 620 is oriented perpendicular to the plane formed by the other beams. Beam 630 is positioned at 1.3 meters along the vertical component, beam 640 is positioned at 1.1 meters along the vertical component and beam 650 is positioned at 0.9 meters along the vertical component. Further embodiments of the invention include different numbers of beams or beams at different heights.
  • Before taking images of the animal, calibration apparatus 600 is positioned in the field of view of the cameras in the vicinity in which the hindquarters of the animal will be positioned.
  • the cameras take images of the calibration apparatus.
  • the 3D representation of each camera is matched to a 3D model of the apparatus using a least squares optimisation. This generates a set of extrinsic parameters allowing the data from each 3D camera to be fused into a common reference frame; the parameters are stored and used for point cloud construction.
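  • The patent does not name a specific solver; one common least-squares approach for this kind of rigid alignment is the Kabsch/SVD method over corresponding points (for example beam end-points of apparatus 600 identified in both the camera data and the 3D model). The sketch below is illustrative only.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) aligning src points to dst
    points (both N x 3, in correspondence), via the Kabsch/SVD method."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def to_common_frame(points, R, t):
    """Apply stored extrinsics to fuse one camera's point cloud into the
    common reference frame."""
    return points @ R.T + t
```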
  • a three dimensional point cloud representation of the relevant areas of the animal is created.
  • the point cloud can then be manipulated and viewed from different viewpoints to assess the curvature of the relevant regions.
  • curvature is modelled to estimate the fatness and muscling of the animal.
  • The process for modelling the curvature is now discussed with reference to Figures 8 and 9.
  • images of the animal are taken using 3-D cameras.
  • the images include depth data identifying distance of each point in the image from the reference point of the camera. Images are selected from each camera with the same time stamp i.e. within 16.5ms.
  • Figures 9A and 9B show synchronised images of a steer positioned in a race.
  • Figure 9A shows the right side hindquarter from a rear elevated view.
  • Figure 9B shows a left side hindquarter from a rear elevated view.
  • the camera data is calibrated at 804 using a common reference point as discussed above with respect to Figure 6.
  • the raw point cloud data from each image is combined at 810 using the set of extrinsic parameters.
  • the system is programmed with validation criteria for the combined point cloud; the rotational properties of the image are also analysed during the validation stage to confirm the overall shape of the point cloud.
  • the data validation process adds reliability to the data and improves the robustness of the system.
  • the point cloud data for analysis is selected at 814 816 818.
  • Point cloud data not relevant to the analysis of curvature of the hindquarters of the animal is removed.
  • Outlier removal filters remove any points on the point cloud not complying with the general curvature of the surface.
  • any points representing non-hindquarter parts of the animal are removed, for example the tail of the animal.
  • the system is programmed to identify points not complying with the general curvature of the point cloud. Additionally, external elements having defined shapes are identified and removed, for example the bars of the race. Some embodiments include specific software programmes able to detect shapes not attributed to the hindquarters, for example the shape of the tail or the shape of metal cylindrical bars forming the race. Any point cloud data points not associated with the hindquarters are removed.
  • After removal of the outliers the point cloud is filtered to define a specific area of the animal.
  • Embodiments use a bounding box filter technique in which a specific length, height and width of the point cloud is selected for analysis. By reducing the data for analysis, areas of the animal which provide little information for estimating the traits are excluded.
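  • A minimal sketch of the outlier-removal and bounding-box filtering steps is given below, using a generic statistical outlier filter and an axis-aligned crop; the neighbour count, threshold and box extents are assumptions, not values from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(pts, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than std_ratio standard deviations above the cloud average (a standard
    statistical outlier-removal filter)."""
    dists, _ = cKDTree(pts).query(pts, k=k + 1)   # column 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return pts[keep]

def crop_box(pts, lo, hi):
    """Keep only points inside an axis-aligned bounding box defined by the
    corner vectors lo and hi (the selected length, height and width)."""
    mask = np.all((pts >= lo) & (pts <= hi), axis=1)
    return pts[mask]
```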
  • Figures 9D and 9E show the development of the point cloud during down sampling and removing the outlier points.
  • the point cloud is smoothed and filled at 820 822. If the point cloud includes any holes, for example if light conditions prevented data from being captured in certain areas, these are filled by fitting a higher order function across the point cloud in the area of the hole. Typically, the holes in the point cloud are small, otherwise the point could have been rejected at 812 during the validation stage.
  • the uniform sampling filter at 822 adjusts the pointcloud data by resampling the areas where points have been added uniformly for added robustness.
  • the point cloud may be segmented into different sections for separate analysis. For example, the point cloud may be cut in half or quarters.
  • a viewpoint in space is selected at which the point cloud is viewed for measuring curvature.
  • the 3D (XYZ) viewpoint coordinates are aligned to either the common reference frame or the computed first order moment of the point cloud. Selection of view point is arbitrary. However, in preferred embodiments a central position creating a symmetrical view of the hindquarters is selected. In embodiments, if the common reference frame is utilised as the global viewpoint reference, then the viewpoint is established at the origin coordinates (0,0,0) . This represents a position underneath the hindquarters and is associated with looking up towards the inside of the top of the hindquarters. Otherwise, the 3D (XYZ) first order moment of the point cloud is computed and then used as the viewpoint reference. The first order moment is determined as the mean position of all the points in the cloud in all of the coordinate directions (XYZ) .
  • the fused point cloud representation is transformed into a compact signature derived from the statistical combination of surface curvature for all the point cloud data
  • a first contribution of the curvature signature is created by casting a ray from the viewpoint V to the centroid of the point cloud C forming a directional vector uc as shown in Figure 10a.
  • Figure 10 shows a number of points Pi on the point cloud. For every data point of the point cloud an angle is computed between the directional ray uc and the surface normal at the data point.
  • Figure 10a shows the directional angle at two points on the point cloud: at point Pi the directional angle is measured between uc and the surface normal at Pi, and at point Pi+1 the directional angle is measured between uc and the surface normal at Pi+1.
  • the directional angles for all points on the point cloud are calculated and stored. In further embodiments, directional angles for a selection of points are calculated.
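  • A sketch of the directional-angle computation is given below; it assumes surface normals have already been estimated and that the viewpoint is either the common-frame origin or the first-order moment as described above. Names are illustrative.

```python
import numpy as np

def directional_angles(points, normals, viewpoint):
    """First signature component: the angle at every data point between the
    ray cast from the viewpoint V to the cloud centroid C (direction u_c)
    and the surface normal at that point."""
    centroid = points.mean(axis=0)                 # first-order moment of the cloud
    u_c = centroid - np.asarray(viewpoint, float)
    u_c /= np.linalg.norm(u_c)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cosang = np.clip(n @ u_c, -1.0, 1.0)
    return np.arccos(cosang)                       # radians, one angle per point
```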
  • a second component for the curvature signature is created from pairs of neighbouring points.
  • Figure 10b shows two neighbouring points Pi and Pj from the point cloud data. Point Pi and Point Pj are connected by a ray of length d. A reference frame is created at point Pi. Axis u is the surface normal ni at Pi. Axis v is perpendicular to both axis u and the ray connecting point Pi and point Pj. Axis w is the remaining orthogonal axis. Orthogonal axes u, v and w form a Darboux frame at Pi. This Darboux frame is then used with respect to point Pj. The surface normal at point Pj is represented as nj.
  • The Darboux frame constructed at Pi with axes u, v and w is projected onto Pj.
  • the angle between axis u and the projection of surface normal nj onto the plane created by axes u and w is θ; the angle between surface normal nj and axis v is α; and the angle between the ray and axis w is φ.
  • the distances and angles are computed for every point in the point cloud with respect to the surface normal and axes u, v and w at each point.
  • Figure 9H shows the surface angles on the point cloud from a selected viewpoint .
  • a Darboux frame is created at every point on the point cloud with respect to a number of points Pj .
  • In embodiments points Pj are the direct neighbouring points of Pi; in further embodiments points Pj may be all points within a predefined area, or all points used for the calculation of the surface normal at Pi. It will be clear that further selection methods for points Pj could be used.
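  • The pairwise construction above closely resembles Point Feature Histogram style descriptors; a sketch following the text's definitions of u, v, w and the angles α, φ and θ is given below. The sign conventions and the use of arctan2 for θ are assumptions.

```python
import numpy as np

def pair_features(p_i, n_i, p_j, n_j):
    """Darboux-frame features for one point pair: u is the surface normal at
    Pi, v is perpendicular to u and to the ray Pi->Pj, w completes the
    orthogonal frame.  Returns the distance d and the angles (alpha, phi,
    theta).  Assumes the ray is not parallel to the normal at Pi."""
    ray = p_j - p_i
    d = np.linalg.norm(ray)
    ray_hat = ray / d
    u = n_i / np.linalg.norm(n_i)
    v = np.cross(u, ray_hat)
    v /= np.linalg.norm(v)
    w = np.cross(u, v)
    n_j = n_j / np.linalg.norm(n_j)
    alpha = np.arccos(np.clip(v @ n_j, -1.0, 1.0))    # n_j versus axis v
    phi = np.arccos(np.clip(w @ ray_hat, -1.0, 1.0))  # ray versus axis w
    theta = np.arctan2(w @ n_j, u @ n_j)              # n_j projected onto the u-w plane, measured from u
    return d, alpha, phi, theta
```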
  • histograms from multiple images of the same animal are statistically combined to generate a single superimposed histogram.
  • the final feature vector is created from the superimposed histogram at 834 using the mean values, but could be generated using other mathematical formulae.
  • the histogram embodies a statistical base representation of surface curvature and represents the digital signature for a particular animal. Specific angle ranges are included in the histogram to represent the digital signature.
  • different angular ranges could be selected to provide the most distinctive signature for the animals at 838.
  • the example histogram representation contains 45 bins for each of the angles α, φ and θ and the distance d, followed by 81 bins for the directional component. This results in a total of 216 bins; in further embodiments a different number and distribution of bins may be selected.
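  • One reading of the binning is three 45-bin histograms for α, φ and θ plus an 81-bin histogram for the directional angle, which gives the 216 bins quoted (the distance d could be binned in the same way). The sketch below follows that reading; bin ranges and normalisation are assumptions.

```python
import numpy as np

def curvature_signature(alpha, phi, theta, dir_angle, n_angle_bins=45, n_dir_bins=81):
    """Build the histogram-based digital signature: 45 bins for each of the
    pairwise angles alpha, phi, theta plus 81 bins for the directional
    angle, concatenated after normalisation (3 * 45 + 81 = 216 values)."""
    parts = []
    for vals, rng in ((alpha, (0, np.pi)),
                      (phi, (0, np.pi)),
                      (theta, (-np.pi, np.pi))):
        h, _ = np.histogram(vals, bins=n_angle_bins, range=rng)
        parts.append(h / max(h.sum(), 1))
    h, _ = np.histogram(dir_angle, bins=n_dir_bins, range=(0, np.pi))
    parts.append(h / max(h.sum(), 1))
    return np.concatenate(parts)

def combine_signatures(signatures):
    """Statistically combine signatures from multiple synchronised images of
    the same animal (here the mean, as in the text) into the final feature
    vector."""
    return np.mean(np.stack(signatures), axis=0)
```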
  • a manual (i.e. subjective) trait measurement is taken at 836.
  • the trait can be muscle score, for which a visual assessment is made, or P8 / rib fat depth measurement which is obtained using an ultrasound scanner. Further traits including animal condition score, muscle, fat and yield can also be used.
  • the traits are logged against the digital signature.
  • the digital signature is also referred to as a feature vector in the machine learning / pattern recognition community.
  • the values of the trait become the class labels for each digital signature at 840.
  • Machine learning algorithms such as libSVM (support vector machine) are employed to determine a non-linear mapping between the signature and the class label.
  • the machine learning approach attempts to model the non-linear mapping between the high dimensional feature vectors and the trait.
  • a model reflecting the non-linear mapping between the statistical base surface curvature and the provided class label is established at 842. Once established, the model can be used to produce the appropriate classification and provide an estimate of uncertainty of this classification from the signature of any animal.
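  • A sketch of this training step using scikit-learn's SVC, which wraps libSVM, is given below; the file names and the RBF kernel choice are assumptions, and the probability outputs stand in for the uncertainty estimate mentioned above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one 216-bin digital signature per animal; y: the manually scored trait
# (e.g. muscle score class) used as the class label.  File names are
# hypothetical placeholders.
X = np.load("signatures.npy")
y = np.load("muscle_scores.npy")

# SVC is scikit-learn's wrapper around libSVM; an RBF kernel gives a
# non-linear mapping between signature and class label, and probability=True
# provides an uncertainty estimate for each prediction.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X, y)

new_signature = np.load("new_animal_signature.npy")   # hypothetical
print(model.predict(new_signature.reshape(1, -1)))
print(model.predict_proba(new_signature.reshape(1, -1)))
```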
  • the sensor models are based exclusively on the feature vector and the traits P8 fat, rib fat or muscle score measurements for each animal in the data set.
  • Gaussian Process Regression (GPR) may alternatively be employed to learn this mapping.
  • Figure 12 shows a flow chart for classifying an animal from its signature. At 1210 the point cloud for the animal is generated.
  • the digital signature for the animal is created at 1220 using the method described above.
  • the model is used to classify the signature in order to gain a prediction of the physical attributes of the animal at 1230.
  • P8 fat, rib fat or categorical muscle scores can be calculated from any point cloud data using the digital signatures from the point cloud in order to determine the traits of the animal.
  • Embodiments of the invention compute and model the regions of an animal which have been deemed relevant and relate to reference points used in traditional fat and muscling assessment for cows and steers.
  • the use of machine learning enables assessment of P8 fat, rib fat and muscle scores to be determined from RGB-D camera images of a cow or steer.
  • Embodiments enable estimates of P8 fat, rib fat and muscle score using data gathered from cattle using a pair of RGB-D cameras, without the need for ultra-sonic measurements or trained assessors. Data obtained using cows show a 77% correct classification for P8 fat score and 88% correct classification for muscle score. In the case of steers, 86% classification accuracy for P8 fat and 83% classification accuracy for muscle score were obtained. "Correct classification" in this context is defined as agreement with the data used to create the trait labels at 836.
  • Cameras 702 704 706 capture images of the relevant parts of the animals. As discussed above these may be positioned within a race or other structural area to confine an animal.
  • the cameras are connected to clock 708 to timestamp the images. In embodiments the clock is within the computer. Images from the cameras are stored in memory 720. Further details about the images including the animal reference, the date, the location and any other relevant information is stored with the images in memory 720.
  • Processor 730 is connected to memory 720.
  • the processor includes an image processor to interpret image data.
  • the processor includes a point cloud processor to generate the point cloud from the camera data.
  • the processor is able to combine camera data and includes modules for validating and filtering point cloud data.
  • Input 710 is connected to processor 730 to allow manual or automated input to select processing parameters and modules, including filtering parameters and validation parameters.
  • Processor 730 includes a calculation processor for generating the digital signatures.
  • Memory 720 includes point cloud storage for storing point cloud data, digital signature storage for storing digital signatures.
  • Input 710 is used to reference data stored in memory 720.
  • Processor 730 includes training processor to generate digital signature models.
  • Display 740 can be connected to each input 702 704 706 708 710 as well as the memory components and processor 730.
  • embodiments of the invention can be used to model many visual traits of animals. Although the description above focuses on hindquarter imaging of steers to determine P8 fat measurements and muscle score it will be clear that the process can be used to image and model different animals and different parts of animals to determine different traits.
  • Body conditioning score is a further trait which is used to identify the condition of an animal.
  • the conditioning score includes an aspect of muscling and fat on the animal and can provide an indication of whether animals are below normal nutritional conditions.
  • machine learning algorithms that can be used to model conditioning scores include the BayesNet (BN) algorithm (Cooper and Herskovits 1992), a probabilistic machine learning algorithm based on Bayes' theorem, among others.
  • Weight can also be measured.
  • Such measurements can be used in isolation to classify an animal or can be used in combination with curvature information to classify physical traits of an animal. Such measurements can be used directly or indirectly as a predictor of body composition, lean meat yield and related traits.
  • embodiments of the invention can be used to model traits of live animals and carcasses.
  • Carcass Trait Estimation. In a second embodiment the system is configured to model carcasses to estimate carcass traits.
  • the curvature signatures for hung carcasses are different from those of live animals due to the different positioning of the body.
  • equivalent methods can be used to capture image data, create three-dimensional pointcloud representations and use training methods to create models which predict physical traits of the carcass.
  • a different data capture technique is used to capture images of the hanging carcasses compared with that used to image live animals moving through a race.
  • Figure 13 shows an example of a frame to support an automated scanner arrangement configured to capture image data to produce a three dimensional image of a carcass.
  • Figure 16 shows a top view of the alignment of the frame with a processing chain of an abattoir carrying carcasses.
  • Frame 1300 is arranged to receive carcasses moving along a processing chain track within an abattoir.
  • the frame is arranged to rotate about central pivot point 1310.
  • Three legs 1320 1322 1324 extend generally horizontally from central pivot point 1310. The legs extend from the pivot point at generally similar angular spacings .
  • Camera posts 1330 1332 1334 extend generally vertically from each leg.
  • the top portion of each camera post is attached to a part circular portion 1340.
  • the part circular portion includes a cutaway portion extending between camera posts 1330 and 1332. The cutaway section provides an opening through which the carcass on the processing chain can pass into and out of the frame.
  • Abattoirs typically include elevated processing chains. Carcasses are hung from the processing chains and can be moved around the abattoir. The position of the frame is aligned with the processing chain so that carcasses pass into the imaging cavity.
  • Figure 16 illustrates the interaction of a frame 1600 and a carcass 1610 on a processing chain 1620.
  • a camera is attached to each camera post 1330 1332 1334 via sliding attachment mechanisms 1350 1352 1354.
  • the cameras are arranged to move up and down on the camera posts. Preferably the cameras can move throughout the full length of the camera posts. Multiple cameras are used to capture data to image the inner and outer surfaces of the carcass from different camera positions.
  • Structured Light RGBD Cameras can be used to capture data within frame 1300.
  • a camera is attached to each sliding camera mount 1350 1352 1354.
  • the cameras emit near IR at a wavelength of 828.6 nm and are a class 1M laser product because of the high divergence of the light. The illumination at 100 mm distance into a 7 mm pinhole does not exceed 0.71 mW and is completely eye-safe.
  • Operating temperature range is 4-40 °C; power is delivered via USB with a maximum power consumption of 2.25 W.
  • the sensor has a minimum sensing range of 20cm and therefore requires some clearance from the carcass within the constraints of the abattoir processing system (ie: proximity of walls/other carcasses in the chillers). Other cameras can be used to capture the data.
  • Figure 14 shows the steps taken to classify a carcass using the system of Figures 13 and 16.
  • the cameras take images of the carcass at 1410.
  • the cameras are initially positioned at the bottom of the camera posts and move upwards on the posts continually scanning the carcass.
  • Pointcloud data is extracted from the RGBD data at 1420 and features are extracted from the RGB data at 1430.
  • the three dimensional carcass representation is built at 1440.
  • the localisation of the cameras is performed using the steps discussed above with respect to Figure 8.
  • the point cloud data is downsampled at 1442 and outlier removal filters are applied at 1444.
  • Factors affecting the motion of the carcass include the speed the carcass was moving on the chain, the weight of the carcass and the weight distribution of the carcass. This carcass motion is accounted for during point cloud construction.
  • the system also accounts for the motion of the cameras on the posts when generating the point cloud.
  • the system includes motion models to account for these motions.
  • the system uses simultaneous localisation and mapping of the carcass relative to three cameras to generate the three dimensional model.
  • the scanning system is devised to be able to obtain a full scan of the carcass by moving the cameras along the camera posts. A three dimensional representation of the carcass is built through the application of simultaneous localization and mapping (SLAM) via 3 cameras using the texture/shape of the carcass, and two motion models: a pendulum model of the carcass motion and a model of the camera motion along the rail.
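  • The patent names these motion models but gives no equations; a minimal, assumed sketch of a pendulum prior for carcass sway and a constant-velocity prior for camera travel along the post might look like this (pendulum length, g and time step are illustrative parameters).

```python
import numpy as np

def pendulum_predict(theta, omega, dt, length_m=1.2, g=9.81):
    """One Euler prediction step of a simple pendulum used as a motion prior
    for the swaying carcass: theta is the sway angle from vertical, omega
    its angular rate.  The hook is the pivot; length_m is an assumed
    effective pendulum length."""
    alpha = -(g / length_m) * np.sin(theta)   # angular acceleration
    return theta + omega * dt, omega + alpha * dt

def rail_camera_predict(z, rail_speed_mps, dt):
    """Constant-velocity prediction of the camera height along the post."""
    return z + rail_speed_mps * dt
```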
  • the carcass position relative to the current views of the three cameras is continuously estimated as the cameras scan from the bottom of the frame to the top; this estimation is performed for each of the three cameras at each time step t.
  • Typical scanning time for a sheep carcass, with the 3 cameras scanning from top to bottom, is 80 seconds. Data was continuously streamed from the RGBD cameras and simultaneously the 3D model was captured in real-time. Following the scanning, the system rotated 60 degrees to allow the carcass to exit along the chain and then reverted to the original configuration (taking 18 seconds). The total time to scan the carcass was 98 seconds; the time limitation is imposed by the current servo configuration. With a redesign the total time could be reduced to 10 seconds.
  • the processing pipeline from RGBD data acquisition through to the estimation of lean meat shown in Figure 14 starts with acquisition of RGBD data from the hand held RGBD sensor and assembling a 3D model with colour already described in the capturing data section. Sufficient coverage of the entire carcass is completed.
  • the next step of this processing pipeline is transforming the information from all carcasses into a common coordinate frame tied to the position of the hook suspending the body on the processing chain. This step allows consistency in analysis over future parts of the processing pipeline.
  • Another approach which can be used to collect the 3D point cloud data is to manually scan the carcass. An operator moves the camera across the surface of the carcass to expose each area of the carcass to the camera with a small overlap. The scans can be performed in a sweeping zig-zag fashion from top to bottom (in a similar pattern to spray painting a contoured surface) .
  • the spatial resolution of the 3D data was set to a hard limit of 5mm.
  • the 3D data was married with RGB colour, resulting in surfaces such as those in Figure 15 (left (a): external with colour, centre (b): external with only 3D data, right (c): internal with colour).
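  • A sketch of enforcing the 5 mm spatial-resolution limit with a simple voxel-grid average is given below (colour channels could be averaged per voxel in the same way); implementation details are assumptions.

```python
import numpy as np

def voxel_downsample(pts, voxel=0.005):
    """Enforce a hard spatial-resolution limit (here 5 mm) by keeping one
    averaged point per occupied voxel of a regular grid."""
    keys = np.floor(pts / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((inverse.max() + 1, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=pts[:, dim]) / counts
    return out
```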
  • a method of extracting smaller volumes of interest across all carcasses may be conducted, to effectively separate regions containing several muscle groups.
  • Embodiments use an automated tool, based on carcass dimensions or identified muscle groups or regions, to assist in segmenting data to identify regions of interest for data analysis.
  • the extracted 3D volume is then transformed into a compact signature (feature extraction) at 1468.
  • Embodiments have evaluated a number of approaches that examined colour (normalised RGB and HSV colour space) , surface length / volumetric information, surface curvatures and combination thereof.
  • the feature extraction and reduction steps produce a compact and information-rich representation of the raw point cloud data collected for each carcass.
  • in the data set the Lean of the left side of the carcass was 82.25 kg and 104.29 kg for live weights of 544 kg and 740 kg respectively. Therefore a method to add the live weight into the feature vector in addition to surface curvatures was devised in the feature reduction step.
  • Figure 15 shows three dimensional images created for carcasses .
  • the production of a feature vector at 1470 is the final stage of taking the transformed point cloud data and arranging it into a form that is amenable to the machine learning environment.
  • the feature vector is then used as input for training a machine learning algorithm at 1474 known as Gaussian Process Regression, which is a state of the art supervised learning method.
  • the training/testing approach proposed involves supervised learning, which infers a function from the "labelled" instances (i.e., observed values) of Lean in the feature vector.
  • the input to the machine learning scheme is expressed as a table of independent instances of the point cloud representation of each animal (the concept) to be learned.
  • a non-linear mapping between the statistical-based surface curvature signature/weight and the provided class label Lean is learnt at 1476.
  • predictions can then be made on the Lean (kg) of those animals present in the test set.
  • the 3D point cloud data and weight (i.e., the feature vector) and the observed values of Lean (kg) are used to construct a sensor model. Once built, the sensor model can be used to produce the appropriate classification or regression on the presentation of an instance vector gathered from a new animal.
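  • A sketch of the sensor-model construction with scikit-learn's Gaussian Process Regression is shown below; the arrays signatures, weights, lean_kg and X_new are assumed placeholders, and the kernel choice is illustrative rather than specified by the patent.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Feature vector: reduced curvature signature with the weight (live or hot
# carcass weight) appended; target: observed Lean (kg) of the left side.
# signatures, weights and lean_kg are assumed, pre-loaded arrays.
X = np.hstack([signatures, weights.reshape(-1, 1)])
y = lean_kg

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X, y)

# GPR returns both a prediction and its standard deviation, i.e. an
# uncertainty estimate for each new carcass.  X_new: feature vectors for
# new carcasses (assumed).
mean, std = gpr.predict(X_new, return_std=True)
```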
  • Step 1: Acquire 3D point cloud data in the abattoir, weight (live or hot carcass weight) and Muscle Score;
  • Step 2: Extract a representative volume from the point cloud data;
  • Step 3: Reduce the high dimensionality of the point cloud data by extracting features from the input signals to produce a compact and representative feature vector (1450 to 1468);
  • Step 3a: Perform global optimisation of the feature-vector signatures using the parallel genetic algorithm with respect to Muscle Score to reduce the feature vector dimensionality;
  • Step 4: Train a sensor model based exclusively on the feature vector and weight (Live Weight or Hot Carcass Weight) with respect to Lean [kg] for each animal in the data set (1470 to 1476);
  • Step 5: The learned models can then be used to infer measured Lean [kg] from new point cloud data and weight (Live Weight or Hot Carcass Weight) without the need for any input from trained assessors;
  • A 50×10-fold randomised cross validation of the Gaussian Process learning scheme can be used. That is, in each fold 90% of the data was provided to train the model and 10% was used as a challenge (test) set.
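  • A sketch of the 50×10-fold randomised cross validation using scikit-learn is shown below, reusing the X and y arrays assumed in the previous sketch; the RMSE scoring choice is an assumption.

```python
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.gaussian_process import GaussianProcessRegressor

# 50 repeats of randomised 10-fold cross validation: each fold trains on
# 90% of the animals and tests on the remaining 10%.
cv = RepeatedKFold(n_splits=10, n_repeats=50, random_state=0)
scores = cross_val_score(GaussianProcessRegressor(normalize_y=True), X, y,
                         cv=cv, scoring="neg_root_mean_squared_error")
print("RMSE: %.2f kg +/- %.2f" % (-scores.mean(), scores.std()))
```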
  • embodiments of the invention provide an efficient, fast and low cost quantitative live animal and carcass grading tool.
  • the scanning process can be fully automated.
  • Some advantages provided by embodiments of the invention include: the operator does not have to touch the animal; the system is easy to set up and maintain; grading data can be obtained and assessments performed within a few seconds; and accurate measurements of phenotype can be obtained.
  • results are repeatable, consistent and reliable .
  • Embodiments of the invention can be used with BeefSpecs, as well as many other decision making tools including market specs for meeting live export specs, condition scoring (or similar) as a management tool, Feedlot management, pregnancy, seed damage (hides) , heat stress, illness etc.
  • Embodiments can be used to estimate aspects of the body of an animal for any endpoint including sale for finishing, auctions, sale for slaughter, change in fatness, change in muscling on feed, growth, stage of pregnancy, milk, structural correctness in the seedstock industry, and Sale by Description of store stock.
  • Embodiments of the system can predict characteristics of many species including all ruminant species used for agricultural production, dairy cows, horses (work, racing, wild and recreational), pigs, goats, deer, birds, companion animals bred and shown for body composition (e.g. dogs), wildlife management (koala and kangaroo condition scoring or health, wild remote animals, etc.), camels and aquaculture (fish dimensions).

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Environmental Sciences (AREA)
  • Animal Husbandry (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Food Science & Technology (AREA)
  • Biophysics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An apparatus for estimating the physical traits of an animal comprising means for generating a digital representation of the curvature of a surface area of an animal and means for estimating the physical traits of the animal in dependence on the digital representation.

Description

3D Imaging
The present invention relates to a method and apparatus for 3 dimensional (3D) imaging and, in particular, to imaging an animal to estimate its physical conformational traits .
Background
The important characteristics of an animal destined for consumption from a trading value perspective are size and body composition. Body composition is assessed by the fat and muscle content of the animal. Wastage (i.e. high levels of fat) is an industry issue that incurs huge penalties and associated losses for both producers and processors. Alternatively, low levels of fat can impact visual appeal and taste to consumers. Muscle is the most valuable part of an animal. Retail beef yield is often assessed (estimated) using muscle score (muscularity) which describes the shape of the animal using particular regions of its body as reference points, including the stifle, rump and loin. Fatness can be assessed, in terms of fat depth (mm) , at several positions on the animal including the P8 and 12/13th rib sites. Both muscle score and fatness are recognised as industry wide assessment criteria for animals. Accurate estimates of fatness and muscle score (additional to liveweight) are important indicators during the
lifetime of an animal, in order to determine feeding and marketing decisions, and for the estimation of retail beef yield . Several methods are currently used to determine muscle score and fatness for animals. Muscle scoring requires manual experience and skills that are hard to acquire. Also, estimating fat thickness, either by hand or using ultrasound equipment requires considerable experience, with accurate operators not always readily available. All methods are subject to error due to equipment or operator. A further important trait of the animal to be measured is that of hip height to gain information about frame size which indicates the maturity type.
Summary of the Invention
In a first aspect the invention provides a method for estimating the physical traits of an animal comprising the steps of generating a digital representation of the curvature of a surface area of an animal and estimating the physical traits of the animal in dependence on the digital representation. In an embodiment the digital representation is a digital signature .
Embodiments comprise the further steps of generating a digital image of the animal, the digital image comprising a point cloud representation of the surface area of the animal, the point cloud including multiple data points; and creating the digital signature from the digital image of the animal.
A range of image technologies can be used to generate the digital image, including 3D LIDAR Scanners, Synthetic- aperture radar (SAR) , time-of-flight camera (ie Microsoft Kinect v2), stereo camera, structured-light camera
(Microsoft Kinect v1). The described examples have utilised a structured-light camera (RGB-D camera) from Microsoft for a range of reasons including low cost, robustness, open source support and ready availability on the market at the time.
In an embodiment the digital image is created from at least one camera image of the surface area of the animal, the camera image including depth information.
In an embodiment the surface area is predefined.
Embodiments comprise the further steps of selecting a reference point with respect to the point cloud;
generating a point representing the point cloud; for at least one data point on the point cloud generating a surface normal; calculating the angle of a ray cast between the point and the reference point with respect to the surface normal; and, generating a digital signature, the digital signature including a component corresponding to the angle.
Embodiments comprise the further step of creating a frame at the at least one data point, for example the Darboux frame, which has three orthogonal axes, wherein one of the three orthogonal axes is the surface normal at the at least one data point, a second of the three orthogonal axes being orthogonal to the surface normal and orthogonal to a ray cast between the at least one data point and a second data point. It will be obvious to those skilled in the art that a range of other suitable frames exist and could be utilized in this invention.
Embodiments comprise the further steps of at the second point using the Darboux frame to calculate the angle between at least one of the axes and the surface normal at the second point, wherein the digital signature includes a component corresponding to the angle.
Embodiments comprise the further steps of calculating the distance between the at least one data point and the second point; and, generating a digital signature, the digital signature including a component corresponding to the distance.
In embodiments the reference point is on a central axis of the point cloud.
In embodiments the traits are at least one of muscle score, P8 fat depth and hip height.
Embodiments include the further steps of capturing at least one image of the surface area of the animal, generating a point cloud representation of the image; and calculating the surface curvature from the point cloud representation; and, generating the digital signature from the surface curvature. Embodiments comprise the further steps of generating a reference frame for at least one data point in the point cloud; generating a surface normal for at least one data point in the point cloud; and generating the signature in dependence on the angle between the reference frame and the surface normal.
Embodiments comprise the further steps of calculating the distance between the at least one data point and at least a second data point and generating the signature in dependence on the distance. Embodiments comprise the further steps of filtering the data points in the point cloud.
Embodiments comprise the further steps of removing any outlying data points.
In a second aspect the invention provides a digital signature representing the curvature of an animal, comprising at least one of a first component representing the angle of the surface of the animal at at least one point within a selected surface area of the animal with respect to a first reference point; and, a second component representing the distance between at least one point on the surface of the animal and a second point on the surface of the animal. In embodiments the first and second reference points are the same reference point.
In a third aspect the invention provides an apparatus for estimating the physical traits of an animal comprising: means for generating a digital representation of the curvature of a surface area of an animal; and, means for estimating the physical traits of the animal in dependence on the digital representation.
In embodiments the digital representation is a digital signature.
In embodiments the digital signature is created from a digital image of the animal, the digital image comprising a point cloud representation of the predefined surface area of the animal, the point cloud including multiple data points.
Embodiments comprise at least one camera, the camera creating data representing the surface area of the animal, the data including depth data; and, means for converting the camera data into a point cloud representation of the animal .
In embodiments the surface area is predefined.
Embodiments comprise means for selecting a reference point with respect to the point cloud; means for generating a point representing the point cloud; means for generating a surface normal for at least one data point in the point cloud; means for calculating the angle of a ray cast between the point and the reference point with respect to the surface normal; and, means for generating a digital signature, the digital signature including a component corresponding to the angle.
Embodiments comprise means for creating a Darboux frame at at least one data point, the Darboux frame having three orthogonal axes, wherein one of the three orthogonal axes is the surface normal at the at least one data point, a second of the three orthogonal axes being orthogonal to the surface normal and orthogonal to a ray cast between the at least one data point and a second data point.
Embodiments comprise means for at the second point using the Darboux frame to calculate the angle between at least one of the axes and the surface normal at the second point, wherein the digital signature includes a component corresponding to the angle.
Embodiments comprise means for calculating the distance between the at least one data point and the second point; and, means for generating a digital signature, the digital signature including a component corresponding to the distance.
In embodiments the reference point is positioned on a central axis of the point cloud.
In embodiments the traits are at least one of muscle score, P8 fat depth and hip height. Embodiments comprise a camera for capturing at least one image of the selected region of the animal, means for generating a point cloud representation of the image; means for calculating the surface curvature from the point cloud representation; and means for generating the digital signature from the surface curvature.
Embodiments comprise means for generating a reference frame for at least one data point in the point cloud; means for generating a surface normal for at least one data point in the point cloud; and means for calculating the signature in dependence on the angle between the reference frame and the surface normal. Embodiments comprise means for calculating the distance between the at least one data point and at least a second data point and generating the signature in dependence on the distance. Embodiments comprise means for filtering the data points in the point cloud.
In embodiments filtering includes removing any outlying data points.
In embodiments the surface area is the hindquarters. In a fourth aspect the invention provides an apparatus for imaging an animal comprising: a rotatable frame; the frame comprising an imaging cavity suitable for locating an animal for imaging; multiple camera control mechanisms, the camera control mechanisms suitable for changing the location of a camera within the frame; the frame being rotatable between a first position for receiving an animal and a second position for exiting an animal.
In a fifth aspect the invention provides a method for imaging an animal comprising the steps of: positioning an imaging apparatus to receive an animal; initiating a scanning controller to control multiple camera control mechanisms to capture data relating to the animal;
detecting that the data capture is complete; and
positioning the imaging apparatus to exit the animal. Brief Description of Figures
Figure 1 shows reference points used in fat assessment for cattle.
Figure 2 shows the three shape categories for muscling in cattle.
Figure 3 shows the camera positioning for imaging.
Figure 4 shows the camera positioning for imaging.
Figure 5 shows the coordinate system for a 3-D camera.
Figure 6 shows the calibration device.
Figure 7 is a block diagram showing hardware in the system.
Figure 8 shows a process for generating a classification algorithm for an animal.
Figure 9 shows the construction of a point cloud from RGBD data.
Figure 10 shows the generation of components of the signature.
Figure 11 shows a digital signature for cattle.
Figure 12 shows steps for classifying an animal.
Figure 13 shows a frame for scanning an animal.
Figure 14 shows a process for generating a classification algorithm for an animal.
Figure 15 shows a sample 3-D model of a carcass.
Figure 16 illustrates the interaction of the frame and the animal for scanning.
Detailed description
In embodiments of the invention, 3-D camera images are used to produce estimates of muscle score, fatness and frame score of cattle. The surface curvature of specific areas of the animal is analysed and characterised and the characteristics are mapped against models of traits to estimate muscling and fatness.
The areas analysed for fat and muscling assessments in embodiments of the invention are shown in figures 1 and 2. Generally, for both fat and muscling assessments, the more relevant areas are toward the rear of the animal. Fat and muscling properties of the animal in the hind region are widely regarded as representative of the fat and muscling traits across the whole animal and so this region is generally used for manual scoring.
For fatness, the depth of fat at the P8 rump site 102 is the national standard in Australia for assessing fatness of cattle. The depth of fat across the ribs of the animal is also used for assessing fatness. Rib fatness is measured at the 12/13th rib site 104. Other locations such as the short loin area 106 and the tailhead 108 are used as manual or visual indicators of fat depth at the P8 rump 102 and 12/13th rib 104 sites.
For muscling, again the hindquarter provides most
information. In particular, the stifle region 110 is used since heavily muscled cattle are thickest through the stifle as opposed to lightly muscled cattle. The representations in Figure 2 show rear views of cattle with different levels of muscling. Figure 2a shows an animal with good muscling. The topline 202 of the animal is well-rounded and the maximum width is through the stifle region 204. The animal has a wide stance 206 and the stomach is not visible from behind.
Figure 2b shows an animal with average muscling. The topline 212 is not as wide or as well rounded. Hip bones 218a 218b can be seen. The stomach 220 is also visible. The stance 216 is narrower than that of the animal with good muscling.
Figure 2c shows an animal with poor muscling. The topline 222 is narrow and the hip bones 228a 228b are prominent. The animal tapers from the hip bones through the stifle region 224. Stomach 230 is also clearly visible. The stance is narrower than that of both the animal with good muscling of Figure 2a and the animal with average muscling of Figure 2b. The representations of Figure 2 show extreme representations of different muscle traits. The degree of muscling is commonly identified as a muscle score from A (very heavily muscled, similar to Figure 2a) to E (lightly muscled, similar to Figure 2c). Embodiments of the invention capture the curvature of the hindquarter region of the animal to estimate muscle score and fatness.
The shape of the hindquarter region is captured using RGB-D 3-D cameras. The images from the cameras capture both hindquarters in order to gain sufficient information to model the regions. An example of the position of the cameras is shown in Figures 3a and 3b. Two 3-D cameras 310 320 are positioned to capture images of the
hindquarters of the animal. Cameras 310 320 are
positioned behind the animal and are elevated from the animal to create a sufficient field of view 312 322 capturing both hindquarters. More than one camera is used to capture a wider image of the animal to facilitate modelling over a wider area. In further embodiments, additional cameras are used to obtain additional image information of the relevant regions.
During data processing, the camera images are combined to produce a single three dimensional representation of the hindquarters of the animal. Movement of the animal can result in changing curvature in the hindquarters due to muscle flexing and muscle, fat and hip movement. In order to create an accurate static representation for modelling, the images from the different cameras must be time
synchronised. Cameras 310 320 operate at 30 Hz, taking 30 images per second, and are linked to clock 330. Clock 330 is used to time stamp the images. The cameras take images asynchronously but the timestamps (i.e. the times at which images are taken) are synchronised; these timestamps are used to identify which images are captured at the same time, where the maximum offset between paired images is 1/(2*30) s, approximately 16.7 ms. In practice, a 16.7 ms offset produces little error for the speed the animal is moving within the race.
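By way of illustration only, the following sketch (names and structure are assumptions, not part of the disclosure) shows how frames from two free-running 30 Hz cameras could be paired by nearest timestamp, rejecting any pair further apart than half a frame period.

```python
# Minimal sketch (illustrative assumptions): pairing frames from two
# free-running 30 Hz cameras by nearest timestamp, rejecting pairs
# further apart than half a frame period (~16.7 ms).

FRAME_RATE_HZ = 30.0
MAX_OFFSET_S = 1.0 / (2 * FRAME_RATE_HZ)  # ~16.7 ms

def pair_frames(frames_a, frames_b, max_offset=MAX_OFFSET_S):
    """frames_a, frames_b: lists of (timestamp_s, frame) sorted by timestamp."""
    pairs = []
    j = 0
    for t_a, frame_a in frames_a:
        # advance j to the frame in frames_b closest in time to t_a
        while j + 1 < len(frames_b) and abs(frames_b[j + 1][0] - t_a) <= abs(frames_b[j][0] - t_a):
            j += 1
        t_b, frame_b = frames_b[j]
        if abs(t_b - t_a) <= max_offset:
            pairs.append((frame_a, frame_b))
    return pairs
```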
In certain embodiments, cameras 310 320 are driven by a single trigger 340 to synchronise the exposures.
Figure 4 is a photograph showing the cameras 410 420 430 and animal 440 positioned in a dedicated section of a cattle race 450 in an embodiment. The cattle race is an apparatus used to direct cattle from one place to another and is useful as a location for imaging cattle since it constrains the animals' movements considerably enabling static images to be obtained.
RGB-D cameras 410 420 are positioned in elevated positions to the rear of the animal to provide images of the
hindquarters from above the animal. RGB camera 430 is positioned to the side of the animal facing the stifle region to provide further image information of the left stifle region from a different viewpoint. Elements of the cattle race 450 are positioned within the field of view of the RGB-D cameras.
Image data from RGB-D 3-D cameras provides the distance D of the pixel from the reference point. Each three
dimensional camera has a reference point from which all three dimensional coordinates of the image are mapped. Three dimensional coordinates of each pixel in the image with respect to a reference point associated with the camera can be computed. The data output includes the x, y and z coordinates for each pixel in the reference plane, referred to as a collection of points or pointcloud. Figure 5 shows RGB camera 510 with respect to animal 520. The position of points 1 and 2 on the surface of the animal, represented by individual pixels in the image from camera 510, is provided in the reference plane of the camera. Point 1 is positioned at distance D1 from the camera and has reference coordinates x1, y1, z1 in the camera's reference plane. Point 2 is positioned at distance D2 from the camera and has coordinates x2, y2, z2 in the camera's reference plane. Point 1 and Point 2 are a subset of all points in the pointcloud. In a multi-camera system, equivalent pointcloud data is provided for each image from each camera in the reference plane of the respective camera.
When combining pointcloud data from multiple cameras, the pointcloud data from each camera is translated into a common coordinate frame using a one-off calibration process. To gain a static 3D pointcloud image of the animal at a particular point in time, images with the same time stamp should be selected.
Figure 6 shows an apparatus 600 used for calibrating the cameras into a common reference plane. Apparatus 600 has a vertical component 610 and a series of horizontal components 620 630 640 650 positioned perpendicular to the vertical component and component 620 perpendicular to the plane created by 610 and horizontal bars 630, 640 and 650. The horizontal components are positioned at different heights along the vertical component. The length of the vertical beam and the height of the horizontal components along the beam correlate to the height of cattle being imaged. In the embodiment of Figure 6, the length of vertical component 610 is 1.5 meters. Beam 620 is
positioned across the top of the vertical component, correlating to the height of a fully grown steer. Beam 630 is positioned at 1.3 meters along the vertical component, beam 640 is positioned 1.1 meters along the vertical component and beam 650 is positioned 0.9 meters along the vertical component. Further embodiments of the invention include different numbers of beams or beams at different heights.
Before taking images of the animal, calibration apparatus 600 is positioned in the field of view of the cameras in the vicinity in which the hindquarters of the animal will be positioned. The cameras take images of the calibration apparatus. The 3D representation of each camera is matched to a 3D model of the apparatus using a least squares optimization. This generates a set of extrinsic parameters, allowing the data from each 3D camera to be fused into a common reference frame; the parameters are stored and used for point cloud construction.
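As an illustration of this kind of least-squares alignment (not the patented procedure; correspondences and function names are assumptions), the rigid transform mapping each camera's view of the calibration apparatus onto its known 3D model can be estimated with the SVD-based (Kabsch) solution:

```python
import numpy as np

# Minimal sketch (illustrative): least-squares rigid alignment of points
# observed by one camera onto the known 3D model of the calibration
# apparatus, giving that camera's extrinsic parameters.

def estimate_extrinsics(observed, model):
    """observed, model: (N, 3) arrays of corresponding 3D points.
    Returns R (3x3) and t (3,) such that R @ observed[i] + t ~= model[i]."""
    obs_centroid = observed.mean(axis=0)
    mod_centroid = model.mean(axis=0)
    H = (observed - obs_centroid).T @ (model - mod_centroid)
    U, _, Vt = np.linalg.svd(H)
    # guard against reflections so the result is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mod_centroid - R @ obs_centroid
    return R, t

def to_common_frame(points, R, t):
    """Apply stored extrinsic parameters to a camera's point cloud."""
    return points @ R.T + t
```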
After fusing the point cloud of each camera, a three dimensional point cloud representation of the relevant areas of the animal is created. The point cloud can then be manipulated and viewed from different viewpoints to assess the curvature of the relevant regions. The
curvature is modelled to estimate the fatness and muscling of the animal.
The process for modelling the curvature is now discussed with reference to Figures 8 and 9. At 802 images of the animal are taken using 3-D cameras. The images include depth data identifying the distance of each point in the image from the reference point of the camera. Images are selected from each camera with the same time stamp, i.e. within 16.7 ms. Figures 9A and 9B show synchronised images of a steer positioned in a race. Figure 9A shows the right side hindquarter from a rear elevated view. Figure 9B shows a left side hindquarter from a rear elevated view.
The camera data is calibrated at 804 using a common reference point as discussed above with respect to Figure 6. The raw point cloud data from each image is combined at 810 using the set of extrinsic parameters.
At 812 the point cloud representation of the hindquarters of the animal is created as shown in Figure 9c.
Mathematical validation techniques are performed on the point cloud to confirm the point cloud is suitable for analysis. The dimensions of the point cloud, including the width, height and length are calculated to confirm that these conform with the expected dimensions of the animal. This validation confirms that there is no obvious problem with the optical set up of the cameras or the calibration and combination of the image data. Validation techniques are also used to confirm that the point cloud includes enough points to complete the analysis.
Typically, the system will be programmed with a
predetermined minimum number of points required to
determine the classification of the animal from the images. The rotational properties of the image are also analysed during the validation stage to confirm the overall shape of the point cloud. The data validation process adds reliability to the data and improves the robustness of the system.
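A minimal sketch of such validation checks is given below; the dimension ranges and minimum point count are hypothetical parameters for illustration only, not values from the disclosure.

```python
import numpy as np

# Minimal sketch (illustrative assumptions only): basic validation of a fused
# point cloud before analysis. Axis-to-dimension mapping, expected ranges and
# the minimum point count are placeholders.

MIN_POINTS = 10_000
EXPECTED_DIMS_M = {"width": (0.3, 1.2), "height": (0.5, 1.8), "length": (0.5, 2.5)}

def validate_point_cloud(points):
    """points: (N, 3) array of x, y, z coordinates in metres."""
    if len(points) < MIN_POINTS:
        return False, "too few points for classification"
    extents = points.max(axis=0) - points.min(axis=0)
    for (name, (lo, hi)), extent in zip(EXPECTED_DIMS_M.items(), extents):
        if not (lo <= extent <= hi):
            return False, f"{name} of {extent:.2f} m outside expected range"
    return True, "ok"
```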
After validation of the point cloud data, the point cloud data for analysis is selected at 814 816 818. Point cloud data not relevant to the analysis of curvature of the hindquarters of the animal is removed. Outlier removal filters remove any points on the point cloud not
representing the hindquarters. For example, points representing external structures including the race or insects flying around the animal are removed.
Additionally, any points representing non-hindquarter parts of the animal are removed, for example the tail of the animal. The system is programmed to identify points not complying with the general curvature of the point cloud. Additionally external elements having defined shapes are identified and removed, for example the bars of the race. Some embodiments include specific software programmes able to detect shapes not attributed to the hindquarters, for example the shape of the tail or the shape of metal cylindrical bars forming the race. Any point cloud data points not associated with the
hindquarters are removed.
After removal of the outliers the point cloud is filtered to define a specific area of the animal. Embodiments use a bounding box filter technique in which a specific length, height and width of the point cloud is selected for analysis. Areas of the animal which provide little information for classification are removed, reducing the data for analysis. This reduces the processing required during analysis while not impacting the accuracy of the system.
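For illustration (the box limits and neighbour counts below are assumptions), a bounding box filter and a simple statistical outlier removal could be sketched as follows; in practice a spatial index such as a KD-tree would replace the quadratic distance computation.

```python
import numpy as np

# Minimal sketch (illustrative): keep points inside a specified box, and
# discard points whose mean distance to their k nearest neighbours is
# unusually large.

def bounding_box_filter(points, mins, maxs):
    """points: (N, 3); mins, maxs: length-3 sequences of box limits."""
    mask = np.all((points >= mins) & (points <= maxs), axis=1)
    return points[mask]

def remove_outliers(points, k=8, std_ratio=2.0):
    """Remove points far from their k nearest neighbours (O(N^2) for brevity)."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    knn = np.sort(dists, axis=1)[:, 1:k + 1]          # skip self-distance
    mean_knn = knn.mean(axis=1)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]
```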
Figures 9D and 9E show the development of the point cloud during down sampling and removing the outlier points.
Finally, the point cloud is smoothed and filled at 820 822. If the point cloud includes any holes, for example if light conditions prevented data from being captured in certain areas, these are filled by fitting a higher order function across the point cloud in the area of the hole. Typically, the holes in the point cloud are small; otherwise the point cloud would have been rejected at 812 during the validation stage. The uniform sampling filter at 822 adjusts the pointcloud data by uniformly resampling the areas where points have been added, for added robustness.
After the point cloud is created and filtered using the steps of 802 to 822 discussed above, surface normals for every data point in the point cloud are computed at 824. For each point in the point cloud, a normal axis from the surface is generated. Figure 9G shows the xyz point cloud and surface normals. Figure 9H shows the surface normals on each point of the point cloud data.
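One common way to estimate such a normal (shown here as an illustrative sketch only; the radius and neighbourhood selection are assumptions) is the eigenvector with the smallest eigenvalue of the covariance of the point's local neighbourhood, oriented towards a chosen viewpoint:

```python
import numpy as np

# Minimal sketch (illustrative): surface normal at a point from the PCA of
# its local neighbourhood, oriented towards a viewpoint.

def surface_normal(points, index, radius=0.03, viewpoint=(0.0, 0.0, 0.0)):
    """points: (N, 3) array; returns a unit normal for points[index]."""
    p = points[index]
    neighbours = points[np.linalg.norm(points - p, axis=1) <= radius]
    cov = np.cov(neighbours.T)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    normal = eigenvectors[:, np.argmin(eigenvalues)]
    # orient consistently towards the viewpoint
    if np.dot(np.asarray(viewpoint) - p, normal) < 0:
        normal = -normal
    return normal / np.linalg.norm(normal)
```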
At 826 the point cloud may be segmented into different sections for separate analysis. For example, the point cloud may be cut in half or quarters. At 828 a viewpoint in space is selected at which the point cloud is viewed for measuring curvature.
The 3D (XYZ) viewpoint coordinates are aligned to either the common reference frame or the computed first order moment of the point cloud. Selection of view point is arbitrary. However, in preferred embodiments a central position creating a symmetrical view of the hindquarters is selected. In embodiments, if the common reference frame is utilised as the global viewpoint reference, then the viewpoint is established at the origin coordinates (0,0,0) . This represents a position underneath the hindquarters and is associated with looking up towards the inside of the top of the hindquarters. Otherwise, the 3D (XYZ) first order moment of the point cloud is computed and then used as the viewpoint reference. The first order moment is determined as the mean position of all the points in the cloud in all of the coordinate directions (XYZ) .
The fused point cloud representation is transformed into a compact signature derived from the statistical combination of surface curvature for all the point cloud data associated with a particular animal. The feature extraction and reduction steps produce a compact and information-rich representation of the larger amount of raw point cloud data collected for each animal. A first contribution of the curvature signature is created by casting a ray from the viewpoint V to the centroid of the point cloud C, forming a directional vector uc as shown in Figure 10a. Figure 10 shows a number of points Pi on the point cloud. For every data point of the point cloud an angle γ is computed between the directional ray uc and the surface normal at the data point. Figure 10a shows examples of the directional angle γ at two points on the point cloud: at point Pi the directional angle between uc and the surface normal at Pi is γi, and at point Pi+1 the directional angle between uc and the surface normal is γi+1. The directional angles for all points on the point cloud are calculated and stored. In further embodiments, directional angles for a selection of points are calculated.
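The following sketch (illustrative only; the precomputed normals and viewpoint are assumed inputs) shows how this first signature component could be computed for every point at once:

```python
import numpy as np

# Minimal sketch (illustrative): the directional angle gamma between the
# viewpoint-to-centroid ray and each point's precomputed unit surface normal.

def directional_angles(points, normals, viewpoint):
    centroid = points.mean(axis=0)            # first order moment of the cloud
    u_c = centroid - np.asarray(viewpoint)
    u_c = u_c / np.linalg.norm(u_c)
    cosines = np.clip(normals @ u_c, -1.0, 1.0)
    return np.arccos(cosines)                 # gamma, one angle per point
```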
A second component for the curvature signature is calculated as shown in Figure 10b. Figure 10b shows two neighbouring points Pi and Pj from the point cloud data. Point Pi and point Pj are connected by a ray of length d. A reference frame is created at point Pi. Axis u is the surface normal ni at Pi. Axis v is perpendicular to both axis u and the ray connecting point Pi and point Pj. Axis w is the remaining orthogonal axis. Orthogonal axes u, v and w form a Darboux frame at Pi. This Darboux frame is then used with respect to point Pj. The surface normal at point Pj is represented as nj.
The Darboux frame constructed at Pi with axes u, v and w is projected onto Pj. The angle between axis u and the projection of surface normal nj onto the plane created by axes u and w is θ; the angle between surface normal nj and axis v is α; and the angle between the ray and axis w is φ. The distances and angles are computed for every point in the point cloud with respect to the surface normal and axes u, v, w at every point. Figure 9H shows the surface angles on the point cloud from a selected viewpoint.
In embodiments a Darboux frame is created at every point on the point cloud with respect to a number of points Pj. In certain embodiments points Pj are the direct neighbouring points of Pi; in further embodiments points Pj may be all points within a predefined area; in further embodiments points Pj may be all points used for the calculation of the surface normal at Pi. It will be clear that further selection methods for points Pj could be used.
The description above calculates angles in the Darboux frame but any suitable coordinate system could be used.
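A minimal sketch of these pair features is given below for illustration; the exact angle conventions follow the geometry described above, but the function name and argument layout are assumptions.

```python
import numpy as np

# Minimal sketch (illustrative): Darboux-frame pair features for points
# (Pi, Pj) with unit surface normals ni and nj: distance d and angles
# alpha, phi and theta.

def pair_features(p_i, n_i, p_j, n_j):
    ray = p_j - p_i
    d = np.linalg.norm(ray)
    ray = ray / d
    u = n_i                                   # axis u: surface normal at Pi
    v = np.cross(u, ray)
    v = v / np.linalg.norm(v)                 # axis v: perpendicular to u and the ray
    w = np.cross(u, v)                        # axis w: remaining orthogonal axis
    alpha = np.arccos(np.clip(np.dot(v, n_j), -1.0, 1.0))
    phi = np.arccos(np.clip(np.dot(w, ray), -1.0, 1.0))
    # theta: angle between u and the projection of nj onto the u-w plane
    theta = np.arctan2(np.dot(w, n_j), np.dot(u, n_j))
    return d, alpha, phi, theta
```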
At 830 the distance and angles of all point cloud data points are combined into a histogram representing the point cloud as shown in Figure 11. The histogram
represents the frequency of each angle and distance in the point cloud.
In embodiments, histograms from multiple images of the same animal are statistically combined to generate a histogram signature for the animal at 832. The final feature vector is created from the superimposed histogram at 834 using the mean values, but could be generated using any mathematical formula including, for example, the minimum, maximum or median.
The histogram embodies a statistical base representation of surface curvature and represents the digital signature for a particular animal. Specific angle ranges are included in the histogram to represent the digital
signature. In further embodiments, different angular ranges could be selected to provide the most distinctive signature for the animals at 838.
In Figure 11 the example histogram representation contains 45 bins for each angle α, φ, θ and distance d, followed by 81 bins for the directional component γ. This results in a total of 216 bins; in further embodiments a different number and distribution of bins may be selected.
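As a sketch of this binning step (illustrative only; the value ranges are assumptions, and the bin counts simply follow the listing in the paragraph above), the features could be concatenated into a single normalised histogram:

```python
import numpy as np

# Minimal sketch (illustrative): concatenated histogram signature from the
# pair features and the directional component.

def signature_histogram(d, alpha, phi, theta, gamma, d_max=1.0):
    parts = [
        np.histogram(alpha, bins=45, range=(0.0, np.pi))[0],
        np.histogram(phi,   bins=45, range=(0.0, np.pi))[0],
        np.histogram(theta, bins=45, range=(-np.pi, np.pi))[0],
        np.histogram(d,     bins=45, range=(0.0, d_max))[0],
        np.histogram(gamma, bins=81, range=(0.0, np.pi))[0],
    ]
    hist = np.concatenate(parts).astype(float)
    return hist / hist.sum()                  # normalise to frequencies
```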
For each animal for which a point cloud is captured and corresponding digital signature created, a manual (i.e. subjective) trait measurement is taken at 836. The trait can be muscle score, for which a visual assessment is made, or P8 / rib fat depth measurement which is obtained using an ultrasound scanner. Further traits including animal condition score, muscle, fat and yield can also be used. The traits are logged against the digital signature. The digital signature is also referred to as a feature vector in the machine learning / pattern recognition community. The values of the trait become the class labels for each digital signature at 840.
Machine learning algorithms, such as libSVM (support vector machine), are employed to determine a non-linear mapping between the signature and the class label. By supplying a subset of the feature vectors and associated traits, noted as a training sample, the machine learning approach attempts to model the non-linear mapping between the high dimensional feature vectors and the trait. Once the machine learning algorithms have been trained on a subset of the feature vectors, a model reflecting the non-linear mapping between the statistical base surface curvature and the provided class label is established at 842. Once established, the model can be used to produce the appropriate classification and provide an estimate of uncertainty of this classification from the signature of any animal. The sensor models are based exclusively on the feature vector and the traits P8 fat, rib fat or muscle score measurements for each animal in the data set.
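For illustration only (the arrays below are random placeholders, not measured data), a support vector classifier could be trained and cross-validated on such signatures as follows; scikit-learn's SVC wraps libSVM.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Minimal sketch (illustrative placeholders): train a support vector
# classifier on histogram signatures against manually scored labels and
# evaluate with 10-fold cross validation.

signatures = np.random.rand(100, 216)          # one signature per animal (placeholder)
muscle_scores = np.random.randint(0, 5, 100)   # class labels, e.g. A..E encoded 0..4

model = SVC(kernel="rbf", probability=True)    # probability=True gives an uncertainty estimate
scores = cross_val_score(model, signatures, muscle_scores, cv=10)
print(f"10-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

model.fit(signatures, muscle_scores)
new_signature = np.random.rand(1, 216)
print(model.predict(new_signature), model.predict_proba(new_signature))
```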
Both the classification and regression experiments were performed using 10-fold cross validation on feature vectors containing a single statistical instance of each cow or steer. In addition, classification experiments were repeated another one hundred times, effectively providing a 100 x 10 cross fold validation randomised learning scheme. This provides an unbiased training and testing arrangement for the independent cows and steers. To challenge the regression experiments (i.e. using Gaussian Process Regression (GPR)), predictions of P8 fat, rib fat and muscle score were performed where a data set selected at random was kept aside and not used to build the sensor model, but was used to challenge the model and therefore provide an early indicator of the feasibility of predicting P8 fat, rib fat and categorical muscle scores from 3D images.
Figure 12 shows a flow chart for classifying an animal from its signature. At 1210 the point cloud for a
particular animal is created from images as discussed above. The digital signature for the animal is created at 1220 using the method described above. Finally, the model is used to classify the signature in order to gain a prediction of the physical attributes of the animal at 1230.
Once the models have been created, P8 fat, rib fat or categorical muscle scores can be calculated from any point cloud data using the digital signatures from the point cloud in order to determine the traits of the animal.
Embodiments of the invention compute and model the regions of an animal which have been deemed relevant and relate to reference points used in traditional fat and muscling assessment for cows and steers. The use of machine learning enables assessment of P8 fat, rib fat and muscle scores to be determined from RGB-D camera images of a cow or steer. Embodiments enable estimates of P8 fat, rib fat and muscle score using data gathered from cattle using a pair of RGB-D cameras, without the need for ultrasonic measurements or trained assessors. Data obtained using cows show a 77% correct classification for P8 fat score and 88% correct classification for muscle score. In the case of steers, 86% classification accuracy for P8 fat and 83% classification accuracy for muscle score were obtained. "Correct classification" in this context is defined as agreement with the data used to create the trait labels at 836.
Embodiments of the invention allow multiple images from RGBD cameras to be combined to create a point cloud representation of the image in order to determine
signatures for the point cloud which can be used to categorise the animal.
Digital signatures represented by histograms have been described above. It will be clear to those skilled in the art that other representations can be used to describe the curvature of a selected area of an animal.
The hardware of an embodiment of the system is shown in Figure 7. Cameras 702 704 706 capture images of the relevant parts of the animals. As discussed above these may be positioned within a race or other structural area to confine an animal. The cameras are connected to clock 708 to timestamp the images. In embodiments the clock is within the computer. Images from the cameras are stored in memory 720. Further details about the images including the animal reference, the date, the location and any other relevant information is stored with the images in memory 720.
Processor 730 is connected to memory 720. The processor includes an image processor to interpret image data. The processor includes a point cloud processor to generate the point cloud from the camera data. The point cloud
processor is able to combine camera data and includes modules for validating and filtering point cloud data. Input 710 is connected to processor 730 to allow manual or automated input to select processing parameters and modules including filtering parameters, validation
parameters and smoothing parameters.
Processor 730 includes a calculation processor for generating digital signatures from the point cloud data. Memory 720 includes point cloud storage for storing point cloud data and digital signature storage for storing digital signatures. Input 710 is used to reference data stored in memory 720.
Processor 730 includes a training processor to generate digital signature models.
Display 740 can be connected to each input 702 704 706 708 710 as well as the memory components and processor components.
It will be clear to those skilled in the art that
embodiments of the invention can be used to model many visual traits of animals. Although the description above focuses on hindquarter imaging of steers to determine P8 fat measurements and muscle score it will be clear that the process can be used to image and model different animals and different parts of animals to determine different traits.
Body Conditioning Score
Body conditioning score is a further trait which is used to identify the condition of an animal. The conditioning score includes an aspect of muscling and fat on the animal and can provide an indication of whether animals are below normal nutritional conditions. The condition score system uses a 1-9 scale where 1 = emaciated and 9 = very fat, based on the visual assessment of muscle and fat cover over the skeleton.
The 3D RGBD camera technology discussed above is used to assess cow body condition score by modelling the curvature of the animals. The same technique was used to train the model against body condition score. Examples of machine learning (ML) algorithms employed to provide body conditioning scores include: 1) the BayesNet (BN) algorithm (Cooper and Herskovits 1992), a probabilistic ML learning algorithm based on Bayes Theory; 2) the Sequential Minimal Optimisation (SMO) algorithm (Platt 1999), used to train a function-based support vector classifier; 3) the LibSVM algorithm (Chang and Lin 2011), a very efficient, effective and reliable implementation of a support vector classifier; and 4) Gaussian Processes (GP) used as the classifier mechanism (Rasmussen and Williams 2005). The method reliably estimates body conditioning scores on cattle. A data driven supervised learning approach using classification and regression techniques produced a 77.6% ± 4.8% correct classification for condition score for cows. The above examples are discussed with respect to
estimating physical traits of cattle. The techniques are also applicable to other animals including sheep (lamb, hogget, and mutton) and pigs. Similar positioning
techniques using a race can be employed for these other animals. In further systems the animals may be held by operators while they are scanned.
Embodiments take physical measurements of an animal using the camera system or using designated measurement equipment including lasers. Height, hip height, width and length can be measured. Weight can also be measured.
Such measurements can be used in isolation to classify an animal or can be used in combination with curvature information to classify physical traits of an animal. Such measurements can be used directly or indirectly as a predictor of body composition, lean meat yield, muscularity, fat partitioning and location, marbling, maturity, pregnancy status, eating quality, carcase performance and compliance or animal health and well-being.
It will be clear to those skilled in the art that
embodiments of the invention can be used to model traits of live animals and carcasses.
Carcass Trait Estimation
In a second embodiment the system is configured to model carcasses to estimate carcass traits. The curvature signatures for hung carcasses are different from those of live animals due to the different positioning of the body. However, equivalent methods can be used to capture image data, create three-dimensional pointcloud representations and use training methods to create models which predict physical traits of the carcass. In the example presented below the same methodology as described above is followed, except that a different data capture technique is used to capture images of the hanging carcasses compared with that used to image live animals moving through a race.
Figure 13 shows an example of a frame to support an automated scanner arrangement configured to capture image data to produce a three dimensional image of a carcass. Figure 16 shows a top view of the alignment of the frame with a processing chain of an abattoir carrying carcasses.
Frame 1300 is arranged to receive carcasses moving along a processing chain track within an abattoir. The frame is arranged to rotate about central pivot point 1310. Three legs 1320 1322 1324 extend generally horizontally from central pivot point 1310. The legs extend from the pivot point at generally similar angular spacings. Camera posts 1330 1332 1334 extend generally vertically from each leg. The top portion of each camera post is attached to a part circular portion 1340. The part circular portion includes a cutaway portion extending between camera posts 1330 and 1332. The cutaway section provides an opening for
carcasses to enter the frame 1300 for scanning. Abattoirs typically include elevated processing chains. Carcasses are hung from the processing chains and can be moved around the abattoir. The position of the frame is
coordinated with the chain track such that carcasses moving along the chain track are received through the cutaway portion of the frame.
Figure 16 illustrates the interaction of a frame 1600 and a carcass 1610 on a processing chain 1620. At Figure 16
(a) the carcass 1610 is moving along processing chain 1620 towards frame 1600. The cutaway portion of frame 1600 is positioned to receive carcass 1610. The arrow in Figure 16(a) shows the direction of movement of the carcass towards frame 1600. At Figure 16(b) the carcass has been received by the frame. The processing chain is stopped at the scanning position. The carcass is held in the central region of the frame while the carcass is scanned at Figure 16(b). After scanning is complete, the frame is rotated about the pivot point to align the cutaway section with the onward movement of the carcass along the chain track as shown in Figure 16(c). The processing chain is
reactivated at Figure 16(d) carrying the carcass along the chain track and exiting the frame. The frame can be rotated back to its receive position as shown in Figure 16(e) to receive a further carcass 1630.
A camera is attached to each camera post 1330 1332 1334 via sliding attachment mechanisms 1350 1352 1354. The cameras are arranged to move up and down on the camera posts. Preferably the cameras can move throughout the full length of the camera posts. Multiple cameras are used to capture data to image the inner and outer surfaces of the carcass from different camera positions.
Structured Light RGBD Cameras can be used to capture data within frame 1300. A camera is attached to each sliding camera mount 1350 1352 1354.
In an embodiment, the cameras emit near IR at a wavelength of 828.6 nm and are class 1M laser products because of the high divergence of the light. The illumination at 100 mm distance into a 7 mm pinhole does not exceed 0.71 mW and is completely eye-safe. The operating temperature range is 4-40 °C, with power delivered via USB and a maximum power consumption of 2.25 W. The sensor has a minimum sensing range of 20 cm and therefore requires some clearance from the carcass within the constraints of the abattoir processing system (i.e. proximity of walls/other carcasses in the chillers). Other cameras can be used to capture the data.
Figure 14 shows the steps taken to classify a carcass using the system of Figures 13 and 16. When the carcass is received into scanning position within the frame the cameras take images of the carcass at 1410. Typically, the cameras are initially positioned at the bottom of the camera posts and move upwards on the posts continually scanning the carcass. Pointcloud data is extracted from the RGBD data at 1420 and features are extracted from the RGB data at 1430.
The three dimensional carcass representation is built at 1440. The localisation of the cameras is performed using the steps discussed above with respect to Figure 8. The point cloud data is downsampled at 1442 and outlier removal filters are applied at 1444.
The size and orientation of the carcass are unpredictable within certain boundaries. Typically, when a carcass is moved into the frame and stopped for scanning, the stopping action produces a swinging motion in the carcass. Since the carcass is hung from the processing chain of the abattoir, a pendulum swinging effect can be generated when the carcass is stopped for scanning.
Factors affecting the motion of the carcass include the speed the carcass was moving on the chain, weight of the carcass and weight distribution of the carcass. This carcass motion is accounted for during point cloud
creation. The system also accounts for the motion of the cameras on the posts when generating the point cloud. The system includes motion models to account for these
movements. The system uses simultaneous localisation and mapping of the carcass relative to three cameras to generate the three dimensional model.
In an automated approach the scanning system is devised to be able to obtain a full scan of the carcass by
simultaneously scanning the inside and outside of the carcass from the three camera posts and moving the cameras in a circular fashion to allow the carcass to enter the scanner along the processing chain.
In the three dimensional carcass model, the three dimensional representation of the carcass is built through the application of simultaneous localization and mapping (SLAM) via 3 cameras using the texture/shape of the carcass, and two motion models: the carcass motion (pendulum model) and the camera motion along the rail. The carcass position relative to the current views of the three cameras is continuously estimated as the cameras scan from the bottom of the frame to the top. For each of the three cameras this involves at time t completing:
(1) obtaining the motion of each camera from the motors;
(2) applying a pendulum motion model to the carcass (a sketch of such a model is given after this list);
(3) extracting features from the texture of the RGB image;
(4) comparing the features from time frames t and t-1 to determine the relative motion;
(5) matching the location of the visual features with the 3D location of features to obtain a full 6DOF motion prior;
(6) using the available carcass 3D shape and the carcass shape in the current image (time t) to fuse the current reading into the full 3D representation.
The typical scanning time for a sheep carcass, with the 3 cameras scanning from top to bottom, is 80 seconds. Data was continuously streamed from the RGBD cameras and simultaneously the 3D model was captured in real-time. Following the scanning, the system rotated 60 degrees to allow the carcass to exit along the chain and then reverted to the original configuration (taking 18 seconds). The total time to scan the carcass was 98 seconds; the time limitation is imposed by the current servo configuration. With a redesign the total time could be reduced to 10 seconds.
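By way of illustration of the pendulum motion model referred to in step (2) above (the effective length, damping value and integration scheme are assumptions, not part of the disclosure), a simple damped pendulum update that could serve as a motion prior between frames is:

```python
import math

# Minimal sketch (illustrative assumption): damped pendulum model predicting
# the carcass sway angle between camera frames.

GRAVITY = 9.81

def predict_sway(theta, omega, dt, length_m=1.2, damping=0.05):
    """Advance the pendulum state (angle theta [rad], angular rate omega [rad/s])
    by one time step dt using a semi-implicit Euler update."""
    alpha = -(GRAVITY / length_m) * math.sin(theta) - damping * omega
    omega_next = omega + alpha * dt
    theta_next = theta + omega_next * dt
    return theta_next, omega_next
```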
Once the three dimensional image of the carcass is created, the same steps (1450 - 1478) are used as described with respect to Figure 8 (820 - 842). These are described briefly below in terms of the full processing pipeline for carcass assessment.
The processing pipeline from RGBD data acquisition through to the estimation of lean meat, shown in Figure 14, starts with acquisition of RGBD data from the hand-held RGBD sensor and assembling a 3D model with colour as already described in the capturing data section. Sufficient coverage of the entire carcass is completed. The next step of this processing pipeline is transforming the information from all carcasses into a common coordinate frame tied to the position of the hook suspending the body on the processing chain. This step allows consistency in analysis over future parts of the processing pipeline. Another approach which can be used to collect the 3D point cloud data is to manually scan the carcass. An operator moves the camera across the surface of the carcass to expose each area of the carcass to the camera with a small overlap. The scans can be performed in a sweeping zig-zag fashion from top to bottom (in a similar pattern to spray painting a contoured surface).
To generate a complete 3D model of the side of a carcass, individual RGBD images were fused together using a technique that exploits both RGB and 3D data in an optimization framework. In order to make this process computationally tractable in the manual scanning model, as the optimization needs to be undertaken in real-time, the spatial resolution of the 3D data was set to a hard limit of 5mm. For each voxel (5mm*5mm*5mm point) the 3D data was married with an RGB colour, resulting in surfaces such as those in Figure 15 (left (a): external with colour, centre (b): external with only 3D data, right (c): internal with colour).
Therefore, the following steps were taken to collect the data:
(1) Outside of the carcass was scanned (top to bottom).
(2) Carcass was rotated and held still.
(3) Inside of the carcass was scanned (top to bottom).
A method of extracting smaller volumes of interest across all carcasses may be conducted, to effectively separate regions containing several muscle groups. Embodiments use an automated tool based on carcass dimensions or identified muscle groups or regions to assist in
segmenting data to identify regions of interest for data analysis.
The extracted 3D volume is then transformed into a compact signature (feature extraction) at 1468. Embodiments have evaluated a number of approaches that examined colour (normalised RGB and HSV colour space), surface length / volumetric information, surface curvatures and combinations thereof. The feature extraction and reduction steps produce a compact and information-rich representation of the raw point cloud data collected for each carcass.
From the number of features examined, the best performance in terms of root mean square error (RMSE) of the final estimated and measured Lean was obtained using the surface curvatures extracted from a subset of the 3D volume identified as the area describing the "butt shape". A twofold step may be employed to devise the relationship between curvature and lean. The curvatures discussed in the first example above are suitable to encompass a trait (such as muscling) and are scale and rotationally invariant, to be able to deal with animals moving through a race. Translation from muscling score to lean value (kg) requires prior knowledge of the animal weight (either Live Weight or Hot Carcass Weight). For instance, Table 1 contains data for two animals with a disparate Muscle Score (and Muscle %) of B- and D-.
However, the Lean of the left side of the carcass is 82.25 kg and 104.29 kg, and the live weight 544 kg and 740 kg respectively. Therefore a method to add the live weight into the feature vector in addition to surface curvatures was devised in the feature reduction step.
Live Weight (kg) | Fat (kg) | Lean (kg) | Hot Carcass Weight (kg) | Primal Weight (kg) | Muscle % | Muscle Score
544 | 34.27 | 82.25 | 153 | 149.91 | 54.87 | B-
740 | 46.57 | 104.29 | 202 | 198.12 | 52.64 | D-
Table 1.
Figure 15 shows three dimensional images created for carcasses.
To allow addition of weight and overcome the large dimensionality of the feature vector (n=308) with respect to the small dataset (m=32), an optimization step via a Genetic Algorithm was employed to reduce the feature vector size (number of dimensions n) with respect to the discriminative power to estimate Muscle Score. This allowed adding weight (either Live Weight or Hot Carcass Weight) as another element of the feature vector to allow estimation of lean. The Genetic Algorithm did indeed confirm that weight was essential in the feature vector composition. At this stage of the processing pipeline there exists a single signature for each carcass in the data set. All the signatures are then assembled into a feature vector
(surface curvature + weight) as separate instances for each carcass. The production of a feature vector at 1470 is the final stage of taking the transformed point cloud data and arranging it into a form that is amenable to the machine learning environment.
The feature vector is then used as input for training a machine learning algorithm at 1474 known as Gaussian Process Regression, which is a state of the art supervised learning method. The training/testing approach proposed involves supervised learning, which infers a function from the "labelled" instances (i.e., observed values) of Lean in the feature vector. In summary, the input to the machine learning scheme is expressed as a table of independent instances of the point cloud representation of each animal (the concept) to be learned. Once the machine learning (ML) algorithms have been trained on a subset of the feature vectors, a non-linear mapping between the statistical-based surface curvature signature/weight and the provided class label Lean is learnt at 1476. Using a smaller test subset of unseen instances, predictions can then be made on the Lean (kg) of those animals present in the test set. The accumulation of inputs, 3D point cloud data and weight (i.e., the feature vector), and the observed Lean (kg) values are used to construct a sensor model. Once built, the sensor model can be used to produce the appropriate classification or regression on the presentation of an instance vector gathered from a new animal.
The approach consists of getting a sensor model to learn to characterise the feature vector as inputs. The observed subjective muscle score data is used to build these models in a supervised learning manner as follows:
Step 1: Acquire 3D point cloud data in the abattoir, weight (live or hot carcass weight) and Muscle Score (steps 1410 to 1440);
Step 2: Extract a representative volume from the hindquarter area of the carcass;
Step 3: Reduce the high dimensionality of the point cloud data by extracting features from the input signals to produce a compact and representative feature vector (1450 to 1468);
Step 4: Perform global optimisation of the feature-vector signatures using the parallel genetic algorithm with respect to Muscle Score to reduce the feature vector (1468);
Step 5: Train a sensor model based exclusively on the feature vector and weight (Live Weight or Hot Carcass Weight) with respect to Lean [kg] for each animal in the data set (1470 to 1476);
Step 6: The learned models can then be used to infer measured Lean [kg] from new point cloud data and weight (Live Weight or Hot Carcass Weight) without the need for any input from trained assessors.
In embodiments a 50x10 Fold Cross Validation randomised Gaussian Process learning scheme can be used. That is, 90% of the data was provided to train the model and 10% was used as a challenge (test).
It will be clear to those skilled in the art that
embodiments of the invention provide an efficient, fast and low cost quantitative live animal and carcass grading tool. In particular, the scanning process can be
automated in abattoirs to receive carcasses moving on a chain track at a scanning apparatus, scan the carcass, allow the carcass to exit the scanning apparatus before receiving a subsequent carcass for scanning.
Some advantages provided by embodiments of the invention include that the operator does not have to touch the animal, the system is easy to set up and maintain, grading data can be obtained and assessments performed within a few seconds, and accurate measurements of phenotype can be obtained.
Additionally, the results are repeatable, consistent and reliable.
Embodiments of the invention can be used with BeefSpecs, as well as many other decision making tools including market specs for meeting live export specs, condition scoring (or similar) as a management tool, Feedlot management, pregnancy, seed damage (hides) , heat stress, illness etc. Embodiments can be used to estimate aspects of the body of an animal for any endpoint including sale for finishing, auctions, sale for slaughter, change in fatness, change in muscling on feed, growth, stage of pregnancy, milk, structural correctness in the seedstock industry, Sale by Description of store stock, Sale by
Description of finished stock, Description of replacement females and sale bulls, Body condition score of breeding females, Structural soundness, Suitability for processing, Changes in body condition due to feeding environment
(feedlots), Transport stress, heat stress, Carcase performance, Lean Meat Yield, Stage of maturity, rate to maturity, Sex determination, automated drafting, condition, etc.
Embodiments of the system can predict characteristics of many species including all ruminant species used for agricultural production, dairy cows, horses (work, racing, wild and recreational), pigs, goats, deer, birds, breeding companion animals (in show for body composition), dogs, wildlife management (koala and kangaroo condition scoring or health, wild remote animals, etc), camels, aquaculture (fish dimensions), etc. In its broadest terms the tool could cover understanding the physical and compositional attributes of an animal for the purposes of informing decisions and decision systems around their management for livestock and non-livestock purposes.
Modifications and variations as would be apparent to a skilled addressee are deemed to be within the scope of the present invention.
In the claims which follow and in the preceding
description of the invention, except where the context requires otherwise due to express language or necessary implication, the word "comprise" or variations such as "comprises" or "comprising" is used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.

Claims
1. A method for estimating the physical traits of an
animal comprising the steps of:
generating a digital representation of the curvature of a surface area of an animal; and,
estimating the physical traits of the animal in dependence on the digital representation.
2. A method according to claim 1 wherein the digital
representation is a digital signature.
3. A method according to claim 1 or 2 further comprising the steps of:
generating a digital image of the animal, the digital image comprising a point cloud representation of the surface area of the animal, the point cloud including multiple data points; and
creating the digital signature from the digital image of the animal.
4. A method according to claim 3 wherein the digital
image is created from at least one camera image of the surface area of the animal, the camera image including depth information.
5. A method according to claim 1, 2, 3 or 4 wherein the surface area is predefined.
6. A method according to claim 3, 4 or 5 comprising the further steps of:
selecting a reference point with respect to the point cloud;
generating a point representing the point cloud;
for at least one data point on the point cloud generating a surface normal;
calculating the angle of a ray cast between the point and the reference point with respect to the surface normal; and, generating a digital signature, the digital signature including a component corresponding to the angle.
7. A method according to any of claims 3, 4, 5 or 6 comprising the further steps of:
at at least one data point, creating a Darboux frame having three orthogonal axes, wherein one of the three orthogonal axes is the surface normal at the at least one data point, a second of the three orthogonal axes being orthogonal to the surface normal and orthogonal to a ray cast between the at least one data point and a second data point.
8. A method according to claim 7 comprising the further step of:
at the second point using the Darboux frame to calculate the angle between at least one of the axes and the surface normal at the second point, wherein the digital signature includes a component
corresponding to the angle.
9. A method according to claim 3, 4, 5, 6, 7 or 8 comprising the further steps of:
calculating the distance between the at least one data point and the second point; and,
generating a digital signature, the digital signature including a component corresponding to the distance.
10. A method according to claim 9 wherein the reference point is the reference point of claim 6.
11. A method according to claim 6, 7, 8, 9 or 10 wherein the reference point is on a central axis of the point cloud. 12. A method according to any of claims 1 to 11 wherein the traits are at least one of muscle score, P8 fat depth and hip height.
13. A method according to claim 1 or 2 comprising the further steps of:
capturing at least one image of the surface area of the animal,
generating a point cloud representation of the image; and
calculating the surface curvature from the point cloud representation; and,
generating the digital signature from the surface curvature.
14. A method according to claim 13 comprising the further steps of:
generating a reference frame for at least one data point in the point cloud;
generating a surface normal for at least one data point in the point cloud; and
generating the signature in dependence on the angle between the reference frame and the surface normal.
15. A method according to claim 14 further comprising the step of calculating the distance between the at least one data point and at least a second data point and generating the signature in dependence on the
distance.
16. A method according to any of claims 3 to 15
comprising the further step of filtering the data points in the point cloud.
17. A method according to claim 16 wherein the step of filtering includes the steps of removing any outlying data points. 18. A digital signature representing the curvature of an animal, comprising at least one of:
a first component representing the angle of the surface of the animal at at least one point within a selected surface area of the animal with respect to a first reference point; and,
a second component representing the distance between at least one point on the surface of the animal and a second point on the surface of the animal.
19. A digital signature according to claim 18 where the first and second reference points are the same reference point.
20. An apparatus for estimating the physical traits of an animal comprising:
means for generating a digital representation of the curvature of a surface area of an animal; and, means for estimating the physical traits of the animal in dependence on the digital representation. 21. An apparatus according to claim 20 wherein the
digital representation is a digital signature.
22. An apparatus according to claim 20 or 21 wherein the digital signature is created from a digital image of the animal, the digital image comprising a point cloud representation of the predefined surface area of the animal, the point cloud including multiple data points.
23. An apparatus according to claim 22 comprising:
at least one camera, the camera creating data
representing the surface area of the animal, the data including depth data; and,
means for converting the camera data into a point cloud representation of the animal.
24. An apparatus according to claim 20, 21, 22 or 23 wherein the surface area is predefined.
25. An apparatus according to claim 22 further comprising:
means for selecting a reference point with respect to the point cloud;
means for generating a point representing the point cloud;
means for generating a surface normal for at least one data point in the point cloud;
means for calculating the angle of a ray cast between the point and the reference point with respect to the surface normal; and,
means for generating a digital signature, the digital signature including a component corresponding to the angle .
26. An apparatus according to claim 25 further comprising:
means for creating a Darboux frame at at least one data point, the Darboux frame having three orthogonal axes, wherein one of the three orthogonal axes is the surface normal at the at least one data point, a second of the three orthogonal axes being orthogonal to the surface normal and orthogonal to a ray cast between the at least one data point and a second data point.
27. An apparatus according to claim 25 or 26 further comprising:
means for at the second point using the Darboux frame to calculate the angle between at least one of the axes and the surface normal at the second point, wherein the digital signature includes a component corresponding to the angle.
28. An apparatus according to any of claims 22 to 27 further comprising:
means for calculating the distance between the at least one data point and the second point; and, means for generating a digital signature, the digital signature including a component corresponding to the distance.
29. An apparatus according to claim 28 wherein the
reference point is the reference point of claim 25. 30. An apparatus according to claim 25, 26, 27, 28 or 29 wherein the reference point is positioned on a central axis of the point cloud.
31. An apparatus according to any of claims 20 to 30
wherein the traits are at least one of muscle score, P8 fat depth and hip height.
32. An apparatus according to claim 20 or 21 further
comprising :
a camera for capturing at least one image of the selected region of the animal,
means for generating a point cloud representation of the image; and
means for calculating the surface curvature from the point cloud representation; and,
means for generating the digital signature from the surface curvature.
33. An apparatus according to claim 32 further
comprising :
means for generating a reference frame for at least one data point in the point cloud;
means for generating a surface normal for at least one data point in the point cloud; and
means for calculating the signature in dependence on the angle between the reference frame and the surface normal. 34. An apparatus according to claim 33 further
comprising :
means for calculating the distance between the at least one data point and at least a second data point and generating the signature in dependence on the distance.
35. An apparatus according to any of claims 20 to 34
further comprising means for filtering the data points in the point cloud.
36. An apparatus according to claim 35 wherein filtering includes removing any outlying data points.
37. A method according to any of claims 1 to 19 wherein the surface area is the hindquarters. 38. An apparatus according to any of claims 20 to 36 wherein the surface area is the hindquarters.
39. A method according to claim 4 wherein the step of
generating a digital image comprises:
determining the position of each camera used to capture the camera image;
determining any movement of the animal during capture of the camera image;
generating the digital image in dependence on the position of each camera and any movement of the animal.
40. An apparatus according to claim 23 further comprising:
means for determining the position of each camera used to create data;
means for determining any movement of the animal during capture of the data;
wherein the means for converting the camera data into point cloud data converts the data in dependence on the position of each camera and any movement of the animal.
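Purely to illustrate the kind of conversion claims 39 and 40 describe, the sketch below fuses per-camera point clouds into a common frame using assumed 4x4 camera poses and per-capture motion corrections; how those poses and motion estimates are obtained is outside the sketch and is not specified by the claims.

import numpy as np

# Illustrative sketch only: per-camera clouds are mapped into a common frame
# using assumed camera poses, then corrected for the animal's movement.
def fuse_clouds(clouds, camera_poses, motion_corrections):
    """clouds[i]: (N_i, 3) points in camera i's frame; camera_poses[i]: 4x4
    camera-to-world transform; motion_corrections[i]: 4x4 transform undoing
    the animal's movement between capture i and the reference capture."""
    fused = []
    for pts, cam_T, move_T in zip(clouds, camera_poses, motion_corrections):
        homogeneous = np.hstack([pts, np.ones((len(pts), 1))])
        fused.append((move_T @ cam_T @ homogeneous.T).T[:, :3])
    return np.vstack(fused)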
41. An apparatus for imaging an animal comprising:
a rotatable frame, the frame comprising an imaging cavity suitable for locating an animal for imaging;
multiple camera control mechanisms, the camera control mechanisms suitable for changing the location of a camera within the frame;
the frame being rotatable between a first position for receiving an animal and a second position for exiting an animal.
42. An apparatus according to claim 41 comprising multiple cameras, the position of each camera being controlled by one of the multiple camera control mechanisms; the camera control mechanisms configured to control the position of the cameras.
43. An apparatus according to claim 42 wherein the cameras are configured to create data representing an animal located within the imaging cavity.
44. An apparatus according to claim 43 wherein the data comprises data representing the surface area of the animal, the data including depth data;
the apparatus further comprising means for converting the camera data into a point cloud representation of the animal.
45. An apparatus according to claim 44 further comprising the apparatus for estimating the physical traits of the animal according to any of claims 20 to 36.
46. An apparatus according to claim 44 or 45 wherein the means for converting the camera data into a point cloud representation of the animal comprises:
means for determining the position of each camera during creation of data;
means for determining any movement of the animal during creation of data;
means for generating a three dimensional point cloud representation of the animal, accounting for position of the cameras and movement of the animal.
47. An apparatus according to any of claims 41 to 46, the apparatus being aligned with a processing chain, the processing chain carrying an animal, wherein in the first position the apparatus is configured to align with the processing chain to receive an animal and in the second position the apparatus is configured to align with the processing chain to exit the animal.
48. An apparatus according to claim 47 further comprising:
detector means to detect that an animal has been received into the imaging cavity;
scanning controller to control the multiple camera control mechanisms to capture data of the animal;
frame controller to control movement of the frame to the second position.
49. An apparatus according to any of claims 18 to 36, 38 or 40 to 48 wherein the animal is a carcass.
50. An apparatus according to any of claims 18 to 36, 38 or 40 to 48 wherein the animal is a live animal.
51. A method for imaging an animal comprising the steps of:
positioning an imaging apparatus to receive an animal;
initiating a scanning controller to control multiple camera control mechanisms to capture data relating to the animal;
detecting that the data capture is complete; and positioning the imaging apparatus to exit the animal.
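As a non-authoritative illustration of the sequence of steps in the imaging method above, the small control loop below uses hypothetical `frame` and `scanner` interfaces; none of these names or methods come from the specification.

import time

# Illustrative sketch only: the claimed sequence of steps as a control loop.
def image_animal(frame, scanner, poll_interval=0.5):
    frame.move_to_receive()              # position the apparatus to receive the animal
    scanner.start()                      # scanning controller drives the camera mechanisms
    while not scanner.is_complete():     # wait until data capture is complete
        time.sleep(poll_interval)
    frame.move_to_exit()                 # position the apparatus to exit the animal
    return scanner.data()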
52. A method according to claim 51 wherein the step of positioning comprises rotating the imaging apparatus.
53. A method for imaging an animal comprising the steps of :
positioning an imaging apparatus to receive an animal;
initiating a scanning controller to control multiple camera control mechanisms to capture data relating to the animal;
detecting that the data capture is complete; and positioning the imaging apparatus to exit the animal.
54. A method according to any of claims 51 to 54 wherein the scanning controller is arranged to move the cameras within the imaging apparatus.
55. A method according to any of claims 51 to 55 wherein the step of capturing data comprises capturing camera data representing the surface area of the animal, the data including depth data.
56. A method according to claim 55 comprising the further step of converting the camera data into a point cloud representation of the animal.
A method according to any of claims 51 to 57 further comprising estimating the physical traits of the animal according to any of claims 1 to 18.
59. A method according to claim 57 or 58 wherein the step of converting the camera data into a point cloud representation of the animal comprises:
determining the position of each camera during creation of data;
determining any movement of the animal during
creation of data;
generating a three dimensional point cloud
representation of the animal, accounting for position of the cameras and movement of the animal.
60. An apparatus according to any of claims 1 to 58 wherein the physical traits comprise at least one of: muscle score;
P8 fat depth;
hip height;
fat;
body condition score.
PCT/AU2015/000490 2014-08-13 2015-08-13 3d imaging WO2016023075A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2014903163A AU2014903163A0 (en) 2014-08-13 3d imaging
AU2014903163 2014-08-13

Publications (1)

Publication Number Publication Date
WO2016023075A1 true WO2016023075A1 (en) 2016-02-18

Family

ID=55303694

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2015/000490 WO2016023075A1 (en) 2014-08-13 2015-08-13 3d imaging

Country Status (1)

Country Link
WO (1) WO2016023075A1 (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4284034A (en) * 1980-04-30 1981-08-18 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Biocentrifuge system capable of exchanging specimen cages while in operational mode
US4939574A (en) * 1987-12-22 1990-07-03 Slagteriernes Forskningsinstitut Method and apparatus for classifying livestock carcasses and in particular cattle carcasses using a data processing system to determine the properties of the carcass
EP0321981B1 (en) * 1987-12-22 1993-03-24 Slagteriernes Forskningsinstitut Method and apparatus for the determination of quality properties of individual cattle carcasses
US5194036A (en) * 1991-02-14 1993-03-16 Normaclass Method for grading carcasses of large cattle, calves, hogs, or sheep and device for implementation thereof
US5327852A (en) * 1993-08-16 1994-07-12 Gingrich Jerry L Animal photo studio
WO1998008088A1 (en) * 1996-08-23 1998-02-26 Her Majesty The Queen In Right Of Canada, As Represented By The Department Of Agriculture And Agri-Food Canada Method and apparatus for using image analysis to determine meat and carcass characteristics
US6383069B1 (en) * 1998-02-20 2002-05-07 Stork Gamco Inc. Methods and apparatus for performing processing operations on a slaughtered animal or part thereof
WO2001033493A1 (en) * 1999-10-29 2001-05-10 Pheno Imaging, Inc. System for measuring tissue size and marbling in an animal
WO2001058270A1 (en) * 2000-02-14 2001-08-16 Australian Food Industry Science Centre Animal handling apparatus
US7399220B2 (en) * 2002-08-02 2008-07-15 Kriesel Marshall S Apparatus and methods for the volumetric and dimensional measurement of livestock
US6810832B2 (en) * 2002-09-18 2004-11-02 Kairos, L.L.C. Automated animal house
WO2005034618A1 (en) * 2003-10-10 2005-04-21 Ab Svenska Mätanalys Method and device for the monitoring of pigs
US8660631B2 (en) * 2005-09-08 2014-02-25 Bruker Biospin Corporation Torsional support apparatus and method for craniocaudal rotation of animals
WO2010063527A1 (en) * 2008-12-03 2010-06-10 Delaval Holding Ab Arrangement and method for determining a body condition score of an animal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HETZEL, G. ET AL.: "3D Object Recognition from Range Images using Local Feature Histograms", Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 2, 2001, pages II-394 - II-399, ISBN: 0-7695-1272-0 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017030448A1 (en) * 2015-08-17 2017-02-23 Livestock Improvement Corporation Limited Method and apparatus for evaluating an animal
EP3567551A1 (en) * 2018-05-10 2019-11-13 Instytut Biotechnologii Przemyslu Rolno-Spozywczego Method of analyzing three-dimensional images for the purpose of animal carcass assessment
CN112313703B (en) * 2018-06-15 2024-05-28 宝马股份公司 Incremental Segmentation of Point Clouds
CN112313703A (en) * 2018-06-15 2021-02-02 宝马股份公司 Incremental segmentation of point clouds
JP7405412B2 (en) 2019-02-28 2023-12-26 国立研究開発法人農業・食品産業技術総合研究機構 3D measurement system, 3D measurement device, 3D measurement method, and 3D measurement program
JP2020144122A (en) * 2019-02-28 2020-09-10 国立研究開発法人農業・食品産業技術総合研究機構 Three-dimensional measurement system, three-dimensional measurement device, three-dimensional measurement method, and three-dimensional measurement program
CN110400310A (en) * 2019-07-31 2019-11-01 宁夏金宇智慧科技有限公司 A kind of milk cow body condition Auto-Evaluation System
CN111914946A (en) * 2020-08-19 2020-11-10 中国科学院自动化研究所 Countermeasure sample generation method, system and device for outlier removal method
CN111914946B (en) * 2020-08-19 2021-07-06 中国科学院自动化研究所 Countermeasure sample generation method, system and device for outlier removal method
CN112233084B (en) * 2020-10-13 2022-02-08 深圳度影医疗科技有限公司 Ultrasonic image processing method, ultrasonic image processing apparatus, and computer-readable storage medium
CN112233084A (en) * 2020-10-13 2021-01-15 深圳度影医疗科技有限公司 Ultrasonic image processing method, ultrasonic image processing apparatus, and computer-readable storage medium
CN114491109A (en) * 2022-01-21 2022-05-13 河北地质大学 Fossil sample database system
CN114491109B (en) * 2022-01-21 2022-10-21 河北地质大学 Fossil sample database system
WO2023244195A1 (en) * 2022-06-16 2023-12-21 Cowealthy Teknoloji Anonim Sirketi A system for determining the animal's body condition score
EP4403027A1 (en) * 2023-01-18 2024-07-24 Youdome Sarl Scanning system and scanning method for recording animal measurements
WO2024153661A1 (en) * 2023-01-18 2024-07-25 Youdome Sarl Scanning system and scanning method for recording animal measurements

Similar Documents

Publication Publication Date Title
WO2016023075A1 (en) 3d imaging
Qiao et al. Intelligent perception for cattle monitoring: A review for cattle identification, body condition score evaluation, and weight estimation
AU2019283978B2 (en) System and method for identification of individual animals based on images of the back
US10249054B2 (en) Method and device for automated parameters calculation of an object
US8971586B2 (en) Apparatus and method for estimation of livestock weight
CA2744146C (en) Arrangement and method for determining a body condition score of an animal
Wu et al. Extracting the three-dimensional shape of live pigs using stereo photogrammetry
US6974373B2 (en) Apparatus and methods for the volumetric and dimensional measurement of livestock
Liu et al. Automatic estimation of dairy cattle body condition score from depth image using ensemble model
US20050011466A1 (en) System and method for measuring animals
Pallottino et al. Comparison between manual and stereovision body traits measurements of Lipizzan horses
Pérez-Ruiz et al. Advances in horse morphometric measurements using LiDAR
US20140088939A1 (en) Evaluation of animal products based on customized models
CN111386075A (en) Livestock weight measuring system and livestock weight measuring method using same
Tscharke et al. Review of methods to determine weight and size of livestock from images
Lu et al. Extracting body surface dimensions from top-view images of pigs
Ling et al. Point cloud-based pig body size measurement featured by standard and non-standard postures
Alempijevic et al. Lean meat yield estimation using a prototype 3D imaging approach
Li et al. A posture-based measurement adjustment method for improving the accuracy of beef cattle body size measurement based on point cloud data
Zhao et al. Review on image-based animals weight weighing
US20230342902A1 (en) Method and system for automated evaluation of animals
KR102131559B1 (en) Gun type livestock weighing apparatus and a livestock weighing method using the same
Tedin et al. Towards automatic estimation of the body condition score of dairy cattle using hand-held images and active shape models
Battiato et al. Assessment of cow’s body condition score through statistical shape analysis and regression machines
EP3567551A1 (en) Method of analyzing three-dimensional images for the purpose of animal carcass assessment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15832598

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15832598

Country of ref document: EP

Kind code of ref document: A1