EP2972073A1 - Apparatus for volumetrically measuring an object in the body of an animal for slaughter - Google Patents
Apparatus for volumetrically measuring an object in the body of an animal for slaughter
- Publication number
- EP2972073A1 (application EP14723314A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- depth
- camera
- carcass
- depth camera
- coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- A—HUMAN NECESSITIES
- A22—BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
- A22B—SLAUGHTERING
- A22B5/00—Accessories for use during or after slaughtering
- A22B5/0064—Accessories for use during or after slaughtering for classifying or grading carcasses; for measuring back fat
- A22B5/007—Non-invasive scanning of carcasses, e.g. using image recognition, tomography, X-rays, ultrasound
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/14—Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/22—Measuring arrangements characterised by the use of optical techniques for measuring depth
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/245—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/28—Measuring arrangements characterised by the use of optical techniques for measuring areas
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Definitions
- the invention relates to a device for measuring a carcass object, in particular for assessing a slaughter yield.
- a known procedure is, for example, to hang the carcass on a weighing device, to determine its weight and to make a statement about the existing volume of the carcass and on the expected yield based on the weight and previously defined model data.
- the carcass body can also be detected by means of a tomographic method, and the disc-shaped segments of the carcass obtained in this way are assembled into a virtual model.
- the object of the present invention is therefore to provide a device for the volumetric measurement of a carcass object, which enables a correct determination of volumes of the carcass object and a reliable assessment of an expected slaughter yield with little effort and comparatively low costs.
- carcass objects in the sense of the solutions according to the invention may in particular be whole carcasses, carcass halves or parts such as ham.
- the device for measuring the carcass object has a first depth camera with a first depth-camera detection area, in which a portion of a surface of the carcass object can be optically detected on a first side and in which spatial coordinates of pixels on the first side of the carcass object can be detected.
- the portion of the surface of the first side may be both a partial area of the surface and the entire surface of the first side.
- the spatial coordinates of the detected pixels are composed of their surface coordinates (x, y) and a depth value (z).
- the first depth camera is further capable of providing the spatial coordinates of the acquired pixels in the portion of the surface of the first side as transferable spatial coordinate data.
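The composition of each pixel's spatial coordinates from surface coordinates (x, y) and a depth value (z) can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure; the pinhole intrinsics (fx, fy, cx, cy) are assumed values:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert a depth image (z per pixel) into (x, y, z) spatial
    coordinates using a simple pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx   # lateral coordinate from column index
    y = (v - cy) * depth / fy   # vertical coordinate from row index
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

# tiny 2x2 depth image, all points 1 m in front of the camera
depth = np.ones((2, 2))
pts = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

The resulting array holds, per pixel, exactly the tuple (x, y, z) that the text describes as transferable spatial coordinate data.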
- the device according to the invention has a second depth camera with a second depth camera detection range, in which a portion of a surface of the carcass object can be optically detected on a second side and in which spatial coordinates of pixels on the second side of the carcass object can be detected.
- the portion of the surface of the second side may be both a partial area of the surface and the entire surface of the second side.
- the spatial coordinates of the captured pixels are composed of their area coordinates (x, y) and a depth value (z).
- the second depth camera is likewise capable of transferring the spatial coordinates of the detected pixels in the section of the surface of the second side as spatial coordinate data.
- the device according to the invention has a positioning device for positioning the first depth camera relative to the second depth camera, which fixes the depth-camera detection areas of the first and second depth cameras relative to each other.
- the relative positioning of the depth cameras is such that their optical axes are anti-parallel to each other and that, with a corresponding size of the carcass object and its sufficiently central positioning between the depth cameras, the detection range of the first depth camera is covered by the carcass object to such an extent that the second depth camera does not affect the first depth camera within its detection range. The same applies vice versa, i.e. the first depth camera does not affect the second depth camera in its detection range.
- the carcass object is preferably guided past the depth cameras by means of a transport system in such a way that it crosses the depth-camera detection areas of the depth cameras.
- roller hooks or conveyor belts come into consideration as a transport system for such carcass objects.
- the capture of the pixels by the first and the second depth camera is carried out according to the invention in real time and simultaneously. Simultaneously means in this context that between the detection by the first and the second depth camera no movement, or only such a sufficiently small movement, of the carcass object occurs that a combination of the spatial coordinates (x, y, z) of the captured pixels of both depth cameras in a common spatial coordinate system remains possible.
- the real-time capability of the two depth cameras in particular requires a high image-recording speed, i.e. the depth cameras are capable of simultaneously capturing spatial coordinates in the depth-camera detection areas.
- the device according to the invention also has an evaluation unit, which is connected to the first and second depth camera.
- the connection between the evaluation unit and the depth cameras can be configured both wired and wireless and enables the transmission of the spatial coordinate data to the evaluation unit.
- the evaluation unit is able to detect the spatial coordinate data provided by the first and second depth cameras and to combine the acquired spatial coordinate data in a common spatial coordinate system to form combined spatial coordinate data.
- the common spatial coordinate system represents a three-dimensional coordinate system, in particular a Cartesian coordinate system with the directional axes x, y, z, in which at least the position of the two depth cameras and their alignment with one another are known.
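Combining the data of both cameras presupposes that their positions and alignment are known, as the text states. A minimal sketch, under the assumption of exactly anti-parallel optical axes and an invented 2 m camera spacing, of mapping the second camera's points into the common (first-camera) coordinate system:

```python
import numpy as np

# Hypothetical extrinsics: the second camera faces the first
# (anti-parallel optical axes), 2 m away along z. A 180-degree
# rotation about the y axis plus a translation maps its points
# into the common coordinate system.
R = np.array([[-1.0, 0.0,  0.0],
              [ 0.0, 1.0,  0.0],
              [ 0.0, 0.0, -1.0]])
t = np.array([0.0, 0.0, 2.0])

def to_common(points_cam2, R=R, t=t):
    """Map Nx3 points from the second camera's frame into the
    common spatial coordinate system."""
    return points_cam2 @ R.T + t

# a point 0.5 m in front of camera 2 lies 1.5 m in front of camera 1
p = to_common(np.array([[0.0, 0.0, 0.5]]))
```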
- the evaluation unit is particularly advantageous in being able to provide a surface model of the carcass object from the spatial coordinate data combined in the common spatial coordinate system.
- the combined space coordinate data of the first and second sides of the carcass object are meshed with each other so as to generate a net-like surface model of the carcass object.
- volumes of the carcass object are then calculated.
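One common way to compute the volume of such a closed, net-like surface model is to sum signed tetrahedron volumes over its triangles (divergence theorem). The patent does not prescribe a specific algorithm, so the following is only an illustrative sketch, verified on a unit tetrahedron:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed triangle mesh: the magnitude of the
    sum of signed tetrahedron volumes spanned by each face and the
    origin (faces must be consistently oriented)."""
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for a, b, c in faces:
        total += np.linalg.det(np.array([v[a], v[b], v[c]]))
    return abs(total) / 6.0

# unit tetrahedron: analytic volume 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
vol = mesh_volume(verts, faces)  # -> 1/6
```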
- the amount of space coordinate data required is preferably selected such that a sufficiently accurate determination of the relevant volumes of the slaughter animal body object is ensured.
- the device according to the invention thus makes it possible to determine the volumes of a carcass object in a particularly simple manner, whereby a significantly improved measuring accuracy and a higher throughput as well as lower costs for measuring can be achieved compared to conventional methods.
- maintaining a correct distance of the slaughter animal body object from the depth cameras is not mandatory, since the distance information can already be provided by the depth value per se. This eliminates the need for additional equipment required for an exact positioning of the carcass object.
- geometric models are stored in the evaluation unit. These are mathematical abstractions of parts of a normative carcass object, such as a hind leg of a pig half. Such a model is not necessarily fixed in scale and was formed from averages determined by dissection experiments. The geometric model is therefore not determined by the carcass object to be measured.
- the surface model, by contrast, is an abstraction formed from the particular carcass object to be measured. If points of a geometric model defined on the carcass object are detected by the device as measuring points, the volume of the part of the carcass corresponding to the geometric model, for example a hind leg of a pig half, can be determined from the locations of the measuring points together with the geometric model. This is then a partial volume of the carcass object.
- the surface model can also be included, on the one hand to calculate ratios of volume and part volume, and on the other to support the recognition of the defined points of a geometric model.
- this advantageous refinement is based, in particular, on the fact that the apparatus according to the invention enables striking structures on the surface of one side of the slaughter animal body object to be ascertained by means of the first or second depth camera and the spatial coordinate data provided by it and transferred into the common spatial coordinate system.
- Such distinctive structures may be, for example, forelimbs and / or hind legs of the slaughtered animal, which, by their shape, stand out accordingly from the surface of the slaughtered animal body object.
- the measuring points are determined and assigned to the defined points of the relevant geometric model.
- the device additionally has at least one image camera.
- the image camera has an image camera detection area in which a relevant portion of a surface of the carcass object on the first side is optically detectable and in which light intensity values of pixels and their area coordinates on the surface of the carcass object on the first side are detectable.
- the image camera detection area is designed, for example, such that in the relevant section the entire surface of the carcass object on the first side can be detected.
- depending on the application, it is also possible for only a partial area of the surface of the first side of the carcass object to be detected in the image-camera detection area.
- this is the case, in particular, when the carcass object is a carcass half.
- a positioning of the carcass object according to the invention is carried out such that the relevant portion of the surface of the carcass object on the first side at least sufficiently faces the image camera.
- the image camera is formed by a 2D camera according to the invention and makes it possible to detect light intensity values (g) of pixels within the image camera detection area and the area coordinates (x, y) of the pixels on the relevant portion of the surface of the carcass object on the first side.
- the detection of the light intensity values can be provided, for example, in a known manner by determining grayscale values.
- for example, a light grayscale value can be output for fat tissue and a dark grayscale value for the meat tissue present.
- the image camera is preferably oriented such that its center axis, hereinafter also referred to as normal, is largely arranged at a right angle to the axis of movement of the carcass object.
- the center axis represents the optical axis of the image camera
- the axis of movement of the carcass object represents the axis along which the carcass object is moved through the image-camera detection area and the depth-camera detection areas.
- the image camera according to the invention can provide the light intensity values of the pixels and the surface coordinates associated with them as transferable light-intensity-value data.
- according to this development of the invention, the positioning device fixes the position of the image camera relative to the first depth camera so that the image-camera detection area and the first depth-camera detection area overlap, at least partially, in a common detection area, such that the pixels on the relevant section of the surface to be evaluated by the evaluation unit lie in the common detection area.
- the first depth detection area and the image camera detection area can be either partially horizontally or vertically overlapping.
- the detection ranges of the first depth camera and the image camera and their positioning relative to each other are determined so that the common detection range is as large as possible to exploit the resolution of the first depth camera and image camera as well as possible.
- the detection of the pixels on the surface of the carcass object on the first side by the image camera and by the first depth camera according to the invention is carried out in real time and simultaneously.
- the real-time capability of the depth camera requires a high image recording speed, ie, the first depth camera is capable of simultaneously detecting spatial coordinates in the first depth camera detection area, which can be ensured, for example, by TOF cameras.
- the image camera according to the invention is likewise connected to the evaluation unit, wherein the evaluation unit detects and processes the light intensity value data provided by the image camera.
- connection between the image camera and the evaluation unit can likewise be wired or wireless and enables the transmission of the light intensity value data to the evaluation unit.
- the evaluation unit is able to associate the light intensity value data of pixels provided by the image camera with the spatial coordinate data provided by the first depth camera of pixels having matching surface coordinates (x, y).
- the data provided by the image camera and the first depth camera contain pixels in the common detection area for which, according to the invention, both the area coordinates (x, y) and the light intensity value (g) and the depth value (z) are detected, and for which the area coordinates from the light-intensity-value data are identical with the area coordinates from the spatial coordinate data.
- the associated light intensity value and space coordinate data are particularly advantageously provided as data tuples (x, y, z, g).
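The data tuples (x, y, z, g) can be formed by joining the two data sets on matching surface coordinates; the coordinate and intensity values below are invented for illustration:

```python
# Merge depth and image data into (x, y, z, g) tuples keyed by
# matching area coordinates (x, y). Values are illustrative only.
depth_data = {(0, 0): 1.2, (0, 1): 1.25, (1, 0): 1.22}   # (x, y) -> z
image_data = {(0, 0): 200, (0, 1): 35, (1, 0): 180}      # (x, y) -> g

tuples = [(x, y, depth_data[(x, y)], g)
          for (x, y), g in image_data.items()
          if (x, y) in depth_data]   # keep only pixels with both values
```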
- the evaluation unit is furthermore able to identify defined measuring points on the surface of the carcass object on the first side from the light-intensity-value data provided by the image camera, the first side preferably being a split side of a carcass half.
- the identification of measuring points means that characteristic structures on the preferably gap-side surface of the carcass object, for example muscles, fat tissue or bones, are recognized by the evaluation unit by means of image analysis and object recognition. For this purpose, computationally different tissue areas are detected and selected on the basis of the light intensity value differences in order to determine the contours of muscle, fat and bone by means of a contour tracking algorithm.
- the surface coordinates of these points are determined by the evaluation unit as measuring points. They form the basis for further measurement of the carcass object and evaluation of the expected slaughter yield.
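The tissue discrimination by light intensity described above can be sketched with simple intensity thresholds. The threshold values, the bone class, and the tiny test image are assumptions for illustration; a real system would follow this with the contour-tracking step mentioned in the text:

```python
import numpy as np

# Illustrative grayscale split-side image: dark = meat, light = fat,
# lightest = bone. Values and thresholds are assumed, not from the patent.
gray = np.array([[ 40,  45, 200],
                 [ 50, 210, 250],
                 [ 60, 205, 245]])

tissue = np.zeros_like(gray)     # class 0 = meat (dark)
tissue[gray >= 180] = 1          # class 1 = fat (light values)
tissue[gray >= 240] = 2          # class 2 = bone (lightest values)

n_meat = int((tissue == 0).sum())
```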
- distances of the measuring points in space on the carcass object can thus be determined, wherein the determination of the distances of the measuring points on the relevant portion of the preferably split-side surface is carried out by integration of the spatial distances of sufficiently small partial distances of the total distance.
- areas within the relevant sections of the preferably split-side surface are determined via an integration of sufficiently small, spatially exactly calculated partial surfaces.
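Both integrations described above reduce, in the discrete case, to summing small partial segments and partial surfaces. An illustrative sketch with invented sample points:

```python
import numpy as np

def path_length(points):
    """Approximate the distance between measuring points along the
    surface by summing the spatial lengths of small partial segments."""
    p = np.asarray(points, dtype=float)
    return float(np.linalg.norm(np.diff(p, axis=0), axis=1).sum())

def triangle_area(a, b, c):
    """Exact area of one small partial surface (a triangle in space)."""
    a, b, c = map(np.asarray, (a, b, c))
    return 0.5 * float(np.linalg.norm(np.cross(b - a, c - a)))

# three points of a polyline and one partial surface (illustrative)
seg = path_length([(0, 0, 0), (1, 0, 0), (1, 1, 0)])   # two unit segments
area = triangle_area((0, 0, 0), (1, 0, 0), (0, 1, 0))  # half a unit square
```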
- the evaluation unit also combines the image data with different resolutions of the depth camera and the image camera in such a way that the light intensity value and the depth value are present for each pixel by means of the merged light intensity value data and spatial coordinate data in addition to the surface coordinates.
- the identified measurement points are assigned to the generated surface model of the carcass object based on their spatial coordinate data.
- partial volumes of the carcass object can be calculated, wherein the geometric dimensions are determined based on the spatial coordinates of the measuring points of the corresponding tissue area and wherein previously determined geometric models are stored in the evaluation unit, which describe the dependency of the tissue-area dimensions relative to the volume of the part of the carcass object and optionally to the total volume of the carcass object.
- the geometrical models are assigned to the respectively measured tissue area on the basis of the measured measuring points on the basis of points defined there and subsequently the expected partial volume for each relevant tissue area is determined from the assigned data.
- the geometric models are created on the basis of the organic structures of the carcass object, wherein the organic structures can be determined with high accuracy, for example by dissection experiments or computed tomography methods.
- the geometric models are to be understood as virtual components of the carcass object, whereby these virtual components can be assembled like a subassembly.
- the particular advantage of including the geometric models is that the data density and exactness achievable, in particular, by computed tomography methods are aggregated in the geometric models and can thus be incorporated into a real-time-capable system without requiring an "in-line" computed tomography, which is expensive, slow and associated with radiation emission, for each carcass.
- the spatial coordinate data obtained by the depth cameras provide a high reliability of assignment of the respective geometric models. Measuring points which have been determined on the basis of data tuples with light-intensity-value data and spatial coordinate data define certain sections of the carcass; for these sections, a surface model is formed from the spatial coordinate data and the volume of the section thus delimited is taken into account.
- an animal carcass half can, for example, be dissected into virtual slices transversely to the cleavage plane, whose thickness and position are defined by the length of a vertebra which was optically detected.
- the spatial dimensions of the surface sections belonging to the virtual disk are now determined.
- the volume of the virtual disc determined in this way allows an improvement of the accuracy of, for example, the lean meat content, since different breeds and genetics differ primarily in their overall proportions, but less in local relationships.
- a measurement of the carcass object with a volume determination of the relevant tissue areas can thus be carried out by a device according to the invention, and a reliable prediction of the expected slaughter yield can be made on the basis of the determined results.
- another advantage of the device consists in a high accuracy of measurement, since any existing irregularities, for example due to the positioning of a carcass object in distance and angle and, in the case of a split side of a carcass half as the first side, a possibly non-planar split-side surface, can be corrected by the detected depth values.
- the provision and application costs of such a device can be kept low by the components used according to the invention and a high throughput of slaughtered body objects to be measured can be ensured.
- owing to the inclusion of the respective depth value according to the invention, observing a precisely predetermined distance or a precisely predetermined angle between the carcass object and the device is not mandatory, since the distance information can already be provided by the depth value per se. This eliminates the need for additional equipment required for an exact positioning of the carcass object or for correcting deviations from planarity.
- the provision and operating costs of a device according to the invention are therefore comparatively low.
- the measurement can be carried out without contact, so that the hygienic risks posed by the additional devices for positioning carcass objects known in the prior art, as well as additional hygienic provisions, are avoided.
- the described advantages also apply to the measurement of other carcass objects, which can be transported, for example, on a conveyor belt.
- a conveyor belt does not have the problem of uncontrolled movements.
- the solution according to the invention also offers a particular advantage here, since the positioning of the carcass object relative to the conveyor belt, in particular transversely to the longitudinal extent of the conveyor belt, may be inaccurate because the distance information already exists due to the depth value. By the distance information, however, not only the position of the carcass object relative to the image camera, but also relative to the conveyor belt is known.
- the device also comprises means for illuminating the carcass object, wherein the light color is expediently chosen so that a good pixel detection by the image camera and the depth cameras is possible.
- the image camera is designed as a color camera.
- the use of a color camera makes it possible to record the light intensity values separately according to individual color channels, in particular red, green and blue (RGB), and to store the light intensity values separately according to color channels in the light intensity value data and transmit them to the evaluation unit.
- the light intensity values of the individual color channels can then be used for image analysis, whereby outlines of structures on the split-side surface of the carcass object can be better recognized.
- the carcass object is a carcass half which has a split side, the portion of the surface of the first side is the surface of the split side, and with the image-camera detection area and the first depth-camera detection area, in each case, the surface of the split side of the carcass half is optically detectable.
- the device further comprises an image camera which optically detects the split side, because the exposed organic structures facilitate the identification of defined measuring points by means of light-intensity-value data. In this case, coarse positioning ensures that the split side is aligned sufficiently with the image camera, while the advantage remains that exact positioning by distance and angle is not required.
- distinctive structures on the second side, i.e. the back of the carcass half, can be determined by means of the second depth camera and the spatial coordinate data provided by it and transferred into the common spatial coordinate system.
- Such prominent structures may be, for example, forelimbs and / or hind legs of the slaughtered animal, which by their shape stand out correspondingly from the back of the slaughtered animal body object.
- the depth value from the respectively determined spatial coordinates is used to identify measuring points on the relevant surface of a carcass object. In this way, in particular in the case of an uneven split-side surface of an animal carcass half, the measuring points can be better identified by including the depth-value information. This is especially the case when a characteristic structure, for example at the transition of the cutting plane into the abdominal cavity, can be better recognized from the depth-value information than from the light-intensity-value data.
- the detectable depth value thus assumes a double function: the spatial coordinates of the measuring points identified from the light-intensity-value data of the pixels are provided by it, and in addition it enables, or at least supports, the prior identification of the measuring points, in particular on the split-side surface of the carcass half.
- the depth information can also be used to delimit the carcass object from the background and thus to define its contour. Depth values that lie outside of a defined range, namely depth values above a certain value, are then assigned to the background by an evaluation algorithm per se, without it being necessary to include the light intensity value data for this purpose. This method makes it possible to obviate the background walls customary in the prior art.
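The background delimitation by depth alone can be sketched as a single threshold on the depth values; the cutoff of 2.0 m and the small sample depth image are assumed values:

```python
import numpy as np

# Depth values beyond a chosen cutoff are assigned to the background,
# without using any light intensity data. Values are illustrative.
depth = np.array([[1.1, 1.2, 3.5],
                  [1.0, 1.1, 3.6],
                  [3.4, 1.2, 3.5]])

BACKGROUND_CUTOFF = 2.0                  # assumed cutoff in metres
foreground = depth < BACKGROUND_CUTOFF   # True where the carcass object is

n_object_pixels = int(foreground.sum())
```

The boolean mask defines the contour of the carcass object, which is why the background walls customary in the prior art can be dispensed with.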
- a further advantageous development is based on a frequently occurring problem that the real surface shape and a model ideal shape of the carcass object do not match.
- the split-side surface in the ideal model shape is an exact plane.
- the model distances of measuring points are based on the ideal model shape. The deviation between the real surface shape and the ideal model shape causes an inaccuracy in the distances of the measuring points in space determined on the basis of the real surface shape.
- to correct this, the distance information already available from the depth camera, that is to say the z-value of the spatial coordinates, can be used.
- if the deviating areas are known, the depth values of points in these areas can be excluded from the outset, or weighted less, in the formation of the ideal model surface. If they are not known, points whose distance from the ideal model defined by the majority of the other points exceeds a defined value are detected and, on that basis, excluded or weighted less.
- the model adaptation and outlier detection can use known methods, for example RANSAC.
- the spatial coordinates of the determined measuring points are projected onto the ideal model surface in accordance with the model knowledge of the cause of the deviation.
- the determination of the distance of the measuring points in space then takes place.
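The outlier handling and projection described in this development can be sketched with a minimal RANSAC plane fit (RANSAC is named in the text) followed by a point-to-plane projection; the tolerance, iteration count and sample points are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_plane(points, n_iter=200, tol=0.01):
    """Fit an ideal model plane to noisy split-side points: repeatedly
    fit a plane to 3 random points and keep the candidate with the most
    inliers, so outliers do not distort the model surface."""
    pts = np.asarray(points, dtype=float)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(b - a, c - a)
        if np.linalg.norm(n) < 1e-12:     # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = np.abs((pts - a) @ n)         # point-to-plane distances
        inliers = int((d < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = (n, a), inliers
    return best

def project(points, plane):
    """Project measuring points onto the ideal model plane."""
    n, a = plane
    pts = np.asarray(points, dtype=float)
    return pts - np.outer((pts - a) @ n, n)

# mostly planar z=0 grid points plus one outlier at z=0.5
pts = [(x, y, 0.0) for x in range(4) for y in range(4)] + [(1.5, 1.5, 0.5)]
plane = ransac_plane(pts)
proj = project(pts, plane)
```

The determination of distances of the measuring points in space can then take place on the projected coordinates.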
- a particularly advantageous embodiment of the invention provides that the depth cameras are designed as TOF cameras (time of flight camera).
- a TOF camera, in a manner known per se, enables the determination of the distance between it and a detected object by means of a transit-time method.
- the detected object is illuminated by means of a light pulse, wherein the camera determines, for each illuminated pixel, the time that the light requires to travel to the object and back from it to the camera.
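The transit-time principle reduces to distance = speed of light × round-trip time / 2; a one-line illustration:

```python
# TOF distance from the measured round-trip time of a light pulse.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Distance is half the round trip multiplied by the speed of light."""
    return C * round_trip_seconds / 2.0

d = tof_distance(10e-9)  # a ~10 ns round trip corresponds to ~1.5 m
```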
- using a TOF camera is advantageous in many ways.
- TOF cameras generally have a simple structure and can therefore be provided relatively inexpensively.
- an advantageous variant of the device according to the invention provides at least one further depth camera with a further depth-camera detection area, in which a portion of a surface of the carcass object is optically detectable and in which further spatial coordinates of pixels, consisting of area coordinates and a depth value, can be detected, wherein the further spatial coordinates can be provided as transferable spatial coordinate data, wherein the evaluation unit detects the spatial coordinate data provided by the further depth camera and wherein the spatial coordinate data of the further depth camera are combined in the common spatial coordinate system, together with the spatial coordinate data of the first and the second depth camera, as combined spatial coordinate data.
- the further depth camera can be arranged in particular so that additional areas on the slaughter animal body object can be detected by the depth camera detection area. In this case, an increase in the detectable ranges and / or an improvement in the resolution can be achieved with the development.
- the arrangement of the further depth camera can also take place in such a way that, in contrast to the first and second depth camera, the carcass object is detected in the same area but at a different angle, so that, for example, concave formations in particular can likewise be detected.
- the depth camera detection range of the further depth camera and the depth camera detection range of the first or the second depth camera overlap at least in sections.
- it is also possible according to the invention to arrange the further depth camera in such a way that its depth camera detection area largely coincides with the depth camera detection area of the first or second depth camera, while the further depth camera offers a higher resolution. In this way, particularly relevant sections of the carcass object can, for example, be detected as cutouts with a correspondingly higher resolution.
- the space coordinate data provided by the further depth camera or the further depth cameras are detected and combined in the common space coordinate system, together with the space coordinate data of the first and the second depth camera, as combined space coordinate data and further processed according to the invention.
- a further advantageous embodiment of the device according to the invention provides that it has at least one further image camera with a further image camera detection area, in which a portion of a surface of the carcass object is optically detectable and in which light intensity values of pixels and their area coordinates can be detected, wherein the light intensity values and the assigned area coordinates can be provided transferably as light intensity value data, wherein the position of the further image camera relative to one of the depth cameras is determined by the positioning device such that the further image camera detection area and the depth camera detection area of that depth camera overlap at least partially in a further common detection area, and wherein the further image camera is connected to the evaluation unit, so that the light intensity value data provided by the further image camera can be detected and further processed by the evaluation unit.
- the detection and further processing of the light intensity value data is carried out analogously to the features listed and described in claim 3.
- the invention will be described as an embodiment with reference to
- Fig. 1 schematic representation with two depth cameras
- Fig. 2 schematic representation with an additional image camera
- shown is a device for measuring a carcass object 1 in the form of a carcass half.
- An inventive device for volumetric measurement of a carcass half has in an embodiment of FIG. 1, a first depth camera 2 and a second depth camera 3.
- the carcass half is presently located centrally between the depth cameras 2 and 3 and is guided past the depth cameras 2 and 3 along a movement axis gt by means of a transport device (not shown).
- the carcass half further includes a first side and a second side, wherein the first side is a gap side and the second side is a rear side opposite the gap side.
- the gap side is illustrated here by the plane axis gc.
- the depth cameras 2 and 3 each have a depth camera detection area, the depth camera detection area of the first depth camera 2 being illustrated by the detection angle αD1 and that of the second depth camera 3 by the detection angle αD2.
- the depth cameras 2 and 3 are positioned via a positioning device 4 in such a way that the depth camera detection areas face each other and that the depth cameras 2 and 3 form a common normal nD1,2.
- the depth cameras 2 and 3 are also particularly advantageously arranged so that the common normal nD1,2 is oriented perpendicular to the axis of motion gt of the carcass half.
- the depth cameras 2 and 3 in the present exemplary embodiments according to FIGS. 1 and 2 are designed as TOF cameras (Time Of Flight cameras).
- the depth cameras 2 and 3 are connected according to the invention to an evaluation unit 5.
- the depth cameras 2 and 3 according to the invention are capable of detecting, at least partially, the surface of the side of the slaughter animal body half facing them in the respectively assigned depth camera detection area.
- in the present case, the surface on the first side of the carcass half, and thus the gap-side surface, is detectable by the first depth camera 2, while the surface on the second side of the carcass half, and thus the back surface, is detectable by the second depth camera 3.
- spatial coordinates of pixels on the respective surface of the carcass half can also be detected by the depth cameras 2 and 3, wherein the space coordinates each consist of area coordinates (x, y) and a depth value (z).
- the detected spatial coordinates are transmitted by the depth cameras 2 and 3 to the evaluation unit 5.
- the space coordinates transmitted by the depth cameras 2 and 3 are detected by the evaluation unit 5 and further processed in such a way that the acquired space coordinates are combined in a common space coordinate system (not shown).
- the common spatial coordinate system is in the present case formed by a three-dimensional Cartesian coordinate system with the directional axes X, Y, Z, wherein the respective position and orientation of the depth cameras 2 and 3 are known within the common spatial coordinate system.
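Merging the acquired space coordinates into the common coordinate system can be sketched as a rigid transformation using each camera's known pose. This is an illustrative Python/NumPy fragment under the assumption of a calibrated rotation matrix `R` and translation vector `t` per camera; the function name is hypothetical, not from the patent:

```python
import numpy as np

def to_common_frame(points_cam, R, t):
    """Map an N x 3 array of camera-local space coordinates
    (x, y, depth z) into the common Cartesian system X, Y, Z,
    given the camera's rotation R and translation t in that system."""
    points_cam = np.asarray(points_cam, dtype=float)
    return points_cam @ np.asarray(R, dtype=float).T + np.asarray(t, dtype=float)
```

Applying this once per depth camera yields a single combined point cloud covering both the gap-side and the back surface of the carcass half.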
- a surface model (not shown) of the carcass half is created by the evaluation unit 5 within the common space coordinate system, wherein the space coordinates are meshed together to form a virtual network.
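One simple way to mesh the space coordinates into such a virtual network is to triangulate over the pixel grid. This is a sketch under the assumption that the pixels form a regular grid, which the patent does not prescribe; the function name is illustrative:

```python
def grid_mesh_triangles(rows, cols):
    """Mesh a rows x cols grid of surface points into a virtual
    network: each grid cell is split into two triangles, given
    as index triples into the flattened point list."""
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            tris.append((i, i + 1, i + cols))          # upper-left triangle
            tris.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
    return tris
```

For scattered rather than gridded points, a Delaunay triangulation over the area coordinates would serve the same purpose.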
- the device according to the invention additionally has an image camera 6 with an image camera detection area, illustrated here by the detection angle αRGB.
- the image camera 6 is in this case in particular positionable by means of the positioning device 4 such that the image camera detection area and the depth camera detection area of the first depth camera 2 at least partially overlap.
- the image camera 6 and the carcass half are arranged relative to one another such that the gap-side surface of the carcass half at least sufficiently faces the image camera 6, so that successful detection of the gap-side surface of the carcass half by the image camera 6 is ensured.
- the image camera 6 is preferably oriented such that its normal nRGB is aligned at a right angle to the axis of motion gt of the carcass half.
- the image camera 6 is designed as an RGB camera and according to the invention is able to at least partially detect the gap-side surface of the carcass half within the image camera detection area. Moreover, light intensity values (g) of pixels and their area coordinates (x, y) on the gap-side surface of the carcass half can be detected by the image camera 6 in the image camera detection area.
- the detected light intensity values and area coordinates are further combined by the image camera 6 into light intensity value data (x, y, g) and provided transferably.
- the image camera 6 is likewise connected to the evaluation unit 5, the evaluation unit 5 detecting and further processing the light intensity value data transmitted by the image camera 6.
- the further processing of the light intensity value data by the evaluation unit 5 takes place according to the invention in such a way that the evaluation unit 5 identifies defined measuring points P1, P2 on the basis of the light intensity value data of the detected pixels on the gap-side surface of the carcass half.
- the identification of measuring points here means that characteristic structures on the gap-side surface of the carcass half, for example muscles, adipose tissue or bone, are recognized by the evaluation unit 5 by means of image analysis and object recognition. For this purpose, different tissue areas are computationally detected and selected on the basis of the light intensity value differences in order to determine the contours of meat, fat and bone by means of a suitable image processing algorithm.
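The tissue detection based on light intensity differences can be illustrated, in much simplified form, by a threshold classification. The class labels and threshold values below are assumptions for illustration only; the patent leaves the concrete image processing algorithm open:

```python
import numpy as np

def segment_tissue(intensity, fat_threshold=180, bone_threshold=230):
    """Classify pixels by light intensity value g into illustrative
    tissue classes: 0 = muscle (dark), 1 = fat (brighter),
    2 = bone (brightest). Thresholds are made-up placeholders."""
    intensity = np.asarray(intensity)
    labels = np.zeros(intensity.shape, dtype=np.uint8)
    labels[intensity >= fat_threshold] = 1
    labels[intensity >= bone_threshold] = 2
    return labels
```

Contours of meat, fat and bone would then be traced along the boundaries between the labelled regions.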
- points are determined by the evaluation unit whose positional relationship to one another permits statements about quantities and qualities of the carcass half. The area coordinates of these points are determined by the evaluation unit in the present case as measuring points P1, P2.
- the measuring points P1, P2 are located here within the common detection area of the image camera 6 and the first depth camera 2.
- particularly advantageously, the evaluation unit 5 is able to assign to the measuring points P1, P2 both their light intensity value data and their spatial coordinates.
- the identified measuring points P1, P2 can particularly advantageously be assigned to the generated surface model by the evaluation unit 5, so that additionally relevant tissue areas on the gap-side surface of the carcass half can be displayed within the surface model.
- partial volumes of the carcass half can also be determined in a particularly advantageous manner.
- geometric models are stored in the evaluation unit 5, which include a dependence of the tissue area dimensions relative to the total volume of the carcass half.
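A stored geometric model of this kind could, in the simplest conceivable case, relate a measured tissue-area dimension proportionally to the total volume. The following sketch is an illustrative assumption, not the model actually stored in the evaluation unit 5:

```python
def partial_volume(total_volume, tissue_area, reference_area):
    """Estimate a partial volume of the carcass half by scaling the
    total volume with the ratio of the measured tissue area to a
    reference area taken from the geometric model. All parameter
    names and the proportional form are illustrative assumptions."""
    return total_volume * (tissue_area / reference_area)
```

In practice such models would be calibrated, for example against computed tomography reference measurements of carcass halves.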
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biophysics (AREA)
- Food Science & Technology (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Radar Systems Or Details Thereof (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE202013002484.4U DE202013002484U1 (en) | 2013-03-15 | 2013-03-15 | Apparatus for volumetric measurement of an ante-mortem object |
PCT/DE2014/000123 WO2014139504A1 (en) | 2013-03-15 | 2014-03-14 | Apparatus for volumetrically measuring an object in the body of an animal for slaughter |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2972073A1 true EP2972073A1 (en) | 2016-01-20 |
Family
ID=50693408
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP14723314.2A Withdrawn EP2972073A1 (en) | 2013-03-15 | 2014-03-14 | Apparatus for volumetrically measuring an object in the body of an animal for slaughter |
Country Status (6)
Country | Link |
---|---|
US (1) | US20160029648A1 (en) |
EP (1) | EP2972073A1 (en) |
BR (1) | BR112015023420A2 (en) |
CA (1) | CA2905538A1 (en) |
DE (2) | DE202013002484U1 (en) |
WO (1) | WO2014139504A1 (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102014011821A1 (en) | 2014-08-08 | 2016-02-11 | Cargometer Gmbh | Device and method for determining the volume of an object moved by an industrial truck |
DE202014006472U1 (en) * | 2014-08-13 | 2015-11-16 | Csb-System Ag | Device for stun control of a slaughtered animal |
US9872011B2 (en) * | 2015-11-24 | 2018-01-16 | Nokia Technologies Oy | High-speed depth sensing with a hybrid camera setup |
GB201620638D0 (en) * | 2016-12-05 | 2017-01-18 | Equi+Poise Ltd | A gait analysis system |
US20200077667A1 (en) * | 2017-03-13 | 2020-03-12 | Frontmatec Smørum A/S | 3d imaging system and method of imaging carcasses |
CN110189347B (en) * | 2019-05-15 | 2021-09-24 | 深圳市优博讯科技股份有限公司 | Method and terminal for measuring volume of object |
CN114295053B (en) * | 2021-12-31 | 2023-11-28 | 北京百度网讯科技有限公司 | Method and device for determining volume of material, equipment, medium and product |
KR102560534B1 (en) * | 2022-04-19 | 2023-07-26 | 건설기계부품연구원 | Automatic Molding Sand Feeding System |
NL2032013B1 (en) * | 2022-05-30 | 2023-12-12 | Marel Poultry B V | Method, device and system for measuring a poultry slaughter product. |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5412420A (en) * | 1992-10-26 | 1995-05-02 | Pheno Imaging, Inc. | Three-dimensional phenotypic measuring system for animals |
US5349378A (en) * | 1992-12-21 | 1994-09-20 | Robotic Vision Systems, Inc. | Context independent fusion of range and intensity imagery |
DE4408604C2 (en) * | 1994-03-08 | 1996-05-02 | Horst Dipl Ing Eger | Procedure for assessing carcasses |
DE10050836B4 (en) * | 1999-10-21 | 2005-06-30 | Axel Hinz | Method for determining commercial value of cuts of pig carcasses |
CA2369710C (en) * | 2002-01-30 | 2006-09-19 | Anup Basu | Method and apparatus for high resolution 3d scanning of objects having voids |
US7399220B2 (en) * | 2002-08-02 | 2008-07-15 | Kriesel Marshall S | Apparatus and methods for the volumetric and dimensional measurement of livestock |
DE102004047773A1 (en) | 2004-09-27 | 2006-04-06 | Horst Eger | Method for determining physiological quantities of an animal carcass |
US20080273760A1 (en) * | 2007-05-04 | 2008-11-06 | Leonard Metcalfe | Method and apparatus for livestock assessment |
DE102007036294A1 (en) * | 2007-07-31 | 2009-02-05 | Gea Westfaliasurge Gmbh | Apparatus and method for providing information about animals when passing through an animal passage |
CA2711388C (en) * | 2008-01-22 | 2016-08-30 | Delaval Holding Ab | Arrangement and method for determining the position of an animal |
WO2010063527A1 (en) * | 2008-12-03 | 2010-06-10 | Delaval Holding Ab | Arrangement and method for determining a body condition score of an animal |
KR101665567B1 (en) * | 2010-05-20 | 2016-10-12 | 삼성전자주식회사 | Temporal interpolation of three dimension depth image method and apparatus |
AU2012319199B2 (en) * | 2011-10-06 | 2017-06-01 | Delaval Holding Ab | Method and apparatus for detecting lameness in livestock |
US8787621B2 (en) * | 2012-06-04 | 2014-07-22 | Clicrweight, LLC | Methods and systems for determining and displaying animal metrics |
US9167800B2 (en) * | 2012-06-04 | 2015-10-27 | Clicrweight, LLC | Systems for determining animal metrics and related devices and methods |
- 2013-03-15 DE DE202013002484.4U patent/DE202013002484U1/en not_active Expired - Lifetime
- 2014-03-14 DE DE112014001369.2T patent/DE112014001369A5/en not_active Withdrawn
- 2014-03-14 EP EP14723314.2A patent/EP2972073A1/en not_active Withdrawn
- 2014-03-14 BR BR112015023420A patent/BR112015023420A2/en not_active Application Discontinuation
- 2014-03-14 WO PCT/DE2014/000123 patent/WO2014139504A1/en active Application Filing
- 2014-03-14 US US14/776,850 patent/US20160029648A1/en not_active Abandoned
- 2014-03-14 CA CA2905538A patent/CA2905538A1/en not_active Abandoned
Non-Patent Citations (2)
Title |
---|
None * |
See also references of WO2014139504A1 * |
Also Published As
Publication number | Publication date |
---|---|
DE112014001369A5 (en) | 2015-11-26 |
DE202013002484U1 (en) | 2014-06-17 |
BR112015023420A2 (en) | 2017-07-18 |
WO2014139504A1 (en) | 2014-09-18 |
CA2905538A1 (en) | 2014-09-18 |
US20160029648A1 (en) | 2016-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2972073A1 (en) | Apparatus for volumetrically measuring an object in the body of an animal for slaughter | |
DE102012112322B4 (en) | Method for optically scanning and measuring an environment | |
DE60312869T2 (en) | IMAGE RECORDING SYSTEM AND METHOD FOR EVALUATING THE PHYSICAL CONSTITUTION | |
DE102012112321B4 (en) | Device for optically scanning and measuring an environment | |
DE102014104712B4 (en) | Registration of a scene disintegrating into clusters with visualized clusters | |
DE112010002174B4 (en) | Method and device for a practical 3D vision system | |
DE102013017500B3 (en) | Method and apparatus for optically scanning and measuring a scene | |
DE202012104890U1 (en) | Device for optically scanning and measuring an environment | |
EP1190211A1 (en) | Method for optically detecting the shape of objects | |
DE102009051826A1 (en) | Method for comparing the similarity of 3D pictorial objects | |
DE102007054906A1 (en) | Method for optical measurement of the three-dimensional geometry of objects | |
DE102013225283B4 (en) | Method and device for capturing an all-round view | |
DE102014101587A1 (en) | Registration of a scene with consistency check | |
DE102013200329A1 (en) | Method and device for misalignment correction for imaging methods | |
EP2972071B1 (en) | Device for measuring a slaughter animal body object | |
EP2997543B1 (en) | Device and method for the parameterisation of a plant | |
DE10050836B4 (en) | Method for determining commercial value of cuts of pig carcasses | |
DD292976A5 (en) | METHOD FOR THE ANALYSIS OF SLAUGHTER BODY AGENTS BY IMAGE PROCESSING | |
DE102014108924B4 (en) | A semi-supervised method for training an auxiliary model for multi-pattern recognition and detection | |
DE112016004401T5 (en) | Project an image onto an irregularly shaped display surface | |
DE102013218047B3 (en) | Method for the automatic display and / or measurement of bone changes in medical image data, as well as medical imaging device and electronically readable data carrier | |
DE102017203048B3 (en) | A method of determining a projection data set, projection determining system, computer program product and computer readable storage medium | |
EP2636019A1 (en) | Method and evaluation device for determining the position of a structure located in an object to be examined by means of x-ray computer tomography | |
DE102012004064B4 (en) | Method and device for the non-destructive determination of the internal dimensions of shoes | |
DE102010042733A1 (en) | Capture and display of textured three-dimensional geometries |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012
| 17P | Request for examination filed | Effective date: 20151015
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
| AX | Request for extension of the european patent | Extension state: BA ME
| DAX | Request for extension of the european patent (deleted) |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS
| 17Q | First examination report despatched | Effective date: 20190328
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN
| 18W | Application withdrawn | Effective date: 20210423