DK181542B1 - Meat cut tracking system and method for tracking meat - Google Patents
- Publication number
- DK181542B1 (application DKPA202270476A)
- Authority
- DK
- Denmark
- Prior art keywords
- meat
- cut
- image
- meat cut
- data
Classifications
- A—HUMAN NECESSITIES
- A22—BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
- A22B—SLAUGHTERING
- A22B5/00—Accessories for use during or after slaughtering
- A22B5/0064—Accessories for use during or after slaughtering for classifying or grading carcasses; for measuring back fat
- A22B5/007—Non-invasive scanning of carcasses, e.g. using image recognition, tomography, X-rays, ultrasound
- A22C—PROCESSING MEAT, POULTRY, OR FISH
- A22C17/00—Other devices for processing meat or bones
- A22C17/10—Marking meat or sausages
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
Abstract
The invention relates to a method of tracking a part of a meat cut between a first position of a slaughterhouse process line and a second position. Said method comprises: associating meat cut data with said meat cut; establishing a first image of said meat cut at a first image generating station and associating said first image with the meat cut data; transporting said meat cut from said first image generating station to a meat cut generating station where said meat cut is cut into at least two meat cut parts; establishing a second image, at a second image generating station, of at least one of said at least two meat cut parts; and, when said second image is generated, associating said meat cut parts with said meat cut data based on information derived from said first and said second images.
Description
MEAT CUT TRACKING SYSTEM AND METHOD FOR TRACKING MEAT
[0001] The invention relates to a method and a system for tracking a meat cut in a slaughterhouse line.
[0002] In the art it is known to use vision-based tracking of meat cuts in a slaughterhouse. The prior art document titled "Vision-based method for tracking meat cuts in slaughterhouses", published in Meat Science, Volume 96, Issue 1, January 2014, pages 366-372, discloses how to use a computer vision system for recognizing meat cuts at different points along a slaughterhouse production line. Hence, it is disclosed to identify a meat cut between two photo sessions.
[0003] Further, prior art document WO2014/086822 discloses a system for automatically tracing food items such as meat cuts during packing, to be able to label them with information on the origin of the livestock, which slaughterhouse was used, etc. The system benefits from using "attribute determining means" in the form of a weigher. If several food items have a weight within a similarity threshold range, only one of these is released for moving towards a second weigher, to ensure that only one of these food items reaches the second weigher within the same correlation window. Thereby the disclosed system is able to recognise one food item from another. WO2014/086822 also mentions using a digital camera as attribute determining means, without specifying any details relating to its use or implementation.
[0004] Further, prior art document EP1939811 discloses a system for establishing blobs associated with objects converted from a source unit, together with information of the source unit. The system is able to assume that a source unit has been converted into additional objects and thus creates additional blobs for the new objects, to be able to continue the tracking.
[0005] A problem in the art is to maintain reliable traceability when the meat cut is separated into two or more cut parts, changes order on a conveyor, or is handled, such as being turned upside down.
[0006] The inventors have identified the above-mentioned problems and challenges related to tracking of meat cuts and have solved them with the present invention as described below.
[0007] In an aspect, the invention relates to a method of tracking a part of a meat cut between a first position of a slaughterhouse process line and a second position. Said method comprises: associating meat cut data with said meat cut; establishing a first image of said meat cut, including a predetermined region of said meat cut, at a first image generating station and associating said predetermined region of said first image with the meat cut data; transporting said meat cut from said first image generating station to a meat cut generating station where said meat cut is cut into at least two meat cut parts; establishing a second image, at a second image generating station, of at least one of said at least two meat cut parts, said second image including said predetermined region; and associating said at least one of said at least two meat cut parts with said meat cut data based on information derived from said predetermined region of both said first and said second images.
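To make the claimed flow concrete, the following is a minimal sketch of the method, not the patent's implementation. All names are illustrative assumptions, and region matching is stubbed out with exact array comparison; realistic matching techniques are discussed from paragraph [0127] onwards.

```python
"""Minimal sketch (not the patent's implementation) of the tracking method:
meat cut data is associated at the first image generating station via the
predetermined region, and re-associated with a cut part at the second
station. Matching is stubbed with exact array comparison for brevity."""
import numpy as np
from dataclasses import dataclass

@dataclass
class FirstImageRecord:
    meat_cut_data: dict      # veterinary data, farmer ID, weight, ...
    region: np.ndarray       # predetermined region cropped from the first image

def crop_region(image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the predetermined region (y0, y1, x0, x1) that stays intact
    even after the meat cut is cut into parts."""
    y0, y1, x0, x1 = box
    return image[y0:y1, x0:x1]

def associate_part(second_image: np.ndarray, box: tuple, records: list):
    """Re-associate a cut part with its meat cut data via the shared region."""
    probe = crop_region(second_image, box)
    for record in records:
        if record.region.shape == probe.shape and np.array_equal(record.region, probe):
            return record.meat_cut_data
    return None  # no match: route the part to manual inspection

# Toy usage: the second image of a cut part still shows the intact region.
first_image = np.arange(10000).reshape(100, 100)
records = [FirstImageRecord({"farmer": "A", "category": 1},
                            crop_region(first_image, (10, 40, 10, 40)))]
second_image = first_image.copy()            # stand-in for the cut part's image
print(associate_part(second_image, (10, 40, 10, 40), records))
```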
[0008] In an exemplary embodiment of the invention, said method further comprises the step of sorting at least one of said at least two meat cut parts based on said meat cut data.
[0009] Sorting based on meat cut data is advantageous in that the sorting is not made based on the visual appearance of the meat cut part but based on data associated with the meat cut. Hence, meat cut parts can be sorted according to percentage of fat, location / ID of the farmer, documentation of origin, etc. Further, different batches of meat cuts can be produced simultaneously, in that the sorting into one of two batches may be done based on meat cut data such as quality, weight, etc. of the meat cut part.
[0010] The meat tracking method of the present invention is advantageous in that it facilitates tracking a meat cut through a complete slaughterhouse process line, including processing steps such as cutting up the meat cut into two or more cut parts and rotating or turning these cut parts upside down. According to the present invention, this is possible completely independently of external markers such as stamps, physical markers, etc.
[0011] The meat tracking method of the present invention is furthermore advantageous in that it facilitates establishing a link between meat cut data stored in a slaughterhouse data system and the physical location of the cut part of the meat cut, i.e. the meat cut after it has been cut into parts.
[0012] The meat tracking method of the present invention is furthermore advantageous in that it is robust to disturbances in the sequence of the meat cuts / sliced parts on the conveyor. Hence, it is possible to distinguish one cut part from another and thereby link meat cut data to each individual cut meat part, even if two cut meat parts change position on the conveyor, if one cut meat part is added to or removed from the conveyor, if one cut meat part is rotated on the conveyor, etc.
[0013] Accordingly, the present invention is advantageous in that it solves the problem of automatic sorting of cut meat parts into categories, such as first and second categories, e.g. based on quality or appearance of the meat. This is possible because e.g. the veterinarian data is linked to the individual cut meat part. Because the individual meat cut is recognizable by the controller of the tracking system, the controller is able to control a robot arm or a conveyor mechanism to sort cut meat parts into different categories automatically.
[0014] The slaughterhouse processing line (sometimes simply referred to as the processing or process line) may start at the livestock arrival station where the living livestock are delivered to the slaughterhouse. With respect to the present invention, the relevant point to define as the start of the processing line is where the livestock is registered in the slaughterhouse data system. An alternative start position may be after the refrigerator, where the first image generating station is located.
[0015] The slaughterhouse data system (sometimes referred to simply as the data system) comprises a controller such as a programmable logic controller, a cloud computer or similar. The data system at least facilitates the tracking of the meat cut but may also control the handling stations of the process line. In this document, only the part of the data system concerning the meat cut tracking is referred to. This part includes at least the controller, data storage, image generators and identification tags.
[0016] The end of the processing line may be defined as where the individual parts of the livestock leave the slaughterhouse. Since not all parts of the slaughtered livestock leave the slaughterhouse through the same exit, so to speak, the processing line may be said to have part-specific endpoints. With respect to the present invention, the relevant point to define as the end of the processing line is where the part of the livestock is recognized / identified after having been treated (such as meat cutting, deboning, rib-top cutting, etc.). At this position, the part of the slaughtered livestock, and preferably also its physical location, e.g. on a carrier hook, is linked with or added to the meat cut data comprised by the slaughterhouse data system.
[0017] The treatment or handling of the meat cut downstream of the refrigerator may take place at one or more of the meat cut handling stations. A plurality of such meat cut handling stations with different purposes may be distributed in areas of the processing line referred to as the cutting area, deboning area and delivery area. In the cutting area, the individual meat cuts are divided into two or more cuts, preferably three cuts: ham, middle piece and front part. In the deboning area, the two or more parts are further processed, e.g. by cutting the three main parts into any given specification, such as the parts a consumer can buy in a supermarket or which are sent to a refinement site for further preparation.
[0018] The first position may be coincident with the first image generating station and the second position may be coincident with the second image generating station.
However, the first position may also be where the pig is registered for the first time in the slaughterhouse data system, and the second position may also be where the final cut parts of the meat cut exit the slaughterhouse processing line. Alternatively, the second position may be at a location outside the slaughterhouse, such as at a refinement site, where the meat cut is subject to refinement. Hence, the first and second positions may be defined at any location of the processing line / refinement process where the first position is upstream of the second position relative to the way the livestock travels through the processing line.
[0019] The image generating stations should be understood as stations on or at the process line that preferably do not physically interact with the meat cut to capture an image thereof. Hence, a typical image generating station includes a non-contact image generating apparatus that is able to generate an image of the meat cut.
[0020] The most important feature of such an image generating apparatus is that it should be able to capture a detailed image of a particular part of the meat cut as the meat cut passes the image generating station. As long as this criterion is fulfilled, the image generating apparatus could be of any type, including computer vision systems with associated image processing, spectral cameras, Kinect cameras and 2D / 3D cameras.
[0021] The meat cut handling station facilitates cutting the meat cut into two or more cut parts. It should be noted that sometimes a meat cut passes through a meat cut handling station without being cut. In such a situation, the meat cut may leave the processing line intact, e.g. as one complete half pig. As will be explained below, different types of meat cut handling stations may exist. These types include turning / rotating the meat cut or a cut part thereof, cutting out pieces which are of particular relevance or which should be avoided on a given part of the meat cut, or trimming to a desired thickness or the like. The cut handling stations may be manually operated, fully automated, e.g. by a collaborative robot, or semi-automated.
[0022] The meat cut is preferably a part of a carcass from livestock such as a pig. Hence, the meat cut may e.g. be the body of a pig, or a loin, ham, middle piece, front part, etc. cut out from the meat cut. Hence, when referring to a cut part or a sliced part of a meat cut, reference is typically made to one of a ham, middle piece (loin) or front piece of one half of a pig carcass. To avoid confusion, it should be noted that a reference to a meat cut may also be a reference to a cut part of a meat cut.
[0023] The region of the meat cut should be understood as a region of the meat side of the meat cut or of a cut part of the meat cut. This region should be in the line of sight of the image apparatuses of both the first and the second image generating station, independently of whether the meat cut is lying or hanging, or whether it has been cut into parts between the two image generating stations. In this way, it is always possible to capture an image of that region.
[0024] In an exemplary embodiment, said image generating stations capture at least one image of said meat cut.
[0025] Capturing only one image of each individual meat cut is advantageous in that the amount of data obtained is reduced, leading to a faster and more accurate recognition performed by the controller.
[0026] In an exemplary embodiment, said image generating stations capture three images of said meat cut.
[0027] Capturing three images of the individual meat cut is advantageous in that a higher resolution of individual regions of the meat cut is then obtained. Thereby, minor details, e.g. of the meat structure, are captured, leading to improved recognition of the individual cut parts of the meat cut. Further, less processing of each individual image is required.
[0028] In an exemplary embodiment of the invention, said first image generating station captures an image of said meat cut while said meat cut is hanging from a hanger.
[0029] Preferably, the first image is generated while the meat cut is transported from a refrigeration station of the process line towards a conveyor via which the meat cut is transported towards a meat cut handling station.
[0030] In an exemplary embodiment of the invention, said second image generation station captures said second image at the slaughterhouse processing line downstream of said meat cut handling station.
[0031] Having the second image generation station at the processing line is advantageous in that it has the effect that, when the meat cut leaves the slaughterhouse, the meat cut is associated with the meat cut data. Thus, any sorting of the meat cuts based on meat cut data can be made prior to the meat cut leaving the conveyor of the process line.
[0032] In an exemplary embodiment of the invention, said second image generating station captures an image of at least one cut part of said meat cut.
[0033] Preferably an image is generated of all cut parts of a meat cut. Hence, the second image is preferably generated after the meat cut has been cut at a meat cut handling station.
[0034] It should be noted that it may be advantageous to have several second image generating stations. Having several second images allows tracking all cut parts of a single meat cut, thus facilitating a faster tracking system. In fact, the amount of image data needed at a particular second image generating station may also be reduced, in that only the part of the image data from the first image generating station relevant to the particular cut part of the meat cut is required at a particular second image generation station.
[0035] Thus, the second image generating station may physically be implemented as several independent second image generation stations at different physical locations.
[0036] Further, it should be noted that a second image generation station may be located outside the slaughterhouse, such as at a refinement site. If the meat cut needs refinement which cannot be facilitated at the process line, the meat cut may be transported to a refinement site. At this site, the second image generation station may be positioned such that it is only at the refinement site that the meat cut is associated with the meat cut data.
[0037] Further, it should be noted that several additional image generating stations may be used if tracking is required outside the process line. Thus, if, in addition to tracking at the end of the process line, it is also required to track at a refinement site, then a third image generation station may be located at such a refinement site. When a processing line comprises a first and a second image generating station, the meat cut is associated with meat cut data before transportation to a refinement site. However, at the refinement site, it may be relevant to check, track or verify a meat cut. In this way, at the refinement site, the meat cuts and the associated meat cut data may be used to determine a certain type of refinement, labelling (when packed), etc.
[0038] In an exemplary embodiment, said meat cut imaged at said first image generation station is a half pig.
[0039] In an exemplary embodiment, said meat cut part imaged at said second image generation station is a front part, a middle cut or a ham of said half pig, and said meat cut tracking method comprises the step of associating meat cut data from said half pig with said front part, said middle cut and / or said ham based on image data from said first and from said second image generation stations.
[0040] In an exemplary embodiment, a loin of said middle cut is imaged at a third image generation station, and said meat cut tracking method comprises the step of associating meat cut data from said half pig with said loin based on image data from said first and / or said second image generation station combined with image data from said third image generation station.
[0041] Adding an extra image generation station is advantageous in that, if the meat cut is processed, such as separated, two or more times between the first and the second image generation station, the meat cut may be harder to recognize. Therefore, generating more images along the process line may lead to less change to the meat cut between image stations and thus easier recognition of the meat cut from image data generated at two subsequent image generating stations. Thus, such an extra / third image generating station can be seen as an intermediate tracking step which can be used as a link between image data from the first and third stations. The image data of the third station may be of a loin, i.e. a further prepared part of a middle cut. And since the loin is a further preparation of the middle cut, where e.g. the chine bones have been removed, the loin may be hard to recognize.
[0042] The third station may be located at / after a refinement station where cut parts may be further prepared, e.g. into loins etc.
[0043] The image data from all of the first, second and third image stations preferably include data from the same region of the meat cut.
[0044] In an exemplary embodiment of the invention, said first image and said second image are captured from different positions relative to said meat cut.
[0045] This is especially advantageous when the first image generating station is positioned in the process line between the refrigerator and the meat cut handling station. This is because, in an embodiment, at this transport part of the process line the meat cuts, more specifically the two halves of e.g. a pig carcass, are hanging side by side, and thus pictures of both halves can be taken more or less simultaneously at the same image generating station. Further along the process line, before its other end, the two meat cuts are separated and typically also each cut into two or more cut parts which are then transported on a conveyor. Thus, if sorting of the meat cuts into a first and a second category is made prior to hanging the individual cut parts on carrier hooks, the second image(s) have to be captured while the cut meat cuts are lying on the conveyor.
[0046] In an exemplary embodiment, said first and second images are images of the meat side of said meat cut.
[0047] This is advantageous in that the structure of a meat side of a meat cut is unique compared to the structure of the skin side of the meat cut. Hence, the likelihood of identifying one meat cut among a plurality of meat cuts based on imaging of the meat cuts is higher if the images reproduce the meat side.
[0048] In an exemplary embodiment, said first and second images capture the meat structure of the same part of said region of said meat cut.
[0049] Capturing the meat structure of the same region of the meat cut in the first and second images is advantageous in that the meat structure represents a unique identification of an individual meat cut. Hence, the meat structure may be referred to as a unique biometric identifier for a particular cut part of the meat cut.
[0050] In an exemplary embodiment, said meat structure is derived from said images by image processing including classification, regression and segmentation.
[0051] This is especially advantageous if only one image is captured, in that the meat structure of different parts of the meat cut then needs to be derived from the same picture.
[0052] In an exemplary embodiment of the invention, said region is a predetermined region.
[0053] The region of the meat cut which is imaged is preferably predetermined, in that in this way it can be ensured that the meat structure of this part remains intact throughout the process line and thus can be used to identify the meat cut when scanned at the second image generation station. Hence, it can be ensured that this particular region of the meat cut is not divided when the meat cut is cut into cut meat cut parts and thus that it remains intact.
[0054] Alternatively, the predetermined region may specify a part of the meat cut that is stamped, and thus such a stamp can be used instead of or in combination with the meat structure to identify the meat cut.
[0055] Alternatively, the predetermined region may be specified as a part of the meat cut that only becomes visible when the meat cut is cut. Hence, a first station may capture an image of a meat cut after it has been cut in two / divided / separated, where the predetermined region is a part of the meat cut which was not visible before the meat cut was separated, divided or cut in two.
[0056] Accordingly, the location of the predetermined region of the meat cut may be determined before the livestock is slaughtered.
[0057] In an exemplary embodiment of the invention, said meat structure of said predetermined region of said first image is selected and stored as first image data.
[0058] This is advantageous in that the meat structure of a part of the meat cut that is not cut through is then stored and associated with a particular cut part of the meat cut, thereby enabling tracking of the cut meat part also after the meat cut has been separated into two or more individual cut meat parts. This is done by capturing an image of the predetermined region and deriving the meat structure thereof at the second image generating station. In an embodiment, the meat structure, i.e. the meat structure of the meat in the predetermined region, would be suitable image data.
[0059] In an exemplary embodiment, said meat cut comprises a plurality of predetermined regions.
[0060] Defining more than one predetermined region is advantageous in that a plurality of parts of the meat cut, and thereby a plurality of individual cut meat parts, can be tracked. Hence, one captured image may comprise several, such as three, predetermined regions.
[0061] In an exemplary embodiment, said meat cut data include information related to at least one type of data from the list comprising: veterinary data, physical location in said system, physical location on a carrier hook of a Christmas tree, farmer data, weight, autoform data, sorting category, percentage of fat and percentage of meat.
[0062] In general, all data from the process line prior to the refrigerator is relevant as meat cut data. This data includes e.g. autoform data, i.e. data from inspection of the meat and the fat percentage of the meat cut. Sorting category is also an important data type which can be used towards the end of the process line to automate the sorting of meat parts.
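As a sketch of how the data types listed above could be held together in the data system, the record below uses illustrative field names; none of them are prescribed by the patent.

```python
"""Sketch of a meat cut data record holding the data types listed in
paragraph [0061]; all field names are illustrative assumptions."""
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MeatCutData:
    veterinary_data: dict = field(default_factory=dict)  # diseases, bruises, hair, ...
    physical_location: Optional[str] = None              # e.g. "conveyor 8"
    carrier_hook: Optional[str] = None                   # hook on a Christmas tree
    farmer_data: Optional[str] = None                    # origin / farmer ID
    weight_kg: Optional[float] = None
    autoform_data: dict = field(default_factory=dict)    # meat / fat inspection data
    sorting_category: Optional[int] = None
    fat_percentage: Optional[float] = None
    meat_percentage: Optional[float] = None

# The record is created at the first position and updated along the line.
record = MeatCutData(farmer_data="DK-1234", weight_kg=42.5, fat_percentage=14.2)
record.veterinary_data["defect"] = "bruise on ham"  # added at veterinarian control
record.sorting_category = 2                         # e.g. downgraded by the defect
```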
[0063] Veterinary data should be understood as information on the health of the livestock and thereby of the meat cut therefrom. Such data may include information related to diseases, bruises, hair, certain defects, etc. that may influence the final category of meat from a particular livestock. Such information is very valuable to be able to associate with a meat cut or parts thereof at the end of the process line. This is because the pricing of meat often depends on which category it belongs to, and thus the farmer who raised the livestock is also paid differently depending on which category the meat from his livestock ends up in.
[0064] Physical location in said system should be understood as where in the process line a particular meat cut or part thereof is located. Examples of physical locations could be the boning hall, refrigerator, conveyor, carrier hook of a Christmas tree, etc.
[0065] Farmer data should be understood as information on the farmer who raised the livestock. Farmer data is of particular relevance if payment to the farmer is made with respect to the quality of the meat cut. Hence, if e.g. a meat cut comprises bruises, the payment is not as high as if it did not. This veterinarian data is associated with the meat cut prior to the refrigeration and linked to the meat cut again after it has been cut into parts, based on the present inventive method.
[0066] In an exemplary embodiment of the invention, said veterinary data include the location of defects on the meat cut.
[0067] Defects may include undesired hair, bruises, visual deformities, etc. Being able to track which part of a meat cut suffers from such defects is advantageous in that it has the effect that only that specific cut part of the meat cut can be discarded or categorized in a second category.
[0068] In an exemplary embodiment, said meat cut data is updated with information of said physical location of said meat cut in said slaughterhouse process line.
[0069] This is advantageous in that it has the effect that information on the exact level of carrier hangers on the Christmas tree, and the exact carrier hanger of that level, is added to the meat cut data. In this way the tracking of the meat cut throughout the process line is completed.
[0070] In an exemplary embodiment of the invention, image data from said captured first image is divided into a plurality of data pools.
[0071] In an exemplary embodiment, the same first image data is comprised by a plurality of said plurality of data pools.
[0072] This is advantageous in that it has the effect that the data pools are rolling data pools, understood as follows: for each new image data captured, a new data pool is created comprising image data from e.g. the previous 5 to 10 meat cuts or parts of meat cuts passing the first image generating station. Image data from the following e.g. 5 to 10 meat cuts or parts of meat cuts are also added to that data pool. Hence, if it is decided to have image data from 17 first images, then 15 to 20 data pools may be established, to ensure having enough image data in the data pool to recognise image data even if a meat cut or part of a meat cut is removed from and subsequently added to the process line.
[0073] In an exemplary embodiment, the data size of said data pools corresponds to first image data from maximum 100 first images, preferably maximum 50 first images, most preferably maximum 25 first images.
[0074] Tests have indicated that a data pool comprising image data from between 10 and 25 meat cuts / parts of meat cuts is a good balance between complexity of implementation, process line speed and accuracy / performance of the tracking method.
[0075] In an exemplary embodiment, the data size of said data pool corresponds to first image data generated during a time period of maximum five minutes, preferably maximum 2 minutes, most preferably 1 minute.
[0076] One way of defining the data size of the data pool could be to allow a certain number of first images to be captured, e.g. defined by a predefined number of images captured or the number of images captured during a period of time. As an example, between 12,000 and 20,000 pigs may be slaughtered during one day, which equals up to 40,000 half pigs, i.e. 10,000 half pigs per line in a slaughterhouse having four parallel process lines.
[0077] In an exemplary embodiment, a match is established between first and second image data when the probability of a correct match is above 90%, preferably above 95%, most preferably above 98%.
[0078] Dividing image data into data pools is advantageous in that it has the effect that the speed and accuracy of identification of the meat cut, or part of the meat cut, based on e.g. comparison of image data from the first and second images, is increased. This is both because of the amount of first image data with which image data from a second image is to be compared, and because the risk of two meat cuts having similar meat structure is lower in a data pool holding 10 representations of meat structure (captured from first images) than in a data pool comprising 100 such representations.
[0079] More specifically, because of the high volume of meat cuts handled by the process line, a high volume of first image data is generated. If the second image data were to be compared to all this first image data, this would be time consuming and would risk causing a reduction of the speed with which meat cuts can be handled by the process line. On the other hand, because two meat cuts or parts of a meat cut may change position in the sequence of meat cuts on the process line, the first-in-first-out principle cannot be applied.
[0080] Image data should be understood as including at least a digital representation of the region of the meat cut that is captured by the first image generating apparatus at the first image generating station.
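A rolling data pool as described in paragraphs [0070] to [0079] could be sketched as a bounded buffer of recent first-image records. The sketch below assumes a pool of 25 records, a caller-supplied similarity function and the threshold values of paragraph [0077]; all names are illustrative.

```python
"""Sketch of a rolling data pool: second image data is only compared against
the most recent first-image records, which keeps matching fast while still
tolerating reordering of meat cuts on the conveyor."""
from collections import deque

class RollingPool:
    def __init__(self, max_cuts: int = 25):
        # Bounded buffer: the oldest record drops out automatically.
        self.records = deque(maxlen=max_cuts)

    def add(self, record: dict):
        """Store a first-image record: {'signature': ..., 'meat_cut_data': ...}."""
        self.records.append(record)

    def best_match(self, probe, similarity, threshold: float = 0.98):
        """Return the meat cut data of the best candidate above the threshold,
        mirroring the match probabilities discussed in paragraph [0077]."""
        scored = [(similarity(r["signature"], probe), r) for r in self.records]
        if not scored:
            return None
        score, record = max(scored, key=lambda item: item[0])
        return record["meat_cut_data"] if score >= threshold else None
```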
[0081] In an exemplary embodiment of the invention, said meat tracking method further comprises the step of providing said sorting category data to a data system allowing said data system to control said process line to physically sort cut parts of said meat cuts into one of at least two different categories based on said meat cut data.
[0082] Such automatic sorting is advantageous in that visual inspection by a work person to sort cut meat parts into different categories would no longer be needed. This is advantageous in that only one trained work person performs visual inspection, and thus no deviation between the opinions of two work persons occurs. Further, it is advantageous in that one manual handling station can be avoided, leading to an optimized process line.
[0083] When implemented in a process line, the sorting enables automatic sorting of cut meat parts based e.g. on veterinarian data, which is not possible in process lines today. This is possible no matter whether the meat cut is rotated, hung, cut into parts, etc., as long as the second image captures the preferably predetermined region of the meat cut. Thus, at any stage of the process line, it is possible to implement an image generating station and consequently identify each individual part of a meat cut.
[0084] In an exemplary embodiment, said tracked part of said meat cut is a cut part of said meat cut.
[0085] Accordingly, not only is it possible to track the entire meat cut if it is not cut, but it is also possible to track a cut part of the meat cut.
[0086] In an exemplary embodiment, said tracking method enables tracking at least three cut parts of one meat cut.
[0087] This is advantageous in that it has the effect that a ham, a front part and a middle piece of e.g. a pig can be tracked.
[0088] In an exemplary embodiment of the invention, said associating of said cut part of said meat cut with said meat cut data is made solely based on image recognition.
[0089] This is advantageous in that no additional information, such as weight, trajectory or speed of the meat cut, may be necessary to link the meat cut data to a particular meat cut or part of a meat cut.
[0090] The invention relates to a method according to any of the paragraphs [0007] to [0089] implemented in a system according to any of the paragraphs [0091] to [0110].
[0091] In an aspect, the invention relates to a meat tracking system configured to track a part of a meat cut between a first position of a slaughterhouse process line and a second position. Said system comprises: a first image generating station comprising a first image generating apparatus configured to generate a first image of said meat cut; a second image generating station comprising a second image generating apparatus configured for generating a second image of said meat cut; a meat cut handling station configured for cutting said meat cut into parts after the first image is generated and before the second image is generated; a conveyor configured for transporting said part of said meat cut from said first image generating station to said meat cut handling station; and a data system comprising meat cut data associated with the individual meat cuts handled by said system. Wherein a region of said meat cut is captured by both said first and second images. Wherein said data system is configured to associate said meat cut with said meat cut data when said second image is generated, based on information derived from said captured region of both of said first and second images.
[0092] In an exemplary embodiment of the invention, the second image generation station is located at the slaughterhouse processing line downstream of said meat cut handling station.
[0093] Having the second image generation station at the processing line is advantageous in that it has the effect that, when the meat cut leaves the slaughterhouse, the meat cut is associated with the meat cut data. Thus, any sorting of the meat cuts based on meat cut data can be made prior to the meat cut leaving the conveyor of the process line.
[0094] In an exemplary embodiment of the invention, said tracking system comprises a plurality of second image generating stations.
[0095] Having several second image generating stations, and thus several second images, allows tracking all cut parts of a single meat cut, thus facilitating a faster tracking system. In fact, the amount of image data needed at a particular second image generating station may also be reduced, in that only the part of the data from the first image generating station relevant to the particular cut part of the meat cut is required at a particular second image generation station.
[0096] Thus, the second image generating station may physically be implemented as several independent second image generation stations at different physical locations.
[0097] In an exemplary embodiment, a second image generation station is located at a refinement site.
[0098] If the meat cut needs refinement which cannot be facilitated at the process line, the meat cut may be transported to a refinement site. At this site, the second image generation station may be positioned such that it is only at the refinement site that the meat cut is associated with the meat cut data.
[0099] In an exemplary embodiment, said tracking system further comprises a third image generation station located at a refinement site.
[0100] When a processing line comprises a first and second image generating station, the meat cut is associated with meat cut data before transportation to a refinement site. However, at the refinement site, it may be relevant to check, track or verify a meat cut. In this way, at the refinement site, the meat cuts and the associated data may be used to determine a certain type of refinement, labelling (when packed), etc.
[0101] In an exemplary embodiment of the invention, said meat tracking system comprises a further meat cut handling station, said further meat cut handling station being an automated meat cut sorting station configured to automatically physically sort cut parts of said meat cuts into one of at least two different categories based on said meat cut data.
[0102] Such automatic sorting is advantageous in that visual inspection by a work person to sort cut meat parts into different categories would no longer be needed. This is advantageous in that only one trained work person performs visual inspection, and thus no deviation between the opinions of two work persons occurs. Further, it is advantageous in that one manual handling station can be avoided, leading to an optimized process line.
[0103] In an exemplary embodiment, said meat tracking system comprises a further meat cut handling station, said further meat cut handling station being an automated meat cut rotating station configured to automatically rotate a cut part of said meat cut.
[0104] Being able to rotate or change the orientation in space of a meat cut or a cut part of the meat cut is advantageous in that it may be required in order to process e.g. the cut part of the meat cut further. An example of a process requiring e.g. turning a cut part of a meat cut upside down could be automatic (e.g. by a robot) hanging of a ham on a carrier hook.
[0105] Note that rotating may include an upside-down rotation resulting in a change of the side of the cut part of the meat cut that is resting on the conveyor.
[0106] In an exemplary embodiment, said meat tracking system comprises a further meat cut handling station, said further meat cut handling station being an automated meat cut hanging station configured to automatically hang a cut part of said meat cut on a carrier hanger.
[0107] Automating these meat cut handling stations is advantageous in that it has the effect that a work person does not need to perform monotonous movements and lifts, leading to a better physical working environment.
[0108] In an exemplary embodiment, said data system is a dedicated meat tracking system controller.
[0109] It should be mentioned that the controller may be implemented as a cloud controller, or alternatively, the controller controlling the tracking system may be the same as the controller controlling e.g. the conveyor, meat handling stations, etc.
[0110] The main task of the controller would be to associate meat cut data with image data of the meat cut and subsequently recognize the meat cut and associate it with a physical location on the conveyor or on a Christmas tree with carrier hooks.
[0111] The invention relates to a meat tracking system according to any of the paragraphs [0091] to [0110] implementing a method according to any of the paragraphs [0007] to [0089].
The drawings
[0112] For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts. The drawings illustrate embodiments of the invention, and elements of different drawings can be combined within the scope of the invention:
Fig. 1 illustrates part of a process line, and
Fig. 2 illustrates a flow chart of tracking a meat cut.
[0113] The present invention is described in view of exemplary embodiments only intended to illustrate the principles and implementation of the present invention. The skilled person will be able to provide several embodiments within the scope of the claims.
[0114] Fig. 1 illustrates a principal overview of a slaughterhouse process line 3 starting at a first position 1, where livestock arrive, and ending at one or more second positions 2a, 2b, 2c, where the animal products leave the slaughterhouse process line 3. The first position 1 could be defined as the point when the livestock 7a is hung on a hanger comprising an identification tag. At this point, the meat cut data of the individual livestock 7a can be associated with the particular hanger via the identification tag.
[0115] The arriving livestock 7a is in this document referred to as a pig but could also be cattle or other animals identifiable by visible biometric identifiers. The arrival typically takes place in one single area at the slaughterhouse, in that all livestock 7a need to go through the same stations at the beginning of the process line 3. The first position 1 may be anywhere, such as a stable / arrival location, where it is possible to associate meat cut data with the individual livestock 7a.
[0116] The process line 3 has no single termination; hence the termination or end of the process line 3 at a second position 2a, 2b, 2c could be after one of several meat cut handling stations 6a, 6b, 6c, 6d (also simply referred to as stations) along the process line 3. The output of the process line may be large meat cuts, such as half a pig, and parts of such a pig, such as head, ears, feet, blood, ham, loins, etc. Hence, the meat cuts leaving the process line may be referred to as raw material for further processing. Therefore, several stations 6a, 6b, 6c, 6d along the process line 3 may be referred to as second positions, denoted 2b, 2c, as they represent the end of the process line 3 for an individual meat cut 7a, 7b, 7c or a cut part of a meat cut 13a, 13b, 13c, 13d from the livestock.
[0117] Between the first and second positions 1, 2a, 2b, 2c, the livestock 7a goes through a plurality of processes, starting with registration, anaesthetization and slaughter, scalding and measurements of e.g. meat or fat percentage, etc. These and further processes are sometimes referred to as being part of the so-called black slaughter line.
[0118] The process line 3 continues to process the carcass 7b in stations 6a, 6b, 6c, 6d belonging to the so-called clean slaughter line. These processes may include separating parts such as entrails, head, fat, etc. from the carcass 7b. A relevant process to pay attention to with respect to the present invention in this part of the process line 3 is the veterinarian control. Here each individual carcass is observed, e.g. by a veterinarian, for bruises, hair, etc. which may have influence on the sorting into categories of the individual meat cuts or parts thereof.
[0119] The carcass 7b leaving the clean slaughter line would typically be a hanging carcass with only the meat / bones left, which after further refinement could be found in the refrigerated counter of a supermarket.
[0120] The next step in the process line 3 is the cooling of the hanging carcasses 7b before the carcasses 7b are separated into halves 7c and subsequently cut into parts 13a, 13b, 13c, 13d. Fig. 1 focuses on the part of the process line 3 from where the carcasses 7b are separated to where the half pig 7c is cut into parts 13a, 13b, 13c, 13d, in that the main problem solved by the present invention is tracking the individual parts 13a, 13b, 13c, 13d.
[0121] Each of the processes described above could be referred to as a station 6a, 6b, 6c, 6d of the process line 3. What happens at the processes of the stations 6a, 6b, 6c, 6d of the process line 3 is not described in detail, in that the present invention is focusing on tracking the cut parts 13a, 13b, 13c, 13d between such processes rather than on the actual processes.
[0122] With this said, the veterinarian inspection station 6a should be mentioned.
As mentioned, this station is located before the carcass 7b enters the refrigerator (not illustrated). At this stage in the process line 3, the carcass is hanging from a hanger having an identification tag, such as an RFID tag or similar. When the veterinarian is inspecting the carcass, all relevant data is stored in the data system 9 with reference to the identification tag.
[0123] The data system 9 is provided with information relating to the livestock 7a, e.g. at the position defined as the first position 1 as described above. This information is referred to as meat cut data, and at this first position 1 it may include information e.g. on the farmer having delivered the livestock 7a. Hence the meat cut data may sometimes be referred to as metadata related to a particular meat cut 7a, 7b, 7c. Subsequently along the process line 3, the meat cut data may be updated. Especially, it is important to update the meat cut data with information observed at the veterinarian station 6b. This information is referred to as veterinarian data. The veterinarian data is relevant in that it has an impact on the sorting of parts of the meat cut 13a, 13b, 13c, 13d into different categories downstream in the process line 3.
[0124] One thing is to associate meat cut data with a carcass 7b hanging on an identifiable hanger, but when the carcass 7b is divided in two, such as into two half pigs 7c which are no longer hanging on the identifiable hanger, the meat cut data should be associated directly with the half pig 7c or the parts of the meat cut 13a, 13b, 13c, 13d.
[0125] The data system 9 is provided with input from image generation stations 4, 5a, 5b, 5c, 15 providing images of the meat cut 7a, 7b, 7c. These images preferably all capture a particular region 10 of the meat cut. This region 10 should be a part of the meat cut that is not going to be cut through. Alternatively, at least a sufficiently large part of this region 10 should be maintained intact for the data system 9 to be able to recognize the meat structure thereof in a subsequently or previously captured image.
[0126] The data system for the tracking system of the present invention obviously comprises a computer / controller communicating with the elements of the image generation stations e.g. via Ethernet. If no local station controllers are present, the main controller may store, time stamp and perform any necessary processing of the image data.
[0127] One technique which could be implemented to match the images of the meat cuts could be a modelling method based on machine learning, such as a neural network, or a feature-based analysis of the captured images. A neural network-based approach typically provides better performance than feature-based approaches when large training datasets are available for training the neural network. Furthermore, a neural network-based approach does not require any specification of particular suitable features, but may utilize the whole dataset from which features would otherwise need to be extracted. Hence, neural network approaches are able to utilize the variability across a larger and more detailed dataset to achieve very high performance, since the utilized neural network model is not restricted to the more limited variability of features extracted from the dataset. On the other hand, a feature-based approach using features extracted from a dataset, e.g. including imaging data, may in general be more effective when the one or more extracted and/or engineered features are correlated to the output and when the applied dataset is relatively small.
[0128] In general, the performance of a neural network is more precise when trained on a larger training dataset, whereas if the training dataset is smaller, the feature-based approach provides the better performance, such as e.g. higher accuracy, precision etc.
[0129] In this particular case, the extracted features would need to be suitable for matching of the images of meat cuts. Examples of suitable features that may be used in a feature-based approach may include image data such as details of the generated images, including the region, etc. A further advantage of feature-based approaches is that the model retains its interpretability, in the sense that it is possible to assess and understand which of the utilized features provide the most effective matching.
[0130] As an example, such details could be distances between elements of the image, such as between bones, the length (dimension or geometry) of visible parts of bones, fat, sinews, or other details of the meat cut (or a region thereof). Pixel intensities (values) are a further example of a feature that may be utilized for image matching.
[0131] A match between two meat cuts may then be evaluated as a percentage match of the second image compared to the first images. Such comparison may e.g. be based on an image similarity metric method, which may produce a quantitative evaluation of the similarity between the two images or regions thereof. A plurality of different methods exists, including Mean Square, Normalized Correlation, Pattern Intensity, Mutual Information and Normalized Mutual Information, just to mention a few. Hence, based on such a similarity metric method, two images may be compared, and if they are 100% similar then obviously a match exists; however, if the established percentage match is only 92%, then no match may exist. The percentage threshold defining a match may be defined as above 92%, such as above 95%, above 96%, above 97%, above 98% or above 99%.
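As a sketch of one of the metrics named above, Normalized Correlation between two image regions can be computed with NumPy as follows; the threshold value used is one of the example values from this paragraph, not a prescribed constant.

```python
"""Sketch of Normalized Correlation between two image regions; scores lie in
[-1, 1], with 1.0 meaning identical structure up to brightness and contrast."""
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def is_match(a: np.ndarray, b: np.ndarray, threshold: float = 0.95) -> bool:
    """Accept the pair as the same meat cut if the score exceeds the threshold."""
    return normalized_correlation(a, b) >= threshold

rng = np.random.default_rng(1)
region_first = rng.random((64, 64))                           # from the first image
region_second = region_first + rng.normal(0, 0.05, (64, 64))  # slightly changed
print(normalized_correlation(region_first, region_second))    # close to 1.0
print(is_match(region_first, region_second))                  # True
```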
[0132] Various types of feature-based approaches may be used to match the images of meat cuts, including supervised learning, unsupervised learning and reinforcement learning methods. While unsupervised learning may advantageously be utilized when the outcome of the training data is not known, e.g. when no labels exist for the training examples, supervised learning is generally preferred due to its much better performance with regard to e.g. accuracy etc. A non-exhaustive list of examples of supervised learning / algorithm-based approaches that may advantageously be utilized for matching images of meat cuts comprises: various types of decision tree algorithms, support vector machines, elastic net, logistic regression modelling, Boltzmann machines, probability-based modelling, and Bayesian modelling incl. e.g. Naive Bayes, to name a few.
[0133] Further, a convolutional neural network is one example of a neural network model that may advantageously be utilized for datasets comprising imaging data. This type of neural network model typically comprises convolutional layers, pooling layers and one or more fully connected layers. Advantageously, a convolutional neural network is capable of learning features by means of its one or more feature extraction layers, which are based on one or more convolutional layers. A Siamese neural network architecture, e.g. comprising convolutional layers, is an example of a neural network that may advantageously be trained to perform matching of images of the meat cuts. Nevertheless, other types of neural network approaches, including various deep learning approaches, may be trained to perform image matching of the meat cuts.
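A Siamese convolutional network of the kind mentioned could be sketched as follows in PyTorch; the layer sizes, the 64x64 input resolution and the use of a plain Euclidean embedding distance are illustrative assumptions, not details given in this document.

```python
"""Sketch of a Siamese convolutional network: the same feature extractor
embeds the region from the first and the second image, and a small embedding
distance indicates the same meat cut. Shown untrained, for illustration only."""
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self, embedding_dim: int = 64):
        super().__init__()
        # Shared convolutional feature extraction layers.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, embedding_dim)  # for 64x64 inputs

    def embed(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

    def forward(self, first: torch.Tensor, second: torch.Tensor) -> torch.Tensor:
        # Distance between the two embeddings: low for the same region.
        return F.pairwise_distance(self.embed(first), self.embed(second))

net = SiameseNet()
first = torch.randn(1, 1, 64, 64)    # region 10 from the first image
second = torch.randn(1, 1, 64, 64)   # region 10 from a second image
print(net(first, second))            # after training: low distance = same cut
```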
[0134] As an example, a matching model for matching two images may include a first step of approving the captured image as a replica of the expected side and part of the meat cut. Next, a regression model may be used to compensate for different orientations of the individual meat cuts on the conveyor / hanger. Then the content of the images may be segmented so as to focus on or retrieve the relevant part of the meat cut. The relevant part may be the part that in this document is referred to as the region 10 of the meat cut. When these regions are identified from the images of the first and second stations, the matching of the images can be done, and the meat cut data can be associated with the meat cut and/or the physical location of the meat cut.
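The four steps of this matching model could be chained as in the skeleton below. Every stage is a simplified stand-in for a real vision component (the orientation step, for instance, only undoes a known multiple-of-90-degree rotation), so all names and signatures are assumptions.

```python
"""Skeleton of the matching model of paragraph [0134]: approve the image,
compensate orientation, segment out the region 10, then match against
stored first-image records."""
import numpy as np

def approve(image: np.ndarray) -> bool:
    """Step 1: verify the image shows the expected side / part (placeholder)."""
    return image.size > 0

def compensate_orientation(image: np.ndarray, angle_deg: float) -> np.ndarray:
    """Step 2: a regression model would estimate the orientation; here we only
    undo a known multiple-of-90-degree rotation."""
    return np.rot90(image, k=int(round(-angle_deg / 90)) % 4)

def segment_region(image: np.ndarray, box: tuple) -> np.ndarray:
    """Step 3: retrieve the relevant part of the image (the region 10)."""
    y0, y1, x0, x1 = box
    return image[y0:y1, x0:x1]

def match_pipeline(second_image, box, records, similarity, threshold=0.95):
    """Step 4: compare the region against stored first-image records."""
    if not approve(second_image):
        return None
    upright = compensate_orientation(second_image, angle_deg=0.0)
    probe = segment_region(upright, box)
    scored = [(similarity(r["region"], probe), r) for r in records]
    if not scored:
        return None
    score, best = max(scored, key=lambda item: item[0])
    return best["meat_cut_data"] if score >= threshold else None
```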
[0135] In fig. 1, second image generation stations 5a, 5b, 5c are illustrated. Hence, the region 10 of the meat cut 7c captured by the first image generation station 4 should be sufficiently recognizable to be identified from an image capturing the same region 10 at the second image generation stations 5a, 5b, 5c.
[0136] Only one second image generation station 5a, 5b, 5c may be sufficient. In fig. 1, this could be the station denoted 5c located at a sorting station 6c. At such a station, the captured image is provided to the data system 9, which compares the region of the captured image to a database of previously captured images and establishes a match. Based on the meat cut data associated with the matching previous image, the particular part 13a, 13b, 13c, 13d is sorted, e.g. into one of a first and a second category, which may be indicative of the quality, expected price group, etc. of the part of the meat cut 13a, 13b, 13c, 13d. In fig. 1 the sorting is illustrated as automated by a switching track (illustrated by the arrow denoted 16) on the conveyor 8, but it could also be implemented as a robot physically lifting the part 13a, 13b, 13c, 13d from the conveyor to one or two hangers. Note that such a sorting station 6c may be considered the second position 2c; alternatively, the arrival station of the parts 13a, 13b, 13c, 13d at the two categories may be referred to as the second position.
[0137] It should be noted that the orientation of the meat cut is known at the first station; therefore, less processing of the captured image may be needed compared to the processing of an image captured at a second station. At the first station, the data system may know that the half pig 7c is hanging with the front part downward. At the second station the half pig or part thereof may be turned around, and thus additional image processing step(s) of determining the orientation and maybe turning the image may be needed compared to the processing of the image captured at the first station. Such additional processing steps are not illustrated in the flow chart of fig. 2.
[0138] An image generation station 4, 5a, 5b, 5c, 15 may comprise light settings adjusted to light up the meat cut or relevant region(s) hereof so that the one or more image generation apparatuses of the station are able to capture the best possible image of the relevant parts of the meat cut.
[0139] The light settings should preferably help to increase the contrast between fat and meat of the meat cut as well as enhance veins and other features that may as a whole be referred to as a biometric identifier, which as mentioned may include meat structure. Light sources having a colour temperature of 4000 K could be used.
[0140] The image generation apparatus may be implemented as a camera. This camera, and maybe also any lighting and control of the image generating stations, should be enclosed so as to be able to withstand washing and thereby comply with the hygiene requirements associated with a slaughterhouse process line 3.
[0141] Each image generation station, and especially the first image generation station, may comprise several cameras that capture images of the same meat cut 7a, 7b, 7c from different angles.
[0142] Thus, with reference to an axis through the center of the lens of the camera, a first camera may capture an image of the meat cut 7a, 7b, 7c from a position where the lens axis is perpendicular to the hanger / meat cut 7a, 7b, 7c. A second camera may capture an image of the meat cut pointing upwards or downwards at an angle of less than 90 degrees between a line perpendicular to the hanger / meat cut and the lens axis.
[0143] Further, a camera may be twisted around a Z axis (that is perpendicular to the lens axis). In this way, such a camera may capture an image of a region of the meat cut that is "inside" the meat cut or hidden by a part of the meat cut, such as a region defined behind or partly covered by the ribs.
[0144] A camera may be positioned above a conveyor pointing downwards towards the meat cut as it approaches the camera. A camera may be positioned so that the lens axis is raised e.g. 5-30 cm, such as 10-25 cm, above a conveyor.
[0145] A suitable camera may be a standard 2D vision camera, but a spectral, multi- / hyperspectral camera or a 3D camera may also be used. A hyperspectral camera has the advantage that the signal strength is increased, i.e. the captured image may be bigger or have a higher resolution than an image from a 2D camera.
[0146] It should be noted that the camera may also be implemented as an ultrasonic camera, a computed tomography scanner, an x-ray camera or similar.
[0147] It should be noted that it may be necessary to insert an intermediate image generation station to capture an intermediate image. The intermediate image should serve as a link between the first and second images. The intermediate image station may in some configurations of the process line be referred to as a third image station 15. This may be relevant if e.g. the region 10 captured at the first station 4 cannot be captured at the second station 5c. The intermediate image should capture the first region and a second region of the part of the meat cut 13a, 13b, 13c, 13d. Hence, if the first region 10 cannot be captured at the second station 5c, but the second region can, then the part of the meat cut 13a, 13b, 13c, 13d may still be tracked through the process line 3 according to the present invention via such an intermediate image station.
[0148] An intermediate image station could e.g. be relevant if the part of the meat cut 13a, 13b, 13c, 13d is packed for transport and should be recognized afterwards, or if the part of the meat cut is refined to an extent where the first region is no longer available, etc. The latter is exemplified by the third image generation station 15 capturing an image of the part 13d and using this image to associate the part 13d with the relevant meat cut data. When done, the part 13d may, at a packing station 6c, be hung on a hanger with an identification tag which is added to the meat cut data, so that the location of the part 13d on the hanger is now available. Hence, at the refinement site 17, there is no need for an image generating station, only an identification tag scanner to identify the part 13d and its associated meat cut data. As illustrated, the refinement site 17 is also denoted 2c, as it may be argued that the refinement site is also part of the process line 3 even though it is not physically part of the slaughterhouse.
[0149] During testing of the present invention, it turned out that the amount of data (i.e. captured images, also referred to as the pool size) which the data system has to compare a captured image to influences the accuracy of the result of the comparison. In other words, the smaller the pool size, the higher the accuracy.
[0150] As an example, 2500 livestock 7a may be slaughtered per day, leading to 5000 half pigs 7c per day. Simply relying on the meat cuts 7a, 7b, 7c leaving the process line 3 in the exact same order as they enter / are established is not possible in a real-life slaughterhouse process line 3. One reason for this is that parts of meat cuts 13a, 13b, 13c, 13d are sometimes moved or removed, e.g. due to inspection.
[0151] Per day, according to the above example, the maximum pool size would be 5000 captured images (one per half pig 7c). Obviously, searching a data pool comprising 5000 images is more time-, energy- and processor-consuming than searching a data pool comprising e.g. 50 or 25 images. Further, there is a higher risk among 5000 images than among 30 images that two images are more or less identical.
[0152] Therefore, the data pool of images captured at the first station 4 is dynamic relative to the pictures captured at the second and / or third stations 5a, 5b, 5c, 15. The dynamic data pool may e.g. be established by one or more data buffers, the content of which is changed over time. This can be implemented in various ways, such as according to first-in first-out buffer principles.
[0153] A non-limiting example is to store all captured images in a database and then define a window of e.g. 10 images as the buffered pool size. The meat cut first imaged at the first station is first in the buffer, the second is second in the buffer, and so on. Hence, the first meat cut is expected to be the first arriving at the second station, and so forth.
[0154] However, since the first meat cut may be removed and afterwards positioned again, the image captured at the second station may be compared against the 10 first-station images in the window to find a match. The first meat cut at the second station may in this example be the second in the buffer, and the second meat cut at the second station may thus be the first in the buffer. Each time a match is established, the image is removed from the buffer.
[0155] The buffer may fill up with non-matched images. Hence, if the buffer size is 10 and five meat cuts are removed and not reintroduced on the conveyor, these five images are not removed from the buffer and thus the buffer capacity or pool size is in practice reduced to only five.
[0156] Such non-recognized images stored in the buffer may be moved to another buffer or storage after a certain time has passed or a certain number of meat cuts has passed the second station. This is because the data system, after such a period of time, does not expect to find these meat cuts. At the end of the day, the five meat cuts that were taken out to be inspected may be passed by the second station, and the data system then uses the images removed from the buffer for the matching.
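A minimal sketch of the dynamic pool logic of paragraphs [0152]-[0156], assuming a simple first-in first-out window, removal on match, and expiry of stale entries to a side pool; the class name, the window size and the age limit are illustrative, and is_match stands in for whichever similarity test is used.

```python
from collections import OrderedDict

class DynamicImagePool:
    """FIFO pool of first-station region crops with expiry to a side pool."""

    def __init__(self, window_size: int = 10, max_age: int = 50):
        self.window_size = window_size
        self.max_age = max_age      # age counted in second-station lookups
        self.pool = OrderedDict()   # cut_id -> [region_crop, age]
        self.side_pool = {}         # expired, non-recognized crops

    def add_first_station_image(self, cut_id, region_crop):
        self.pool[cut_id] = [region_crop, 0]

    def match_second_station_image(self, region_crop, is_match):
        """Search the front of the window; on a hit, remove the entry."""
        for cut_id, (candidate, _) in list(self.pool.items())[:self.window_size]:
            if is_match(region_crop, candidate):
                del self.pool[cut_id]
                return cut_id
        self._age_entries()
        return None  # unknown for now; see the handling in [0203]

    def _age_entries(self):
        """Entries not matched within max_age lookups move to the side pool."""
        for cut_id in list(self.pool):
            self.pool[cut_id][1] += 1
            if self.pool[cut_id][1] >= self.max_age:
                self.side_pool[cut_id] = self.pool[cut_id][0]
                del self.pool[cut_id]
```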
[0157] Alternatively, to compensate for such non-recognized images, the buffer size may be increased when the data system determines that a meat cut must have been removed from the conveyor or passed by the second station.
[0158] An alternative implementation is to establish a buffer having a size corresponding to the number of parts of meat cuts 13a, 13b, 13c, 13d that can simultaneously be on the conveyor 8. A size above this number would under normal operation be superfluous. In fact, the buffer size may be reduced by a conservative estimate of the number of parts 13a, 13b, 13c, 13d after the second image station 5a, 5b, 5c and by a conservative estimate of the number of parts 13a, 13b, 13c, 13d between the first station 4 and the second station.
[0159] Another alternative implementation example could be to use 35 buffers if the desired data pool size is 35. The 35 buffers may be established such that each comprises images captured prior to and after the first image of a specific meat cut part 13a, 13b, 13c, 13d. In this example, the buffer or pool for an image captured at the first station 4 may comprise e.g. 5 images captured prior to the first image and 30 images captured after the first image.
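A sketch of the per-cut candidate window from this example, assuming first-station images are kept in capture order; the 5-before / 30-onwards split mirrors the numbers above, and the function name is illustrative.

```python
def candidate_window(first_station_images, index, before=5, after=30):
    """Candidate pool for the meat cut first imaged at position `index`.

    Returns up to `before` images captured prior to the cut's own image plus
    `after` images from that image onwards: 35 candidates in the example.
    """
    start = max(0, index - before)
    return first_station_images[start : index + after]
```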
[0160] It may be relevant to include images in the buffer that were captured prior to the image searched for at the second station, even though it is more likely that the meat cut part 13a, 13b, 13c, 13d is removed from the conveyor 8 and subsequently, after a plurality of parts of meat cuts has passed, positioned on it again. In that case, the removed part is positioned on the conveyor again at a location in the sequence of meat cuts between the first and second stations that is closer to the second station than where in the sequence it was removed. With this said, if four meat cuts are removed for inspection, the "fifth" meat cut arrives four buffer locations earlier than expected at the second station.
[0161] It should be mentioned that a match is considered established between the first and second image data when the probability of a correct match is above 90%, preferably above 95%, most preferably above 98%. As mentioned, such high accuracy is more likely to be achieved the smaller the pool size is.
[0162] The data extracted from the captured images and used for matching in the data system may as mentioned be reduced to a region of the meat cut and thus a part of the entire captured image.
[0163] On the skin side of the meat cut, a stamp (how distinct each part of the stamp is may vary from stamp to stamp), skin structure, veins, etc. may be used alone or in combination to establish the biometric identification of the meat cut.
[0164] On the meat side of the meat cut, the meat structure, the meat / fat / sinew / muscle / bone pattern, etc. may be used alone or in combination to establish the biometric identification of the meat cut. Such a pattern comprised by the region 10 may be a unique "fingerprint"-like identifier for a particular meat cut or part of a meat cut. Accordingly, the region may comprise a fingerprint of a meat cut / part of the meat cut that is visible at least to a camera. Such a fingerprint is at least unique for the purpose of image matching in the context of recognising meat cuts at a slaughterhouse process line.
[0165] These markers may be more or less clear in an image, and thus, when referring to a signal strength, this is a reference to how conspicuous these markers are. Common to all the above markers (except the stamp) is that a meat cut can be recognized without any additional stations or similar applying physical or visual markers to the meat cut.
[0166] Because the meat cut may change orientation along the process line, the meat cut may be tracked based on a combination of skin side and meat side biometric indicators. Hence, more than one image of a meat cut may be used to associate a meat cut with its meat cut data.
[0167] The relevant biometric indicators may be predetermined based on the type of livestock, knowledge of where a meat cut is cut through, how it is orientated (hanging, lying with skin side up / down, front part in the direction of movement, etc.), how it is handled (e.g. turned around / upside down, etc.) and so on.
[0168] Therefore, captured images may be processed differently from station to station. Hence, at one station the captured image may be analysed for the orientation of the meat cut before the relevant region is determined. Once determined, image processing may be applied to enhance relevant characteristics of the image / region before it is stored or used to match stored images.
[0169] It should be mentioned that the region 10 may first become visible, and thereby possible to capture an image of, when the half pig 7c is cut in three at the three-part cutter station 6b of fig. 1. Accordingly, the meat structure of the region can first be captured after cutting. Thus, if the first image station is at the process line where the half pig 7c is hanging, the second image station is after the three-part cutter. These two images (or regions thereof) are then both associated with the same meat cut data, and both images can later be used to link a meat cut part to the meat cut data. Hence, a third image station in relation to a sorting station may capture an image which can be used to link the final meat cut part to the meat cut data.
[0170] Fig. 2 is a flow chart of the steps of tracking a meat cut 7a, 7b, 7c according to the present invention. As illustrated in fig. 1, the pig 7a arrives at the slaughterhouse at a first position of the process line 3, which is referred to as step S1 in the flow chart of fig. 2. The pig 7a is associated with initial meat cut data such as pig health data, farmer data, etc. Upon arrival of the pig 7a, a veterinarian may inspect the pig 7a and add arrival veterinarian data to the initial meat cut data. All meat cut data is stored in a data storage of the data system 9.
[0171] The second step S2 includes a plurality of slaughterhouse process line 3 processes which are not essential to the present invention. With this said, it should be mentioned that data generated relating to the individual carcass 7b may be added to the meat cut data of the individual carcass 7b.
[0172] The meat cut data and the individual carcass 7b may be linked via an identification tag. An identification tag may be physically attached to the carcass 7b or to the hanger on which the carcass is hanging. In this way, read/write access to the meat cut data can be obtained via an identification tag scanner. An identification tag could be implemented as a stamp, QR code, barcode, RFID tag, etc.
[0173] It should be mentioned that it might be possible also to identify the carcass 7b by way of imaging the carcass 7b or a region 10 hereof. This is illustrated at the carcass 7b at handling station 6a where one or more regions 10 could be captured and used for identification.
[0174] Further, it should be mentioned that along this first part of the process line 3, several parts may be separated from the pig 7a and used as is or further processed or refined. Accordingly, for some parts of the pig 7a, the process line ends before the refrigeration of the carcass 7b. This is indicated in fig. 2 by the dashed box denoted 2a. The parts leaving the process line prior to the refrigeration station may not need to be identified.
[0175] In step S3, the carcass is inspected by a veterinarian or other type of qualified person and the meat cut data is updated with findings of this inspection.
[0176] After the veterinarian inspection, the carcass is imaged at a first image station 4. The captured images may cover the carcass as a whole or in part, and it should be noted that several images may be captured of the same carcass 7b. If several images are captured, these may focus on particular different regions 10 of the inside or outside of the carcass 7b. The meat cut data is updated with or linked to the image data.
[0177] Typically, the carcass 7b is hanging while being imaged at the first station 4. With this said, the carcass 7b may be divided in two halves and / or lying e.g. on a conveyor 8 when it passes through the first station 4. Further, the first station may typically be located before the refrigerator cooling / freezing station but could also be located after.
[0178] If not divided before, then after the refrigeration, the carcasses 7b are divided into two pig halves 7c which are positioned on a conveyor 8. Hence, either the carcass 7b or a half pig 7c is passing by a first image generation station 4.
[0179] Thus, in this embodiment of the invention, in step S4, the carcass 7b passes hanging by the first image generation station 4, where at least one image is captured. The at least one image may capture the entire side (e.g. front or back) of a carcass 7b. Alternatively, two, three or more images are captured of the carcass 7b by one or more image generating apparatuses.
[0180] In an embodiment of the invention, at least one of the images is captured at an angle such that the inside of the carcass is captured, more specifically a region 10 such as a predetermined part of the meat structure. Since the carcass 7b comprises two half pigs, preferably an image of both sides of the inside of the carcass 7b is captured. The reason for capturing an image of the inside of a carcass is that it has turned out that the meat structure is unique from one carcass to another. Thus, the meat structure, if not destroyed by cutting or other processing, is a good biological identifier for a meat cut 7 or part of a meat cut 13. Hence, if it is possible to capture an image of such a region 10 of a meat cut part 13 at a first image generating station 4, it is possible to recognize the meat cut part 13 if the same region 10 is also captured at a second image generation station 5.
[0181] Accordingly, prior to the next step S5, the carcass is divided into two half pigs 7c which are positioned on a conveyor 8. Sometimes the processing of the meat cut ends here, i.e. the end product of the process line 3 is the half pig 7c, which is transported at the end of the conveyor to a packing / hanging / transport preparation station. Such a final process line station may be an example of a second position 2 where the half pig 7c is imaged again at a second image generation station 5. Based on the data associated with the captured image, e.g. of the region 10, the data system 9 is able to automatically determine into which one of several categories the half pig 7c should be sorted. Such sorting may be performed in step S8.
[0182] In step S6, the half pig 7c is cut into parts 13 at a cutting station 6b. In an embodiment of the invention, the half pig 7c is cut into three parts 13a, 13b and 13c as illustrated in fig. 1.
[0183] In fig. 1, the ham 13c is conveyed to an end of the process line 3 which may be referred to as a second end 2b. This end 2b may be a packing station 6d, where the ham 13c is packed / hung or in some other way made ready for transportation away from the slaughterhouse. As illustrated by the dashed second image generation station 5a, the ham may be imaged. The image data may be used to associate a label of the packing or hanger with the meat cut data of the ham. Alternatively, the routing may be determined based on the weight of the meat parts.
[0184] The middle part 13b is further divided at the cutting station 6b into a loin 13d and an additional meat cut in the optional step S7. This is just one example of the processing of a meat part 13. The additional meat cut and the loin 13d follow two different branches of the conveyor 8. The routing of the two meat cuts may e.g. be determined at a sorting station 6c in step S8 by the data system based on an image captured by the dashed intermediate second image station 15.
[0185] The loin 13d continues to a packing station 6d in step S9, from where it is transported to a refinement site 17, which may also be referred to as an end point such as a second position 2c of the process line 3. At the refinement site 17, in step S10, the loin 13d is imaged by a second image generation station 5b and thus, only after the end of the process line 3, is associated with the meat cut data based on the second image captured at this site. In this way, the final products from the loin 13d may all be associated with meat cut data originating from the livestock, carcass or half pig 7a, 7b, 7c. Steps S9 and S10 are not illustrated in fig. 2.
[0186] As with the middle part 13b, the front part 13a is also conveyed to a cutting station 6b where, in this example, the front part 13a is divided in two in the optional step S7. The two parts in this example follow two different conveyor tracks. As illustrated, two intermediate image stations 15 capture images of the front part 13a. These images may be used either to determine which part should follow which conveyor track or to add additional image data to the data system and thereby facilitate an increased accuracy in identifying the parts cut from the front part 13a.
[0187] One of the cuts from the front part 13a is conveyed to a sorting station 6c in step S8. At this station 6c, a second image generation station 5c captures an image of the cut which includes the region 10. As described above, this second image is used to identify the meat cut of the front part 13a as part of the half pig 7c or carcass 7b imaged at the first image station 4. Thereby this part is linked to meat cut data, including any veterinarian data provided. Either the veterinarian data itself includes a category or the data system can define a category based on the veterinarian data. Either way, the sorting station 6c, which may comprise a robot arm, is able to sort the meat cuts e.g. into a first and a second category. In fig. 1 the two categories are illustrated as two tracks of a conveyor but may be two hangers or one hanger and one conveyor track.
[0188] In case hangers are used for dividing meat cuts into categories, the sorting and packing stations 6c, 6d, and thus steps S8 and S9, may be one and the same station / step. The robot is able to physically hang the meat cut on a particular hook at a particular level of a particular hanger, the hanger being defined to carry meat cuts of only one category. The information on where on the hanger the meat cut is positioned may be provided to the data system, and thus the meat cut data for the particular meat cut may be updated accordingly.
[0189] The automatic sorting of meat cuts by a robot arm is possible due to the meat cut data associated with the individual meat cuts, the data being available to the robot through the identification of the individual meat cut.
[0190] Referring to fig. 1, the meat cut following the 2nd category track has a black upper left corner. This defect may be visible in the image data captured by the second image station 5c. In this way, the second image station 5c may act as a quality control for the veterinarian data.
[0191] However, such a defect may also not be visible in the image but be part of the meat cut data, e.g. provided at the veterinarian inspection station. Thus, even if the defect is not visible to the second image station 5c, then, because it is part of the meat cut data, the robot knows about it and thus how the meat cut should be categorized (here in the second category). Thereby a source of human error is eliminated, in that a person can only sort based on something that is visible.
[0192] Further, the sorting of identical-looking parts of a meat cut may be done differently because the sorting may be based on meat cut data and not on manual visual inspection, as is the case in the art today. Hence, it may be possible to sort parts of meat cuts e.g. according to the origin of the livestock, percentage of fat, etc.
[0193] Further, it may be possible to optimize the deboning part of the process line; as an example, trimming of the fat layer could be mentioned.
[0194] Further, it may be possible to produce more than one batch at a time. A batch may be a specific order for one customer. Hence, if one customer has ordered a particular meat cut with a particular percentage of fat and only, on average, every second livestock complies with that requirement, then such an order can be fulfilled over a longer period such as a day or several days. In fact, such an order is not possible to produce today with state-of-the-art technology, in that there is no link between the meat cut data and the individual meat cut at the sorting station of the process line.
[0195] Not only does the present invention allow this kind of order, it also allows an automated sorting of meat cuts into such batches, e.g. by a robot arm hanging meat cuts complying with batch requirements on a batch-specific hanger.
[0196] Further, the present invention allows associating a meat cut with meat cut data such as a guiding drying time for an air-dried ham in a given climate. This is possible on an individual meat cut part level due to the meat cut data which is / can be associated with each individual meat cut part.
[0197] It should be mentioned that the description relating to the front part 13a, i.e. the sorting and / or packing of meat cuts based on recognition of the meat cut and thereby access to the related meat cut data based on the second image according to the invention, may be applied to any of the meat cut parts 13. In fact, even the half pig 7c may be sorted and packed according to the present invention.
[0198] It should be mentioned that the packing stations 6d may be referred to as second positions 2, i.e. positions where a meat cut is recognized based on an image, e.g. of a region 10 of the meat cut.
[0199] It should be mentioned that after the three parts 13a-13c are cut, one or more of these cuts may pass by an intermediate image generation station 15 where additional image data is captured. This may be used to increase the accuracy of the tracking system by providing image data from different positions / stations along the process line 3.
[0200] The data system 9 may be built up of autonomous control systems, such as one for each handling station, including control of the conveyor. Alternatively, a master controller is provided, to which the image generation stations can be seen as intelligent sensors. Hence, when this "sensor" recognizes a particular meat cut, the master controller may receive this information as input and provide output in the form of a control reference to a conveyor, station or robot controller regarding which category the meat cut should be sorted into.
[0201] Such a master controller may also receive input from a robot that its full capacity is reached, whereupon the master controller directs meat cuts to other sorting robots.
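A schematic sketch of the master-controller pattern of paragraphs [0200]-[0201], treating each image generation station as an "intelligent sensor" that reports recognitions; the class, the category lookup and the capacity flags are illustrative assumptions, not the actual control interface.

```python
class MasterController:
    """Turns recognition events into sorting references, skipping full robots."""

    def __init__(self, meat_cut_data, robots):
        self.meat_cut_data = meat_cut_data  # cut_id -> data incl. category
        self.robots = robots                # robot_id -> {"full": bool}

    def on_recognition(self, cut_id, station_id):
        """Input from an image station acting as an intelligent sensor."""
        category = self.meat_cut_data[cut_id]["category"]
        robot_id = self._available_robot()
        if robot_id is None:
            return {"action": "hold", "cut": cut_id}  # no sorting capacity
        # Control reference to the conveyor / station / robot controller.
        return {"action": "sort", "cut": cut_id, "category": category,
                "robot": robot_id, "station": station_id}

    def on_capacity_report(self, robot_id, full):
        """A robot reports that its full capacity is reached (or freed again)."""
        self.robots[robot_id]["full"] = full

    def _available_robot(self):
        for robot_id, state in self.robots.items():
            if not state["full"]:
                return robot_id
        return None
```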
[0202] The data storage may be a local tracking system storage or it may be a centralized data storage such as cloud storage. Cloud storage may be relevant if e.g. meat cut data should be used at a refinement process external to the process line 3.
[0203] In the situation where an image of a meat cut is not recognized, the program executed by the master controller may determine what to do. Several options are possible. From the tracking system, the master controller may receive an "unknown" command if the meat cut cannot be found in the data pool. Also, if unused images are left in the data pool / buffer / storage at the end of the day, these may be labeled unknown. At the end of the day, new images may be captured of such unknown meat cuts, and in the supposedly limited data pool there is a chance that a match can be established. In this way, the tracking system could be said to be balanced. Meat cuts that are still not matched may be categorized by manual inspection.
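The end-of-day re-matching described above could look like the following sketch, reusing the side-pool idea from the buffer sketch earlier; all names are illustrative.

```python
def end_of_day_rebalance(unknown_images, side_pool, is_match):
    """Retry unknown second-station crops against the small left-over pool."""
    still_unknown = []
    for crop in unknown_images:
        hit = next((cut_id for cut_id, candidate in side_pool.items()
                    if is_match(crop, candidate)), None)
        if hit is not None:
            del side_pool[hit]          # matched: the system is "balanced"
        else:
            still_unknown.append(crop)  # route to manual inspection
    return still_unknown
```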
[0204] From the above it is now clear that the invention relates to a tracking system that is able to recognize a meat cut solely from imaging the meat cut. Based on the captured image, the present invention is able to link a particular meat cut part, or several meat cut parts, to the meat cut data of the livestock from which they originate. This is advantageous in that the sorting of the meat cuts can then be improved, such as automated, and meat cut parts can be sorted into new sorting categories that are not available today. All this can be done independently of the surroundings, i.e. without any physical markers, labels or stamps on the meat cut.
[0205] The invention has been exemplified above, for the purpose of illustration, with reference to specific examples of methods and robot systems. Details such as specific method and system structures have been provided in order to provide an understanding of embodiments of the invention. Note that detailed descriptions of well-known systems, devices, circuits, and methods have been omitted so as not to obscure the description of the invention with unnecessary details.
List
1. First position
2. (2a-2c) Second position
3. Slaughterhouse process line
4. First image generating station
5. (5a-5c) Second image generating station
6. Meat cut handling station
6a. Veterinarian inspection
6b. Cutting station
6c. Sorting station
6d. Packing / preparation for transportation
7. Meat cut
7a. Livestock such as a pig
7b. Carcass
7c. Half pig
8. Conveyor
9. Controller
10. Region of meat cut
11. Hanger
12. -
13. Cut part of meat cut, such as:
13a. Front part
13b. Middle
13c. Ham
13d. Loin
14. Meat side of meat cut
15. Third image generating station
16. Switching track
17. Refinement site
Claims (17)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DKPA202270476A DK181542B1 (en) | 2022-09-30 | 2022-09-30 | Meat cut tracking system and method for tracking meat |
PCT/DK2023/050230 WO2024067933A1 (en) | 2022-09-30 | 2023-09-27 | Meat cut tracking |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DKPA202270476A DK181542B1 (en) | 2022-09-30 | 2022-09-30 | Meat cut tracking system and method for tracking meat |
Publications (2)
Publication Number | Publication Date |
---|---|
DK181542B1 true DK181542B1 (en) | 2024-04-23 |
DK202270476A1 DK202270476A1 (en) | 2024-04-23 |
Family
ID=88296944
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
DKPA202270476A DK181542B1 (en) | 2022-09-30 | 2022-09-30 | Meat cut tracking system and method for tracking meat |
Country Status (2)
Country | Link |
---|---|
DK (1) | DK181542B1 (en) |
WO (1) | WO2024067933A1 (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5937080A (en) * | 1997-01-24 | 1999-08-10 | Design Systems, Inc. | Computer controlled method and apparatus for meat slabbing |
US7949154B2 (en) | 2006-12-18 | 2011-05-24 | Cryovac, Inc. | Method and system for associating source information for a source unit with a product converted therefrom |
EP2928308B1 (en) | 2012-12-04 | 2022-06-15 | Marel Iceland EHF | A method and a system for automatically tracing food items |
- 2022-09-30: DK DKPA202270476A patent/DK181542B1/en, active IP Right Grant
- 2023-09-27: WO PCT/DK2023/050230 patent/WO2024067933A1/en, unknown
Also Published As
Publication number | Publication date |
---|---|
DK202270476A1 (en) | 2024-04-23 |
WO2024067933A1 (en) | 2024-04-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Qiao et al. | Intelligent perception for cattle monitoring: A review for cattle identification, body condition score evaluation, and weight estimation | |
Wang et al. | ASAS-NANP SYMPOSIUM: Applications of machine learning for livestock body weight prediction from digital images | |
EP1939811B1 (en) | Method and system for associating source information for a source unit with a product converted therefrom | |
NL1003647C2 (en) | Method and device for processing a slaughtered animal or part thereof in a slaughterhouse. | |
US20210041378A1 (en) | Systems and Methods for Using Three-Dimensional X-Ray Imaging in Meat Production and Processing Applications | |
Barbut | Meat industry 4.0: A distant future? | |
WO2016023075A1 (en) | 3d imaging | |
Tscharke et al. | Review of methods to determine weight and size of livestock from images | |
DK181542B1 (en) | Meat cut tracking system and method for tracking meat | |
Jijesh et al. | Development of machine learning based fruit detection and grading system | |
CN109507204A (en) | A kind of Fresh Grade Breast lignifying stage division and its device based on curvature detection | |
Zhao et al. | Review on image-based animals weight weighing | |
Wakholi et al. | Deep learning feature extraction for image-based beef carcass yield estimation | |
US20240000088A1 (en) | A method of tracking a food item in a processing facility, and a system for processing food items | |
Bastiaansen et al. | Continuous real-time cow identification by reading ear tags from live-stream video | |
Mason et al. | RoBUTCHER: A novel robotic meat factory cell platform | |
KR102547735B1 (en) | System and method for management of processing livestock products and computer program for the same | |
Lu et al. | Color Machine Vision Design Methodology of a Part-Presentation Algorithm for Automated Poultry Handling | |
Baloch et al. | The Quality Analysis of Food and Vegetable from Image Processing | |
US20240081355A1 (en) | Sub-primal cut identification and packaging optimization system and method | |
WO2018186796A1 (en) | Method and system for classifying animal carcass | |
CN115760154A (en) | Method and system for recording traceability information of livestock and poultry meat processing | |
Pame et al. | Machine-vision based quality evaluation of meat and meat products-a review. | |
Wadie et al. | Path generation for robotic cutting of carcasses | |
Vo | Traceability in the southern rock lobster (SRL) export supply chain: investigating lobster grading and identification using low-cost image-based biometrics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PAT | Application published | Effective date: 20240331 |
| PME | Patent granted | Effective date: 20240423 |