WO2021022323A1 - Meat processing tracking, tracing and yield measurement

Meat processing tracking, tracing and yield measurement

Info

Publication number
WO2021022323A1
Authority
WO
WIPO (PCT)
Prior art keywords
carcass
indicator
cuts
images
meat
Prior art date
Application number
PCT/AU2020/050793
Other languages
French (fr)
Inventor
Anthony James WHITE
Jamila GORDON
Original Assignee
Lumachain Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2019902774A external-priority patent/AU2019902774A0/en
Application filed by Lumachain Pty Ltd filed Critical Lumachain Pty Ltd
Publication of WO2021022323A1 publication Critical patent/WO2021022323A1/en
Priority to AU2022100022A priority Critical patent/AU2022100022A4/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • AHUMAN NECESSITIES
    • A22BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
    • A22BSLAUGHTERING
    • A22B5/00Accessories for use during or after slaughtering
    • A22B5/0064Accessories for use during or after slaughtering for classifying or grading carcasses; for measuring back fat
    • A22B5/007Non-invasive scanning of carcasses, e.g. using image recognition, tomography, X-rays, ultrasound
    • AHUMAN NECESSITIES
    • A22BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
    • A22BSLAUGHTERING
    • A22B7/00Slaughterhouse arrangements
    • A22B7/001Conveying arrangements
    • A22B7/007Means containing information relative to the carcass that can be attached to or are embedded in the conveying means
    • AHUMAN NECESSITIES
    • A22BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
    • A22CPROCESSING MEAT, POULTRY, OR FISH
    • A22C17/00Other devices for processing meat or bones
    • A22C17/0073Other devices for processing meat or bones using visual recognition, X-rays, ultrasounds, or other contactless means to determine quality or size of portioned meat
    • AHUMAN NECESSITIES
    • A22BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
    • A22CPROCESSING MEAT, POULTRY, OR FISH
    • A22C17/00Other devices for processing meat or bones
    • A22C17/0073Other devices for processing meat or bones using visual recognition, X-rays, ultrasounds, or other contactless means to determine quality or size of portioned meat
    • A22C17/008Other devices for processing meat or bones using visual recognition, X-rays, ultrasounds, or other contactless means to determine quality or size of portioned meat for measuring quality, e.g. to determine further processing
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/02Food
    • G01N33/12Meat; fish
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Mining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N2021/1765Method using an image detector and processing of image signal
    • G01N2021/177Detector of the video camera type
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G01N2021/8893Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques providing a video image and a processed signal for helping visual decision
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/067Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components
    • G06K19/07Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips
    • G06K19/0723Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips the record carrier comprising an arrangement for non-contact communication, e.g. wireless communication circuits on transponder cards, non-contact smart cards or RFIDs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30128Food products
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume

Definitions

  • hook tracking is performed by a tracking mechanism that does not use RFID tags or similar.
  • tracking is performed by monitoring or detecting the position of a conveyor on which the hook 104 is mounted.
  • a conveyor system may be configured to monitor its position; the hook 104 on which a carcass is mounted can therefore be determined based on its location at a carcass loading station, and the conveyor system may similarly determine when the hook is at each cutting table 108.
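
As an illustration of this style of hook tracking, the mapping from reader events to cutting-table positions can be kept very simple. The following Python sketch is a minimal, non-authoritative example; the reader identifiers, table names and event structure are assumptions rather than details from the disclosure.

```python
# Minimal sketch of hook-to-table tracking, assuming one RFID reader per
# cutting table. All identifiers here are hypothetical.
import time

READER_TO_TABLE = {"reader-01": "cutting-table-108a",
                   "reader-02": "cutting-table-108b"}

class HookTracker:
    def __init__(self):
        # hook_id -> (cutting table, timestamp of the most recent read)
        self.positions = {}

    def on_rfid_read(self, reader_id: str, hook_id: str) -> None:
        """Record that a hook was read by the reader at a cutting table."""
        self.positions[hook_id] = (READER_TO_TABLE[reader_id], time.time())

    def table_of(self, hook_id: str):
        entry = self.positions.get(hook_id)
        return entry[0] if entry else None

tracker = HookTracker()
tracker.on_rfid_read("reader-01", "hook-104")
print(tracker.table_of("hook-104"))  # -> "cutting-table-108a"
```
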
  • the video monitor 102 includes a camera for producing one or more image sequences of the area of the cutting table 108.
  • the camera of the video monitor 102 is positioned above the cutting table to view the activities at the cutting table generally from above.
  • a single camera is positioned to have a view of two or more cutting tables 108, with image processing used to identify activities at each cutting table.
  • there is a camera with a view of the carcass after a station. For example, a camera may view the carcass after it has been processed at the cutting table 108.
  • a camera may view the carcass after it has been processed at a group of cutting tables 108, for example all cutting tables in the boning room and/or in the abattoir.
  • one or more cameras view both each carcass at one or more points on a conveyor carrying the hook 104 and the cuts of meat removed from the carcasses at each of the cutting tables 108.
  • the camera may be a video camera or a still camera, configured to take images at predetermined intervals. In some embodiments, the camera operates in the visible light spectrum.
  • the video monitor 102 includes or is in communication with image processing circuitry to perform tracking and/or measurement functions, for example as described herein.
  • a person at or near the cutting table 108 may take the cuts from the carcass 106.
  • the video monitor 102 views the cutting table 108. As or following removal of each cut from the carcass, the video monitor 102 is used to at least one of a) identify the type of cut, and b) determine at least one of the containers 110, 112 each cut is placed in or otherwise associated with.
  • the video monitor 102 is used to at least one of a) identify the type of sub-cut, and b) determine at least one of the containers 110, 112 each sub-cut is placed in or otherwise associated with.
  • the configuration of the cutting table 108 facilitates use of the video monitor 102 as described above.
  • the use of the video monitor 102 to identify into which bag a cut or sub-cut is placed may be inhibited if two or more cuts or sub-cuts are stacked on top of each other, particularly if they are of the same cut or sub-cut category.
  • the cutting table 108 may therefore have marked spaces for individual cuts and/or sub-cuts.
  • the marked spaces may be fixed, for example by markings on an upper surface of the table.
  • the marked spaces may in other embodiments be mobile, for example with the cutting table including a movable conveyor with divisions allocating space for receiving a single cut or sub-cut of meat, or at least a small number of cuts, in a way that allows tracking of cuts using the video monitor 102.
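
By way of illustration, once a cut has been detected in an image, assigning it to a marked space can reduce to a point-in-region test. The sketch below assumes rectangular marks in image coordinates; the mark names and coordinates are hypothetical.

```python
# Minimal sketch: which marked space on the cutting table contains the
# centroid of a detected cut? Mark names and pixel coordinates are assumed.
MARKS = {
    "mark-A": (0, 0, 400, 300),      # (x1, y1, x2, y2) in image pixels
    "mark-B": (420, 0, 820, 300),
}

def mark_for_centroid(cx: float, cy: float):
    """Return the marked space containing the point (cx, cy), if any."""
    for name, (x1, y1, x2, y2) in MARKS.items():
        if x1 <= cx <= x2 and y1 <= cy <= y2:
            return name
    return None  # the cut lies outside all marked spaces

print(mark_for_centroid(500, 150))  # -> "mark-B"
```
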
  • the video monitor 102 may include bi-directional communication to indicate an event, for example, the scanning of a carcass identifier by the video monitor 102.
  • this bi-directional communication is in the form of indicator lights and/or audible tones.
  • the video monitor 102 and/or camera may be controlled remotely through network configurations to allow flexibility of panning, zooming and dynamic changes of the field of view.
  • the procedures for person(s) operating at a cutting table 108 include measures to avoid stacking of cuts.
  • the procedures may involve not placing a cut on top of another and/or not making a cut unless there is room on the table to place the cut.
  • operators may have a controller to control the speed of, or temporarily stop conveyance of, a series of hooks 104 to facilitate compliance with the procedures when needed.
  • the hook 104 is associated with a specific carcass 106.
  • the carcass 106 is associated with data, for example including details regarding the animal, origin, location and/or feeding habits and other information associated with the animal. Therefore, by using the video monitor 102 to track the cuts from the carcass to the containers 110, 112, each container 110, 112 (or one thereof) can be associated with carcass data.
  • each container 110, 112 includes a tracking mechanism, such as a radio frequency identifier (RFID) or a barcode (e.g. a 1D or 2D barcode) or other carrier of an identifier.
  • the boning room may then include one or more RFID readers for the containers.
  • the position of the container at a cutting table 108 is determinable.
  • an RFID reader may read each container as it is provided at a cutting table 108.
  • the obtaining of a container identifier may be by an optical reader.
  • the optical reader may, for example, be a bar code reader, or the video monitor 102.
  • the container identifier is then associated with the data associated with the carcass from which the cut or sub-cut was taken.
  • the container identifier may additionally, or alternatively, be associated with information identifying the type of cut or sub-cut, according to the aforementioned identification using the video monitor 102.
  • the carcass data can be accessed by persons in the supply chain of the protein that was derived from the carcass 106.
  • wholesalers, retailers and/or consumers may access the data associated with the protein they have based on knowledge of the container 110 and/or 112 that the protein was in.
  • the video monitor 102 includes a camera that views the carcass 106 after one or more cuts have been removed and is utilised to make one or more determinations in relation to the carcass. For example, after all cuts of meat have been removed, the video monitor 102 may capture one or more images of the carcass. The one or more images are analysed to determine an indicator of an amount of meat left on the carcass.
  • the indicator may be, for example, the area of the carcass that has a particular colour or texture, for example a colour or texture corresponding to meat at the location.
  • the indicator may be, for example, a volume of the carcass as determined by images of the carcass.
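
For illustration, an area-based indicator of this kind can be computed as the fraction of carcass pixels whose colour falls within a meat-like range. The sketch below is a simplification; the RGB bounds are placeholder values that would need calibration to the site's cameras and lighting, and the carcass mask is assumed to come from an earlier segmentation step.

```python
# Minimal sketch of an area-based "meat left on the carcass" indicator.
# The colour bounds are assumed placeholder values, not calibrated figures.
import numpy as np

MEAT_LO = np.array([120, 20, 20])    # assumed lower RGB bound for meat colour
MEAT_HI = np.array([255, 110, 110])  # assumed upper RGB bound

def meat_area_indicator(image: np.ndarray, carcass_mask: np.ndarray) -> float:
    """image: HxWx3 uint8 RGB; carcass_mask: HxW bool (carcass vs background).
    Returns the fraction of the carcass area that looks like exposed meat."""
    in_range = np.all((image >= MEAT_LO) & (image <= MEAT_HI), axis=-1)
    meat_pixels = np.count_nonzero(in_range & carcass_mask)
    return meat_pixels / max(np.count_nonzero(carcass_mask), 1)
```
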
  • the weight of the carcass is measured at one or more points of the processing line.
  • the weight of the carcass may be measured at one or both of an entry point and an exit point of a boning room by scales over which the conveyor for the carcass travels.
  • the hooks 104 may traverse scales at one or more points along the conveyor.
  • the weight of the carcass may also provide an input to one or more determinations for the carcass, for example to accommodate variations in bone size, utilising the density difference of bone and/or meat and/or fat.
  • the weight may be used in combination with determinations made from image processing, for example by the video monitor.
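
As a worked illustration of combining a measured weight with an image-derived volume, the density difference between meat and fat allows a two-component split to be solved directly. The sketch below ignores bone for simplicity, and the density constants are approximate assumed values, not figures from the disclosure.

```python
# Minimal sketch: split an image-derived carcass volume into estimated meat
# and fat volumes using a measured weight and assumed densities.
MEAT_DENSITY = 1.06  # kg/L, assumed approximate value
FAT_DENSITY = 0.92   # kg/L, assumed approximate value

def meat_fat_split(weight_kg: float, volume_l: float):
    """Solve weight = d_m*V_m + d_f*V_f subject to V_m + V_f = volume."""
    v_meat = (weight_kg - FAT_DENSITY * volume_l) / (MEAT_DENSITY - FAT_DENSITY)
    v_fat = volume_l - v_meat
    return v_meat, v_fat

# e.g. a carcass weighing 250 kg with an estimated volume of 245 L
print(meat_fat_split(250.0, 245.0))  # -> (~175.7 L meat, ~69.3 L fat)
```
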
  • the carcass is analysed prior to entering the boning room.
  • an x-ray image or other internal image of the carcass may determine a measure of fat and/or yield for a carcass.
  • an expected achievable yield determined from internal imaging may be compared with a measured yield from the carcass after processing and/or the cuts of meat from the carcass.
  • the processing of video images and/or weight data may be weighted having regard to the internal imaging data. It will be appreciated that performing video imaging and/or weighing is typically a faster process than internal imaging technologies. Use of these within or at the output of an abattoir may therefore avoid a bottleneck through the abattoir, in comparison to using internal imaging technologies.
  • the weight of the cuts of meat taken from the carcass may also provide an input to the determination. For example, the higher the weight of the cuts, the higher the determined yield.
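
A minimal sketch of the resulting yield comparison follows; the function and field names are assumptions.

```python
# Minimal sketch: compare expected achievable yield (e.g. from internal
# imaging before the boning room) with the measured yield after processing.
def yield_report(carcass_weight_kg, cuts_weight_kg, expected_yield_pct):
    measured_yield_pct = 100.0 * cuts_weight_kg / carcass_weight_kg
    shortfall_pct = expected_yield_pct - measured_yield_pct
    return measured_yield_pct, shortfall_pct

measured, shortfall = yield_report(300.0, 204.0, expected_yield_pct=70.5)
print(f"measured {measured:.1f}%, shortfall {shortfall:.1f} points")
# -> measured 68.0%, shortfall 2.5 points
```
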
  • the determination(s) made in relation to the carcass may be linked to the carcass in the same or similar manner as a cut of meat is tracked, as described herein.
  • tracking of the hook 104 is a suitable proxy for tracking the carcass 106.
  • the video monitor 102 works in combination with a mechanism to provide a plurality of views of the carcass 106.
  • the hook 104 may include a motor or mechanical arrangement that rotates the carcass 106 while it is in the field of view of the video monitor 102.
  • one or more flippers may turn the carcass to enable more than one side to be viewed.
  • the carcass data can be accessed to determine an actionable measure of efficiency or yield of abattoir operations.
  • the measure may be made actionable, for example by the association of the data with a carcass or hook and the association of the carcass or hook with processing stations or meat workers in the processing line.
  • the relevant hook(s) and the relevant container(s) are identified using RFID tags. It will be appreciated that in other embodiments either or both of the hook(s) and the relevant container(s) can be identified by other mechanisms.
  • Figure 2 illustrates an example video processing system for tracking and tracing in a supply chain, for example the supply chain of cuts and/or sub-cuts described with reference to Figure 1a.
  • the system 200 includes a tracking system 220, in communication with a hook RFID 222, a video monitor 224 (for example the video monitor 102 of Figure 1a), a container RFID 226 and scales 228, and is configured to receive internal imaging information, for example from an x-ray processing device (not shown).
  • this disclosure refers to "modules". This is intended to refer generally to a collection of processing functions that perform a function. It is not intended to refer to any particular structure of computer instructions or software, or to any particular methodology by which the instructions have been developed.
  • a hook module 202 tracks each hook RFID 222, such as the RFID associated with a hook 104 as described with reference to Figure 1a.
  • the hook module 202 determines a location of a hook.
  • the hook module determines a specific cutting table (such as the cutting table 108 described with reference to Figure 1a) at which the corresponding carcass 106 on the hook will be or is cut.
  • the tracking system 220 receives a hook identifier via an RFID reader.
  • the hook module 202 uses predetermined information on the association of the location of the RFID reader with a cutting table 108 to make the determination, for example knowledge that the RFID reader is located, at a particular time (for example at the time of reading the RFID), at the cutting table 108.
  • the video monitor module 204 interacts with the video monitor 224 to monitor visual aspects of the carcass 106 and/or the corresponding cuts that come from the carcass 106.
  • the video monitor module 204 may track a cut as it comes from a carcass and follow it as it is moved across the cutting table and placed in, or otherwise associated with, a container such as 110, 112.
  • the video monitor module 204 may obtain and process images of the carcass, before and/or after one or more cuts have been taken from the carcass.
  • the container module 206 maintains an association of a cut with a container such as a box 110 or bag 112.
  • a container 110, 112 contains a container RFID 226, which is read by a scanner and provided to the container module 206.
  • the weight module 205 receives information from one or more scales 228.
  • the weight information may be for the carcass, before or after one or more cuts have been taken from the carcass and/or for one or more cuts taken from the carcass.
  • the cuts may be of retained meat and/or discarded meat, fat or other waste.
  • the analysis module 203 analyses one or more images to determine an indicator of an amount of meat left on the carcass.
  • the analysis module 203 may determine the indicator as, for example, the area of the carcass that has a particular colour or texture, for example a colour or texture corresponding to meat at the location.
  • the indicator may be, for example, a volume of the carcass as determined by images of the carcass.
  • the analysis module 203 may also analyse the carcass to determine, for example, an expected achievable yield. This can be determined from internal imaging which may then be compared with a measured yield from the carcass after processing and/or the cuts of meat from the carcass.
  • the timing of the taking of images by the video monitor 224 is correlated or otherwise associated with the timing of the reading of the hook and container RFIDs. Accordingly, the tracking system 220 can use the association by time to create an association between the carcass data and a container identifier.
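
For illustration, an association by time of this kind can be implemented by matching each tracked cut event to the nearest hook read and nearest container read within a window. The sketch below assumes simple (timestamp, identifier) event tuples and a 30-second window; both are hypothetical choices.

```python
# Minimal sketch: associate hook RFID reads, tracked cut events and
# container RFID reads by timestamp proximity. Event shapes are assumed.
def nearest(reads, t, window_s):
    """Nearest (timestamp, identifier) read to time t, within the window."""
    best = min(reads, key=lambda r: abs(r[0] - t), default=None)
    if best is not None and abs(best[0] - t) <= window_s:
        return best[1]
    return None

def associate_by_time(hook_reads, cut_events, container_reads, window_s=30.0):
    """Inputs are lists of (timestamp, identifier) tuples. For each tracked
    cut event, link the nearest hook read to the nearest container read."""
    pairs = []
    for t_cut, _cut_label in cut_events:
        hook_id = nearest(hook_reads, t_cut, window_s)
        container_id = nearest(container_reads, t_cut, window_s)
        if hook_id and container_id:
            pairs.append((hook_id, container_id))
    return pairs

print(associate_by_time([(100.0, "hook-104")],
                        [(112.0, "striploin")],
                        [(118.0, "bag-112")]))
# -> [('hook-104', 'bag-112')]
```
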
  • a consumer can therefore scan the RFID of the container 110, 112 to determine the cut and therefore determine the carcass that the cut came from.
  • a cut is placed individually in a container 112 (e.g. a bag) and then placed inside another container 110 (e.g. a box).
  • container identifiers may be associated with the carcass data for one or both of the containers 112 and 110.
  • the cuts are placed in boxes directly.
  • a box 110 may then be used to identify all cuts, which might come from a single carcass or from a plurality of carcasses. In the latter case, the association may not be unique in that one container identifier could point to two sets of carcass data. Whether this occurs would depend on the packing techniques employed by the individual boning room or abattoir.
  • the carcass module 208 generates the association between the container identifier and the carcass data.
  • the carcass module 208 can receive a carcass identifier from a carcass database 212, associate the carcass identifier with a hook identifier and, based on the information from the hook module 202, video monitor module 204 and container module 206, generate the association between the container identifier and the carcass data, for example by associating the carcass identifier with each container identifier for a container in which a cut or sub-cut from the carcass was determined to have been placed. This association may be stored, for example, in the carcass database 212.
  • the video training database 210 includes training and/or other data to train the video monitor module 204 to identify cuts.
  • the video monitor module 204 can be trained to identify the type of cut that comes from the carcass 106.
  • cuts include chuck, blade, striploin, sirloin butt and silverside.
  • the video monitor module 204 can therefore be trained using machine learning techniques to identify such cuts by first identifying features of a cut.
  • Features of a cut include the shape, colour, patterns and the order in which a cut was made.
  • the video monitor module 204 can be trained to identify the meat on a carcass, before and/or after one or more cuts have been taken from the carcass.
  • the machine learning required can be constrained or limited by allocating a video monitor 224 to a cutting table dedicated to certain cuts, fewer than all the cuts.
  • the video monitor module 204 need therefore only learn to distinguish between the cuts for that cutting table.
  • the process of training the video monitor module 204 to identify cuts or characteristics of carcasses includes determining a training set, which can be stored on the video training database 210, including standard cuts and non-standard cuts.
  • a video training database 210 would involve a training set of many thousands of images of different cuts or carcasses which can be input into a machine learning algorithm.
  • the process involves the video monitor module 204 estimating a cut based on an image or video; the video monitor module is then provided with feedback from a person or algorithm as to whether the estimate was correct or not. This may enable the machine learning to apply incremental learning techniques that improve the trained machine learning efficiently and effectively during the training process. Hence, as each estimate is factored in, the video monitor module 204 improves any following estimates.
  • the video monitor can attempt to identify the cut or carcass characteristic.
  • the video monitor module 204 can be trained to identify a match over a certain threshold value.
  • the threshold value may be 80%, such that the video monitor module 204 is able to identify the correct cut 80% of the time or identify the relevant characteristic within a percentage of accuracy that provides a useful measure for the specific application.
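
A minimal sketch of checking a classifier against such a threshold on held-out labelled images follows; the cut labels are illustrative.

```python
# Minimal sketch: does a trained cut classifier meet the accuracy threshold
# mentioned above on a held-out set of labelled images?
def meets_threshold(predictions, labels, threshold=0.80):
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    return accuracy, accuracy >= threshold

acc, ok = meets_threshold(["chuck", "blade", "chuck", "silverside"],
                          ["chuck", "blade", "striploin", "silverside"])
print(acc, ok)  # -> 0.75 False (below the 80% threshold)
```
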
  • a deep learning convolutional neural network can be used to classify objects of interest (such as cuts or scraps) and thereby determine their position.
  • the trained CNN can then be utilised in the tracking system 220 to efficiently track the position of the object of interest in a series of images, in or close to real time.
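
As a non-authoritative illustration of the tracking step, per-frame detections from the classifier can be linked into tracks with a simple nearest-centroid match. A production system would add track creation/expiry rules and motion prediction; the distance threshold here is an assumed value.

```python
# Minimal sketch: link per-frame detections into tracks by nearest centroid,
# so a cut can be followed across the cutting table. Threshold is assumed.
import math

def update_tracks(tracks, detections, max_dist=80.0):
    """tracks: dict track_id -> (x, y); detections: list of (x, y) centroids.
    Greedily matches each detection to the nearest unmatched track."""
    unmatched = dict(tracks)
    next_id = max(tracks, default=0) + 1
    for (dx, dy) in detections:
        best_id, best_d = None, max_dist
        for tid, (tx, ty) in unmatched.items():
            d = math.hypot(dx - tx, dy - ty)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:          # new object entered the scene
            best_id, next_id = next_id, next_id + 1
        else:
            unmatched.pop(best_id)
        tracks[best_id] = (dx, dy)
    return tracks

tracks = update_tracks({}, [(100.0, 200.0)])
tracks = update_tracks(tracks, [(112.0, 205.0)])  # same cut, moved slightly
print(tracks)  # -> {1: (112.0, 205.0)}
```
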
  • Each image, whether part of the initial training set or not, can be stored in the video training database 210. This means that any estimate of a cut by the video monitor module 204 can be used to improve any subsequent estimation.
  • the video monitor module 204 will make a statistical determination based on which cut was most likely to have occurred.
  • Video as used in this disclosure comprises individual images, so training can be done on individual images or on sequences of images (compiled as videos).
  • Video can be pre-processed to improve results.
  • a temporal filter may be used to reduce the level of noise. The temporal filter can be performed by, for example, averaging three consecutive frames.
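
A minimal sketch of such a temporal filter, averaging three consecutive frames, might look as follows:

```python
# Minimal sketch of the temporal filter described above: suppress noise by
# averaging three consecutive frames.
import numpy as np

def temporal_filter(prev_frame, frame, next_frame):
    """Average three consecutive frames (HxWx3 uint8) to reduce noise."""
    stack = np.stack([prev_frame, frame, next_frame]).astype(np.float32)
    return stack.mean(axis=0).astype(np.uint8)
```
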
  • Various CNN models and training protocols can be used to determine the best performing model in terms of accuracy and speed of cut identification.
  • CNNs may be used as binary classifiers that classify expected objects (e.g. a standard cut) in images or videos against irrelevant objects (e.g. scraps or objects that are otherwise irrelevant for tracking purposes).
  • CNNs may produce multi-class classification to determine different classes of objects, for example, different cut types.
  • One example CNN model that can be used consists of three convolutional layers, two pooling layers and one fully connected layer.
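
For illustration only, the layer counts described above (three convolutional layers, two pooling layers and one fully connected layer) could be realised as in the following PyTorch sketch; the input size, channel counts and number of classes are assumptions, not details from the disclosure.

```python
# Minimal sketch of a CNN with three convolutional layers, two pooling
# layers and one fully connected layer. Sizes and class count are assumed.
import torch
import torch.nn as nn

class CutClassifier(nn.Module):
    def __init__(self, num_classes: int = 5):  # e.g. the five cuts named above
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # pooling layer 1
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # pooling layer 2
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64 * 32 * 32, num_classes)  # for 128x128 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.fc(x.flatten(start_dim=1))

logits = CutClassifier()(torch.randn(1, 3, 128, 128))
print(logits.shape)  # -> torch.Size([1, 5])
```
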
  • Example tracking system
  • the tracking system 220 is implemented using an electronic device.
  • the electronic device is, or will include, a computer processing system.
  • Figure 1b provides a block diagram of one example of a computer processing system 170.
  • System 170 as illustrated in Figure 1b is a general-purpose computer processing system. It will be appreciated that Figure 1b does not illustrate all functional or physical components of a computer processing system. For example, no power supply or power supply interface has been depicted; however, system 170 will either carry a power supply or be configured for connection to a power supply (or both).
  • the computer processing system 170 includes at least one processing unit 140.
  • the processing unit 140 may be a single computer-processing device (e.g. a central processing unit, graphics processing unit, or other computational device), or may include a plurality of computer processing devices. In some instances, all processing will be performed by processing unit 140; however, in other instances processing may also, or alternatively, be performed by remote processing devices accessible and useable (either in a shared or dedicated manner) by the system 170.
  • system 170 includes a system memory 144 (e.g. a BIOS), volatile memory 148 (e.g. random access memory such as one or more DRAM modules), and non-volatile memory 150 (e.g. one or more hard disk or solid state drives).
  • System 170 also includes one or more interfaces, indicated generally by 162, via which system 170 interfaces with various devices and/or networks.
  • other devices may be physically integrated with system 170, or may be physically separate.
  • connection between the device and system 170 may be via wired or wireless hardware and communication protocols, and may be a direct or an indirect (e.g. networked) connection.
  • Wired connection with other devices/networks may be by any appropriate standard or proprietary hardware and connectivity protocols.
  • system 170 may be configured for wired connection with other devices/communications networks by one or more of: USB; FireWire; eSATA; Thunderbolt; Ethernet; OS/2; Parallel; Serial; HDMI; DVI; VGA; SCSI; AudioPort.
  • Other wired connections are, of course, possible.
  • Wireless connection with other devices/networks may similarly be by any appropriate standard or proprietary hardware and communications protocols.
  • system 170 may be configured for wireless connection with other devices/communications networks using one or more of: infrared; Bluetooth; Wi-Fi; near field communications (NFC); Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), long term evolution (LTE), wideband code division multiple access (W-CDMA), code division multiple access (CDMA).
  • The devices to which system 170 connects, whether by wired or wireless means, allow data to be input into/received by system 170 for processing by the processing unit 140, and data to be output by system 170.
  • Example devices are described below, however it will be appreciated that not all computer-processing systems will include all mentioned devices, and that additional and alternative devices to those mentioned may well be used.
  • system 170 may include or connect to one or more input devices by which information/data is input into (received by) system 170.
  • input devices may include physical buttons, alphanumeric input devices (e.g. keyboards), pointing devices (e.g. mice, track pads and the like), touchscreens, touchscreen displays, microphones, accelerometers, proximity sensors, GPS devices and the like.
  • System 170 may also include or connect to one or more output devices controlled by system 170 to output information.
  • output devices may include devices such as indicators (e.g. LED, LCD or other lights), displays (e.g. CRT displays, LCD displays, LED displays, plasma displays, touch screen displays), audio output devices such as speakers, vibration modules, and other output devices.
  • System 170 may also include or connect to devices which may act as both input and output devices, for example memory devices (hard drives, solid state drives, disk drives, compact flash cards, SD cards and the like) which system 170 can read data from and/or write data to, and touch-screen displays which can both display (output) data and receive touch signals (input).
  • System 170 may also connect to communications networks (e.g. the Internet, a local area network, a wide area network, a personal hotspot etc.) to communicate data to and receive data from networked devices, which may themselves be other computer processing systems.
  • system 170 may be any suitable computer processing system such as, by way of non-limiting example, a desktop computer, a laptop computer, a netbook computer, tablet computer, a smart phone, a Personal Digital Assistant (PDA), a cellular telephone, a web appliance.
  • Where system 170 acts as a server in a client/server type architecture, the system 170 may also include user input/output directly via the user input/output interface 154, or alternatively receive equivalent input/output of a user via a communications interface 164 for communication with a network 214.
  • The devices that system 170 includes or connects to will depend on the particular type of system 170. For example, if system 170 is a desktop computer it will typically connect to physically separate devices such as (at least) a keyboard, a pointing device (e.g. a mouse), and a display device (e.g. an LCD display). Alternatively, if system 170 is a laptop computer it will typically include (in a physically integrated manner) a keyboard, pointing device, a display device, and an audio output device. Further alternatively, if system 170 is a tablet device or smartphone, it will typically include (in a physically integrated manner) a touchscreen display (providing both input means and display output means), an audio output device, and one or more physical buttons.
  • System 170 stores or has access to instructions and data which, when processed by the processing unit 140, configure system 170 to receive, process, and output data. Such instructions and data will typically include an operating system such as Microsoft Windows®, Apple OSX, Apple iOS, Android, Unix, or Linux. System 170 also stores or has access to instructions and data which, when processed by the processing unit 140, configure system 170 to perform various computer-implemented processes/methods in accordance with embodiments of the invention (as described below). It will be appreciated that in some cases part or all of a given computer-implemented method will be performed by system 170 itself, while in other cases processing may be performed by other devices in data communication with system 170.
  • Instructions and data are stored on a non-transient machine-readable medium accessible to system 170.
  • instructions and data may be stored on non-transient memory 150.
  • Instructions may be transmitted to/received by system 170 via a data signal in a transmission channel enabled (for example) by a wired or wireless network connection.

Abstract

Systems and methods for determining the effectiveness of removal of cuts from a carcass are provided. In one embodiment, a method includes: conveying a carcass to one or more stations at which cuts are removed from the carcass; after conveying the carcass to the one or more stations, capturing images of the carcass and providing the images to a processor; receiving from the processor at least one indicator of the effectiveness of the removal of the cuts from the carcass, wherein the at least one indicator is determined from an analysis of the images of the carcass; and based on the received at least one indicator, providing an output.

Description

Meat Processing Tracking, Tracing and Yield Measurement
Field
The present disclosure generally relates to the field of carcass processing, in particular to technology and associated methods of use for an abattoir.
Background
Protein is an important part of the diet and many people obtain much of their protein from cuts of meat, for example cuts of beef, mutton, lamb, pork, venison, turkey, chicken and fish, amongst others. Cuts of meat are often formed on a processing line at an abattoir. The processing line may include a boning room with several stations. The carcasses traverse the stations, for example on a conveyor system, and various cuts of meat are removed from the carcass. The cuts are packaged and shipped for consumption.
Operations in a boning room remain largely manual. Meat workers at the stations use a knife and other utensils, for example a hook, to handle the meat and cut it into the required portions. The skill of the meat workers enables efficient processing, increasing the production capacity of the abattoir, amongst other advantages.
In recent history, there has been demand for modified farming and carcass processing methods. For example, consumers have more interest in the source of the protein they consume. This interest may arise, for example, from a desire to know that the primary producer is one that uses sustainable practices. Knowledge of the source of protein may also assist food safety and regulation, for example by allowing identification of the source and processing of a cut of meat that has been found to be deficient, for example by containing excessive levels of a contaminant. Meeting this demand at an abattoir may have an associated cost in efficiency, for example requiring increased labour and/or a reduction in capacity.
Summary
The present disclosure relates to methods and technology for protein processing. For example, the methods and technology utilise image processing to determine characteristics of a carcass and/or of its cuts of meat. The determined characteristics are processed to provide useful information for the control or end-to-end monitoring of abattoir processes.
In some embodiments, the methods and technology are useful to enable improved detection, segmentation or tracking of a source of a cut of meat. For example, a packaged cut of meat may be provided with indicia identifying or otherwise associated with the carcass from which it came. Combined with tracking methodologies for the carcass, each cut of meat may be traced all the way back to the primary producer.
In some embodiments, the methods and technology are useful to enable feedback on aspects of efficiency of abattoir operations. For example, the efficiency of meat workers in a boning room in removing a maximum amount of meat from the carcass may be determined. The determination may be on a carcass-by-carcass basis.
Embodiments of a system for tracking cuts in a boning room station for use in a supply chain include: a carcass associated with a carcass identifier; a container associated with a container identifier; and a video monitor that tracks a cut from the carcass to the container such that the container identifier can be associated with the carcass identifier.
In some embodiments, each cut from the carcass is assigned a unique identifier. In one example, this identifier may be based on the type of cut or sub-cut taken from the carcass. Each cut identifier may provide further associativity between the carcass and the container and tracking of the cut in a boning room.
In some embodiments, the system further includes a hook associated with a hook identifier, and wherein the carcass is mounted on the hook.
In some embodiments, the video monitor includes a machine learning component to identify the cut such that the cut can be associated with the container identifier. The machine learning component may be a neural network. The neural network may be trained using a training set of images or videos of cuts. The neural network may be a convolutional neural network. The neural network training may be evaluated by one or more of: feedback from a person; results of a validation dataset of images or video; or results of a statistical validation based on a test dataset of images or video. The machine learning component may determine one or more features of the cut, wherein a feature includes: shape; colour; pattern; texture; order of the cut; and any other visual aspect of the cut that may be used to identify the cut.
Embodiments of a method for tracking cuts in a boning room station for use in a supply chain include: receiving an identifier associated with a carcass; receiving an identifier associated with a container; receiving images of the cutting table; and associating the carcass identifier and the container identifier with each other, based on processing of the received images.
In some embodiments, the processing includes identifying, from the images, a cut from the carcass and tracking the cut into the container. In one example, a cut is identified from the images based on a unique identifier assigned to the cut.
In some embodiments, the processing includes identifying corresponding carcass identifier, container identifier and received images based on time information associated with each identifier and the received images.
Embodiments of a method include: conveying a carcass to one or more stations at which cuts are removed from the carcass; after conveying the carcass to the one or more stations, capturing images of the carcass and providing the images to a processor; receiving from the processor at least one indicator of the effectiveness of the removal of the cuts from the carcass, wherein the at least one indicator is determined from an analysis of the images of the carcass; and based on the received at least one indicator, providing an output.
In some embodiments, the method further includes determining an indicator of an amount of meat left on the carcass after removing multiple cuts from the carcass. The indicator may be an area of the carcass that has a particular colour, texture and/or based on a volume of the carcass as determined by images of the carcass.
In some embodiments, the method further includes measuring a weight of the carcass at one or more points of a processing line of the carcass. Measuring the weight of the carcass may include measuring the weight of the carcass at an entry point of the boning room or at an exit point of the boning room. The weight of the carcass may be an input to determining one or more of: variations in bone size; a bone density; a meat density; a fat density; or a density difference between bone, meat and/or fat.
In some embodiments, the method further includes analysing the carcass prior to entering the boning room. The analysing may include determining a measure of fat and/or yield for the carcass. The method may further include determining an expected achievable yield from internal imaging and determining a measured yield from the carcass after processing. The internal imaging may include determining an x-ray image.
Embodiments of a system include: a conveyor configured to convey a carcass to one or more stations at which cuts are removed from the carcass; a video monitor configured to capture images of the carcass after the carcass is conveyed to the one or more stations; and a processor configured to: receive the images of the carcass from the video monitor; analyse the images of the carcass to determine at least one indicator of the effectiveness of the removal of cuts from the carcass; and, based on the at least one indicator, provide an output.
In some embodiments, the processor is configured to determine an indicator of an amount of meat left on the carcass after multiple cuts are removed from the carcass. The indicator may be an area of the carcass that has a particular colour, texture and/or based on a volume of the carcass as determined by images of the carcass.
In some embodiments, the system further comprises one or more scales for measuring a weight of the carcass at one or more points of a processing line of the carcass. The conveyor may operate between an entry point of a boning room and an exit point of the boning room, and the one or more scales may be configured to measure the weight of the carcass proximate the entry point to the boning room and/or proximate the exit point of the boning room. The processor may be configured to receive the weight of the carcass and determine one or more of: variations in bone size; a bone density; a meat density; a fat density; or a density difference between bone, meat and/or fat.
Embodiments of a system for tracking cuts in a boning room for use in a supply chain include: a cutting table associated with the boning room, wherein the cutting table has one or more marks for receiving a cut from a carcass; and a video monitor that tracks a cut from a carcass to a mark of the cutting table.
In some embodiments, the one or more marks may represent a two-dimensional area located on the cutting table capable of receiving a cut. In some embodiments, the one or more marks may represent a three-dimensional area located on the cutting table capable of receiving a cut.
In some embodiments, the one or more marks are shaped and/or dimensioned to correspond to a type of cut. In some embodiments, the one or more marks are fixed to an upper surface of the cutting table. In some embodiments, the cutting table includes a moveable conveyor and the one or more marks are located on an upper surface of the moveable conveyor.
Further embodiments will become apparent from the following description and/or from the accompanying figures.
Brief description of the drawings
Figure 1a shows a diagrammatic representation of a part of an abattoir, in particular a station in a boning room in an abattoir.
Figure 1b provides a block diagram of one example of a computer processing system configurable to provide functionality for the boning room.
Figure 2 illustrates an example video processing system for tracking and tracing carcasses and/or cuts of meat from carcasses, which may include a configured computer processing system as described with reference to Figure 1b.
Detailed description
Figure 1a illustrates an example scenario of the present disclosure as may be implemented in a boning room in an abattoir. In this example, there is a video monitor 102, a hook 104, a cutting table 108 and containers 110 and 112. In general and by way of example, the hook 104 may be on a conveyor that extends through the boning room. Before the hook reaches the cutting table 108, a carcass 106 is mounted on the hook 104, typically outside of the boning room. At or near the cutting table 108, one or more cuts are removed from the carcass and provided at the cutting table 108. At the cutting table, one or more of the cuts may be further cut into one or more sub-cuts for retention. The cuts and/or sub-cuts for retention may be placed into a container, for example the container 112. The container 112 may in turn be placed into another container, for example the container 110. The container 110 may be loaded with a plurality of containers 112, each of which contains one or more cuts or sub-cuts. For example, the container 110 may be a box and the container 112 a bag. Material from the carcass not for retention may be discarded in a discard location (not shown) or used for other purposes. The boning room may include a plurality of cutting tables 108 at which the operations described above are performed. For example, different cuts may be taken from the carcass at different cutting tables 108, which make any required sub-cuts of the cuts received from the carcass. The cutting tables 108 are therefore conveniently arranged along the path of travel of the hook 104. In addition, the hook 104 will be one of a plurality of spaced-apart hooks, each of which passes by the plurality of cutting tables 108.
In some embodiments, each hook 104 includes a tracking mechanism, such as a radio frequency identifier (RFID). In the case of an RFID, the boning room may then include one or more RFID readers. By reading the RFID of the hook 104, the position of the hook at a cutting table 108 is determinable. For example, an RFID reader may read each hook as it enters the boning room, following which the computer processing system can determine, based on, for example, actual or inferred movement of the hook 104, when it is at each cutting table 108. In another example, there might be a plurality of RFID readers, for example one located at each cutting table 108, in which case a direct determination of when the hook 104 is at a cutting table can be made and communicated to the computer processing system.
In other embodiments, hook tracking is performed by a tracking mechanism that does not use RFID tags or the like. For example, in other embodiments, tracking is performed by monitoring or detecting the position of the conveyor on which the hook 104 is mounted. For example, a conveyor system may be configured to monitor its own position; the hook 104 on which a particular carcass is mounted can then be determined from the hook's location at a carcass loading station, and the conveyor system may similarly determine when the hook is at each cutting table 108.
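By way of a non-limiting illustration, the mapping from RFID read events to hook locations described above might be sketched as follows (Python; the reader identifiers, location names and event structure are assumptions made for this sketch only, not features of the disclosure):

    # Minimal sketch of hook tracking: each RFID reader is associated with a
    # known location, so reading a hook RFID fixes the hook's position.
    # All identifiers below are hypothetical.
    READER_LOCATION = {
        "reader-entry": "boning room entry",
        "reader-t1": "cutting table 1",
        "reader-t2": "cutting table 2",
    }

    hook_positions: dict[str, str] = {}  # hook RFID -> last known location

    def on_rfid_read(reader_id: str, hook_rfid: str) -> None:
        """Update the last known location of a hook when its RFID is read."""
        hook_positions[hook_rfid] = READER_LOCATION.get(reader_id, "unknown")

    on_rfid_read("reader-t1", "hook-0042")
    print(hook_positions)  # {'hook-0042': 'cutting table 1'}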
In some embodiments, the video monitor 102 includes a camera for producing one or more image sequences of the area of the cutting table 108. In some embodiments, the camera of the video monitor 102 is positioned above the cutting table to view the activities at the cutting table generally from above. In some embodiments, there is a camera associated with each of the cutting tables 108. In some embodiments, a single camera is positioned to have a view of two or more cutting tables 108, with image processing used to identify activities at each cutting table. In some embodiments, there is a camera with a view of the carcass after a station. For example, a camera may view the carcass after it has been processed at the cutting table 108. In some embodiments, a camera may view the carcass after it has been processed at a group of cutting tables 108, for example all cutting tables in the boning room and/or in the abattoir. In some embodiments, one or more cameras view both each carcass at one or more points on a conveyor carrying the hook 104 and the cuts of meat removed from the carcasses at each of the cutting tables 108.
The camera may be a video camera or a still camera, configured to take images at predetermined intervals. In some embodiments, the camera operates in the visible light spectrum. The video monitor 102 includes or is in communication with image processing circuitry to perform tracking and/or measurement functions, for example as described herein.
A person at or near the cutting table 108 may take the cuts from the carcass 106. In embodiments in which the video monitor 102 views the cutting table 108, as or following removal of each cut from the carcass, the video monitor 102 is used to at least one of a) identify the type of cut, and b) determine at least one of the containers 110, 112 each cut is placed in or otherwise associated with. Alternatively or additionally, if one or more sub-cuts are made at the cutting table 108, the video monitor 102 is used to at least one of a) identify the type of sub-cut, and b) determine at least one of the containers 110, 112 each sub-cut is placed in or otherwise associated with.
In some embodiments, the configuration of the cutting table 108 facilitates use of the video monitor 102 as described above. For example, the use of the video monitor 102 to identify into which bag a cut or sub-cut is placed may be inhibited if two or more cuts or sub-cuts are stacked on top of each other, particularly if they are of the same cut or sub-cut category. The cutting table 108 may therefore have marked spaces for individual cuts and/or sub-cuts. The marked spaces may be fixed, for example by markings on an upper surface of the table. The marked spaces may in other embodiments be mobile, for example with the cutting table including a movable conveyor with divisions allocating space for receiving a single cut or sub-cut of meat or at least a small number of cuts in a way that allows tracking of cuts using the video monitor 102.
In some embodiments, the video monitor 102 may include bi-directional communication to indicate an event, for example, the scanning of a carcass identifier by the video monitor 102. In one example, this bi-directional communication is in the form of indicator lights and/or audible tones. The video monitor 102 and/or camera may be controlled remotely through network configurations to allow flexibility of panning, zooming and dynamic changes of the field of view.
Similarly, the procedures for person(s) operating at a cutting table 108 include procedures that avoid stacking of cuts. For example, the procedures may involve not placing a cut on top of another and/or not making a cut unless there is room on the table to place the cut. Optionally, operators may have a controller to control the speed of, or temporarily stop conveyance of, a series of hooks 104 to facilitate compliance with the procedures when needed.
The hook 104 is associated with a specific carcass 106. The carcass 106 is associated with data, for example including details regarding the animal, origin, location and/or feeding habits and other information associated with the animal. Therefore, by using the video monitor 102 to track the cuts from the carcass to the containers 110, 112, each container 110, 112 (or one thereof) can be associated with carcass data.
In some embodiments, each container 110, 112 includes a tracking mechanism, such as a radio frequency identifier (RFID) or a barcode (e.g. a 1D or 2D barcode) or another carrier of an identifier. The identifier may be machine-readable. In the case of an RFID, the boning room may then include one or more RFID readers for the containers. By reading the RFID of the container, the position of the container at a cutting table 108 is determinable. For example, an RFID reader may read each container as it is provided at a cutting table 108. In the case of an optically readable identifier, the container identifier may be obtained by an optical reader. The optical reader may, for example, be a barcode reader, or the video monitor 102. The container identifier is then associated with the data associated with the carcass from which the cut or sub-cut was taken. The container identifier may additionally, or alternatively, be associated with information identifying the type of cut or sub-cut, according to the aforementioned identification using the video monitor 102.
Accordingly, the carcass data can be accessed by persons in the supply chain of the protein that was derived from the carcass 106. For example, wholesalers, retailers and/or consumers may access the data associated with the protein they have based on knowledge of the container 110 and/or 112 that the protein was in.
In some embodiments, the video monitor 102 includes a camera that views the carcass 106 after one or more cuts have been removed and is utilised to make one or more determinations in relation to the carcass. For example, after all cuts of meat have been removed, the video monitor 102 may capture one or more images of the carcass. The one or more images are analysed to determine an indicator of an amount of meat left on the carcass. The indicator may be, for example, the area of the carcass that has a particular colour or texture, for example a colour or texture corresponding to meat at the location. The indicator may be, for example, a volume of the carcass as determined by images of the carcass.
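Purely as an illustrative sketch of the colour-based indicator described above, the fraction of the image occupied by meat-coloured pixels might be computed as follows (Python with OpenCV and NumPy; the HSV bounds are placeholder assumptions that would, in practice, be calibrated for the lighting and species in a particular boning room):

    import cv2
    import numpy as np

    def meat_area_fraction(image_bgr: np.ndarray) -> float:
        """Fraction of the image whose colour falls within a 'meat' range."""
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 80, 60])     # hypothetical lower bound for red meat
        upper = np.array([12, 255, 255])  # hypothetical upper bound
        mask = cv2.inRange(hsv, lower, upper)
        return cv2.countNonZero(mask) / mask.size

A higher fraction after boning suggests more meat left on the carcass; a texture- or volume-based indicator could be substituted or combined in the same way.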
In some embodiments, the weight of the carcass is measured at one or more points of the processing line. For example, the weight of the carcass may be measured at one or both of an entry point and an exit point of a boning room by scales over which the conveyor for the carcass travels. For example, the hooks 104 may traverse scales at one or more points along the conveyor. The weight of the carcass may also provide an input to one or more determinations for the carcass, for example to accommodate variations in bone size, utilising the density difference of bone and/or meat and/or fat. The weight may be used in combination with determinations made from image processing, for example by the video monitor.
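As a hedged illustration of using the density difference of bone and meat together with an image-derived volume, a simple two-component mixture can be solved for the meat and bone volumes; the densities below are rough literature values used purely for illustration:

    def meat_bone_split(weight_kg: float, volume_m3: float,
                        rho_meat: float = 1060.0,   # approx. lean meat, kg/m^3
                        rho_bone: float = 1850.0):  # approx. cortical bone, kg/m^3
        """Solve weight = rho_meat*v_meat + rho_bone*v_bone and
        volume = v_meat + v_bone for the two component volumes."""
        v_bone = (weight_kg - rho_meat * volume_m3) / (rho_bone - rho_meat)
        v_meat = volume_m3 - v_bone
        return v_meat, v_bone

    # e.g. meat_bone_split(300.0, 0.27) -> (approx. 0.2525, 0.0175) m^3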
In some embodiments, the carcass is analysed prior to entering the boning room. For example, an x-ray image or other internal image of the carcass may provide a measure of fat and/or yield for a carcass. For example, an expected achievable yield determined from internal imaging may be compared with a measured yield from the carcass after processing and/or from the cuts of meat from the carcass. In other examples, the processing of video images and/or weight data may be weighted having regard to the internal imaging data. It will be appreciated that video imaging and/or weighing is typically faster than internal imaging technologies. Use of these within or at the output of an abattoir may therefore avoid a bottleneck through the abattoir, in comparison to using internal imaging technologies.
If measured, for example by suitable scales at the cutting table or by scales operating on the containers 112 or 110, the weight of the cuts of meat taken from the carcass (and associated with the carcass by tracking, for example as described herein) may also provide an input to the determination. For example, the higher the weight of the cuts, the higher the determined yield.
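A minimal sketch of such a yield determination from weights, assuming an entry weight and per-cut weights are available from the scales described above (the function and variable names are illustrative):

    def measured_yield(entry_weight_kg: float, cut_weights_kg: list[float]) -> float:
        """Yield as the ratio of retained cut weight to carcass entry weight."""
        if entry_weight_kg <= 0:
            raise ValueError("entry weight must be positive")
        return sum(cut_weights_kg) / entry_weight_kg

    # e.g. measured_yield(300.0, [45.2, 30.1, 28.7, 60.0]) -> approx. 0.547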
The determination(s) made in relation to the carcass may be linked to the carcass in the same or similar manner as a cut of meat is tracked, as described herein. In embodiments in which the video monitor 102 captures images of the carcass while it is still on the hook 104, tracking of the hook 104 is a suitable proxy for tracking the carcass.
In some embodiments, the video monitor 102 works in combination with a mechanism to provide a plurality of views of the carcass 106. For example, if the video monitor 102 includes a camera with a view of the carcass while on the hook 104, the hook 104 may include a motor or mechanical arrangement that rotates the carcass 106 while it is in the field of view of the video monitor 102. In the case of a carcass on a conveyor, one or more flippers may turn the carcass to enable more than one side to be viewed.
Accordingly, the carcass data can be accessed to determine an actionable measure of efficiency or yield of abattoir operations. The measure may be made actionable, for example by the association of the data with a carcass or hook and the association of the carcass or hook with processing stations or meat workers in the processing line.
For simplicity of illustration, in the description of embodiments that follows it is assumed that the relevant hook(s) and the relevant container(s) are identified using RFID tags. It will be appreciated that in other embodiments either or both of the hook(s) and the relevant container(s) can be identified by other mechanisms.
Figure 2 illustrates an example video processing system for tracking and tracing in a supply chain, for example the supply chain of cuts and/or sub-cuts described with reference to Figure 1a. In this example, the system 200 includes a tracking system 220, in communication with a hook RFID 222, a video monitor 224 (for example the video monitor 102 of Figure 1a), a container RFID 226 and scales 228, and configured to receive internal imaging information, for example from an x-ray processing device (not shown).
In the following description, reference is made to “modules”. This is intended to refer generally to a collection of processing functions that perform a function. It is not intended to refer to any particular structure of computer instructions or software or to any particular methodology by which the instructions have been developed.
In this example system 200, a hook module 202 tracks each hook RFID 222, such as the RFID associated with a hook 104 as described with reference to Figure 1a. The hook module 202 determines a location of a hook. In particular, the hook module determines a specific cutting table (such as the cutting table 108 described with reference to Figure 1a) at which the corresponding carcass 106 on the hook will be or is cut. For example, the tracking system 220 receives a hook identifier via an RFID reader. The hook module 202 uses predetermined information on the association of the location of the RFID reader with a cutting table 108 to make the determination, for example knowledge that the RFID reader is located, at a particular time (for example at the time of reading the RFID), at the cutting table 108.
The video monitor module 204 interacts with the video monitor 224 to monitor visual aspects of the carcass 106 and/or the corresponding cuts that come from the carcass 106. In some embodiments, the video monitor module 204 may track a cut as it comes from a carcass and follow it as it is moved across the cutting table and placed in, or otherwise associated with, a container such as 110, 112. In some embodiments, the video monitor module 204 may obtain and process images of the carcass, before and/or after one or more cuts have been taken from the carcass.
The container module 206 maintains an association of a cut with a container, such as a box 110 or bag 112. In this example, a container 110, 112 carries a container RFID 226, which is read by a scanner and provided to the container module 206.
The weight module 205 receives information from one or more scales 228. The weight information may be for the carcass, before or after one or more cuts have been taken from the carcass and/or for one or more cuts taken from the carcass. The cuts may be of retained meat and/or discarded meat, fat or other waste.
The analysis module 203 analyses one or more images to determine an indicator of an amount of meat left on the carcass. As above, the indicator may be, for example, the area of the carcass that has a particular colour or texture, for example a colour or texture corresponding to meat at the location. Similarly, the indicator may be, for example, a volume of the carcass as determined from images of the carcass. The analysis module 203 may also analyse the carcass to determine, for example, an expected achievable yield. This can be determined from internal imaging, which may then be compared with a measured yield from the carcass after processing and/or from the cuts of meat from the carcass.
The timing of the taking of images by the video monitor 224 is correlated or otherwise associated with the timing of the reading of the hook and container RFIDs. Accordingly, the tracking system 220 can use the association by time to create an association between the carcass data and a container identifier.
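The time-based association might, purely as a sketch, look like the following (Python; the event structure and the five-second window are assumptions of this sketch):

    from dataclasses import dataclass

    @dataclass
    class Event:
        kind: str         # "hook", "container" or "image"
        ident: str        # RFID value or image reference
        timestamp: float  # e.g. seconds since epoch

    def associate(events: list[Event], window_s: float = 5.0) -> list[tuple[str, str]]:
        """Pair hook (carcass) reads with container reads occurring within
        window_s seconds of each other."""
        hooks = [e for e in events if e.kind == "hook"]
        containers = [e for e in events if e.kind == "container"]
        return [(h.ident, c.ident)
                for h in hooks
                for c in containers
                if abs(h.timestamp - c.timestamp) <= window_s]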
By tracking the cut or cuts in the container, a consumer can therefore scan the RFID of the container 110, 112 to determine the cut and therefore determine the carcass that the cut came from. In some embodiments, a cut is placed individually in a container 112 (e.g. a bag) and then placed inside another container 110 (e.g. a box). There may be container identifiers associated with the carcass data for one or both of the containers 112 and 110. In other embodiments, the cuts are placed in boxes directly. In such embodiments, a box 110 may then be used to identify all cuts, which might come from a single carcass or from a plurality of carcasses. In the latter case, the association may not be unique, in that one container identifier could point to two sets of carcass data. Whether this occurs would depend on the packing techniques employed by the individual boning room or abattoir.
The carcass module 208 generates the association between the container identifier and the carcass data. For example, the carcass module 208 can receive a carcass identifier from a carcass database 212, associate the carcass identifier with a hook identifier and, based on the information from the hook module 202, video monitor module 204 and container module 206, generate the association between the container identifier and the carcass data, for example by associating the carcass identifier with each container identifier of a container in which a cut or sub-cut from the carcass was determined to have been placed. This association may be stored, for example, in the carcass database 212.
The video training database 210 includes training and/or other data to train the video monitor module 204 to identify cuts.
Video monitor
In some embodiments, the video monitor module 204 can be trained to identify the type of cut that comes from the carcass 106. In Australia, there are standard primal cuts that can be made from a carcass. For example, for beef, cuts include chuck, blade, striploin, sirloin butt and silverside. The video monitor module 204 can therefore be trained using machine learning techniques to identify such cuts by first identifying features of a cut. Features of a cut include the shape, colour, patterns and the order in which a cut was made. Similarly, the video monitor module 204 can be trained to identify the meat on a carcass, before and/or after one or more cuts have been taken from the carcass.
In some embodiments, the machine learning required can be constrained or limited by allocating a video monitor 224 to a cutting table dedicated to certain cuts, fewer than all the cuts. The video monitor module 204 then need only learn to distinguish between the cuts for that cutting table.
The process of training the video monitor module 204 to identify cuts or characteristics of carcasses includes determining a training set, which can be stored on the video training database 210, including standard cuts and non-standard cuts. Typically, the video training database 210 would involve a training set of many thousands of images of different cuts or carcasses, which can be input into a machine learning algorithm. The process involves the video monitor module 204 estimating a cut based on an image or video; the video monitor module is then provided with feedback from a person or algorithm as to whether the estimate was correct or not. This may enable the machine learning to apply incremental learning techniques that improve the trained model efficiently and effectively during the training process. Hence, as each estimate is factored in, the video monitor module 204 improves any following estimates. When a new image or video of a cut is acquired, the video monitor module can attempt to identify the cut or carcass characteristic. Typically, the video monitor module 204 would be trained to identify a match over a certain threshold value. In practice, the threshold value may be 80%, such that the video monitor module 204 is able to identify the correct cut 80% of the time, or to identify the relevant characteristic within a percentage of accuracy that provides a useful measure for the specific application.
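The threshold check described above might, as a rough sketch, be expressed as follows (Python; model.predict is a placeholder for whatever trained classifier is in use and is an assumption of this sketch):

    def meets_threshold(model, labelled_images, threshold: float = 0.80) -> bool:
        """True once the classifier identifies the correct cut (or characteristic)
        at least 'threshold' of the time on feedback-labelled images."""
        correct = sum(1 for image, label in labelled_images
                      if model.predict(image) == label)
        return correct / len(labelled_images) >= threshold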
In some embodiments, a deep learning convolutional neural network (CNN) can be used to classify objects of interest (such as cuts or scraps) and thereby determine their position. The trained CNN can then be utilised in the tracking system 220 to efficiently track the position of an object of interest in a series of images, in close to or in real time. Each image, whether part of the initial training set or not, can be stored in the video training database 210. This means that any estimate of a cut by the video monitor module 204 can be used to improve any subsequent estimation.
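Once per-frame detections are available from a trained CNN, position tracking across a series of images can be as simple as nearest-neighbour association between frames; the following is a simplified, illustrative sketch only (the 50-pixel jump limit is an assumption):

    import math

    def update_tracks(prev: dict[int, tuple[float, float]],
                      detections: list[tuple[float, float]],
                      max_jump: float = 50.0) -> dict[int, tuple[float, float]]:
        """Greedily match each existing track to the nearest new detection;
        unmatched detections start new tracks."""
        tracks: dict[int, tuple[float, float]] = {}
        unclaimed = list(detections)
        for track_id, (px, py) in prev.items():
            if not unclaimed:
                break
            best = min(unclaimed, key=lambda d: math.hypot(d[0] - px, d[1] - py))
            if math.hypot(best[0] - px, best[1] - py) <= max_jump:
                tracks[track_id] = best
                unclaimed.remove(best)
        next_id = max(prev, default=-1) + 1
        for d in unclaimed:
            tracks[next_id] = d
            next_id += 1
        return tracks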
In some examples, the video monitor module 204 will make a statistical determination of which cut was most likely to have occurred.
Video training
Video as used in this disclosure comprises individual images, so training can be done on images or on sequences of images (compiled as videos).
The training of a machine learning algorithm typically requires a large dataset of training images for it to accurately learn the required features. A separate validation dataset can be used after training to assess the performance (e.g. the accuracy and/or speed) of the trained algorithm on new, unseen images; additionally or alternatively, feedback can be provided by an appropriately trained person as to whether the identification was correct. Video can be pre-processed to improve results. For example, a temporal filter may be used to reduce the level of noise. The temporal filtering can be performed by, for example, averaging three consecutive frames.
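For example, a three-frame averaging filter over a stack of frames might be sketched as follows (NumPy; the frame-stack layout is an assumption of this sketch):

    import numpy as np

    def temporal_filter(frames: np.ndarray) -> np.ndarray:
        """Average each interior frame with its two neighbours to suppress noise.
        frames has shape (n_frames, height, width, channels)."""
        out = frames.astype(np.float32)
        out[1:-1] = (frames[:-2].astype(np.float32)
                     + frames[1:-1]
                     + frames[2:]) / 3.0
        return out.astype(frames.dtype)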
The automated tracking according to the disclosure requires a fast and accurate machine learning algorithm to detect cuts or characteristics of carcasses in real time or close to real time. One architecture suitable for practising the present disclosure is a convolutional neural network (CNN). Various CNN models and training protocols can be used to determine the best performing model in terms of accuracy and speed of cut identification. In a simple form, a CNN may be a binary classifier that classifies expected objects (e.g. a standard cut) in images or videos against irrelevant objects (e.g. scraps or objects that are otherwise irrelevant for tracking purposes). A CNN may instead produce a multi-class classification to determine different classes of objects, for example different cut types.
One example CNN model that can be used consists of three convolutional layers, two pooling layers and one fully connected layer.
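A sketch of such a model in PyTorch follows; the channel counts, kernel sizes, class count and the 64x64 RGB input are assumptions made for illustration, as the disclosure does not prescribe them:

    import torch
    import torch.nn as nn

    class CutClassifier(nn.Module):
        """Three convolutional layers, two pooling layers, one fully
        connected layer, per the example model described above."""
        def __init__(self, n_classes: int = 5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),  # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),  # 32x32 -> 16x16
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.classifier = nn.Linear(64 * 16 * 16, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)  # (N, 64, 16, 16) for a 64x64 input
            return self.classifier(torch.flatten(x, 1))

Example tracking system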
The tracking system 220 is implemented using an electronic device. The electronic device is, or will include, a computer processing system. Figure 1b provides a block diagram of one example of a computer processing system 170. System 170 as illustrated in Figure 1b is a general-purpose computer processing system. It will be appreciated that Figure 1b does not illustrate all functional or physical components of a computer processing system. For example, no power supply or power supply interface has been depicted; however, system 170 will either carry a power supply or be configured for connection to a power supply (or both). It will also be appreciated that the particular type of computer processing system will determine the appropriate hardware and architecture, and alternative computer processing systems suitable for implementing aspects of the invention may have additional, alternative, or fewer components than those depicted, combine two or more components, and/or have a different configuration or arrangement of components.
The computer processing system 170 includes at least one processing unit 140. The processing unit 140 may be a single computer-processing device (e.g. a central processing unit, graphics processing unit, or other computational device), or may include a plurality of computer processing devices. In some instances all processing will be performed by the processing unit 140; however, in other instances processing may also, or alternatively, be performed by remote processing devices accessible and useable (either in a shared or dedicated manner) by the system 170.
Through a communications bus 142 the processing unit 140 is in data communication with one or more machine-readable storage (memory) devices that store instructions and/or data for controlling operation of the system 170. In this instance system 170 includes a system memory 144 (e.g. a BIOS), volatile memory 148 (e.g. random access memory such as one or more DRAM modules), and non-volatile memory 150 (e.g. one or more hard disk or solid state drives).
System 170 also includes one or more interfaces, indicated generally by 162, via which system 170 interfaces with various devices and/or networks. Generally speaking, other devices may be physically integrated with system 170, or may be physically separate. Where a device is physically separate from system 170, connection between the device and system 170 may be via wired or wireless hardware and communication protocols, and may be a direct or an indirect (e.g. networked) connection.
Wired connection with other devices/networks may be by any appropriate standard or proprietary hardware and connectivity protocols. For example, system 170 may be configured for wired connection with other devices/communications networks by one or more of: USB; FireWire; eSATA; Thunderbolt; Ethernet; PS/2; Parallel; Serial; HDMI; DVI; VGA; SCSI; AudioPort. Other wired connections are, of course, possible.
Wireless connection with other devices/networks may similarly be by any appropriate standard or proprietary hardware and communications protocols. For example, system 170 may be configured for wireless connection with other devices/communications networks using one or more of: infrared; Bluetooth; Wi-Fi; near field communications (NFC); Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), long term evolution (LTE), wideband code division multiple access (W-CDMA), code division multiple access (CDMA). Other wireless connections are, of course, possible.
Generally speaking, the devices to which system 170 connects - whether by wired or wireless means - allow data to be input into/received by system 170 for processing by the processing unit 140, and data to be output by system 170. Example devices are described below, however it will be appreciated that not all computer-processing systems will include all mentioned devices, and that additional and alternative devices to those mentioned may well be used.
For example, system 170 may include or connect to one or more input devices by which information/data is input into (received by) system 170. Such input devices may include physical buttons, alphanumeric input devices (e.g. keyboards), pointing devices (e.g. mice, track pads and the like), touchscreens, touchscreen displays, microphones, accelerometers, proximity sensors, GPS devices and the like. System 170 may also include or connect to one or more output devices controlled by system 170 to output information. Such output devices may include devices such as indicators (e.g. LED, LCD or other lights), displays (e.g. CRT displays, LCD displays, LED displays, plasma displays, touch screen displays), audio output devices such as speakers, vibration modules, and other output devices. System 170 may also include or connect to devices which may act as both input and output devices, for example memory devices (hard drives, solid state drives, disk drives, compact flash cards, SD cards and the like) which system 170 can read data from and/or write data to, and touch-screen displays which can both display (output) data and receive touch signals (input).
System 170 may also connect to communications networks (e.g. the Internet, a local area network, a wide area network, a personal hotspot etc.) to communicate data to and receive data from networked devices, which may themselves be other computer processing systems.
It will be appreciated that system 170 may be any suitable computer processing system such as, by way of non-limiting example, a desktop computer, a laptop computer, a netbook computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a cellular telephone, or a web appliance. Although the system 170 may act as a server in a client/server type architecture, the system 170 may also provide user input/output directly via the user input/output interface 154, or alternatively receive equivalent input/output of a user via a communications interface 164 for communication with a network 214.
The number and specific types of devices which system 170 includes or connects to will depend on the particular type of system 170. For example, if system 170 is a desktop computer it will typically connect to physically separate devices such as (at least) a keyboard, a pointing device (e.g. mouse), and a display device (e.g. an LCD display). Alternatively, if system 170 is a laptop computer it will typically include (in a physically integrated manner) a keyboard, pointing device, a display device, and an audio output device. Further alternatively, if system 170 is a tablet device or smartphone, it will typically include (in a physically integrated manner) a touchscreen display (providing both input means and display output means), an audio output device, and one or more physical buttons. A person skilled in the art would understand that there may be other types of input devices which would operate similarly for the purposes of the present disclosure, such as a microphone for voice-activated user commands, or other devices not described here.

System 170 stores or has access to instructions and data which, when processed by the processing unit 140, configure system 170 to receive, process, and output data. Such instructions and data will typically include an operating system such as Microsoft Windows®, Apple OSX, Apple iOS, Android, Unix, or Linux. System 170 also stores or has access to instructions and data (i.e. software) which, when processed by the processing unit 140, configure system 170 to perform various computer-implemented processes/methods in accordance with embodiments of the invention (as described herein). It will be appreciated that in some cases part or all of a given computer-implemented method will be performed by system 170 itself, while in other cases processing may be performed by other devices in data communication with system 170.
Instructions and data are stored on a non-transient machine-readable medium accessible to system 170. For example, instructions and data may be stored on non-transient memory 150. Instructions may be transmitted to/received by system 170 via a data signal in a transmission channel enabled (for example) by a wired or wireless network connection.
It will be understood that the invention disclosed and defined in this specification extends to all alternative combinations of two or more of the individual features mentioned or evident from the text or drawings. All of these different combinations constitute various alternative aspects of the invention.

Claims

1. A method, including: conveying a carcass to one or more stations at which cuts are removed from the carcass; after conveying the carcass to the one or more stations, capturing images of the carcass and providing the images to a processor; receiving from the processor at least one indicator of the effectiveness of the removal of the cuts from the carcass, wherein the at least one indicator is determined from an analysis of the images of the carcass; and based on the received at least one indicator, providing an output.
2. The method of claim 1 further comprising determining an indicator of an amount of meat left on the carcass after removing multiple cuts from the carcass.
3. The method of claim 2 wherein the indicator is an area of the carcass that has a particular colour.
4. The method of claim 2 wherein the indicator is an area of the carcass that has a particular texture.
5. The method of claim 2 wherein the indicator is based on a volume of the carcass as determined by images of the carcass.
6. The method of any of the preceding claims further comprising measuring a weight of the carcass at one or more points of a processing line of the carcass.
7. The method of claim 6, wherein measuring the weight of the carcass comprises measuring the weight of the carcass at an entry point of a boning room or at an exit point of the boning room.
8. The method of claim 6 or 7, wherein the weight of the carcass is an input to determining one or more of: variations in bone size; a bone density; a meat density; a fat density; or a density difference between bone, meat and/or fat.
9. The method of any of the preceding claims further comprising analysing the carcass prior to entering the boning room.
10. The method of claim 9 wherein analysing comprises determining a measure of fat and/or yield for the carcass.
11. The method of claim 10 further comprising determining an expected achievable yield from internal imaging and determining a measured yield from the carcass after processing.
12. The method of claim 11 wherein internal imaging comprises determining an x-ray image.
13. A system, including: a conveyor configured to convey a carcass to one or more stations at which cuts are removed from the carcass; a video monitor configured to capture images of the carcass after the carcass is conveyed to the one or more stations; and a processor configured to: receive the images of the carcass from the video monitor; analyse the images of the carcass to determine at least one indicator of the effectiveness of the removal of cuts from the carcass; and based on the at least one indicator, provide an output.
14. The system of claim 13 wherein the processor is configured to determine an indicator of an amount of meat left on the carcass after multiple cuts are removed from the carcass.
15. The system of claim 14 wherein the indicator is an area of the carcass that has a particular colour.
16. The system of claim 14 wherein the indicator is an area of the carcass that has a particular texture.
17. The system of claim 14 wherein the indicator is based on a volume of the carcass as determined from the images of the carcass.
18. The system of any of claims 13 to 17 further comprising one or more scales for measuring a weight of the carcass at one or more points of a processing line of the carcass.
19. The system of claim 18, wherein the conveyor operates between an entry point of a boning room and an exit point of the boning room and wherein the one or more scales are configured to measure the weight of the carcass proximate an entry point to the boning room and/or proximate an exit point of the boning room.
20. The system of claim 18 or 19, wherein the processor is configured to receive the weight of the carcass and determine one or more of: variations in bone size; a bone density; a meat density; a fat density; or a density difference between bone, meat and/or fat.
PCT/AU2020/050793 2019-08-02 2020-08-03 Meat processing tracking, tracing and yield measurement WO2021022323A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2022100022A AU2022100022A4 (en) 2019-08-02 2022-02-01 Meat processing tracking, tracing and yield measurement

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
AU2019902774A AU2019902774A0 (en) 2019-08-02 Supply chain tracking and tracing
AU2019902774 2019-08-02
AU2020900469 2020-02-19
AU2020900469A AU2020900469A0 (en) 2020-02-19 Supply chain tracking and tracing

Related Child Applications (1)

Application Number Title Priority Date Filing Date
AU2022100022A Division AU2022100022A4 (en) 2019-08-02 2022-02-01 Meat processing tracking, tracing and yield measurement

Publications (1)

Publication Number Publication Date
WO2021022323A1

Family

ID=74502431

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2020/050793 WO2021022323A1 (en) 2019-08-02 2020-08-03 Meat processing tracking, tracing and yield measurement

Country Status (2)

Country Link
AU (1) AU2022100022A4 (en)
WO (1) WO2021022323A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110273558A1 (en) * 2009-01-10 2011-11-10 Goldfinch Solutions, Llc System and Method for Analyzing Properties of Meat Using Multispectral Imaging
US20140079291A1 (en) * 2006-04-03 2014-03-20 Jbs Usa, Llc System and method for analyzing and processing food product
CA3047323A1 (en) * 2016-12-28 2018-07-05 Cryovac, Llc Automated process for determining amount of meat remaining on animal carcass

Also Published As

Publication number Publication date
AU2022100022A4 (en) 2022-03-10

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20849406; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20849406; Country of ref document: EP; Kind code of ref document: A1)