EP3887105A1 - System for cutting and trimming meat cuts

System for cutting and trimming meat cuts

Info

Publication number
EP3887105A1
EP3887105A1 (application EP19812942.1A)
Authority
EP
European Patent Office
Prior art keywords
workpiece
starting material
operator
processing means
cutting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19812942.1A
Other languages
German (de)
French (fr)
Inventor
Paul Andreas Holger DIRAC
Lars Bager Christensen
Morten Askjær HASS
Niels Toftelund MADSEN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Teknologisk Institut
Original Assignee
Teknologisk Institut
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Teknologisk Institut filed Critical Teknologisk Institut
Publication of EP3887105A1
Legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B26 HAND CUTTING TOOLS; CUTTING; SEVERING
    • B26D CUTTING; DETAILS COMMON TO MACHINES FOR PERFORATING, PUNCHING, CUTTING-OUT, STAMPING-OUT OR SEVERING
    • B26D5/00 Arrangements for operating and controlling machines or devices for cutting, cutting-out, stamping-out, punching, perforating, or severing by means other than cutting
    • B26D5/007 Control means comprising cameras, vision or image processing systems
    • A HUMAN NECESSITIES
    • A22 BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
    • A22C PROCESSING MEAT, POULTRY, OR FISH
    • A22C17/00 Other devices for processing meat or bones
    • A22C17/0073 Other devices for processing meat or bones using visual recognition, X-rays, ultrasounds, or other contactless means to determine quality or size of portioned meat
    • A22C17/0086 Calculating cutting patterns based on visual recognition
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B26 HAND CUTTING TOOLS; CUTTING; SEVERING
    • B26D CUTTING; DETAILS COMMON TO MACHINES FOR PERFORATING, PUNCHING, CUTTING-OUT, STAMPING-OUT OR SEVERING
    • B26D2210/00 Machines or methods used for cutting special materials
    • B26D2210/02 Machines or methods used for cutting special materials for cutting food products, e.g. food slicers

Definitions

  • the present invention provides a system and a related method for facilitating instructions to the executing operator about cutting and trimming meat cuts in slaughterhouses.
  • pork bellies are often cut and trimmed to have length and width dimensions specified by the customer. Often the size is important for the customer due to certain size requirements of further processing equipment.
  • the trimming is often done manually.
  • a frame or a template with the right dimensions, is placed upon the belly to guide the operator to trim away the parts outside the frame.
  • this frame has several disadvantages.
  • the operator must carry the frame and use time to place the frame onto the belly. If the belly dimensions are not suited for the intended product, i.e. do not comply with the frame dimensions, the operator cannot utilize the frame correctly, and will typically not process the belly.
  • the frame is usually drawn up to one side of the belly and frequently squeezes the belly together, which results in the belly being insufficiently trimmed, i.e. it will be larger than specified.
  • the operator must himself decide how to place the frame on the belly and decide in what order to trim off excess product from the different sides of the belly, which can result in a suboptimal value of the trimmings.
  • Augmented Reality is an emerging technology for supporting industrial service and maintenance operations.
  • the potential includes hands-free support for complicated procedures, and updated assistance for performing a specific maintenance task, e.g. replacement of a defective component.
  • WO2008102148 describes a method for handling food pieces by subjecting a food piece to an X-ray analysis for detection of foreign objects in the food piece, followed by transportation of the food piece from the analysis position to an operator, and providing the operator with an indication of the location of the foreign object on the food piece by a "static" marking on the food piece on the fly.
  • WO2014079448 describes a method for removing undesired parts or objects in pieces of meat, in which method the operator is assisted by an image on a screen that the operator carries in front of his eyes, and on which screen a (static) representation of the objects in question is projected.
  • the present invention provides a system for assisting an operator in the cutting and trimming of meat cuts, including removal of foreign objects and/or trimming/cutting according to dimension requirements.
  • the system of the present invention allows for a dynamic presentation of proposed cutting lines, i.e. the projected image aligns to, and fits with, the actual position of the workpiece in question, even if the workpiece has been moved and changed position on the conveyor, e.g. as a result of the operator's work and impact on the workpiece.
  • the system of the present invention may be applied at abattoirs, for processing incoming carcasses or parts of carcasses, intended for being cut into meat pieces of a certain size and quality.
  • the workpiece may in particular be a carcass, or a part hereof, e.g. a pork belly.
  • the system of the invention may also be used for detecting quality defects, such as the occurrence of foreign objects on the surface of the incoming meat item, and advise the operator of the occurrence and location of such quality defects, and may also propose preferred cutting lines, and even a preferred cutting sequence.
  • system of the invention may include the use of AR-technology, e.g. by projecting the proposed cutting lines onto a screen in front of the operator.
  • the invention provides a system for displaying instructions to a meat processing operator, which system is described in more detail below.
  • the invention provides a method for presenting information to a meat processing operator, which method is described in more detail below.
  • the invention provides a system for assisting an operator (6) in the cutting and trimming of meat cuts, e.g. according to dimension requirements, or for detecting and correcting quality defects, such as the occurrence of foreign objects.
  • the system may be applied at abattoirs, for processing incoming carcasses or parts of carcasses, intended for being cut into meat pieces of a certain size.
  • the workpiece/starting material/meat item/target product (7) may in particular be a carcass, or a part hereof, e.g. a pork belly.
  • an in-let conveyor belt (2) comprising a position sensor, in communication with the processing means (4), and capable of conveying the workpiece/starting material (7);
  • a machine vision device (3) comprising at least two camera stations (3A, 3B), each station in communication with the processing means (4), configured for identification of the incoming workpiece/starting material (7), for making size measurements, and capable of accessing the dimensions of the workpiece/starting material (7), and transmitting the digitalised data to the processing means (4);
  • a processing means (4) in communication with the in-let conveyor (2), the machine vision device (3), and the projecting means (5), and capable of:
  • a projection means (5) in communication with the in-let conveyor (2) and the processing means (4), and capable of displaying an over-layer of information, on the workpiece/target product (7) to be processed, to the executing cutting operator (6), while the workpiece/target product (7) is being conveyed;
  • the first camera station (3A) feeds data to the processing means (4) for identifying, and for locating, the incoming workpiece/starting material (7), and for calculating cutting lines, and, if present, optionally also identifies foreign objects and notes their position on the surface of the incoming workpiece/starting material (7);
  • the second camera station (3B) in a dynamic process, feeds data to the processing means (4) for identifying and locating the incoming workpiece/starting material (7), and communicates the current position of the workpiece/starting material (7) and the proposed cutting lines and location of foreign objects, if any, to the projecting means (5).
  • the system of the invention is capable of generating a vision based loop, wherein the second camera station (3B), in a dynamic process involving data from the first camera station (3A) that are analysed and computed by the processing means (4), continuously keeps the projected cutting lines aligned with the current position of the workpiece/starting material (7).
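A minimal sketch of this vision-based loop, assuming a pure-translation tracking model and hypothetical function and variable names (the patent does not specify an implementation):

```python
def realign_cutting_lines(cutting_lines, reference_position, current_position):
    """Shift the cutting lines computed from first-camera-station (3A) data
    so that they match the workpiece position reported by the second camera
    station (3B). A pure translation of the workpiece is assumed; a real
    system would also have to handle rotation and deformation."""
    dx = current_position[0] - reference_position[0]
    dy = current_position[1] - reference_position[1]
    return [[(x + dx, y + dy) for (x, y) in line] for line in cutting_lines]

# Each new frame from camera 3B yields a new position estimate, and the
# over-lay sent to the projection means (5) is recomputed from it.
overlay = realign_cutting_lines([[(0, 0), (100, 0)]], (0, 0), (25, 10))
```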
  • the system further comprises one or more buffer-conveyors (2A) and/or an outlet-conveyor (2B).
  • system of the invention further comprises a means for storing information (a server/database) (9).
  • the projection means (5) for use according to the invention is a projector capable of over-laying an image onto the workpiece/target product (7).
  • the projection means (5) for use according to the invention is a printer or a laser marker.
  • the projection means (5) for use according to the invention is a pair of smart-glasses (5A), presenting information to the executing cutting operator (6) in the form of an augmented reality (AR) vision of the scene.
  • AR: augmented reality
  • an in-let conveyor belt (2) in communication with the processing means (4), and capable of conveying the starting material (7);
  • a machine vision device (3) in communication with the processing means (4), configured for making size measurements, and capable of accessing the dimensions of the starting material (7), and transmitting the digitalised data to the processing means (4);
  • a processing means (4) in communication with the conveyor (2), the machine vision device (3), and the projecting means (5), and capable of:
  • a projection means (5) in communication with the in-let conveyor (2), and the processing means (4), and capable of displaying an over-layer of information, on the meat item (7) to be processed, to the executing cutting operator (6), while the meat item (7) is being conveyed.
  • the system of the present invention may be characterised as described above, wherein:
  • the machine vision device (3) comprises two camera stations (3A, 3B), each station in communication with the processing means (4), configured for identification of the incoming starting material (7); and wherein
  • the first camera station (3A) provides data to the processing means (4) for identification of the incoming starting material (7), and for calculating cutting lines, and, if present, optionally also identifies foreign objects and notes their position on the surface of the incoming starting material (7);
  • the second camera station (3B) provides data to the processing means (4) for identification and for location of the incoming starting material (7) and communicates the position of cutting lines and foreign objects, if any, to the projecting means (5).
  • The in-let conveyor (2)
  • For the processing means (4) to be able to calculate and track the position of the incoming workpiece/starting material (7), while in motion, and keep the projection means (5) synchronized with the conveyor belt (2), the system shall know the speed of the conveyor belt (2). This may be accomplished in various ways.
  • the in-let conveyor belt (2) for use according to the invention may be equipped with a position sensor, in communication with, and receiving operational guidance from, the processing means (4). This ensures that the system can perform certain functions, like controlling (determining and adjusting) the speed, and synchronizing with the projection means (5).
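One common way to provide such synchronisation is a belt encoder; the sketch below is an illustrative assumption, not the patent's stated design, and all names are hypothetical:

```python
class ConveyorTracker:
    """Track workpiece travel from a belt position sensor that reports
    encoder ticks; mm_per_tick comes from a one-off calibration."""

    def __init__(self, mm_per_tick):
        self.mm_per_tick = mm_per_tick
        self.ticks = 0  # cumulative ticks since start-up

    def advance(self, ticks):
        # Called whenever the position sensor reports belt movement.
        self.ticks += ticks

    def travelled_since(self, entry_ticks):
        # Distance (mm) the workpiece has moved since it passed the
        # first camera station, used to aim the projection means.
        return (self.ticks - entry_ticks) * self.mm_per_tick
```

Because position is derived from the sensor rather than from elapsed time, the over-lay stays aligned even if the belt speed is adjusted mid-run.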
  • the system of the invention comprises more conveyors, e.g. one or more buffer-conveyors (2A), and/or an outlet-conveyor (2B).
  • the workpiece may be removed from the system and stored on a (buffer) conveyor belt (2A) before it is re-introduced to the system for continued processing, and identification/monitoring by a second (or further) camera station (3B).
  • This may e.g. be accomplished by use of an outlet conveyor (2B).
  • the machine vision device is configured to:
  • the machine vision device (3) for use according to the invention may be any machine vision device, configured for making size measurements, and capable of accessing the dimensions of the workpiece/starting material (7) while in motion.
  • the vision device shall be in communication with, and be able to transmit data to, the processing means (4).
  • the vision device (3) comprises the use of two cameras (3), one camera station (3A) for identifying the arriving workpiece/starting material (7) and for communicating image data to the processing means (4), and another camera station (3B) for tracing the conveyed workpiece/starting material (7) and communicating image data to the processing means (4).
  • The processor (4): Based on the image data established by the first camera station (3A), the processor (4) extracts characteristics associated with the meat product in question (7), and calculates optimal cutting lines, and, optionally, also identifies foreign objects, if present, and notes their position on the meat surface. Computed cutting lines, along with the specific location of foreign objects, if any, are reported to the projection means (5), which projects the findings onto the workpiece/target product (7), while on the fly, in the form of over-lay lines.
  • Over-lays may e.g. be projected in different colours, for assisting the operator (6) in producing an optimized yield of the raw material by cutting along the projected lines in a specific order (e.g. green before red), and for performing a corrective action by removing any foreign objects detected as surface contamination.
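The colour coding of the cutting order could be encoded as simply as the following sketch; the palette and the data shapes are assumptions for illustration only:

```python
def colour_coded_overlays(cutting_lines_in_order):
    """Pair each cutting line with a colour that encodes its place in the
    cutting sequence, e.g. a green line is cut before a red one."""
    palette = ["green", "red", "blue", "yellow"]
    return [(palette[i % len(palette)], line)
            for i, line in enumerate(cutting_lines_in_order)]
```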
  • The second camera station (3B): As the workpiece/target product (7) is being moved towards the processing area (8), the second camera station (3B), which is positioned above the processing area (8), identifies the incoming workpiece/target product (7), based on the information obtained by the first camera station and recorded by the processing means (4), and communicates the location and position of the moving workpiece/target product (7) to the projection means (5), in the form of cutting lines and the position of any foreign object to be removed.
  • the second camera station (3B) by employing feed-back/loop processes in cooperation with the processing means (4) and the projecting means (5), is able to provide dynamic cutting lines to the operator (6).
  • the machine vision device (3) for use according to the invention is a multispectral vision device.
  • a multispectral image is one that captures image data within specific wavelength ranges across the electromagnetic spectrum. The wavelengths may be separated by filters or by the use of instruments that are sensitive to particular wavelengths, including light from frequencies beyond the visible light range, i.e. infrared and ultra-violet. Spectral imaging can allow extraction of additional information the human eye fails to capture with its receptors for red, green and blue. Multispectral imaging measures light in a small number (typically 3 to 15) of spectral bands.
  • Multispectral imaging can also be accomplished by using camera(s) sensitive to all the relevant spectral bands and sequentially illuminating the object with each spectral band, while capturing a frame for each band.
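Sequential-illumination capture as described above can be sketched as below; the driver callables are placeholders for real light-source and camera APIs:

```python
def capture_multispectral(bands, illuminate, capture_frame):
    """Build an image cube: switch the light source to each spectral band in
    turn and grab one frame per band. Returns {band: frame}."""
    cube = {}
    for band in bands:
        illuminate(band)            # e.g. enable the 450 nm LED bank
        cube[band] = capture_frame()
    return cube
```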
  • the machine vision device (3) for use according to the invention may comprise a light source capable of emitting electromagnetic waves in the ranges 350 nm to 700 nm (i.e. from UV to visible regions) and 700 nm to 950 nm (i.e. NIR), and a sensor capable of receiving electromagnetic waves within the same ranges (i.e. 350 nm to 950 nm), and may be a device as described in e.g. WO 2017/121713.
  • Machine vision systems essentially come in three main categories:
  • 1D vision analyses a digital signal one line at a time instead of looking at a whole picture at once, and is accomplished by use of a line-scan camera;
  • 2D vision looks at the whole picture, and may be accomplished by use of an industrial camera/area-scan camera, e.g. a multispectral RGB (visible/colour images) camera; and
  • 3D vision systems typically comprise multiple cameras or one or more laser displacement sensors.
  • Commercially available 3D scanners or range cameras include the time-of-flight camera, the structured light camera, the stereo camera or 3D camera, e.g. the Sick 3D ruler.
  • the vision device of the first camera station (3A) is a line-scan camera.
  • the vision device of the second camera station (3B) is an industrial 2D multispectral RGB camera, or area scan camera.
  • the machine vision device (3) for use according to the invention is calibrated for correct size measurement.
  • the processing means (4) for use according to the invention may be any commercially available processor/PC, in communication with the conveyor (2), the machine vision device (3), and the projecting means (5), and shall be capable of:
  • the processing means (4) used according to the invention may also be used for calculating the optimal cutting lines, and for locating existing foreign objects, if any. Identification and feature extraction may be accomplished using pattern recognition and digital image processing, and particularly feature extraction.
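As an illustration of the kind of feature extraction involved, the toy function below segments a thresholded intensity image and measures its bounding box; a real system would use calibrated and far more robust methods, and all names here are assumptions:

```python
def bounding_dimensions(image, threshold):
    """Return (height, width) in pixels of the region whose intensity
    exceeds `threshold`; a calibrated system would convert pixels to mm.
    `image` is a nested list of intensity values (rows of pixels)."""
    rows, cols = [], []
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value > threshold:
                rows.append(r)
                cols.append(c)
    if not rows:
        return (0, 0)  # nothing segmented
    return (max(rows) - min(rows) + 1, max(cols) - min(cols) + 1)
```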
  • the method of the invention further comprises a step (cl) wherein the digitalised data analysed in step (c) by the processing means (4) is transmitted to a means for storing information (server/database) (9).
  • the processing means (4) used according to the invention may be in communication with a means for storing information, i.e. a central server or database (9).
  • This central database (9) may contain pre-loaded information related to the products in question, e.g. origin of the product, product ID, product specifications, etc.
  • the processing means (4) can establish a product ID, e.g. by reference to a product catalogue or product specifications stored on the server (9), and the product ID may be allocated to each workpiece/target product (7), and transmitted to the server/database (9) for further action/use.
  • The projection means (5)
  • the projection means (5) for use according to the invention shall assist the executing operator (6) in cutting and/or trimming the meat item (7) in question.
  • the projection means (5) shall be in communication with the in-let conveyor (2), and the processing means (4), and shall receive guidance from the processing means (4).
  • a projection means (5) for use according to the invention may be a monitor, a projector, a headlight, a laser, or a printer, capable of creating and/or marking a guiding cutting/trimming pattern on the workpiece/target product (7) to be processed, but may also include augmented reality (AR) equipment (5A), e.g. smart-glasses or the like, which over-lay an image of the cutting curves onto the meat item (7).
  • the system may provide the executing cutting operator (6) information about the cutting and trimming of meat cuts according to dimension requirements, or about quality defects such as the occurrence of foreign objects that shall be removed.
  • the projector may also display information about the order in which the operator (6) should carry out the suggested cuts and trims.
  • the projection means (5) for use according to the invention is a projector capable of over-laying an image onto the workpiece/target product (7).
  • the projection means (5) for use according to the invention is a printer or a laser marker.
  • the projection means (5) for use according to the invention is a pair of smart-glasses (5A), presenting information to the executing cutting operator (6) in the form of an augmented reality (AR) vision of the scene.
  • instructions about optimal cutting curves are displayed as an over-layer onto the workpiece/target product (7).
  • instructions about the optimal sequence for cutting the curves are displayed as an over-layer onto the workpiece/target product (7).
  • instructions about the optimal sequence by which cuts are performed are displayed by means of different colours or patterns for different curves.
  • instructions about the optimal cutting sequence are presented by displaying the cutting curves in a sequence.
  • instructions about the locations of foreign objects are displayed as an over-layer onto the workpiece/target product (7).
  • the instructions displayed may include the location of excess fat, cartilage and/or bone to be removed from the piece.
  • the invention provides a method for presenting information to a meat processing operator (6), by use of the system of the invention.
  • the method of the invention may be characterised by comprising the subsequent steps of:
  • step (c) analysing the digitalized data obtained by the first camera station (3A) in step (b) for determining the identity (ID) and characteristics of the incoming workpiece/starting material (7);
  • step (d) analysing the digitalized data obtained by the second camera station (3B) in step (b) for determining the identity (ID) and characteristics of the incoming workpiece/starting material (7);
  • step (e) using the projection means (5) of step (d) for presenting the cutting lines and the location of foreign objects, if any, calculated in step (c) to the meat processing operator (6) in the form of an over-layer of the information on the workpiece/starting material (7) to be processed.
  • the method of the invention further comprises a step (cl) wherein the digitalised data analysed in step (c) by the processing means (4) is transmitted to a means for storing information (server/database) (9).
  • system further comprises one or more buffer-conveyors (2A) and/or an outlet-conveyor (2B).
  • the digitalized data obtained in step (c) by the first camera station (3A) is analysed by feature extraction techniques.
  • the projection means (5) for use according to the invention is a projector capable of over-laying an image onto the workpiece/target product (7).
  • the projection means (5) for use according to the invention is a printer or a laser marker.
  • the projection means (5) for use according to the invention is a pair of smart-glasses (5A), presenting information to the executing cutting operator (6) in the form of an augmented reality (AR) vision of the scene.
  • the invention provides a method for presenting information to a meat processing operator (6), which method comprises the subsequent steps of:
  • step (b) obtaining one or more images of the incoming starting material (7) using a machine vision device (3), and transmitting the digitalized data to a processing means (4);
  • step (c) analysing the digitalized data obtained in step (b) using a processing means (4), and transmitting the digitalized information/instructions to a projection means (5);
  • step (d) using a projection means (5), presenting the information/instructions obtained in step (c) to the meat processing operator (6) in the form of an over-layer of the information on the meat item (7) to be processed.
  • the method of the invention for presenting information to a meat processing operator (6) comprises the subsequent steps of:
  • step (c) analysing the digitalized data obtained by the first camera station (3A) in step (b) for defining identity and for extracting feature characteristics, and for calculating cutting lines and, optionally, the location of existing foreign objects, if any, in communication with the processing means (4);
  • step (d) analysing the digitalized data obtained by the second camera station (3B) in step (b) for identification of the incoming meat item, in communication with the processing means (4), and transmitting the digitalized information/instructions to a projection means (5);
  • step (e) using the projection means (5) of step (d) for presenting the information/instructions obtained in step (c) to the meat processing operator (6) in the form of an over-layer of the information on the meat item (7) to be processed.
  • step (c) may be performed by the processing means (4) using pattern recognition, which represents a branch of machine learning that focuses on the recognition of patterns and regularities in data, in this case data obtained from the vision device (3).
  • the projection means (5) used according to the method may e.g. be a projector capable of over-laying an image onto the workpiece/target product (7), or a pair of smart-glasses (5A), presenting information to the executing cutting operator (6) in the form of an augmented reality (AR) vision of the scene, or it may be accomplished by marking the cutting curves onto the meat item (7) by means of a printer or a laser marker or similar.
  • the method of the invention may provide the executing cutting operator (6) information about the cutting and trimming of workpiece/starting material/target product (7) according to dimension requirements, or about quality defects such as the occurrence of foreign objects that shall be removed.
  • the projector may also display information about the order in which the operator (6) should carry out the suggested cuts and trims.
  • a vision camera (3), calibrated for size measurements, is used to assess the dimensions of the workpiece/starting material/target product (7).
  • the processing means (4) used according to the method makes use of algorithms to analyse data from the vision camera (3) in order to:
  • the method of the invention is used for providing the executing cutting operator (6) information about possible quality defects, e.g. the occurrence of foreign objects on the meat item (7).
  • FIG. 1 shows an embodiment of the system according to the invention: An in-let conveyor (2); A first camera station (3A); A second camera station (3B); A processing means (4); A projection means (5); The executing operator (6); The workpiece/starting material (7);
  • FIG. 2 shows another embodiment of the system according to the invention: An in-let conveyor (2); A first camera station (3A); A second camera station (3B); A processing means (4); A means for obtaining AR (5A); The executing operator (6); The workpiece/starting material (7);
  • FIG. 3A shows a front/side view of an embodiment of the system according to the invention: An in-let conveyor (2); A first camera station (3A);
  • FIG. 3B shows a front view of an embodiment of the system according to the invention: An in-let conveyor (2); A first camera station (3A);
  • FIG. 3C shows a side view of an embodiment of the system according to the invention: An in-let conveyor (2); A first camera station (3A); A second camera station (3B); A projection means (5);
  • FIG. 4 shows another embodiment of the system according to the invention: An in-let conveyor (2); A (buffer) conveyor (2A); An outlet conveyor (2B); A first camera station (3A); A second camera station (3B); A projection means (5);
  • FIG. 5 shows another embodiment of the system according to the invention: An in-let conveyor (2); A machine vision device (3); A processing means (4); A projection means (5)/A means for obtaining AR (5A); The executing operator (6); The workpiece/starting material (7);
  • AR: augmented reality
  • A means for storing information (server/database) (9).
  • contaminant detection and cutting recipe support have been benchmarked in a simulated trial environment, and the most promising channel has been demonstrated in a meat production operation, with a vision sensor setup as the supporting information provider.
  • Four different AR platform candidates were benchmarked. Benchmarking was made with a simulated piece of meat wherein submerged contaminants were inserted to mimic bone fragments in meat.
  • the tested platforms included a tablet, a set of smart glasses, a video projector and a video monitor.
  • the operator performance was measured using an HTC VIVE based tracking system with a pointer attached to the controller. From the controller, the 3D position of the pointer is recorded together with the time spent on the task, leading to a capacity and quality measure for the 3D operation of pointing out the submerged contamination.
  • the pointer was tracked to give the position accuracy for pointing out the submerged target contamination.
  • the time spent for the entire pointing procedure was recorded to give an estimated efficiency for each operator performing the task.
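The accuracy and efficiency measures described above reduce to a 3D distance and a rate; the sketch below assumes a simple log format (single pointed position, single target, one task per trial), which is not specified in the text:

```python
import math

def pointing_performance(pointed_xyz, target_xyz, time_spent_s):
    """Quality = Euclidean 3D distance between the pointed position and the
    true submerged-contaminant position; capacity = tasks per second."""
    error = math.dist(pointed_xyz, target_xyz)
    return {"error": error, "rate_per_s": 1.0 / time_spent_s}
```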
  • a setup was built with a projector as the preferred platform to illustrate the potential in a real operation from a meat production.
  • the setup included a vision sensor to determine the size of the product (length and width) and the position of any surface contamination (Dyna-CQTM, available from DMRI, Denmark).
  • the results from the vision sensor are communicated to the operator using a tracking software for recognition and tracking of each product.
  • the relevant cutting instruction and the calculated position of the detected surface contamination, e.g. polymer fragments from wrapping films or carrier trays, are presented to the operator.
  • Input to the tracking/identification software is a standard RGB camera placed in a fixed position relative to the projecting device.
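Because the RGB camera is fixed relative to the projecting device, points detected in the camera image can be mapped into projector coordinates with a single calibrated homography. The following is a generic sketch of applying such a mapping, not something taken from the patent:

```python
def apply_homography(H, point):
    """Map an (x, y) camera-frame point to projector coordinates using a
    3x3 homography H given as row-major nested lists (calibrated once,
    since the camera/projector geometry is fixed)."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```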
  • the potential of using AR technology was demonstrated using a pilot scale production setup.
  • the vision Dyna-CQTM sensor gives the absolute size (length and width) of the incoming meat products and detects any surface contamination.
  • the product in the pilot experiment should meet specific size requirements. By comparing these to the actual measured size, a cutting recipe in the form of a set of cutting lines can be calculated for each product.
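Such a cutting recipe reduces to comparing measured and target dimensions; the sketch below assumes cuts referenced from one corner of the product and is only an illustration of the idea:

```python
def cutting_recipe(measured_mm, target_mm):
    """Given measured (length, width) and target (length, width), return the
    offsets from the reference corner at which to cut; None means that axis
    already meets the specification. Which side to trim, and in which
    order, would be decided separately (e.g. by trimming value)."""
    length_cut = target_mm[0] if measured_mm[0] > target_mm[0] else None
    width_cut = target_mm[1] if measured_mm[1] > target_mm[1] else None
    return {"cut_length_at": length_cut, "cut_width_at": width_cut}
```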
  • the calculated cutting lines are projected onto the meat surface in order to guide the operator in performing the optimized cutting operation.
  • the sequential order of the two cutting procedures leaves a significant volume of meat on either of the two trimmings: the caudal (hip) or ventral (belly) part. Actual price differences between the two parts may influence the yield of a specific cutting sequence.
  • the sequential order of cutting is communicated to the operator using colour codes.
  • the sequential order of cutting leaves the lower left-hand cut-off either on the ventral or the caudal part.
  • an optimization potential is available for the operator by selecting a specific cutting sequential order.
  • Benchmarking the four AR platforms is also relevant for the meat industry, as all components of the best performing system are off-the-shelf, commercially available products. Different tracking software platforms are also available from different vendors, and the potential indicated in this experiment aligns with previously demonstrated yield improvements of fresh meat production.


Abstract

The present invention provides a system for displaying instructions to the executing operator about cutting and trimming meat cuts in slaughterhouses, which system comprises an in-let conveyor belt (2), a machine vision device (3), a processing means (4) and a projection means (5). A related method obtains and analyses digitalized data of a meat item and presents cutting lines and the location of foreign objects, if any, on the projection means (5) to the operator.

Description

SYSTEM FOR CUTTING AND TRIMMING MEAT CUTS
TECHNICAL FIELD
The present invention provides a system and a related method for providing instructions to the executing operator about cutting and trimming meat cuts in slaughterhouses.
BACKGROUND ART
When cutting carcasses and parts of carcasses, it is often a requirement that the resulting products must have certain predefined dimensions. For example, pork bellies are often cut and trimmed to have length and width dimensions specified by the customer. Often the size is important for the customer due to certain size requirements of further processing equipment.
The trimming is often done manually. To assist the operator in obtaining the correct product size, a frame or template with the right dimensions is placed upon the belly to guide the operator in trimming away the parts outside the frame.
However, the use of this frame has several disadvantages. The operator must carry the frame and spend time placing it onto the belly. If the belly dimensions are not suited for the intended product, i.e. do not comply with the frame dimensions, the operator cannot utilize the frame correctly, and will typically not process the belly.
The frame is usually drawn up to one side of the belly and frequently squeezes the belly together, which results in the belly being insufficiently trimmed, i.e. larger than specified.
Contact between the frame and the underlying conveyor can result in the generation of foreign bodies, i.e. plastic fragments from the conveyor belt, which may end up as foreign objects in the products, thus reducing quality and resulting in consumer complaints.
Finally, the operator must himself decide how to place the frame on the belly and decide in what order to trim off excess product from the different sides of the belly, which can result in a suboptimal value of the trimmings.
Augmented Reality (AR) is an emerging technology for supporting industrial service and maintenance operations. The potential includes hands-free support for complicated procedures, and updated assistance for performing a specific maintenance task, e.g. replacement of a defective component.
WO2008102148 describes a method for handling food pieces by subjecting a food piece to an X-ray analysis for detection of foreign objects in the food piece, followed by transportation of the food piece from the analysis position to an operator, and providing the operator with an indication of the location of the foreign object on the food piece by a "static" marking on the food piece on the fly.
WO2014079448 describes a method for removing undesired parts or objects in pieces of meat, in which method the operator is assisted by an image on a screen that the operator carries in front of his eyes, and on which screen a (static) representation of the objects in question is projected.
However, a system for cutting and trimming meat cuts assisted by a dynamic presentation of suggested cutting lines, as described herein, has not been disclosed.
SUMMARY OF THE INVENTION
The present invention provides a system for assisting an operator in the cutting and trimming of meat cuts, including removal of foreign objects and/or trimming/cutting according to dimension requirements.
Known systems for representing cutting lines on a moving meat target are based on a static representation relative to the position of the meat piece as it was when the image was recorded; as a result, the image can only be projected correctly onto the moving workpiece as long as it remains in the same relative position on the moving conveyor. In contrast, the system of the present invention allows for a dynamic presentation of proposed cutting lines, i.e. the projected image aligns to, and fits with, the actual position of the workpiece in question, even if the workpiece has been moved and has changed position on the conveyor, e.g. as a result of the operator's work and impact on the workpiece.
The system of the present invention may be applied at abattoirs, for processing incoming carcasses or parts of carcasses intended for being cut into meat pieces of a certain size and quality. The workpiece may in particular be a carcass, or a part thereof, e.g. a pork belly.
The system of the invention may also be used for detecting quality defects, such as the occurrence of foreign objects on the surface of the incoming meat item, and advise the operator of the occurrence and location of such quality defects; it may also propose preferred cutting lines, and even a preferred cutting sequence.
In addition, the system of the invention may include the use of AR technology, e.g. by projecting the proposed cutting lines onto a screen in front of the operator.
Thus, in its first aspect, the invention provides a system for displaying instructions to a meat processing operator, which system is described in more detail below. In another aspect, the invention provides a method for presenting information to a meat processing operator, which method is described in more detail below.
Other objects of the invention will be apparent to the person skilled in the art from reading the following detailed description and accompanying drawings.
Any combination of two or more of the embodiments described herein is considered within the scope of the present invention.
DETAILED DISCLOSURE OF THE INVENTION
The system of the invention
In its first aspect, the invention provides a system for assisting an operator (6) in the cutting and trimming of meat cuts, e.g. according to dimension requirements, or for detecting and correcting quality defects, such as the occurrence of foreign objects.
The system may be applied at abattoirs, for processing incoming carcasses or parts of carcasses intended for being cut into meat pieces of a certain size. The workpiece/starting material/meat item/target product (7) may in particular be a carcass, or a part thereof, e.g. a pork belly.
The system of the present invention may be characterised by comprising:
an in-let conveyor belt (2) comprising a position sensor, in communication with the processing means (4), and capable of conveying the workpiece/starting material (7);
a machine vision device (3), comprising at least two camera stations (3A, 3B), each station in communication with the processing means (4), configured for identification of the incoming workpiece/starting material (7) and for making size measurements, and capable of assessing the dimensions of the workpiece/starting material (7) and transmitting the digitalised data to the processing means (4);
a processing means (4), in communication with the in-let conveyor (2), the machine vision device (3), and the projecting means (5), and capable of:
calculating the speed of the in-let conveyor belt (2), for keeping the projection means (5) synchronized with the conveyor belt (2);
assessing the dimensions of the workpiece/starting material (7) and evaluating whether the dimensions of each starting material can comply with the product requirements; calculating the optimal cutting/trimming pattern in view of the specifications for the final product/cut; and
recognizing and tracking the workpiece/starting material/target product (7) while it is being transported from the site of measurement (3) to the location of the executing cutting operator (6); and
a projection means (5), in communication with the in-let conveyor (2) and the processing means (4), and capable of displaying an over-layer of information, on the workpiece/target product (7) to be processed, to the executing cutting operator (6), while the workpiece/target product (7) is being conveyed; wherein,
the first camera station (3A) feeds data to the processing means (4) for identifying and locating the incoming workpiece/starting material (7) and for calculating cutting lines, and optionally also identifies foreign objects, if present, and notes their position on the surface of the incoming workpiece/starting material (7); and
the second camera station (3B), in a dynamic process, feeds data to the processing means (4) for identifying and locating the incoming workpiece/starting material (7), and communicates the current position of the workpiece/starting material (7) and the proposed cutting lines and location of foreign objects, if any, to the projecting means (5).
In this way the system of the invention is capable of generating a vision-based loop: the second camera station (3B), in a dynamic process, draws on data from the first camera station (3A), which are analysed and computed by the processing means (4); the result is transmitted to the projecting means (5), allowing for "real time" feed-back via the projecting means and thus propagating instructions to the operator.
In one embodiment, the system further comprises one or more buffer-conveyors (2A) and/or an outlet-conveyor (2B).
In another embodiment, the system of the invention further comprises a means for storing information (a server/database) (9).
In a third embodiment, the projection means (5) for use according to the invention is a projector capable of over-laying an image onto the workpiece/target product (7).
In a fourth embodiment, the projection means (5) for use according to the invention is a printer or a laser marker.
In a fifth embodiment, the projection means (5) for use according to the invention is a pair of smart-glasses (5A), presenting information to the executing cutting operator (6) in the form of an augmented reality (AR) vision of the scene.
In a further embodiment the system of the present invention may be characterised by comprising:
an in-let conveyor belt (2), in communication with the processing means (4), and capable of conveying the starting material (7);
a machine vision device (3), in communication with the processing means (4), configured for making size measurements, and capable of assessing the dimensions of the starting material (7) and transmitting the digitalised data to the processing means (4);
a processing means (4), in communication with the conveyor (2), the machine vision device (3), and the projecting means (5), and capable of:
calculating the speed of the in-let conveyor belt (2), to keep the projection means (5) synchronized with the conveyor belt (2); assessing the dimensions of the starting material (7) and evaluating whether the dimensions of each starting material can comply with the product requirements;
calculating the optimal cutting/trimming pattern in view of the specifications for the final product/cut; and
recognizing and tracking the starting material (7) while it is being transported from the site of measurement (3) to the location of the executing cutting operator (6); and
a projection means (5), in communication with the in-let conveyor (2) and the processing means (4), and capable of displaying an over-layer of information, on the meat item (7) to be processed, to the executing cutting operator (6), while the meat item (7) is being conveyed.
In one embodiment, the system of the present invention may be characterised as described above, wherein:
the machine vision device (3) comprises two camera stations (3A, 3B), each station in communication with the processing means (4), configured for identification of the incoming starting material (7); and wherein
the first camera station (3A) provides data to the processing means (4) for identification of the incoming starting material (7) and for calculating cutting lines, and optionally also identifies foreign objects, if present, and notes their position on the surface of the incoming starting material (7); and
the second camera station (3B) provides data to the processing means (4) for identification and location of the incoming starting material (7) and communicates the position of cutting lines and foreign objects, if any, to the projecting means (5).
The in-let conveyor
For the processing means (4) to be able to calculate and track the position of the incoming workpiece/starting material (7) while in motion, and to keep the projection means (5) synchronized with the conveyor belt (2), the system shall know the speed of the conveyor belt (2). This may be accomplished in various ways.
The in-let conveyor belt (2) for use according to the invention may be equipped with a position sensor, in communication with, and receiving operational guidance from, the processing means (4). This ensures that the system can perform certain functions, such as controlling (determining and adjusting) the speed, and synchronizing with the projection means (5).
There are four widely used methods of applying encoders to conveyors: motor mount, roller shaft mount, belt/chain driven and surface mount. Any type of encoder may be employed according to the invention.
In another embodiment, the system of the invention comprises additional conveyors, e.g. one or more buffer-conveyors (2A), and/or an outlet-conveyor (2B). After identification of the incoming workpiece/starting material (7) at the first camera station (3A), which feeds the obtained ID to the processing means (4), the workpiece may be removed from the system and stored on a (buffer) conveyor belt (2A) before it is re-introduced to the system for continued processing and identification/monitoring by a second (or further) camera station (3B). This may e.g. be accomplished by use of an outlet conveyor (2B).
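As an illustration of the encoder-based speed tracking described above, the sketch below converts encoder counts into a belt speed and predicts where a workpiece, last seen at a known position, will be some time later. The encoder resolution, roller circumference and constant-speed motion model are hypothetical assumptions, not values from the patent.

```python
# Hypothetical sketch of encoder-based conveyor tracking.
COUNTS_PER_REV = 1024          # assumed encoder resolution
ROLLER_CIRCUMFERENCE_M = 0.25  # assumed driven-roller circumference

def belt_speed(counts, dt):
    """Belt speed in m/s from `counts` encoder pulses over `dt` seconds."""
    revolutions = counts / COUNTS_PER_REV
    return revolutions * ROLLER_CIRCUMFERENCE_M / dt

def predict_position(last_pos_m, counts, dt, elapsed):
    """Predicted downstream position (m) of a workpiece after
    `elapsed` seconds, assuming the belt keeps its measured speed."""
    return last_pos_m + belt_speed(counts, dt) * elapsed
```

With these assumed constants, 1024 counts per second corresponds to 0.25 m/s, which is the figure the projection means would use to stay synchronized with the belt.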
The machine vision device
The machine vision device (3) for use according to the invention may be any machine vision device configured for making size measurements and capable of assessing the dimensions of the workpiece/starting material (7) while in motion.
For performing such operations, the vision device shall be in communication with, and be able to transmit data to, the processing means (4).
In one embodiment, the vision device (3) comprises two camera stations, one camera station (3A) for identifying the arriving workpiece/starting material (7) and communicating image data to the processing means (4), and another camera station (3B) for tracing the conveyed workpiece/starting material (7) and communicating image data to the processing means (4).
Based on the image data established by the first camera station (3A), the processor (4) extracts characteristics associated with the meat product in question (7) and calculates optimal cutting lines; optionally, it also identifies foreign objects, if present, and notes their position on the meat surface. Computed cutting lines, along with the specific location of foreign objects, if any, are reported to the projection means (5), which projects the findings onto the workpiece/target product (7), on the fly, in the form of over-lay lines.
Over-lays may e.g. be projected in different colours, for assisting the operator (6) in producing an optimized yield of the raw material by cutting along the projected lines in a specific order (e.g. green before red), and for performing a corrective action by removing any foreign objects detected as surface contamination.
As the workpiece/target product (7) is being moved towards the processing area (8), the second camera station (3B), that is positioned above the processing area (8), identifies the incoming workpiece/target product (7), based on the information obtained by the first camera station and recorded by the processing means (4), and communicates the location and position of the moving workpiece/target product (7) to the projection means (5), in the form of cutting lines and the position of any foreign object to be removed. In this way the second camera station (3B), by employing feed-back/loop processes in cooperation with the processing means (4) and the projecting means (5), is able to provide dynamic cutting lines to the operator (6).
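The dynamic re-projection step described above — keeping the overlay aligned even as the workpiece moves or is turned by the operator — can be sketched as a rigid 2D transform of the stored cutting lines into the pose most recently observed by the second camera station. The rigid-transform model and the function names are assumptions made for illustration only.

```python
# Hypothetical sketch: re-project cutting lines, defined in the pose
# observed at the first camera station, into the workpiece's current
# pose (translation dx, dy and rotation theta) as reported by the
# second camera station.
import math

def transform_lines(lines, dx, dy, theta):
    """Apply a rigid 2D transform to a list of cutting lines given as
    ((x1, y1), (x2, y2)) tuples; returns the transformed lines."""
    c, s = math.cos(theta), math.sin(theta)

    def move(point):
        x, y = point
        return (c * x - s * y + dx, s * x + c * y + dy)

    return [(move(p1), move(p2)) for p1, p2 in lines]
```

Each new frame from the second camera station would yield an updated (dx, dy, theta), so the projected lines follow the workpiece in "real time".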
Preferably, the machine vision device (3) for use according to the invention is a multispectral vision device. A multispectral image is one that captures image data within specific wavelength ranges across the electromagnetic spectrum. The wavelengths may be separated by filters or by the use of instruments that are sensitive to particular wavelengths, including light from frequencies beyond the visible light range, i .e. infrared and ultra-violet. Spectral imaging can allow extraction of additional information the human eye fails to capture with its receptors for red, green and blue. Multispectral imaging measures light in a small number (typically 3 to 15) of spectral bands.
Multispectral imaging can also be accomplished by using camera(s) sensitive to all the relevant spectral bands and sequentially illuminating the object with each spectral band, while capturing a frame for each band.
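The sequential-illumination approach just described can be sketched as follows: for each spectral band, switch on only that band's light source and grab one frame. The band list and the `light_on`/`grab_frame` callables are hypothetical stand-ins for real illumination and camera interfaces, not part of the patent.

```python
# Hypothetical sketch of sequential-illumination multispectral capture:
# one frame per spectral band while only that band's source is lit.

BANDS_NM = [450, 550, 660, 850]  # illustrative visible + NIR bands

def capture_multispectral(light_on, grab_frame):
    """Return {wavelength_nm: frame}, captured band by band.
    light_on(band) enables only that band's illumination;
    grab_frame() returns one frame from the (monochrome) sensor."""
    cube = {}
    for band in BANDS_NM:
        light_on(band)
        cube[band] = grab_frame()
    return cube
```

The resulting dictionary is a small spectral cube from which contamination signatures outside the visible range could be extracted.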
The machine vision device (3) for use according to the invention may comprise a light source capable of emitting electromagnetic waves in the ranges 350 nm to 700 nm (i.e. from the UV to the visible region) and 700 nm to 950 nm (i.e. NIR), and a sensor capable of receiving electromagnetic waves within the same ranges (i.e. 350 nm to 950 nm), and may be a device as described in e.g. WO 2017/121713.
Machine vision systems essentially come in three main categories:
1D vision analyses a digital signal one line at a time instead of looking at a whole picture at once, and is accomplished by use of a line-scan camera;
2D vision looks at the whole picture, and may be accomplished by use of an industrial camera/area-scan camera, e.g. a multispectral RGB (visible/colour images) camera; and
3D vision systems typically comprise multiple cameras or one or more laser displacement sensors. Commercially available 3D scanners or range cameras include the time-of-flight camera, the structured-light camera, and the stereo camera or 3D camera, e.g. the Sick 3D Ruler.
In one embodiment, the vision device of the first camera station (3A) is a line-scan camera.
In one embodiment, the vision device of the second camera station (3B) is an industrial 2D multispectral RGB camera, or area scan camera.
In one embodiment, the machine vision device (3) for use according to the invention is calibrated for correct size measurement.
The processing means
The processing means (4) for use according to the invention may be any commercially available processor/PC, in communication with the conveyor (2), the machine vision device (3), and the projecting means (5), and shall be capable of:
calculating the speed of the in-let conveyor belt (2), to keep the projection means (5) synchronized with the conveyor belt (2);
assessing the dimensions of the workpiece/starting material (7) and evaluating whether the dimensions of each starting material can comply with the product requirements; calculating the optimal cutting/trimming pattern in view of the specifications for the final product/cut; and
recognizing and tracking the workpiece/target product (7) while it is being transported from the site of measurement (3) to the location of the executing cutting operator (6).
The processing means (4) used according to the invention may also be used for calculating the optimal cutting lines, and for locating existing foreign objects, if any. Identification and feature extraction may be accomplished using pattern recognition and digital image processing, in particular feature extraction.
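As a minimal illustration of the kind of feature extraction the processing means could apply, the sketch below derives the length, width and area of a workpiece from a binary segmentation mask. The list-of-lists mask representation is an assumption for illustration; a production system would use a dedicated vision library.

```python
# Hypothetical sketch of feature extraction from a binary mask
# (2D list of 0/1, where 1 marks workpiece pixels).

def extract_features(mask):
    """Return (length, width, area): bounding-box extents in rows and
    columns, plus the foreground pixel count."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    if not rows:
        return (0, 0, 0)  # empty mask: no workpiece detected
    length = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    area = sum(v for row in mask for v in row)
    return (length, width, area)
```

Such a feature tuple could then be compared to the product specification, or used as part of the signature for tracking the workpiece between camera stations.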
Means for storing information
In a particular embodiment, the method of the invention further comprises a step (c1) wherein the digitalised data analysed in step (c) by the processing means (4) is transmitted to a means for storing information (server/database) (9).
According to this embodiment, the processing means (4) used according to the invention may be in communication with a means for storing information, i.e. a central server or database (9). This central database (9) may contain pre-loaded information related to the products in question, e.g. origin of the product, product ID, product specifications, etc.
After receipt of the digitalised data from the vision device (3), the processing means (4) can establish a product ID, e.g. by reference to a product catalogue or product specifications stored on the server (9), and the product ID may be allocated to each workpiece/target product (7) and transmitted to the server/database (9) for further action/use.
The projection means
By displaying an over-layer of information onto the workpiece/target product (7) to be processed, while in motion and lying on the conveyor belt (2), the projection means (5) for use according to the invention shall assist the executing operator (6) in cutting and/or trimming the meat item (7) in question. For fulfilling its tasks, the projection means (5) shall be in communication with the in-let conveyor (2) and the processing means (4), and shall receive guidance from the processing means (4).
A projection means (5) for use according to the invention may be a monitor, a projector, a headlight, a laser, or a printer, capable of creating and/or marking a guiding cutting/trimming pattern on the workpiece/target product (7) to be processed, but may also include augmented reality (AR) equipment (5A), e.g. smart-glasses or the like, which over-lay an image of the cutting curves onto the meat item (7).
Via the projection means (5), the system may provide the executing cutting operator (6) with information about the cutting and trimming of meat cuts according to dimension requirements, or about quality defects such as the occurrence of foreign objects that shall be removed. The projector may also display information about the order in which the operator (6) should carry out the suggested cuts and trims.
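The colour-coded cut ordering mentioned in the description (e.g. green before red) could be produced along the following lines. The palette and the simple order-to-colour mapping are illustrative assumptions, not part of the claimed system.

```python
# Hypothetical sketch: pair each cutting line, in execution order,
# with a colour code for the projected overlay.

SEQUENCE_COLOURS = ["green", "yellow", "red"]  # assumed order: first -> last

def colour_coded_overlay(cut_lines):
    """Return [(line, colour), ...] in cutting order; lines beyond the
    palette length reuse the last colour."""
    return [
        (line, SEQUENCE_COLOURS[min(i, len(SEQUENCE_COLOURS) - 1)])
        for i, line in enumerate(cut_lines)
    ]
```

The projection means would then render each line in its assigned colour, so the operator performs the green cut before the red one.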
In one embodiment, the projection means (5) for use according to the invention is a projector capable of over-laying an image onto the workpiece/target product (7).
In another embodiment, the projection means (5) for use according to the invention is a printer or a laser marker.
In a third embodiment, the projection means (5) for use according to the invention is a pair of smart-glasses (5A), presenting information to the executing cutting operator (6) in the form of an augmented reality (AR) vision of the scene.
In a fourth embodiment, instructions about optimal cutting curves are displayed as an over-layer onto the workpiece/target product (7).
In a fifth embodiment, instructions about the optimal sequence for cutting the curves are displayed as an over-layer onto the workpiece/target product (7).
In a sixth embodiment, instructions about the optimal sequence by which cuts are performed are displayed by means of different colours or patterns for different curves.
In a seventh embodiment, instructions about the optimal cutting sequence are presented by displaying the cutting curves in a sequence.
In an eighth embodiment, instructions about the locations of foreign objects are displayed as an over-layer onto the workpiece/target product (7).
In a ninth embodiment, the instructions displayed concern the location of excess fat, cartilage and/or bone to be removed from the piece.
The method of the invention
In another aspect, the invention provides a method for presenting information to a meat processing operator (6), by use of the system of the invention. The method of the invention may be characterised by comprising the subsequent steps of:
(a) receiving a workpiece/starting material (7) on a conveyor belt (2);
(b) obtaining one or more images of the incoming workpiece/starting material (7) using a machine vision device (3) comprising two consecutive camera stations (3A, 3B), and transmitting the digitalized data to a processing means (4);
(c) analysing the digitalized data obtained by the first camera station (3A) in step (b) for determining the identity (ID) and characteristics of the incoming workpiece/starting material (7), and optionally for calculating cutting lines and the location of existing foreign objects, if any, in communication with the processing means (4);
(d) analysing the digitalized data obtained by the second camera station (3B) in step (b) for determining the identity (ID) and characteristics of the incoming workpiece/starting material (7), in communication with the processing means (4), and transmitting the digitalized information/instructions to a projection means (5);
(e) using the projection means (5) of step (d) for presenting the cutting lines and the location of foreign objects, if any, calculated in step (c) to the meat processing operator (6) in the form of an over-layer of the information on the workpiece/starting material (7) to be processed.
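Steps (a)-(e) above can be summarised as a simple pipeline. Every component below is a hypothetical stand-in (callables for the camera stations, planner and projector), intended only to make the order of operations concrete.

```python
# Hypothetical sketch of the method's step order (a)-(e).

def run_pipeline(workpiece, station_a, station_b, compute_plan, projector):
    """(a) workpiece arrives; (b) imaged at station A; (c) analysed
    for ID and cutting lines; (d) re-located at station B; (e) the
    plan is projected onto the piece at its current position."""
    image_a = station_a(workpiece)      # (b) first camera station
    plan = compute_plan(image_a)        # (c) cutting lines / defects
    current_pos = station_b(workpiece)  # (d) second camera station
    return projector(plan, current_pos) # (e) overlay to the operator
```

For example, plugging in trivial stand-ins shows the plan from step (c) being delivered at the position found in step (d).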
In one embodiment, the method of the invention further comprises a step (c1) wherein the digitalised data analysed in step (c) by the processing means (4) is transmitted to a means for storing information (server/database) (9).
In another embodiment, the system further comprises one or more buffer-conveyors (2A) and/or an outlet-conveyor (2B).
In a third embodiment, the digitalized data obtained in step (c) by the first camera station (3A) is analysed by feature extraction techniques.
In a fourth embodiment, the projection means (5) for use according to the invention is a projector capable of over-laying an image onto the workpiece/target product (7).
In a fifth embodiment, the projection means (5) for use according to the invention is a printer or a laser marker.
In a sixth embodiment, the projection means (5) for use according to the invention is a pair of smart-glasses (5A), presenting information to the executing cutting operator (6) in the form of an augmented reality (AR) vision of the scene.
In a further embodiment, the invention provides a method for presenting information to a meat processing operator (6), which method comprises the subsequent steps of:
(a) receiving the starting material (7) on a conveyor belt (2);
(b) obtaining one or more images of the incoming starting material (7) using a machine vision device (3), and transmitting the digitalized data to a processing means (4);
(c) analysing the digitalized data obtained in step (b) using a processing means (4), and transmitting the digitalized information/instructions to a projection means (5);
(d) using a projection means (5), presenting the information/instructions obtained in step (c) to the meat processing operator (6) in the form of an over-layer of the information on the meat item (7) to be processed.
In one embodiment, the method of the invention for presenting information to a meat processing operator (6) comprises the subsequent steps of:
(a) receiving a meat item/starting material/target product (7) on a conveyor belt (2);
(b) obtaining one or more images of the incoming meat item/starting material/target product (7) using a machine vision device (3) comprising two consecutive camera stations (3A, 3B), and transmitting the digitalized data to a processing means (4);
(c) analysing the digitalized data obtained by the first camera station (3A) in step (b) for defining identity and for extracting feature characteristics, and for calculating cutting lines and, optionally, the location of existing foreign objects, if any, in communication with the processing means (4);
(d) analysing the digitalized data obtained by the second camera station (3B) in step (b) for identification of the incoming meat item, in communication with the processing means (4), and transmitting the digitalized information/instructions to a projection means (5);
(e) using the projection means (5) of step (d) for presenting the information/instructions obtained in step (c) to the meat processing operator (6) in the form of an over-layer of the information on the meat item (7) to be processed.
The analysis in step (c) may be performed by the processing means (4) using pattern recognition, which represents a branch of machine learning that focuses on the recognition of patterns and regularities in data, in this case data obtained from the vision device (3).
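A minimal sketch of how such pattern recognition might re-identify a workpiece at the second camera station: compare its measured feature vector against the records stored when it passed the first station, nearest-neighbour style. The feature representation and the distance threshold are assumptions made for illustration.

```python
# Hypothetical sketch: re-identify a workpiece by nearest-neighbour
# matching of its feature vector against stored records.

def reidentify(features, known, threshold=10.0):
    """known: {product_id: feature_tuple}. Returns the best-matching
    product ID, or None if no stored record is close enough."""
    best_id, best_dist = None, threshold
    for pid, ref in known.items():
        # Euclidean distance in feature space
        dist = sum((a - b) ** 2 for a, b in zip(features, ref)) ** 0.5
        if dist < best_dist:
            best_id, best_dist = pid, dist
    return best_id
```

Returning None (no sufficiently close record) would signal that the piece cannot be tracked and should be re-measured rather than guessed at.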
The projection means (5) used according to the method may e.g. be a projector capable of over-laying an image onto the workpiece/target product (7), or a pair of smart-glasses (5A) presenting information to the executing cutting operator (6) in the form of an augmented reality (AR) vision of the scene, or the presentation may be accomplished by marking the cutting curves onto the meat item (7) by means of a printer, a laser marker or similar.
The method of the invention may provide the executing cutting operator (6) with information about the cutting and trimming of the workpiece/starting material/target product (7) according to dimension requirements, or about quality defects such as the occurrence of foreign objects that shall be removed. The projector may also display information about the order in which the operator (6) should carry out the suggested cuts and trims.
In one embodiment, a vision camera (3), calibrated for size measurements, is used to assess the dimensions of the workpiece/starting material/target product (7).
In another embodiment, the processing means (4) used according to the method makes use of algorithms to analyse data from the vision camera (3) in order to:
(a) prior to cutting, evaluate whether the dimensions of each workpiece/starting material/target product (7) can comply with the product requirements;
(b) calculate the optimum cutting curves/lines and/or cutting order to optimize the total product value (meat product and trimmings);
(c) recognize and track the workpiece/target product (7) at the location of the cutting operator; and/or
(d) display the cutting curves and/or the cutting order to the operator (6).
In a further embodiment, the method of the invention is used for providing the executing cutting operator (6) with information about possible quality defects, e.g. the occurrence of foreign objects on the meat item (7).
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is further illustrated by reference to the accompanying drawings, in which:
Fig. 1 shows an embodiment of the system according to the invention: An in-let conveyor (2); A first camera station (3A); A second camera station (3B); A processing means (4); A projection means (5); The executing operator (6); The workpiece/starting material (7);
Fig. 2 shows another embodiment of the system according to the invention: An in-let conveyor (2); A first camera station (3A); A second camera station (3B); A processing means (4); A means for obtaining AR (5A); The executing operator (6); The workpiece/starting material (7);
Fig. 3A shows a front/side view of an embodiment of the system according to the invention: An in-let conveyor (2); A first camera station (3A); A second camera station (3B); A projection means (5); The executing operator (6);
Fig. 3B shows a front view of an embodiment of the system according to the invention: An in-let conveyor (2); A first camera station (3A); A second camera station (3B); A projection means (5);
Fig. 3C shows a side view of an embodiment of the system according to the invention: An in-let conveyor (2); A first camera station (3A); A second camera station (3B); A projection means (5);
Fig. 4 shows another embodiment of the system according to the invention: An in-let conveyor (2); A (buffer) conveyor (2A); An outlet conveyor (2B); A first camera station (3A); A second camera station (3B); A projection means (5);
Fig. 5 shows another embodiment of the system according to the invention: An in-let conveyor (2); A machine vision device (3); A processing means (4); A projection means (5)/A means for obtaining AR (5A); The executing operator (6); The workpiece/starting material (7).
List of reference signs
In the figures, identical structures, elements or parts that appear in more than one figure are generally labelled with the same numeral in all the figures in which they appear.
1. System for displaying instructions to a meat processing operator;
2. In-let conveyor;
2A. (Buffer) conveyor belt;
2B. Outlet conveyor belt;
3. Machine vision device;
3A. First camera station;
3B. Second camera station;
4. Processing means;
5. Projection means;
5A. Augmented reality (AR) equipment;
6. Executing operator;
7. Workpiece/starting material/target product;
8. Processing area;
9. Means for storing information (server/database).
EXAMPLE
The invention is further illustrated with reference to the following example, which is not intended to be in any way limiting to the scope of the invention as claimed.
The application of Augmented Reality (AR) for speed/capacity and precision was validated in a meat production task. The candidate platforms/channels, covering contaminant detection and cutting-recipe support, were benchmarked in a simulated trial environment, and the most promising channel was demonstrated in a meat production operation, with a vision sensor setup as the supporting information provider. Four different AR platform candidates were benchmarked. Benchmarking was performed with a simulated piece of meat into which submerged contaminants were inserted to mimic bone fragments in meat.
The tested platforms included a tablet, a set of smart glasses, a video projector and a video monitor. Operator performance was measured using an HTC VIVE-based tracking system with a pointer attached to the controller. From the controller, the 3D position of the pointer was recorded together with the time spent on the task, yielding a capacity and quality measure for the 3D operation of pointing out the submerged contamination.
Young volunteers, 48 operators in total, all novices to the specific task, were recruited. A statistically balanced experiment was set up with four groups, each assisted by a tablet, a video monitor, a video projector and a set of AR glasses (EPSON Moverio200), respectively. The pointer was tracked to give the position accuracy for pointing out the submerged target contamination. The time spent on the entire pointing procedure was recorded to give an estimated efficiency for each operator performing the task.
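The two measures recorded per operator, pointing accuracy and time spent, reduce to a simple computation. The sketch below uses assumed names and is not the actual trial software:

```python
import math


def score_trial(pointed_xyz, target_xyz, t_start_s, t_end_s):
    """One pointing task: 3D Euclidean position error between the tracked
    pointer tip and the submerged target, plus the time spent."""
    error = math.dist(pointed_xyz, target_xyz)
    return error, t_end_s - t_start_s


def summarise(trials):
    """Mean position error and mean time over one operator group."""
    errors, times = zip(*trials)
    return sum(errors) / len(errors), sum(times) / len(times)
```

Comparing the group means then gives the capacity (time) and quality (accuracy) ranking of the four platforms.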
A setup was built with a projector as the preferred platform to illustrate the potential in a real meat production operation. The setup included a vision sensor to determine the size of the product (length and width) and the position of any surface contamination (Dyna-CQ™, available from DMRI, Denmark). The results from the vision sensor are communicated to the operator using tracking software for recognition and tracking of each product. Finally, the relevant cutting instruction and the calculated position of the detected surface contamination (e.g. polymer fragments from wrapping films or carrier trays) are presented to the operator via a video projector.
Input to the tracking/identification software comes from a standard RGB camera placed in a fixed position relative to the projecting device.
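Since the camera is fixed relative to the projector and both observe the flat conveyor plane, a per-axis linear calibration is one simple way to map camera pixels to projector pixels. This is an illustrative sketch only; the calibration actually used by the commercial tracking software is not disclosed here:

```python
def fit_axis_map(cam_vals, proj_vals):
    """Least-squares fit of proj = a * cam + b for one axis, from pairs of
    corresponding camera and projector coordinates of calibration marks."""
    n = len(cam_vals)
    mx = sum(cam_vals) / n
    my = sum(proj_vals) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(cam_vals, proj_vals))
    sxx = sum((x - mx) ** 2 for x in cam_vals)
    a = sxy / sxx
    return a, my - a * mx


def cam_to_projector(pt, map_x, map_y):
    """Map a camera pixel (x, y) into projector coordinates."""
    ax, bx = map_x
    ay, by = map_y
    return (ax * pt[0] + bx, ay * pt[1] + by)
```

With the maps fitted once at installation, every detected cutting line or contamination position can be re-drawn at the correct place on the meat surface.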
Results
The potential of using AR technology was demonstrated using a pilot-scale production setup. The Dyna-CQ™ vision sensor gives the absolute size (length and width) of the incoming meat products and detects any surface contamination. The products in the pilot experiment had to meet specific size requirements. By comparing these requirements to the actual measured size, a cutting recipe in the form of a set of cutting lines can be calculated for each product.
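The comparison of measured size against the size requirement can be sketched as follows (hypothetical names; the actual Dyna-CQ™ recipe logic is not published):

```python
def cutting_lines(measured_len_mm, measured_wid_mm, spec_len_mm, spec_wid_mm):
    """Cutting recipe as a list of lines, one per axis where the measured
    product exceeds the specification; each line is (axis, offset in mm
    from the reference edge)."""
    lines = []
    if measured_len_mm > spec_len_mm:
        lines.append(("length", spec_len_mm))
    if measured_wid_mm > spec_wid_mm:
        lines.append(("width", spec_wid_mm))
    return lines
```

A product already within specification yields an empty recipe and passes without trimming.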
The calculated cutting lines are projected onto the meat surface in order to guide the operator in performing the optimized cutting operation. The sequential order of the two cutting procedures leaves a significant volume of meat on one of the two trimmings: the caudal (hip) or the ventral (belly) part. Actual price differences between the two parts may therefore influence the value yield of a specific cutting sequence.
In the pilot setup, the sequential order of cutting is communicated to the operator using colour codes. The sequential order of cutting leaves the lower left-hand cut-off on either the ventral or the caudal part. As these by-products are often priced differently, an optimization potential is available to the operator by selecting a specific cutting order.
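The choice of cutting order, i.e. routing the lower left-hand cut-off to the higher-priced by-product, amounts to a one-line comparison. The sketch below uses assumed order labels and a simple weight-times-price value model:

```python
def best_cut_order(cutoff_weight_kg, price_caudal_per_kg, price_ventral_per_kg):
    """Route the corner cut-off to the by-product with the higher price;
    returns the chosen order label and the value of the cut-off there."""
    if price_caudal_per_kg >= price_ventral_per_kg:
        return "caudal-first", cutoff_weight_kg * price_caudal_per_kg
    return "ventral-first", cutoff_weight_kg * price_ventral_per_kg
```

In the pilot setup this decision is precomputed and simply encoded in the colour of the projected lines.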
Conclusion
Despite the simplicity of the simulated meat product, the results appear relevant for the meat industry, as capacity and precision are parameters with a direct impact on the financial bottom line.
Benchmarking the four AR platforms is also relevant for the meat industry, as all components of the best-performing system are off-the-shelf, commercially available products. Different tracking software platforms are also available from different vendors, and the potential indicated in this experiment aligns with previously demonstrated yield improvements in fresh meat production.

Claims

1. A system (1) for displaying instructions to a meat processing operator (6), which system comprises:
an in-let conveyor belt (2) comprising a position sensor, in communication with the processing means (4), and capable of conveying the workpiece/starting material (7);
a machine vision device (3), comprising two camera stations (3A, 3B), each station, in communication with the processing means (4), configured for identification of the incoming workpiece/starting material (7), for making size measurements, and capable of assessing the dimensions of the workpiece/starting material (7), and transmitting the digitalised data to the processing means (4);
a processing means (4), in communication with the in-let conveyor (2), the machine vision device (3), and the projecting means (5), and capable of:
calculating the speed of the in-let conveyor belt (2), for keeping the projection means (5) synchronized with the conveyor belt (2);
assessing the dimensions of the workpiece/starting material (7) and evaluating the possibility of compliance of each starting material's dimensions with the product requirements; calculating the optimal cutting/trimming pattern in view of the specifications for the final product/cut; and
recognizing and tracking the workpiece/starting material (7) while being transported from the site of measurement (3) to the location of the executing cutting operator (6); and
a projection means (5), in communication with the in-let conveyor (2) and the processing means (4), and capable of displaying an over-layer of information, on the meat item (7) to be processed, to the executing cutting operator (6), while the workpiece/target product (7) is being conveyed; wherein,
the first camera station (3A) feeds data to the processing means (4) for identifying, and for locating, the incoming workpiece/starting material (7), and for calculating cutting lines, and optionally also identifies foreign objects, if present, and notes their position on the surface of the incoming workpiece/starting material (7); and
the second camera station (3B), in a dynamic process, feeds data to the processing means (4) for identifying and for locating the incoming workpiece/starting material (7), and communicates the current position of the workpiece/starting material (7) and the proposed cutting lines and location of foreign objects, if any, to the projecting means (5).
2. The system according to claim 1, which system further comprises one or more buffer-conveyors (2A) and/or an outlet-conveyor (2B).
3. The system according to either one of claims 1-2, wherein the projection means (5) is a projector capable of over-laying an image onto the workpiece/target product (7).
4. The system according to claim 3, wherein the projection means (5) is a printer or a laser marker.
5. The system according to either one of claims 1-2, wherein the projection means (5) is a pair of smart-glasses (5A), presenting information to the executing cutting operator (6) in the form of an augmented reality (AR) vision of the scene.
6. A method for presenting information to a meat processing operator (6), which method comprises the following subsequent steps:
(a) receiving a meat item/starting material (7) on a conveyor belt (2);
(b) obtaining one or more images of the incoming meat item/starting material (7) using a machine vision device (3) comprising two consecutive camera stations (3A, 3B), and transmitting the digitalised data to a processing means (4);
(c) analysing the digitalised data obtained by the first camera station (3A) in step (b) for identity and characteristics, and optionally for calculating cutting lines and the location of foreign objects, if any, in communication with the processing means (4);
(d) analysing the digitalised data obtained by the second camera station (3B) in step (b) for identification of the incoming workpiece/starting material (7), in communication with the processing means (4), and transmitting the digitalised information/instructions to a projection means (5);
(e) using the projection means (5) of step (d) for presenting the cutting lines and the location of foreign objects, if any, calculated in step (c) to the meat processing operator (6) in the form of an over-layer of the information on the workpiece/starting material (7) to be processed.
7. The method according to claim 6, which method further comprises a step (c1) wherein the digitalised data analysed in step (c) by the processing means (4) is transmitted to a means for storing information (server/database) (9).
8. The method according to either one of claims 6-7, wherein the system further comprises one or more buffer-conveyors (2A) and/or an outlet-conveyor (2B).
9. The method according to any one of claims 6-8, wherein the projection means (5) for use according to the invention is a projector capable of over-laying an image onto the workpiece/target product (7).
10. The method according to any one of claims 6-8, wherein the projection means (5) for use according to the invention is a pair of smart-glasses (5A), presenting information to the executing cutting operator (6) in the form of an augmented reality (AR) vision of the scene.
EP19812942.1A 2018-11-26 2019-11-25 System for cutting and trimming meat cuts Pending EP3887105A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DKPA201800912 2018-11-26
PCT/EP2019/082374 WO2020109210A1 (en) 2018-11-26 2019-11-25 System for cutting and trimming meat cuts

Publications (1)

Publication Number Publication Date
EP3887105A1 true EP3887105A1 (en) 2021-10-06

Family

ID=68733029

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19812942.1A Pending EP3887105A1 (en) 2018-11-26 2019-11-25 System for cutting and trimming meat cuts

Country Status (2)

Country Link
EP (1) EP3887105A1 (en)
WO (1) WO2020109210A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102330181B1 (en) * 2021-06-03 2021-11-24 주식회사 세종푸드 Customized meat processing method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9128810B1 (en) * 2004-01-09 2015-09-08 John Bean Technologies Corporation Method and system for portioning workpieces to directly-controlled and/or indirectly-controlled characteristics
GB0703496D0 (en) 2007-02-22 2007-04-04 Ishida Europ Ltd Method and apparatus for handling food pieces
DK177704B1 (en) 2012-11-22 2014-03-24 Attec Danmark As Method and means for controlling and removing foreign matter in food
EP3403075A1 (en) 2016-01-11 2018-11-21 Teknologisk Institut A method and device for scanning of objects using a combination of spectral ranges within vision, nir and x-rays

Also Published As

Publication number Publication date
WO2020109210A1 (en) 2020-06-04

Similar Documents

Publication Publication Date Title
RU2710146C1 (en) Device for obtaining and analyzing characteristic data for food industry products, an apparatus comprising such a device and a method for processing food industry products
US6031567A (en) Method and apparatus for video lumber grading
US20070193425A1 (en) Slicing of food products
US20100267320A1 (en) Fat Cover Measurement Device
US9675091B1 (en) Automated monitoring in cutting up slaughtered animals
EP2503331A2 (en) Method and system for the real-time automatic analysis of the quality of samples of processed fish meat based on the external appearance thereof, said samples travelling on a conveyor belt such that surface defects can be detected and the fish meat can be sorted according to quality standards
WO2001022081A1 (en) Animal carcase analysis
KR20210122254A (en) Food processing apparatus and method
JP2018146251A (en) Foreign matter detection system, foreign matter detection method and program thereof
US20190116816A1 (en) System for registration and presentation of performance data to an operator
WO2017118757A1 (en) A system and method for determining the presence and/or position of at least one bone in a meat piece
WO2020109210A1 (en) System for cutting and trimming meat cuts
AU2019208245B2 (en) Image acquisition for meat grading
CN216931665U (en) Device for automatically cutting pig carcass
DK180419B1 (en) System for cutting and trimming meat cuts
EP4158590A1 (en) Method and apparatus for identifying possible surface defects of a leather hide
JP2020024124A (en) Food foreign matter inspection device and foreign matter inspection method
DK180440B1 (en) On-line determination of quality characteristics of meat products
CN109187593A (en) A kind of control method and system of detection device
EP4025058B1 (en) Automatic removal of contamination on carcasses
WO2020109208A1 (en) System and method for automatic removal of foreign objects from a food surface
WO2024182367A1 (en) A vision-based quality control and audit system and method of auditing, for carcass processing facility
AU767212B2 (en) Animal carcase analysis
NZ749419A (en) Measuring device for multispectral measuring of quality features or defects of products and method therefor

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210628

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20240312

GRAL Information related to payment of fee for publishing/printing deleted

Free format text: ORIGINAL CODE: EPIDOSDIGR3

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3