DK181309B1 - A system and a method for determination of the tissue composition in meat products - Google Patents
- Publication number: DK181309B1 (application DKPA202200307A)
- Authority
- DK
- Denmark
Classifications
- A—HUMAN NECESSITIES
- A22—BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
- A22C—PROCESSING MEAT, POULTRY, OR FISH
- A22C17/00—Other devices for processing meat or bones
- A22C17/0073—Other devices for processing meat or bones using visual recognition, X-rays, ultrasounds, or other contactless means to determine quality or size of portioned meat
- A—HUMAN NECESSITIES
- A22—BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
- A22B—SLAUGHTERING
- A22B5/00—Accessories for use during or after slaughtering
- A22B5/0064—Accessories for use during or after slaughtering for classifying or grading carcasses; for measuring back fat
- A22B5/007—Non-invasive scanning of carcasses, e.g. using image recognition, tomography, X-rays, ultrasound
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N23/00—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
- G01N23/02—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
- G01N23/04—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N33/00—Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
- G01N33/02—Food
- G01N33/12—Meat; Fish
Abstract
This invention relates to a system and a method for determination of the tissue composition of meat products. More specifically, the invention provides a multi-directional X-ray system and a corresponding method for determining the spatial distribution and location of different subcutaneous tissue types, such as fat, meat, bones, and cartilage, by use of multi-directional and/or multiple-source X-ray projections.
Description
DK 181309 B1 1
A SYSTEM AND A METHOD FOR DETERMINATION OF THE TISSUE COMPOSITION
IN MEAT PRODUCTS
This invention relates to a system and a method for determination of the tissue composition of meat products. More specifically, the invention provides a multi-directional X-ray system and a corresponding method for determining the spatial distribution and location of different subcutaneous tissue types, such as fat, meat, bones, and cartilage, by use of multi-directional and/or multiple-source X-ray projections.
The use of single-source X-ray images for the determination of, e.g., bone structures, contaminants, surface characteristics, etc. has been described.
Thus, EP2532246 and EP2827134 describe the use of single-source X-ray image-capturing devices for the identification of bones in meat.
EP2922405 discloses a method for removal of undesirable parts of a food item, such as meat, poultry, fish, and vegetables, by transillumination with single-source X-rays, gamma rays, or magnetic rays.
WO2007049305 describes a method and an apparatus for quantifying the material composition of an object by use of a single-source X-ray attenuation image and a thickness map of the object.
WO2013023778 describes a meat inspection and sorting line comprising a radiation inspection facility, e.g., a single-source X-ray radiation device, for detecting undesired objects.
WO2018060325 describes a method for generating a three-dimensional surface profile of a food object, where the food object is exposed to a conical single-source X-ray beam.
Moreover, WO2006034871 describes a method for determining the physiological parameters of a slaughtered animal body using a computer tomography (CT) or nuclear spin tomography imaging method.
WO2006128456 describes a method for automatically determining quality characteristics of a carcass on a slaughter line, which method comprises the use of CT measuring equipment.
WO2006137919 describes a contaminant detection machine using a conveyor line-scan X-ray (CT) inspection system designed for contaminant detection in packaged products.
WO2015167585 describes a method for assessing the quality of a piece of meat by creating a plurality of cross-sectional images through the piece of meat using an x-ray computed tomography (CT) scanner.
WO2021030321 describes the use of three-dimensional (3D) stationary-gantry X-ray computed tomography (CT) systems to scan animals/livestock for enabling improved management of animal farming processes.
However, the multi-directional and/or multiple-source X-ray determination system described herein, for determining the three-dimensional distribution of the tissue composition of a meat product, has never been disclosed.
We have now found that a relatively simple setup involving multi-directional X-ray projections, arising either from one X-ray source or from multiple (simultaneous) X-ray sources, is able to mimic the effects of CT-scanning equipment. Moreover, as X-ray equipment is considerably cheaper and faster than CT-scanning equipment, this finding allows for the provision of a cheap and reliable alternative to conventional CT-scanning methods.
In addition, the equipment for use according to the present invention makes it possible for the end user to retrain the system to process new products, simply by running the in-let conveying belt at a reduced speed.
Therefore, in its first aspect, the invention provides a multi-directional X-ray determination system (1) for determining the three-dimensional distribution of the tissue composition of a meat product (7), which system comprises the following elements:
a. one or more X-ray sources (2), in communication with a processing means (6), which X-ray source is configured for recording one or more consecutive images, or which X-ray sources are configured for recording one or more simultaneous images, of the meat product (7) that is placed on a low X-ray density meat product support and conveying means (3);
b. the low X-ray density meat product support and conveying means (3), optionally in communication with the processing means (6), which conveying means is configured for providing the meat product (7) and for supporting and conveying the meat product in a position located between the X-ray source (2) and a scintillating means (4) during the imaging process;
c. the scintillating means (4), located in a position allowing excitation by the X-rays emitted from the one or more X-ray sources (2), and configured for exhibiting scintillation when excited by the one or more X-ray sources (2);
d. a photographic camera (5), in communication/operation with the processing means (6), which photographic camera is focused towards the scintillating means (4) and is configured for recording one or more photographic images of the excited scintillating means (4); and
e. the processing means (6), in communication/operation/correspondence with the one or more X-ray sources (2) and the photographic camera (5), and optionally in communication with the low X-ray density meat product support and conveying means (3), which processing means is configured for activating the one or more X-ray sources, for operating the meat product support and conveying means (3), and for activating the photographic camera (5).
In another aspect, the invention provides a method for determining the three-dimensional distribution of the tissue composition of a meat product (7), which method comprises the subsequent steps of:
i. providing the meat product (7) to be analysed and placing the meat product in a position located between one or more multi-directional X-ray sources (2) and a scintillating means (4), which scintillating means is configured for exhibiting scintillation when excited by the one or more X-ray sources (2);
ii. subjecting the meat product (7) of step i to analysis by the one or more X-ray sources (2) of step i, which X-ray source is configured for recording one or more consecutive images, or which X-ray sources are configured for recording one or more simultaneous images, of the meat product (7), thereby forming an image of the meat product (7) on the scintillating means (4) when excited by the one or more X-ray sources (2);
iii. recording one or more photographic images of the excited scintillating means (4) by means of a photographic camera (5) configured for recording one or more photographic images of the excited scintillating means (4); and
iv. analysing the obtained photographic images by use of a processing means (6) for a determination of the three-dimensional distribution of the tissue composition of the examined meat product (7).
Other objects of the invention will be apparent to the person skilled in the art from reading the following detailed description and accompanying drawings.
Any combination of two or more of the embodiments described herein is considered within the scope of the present invention.
The present invention is further illustrated by reference to the accompanying drawing, in which:
Fig. 1 shows an example of a side view of the multi-directional X-ray determination system of the invention: The X-ray source (2); The high X-ray density radiation shielding (2B); The low X-ray density meat product support/conveying means (3); The scintillating means/scintillator (4); The optical mirror (4A); The photographic camera (5); The low X-ray density light shielding (5A); The processing means (6); and The meat product (7);
Fig. 2 shows an example of a longitudinal view (end view) of the multi-directional X-ray determination system of the invention: The X-ray source (2); The bearing/trajectory/C-arc (2A); The high X-ray density radiation shielding (2B); The low X-ray density meat product support/conveying means (3); The scintillating means/scintillator (4); The optical mirror (4A); The photographic camera (5); The low X-ray density light shielding (5A); and The meat product (7);
Fig. 3 shows the distribution of muscle, fat, and bone, divided according to the Hounsfield scale;
Fig. 4 shows a binary image presenting points from which the fat thickness is to be determined in appropriate steps relative to the curvature of the breast;
Figs. 5A and 5B show endpoints across the breast that are scaled up continuously until they extend past the first fat layer;
Figs. 6A, 6B and 6C show a slice of the original piece, a slice of a modified piece, and a combined version showing the modified piece in grey and the original piece as luminous;
Fig. 7 (upper and lower windows) shows examples of training with different hyperparameters of the neural network: The network predicts the landscape of the total subcutaneous fat thickness based on different X-ray projections and the end discs; The network tries to minimize the loss based on the annotated fat thickness; The upper window shows the loss as a function of training epochs on the training dataset; The lower window shows the loss vs epochs on the validation set; and
Fig. 8 shows an example of a prediction compared to an annotated example.
The multi-directional X-ray determination system of the invention
In its first aspect, the invention relates to a multi-directional X-ray determination system for determining the three-dimensional distribution of the tissue composition of a meat product.
The system of the invention may be characterised by comprising the following essential elements:
a. one or more X-ray sources (2), in communication with a processing means (6), which X-ray source is configured for recording one or more consecutive images, or which X-ray sources are configured for recording one or more simultaneous images, of the meat product (7) that is placed on a low X-ray density meat product support and conveying means (3);
b. a low X-ray density meat product support and conveying means (3), optionally in communication with the processing means (6), which conveying means is configured for providing the meat product (7) and for supporting and conveying the meat product in a position located between the X-ray source (2) and a scintillating means (4) during the imaging process;
c. the scintillating means (4), located in a position allowing excitation by the X-rays emitted from the one or more X-ray sources (2), and configured for exhibiting scintillation when excited by the one or more X-ray sources (2);
d. a photographic camera (5), in communication/operation with the processing means (6), which photographic camera is focused towards the scintillating means (4) and is configured for recording one or more photographic images of the excited scintillating means (4); and
e. the processing means (6), in communication/operation/correspondence with the one or more X-ray sources (2) and the photographic camera (5), and optionally in communication with the low X-ray density meat product support and conveying means (3), which processing means is configured for activating the one or more X-ray sources, for operating the meat product support and conveying means (3), and for activating the photographic camera (5).
The X-ray sources
The X-ray source for use according to the invention may be any suitable X-ray source. X-rays with low energies, such as below 100 keV, are preferred. The X-ray source may be a conventional X-ray tube with an X-ray focal spot. The X-ray source may also be an X-ray source selected from the group of X-ray sources consisting of a hot filament X-ray source and a field emission X-ray source.
In one embodiment, the X-ray source for use according to the invention represents a single-source, multi-directional X-ray system, that is configured for recording one or more consecutive images. In this embodiment, the single X-ray source (2) may be attached to, and be supported by, a bearing/trajectory, e.g., a C-arc (2A), which bearing/trajectory (2A) is configured for rotating the X-ray source in an arc at a suitable working distance to the meat product (7) to be processed.
For causing a rotation of the X-ray source (2) in an arc at a suitable working distance to the meat product (7) to be processed, the system may comprise a means for manipulating the bearing/trajectory (2A), which may also be in communication with the processing means (6).
In another embodiment, the X-ray source for use according to the invention represents two or more X-ray sources (2), e.g., three X-ray sources, each X-ray source being configured for recording one or more simultaneous images.
The X-ray source for use according to the invention shall be in communication/operation/correspondence with the processing means (6) for use according to the invention, and be configured for being manipulated by the processing means, e.g., with respect to on/off instructions, etc.
The low X-ray density meat product support
For supporting the meat product (7) in question during processing, the system of the invention shall include a meat product support (3). The meat product support (3) shall be configured for supporting the meat product in a position located between the X-ray source (2) and the scintillating means (4) during the imaging process.
In one embodiment, the meat product support (3) also represents an in-let conveying line/belt, configured for provision of the meat product (7) to be processed by the system.
Moreover, as the meat item (7) must be illuminated by the X-rays emitted by the X-ray source (2), and the image thus obtained shall be projected onto the scintillating means (4), the meat product support shall represent a low X-ray density meat product support (3).
In another embodiment, the meat product support (3) for use according to the invention represents an in-let conveying line/belt, that is in communication with, and may be manipulated by input from, the processing means (6).
Optionally, the in-let conveying line/belt for use according to the invention may be equipped with an encoder, allowing it to be in operation with the processing means (6), so that the speed of the conveyor may be adapted to suit the needs of the remaining processes.
The scintillating means
For creating images of the treated meat product (7), the system of the invention shall also include a scintillating means/scintillator (4), which is configured for exhibiting scintillation when excited by the one or more X-ray sources (2).
A scintillator is a material that exhibits scintillation, a form of luminescence, when excited by ionizing radiation. Luminescent materials, when struck by an incoming particle, absorb its energy and scintillate (i.e., re-emit the absorbed energy in the form of light).
In one embodiment, the scintillating means (4) may form part of one side of a system cabinet.
The photographic camera
To record and preserve the measured scintillation, the system of the invention must comprise a photographic camera (5), configured for recording one or more photographic images produced on the excited scintillating means (4).
The photographic camera may, e.g., be a conventional black-and-white (b/w) camera.
In one embodiment, the image formed on the scintillating means/scintillator (4), as a result of irradiation from the X-ray source (2), is presented to the photographic camera (5) via an optical mirror (4A).
In one embodiment, the photographic camera (5) for use according to the invention may be in communication/operation with the processing means (6), and be configured for being manipulated by the processing means, e.g., with respect to on/off instructions, etc.
The processing means
For computing data obtained from the one or more X-ray sources (2), and the photographic camera (5), the system of the invention shall comprise one or more processing means (6).
The processing means (6) for use according to the invention may be any available computation device (e.g., GPU, CPU, PLC, and/or PC), and shall be capable of receiving and processing data obtained by the various elements of the system of the invention.
The processing means (6) for use according to the invention shall be in communication/operation/correspondence with the one or more X-ray sources (2), and the photographic camera (5), and optionally also in communication with the low X-ray density meat product support (3).
In one embodiment, the processing means (6) for use according to the invention is also in communication with a means for manipulating the bearing/trajectory (2A) for causing a rotation of the X-ray source (2) in an arc at a suitable working distance to the meat product (7) to be processed.
X-ray radiation shielding
To avoid emitting unwanted radiation to the environment and for protection of the operator, the system should be shielded by use of an X-ray impermeable material.
In this respect, a distinction may be made between a high X-ray density radiation shield (2B), for radiation originating directly from the X-ray source (2), and a low X-ray density light shield (5A), for light originating from the scintillating means/scintillator (4) and/or from the meat product support (3).
The method of the invention
In another aspect, the invention relates to a method for determining the three-dimensional distribution of the tissue composition of a meat product by use of the multi-directional X-ray determination system (1) of the invention.
The method of the invention may be characterised by comprising the subsequent steps of:
i. providing the meat product (7) to be analysed and placing the meat product in a position located between one or more multi-directional X-ray sources (2) and a scintillating means (4), which scintillating means is configured for exhibiting scintillation when excited by the one or more X-ray sources (2);
ii. subjecting the meat product (7) of step i to analysis by the one or more X-ray sources (2) of step i, which X-ray source is configured for recording one or more consecutive images, or which X-ray sources are configured for recording one or more simultaneous images, of the meat product (7), thereby forming an image of the meat product (7) on the scintillating means (4) when excited by the one or more X-ray sources (2);
iii. recording one or more photographic images of the excited scintillating means (4) by means of a photographic camera (5) configured for recording one or more photographic images of the excited scintillating means (4); and
iv. analysing the obtained photographic images by use of a processing means (6) for a determination of the three-dimensional distribution of the tissue composition of the examined meat product (7).
In one embodiment, the meat product (7) to be analysed may in particular be provided by use of an in-let conveying line/belt, configured for provision of the meat product (7) to be analysed.
To create an image of the meat product (7) in question, which image can subsequently be recorded by the photographic camera (5), the meat product shall be placed in a position between the one or more X-ray sources (2) and the scintillating means (4). With the meat product in this position, the X-ray source (2) will be able to create an image of the examined meat product by scintillation, which image may subsequently be captured by the photographic camera (5) used according to the invention.
In one embodiment, the X-ray source for use according to the invention represents a single-source, multi-directional X-ray system, that is configured for recording one or more consecutive images. In this embodiment, the single X-ray source (2) may be attached to, and be supported by, a bearing/trajectory, e.g., a C-arc (2A), which bearing/trajectory (2A) is configured for rotating the X-ray source in an arc at a suitable working distance to the meat product (7) to be processed.
In another embodiment, the X-ray source for use according to the invention represents two or more X-ray sources (2), e.g., three X-ray sources, each X-ray source being configured for recording one or more simultaneous images.
Finally, the photographic images obtained by the photographic camera (5) are analysed with the help of the processing means (6), and the three-dimensional distribution of the tissue composition of the examined meat product (7) is determined.
Training of the Neural Network (NN)
The neural network (NN) training is based on data acquired with different instruments to provide a quantitative description of anatomical features.
Data taken with the X-ray source (2) serve as input either in the form of raw X-ray images or in a reconstructed form using tomosynthesis, depending on the desired accuracy and processing speed of the equipment.
Furthermore, the equipment can supply additional data to improve this model.
Such data could include standard images of the target or 3D surface measurements, which would supply the neural network with more features to learn from, depending on the anatomical part in question.
Therefore, the neural network architecture needs to accommodate the supplied data format. Usually, neural networks are based on RGB images: they take three channels as input, convolve the image to extract features, and give output as a binary one-channel image, i.e., for segmentation tasks.
The proposed network will take an arbitrary n-dimensional data array composed of all the measurements taken during processing. The target, or annotated image, will be based either on a reference CT scan or on a fully reconstructed image using tomosynthesis as the metric during training. The annotation itself can be done using feature engineering when the fully reconstructed 3D target information is available.
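As a minimal sketch of this training setup (assuming a Python/NumPy environment; the linear per-channel model, the synthetic target, and all array sizes are illustrative assumptions, not the patent's actual network), the flow from an n-channel measurement array to an annotated fat-thickness landscape could look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Five input channels (e.g., three projections plus two end images), on a
# common 8x8 grid for the sketch; the "annotated" target is synthesized
# from known per-channel weights purely for illustration.
C, H, W = 5, 8, 8
x = rng.random((C, H, W)) - 0.5
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.0])   # hypothetical ground truth
target = np.tensordot(true_w, x, axes=1)        # annotated thickness map

# Gradient descent on the squared loss between prediction and annotation;
# a real model would be a convolutional network, this only shows the loop.
w = np.zeros(C)
for _ in range(2000):
    pred = np.tensordot(w, x, axes=1)
    grad = 2.0 * np.tensordot(x, pred - target, axes=([1, 2], [0, 1])) / (H * W)
    w -= 0.1 * grad
```

The recovered weights `w` converge to `true_w`, mirroring in miniature how the loss decreases over epochs in Fig. 7.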
Known techniques for carrying out tomosynthesis include filtered back projection and iterative expectation-maximization algorithms, both of which have been used to reconstruct the data.
Newer techniques are based on neural networks, where networks are trained to interpret a set of X-rays, possibly supplemented with vision images and/or 3D surface measurements.
List of reference signs
This is a listing of various elements relating to the present invention.
Alternative/synonymous designations are separated by slashes:
1. The multi-directional X-ray determination system of the invention
2. X-ray source
2A. Bearing/trajectory/C-arc
2B. High X-ray density radiation shielding [scintillator may form part of one side]
3. Low X-ray density meat product support/conveying means
4. Scintillating means/scintillator
4A. Optical mirror
5. Photographic camera
5A. Low X-ray density light shielding
6. Processing means
7. Meat product
The invention is further illustrated with reference to the following example.
Measurement of subcutaneous fat thickness
This example describes the determination of the subcutaneous fat thickness of pork breast using X-ray projections as input to a Neural Network (NN).
Initially, an algorithm for determining "fat coats" is established based on a CT image. This part was performed using standard CT methods as well as image analysis and linear algebra. The first step of the algorithm is to separate the pork breast from the table on which it lies during the scan, so that only one object remains in the 3D matrix.
Next, the pork breast is categorized by separating and dividing it according to the Hounsfield scale into muscle, fat, and bone, respectively. The distributions of these tissues can be seen in Fig. 3.
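A minimal sketch of such a Hounsfield-scale split (assuming NumPy; the HU thresholds below are illustrative assumptions, not values from the patent, and would need tuning against reference scans):

```python
import numpy as np

# Hypothetical Hounsfield-unit ranges for the three tissue classes.
FAT_HU = (-200, -30)
MUSCLE_HU = (-29, 150)
BONE_HU_MIN = 151

def segment_tissues(ct_volume):
    """Split a CT volume of HU values into binary fat/muscle/bone masks."""
    fat = (ct_volume >= FAT_HU[0]) & (ct_volume <= FAT_HU[1])
    muscle = (ct_volume >= MUSCLE_HU[0]) & (ct_volume <= MUSCLE_HU[1])
    bone = ct_volume > BONE_HU_MIN
    return fat, muscle, bone

# Toy 2x2x2 volume: one fat voxel, one muscle voxel, one bone voxel,
# and the rest air (about -1000 HU), which falls in no class.
vol = np.array([[[-100.0, 50.0], [400.0, -1000.0]],
                [[-1000.0, -1000.0], [-1000.0, -1000.0]]])
fat, muscle, bone = segment_tissues(vol)
```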
Subsequently, a class is written in Python that finds the endpoints across the breast (indicated by the red rings in Figs. 5A and 5B), and then finds points from which the fat thickness is to be determined in appropriate steps relative to the curvature of the breast.
These points are plotted into the binary image (see Fig. 4). This is done on all slices in the longitudinal direction, thereby forming an array which is used to determine the fat thickness in the entire pork breast perpendicular to the local surface.
From the points on the surface, a series of normal vectors are made, which are scaled up continuously until they extend past the first fat layer (protein layers below about 4 mm are disregarded). When this happens, the vector stops growing.
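A simplified 2D sketch of this normal-vector growth (assuming NumPy; the step size, stopping rule, and pixel rounding are assumptions, and the patent's implementation works on 3D voxel data rather than a single slice):

```python
import numpy as np

def fat_thickness_along_normal(fat_mask, start, normal, step=0.5, max_len=60.0):
    """Grow a ray from a surface point along its inward normal and return
    the distance travelled while still inside the binary fat mask."""
    normal = np.asarray(normal, float)
    normal /= np.linalg.norm(normal)
    length = 0.0
    while length < max_len:
        probe = np.asarray(start, float) + (length + step) * normal
        r, c = int(round(probe[0])), int(round(probe[1]))
        if not (0 <= r < fat_mask.shape[0] and 0 <= c < fat_mask.shape[1]):
            break
        if not fat_mask[r, c]:
            break          # the vector stops growing once past the fat layer
        length += step
    return length

# Toy slice: a 5-pixel-thick fat band at the top of the image.
mask = np.zeros((20, 20), bool)
mask[0:5, :] = True
t = fat_thickness_along_normal(mask, start=(0, 10), normal=(1, 0))
```

With the 0.5-pixel step, `t` comes out within half a pixel of the true 5-pixel band thickness.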
Examples of the algorithm's results are shown in Figs. 5A and 5B.
Since this fat thickness determination gives rise to local differences at voxel level, due to local variations in the curvature of the surface, a correction is made with an image filter with a Gaussian kernel across the landscape, which evens out local differences in curvature and gives a more realistic picture of the fat thickness.
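A sketch of such a smoothing pass over the thickness landscape (assuming NumPy; a separable Gaussian kernel is implemented directly here, standing in for whichever filter implementation was actually used):

```python
import numpy as np

def gaussian_smooth(landscape, sigma=2.0):
    """Smooth a 2D fat-thickness landscape with a separable Gaussian
    kernel to even out voxel-level differences caused by curvature."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(landscape, radius, mode='edge')
    # Convolve rows, then columns (the Gaussian kernel is separable).
    rows = np.apply_along_axis(lambda m: np.convolve(m, kernel, 'valid'), 1, padded)
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, 'valid'), 0, rows)

# Smoothing an isolated spike spreads it out while preserving total mass.
field = np.zeros((9, 9))
field[4, 4] = 1.0
sm = gaussian_smooth(field, sigma=1.0)
```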
X-ray projections
The next step was to make the X-ray projections based on the CT image, as these X-ray images must serve as input to the network. The "X-rays" are generated from the CT scans in three different projections: one from top to bottom, another across the breastbone, and a final one rotated 45° about the longitudinal axis.
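Synthesizing such projections from a CT volume can be approximated by summing attenuation along an axis (a simple stand-in for a real line-integral forward projection). The axis conventions below are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def make_projections(ct_volume):
    """Synthesize three parallel-beam 'X-ray' projections from a CT volume.

    Assumed axes: 0 = longitudinal, 1 = vertical, 2 = lateral. Summing along
    an axis stands in for a proper forward projection.
    """
    top_down = ct_volume.sum(axis=1)  # projection from top to bottom
    side = ct_volume.sum(axis=2)      # projection across the breastbone
    # Rotate 45 degrees about the longitudinal axis, then project vertically.
    tilted = rotate(ct_volume, 45, axes=(1, 2), reshape=False, order=1)
    oblique = tilted.sum(axis=1)
    return top_down, side, oblique

vol = np.ones((4, 8, 8))
p1, p2, p3 = make_projections(vol)
```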
The idea behind these projections is twofold. First, they are realistic to record in a production setting, where an X-ray source and detector in a C-arc configuration rotate around a conveyor belt on which the breast is transported in its longitudinal direction. Second, they contain information about the subcutaneous fat thickness of the breast and can thus act as the equivalent of an RGB image, i.e., a three-channel image, as input to a neural network.
Furthermore, it was decided to include the end discs, as these images contain evident information about the fat thickness. Since it has not been possible to photograph the pork breasts used as reference for training and testing of the network, an approximation has been made by using the CT image to form a grayscale image and inserting a few slices into the pork breast so that end artifacts are avoided.
This results in an abstract five-channel image that cannot be immediately visualized. However, each channel can be extracted separately, with the first three being different X-ray projections and the last two being grayscale images of the ends.
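Assembling the five-channel input is a simple channel stack; this sketch assumes all five images share the same height and width, with channel order (projections first, end discs last) as described above.

```python
import numpy as np

def build_five_channel_input(proj_top, proj_side, proj_oblique, end_a, end_b):
    """Stack three X-ray projections and two grayscale end-disc images into
    one five-channel network input; all images must have the same shape."""
    return np.stack([proj_top, proj_side, proj_oblique, end_a, end_b], axis=0)

imgs = [np.zeros((64, 128)) for _ in range(5)]
x = build_five_channel_input(*imgs)  # shape (5, 64, 128)
```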
Synthesis
Typically, a neural network requires large amounts of data to train a backbone. In many applications, pre-trained networks can be used, which reduces the required amount of data; but since the network in this project uses a completely new form of input, for which no pre-trained networks exist, it has been necessary to train it from scratch.
Therefore, an attempt has been made to synthesize data by taking existing pieces of pork breast, approximately a few hundred pieces, and modifying these to create more reference data. This is done with a random number generator and a class that makes random modifications to the original scan n times. Among other things, ellipses of random size are inserted, shifting the top layer of fat, along with cubes that can change the size and geometry of the breast.
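The random ellipse modification might be sketched as below on a 2-D binary fat mask; the function name, size limits, and the use of a seeded generator are illustrative assumptions, not the project's actual class.

```python
import numpy as np

def add_random_ellipse(fat_mask, rng):
    """Stamp a random ellipse into a 2-D binary fat mask, locally growing the
    top fat layer -- a simplified version of the modification scheme."""
    h, w = fat_mask.shape
    cy, cx = rng.integers(0, h), rng.integers(0, w)          # random centre
    ry = rng.integers(2, max(3, h // 8))                     # random radii
    rx = rng.integers(2, max(3, w // 8))
    yy, xx = np.ogrid[:h, :w]
    ellipse = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
    return fat_mask | ellipse

rng = np.random.default_rng(42)
mask = np.zeros((32, 32), dtype=bool)
out = add_random_ellipse(mask, rng)
```

Repeating such modifications n times per original scan yields the enlarged synthetic reference set described in the text.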
Examples can be seen in Figs. 6A-6C, where a slice of the original piece, a modified piece, and a combined version are shown, with the modified piece in gray and the original piece highlighted.
An associated problem has been whether the modifications change the basic anatomical structure sufficiently relative to the number of samples generated. In total, about 30,000 CT scans have been generated.
Another issue is whether the modifications contradict the underlying anatomical structure that a neural network could potentially include in its prediction.
Deep Convolutional Auto Encoder for Computed Tomography (CTDCAE)
After data has been formed in the form of the five-channel images, a network must be trained to find the fat coat and compare with the real solution made for all the CT images.
The network architecture chosen is an Auto-Encoder (AE) with several layers and adapted to images, thus becoming a Deep Convolutional Auto-Encoder (DCAE).
Structurally, this type of network is very similar to a U-Net. That is, the input is convolved down so that the relevant features can be represented in a smaller number of dimensions, i.e., the parameter space is reduced to the relevant information in the image.
Next, transposed convolutions reconstruct an image. The entire code for these classes is created using Facebook's PyTorch framework. The custom class is written so generally that all hyperparameters can be adjusted and changed during training, and the network architecture itself can be adapted based on the accumulated empirical results. This was done because there was no reference point to start from.
Thus, for example, the number of layers can be adjusted so that it is adapted to the amount of information that can be extracted from the available data.
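A minimal PyTorch sketch of such a DCAE is given below. The layer counts, channel widths, and kernel sizes are illustrative assumptions; the source only specifies a five-channel input, a single-channel output, and an encoder/decoder built from convolutions and transposed convolutions.

```python
import torch
import torch.nn as nn

class DCAE(nn.Module):
    """Minimal Deep Convolutional Auto-Encoder sketch: a five-channel input
    (three projections + two end discs) is convolved down to a bottleneck and
    reconstructed as a single-channel fat-thickness map. Layer sizes are
    illustrative only."""

    def __init__(self, in_ch=5, out_ch=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, out_ch, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DCAE()
y = model(torch.zeros(1, 5, 64, 64))  # output shape (1, 1, 64, 64)
```

In the project, hyperparameters such as depth and channel widths would be tuned empirically, as the text notes.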
The output from the network is therefore a single-channel image that provides an estimate of a landscape over the total subcutaneous fat thickness, based on the different X-ray projections and the end discs. The metrics chosen for minimization were an MSE loss function and a loss function based on the relative difference between prediction and ground truth, respectively.
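The two loss functions might be written as follows; the epsilon guard in the relative loss is an added assumption to avoid division by zero, not a detail from the source.

```python
import numpy as np

def mse_loss(pred, truth):
    """Mean squared error between predicted and true thickness maps."""
    return np.mean((pred - truth) ** 2)

def relative_loss(pred, truth, eps=1e-6):
    """Mean relative difference between prediction and truth; eps guards
    against division by zero and is an illustrative choice."""
    return np.mean(np.abs(pred - truth) / (np.abs(truth) + eps))

truth = np.full((4, 4), 10.0)
pred = np.full((4, 4), 9.0)
m = mse_loss(pred, truth)       # 1.0
r = relative_loss(pred, truth)  # ~0.1
```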
Fig. 7 shows an example of some of the training runs.
Both training and validation losses show a steady decrease. This is generally a good sign when training a network, as it indicates that the network is learning features rather than picking up statistical noise along the way, which could otherwise lead to overfitting.
To further guard against overfitting, holdouts have been made on "families" of pork breast. That is, training and validation data have been divided so that if, e.g., type g1 has been modified x times, all of these variants go to the validation set, while those synthesized from g2, g3, ..., g9 go to the training set. This is done both to prevent the network from having seen all the pork families in advance, and to test whether the synthesized data has sufficient variation.
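The family-based holdout can be sketched as a grouped split; the sample representation (a dict with a `family` field) and the family names are illustrative assumptions.

```python
def family_holdout(samples, val_families):
    """Split samples into train/validation so that every synthesized variant
    of a pork-breast 'family' ends up on the same side of the split."""
    train = [s for s in samples if s["family"] not in val_families]
    val = [s for s in samples if s["family"] in val_families]
    return train, val

# Nine families (g1..g9), three synthesized variants each.
samples = [{"family": f"g{i}", "id": j} for i in range(1, 10) for j in range(3)]
train, val = family_holdout(samples, {"g1"})
```

This guarantees that no variant of a validation family leaks into training.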
An example of what prediction looks like compared to the annotated one can be seen from Fig. 8.
This piece of pork is from the validation data, i.e., a piece the network has never encountered before, either as the original or as a modified version of the original.
The network performs well by a qualitative measure: it finds the fat in roughly the right places and with roughly the correct thickness.
An IoU precision will, however, perform poorly here, as it does not capture everything in the image; moreover, IoU requires binary values, whereas in this case one works with a continuous value. It would have been favourable if the network had converged further towards a lower overall loss during training, in order to provide a more accurate estimate of the fat coat and allow a realistic IoU to be determined for the various areas of interest.
This would require more variation in the data, which at present cannot be achieved with the synthesized data that has been constructed.
By normalizing the images and coarsening the precision of the fat coat, i.e., binning fat thicknesses into intervals, it has been possible to estimate how well the network performs (e.g., as shown in the coloured image of Fig. 8).
Here, the number of identical pixels is counted and normalized by the image size, giving an accuracy of 57.6%. One must be aware that this figure is inflated by the number of "empty" pixels, but it gives an indication of how well the network predicts the fat coat.
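The pixel-agreement metric described above amounts to the following; the function name is illustrative, and the inputs are assumed to be the binned (interval-coarsened) thickness maps.

```python
import numpy as np

def pixel_agreement(pred, truth):
    """Fraction of identical pixels, normalized by the image size. Note that
    'empty' background pixels inflate this number, as the text warns."""
    return float(np.mean(pred == truth))

pred = np.array([[1, 0], [0, 0]])   # binned thickness classes
truth = np.array([[1, 1], [0, 0]])
acc = pixel_agreement(pred, truth)  # 0.75
```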
Conclusion
It can be concluded that the amount of data is a decisive factor for the precision that can be achieved with the algorithm. Since the operator must primarily trim the places on the pork breast where there is too much fat, it may be sufficient to know where on the pork breast these places are located, together with an approximate measure of how much to trim. That is, it may be superfluous to determine the whole fat coat if such areas of interest can instead be identified on the breast.
Claims (5)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DKPA202200307A DK181309B1 (en) | 2022-04-01 | 2022-04-01 | A system and a method for determination of the tissue composition in meat products |
PCT/EP2023/058114 WO2023186969A1 (en) | 2022-04-01 | 2023-03-29 | A system and a method for determination of the tissue composition in meat products |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DKPA202200307A DK181309B1 (en) | 2022-04-01 | 2022-04-01 | A system and a method for determination of the tissue composition in meat products |
Publications (2)
Publication Number | Publication Date |
---|---|
DK181309B1 true DK181309B1 (en) | 2023-08-08 |
DK202200307A1 DK202200307A1 (en) | 2023-08-08 |
Family
ID=85980566
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
DKPA202200307A DK181309B1 (en) | 2022-04-01 | 2022-04-01 | A system and a method for determination of the tissue composition in meat products |
Country Status (2)
Country | Link |
---|---|
DK (1) | DK181309B1 (en) |
WO (1) | WO2023186969A1 (en) |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6031892A (en) * | 1989-12-05 | 2000-02-29 | University Of Massachusetts Medical Center | System for quantitative radiographic imaging |
DE102004047773A1 (en) | 2004-09-27 | 2006-04-06 | Horst Eger | Method for determining physiological quantities of an animal carcass |
US7450686B2 (en) | 2004-10-29 | 2008-11-11 | Thermofisher Scientific | Contaminant detector for food inspection |
EP1887874B1 (en) | 2005-05-31 | 2012-08-01 | Teknologisk Institut | A method and use of a database for automatically determining quality characteristics of a carcass on a slaughterline |
WO2007049305A1 (en) | 2005-10-28 | 2007-05-03 | Marel Hf. | A method and an apparatus for quantifying the material composition of an object |
DK2532246T3 (en) | 2010-10-27 | 2015-10-05 | Maekawa Seisakusho Kk | Deboning and deboning the meat with bone using X-rays |
US9095146B2 (en) | 2011-08-12 | 2015-08-04 | Marel Iceland Ehf | Meat inspection system |
WO2013136994A1 (en) | 2012-03-13 | 2013-09-19 | 株式会社前川製作所 | X-ray image capturing device and method for bone-in meat, and bone-in meat deboning system provided with said device |
DK177704B1 (en) | 2012-11-22 | 2014-03-24 | Attec Danmark As | Method and means for controlling and removing foreign matter in food |
WO2015167585A1 (en) | 2014-05-02 | 2015-11-05 | Empire Technology Development Llc | Meat assessment device |
WO2018060325A1 (en) | 2016-09-29 | 2018-04-05 | Marel Iceland Ehf. | A method of generating a three dimensional surface profile of a food object |
BR112021021464A2 (en) * | 2019-05-31 | 2022-01-04 | John Bean Technologies Corp | Determining the thickness profile of work products |
US20210041378A1 (en) | 2019-08-11 | 2021-02-11 | Rapiscan Systems, Inc. | Systems and Methods for Using Three-Dimensional X-Ray Imaging in Meat Production and Processing Applications |
- 2022-04-01: DK DKPA202200307A patent/DK181309B1/en active IP Right Grant
- 2023-03-29: WO PCT/EP2023/058114 patent/WO2023186969A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
WO2023186969A1 (en) | 2023-10-05 |
DK202200307A1 (en) | 2023-08-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9546968B2 (en) | Meat assessment device | |
US8699768B2 (en) | Scan plan field of view adjustor, determiner, and/or quality assessor | |
US9390523B2 (en) | Determination of z-effective value for set of voxels using CT density image and sparse multi-energy data | |
US8290568B2 (en) | Method for determining a property map of an object, particularly of a living being, based on at least a first image, particularly a magnetic resonance image | |
US7840043B2 (en) | Method for an X-ray machine | |
US20070092127A1 (en) | Method and device for segmenting at least one substance in an x-ray image | |
US20140010437A1 (en) | Compound object separation | |
KR20010081097A (en) | Computerized tomography for non-destructive testing | |
US20220313176A1 (en) | Artificial Intelligence Training with Multiple Pulsed X-ray Source-in-motion Tomosynthesis Imaging System | |
JP2010223963A (en) | Method and system for inspection of containers | |
Olsen et al. | A review of computed tomography and manual dissection for calibration of devices for pig carcass classification-Evaluation of uncertainty | |
CN111818851A (en) | Non-spectral Computed Tomography (CT) scanner configured to generate spectral volumetric image data | |
EP1941458B1 (en) | Automatic adaptive soft tissue thresholding for two-pass ct cone-beam artifact reduction | |
EP4292051A1 (en) | Metal artifact reduction algorithm for ct-guided interventional procedures | |
EP1012585B1 (en) | Method for analyzing characteristics of a moving wooden object, such as a log | |
DK181309B1 (en) | A system and a method for determination of the tissue composition in meat products | |
Cherezov et al. | Lung nodule sizes are encoded when scaling CT image for CNN's | |
MADONNA | COMPARISON OF IMAGE QUALITY BETWEEN A MEDICAL AND AN INDUSTRIAL CT SCANNER FOR USE IN NON-DESTRUCTIVE TESTING OF TREE-RING WIDTHS IN AN OAK (QUERCUS ROBUR) HISTORICAL | |
JP2015524061A (en) | Method and apparatus for separating plate-like objects from images generated by radiation imaging apparatus | |
RU2812866C1 (en) | Method for processing computer tomography images (ct images) | |
US20240104797A1 (en) | Systems, Methods, and Media for Material Decomposition and Virtual Monoenergetic Imaging from Multi-Energy Computed Tomography Data | |
Andriiashen et al. | X-Ray Image Generation as a Method of Performance Prediction for Real-Time Inspection: a Case Study | |
Sun et al. | Nondestructive estimation method of live chicken leg weight based on deep learning | |
Pereira et al. | Conveyor belt X-ray CT using domain constrained discrete tomography | |
JP2020038077A (en) | Bone inspection device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PAT | Application published | Effective date: 20230808 |
| PME | Patent granted | Effective date: 20230808 |