EP3837533A1 - Multi-view imaging system and methods for non-invasive inspection in food processing - Google Patents

Multi-view imaging system and methods for non-invasive inspection in food processing

Info

Publication number
EP3837533A1
EP3837533A1 (Application EP18830699.7A)
Authority
EP
European Patent Office
Prior art keywords
imaging
image data
light
light source
support ring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP18830699.7A
Other languages
German (de)
English (en)
French (fr)
Inventor
Stefan Mairhofer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thai Union Group Public Co Ltd
Original Assignee
Thai Union Group Public Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thai Union Group Public Co Ltd filed Critical Thai Union Group Public Co Ltd
Publication of EP3837533A1 publication Critical patent/EP3837533A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 - Systems specially adapted for particular applications
    • G01N 21/88 - Investigating the presence of flaws or contamination
    • G01N 21/8806 - Specially adapted optical and illumination features
    • G01N 21/94 - Investigating contamination, e.g. dust
    • G01N 33/00 - Investigating or analysing materials by specific methods not covered by groups G01N 1/00 - G01N 31/00
    • G01N 33/02 - Food
    • G01N 33/12 - Meat; Fish
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/55 - Depth or shape recovery from multiple images
    • G06T 7/564 - Depth or shape recovery from multiple images from contours
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30128 - Food products

Definitions

  • the present disclosure relates to non-invasive inspection in food processing, and more particularly, to an imaging system and associated methods for detecting an internal object within food by processing image data of the food to determine a three-dimensional model of the internal object.
  • the food industry operates within narrow margins and is subject to increasing quality control standards. As such, the food processing industry has turned to automated systems to increase processing capacity while meeting higher quality control standards. Aspects of food processing include separation of primary from secondary products and the removal of foreign bodies, among others, so as to increase the added value of the product.
  • the third dimension information is useful for deriving precise information on the object's alignment or for geometry-dependent processing of irregularly shaped objects so as to enable precise separation or removal of irregularly shaped objects.
  • volumetric imaging techniques such as computed tomography (“CT”) or magnetic resonance imaging (“MRI”) lack speed, which is a particularly significant limitation given the narrow margins of the food processing industry.
  • the lack of speed makes current systems more suitable as random quality control inspection tools rather than as inline solutions for automated food processing and sorting.
  • the present disclosure is directed to rapid data acquisition and reconstruction for inline industrial food processing applications that allows for capturing geometric details of food material internal components. Such a system is particularly useful for applications where a full representation of the internal structure, such as one provided by volumetric imaging technologies, is unnecessary, but rather, where the recovery of a rough internal profile and speed are essential.
  • the systems and methods disclosed herein are directed to the surface-to-surface combination of two distinct materials, wherein one material forms an outer layer that allows the partial penetration of an arbitrary light spectrum, and a second material, or inner object, that is of particular interest is at least partially enclosed in the outer layer and allows for a different range of penetration or absorption by the arbitrary light spectrum.
  • exemplary implementations of the present disclosure include the use of an imaging system, including light sources and imaging devices to capture image data, and computational steps for reconstruction of the data to determine boundaries of the inner object.
  • An exemplary implementation of a system for capturing and processing image data of an object to determine boundaries of an inner portion of the object includes: a first conveyor; a second conveyor separated from the first conveyor by a gap; a transparent plate positioned in the gap and coupled to at least one of the first conveyor and the second conveyor; a support ring positioned at least in part in the gap and coupled to at least one of the first conveyor and the second conveyor, the support ring including at least one camera coupled to the support ring; and a first light source coupled to the support ring, wherein during operation, the first light source emits light directed towards the transparent plate.
  • the implementation may further include: an object positioned on the transparent plate, wherein during operation, the camera receives light passing through the object from the first light source; the object being a tuna fillet and the first light source emitting light at a wavelength that is equal to one of approximately 1260 nanometers, approximately 805 nanometers, or approximately 770 nanometers; a control unit in electronic communication with the camera, the camera capturing light passing through the object and transmitting a signal to the control unit corresponding to image data from the captured light; the image data being one of transmittance image data, interactance image data, or reflectance image data; and the control unit including a processor, the processor using machine learning for detecting boundaries between a first portion of the object and a second portion of the object within the first portion based on the image data received from the camera.
  • the implementation may further include: the processor passing the image data through a deep convolutional neural network; the deep convolutional neural network receiving the image data and outputting a plurality of silhouettes based on the image data corresponding to the second portion of the object, the processor projecting the silhouettes into a plurality of projections and analyzing an intersection between the plurality of projections to determine a three dimensional shape of the second portion of the object; the support ring including a plurality of cameras coupled to the support ring, each of the plurality of cameras capturing one of transmittance, interactance, or reflectance imaging data from the first light source; and the support ring including a second light source coupled to the support ring, wherein during operation, the second light source emits light directed to the transparent plate.
  • An alternative exemplary implementation of a device for capturing and processing image data of an object to determine boundaries of an inner portion of the object includes: a conveyor having a space between a first portion and a second portion of the conveyor; a plate positioned in the space and coupled to the conveyor; a support ring positioned at least in part in the space and coupled to the conveyor, wherein during operation, the support ring rotates between at least a first position and a second position; at least one light source coupled to the support ring, wherein during operation, the at least one light source emits light directed towards an object on the plate; an imaging device coupled to the support ring, wherein the imaging device receives light from the at least one light source after the light passes through the object; and a processor in electronic communication with the imaging device, the processor receiving a first set of image data from the imaging device when the support ring is in the first position and a second set of image data from the imaging device when the support ring is in the second position, wherein during operation, the processor outputs a three-dimensional model of an inner portion of the object.
  • the implementation may further include: the processor utilizing machine learning to process the first set of image data and the second set of image data into a plurality of silhouettes and to project the plurality of silhouettes into a plurality of projections, wherein the three-dimensional model is based on an intersection between each of the plurality of projections; a second light source coupled to the support ring, the imaging device capturing a third set of image data from the second light source when the support ring is in the first or second position, the processor utilizing the third set of image data to clarify boundaries of the three-dimensional model; the imaging device comprising a spectrograph and the at least one light source emitting light at a wavelength selected from one of approximately 1260 nanometers, approximately 805 nanometers, or approximately 770 nanometers.
  • An exemplary implementation of a method for capturing and processing image data of an object to determine boundaries of an inner portion of the object includes: emitting light from a light source, the emitting including directing the light through an object having a first portion and second portion, the second portion enclosed within the first portion; capturing light from the light source after the light passes through the object with an imaging device, the captured light corresponding to image data of the first portion and the second portion received by the imaging device; transmitting the image data to a processor; and analyzing the image data with the processor to detect a boundary between the first portion and the second portion, wherein the analyzing includes utilizing machine learning to produce a three dimensional representation of the second portion.
  • the implementation may further include: emitting light from the light source includes emitting the light with a wavelength selected from one of approximately 1260 nanometers, 805 nanometers, or 770 nanometers; utilizing machine learning to produce the three dimensional representation of the second portion includes the machine learning utilizing a deep convolutional neural network for processing the image data; analyzing the image data with the processor includes utilizing machine learning to output a plurality of two dimensional silhouettes corresponding to the image data of the second portion; analyzing the image data with the processor includes utilizing machine learning to create a plurality of projections, wherein each projection corresponds to a respective one of the plurality of two dimensional silhouettes; and analyzing the image data to produce the three dimensional representation further includes analyzing an intersection between each of the plurality of projections to output a three dimensional representation of the second portion of the object.
  • Figure 1 is a perspective view of an exemplary implementation of a conveyor belt system according to the present disclosure with a gap between a first conveyor and a second conveyor of the system.
  • Figure 2 is a schematic representation of an exemplary implementation of a transmittance imaging mode according to the present disclosure.
  • Figure 3 is a schematic representation of an exemplary implementation of an interactance imaging mode according to the present disclosure.
  • Figure 4 is a perspective view of an exemplary implementation of an imaging system according to the present disclosure having a support ring and a plurality of imaging devices and light sources coupled to the support ring.
  • Figure 5 is a schematic representation of a control unit of the imaging system of Figure 4.
  • Figure 6 is a perspective view of an alternative exemplary implementation of an imaging system according to the present disclosure having a support ring and a single imaging device and light source coupled to the support ring, wherein the support ring rotates between at least a first position and a second position.
  • Figure 7 is a perspective view of an exemplary implementation of an outer housing according to the present disclosure for reducing or blocking external light during imaging.
  • Figure 8 is a schematic representation of an exemplary
  • Figure 9 is a flow diagram of an exemplary implementation of a method according to the present disclosure for capturing and processing image data of an object to determine boundaries of an inner portion of the object.
  • Figure 10 is a flow diagram of an alternative exemplary implementation of a method according to the present disclosure for capturing and processing image data of an object to determine boundaries of an inner portion of the object.
  • the present disclosure provides a solution for a fast and non-invasive imaging system that is able to acquire visual data of food material for processing and inspection.
  • the implementations of the present disclosure capture the three-dimensional geometry of an arbitrarily-shaped object enclosed by a different layer of material. For example, in the case of a tuna fillet, the imaging systems and methods disclosed herein capture the three-dimensional geometry of a portion of dark meat (e.g., a different layer of material) that is contained within an outer layer of white meat, wherein the dark meat has an arbitrary three-dimensional shape that varies between successive fillets.
  • Implementations of the present disclosure include various systems, devices, and related methods that can leverage the optical properties of absorption, reflection, transmission and scatter, which differ for different turbid materials and spectral bands.
  • the relative proportion and amount of each occurrence can depend on the chemical composition and physical parameters of the material.
  • specularity and diffusion depend on the roughness of the surface, while scattering or backscattering of light results from multiple refractions at phase changes or different interfaces inside the material. Scattering may also appear due to heterogeneities, such as pore spaces or capillaries randomly distributed throughout the material, as well as the size and shape of such heterogeneities.
  • photons that are not reflected are either absorbed or transmitted through the material.
  • Such information, e.g., light that is scattered and light that passes through the material, is captured through interactance or transmittance imaging, respectively, while reflectance imaging focuses mainly on the light directly reflected from the surface, as described herein with reference to exemplary implementations.
  • optical properties and the results from interacting with turbid materials differ for each wavelength of light. While some wavelengths are quickly absorbed, others can penetrate deep into the material and, depending on the thickness, are able to be fully or partially transmitted. As described in more detail below, some of the implementations of systems, devices, and related methods can include a multi- or hyperspectral imaging tool to capture data across a range of wavelengths, from the ultraviolet (“UV”) to the near infrared (“NIR”).
  • some implementations described herein may include or utilize a diffraction grating, where light is dispersed and the intensity of each wavelength is captured by various implementations of sensor(s) described herein.
  • the implementations of systems, devices, and related methods described herein are capable of acquiring associated data which may include or comprise a three-dimensional cube with spatial data stored in two dimensions and the spectral data in the third dimension. The choice of an individual wavelength or a combination of suitable wavelengths can vary depending on the processed food material and may be determined in advance.
  • the choice of suitable wavelengths may be based on a database that includes transmittance and reflectance information for the particular food to be scanned together with a database of spectral properties for certain wavelengths of light; alternatively, before processing, the system can be calibrated to determine a wavelength that yields appropriate imaging data for the particular object or food to be scanned.
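  • As a concrete illustration of the data layout described above, the following sketch (Python with NumPy) shows how a hyperspectral cube with two spatial dimensions and one spectral dimension might be indexed to extract the image plane nearest a chosen wavelength. The array shapes, the sampled wavelength range, and the helper name nearest_band are assumptions made only for this example and are not part of this disclosure.

        import numpy as np

        # Hypothetical hyperspectral cube: 256 x 320 spatial pixels, 100 spectral bands.
        cube = np.random.rand(256, 320, 100).astype(np.float32)

        # Wavelengths (nm) associated with each spectral band, e.g. a uniform
        # sampling between 400 nm and 1000 nm.
        wavelengths = np.linspace(400.0, 1000.0, cube.shape[2])

        def nearest_band(cube, wavelengths, target_nm):
            """Return the single-wavelength image closest to target_nm."""
            idx = int(np.argmin(np.abs(wavelengths - target_nm)))
            return cube[:, :, idx], wavelengths[idx]

        # Select the band nearest the 805 nm wavelength discussed for transmittance imaging.
        band_image, actual_nm = nearest_band(cube, wavelengths, 805.0)
        print(f"Selected band at {actual_nm:.1f} nm with shape {band_image.shape}")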
  • an appropriate illumination source can also be selected as a system design variable.
  • light sources can differ in the intensity of emitted light across specific spectral bands.
  • the light source with wavelengths suitable for the application can be selected in order to achieve optimal results.
  • several types of illumination sources are disclosed herein, including halogen, LED, and laser lights of particular wavelengths.
  • the acquisition of high-quality image data of the inspected food material is an aspect of the present disclosure.
  • because the material being processed is three-dimensional, traditional imaging sensors tend to lack the means of comprehending the depth of a scene, which limits the ability to perceive the complexity of real-world objects.
  • the various implementations of systems, devices, and related methods described herein are capable of capturing a collection of images carrying information from multiple views.
  • 3D surface reconstruction techniques of conventional imaging systems tend to only capture the 3D coordinates of individual points located on the surface of an object, or in this case, the boundary between two materials of the object. Therefore, such methods are often referred to as surface reconstruction methods, for example multi-view stereo (“MVS”).
  • Structured-light 3D surface reconstruction techniques are another technique for acquiring three-dimensional information.
  • Structured-light methods use a spatially varying 1D or 2D structured illumination pattern that is projected onto an object.
  • in a planar scene, the illuminated pattern is identically projected onto the surface, while in a non-planar scene the pattern as seen by the camera will be distorted.
  • 3D information is extracted from the characteristics of the distorted pattern, which is usually acquired from the direct reflection of light at the surface boundary.
  • Shape from silhouettes relies on the contours of an object, from which a three-dimensional representation can be recovered.
  • various implementations of the systems, devices, and related methods are operable to project the silhouettes seen by each camera back into the three-dimensional scene, extracting a 3D shape from their intersections. Because concavities are generally not visible in silhouettes and are therefore disregarded, the reconstruction is only an approximation of the true 3D geometry, commonly known as the visual hull.
  • by using interactance imaging along with transmittance imaging, more precise 3D shapes of the objects can be extracted by accounting for the concavities.
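  • To make the shape-from-silhouettes idea above concrete, the sketch below (Python with NumPy) carves a voxel visual hull from synthetic circular silhouettes seen by cameras placed around an object. The orthographic projection, the 64-voxel grid, the sphere used as the target, and the four camera angles are simplifying assumptions for illustration only; an actual implementation would use calibrated perspective projections and real silhouettes.

        import numpy as np

        # Voxel grid covering [-1, 1]^3.
        n = 64
        coords = np.linspace(-1.0, 1.0, n)
        x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
        hull = np.ones((n, n, n), dtype=bool)

        radius = 0.5  # radius of the synthetic object (a sphere) producing each silhouette

        # Cameras placed every 45 degrees around the vertical axis, orthographic for simplicity.
        for theta in np.deg2rad(np.arange(0, 180, 45)):
            # Image coordinates of each voxel as seen from this view.
            u = -x * np.sin(theta) + y * np.cos(theta)
            v = z
            silhouette = (u ** 2 + v ** 2) <= radius ** 2   # circular silhouette of the sphere
            # Keep only voxels whose projection falls inside the silhouette in every view.
            hull &= silhouette

        print("voxels inside visual hull:", int(hull.sum()))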
  • an aspect in the context of 3D shape reconstruction and the acquisition of multi-view image data is the positioning of cameras. For projecting the silhouettes back into the scene, it is important to determine where the cameras were positioned and how the scene was originally projected onto the image plane.
  • the various systems, devices, and related methods described herein can operate to receive and/or process this information from camera calibrations. Such information can include intrinsic parameters, which are related to how light is being projected through the lens onto the imaging sensor and any distortions occurring through this process, as well as the extrinsic parameters referring to the position of the camera in the real world.
  • the systems, devices, and related methods described herein are capable of implementing a calibration process, including the intrinsic parameters discussed above, for each camera and position contributing to the system’s multi-view image data acquisition, which can be efficiently achieved by the use of binary fiducial markers.
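  • The calibration step described above can be approached in several ways; the sketch below assumes OpenCV's ArUco module (a contrib build; API details vary between OpenCV versions) is used for the binary fiducial markers, and recovers a camera's extrinsic pose from one detected marker of known size with solvePnP. The marker dictionary, marker length, and intrinsic matrix shown are placeholder values for illustration, not values from this disclosure.

        import cv2
        import numpy as np

        def estimate_camera_pose(gray_image, camera_matrix, dist_coeffs, marker_length_m=0.05):
            """Estimate rotation/translation of the camera relative to one ArUco marker."""
            dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
            corners, ids, _ = cv2.aruco.detectMarkers(gray_image, dictionary)
            if ids is None or len(corners) == 0:
                return None

            # 3D corner coordinates of the marker in its own coordinate frame
            # (top-left, top-right, bottom-right, bottom-left).
            half = marker_length_m / 2.0
            object_points = np.array(
                [[-half,  half, 0.0],
                 [ half,  half, 0.0],
                 [ half, -half, 0.0],
                 [-half, -half, 0.0]], dtype=np.float32)

            image_points = corners[0].reshape(-1, 2).astype(np.float32)
            ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                          camera_matrix, dist_coeffs)
            return (rvec, tvec) if ok else None

        # Placeholder intrinsics; real values come from a prior intrinsic calibration.
        K = np.array([[900.0, 0.0, 640.0],
                      [0.0, 900.0, 360.0],
                      [0.0, 0.0, 1.0]])
        dist = np.zeros(5)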
  • when acquiring image data, the silhouette of the target object can be determined before a three-dimensional model is generated.
  • an aspect of the present disclosure can include recognizing what in the image data represents the food material, and then distinguishing between the different components forming the outer and inner layers.
  • the outer layer or first portion corresponds to a first type of meat and the inner layer or second portion corresponds to a second type of meat, which is usually enclosed within the first type of meat.
  • machine learning, more specifically artificial neural networks, can be implemented in carrying out such tasks.
  • in a neural network there are several nodes in different layers linked through connections associated with weights. These weights are usually adjusted and learned through several iterations by specifying what output of a node is expected, given a known input.
  • a deep convolutional neural network can be trained to learn how to recognize the location and exact boundaries of specific objects.
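  • Purely as an illustration of the kind of deep convolutional neural network referred to above, the sketch below (Python with PyTorch) defines a small encoder-decoder that maps a single-channel image to a per-pixel silhouette probability, together with one training step on dummy data. The layer sizes, the name SilhouetteNet, and the binary cross-entropy objective are assumptions of this example, not the trained model described in this disclosure.

        import torch
        import torch.nn as nn

        class SilhouetteNet(nn.Module):
            """Toy encoder-decoder producing a per-pixel probability of the inner object."""
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                       # halve spatial resolution
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.Upsample(scale_factor=2, mode="nearest"),
                    nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 1),                   # logits for the silhouette mask
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        # One training step on dummy data (batch of 4 single-channel 128x128 images).
        model = SilhouetteNet()
        images = torch.rand(4, 1, 128, 128)
        masks = (torch.rand(4, 1, 128, 128) > 0.5).float()   # placeholder ground-truth silhouettes
        loss = nn.BCEWithLogitsLoss()(model(images), masks)
        loss.backward()
        print("loss:", float(loss))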
  • the present disclosure incorporates a conveyor belt system for the detection and extraction of the three-dimensional geometry of inspected defects or processed products. Due to the continuous moving belt, it is preferable that data acquisition and analysis are highly efficient.
  • the wavelengths of the applicable light spectrum can be determined beforehand based on the food material and objectives of the application. Specific spectral bands can either be acquired through hyperspectral imaging or through the use of specific filters or laser light, which implies a line scan system.
  • a light source is positioned opposite the imaging sensor, which requires a small gap in the conveyor belt bridged by a transparent medium that allows light to be transmitted and passed through the food material.
  • Another light source is positioned next and parallel to the imaging sensor for an interactance imaging mode. Both imaging modes are alternated at high frequency so as to avoid obscuring the image data captured by each mode.
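  • The high-frequency alternation of the two imaging modes mentioned above could, for example, be driven by a simple scheduler that toggles which light source is enabled for each captured frame. The frame rate, the function name, and the print-based stand-in for triggering hardware are assumptions of this sketch rather than the control scheme of an actual implementation.

        import itertools
        import time

        MODES = ("transmittance", "interactance")

        def run_alternating_capture(frame_rate_hz=120.0, n_frames=8):
            """Alternate imaging modes frame by frame so the two data sets do not obscure each other."""
            period = 1.0 / frame_rate_hz
            for frame, mode in zip(range(n_frames), itertools.cycle(MODES)):
                # In a real system this would switch the opposite-side or parallel light
                # source on, trigger the sensor, and tag the frame with its mode.
                print(f"frame {frame}: capture {mode} image")
                time.sleep(period)

        run_alternating_capture()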
  • a combination of multiple light sources and camera sensors (which, as described in more detail herein, may be a component of an imaging device, generally referred to as an imaging device, or in some implementations, a separate component coupled to the imaging device) are mounted in or around a conveyor belt to collect image data from multiple views.
  • a single camera sensor or imaging device can be used instead.
  • the single camera can be mounted on a rotating frame that allows repositioning light and camera around the conveyor belt.
  • image data is acquired in a helical alignment.
  • the helical image data is interpolated between acquisition points along the transversal path.
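  • Because a single rotating camera acquires its views at different positions along the belt, views taken on the helical path can be resampled to a common transversal position, as noted above. The sketch below (Python with NumPy) linearly interpolates a per-view intensity profile between neighbouring acquisition points; the profile length, belt positions, and linear interpolation are simplifying assumptions for illustration.

        import numpy as np

        # Hypothetical line-scan profiles acquired by one camera angle at successive
        # transversal (belt) positions as the support ring completes each revolution.
        belt_positions_mm = np.array([0.0, 40.0, 80.0, 120.0])      # where each scan was taken
        profiles = np.random.rand(belt_positions_mm.size, 256)       # one 256-pixel profile per scan

        def profile_at(target_mm, positions, profiles):
            """Linearly interpolate a profile at an arbitrary transversal position."""
            return np.array([
                np.interp(target_mm, positions, profiles[:, col])
                for col in range(profiles.shape[1])
            ])

        # Estimate the profile at 55 mm, between the scans taken at 40 mm and 80 mm.
        estimated = profile_at(55.0, belt_positions_mm, profiles)
        print(estimated.shape)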
  • the number of views may vary depending on the application and the required detail of the target object.
  • a structure is built around the imaging apparatus, blocking any light from outside, in order to control the illumination during imaging and thus to achieve better image quality.
  • the present disclosure uses a deep convolutional neural network trained to detect the location and boundaries of the target object.
  • the configuration and use of the deep convolutional neural network can vary based on the application, and such a model may be trained beforehand in order to accomplish this task.
  • the extracted silhouettes are used to generate a rough approximation of the target object’s three-dimensional shape.
  • the resolution of the reconstructed model is a trade-off between the required speed and details for the intended application. For example, in some implementations, a higher resolution may require a larger number of images and more computational resources for the reconstruction, which in turn affects application speed.
  • the arrangement and number of cameras may vary, as described herein. Positions and camera parameters can be calibrated prior to capturing image data. Due to the movement of food material on the conveyor belt, the transversal positions of the cameras change in reference to the transported material. The transversal position can be internally maintained, or if necessary depending on application, set in reference to the material or explicitly defined through markers on the conveyor belt.
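  • One simple way to maintain the transversal position internally, as mentioned above, is to integrate the belt speed over time so that every captured view can be referenced to the transported material. The constant belt speed and the timestamps below are assumptions made only for this sketch.

        def transversal_offsets(timestamps_s, belt_speed_mm_per_s=250.0, reference_time_s=0.0):
            """Belt-relative offset (mm) of each captured view from a chosen reference instant."""
            return [belt_speed_mm_per_s * (t - reference_time_s) for t in timestamps_s]

        # Views captured 0, 50 and 100 ms after the reference instant map to positions
        # 0 mm, 12.5 mm and 25 mm along the transported material.
        print(transversal_offsets([0.000, 0.050, 0.100]))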
  • Figure 1 is a perspective view of a conveyor system 100. It is to be appreciated that conveyor system 100 has been simplified for ease of understanding.
  • Conveyor system 100 includes a first conveyor 102 spaced from a second conveyor 104.
  • a space or gap 114 separates the first conveyor 102 from the second conveyor 104.
  • the first conveyor 102 may generally be referred to herein as a first portion of the conveyor system 100 and similarly, the second conveyor 104 may generally be referred to herein as a second portion of the conveyor system 100.
  • the space or gap 114 and a size and shape of a plate 110 can vary according to the specific application and as such, the present disclosure is not limited by the distance between the conveyors 102, 104.
  • the plate 110 is positioned in, or proximate to, the gap 114 so as to form a continuous conveyor line.
  • the plate 110 is transparent so as to allow light to pass through the plate 110 without obstruction.
  • the plate 110 may be formed from transparent plastics, polymers, or glass, among others, while in alternative implementations, the plate 110 is translucent and is similarly formed from a translucent plastic, polymer, or glass, for example.
  • the plate 110 is coupled to the conveyor system 100, or more specifically, to at least one of the first conveyor 102 and the second conveyor 104.
  • each conveyor 102, 104 of the system 100 is supported by a support structure 106, wherein a conveyor surface 112 translates via rollers 108 and a conventional drive mechanism (not shown).
  • the conveyor surface 112 may include a plurality of perforations 116, either arranged in a line, as illustrated in Figure 1, or evenly dispersed across the conveyor surface 112. In other implementations, the conveyor surface 112 is solid, meaning that there are no such perforations in the surface 112.
  • Figure 2 is a schematic representation corresponding to a transmittance imaging mode 200.
  • the transmittance imaging mode 200 includes a light source 202 and an imaging device 204.
  • Figure 2 further illustrates an object 206 to be scanned, wherein the object 206 is positioned on the plate 110.
  • the light source 202 emits light 208, which is directed towards plate 110 and propagates outward as it travels toward plate 110.
  • as the light 208 transmits through the plate 110 and the object 206, the light converges, as indicated by convergence path or portion 210.
  • after exiting the object 206, the light 208 diverges or disperses, as illustrated by divergence path or portion 212.
  • the imaging device 204 receives transmittance image data corresponding to captured light 208 that has been transmitted through the object 206, wherein the transmittance image data is subsequently transmitted via a signal to a processor or control unit in electronic communication with the imaging device 204 for further processing and reconstruction (e.g., control unit 428 illustrated in Figure 5).
  • the conveyor system 100 is generally translating the object 206 from right to left relative to the orientation shown in Figure 2, although it is to be appreciated that the conveyor system 100 could be translating in either direction.
  • the imaging device 204 and the light source 202 are preferably aligned along a vertical axis, wherein the imaging device 204 is above the light source 202, such that the light 208 output by the light source 202 propagates through the object 206 in a linear manner towards the imaging device. It is also to be understood that due to minor variations in alignment or the properties of the object 206 and the light 208, the alignment of the imaging device 204 and the light source 202 may not be truly vertical, but rather may be within 10 degrees of vertical, within 5 degrees of vertical or substantially vertical (i.e. within 3 degrees of vertical).
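  • The vertical-alignment tolerance described above (within 10, 5, or 3 degrees of vertical) can be expressed as a simple check on the angle between the source-to-camera axis and the vertical axis. The coordinates used below are placeholders for illustration only.

        import numpy as np

        def deviation_from_vertical_deg(light_source_xyz, imaging_device_xyz):
            """Angle (degrees) between the source-to-camera axis and the vertical axis."""
            axis = np.asarray(imaging_device_xyz, dtype=float) - np.asarray(light_source_xyz, dtype=float)
            vertical = np.array([0.0, 0.0, 1.0])
            cos_angle = np.dot(axis, vertical) / np.linalg.norm(axis)
            return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

        # A camera 400 mm above the source and offset 20 mm sideways is about 2.9 degrees
        # off vertical, i.e. "substantially vertical" under the 3-degree criterion.
        angle = deviation_from_vertical_deg((0, 0, 0), (20, 0, 400))
        print(f"{angle:.1f} degrees from vertical -> substantially vertical: {angle <= 3.0}")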
  • the light source 202 may be selected from one of a variety of sources in various implementations, such as a laser, a light emitting diode (“LED”), an array or panel of LEDs, incandescent lamps, compact fluorescent lamps, halogen lamps, metal halide lamps, fluorescent tubes, neon lamps, low pressure sodium lamps, or high intensity discharge lamps, for example.
  • the light source 202 is a laser
  • implementations of the present disclosure further include the light source 202 comprising a solid state, gas, excimer, dye, or semiconductor laser.
  • the laser may also be a continuous wave, single pulsed, single pulsed q-switched, repetitively pulsed, or mode locked laser.
  • the light source 202 is preferably selected specifically for the application or object 206 to be scanned, as different light sources 202 output light 208 with different penetration characteristics relative to object 206.
  • the light source 202 preferably outputs light in the transmittance imaging mode 200 with a wavelength of between 790 and 820 nanometers (“nm”), but more preferably, the wavelength is 805nm or approximately 805nm (i.e. between 800 and 810nm), which corresponds to wavelengths in the infrared portion of the electromagnetic spectrum that is outside of the visible light portion of the spectrum, generally between approximately 400 and 750nm.
  • this wavelength corresponds, at least with respect to a tuna fillet, to a wavelength of light that allows for deep penetration into the fillet while minimizing scatter, wherein scatter is generally undesirable in the transmittance imaging mode 200, as scatter tends to reduce the accuracy of the image data corresponding to the transmittance imaging mode 200.
  • the near-infrared spectrum at greater than 750nm or approximately 805nm is useful for tuna processing because water is still somewhat transparent to these wavelengths, as is hemoglobin, which allows the light to penetrate deep into the fillet.
  • the imaging device 204 is one of a number of commercially available imaging devices 204, including, without limitation, a spectrograph, a camera, or a sensor, among others.
  • the imaging device 204 is preferably a complementary metal oxide semiconductor (“CMOS”) sensor that captures wavelengths between 300nm and 1000nm.
  • the sensor is a charge-coupled device (“CCD”) sensor that captures similar wavelengths, or an indium gallium arsenide (“InGaAs”) sensor, which captures wavelengths between 900 and 1700nm.
  • the imaging device 204 is a camera or a spectrograph
  • the camera or spectrograph can include any of the above sensors, in addition to other electronic components and other types of sensors.
  • the light 208 is preferably split into individual wavelengths so as to investigate which wavelengths have the best transmittance and capture properties for the object 206.
  • a spectrograph with a diffraction grating can be used to split the light 208 into individual wavelengths.
  • a blocking filter that only allows selected wavelengths of light corresponding to the application to pass through to be captured by the imaging device 204 can be used in order to increase efficiency and reduce cost.
  • a laser light source 202 can be used that only emits light with a specified wavelength, or in a specified wavelength range, that preferably corresponds to the preferred wavelength selected through calibration, which again, results in reduced cost and increased efficiency.
  • blocking filters usually have a wider range of pass-through wavelengths, while lasers are very specific to a particular wavelength and as such, selecting between the two will depend on the desired operational wavelength for the material in question.
  • Figure 3 is a schematic representation of an exemplary implementation of an interactance imaging mode 300.
  • the interactance imaging mode 300 includes a light source 302 and an imaging device 304.
  • the imaging device 304 may be different than the imaging device 204 of the transmittance mode; in some implementations, the imaging device 304 of the interactance imaging mode 300 may be the same as the imaging device 204 of the transmittance mode. In other words, the same imaging device can be used for both transmittance imaging and interactance imaging modes.
  • the light source 302 and the imaging device 304 may be any of the light sources and imaging devices described above with reference to light source 202 and imaging device 204, respectively.
  • An object 306 to be scanned is present on plate 110 and the light source 302 emits light 310.
  • the light 310 passes through a convergence lens 308 coupled to the light source 302 at an output 312 of the light source 302.
  • the convergence lens 308 may be any of a number of known convergence lenses with principal axis, focal points, focal lengths, and vertical planes selected according to the specific application.
  • the convergence lens 308 assists with clarifying image data captured by the imaging device 304, among other benefits.
  • the light source 302 preferably emits light at a wavelength that is between 740 and 800nm and more preferably, is 770nm or approximately 770nm (i.e. between 765 and 775nm).
  • the imaging device 304 is preferably, or preferably includes, a sensor, such as a CMOS or a CCD sensor as described herein. This range of wavelengths has been found to be preferable for the interactance imaging mode 300 based on the above analysis with respect to the preferable wavelengths.
  • after the light 310 is emitted by the light source 302 and passes through convergence lens 308, the light 310 contacts the object 306, as indicated by the portion 314 of light 310 passing through the object 306.
  • the imaging device 304 measures light that is backscattered by the object 306
  • portion 314 of the light 310 corresponds to light that enters object 306 and then bends, curves, or turns within the object 306 due to the material composition of the object 306 and the light 310 before exiting the object 306.
  • the light 310 that is emitted through the convergence lens 308 propagates along a first direction 305 and the light 310 exiting the object propagates in a second direction 307, wherein in an implementation, the first and second directions are opposite each other along parallel axes.
  • implementations of the present disclosure also include the first and second directions being at an angle transverse to each other, such as when the light source 302 is at an angle with respect to the object 306, as described with reference to Figure 4.
  • the light 310 exits the object 306 and propagates toward the imaging device 304, wherein during propagation, the light disperses as indicated by the dispersing portion 316.
  • the imaging device 304 transmits interactance imaging data corresponding to an amount of captured light 310 to a control unit or processor, as described herein (e.g., control unit 428 illustrated in Figure 5).
  • the conveyor 100 is generally translating the object 306 from right to left relative to the orientation shown in Figure 3, as indicated by arrow 318
  • the light source 302 is located upstream of the imaging device 304 relative to the direction of translation of the conveyor system 100.
  • the light source 302 is generally located proximate to and preferably parallel to the imaging device 304. While it may be possible to have the light source 302 downstream from the imaging device 304, this arrangement would result in less accurate imaging data that would have to be corrected during processing.
  • it is also possible for the conveyor 100 to translate the object 306 opposite to the direction indicated by arrow 318, in which case, the light source is preferably to the left (i.e. upstream) of the imaging device 304 in the illustrated orientation.
  • both the light source 302 and the imaging device 304 are located above the object 306, and as such, the interactance imaging mode 300 captures the portion 314 of light 310 that scatters back towards the imaging device 304 after it enters the object 306, as opposed to the transmittance imaging mode 200, which captures the portion of light that translates directly through the object 206 along a vertical axis.
  • Figure 4 illustrates a perspective view of an exemplary implementation of an imaging system 400 including a conveyor system 402, a support ring 414 coupled to the conveyor system 402, a plurality of imaging devices 422 coupled to the support ring 414, and at least one light source, such as first light source 424, coupled to the support ring 414.
  • the conveyor system 402 may include all or substantially all of the features described above with reference to conveyor system 100 in Figure 1. However, briefly, the conveyor system 402 includes a first conveyor or portion 404 and a second conveyor or portion 406, wherein the first conveyor 404 is separated from the second conveyor 406 by a gap or space 410. A plate 412, which is preferably transparent, is positioned in the gap 410 and coupled to the conveyor system 402 so as to form a continuous conveyor line.
  • the support ring or frame 414 is coupled to the conveyor system 402 with supports 416, 418, wherein the support ring 414 is preferably circular so as to facilitate rotation of the support ring 414 during calibration of the imaging system 400.
  • the supports 416 are preferably an adjustable collar coupled to and extending from plates 420 coupled to the conveyor system 402, and more specifically, each of the first and second conveyors 404, 406.
  • the support 418 is preferably a base with an open channel for receiving the support ring 414 that is coupled to the conveyor system 402, such that the support ring 414 can be manually rotated by adjusting support collars 416 during calibration of the system.
  • while support 416 is illustrated as a collar and support 418 is illustrated as a base with a channel for receiving the support ring 414, it is to be appreciated that a number of other devices or arrangements are considered in the present disclosure for coupling the support ring 414 to the conveyor system 402.
  • the coupling includes use of one or more centrally disposed spokes extending from the conveyor system 402 or another structure located in the space 410 and coupled to the conveyor system 402, or alternatively, the support ring 414 can be coupled to and supported by a housing, such as the housing illustrated in Figure 7.
  • the support ring 414 further includes a plurality of imaging devices 422 coupled to and extending from the support ring 414. Each of the imaging devices 422 can be substantially similar, if not identical, to the imaging device 204 described with reference to Figure 2 and any variations thereof.
  • the support ring 414 includes at least a first light source 424, which may be any of the light sources discussed above with reference to light source 202 in Figure 2.
  • the first light source 424 is positioned beneath the conveyor system 402 and arranged such that light emitted by the first light source 424 is directed toward the plate 412 and an object 408 on the plate 412 to be imaged or scanned.
  • the light passes through the plate 412 and the object 408 to be received by at least one of the plurality of imaging devices 422, wherein data corresponding to the light received from the first light source 424 corresponds to transmittance imaging data.
  • the support ring further includes a second light source 426 coupled to and extending from the support ring 414 proximate the plurality of imaging devices 422.
  • the second light source 426 is used in the interactance imaging mode, wherein the second light source 426 is located proximate and parallel to the imaging devices 422.
  • the second light source 426 is located proximate to the imaging devices 422, but is at an angle that is transverse to a field of vision of the imaging devices 422, as illustrated in Figure 4 and described herein.
  • the second light source 426 may similarly be any of the above light sources discussed with reference to light source 202 in Figure 2.
  • the second light source 426 emits light that corresponds to the interactance imaging mode 300 described with reference to Figure 3.
  • light emitted by the second light source 426 is directed towards the object 408 in a first direction, turns within the object 408, and exits the object 408 in a second direction to be received by at least one, if not all, of the plurality of imaging devices 422, wherein data corresponding to the light received from the second light source 426 corresponds to interactance imaging data.
  • the angle between the first and second directions is less than 90 degrees and preferably less than 45 degrees, although it is to be understood that the angle will vary according to the specific application (i.e. the type of object 408 to be scanned).
  • the plurality of imaging devices 422 includes 5 imaging devices 422, wherein the imaging devices 422 are equally spaced from one another along a perimeter, circumference, or inner edge of the support ring 414 with an input directed towards the plate 412 and the object 408 for receiving light from one of the light sources 424, 426.
  • each of the imaging devices 422 will receive imaging data corresponding to different views of the object 408.
  • the selection and arrangement of the plurality of imaging devices 422 provide for multiple views that are input to a machine learning system, wherein the machine learning system generates silhouettes that are the basis for determining a 3D model based on the multiple views, as described herein.
  • the specific number, arrangement, and orientation of imaging devices 422 depends on the object 408 to be scanned and the calibration of the system 400, as discussed herein.
  • each of the imaging devices 422 can receive reflectance imaging data from the second light source 426, wherein the reflectance imaging data corresponds to the second light source 426 outputting light at a wavelength that is between 1230 and 1290nm, or more preferably, is 1260nm or approximately 1260nm (i.e. between 1255 and 1265nm), wherein light emitted at this wavelength is reflected off an outer surface of the object 408 to be received or captured by the plurality of imaging devices 422.
  • this wavelength (i.e. approximately 1260nm) is useful in a reflectance imaging mode because although water becomes highly absorbent for wavelengths above 1000nm, the dark meat starts to reflect the light at approximately 1260nm.
  • each of the imaging devices 422 may further include an InGaAs sensor, as described above, for capturing light at this larger wavelength.
  • Reflectance imaging data is particularly useful in situations where an object is only partially contained within an outer layer (i.e. a portion of the object extends out of the outer layer), but in other implementations, reflectance imaging data can be used, in addition to interactance imaging data, as a correction reference.
  • Figure 4 further illustrates a control unit 428 in electric communication with the system 400.
  • Figure 5 illustrates in detail the control unit 428 according to one example, non-limiting implementation.
  • the control unit 428 is generally operable to provide power to the system 400 and to process or transmit imaging data received from imaging devices 422.
  • Figure 5 schematically illustrates various control systems, modules, or other sub-systems that operate to control the system 400, including the exchange of data between the imaging devices 422 and the control unit 428.
  • the control unit 428 includes a controller 442, for example a microprocessor, digital signal processor, programmable gate array (PGA) or application specific integrated circuit (ASIC).
  • the control unit 428 includes one or more non-transitory storage mediums, for example read only memory (ROM) 440, random access memory (RAM) 438, Flash memory (not shown), or other physical computer- or processor-readable storage media.
  • the non-transitory storage mediums may store instructions and/or data used by the controller 442, for example an operating system (OS) and/or applications.
  • the instructions as executed by the controller 442 may execute logic to perform the functionality of the various implementations of the systems 400, 500 described herein, including, but not limited to, capturing and processing data from the imaging devices 422.
  • the control unit 428 may be communicatively coupled to one or more actuators (not shown) to control rotation of the ring 504.
  • the control unit 428 may be communicatively coupled to one or more belts (not shown) for rotating the ring 504.
  • the controller 442 may include instructions corresponding to specific positions (i.e. the first position and the second position discussed with reference to Figure 6), which are transmitted to the actuator or belt for automatically rotating the support ring 504 according to a predetermined manufacturing speed or conveyor speed.
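  • As an illustration of the position instructions mentioned above, the sketch below steps a rotating support ring between predefined angular positions at a rate derived from a conveyor speed. The two positions, the dwell-time calculation, and the print-based actuator stand-in are assumptions of this example rather than the controller logic of the disclosed system.

        import time

        RING_POSITIONS_DEG = (0.0, 90.0)   # e.g. a first position and a second position

        def rotate_ring_for_object(conveyor_speed_mm_per_s, object_length_mm):
            """Visit each predefined ring position while one object passes the imaging station."""
            transit_time_s = object_length_mm / conveyor_speed_mm_per_s
            dwell_s = transit_time_s / len(RING_POSITIONS_DEG)
            for angle in RING_POSITIONS_DEG:
                # A real controller would command an actuator or belt drive here and
                # trigger image acquisition once the ring settles at the position.
                print(f"rotate ring to {angle:.0f} degrees, acquire for {dwell_s:.2f} s")
                time.sleep(dwell_s)

        rotate_ring_for_object(conveyor_speed_mm_per_s=250.0, object_length_mm=300.0)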
  • the control unit 428 may include a user interface 436, to allow an end user to operate or otherwise provide input to the systems 400, 500 regarding the operational state or condition of the systems 400, 500.
  • the user interface 436 may include a number of user actuatable controls accessible from the systems 400, 500.
  • the user interface 436 may include a number of switches or keys operable to turn the systems 400, 500 ON and OFF and/or to set various operating parameters of the systems 400, 500.
  • the user interface 436 may include a display, for instance a touch panel display.
  • the touch panel display (e.g., an LCD with a touch sensitive overlay) may present a graphical user interface, with various user selectable icons, menus, check boxes, dialog boxes, and other components and elements selectable by the end user to set operational states or conditions of the systems 400, 500.
  • the user interface 436 may also include one or more auditory transducers, for example one or more speakers and/or microphones. Such may allow audible alert notifications or signals to be provided to an end user. Such may additionally, or alternatively, allow audible input to be received from the end user via the one or more microphones.
  • the user interface 436 may include additional components and/or different components than those illustrated or described, and/or may omit some components.
  • the switches and keys or the graphical user interface may, for example, include toggle switches, a keypad or keyboard, rocker switches, trackball, joystick or thumbstick.
  • the switches and keys or the graphical user interface may, for example, allow an end user to turn ON the systems 400, 500, start or end a transmittance imaging mode or an interactance imaging mode, communicably couple or decouple to remote accessories and programs, access, transmit, or process imaging data, activate or deactivate motors or audio subsystems, start or end an operational state of a conveyor system, etc.
  • the control unit 428 includes a communications sub-system 444 that may include one or more communications modules or components which facilitate communications with various components of one or more external devices, such as a personal computer or processor, etc.
  • the communications sub-system 444 may provide wireless or wired communications to the one or more external devices.
  • the communications sub-system 444 may include wireless receivers, wireless transmitters or wireless transceivers to provide wireless signal paths to the various remote components or systems of the one or more paired devices.
  • the communications sub-system 444 may, for example, include components enabling short range (e.g., via Bluetooth, near field communication (NFC), or radio frequency identification (RFID) components and protocols) or longer range wireless communications (e.g., over a wireless LAN, Low-Power-Wide-Area Network (LPWAN), satellite, or cellular network) and may include one or more modems or one or more Ethernet or other types of communications cards or components for doing so.
  • the communications sub-system 444 may include one or more bridges or routers suitable to handle network traffic including switched packet type communications protocols (TCP/IP), Ethernet or other networking protocols.
  • the wired or wireless communications with the external device may provide access to a look-up table indicative of various material properties and light wavelength properties. For example, an end user may select a material from a variety of materials displayed in the user interface 436, which may be stored in a look-up table or the like in the external device.
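  • The material/wavelength look-up described above could be as simple as a keyed table mapping each food material to the wavelengths preferred for each imaging mode. The dictionary structure below is an assumption of this sketch; the tuna entries merely restate the approximate wavelengths already given in this disclosure.

        # Wavelengths (nm) per imaging mode, keyed by material; values for "tuna fillet"
        # follow the approximate wavelengths discussed in this disclosure.
        WAVELENGTH_TABLE = {
            "tuna fillet": {
                "transmittance": 805,
                "interactance": 770,
                "reflectance": 1260,
            },
        }

        def preferred_wavelength(material, mode):
            """Return the preferred wavelength in nanometres, or None if not tabulated."""
            return WAVELENGTH_TABLE.get(material, {}).get(mode)

        print(preferred_wavelength("tuna fillet", "transmittance"))   # -> 805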
  • the control unit 428 includes a power interface manager 432 that manages supply of power from a power source (not shown) to the various components of the control unit 428, for example, where the control unit 428 is integrated in, or attached to, the systems 400, 500.
  • the power interface manager 432 is coupled to the controller 442 and a power source. Alternatively, in some implementations, the power interface manager 432 can be integrated in the controller 442.
  • the power source may include external power supply, among others.
  • the power interface manager 432 may include power converters, rectifiers, buses, gates, circuitry, etc. In particular, the power interface manager 432 can control, limit, restrict the supply of power from the power source based on the various operational states of the systems 400, 500.
  • the instructions and/or data stored on the non-transitory storage mediums that may be used by the controller includes or provides an application program interface (“API”) that provides programmatic access to one or more functions of the control unit 428.
  • such an API may provide a programmatic interface to control one or more operational characteristics of the systems 400, 500, including, but not limited to, one or more functions of the user interface 436, or processing the imaging data received from the imaging device or devices 422.
  • Such control may be invoked by one of the other programs, other remote device or system (not shown), or some other module.
  • the API may facilitate the development of third-party software, such as various different user interfaces and control systems for other devices, plug-ins, and adapters, and the like to facilitate interactivity and customization of the operation and devices within the systems 400, 500.
  • components or modules of the control unit 428 and other devices within the systems 400, 500 are implemented using standard programming techniques.
  • the logic to perform the functionality of the various embodiments or implementations described herein may be implemented as a “native” executable running on the controller, e.g., microprocessor 442, along with one or more static or dynamic libraries.
  • various functions of the control unit 428 may be implemented as instructions processed by a virtual machine that executes as one or more programs whose instructions are stored on ROM 440 and/or RAM 438.
  • a range of programming languages known in the art may be employed for implementing such example embodiments, including representative implementations of various programming language paradigms, including but not limited to, object-oriented (e.g., Java, C++, C#, Visual Basic.NET, Smalltalk, and the like), functional (e.g., ML, Lisp, Scheme, and the like), procedural (e.g., C, Pascal, Ada, Modula, and the like), scripting (e.g., Perl, Ruby, Python, JavaScript, VBScript, and the like), or declarative (e.g., SQL, Prolog, and the like).
  • instructions stored in a memory configure, when executed, one or more processors of the control unit 428, such as microprocessor 442, to perform the functions of the control unit 428.
  • the instructions cause the microprocessor 442 or some other processor, such as an I/O controller/processor, to process and act on information received from one or more imaging device(s) 422 to provide the functionality and operations of reconstructing a 3D model based on imaging data.
  • the embodiments or implementations described above may also use well-known or other synchronous or asynchronous client-server computing techniques.
  • the various components may be implemented using more monolithic programming techniques as well, for example, as an executable running on a single microprocessor, or alternatively decomposed using a variety of structuring techniques known in the art, including but not limited to, multiprogramming, multithreading, client-server, or peer-to-peer (e.g., Bluetooth®, NFC or RFID wireless technology, mesh networks, etc., providing a communication channel between the devices within the systems 400, 500), running on one or more computer systems each having one or more central processing units (CPUs) or other processors.
  • Some embodiments may execute concurrently and asynchronously, and communicate using message passing techniques.
  • other functions could be implemented and/or performed by each component/module, and in different orders, and by different components/modules, yet still achieve the functions of the control unit 428.
  • functionality of the control unit 428 can be made available by standard mechanisms such as through C, C++, C#, and Java APIs; libraries for accessing files, databases, or other data repositories; scripting languages; or Web servers, FTP servers, or other types of servers providing access to stored data.
  • the data stored and utilized by the control unit 428 and overall imaging system may be implemented as one or more database systems, file systems, or any other technique for storing such information, or any combination of the above, including implementations using distributed computing techniques.
  • control unit 428 and components of other devices within the systems 400, 500 may be implemented or provided in other manners, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits (“ASICs”), standard integrated circuits, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), field-programmable gate arrays (“FPGAs”), complex programmable logic devices (“CPLDs”), and the like.
  • system components and/or data structures may also be stored as contents (e.g., as executable or other machine-readable software instructions or structured data) on a computer-readable medium (e.g., as a hard disk; a memory; a computer network, cellular wireless network or other data transmission medium; or a portable media article to be read by an appropriate drive or via an appropriate connection, such as a DVD or flash memory device) so as to enable or configure the computer-readable medium and/or one or more associated computing systems or devices to execute or otherwise use, or provide the contents to perform, at least some of the described techniques.
  • the control unit 428 is in electrical communication with the support ring 414 and each of the plurality of imaging devices 422, for example through wires 430, which may be internally or externally located with respect to the conveyor system 402, the support 418, and the support ring 414.
  • the control unit 428 may be in wireless communication with the system 400 to wirelessly receive imaging data from imaging devices 422, as described above with reference to Figure 5.
  • the control unit 428 may be coupled to the system or located external to the system.
  • the control unit 428 provides power to the system 400 and also receives imaging data from the imaging devices 422.
  • the control unit 428 can include at least one processor such as in a standard computer, for processing the imaging data, or alternatively the control unit 428 can transmit the imaging data to an additional external processor or computer that is not specifically illustrated for clarity.
  • Figure 6 illustrates an alternative exemplary implementation of an imaging system 500 including a conveyor system 502, a frame or ring 504 coupled to the conveyor system 502, an imaging device 510 coupled to and extending from the frame 504, and first and second light sources 512, 514.
  • Certain features of the implementation of the system 500 are similar or identical to features described above with reference to system 400 and as such, those features have not been repeated in the interest of efficiency.
  • the frame 504 is coupled to the conveyor system 502 with supports 506, 508, wherein the support 506 is a base with a channel for receiving the frame 504 and at least one collar 508 surrounding the frame 504.
  • the system 500 further includes a mechanism for rotating the frame 504 about the conveyor system 502, such that the imaging device 510 can capture imaging data of an object 516 from multiple perspectives, angles, or views to facilitate 3D reconstruction.
  • the base 506 can include a rotating belt in the channel, wherein the belt is in contact with the frame 504 to rotate the frame 504 according to an input received from an external control unit, which may be control unit 428 (see Figure 4).
  • the frame 504 rotates between at least a first position and a second position, wherein in the first position, the imaging device 510 captures a first set of imaging data corresponding to transmittance or interactance imaging data from the first light source 512 or the second light source 514, respectively. The frame 504 then rotates to the second position and repeats the capture process for a second set of imaging data. This process can be repeated to produce as many views from as many orientations as are required for the specific application (i.e., third, fourth, fifth, sixth, or more views based on the imaging device 510 being in different positions with respect to the object 516). Moreover, in implementations where the frame 504 is rotated automatically, rotation of the frame 504 can be performed efficiently according to positions established during calibration of the system 500, while reducing the cost of the system 500 due to the use of fewer imaging devices 510.
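A minimal capture-loop sketch for the rotating-frame implementation described above, assuming hypothetical rotate_frame_to() and capture_view() helpers standing in for the control unit's drivers (the angles are illustrative calibration values only):

    import numpy as np

    # Calibrated angular positions (degrees) established during system calibration.
    CALIBRATED_POSITIONS = [0.0, 90.0, 180.0, 270.0]


    def rotate_frame_to(angle_deg):
        """Hypothetical driver call commanding the belt to rotate the frame 504."""
        print(f"rotating frame to {angle_deg} degrees")


    def capture_view(mode):
        """Hypothetical driver call returning one image from the imaging device 510."""
        return np.random.rand(64, 256)  # placeholder for spectrograph output


    views = []
    for angle in CALIBRATED_POSITIONS:
        rotate_frame_to(angle)
        views.append({"angle": angle,
                      "transmittance": capture_view("transmittance"),
                      "interactance": capture_view("interactance")})
    print(f"captured {len(views)} views")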
  • Figure 7 illustrates an exemplary representation of a system 600, which may be substantially similar or identical to systems 400, 500, wherein the system 600 includes an outer housing or cover 604, wherein walls 612 of the outer housing 604 are solid and opaque. Entrance portions or openings 610 of the housing 604 include a cover 608 comprising strips of opaque material that extend over at least 80% of an area of each entrance portion 610 such that light cannot enter housing 604.
  • the support rings or frames 414, 504 can be coupled to and supported by the housing 604 and a control unit 606 can be coupled to an outer wall 612 of the housing 604, wherein the control unit 606 provides power to the system 600, provides coordinates corresponding to positions of the rotating frame 504 and controls rotation of the rotating frame 504, or includes a processor for producing a 3D model based on imaging data received from the system 600, in various implementations.
  • Figure 8 is a schematic representation of a reconstruction method or system 700 utilized by a machine learning system or deep convolutional neural network (“CNN”) to produce a 3D model 702 from one-dimensional (“1D”) imaging data and two-dimensional (“2D”) silhouettes.
  • One or more convolutional layers may be followed by one or more pooling layers, and the one or more pooling layers may be optionally followed by one or more normalization layers.
  • the convolutional layers create a plurality of kernel maps, which are otherwise called filtered images, from a single unknown image.
  • the large quantity of data in the plurality of filtered images is reduced with one or more pooling layers, and the quantity of data is reduced further by one or more rectified linear unit layers (“ReLU”) that normalize the data.
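The convolution, pooling, and ReLU layer ordering described above can be sketched with plain numpy (an illustrative toy, not the disclosed network; the edge kernel and image are arbitrary):

    import numpy as np


    def conv2d(image, kernel):
        """Valid-mode 2D convolution producing one kernel map (filtered image)."""
        kh, kw = kernel.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out


    def max_pool(feature_map, size=2):
        """2x2 max pooling: keep the strongest response in each window."""
        h, w = feature_map.shape
        h, w = h - h % size, w - w % size
        trimmed = feature_map[:h, :w]
        return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))


    def relu(feature_map):
        """Rectified linear unit: clamp negative activations to zero."""
        return np.maximum(feature_map, 0.0)


    image = np.random.rand(16, 16)            # stand-in for a small image patch
    edge_kernel = np.array([[1., 0., -1.],    # simple vertical-edge detector
                            [1., 0., -1.],
                            [1., 0., -1.]])
    kernel_map = relu(max_pool(conv2d(image, edge_kernel)))
    print(kernel_map.shape)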
  • implementations of the present disclosure rely on semantic segmentation wherein the parameters for training the CNN are application dependent and need to be adjusted according to the complexity of the image data, which in turn depends on the food to be inspected.
  • the kernels are selected from a known image. Not every kernel of the known image needs to be used by the neural network.
  • kernels that are determined to be “important” features may be selected.
  • the kernel map is passed through a pooling layer and a normalization (i.e., ReLU) layer. All of the values in the output maps are averaged (i.e., sum & divide), and the output value from the averaging is used as a prediction of whether or not the unknown image contains the particular feature found in the known image.
  • the output value is used to predict whether the unknown image contains the feature of importance, which in an implementation is the second portion 708 described below (e.g., dark meat in a tuna fillet).
  • the CNN can then produce a silhouette corresponding to the identified areas of interest from an image.
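As a purely illustrative sketch of turning per-pixel scores into a silhouette (the 0.5 threshold and the random score map are assumptions, not taken from the disclosure):

    import numpy as np

    # Hypothetical per-pixel scores in [0, 1] for the feature of interest,
    # as a semantic-segmentation network might output for one view.
    scores = np.random.rand(32, 32)

    presence_score = scores.mean()                 # "sum & divide" average over the map
    silhouette = (scores > 0.5).astype(np.uint8)   # binary mask of the area of interest

    print(f"presence score: {presence_score:.2f}, "
          f"silhouette pixels: {int(silhouette.sum())}")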
  • the machine learning program or deep convolutional neural network will receive, as an input, the image data 704 from multiple views captured from systems 400, 500, wherein each image data set corresponds to a photograph of a tuna fillet with a first portion 706 and a second portion 708.
  • the camera or spectrograph may use a line-scan to acquire 1D data.
  • the 1D data is combined into 2D image data before being used in the CNN to recover the silhouettes, as described herein.
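A minimal sketch, assuming each line scan arrives as a 1D intensity vector, of how successive scans could be stacked into the 2D image consumed by the CNN:

    import numpy as np


    def accumulate_line_scans(line_scans):
        """Stack successive 1D line scans (one per conveyor step) into a 2D image."""
        return np.vstack(line_scans)


    # Simulated spectrograph output: 200 scans of 512 pixels each.
    scans = [np.random.rand(512) for _ in range(200)]
    image_2d = accumulate_line_scans(scans)
    print(image_2d.shape)  # (200, 512): scan index by pixel position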
  • the first portion 706 corresponds to a first, outer layer of meat with a first set of characteristics and the second portion 708 corresponds to a second, inner layer of meat with a second, different set of characteristics, wherein the second portion 708 is located within the first portion 706.
  • each of the image data sets 704 corresponds to imaging data, and preferably transmittance imaging data.
  • the image data 704 is 1D in the sense that it is a single line of a 2D image, or in other words, each pixel or kernel analyzed by the CNN corresponds to an intensity value.
  • the CNN is trained on a pool of representative sample images, which may include thousands of images of tuna fillets, to identify the general appearance of the second portion 708 (i.e., dark meat in tuna fillets); for instance, the CNN learns that the second portion 708 goes through a center of the fillet parallel to its major axis.
  • the CNN will acquire knowledge of edges, lines, and curves based on representative sample images, wherein the accuracy of the CNN will improve as more images are scanned. As such, based on the difference in intensity values, the CNN will identify the portions of the image data corresponding to the second portion 708.
  • each silhouette corresponds to the identified second portion 708 in each of the views represented in 2D.
  • a CNN is composed of many layers, where layers between the input and output are called “hidden layers.” Each layer has numerous neurons, which are fully connected between layers. These connections correspond to weights that are learned based on reference images.
  • a neuron or node is a computational unit that takes an input value, multiplies it with the associated weight, runs it through an activation function (e.g., ReLU as described herein), and delivers an output. This output forms the input of the next neuron linked through another connection.
  • the CNN can include other layers such as convolutions, pooling, normalization, and dropout that are used similarly to neurons, but have different functions.
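A single neuron of the kind described above, sketched with arbitrary illustrative weights and inputs (not values from the disclosure):

    import numpy as np


    def neuron(inputs, weights, bias):
        """Weighted sum of the inputs passed through a ReLU activation."""
        return max(0.0, float(np.dot(inputs, weights) + bias))


    x = np.array([0.2, 0.7, 0.1])     # outputs of the previous layer
    w = np.array([0.5, -1.2, 0.3])    # learned connection weights
    print(neuron(x, w, bias=0.05))    # output fed to the next connected neuron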
  • initially, the connections or weights between nodes are assigned randomly.
  • to train the CNN, labelled or annotated data is used: the input data (e.g., image data) and the expected output data (e.g., silhouettes) are provided, and the weights are adjusted until the CNN returns the expected output. This is basically an optimization process with a large number of parameters; whenever one weight is changed it will affect the entire network and, as such, training a CNN may include tens, hundreds, or thousands of iterations.
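The iterative weight-adjustment idea can be illustrated with a toy gradient-descent loop (a single linear unit with a squared-error loss; this is a teaching sketch, not the disclosed training procedure):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((100, 3))                  # labelled input data (e.g., pixel features)
    true_w = np.array([0.4, -0.2, 0.7])
    y = X @ true_w                            # expected output (e.g., silhouette labels)

    w = rng.random(3)                         # weights start out random
    lr = 0.1
    for step in range(1000):                  # many iterations, as noted above
        pred = X @ w
        grad = 2 * X.T @ (pred - y) / len(y)  # gradient of the mean squared error
        w -= lr * grad
    print(np.round(w, 3))                     # converges toward the target weights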
  • the convolution layers of the CNN reduce the size of the image, which determines the area that can be seen or is evaluated. For example, a small window of 9x9 pixels is moved over the full image to be analyzed. In that window, an observer would see a small fraction of the whole object (e.g. lines and corners). As the size of the image is reduced, but the window size is kept the same, more of the object features are recognized. If the image is very small and almost fits in the window, the observer would see the whole fillet as well as the dark meat in a single step. This is an example of what the neural network sees. In the early layers of the network, weights will be learned that allow for detecting lines and corners that are relevant in identifying the dark meat in a tuna fish fillet, for example.
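The growing field of view can be made concrete with a short arithmetic sketch: after each 2x reduction of the image, a fixed 9x9 window covers a proportionally larger share of the original scene (the image size is an invented example):

    # Effective coverage of a fixed 9x9 window after successive 2x reductions.
    window = 9
    image_size = 288            # illustrative original image width in pixels

    for layer in range(5):
        scale = 2 ** layer                      # cumulative downsampling factor
        coverage = window * scale               # original-image pixels seen by the window
        fraction = min(1.0, coverage / image_size)
        print(f"layer {layer}: window sees ~{coverage}px "
              f"({fraction:.0%} of the original width)")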
  • the computational system back projects each of the silhouettes 710 into a plurality of projections 712 by using an algorithm, wherein in an implementation, the algorithm extends lines corresponding to an outer boundary of each silhouette 710 into a higher dimensional scene, as illustrated in Figure 8.
  • based on the common intersections of the projections 712, the 3D model 702 of the second portion 708 can be determined.
  • the back projection is based on a cone shaped field of view. Imaging data corresponding to the interactance imaging mode, such as mode 300, can be utilized to refine the model based on the depth of the object of interest.
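The intersection of back-projected silhouettes can be sketched as a simple voxel-carving loop under strong simplifying assumptions (two orthographic views only, rather than the cone-shaped projections described above; the silhouettes here are random placeholders):

    import numpy as np

    N = 64
    occupied = np.ones((N, N, N), dtype=bool)       # start with a full voxel grid

    # Hypothetical silhouettes: one per view, each an (N, N) boolean mask.
    silhouettes = {
        "top":  np.random.rand(N, N) > 0.3,         # view along the z axis
        "side": np.random.rand(N, N) > 0.3,         # view along the y axis
    }

    # Carve away voxels whose projection falls outside any silhouette.
    occupied &= silhouettes["top"][:, :, None]      # project along z
    occupied &= silhouettes["side"][:, None, :]     # project along y

    print(f"voxels remaining in the reconstructed volume: {int(occupied.sum())}")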
  • the amount of light that scatters to be captured in the interactance imaging mode will vary depending on the depth of the object of interest, which in an implementation, is the dark meat in a tuna fillet.
  • the interactance imaging data assists with correcting for concavities in the surface of the scanned object because the captured light will vary according to the depth of the object, as above.
  • the interactance imaging data will differ for the portion of the object with the concavity, where the material is thinner, as opposed to the portion of the object without the concavity, where the material is thicker (i.e., a lower captured intensity value corresponds to thinner material because less of the light will be scattered and captured, and a higher intensity value corresponds to thicker material because more of the light will be scattered when it cannot penetrate or transmit through the thicker material).
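Hedged illustration of the intensity-to-thickness reasoning above: assuming a monotonic (here, arbitrarily exponential) relation between captured interactance intensity and local material thickness, a per-pixel thickness estimate could be recovered by inverting that relation. The calibration constants below are invented for the sketch:

    import numpy as np

    # Assumed calibration: captured intensity rises as light scatters in thicker
    # material and saturates; the constants are illustrative, not measured values.
    I_MAX = 1.0
    K = 0.35   # per-millimetre scattering coefficient (made up for this sketch)


    def thickness_from_intensity(intensity):
        """Invert intensity = I_MAX * (1 - exp(-K * thickness)) for thickness."""
        ratio = np.clip(intensity / I_MAX, 0.0, 0.999)
        return -np.log(1.0 - ratio) / K


    captured = np.array([0.20, 0.55, 0.85])   # lower values suggest thinner material
    print(np.round(thickness_from_intensity(captured), 1))  # estimated thickness, mm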
  • FIG. 9 is a flow diagram representing an exemplary method 800 of generating a 3D model of an object of interest based on image data captured by an imaging system (e.g., imaging systems 400, 500).
  • the method 800 begins at 802 wherein a conveyor is activated at 804 and an object to be scanned is loaded onto the conveyor. Activation of the conveyor can occur through an external switch, or through a control unit or program.
  • at 806, the light transmittance system, which may be substantially similar to the transmittance imaging mode 200 described with reference to Figure 2, is activated, either manually via an external switch or through a control unit or processor.
  • an imaging device that is part of the light transmittance system, either on its own or through a program associated with a control unit in electronic communication with the imaging device, determines at 808 whether transmittance image data corresponding to light passing through the object on the conveyor has been received, wherein in an implementation, the imaging device is a spectrograph, camera, or sensor.
  • the process returns to 806 and repeats until image data is received.
  • once transmittance image data is received, it is transmitted to a processor and the method 800 proceeds to 810, wherein the transmittance mode is deactivated and the interactance imaging mode is activated.
  • the interactance imaging mode may be substantially similar to the interactance mode 300 described with reference to Figure 3.
  • the above process can be repeated for each unique imaging device to produce a plurality of views.
  • this process is repeated each time the imaging device is located in a unique position in order to generate a plurality of views or a plurality of transmittance imaging data sets and a plurality of interactance imaging data sets.
  • the processor includes a machine learning program or a CNN, wherein the CNN receives, as an input, the transmittance image data.
  • the CNN then generates at 816, a plurality of silhouettes corresponding to a feature of interest in each transmittance image data set, which in an implementation, is an object located within a piece of food, or a second portion of fish located within a first portion of fish.
  • Each of the silhouettes is back projected at 818 into a plurality of projections and the common intersections of each projection are analyzed to determine a 3D geometry based on the intersections.
  • the processor then outputs this 3D geometry at 822, and it is determined, either manually or through an additional processing step, whether the object of interest is near a surface of the scanned object. If not, the 3D geometry based on the transmittance imaging data is output and the method 800 finishes at 828.
  • the process continues to 824, wherein the CNN uses the interactance imaging data to correct or clarify the 3D geometry based on the transmittance image data. Once the 3D geometry is corrected at 824, the processor or the CNN outputs the corrected 3D geometry at 826 and the process finishes at 828.
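The control flow of method 800 can be summarized in a hedged orchestration sketch; every helper below is a hypothetical placeholder standing in for the steps described above, with random arrays in place of real captures:

    import numpy as np


    def capture_transmittance(view):
        """Hypothetical stand-in for the transmittance capture around 806-808."""
        return np.random.rand(64, 64)


    def capture_interactance(view):
        """Hypothetical stand-in for the interactance capture activated at 810."""
        return np.random.rand(64, 64)


    def cnn_silhouettes(image):
        """Hypothetical stand-in for the CNN silhouette generation at 816."""
        return image > 0.5


    def back_project(silhouettes):
        """Hypothetical stand-in for the back projection and intersection at 818."""
        return np.stack(silhouettes).all(axis=0)


    def refine_with_interactance(geometry, interactance_data):
        """Hypothetical stand-in for the interactance-based correction at 824."""
        return geometry


    def run_method_800(views, near_surface):
        transmittance = [capture_transmittance(v) for v in views]
        silhouettes = [cnn_silhouettes(img) for img in transmittance]
        geometry = back_project(silhouettes)
        if near_surface:                        # decision on surface proximity at 822
            interactance = [capture_interactance(v) for v in views]
            geometry = refine_with_interactance(geometry, interactance)
        return geometry                         # corrected geometry (cf. output at 826)


    print(run_method_800(views=range(4), near_surface=True).shape)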
  • FIG. 10 illustrates an alternative exemplary implementation of a method 900 for generating a 3D model of an object of interest based on image data captured by an imaging system.
  • the method 900 begins at 902, wherein a conveyor and light transmittance system are activated at 904.
  • the imaging device or spectrograph determines at 906 whether imaging data corresponding to transmittance imaging data is received or captured by the imaging device. If not, the processor returns to 904 until the data is received. If so, the method 900 continues to 908, wherein the transmittance imaging data is transmitted to a processor at 908.
  • the processor determines, via a convolutional neural network, a plurality of 2D silhouettes from the 1D transmittance image data at 910. Each of the silhouettes is back projected and the intersections are analyzed at 912. Then, the processor outputs a 3D geometry based on common intersections between each of the projections at 914.
  • if the object of interest is not near a surface of the scanned object, the method 900 finishes at 926 with the 3D model based on the transmittance imaging data. If it is, the method 900 continues to activate the interactance imaging system at 918, wherein the imaging device determines whether imaging data corresponding to interactance imaging data is received by the imaging device or spectrograph at 920. If not, the method 900 returns to 918 until such data is received or captured. If so, the method 900 proceeds to 922, wherein the interactance imaging data is transmitted to the processor and the 3D model is corrected, if necessary, based on the interactance imaging data. Finally, the corrected 3D model is output at 924 and the method 900 finishes at 926.
  • the present disclosure allows for more precisely determining the volume and shape of internal defects, which may have an impact on quality control inspections as to whether or not certain food material needs to be rejected. Further, by knowing the three-dimensional geometry of the objects, processing and removal of secondary products can be performed more accurately, thus minimizing the loss of primary products.
  • implementations disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs executed by one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs executed by one or more controllers (e.g., microcontrollers), as one or more programs executed by one or more processors (e.g., microprocessors), as firmware, or as virtually any combination thereof, and designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of ordinary skill in the art in light of the teachings of this disclosure.
  • logic or information can be stored on any computer-readable medium for use by or in connection with any processor-related system or method.
  • a memory is a computer-readable medium that is an electronic, magnetic, optical, or other physical device or means that contains or stores a computer and/or processor program.
  • Logic and/or the information can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions associated with logic and/or information.
  • a “computer-readable medium” can be any element that can store the program associated with logic and/or information for use by or in connection with the instruction execution system, apparatus, and/or device.
  • the computer-readable medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device.
  • the computer readable medium would include the following: a portable computer diskette (magnetic, compact flash card, secure digital, or the like), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory), a portable compact disc read-only memory (CDROM), digital tape, and other nontransitory media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Food Science & Technology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medicinal Chemistry (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
EP18830699.7A 2018-08-16 2018-12-18 Multi-view imaging system and methods for non-invasive inspection in food processing Pending EP3837533A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862765113P 2018-08-16 2018-08-16
PCT/US2018/066314 WO2020036620A1 (en) 2018-08-16 2018-12-18 Multi-view imaging system and methods for non-invasive inspection in food processing

Publications (1)

Publication Number Publication Date
EP3837533A1 true EP3837533A1 (en) 2021-06-23

Family

ID=65003598

Family Applications (1)

Application Number Title Priority Date Filing Date
EP18830699.7A Pending EP3837533A1 (en) 2018-08-16 2018-12-18 Multi-view imaging system and methods for non-invasive inspection in food processing

Country Status (8)

Country Link
EP (1) EP3837533A1 (es)
JP (1) JP7324271B2 (es)
KR (1) KR20210041055A (es)
CN (1) CN113167740A (es)
EC (1) ECSP21013708A (es)
MX (1) MX2021001799A (es)
PH (1) PH12021550342A1 (es)
WO (1) WO2020036620A1 (es)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111337506B (zh) * 2020-03-30 2023-07-07 河南科技学院 一种用于服装品质检验的智能装置
US11363909B2 (en) * 2020-04-15 2022-06-21 Air Products And Chemicals, Inc. Sensor device for providing control for a food processing system
CN112763700B (zh) * 2021-02-18 2023-08-04 同济大学 混凝土预制梁成品质量检测和数字实体模型构建系统及方法
LU501123B1 (en) 2021-12-29 2023-06-29 Analitica D O O Apparatus and method for detecting polymer objects and/or chemical additives in food products

Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5352153A (en) 1993-07-13 1994-10-04 The Laitram Corporation Imaging system for use in processing transversely cut fish body sections
JPH07306163A (ja) * 1994-05-13 1995-11-21 Nippon Steel Corp 疵検査装置
DE29518639U1 (de) * 1995-11-24 1997-03-27 Heuft Systemtechnik Gmbh Vorrichtung zum Transportieren von Behältern vorbei an einer Einrichtung zum Inspizieren des Bodens der Behälter
CA2380099A1 (en) * 1999-07-28 2001-02-08 Marine Harvest Norway As Method and apparatus for determining quality properties of fish
JP2002125581A (ja) * 2000-10-30 2002-05-08 Nekusuko:Kk 自動定量切断装置
US6563904B2 (en) 2000-12-01 2003-05-13 Fmc Technologies, Inc. Apparatus and method for detecting and removing undesirable material from workpieces
US6587575B1 (en) 2001-02-09 2003-07-01 The United States Of America As Represented By The Secretary Of Agriculture Method and system for contaminant detection during food processing
US7060981B2 (en) 2003-09-05 2006-06-13 Facet Technology Corp. System for automated detection of embedded objects
JP2005158410A (ja) 2003-11-25 2005-06-16 Hitachi Ltd X線撮像装置
CN101194161B (zh) * 2005-06-03 2012-05-23 株式会社前川制作所 用于检测食物中污染物的设备
JP2007309780A (ja) 2006-05-18 2007-11-29 Web Tec Kk 印刷物の品質検査装置及びその品質検査方法
WO2008016309A1 (en) 2006-08-04 2008-02-07 Sinvent As Multi-modal machine-vision quality inspection of food products
JP5274338B2 (ja) 2009-03-30 2013-08-28 富士フイルム株式会社 計測対象保持具
BRPI1015107B1 (pt) * 2009-04-03 2019-02-26 Robotic Technologies Limited Métodos e aparelho de corte de carcaça
JP2011085424A (ja) * 2009-10-13 2011-04-28 Shimadzu Corp X線検査方法、及び該x線検査方法を用いたx線検査装置
CL2009002085A1 (es) 2009-11-16 2011-03-11 Univ Pontificia Catolica Chile Metodo y sistema para analizar automaticamente en tiempo real la calidad de muestras de carnes de pescado que circulan por una cinta transportadora, que permiten detectar defectos superficiales y clasificar las carnes segun patrones de calidad, en base a la segmentacion de las imagenes capturadas.
EP2353395A1 (en) 2010-02-07 2011-08-10 Valka Ehf Food processing apparatus for detecting and cutting tissues from food items
JP5712392B2 (ja) 2010-03-31 2015-05-07 株式会社 カロリアジャパン 物体中の異物混入判別装置
FI125531B (fi) * 2010-04-29 2015-11-13 Planmed Oy Lääketieteellinen röntgenkuvauslaitteisto
US9668705B2 (en) * 2010-07-13 2017-06-06 Takara Telesystems Corp. X-ray tomogram imaging device
JP2012098181A (ja) * 2010-11-02 2012-05-24 Sumitomo Electric Ind Ltd 検出装置及び検出方法
CN102141525A (zh) * 2011-01-01 2011-08-03 上海创波光电科技有限公司 可调节的正、背光源照明检测装置
US8515149B2 (en) * 2011-08-26 2013-08-20 General Electric Company Inspection system and method for determining three dimensional model of an object
US9886631B2 (en) 2012-11-19 2018-02-06 Altria Client Services Llc On-line oil and foreign matter detection stystem and method employing hyperspectral imaging
EP2755018B2 (de) * 2013-01-15 2024-04-03 Nordischer Maschinenbau Rud. Baader GmbH + Co. KG Vorrichtung und Verfahren zur berührungslosen Erkennung roter Gewebestrukturen sowie Anordnung zum Lösen eines Streifens roter Gewebestrukturen
US9420641B2 (en) * 2013-01-23 2016-08-16 Whirlpool Corporation Microwave oven multiview silhouette volume calculation for mass estimation
CH709896A2 (de) 2014-07-18 2016-01-29 Tecan Trading Ag Monochromator mit schwingungsarm bewegbaren optischen Elementen.
CN107003253B (zh) * 2014-07-21 2020-10-16 7386819曼尼托巴有限公司 用于肉类中骨头扫描的方法和装置
US10252466B2 (en) * 2014-07-28 2019-04-09 Massachusetts Institute Of Technology Systems and methods of machine vision assisted additive fabrication
US20180317531A1 (en) * 2015-10-14 2018-11-08 Thai Union Group Public Company Limited The combinatorial methods of high pressure and temperature process (hptp) for producing texturized meat products and the improved meat products obtained from the methods thereof
DE102015221299A1 (de) * 2015-10-30 2017-05-04 Voith Patent Gmbh Stirnradgetriebe
WO2017093539A1 (en) 2015-12-04 2017-06-08 Marel Iceland Ehf. A method for automatically processing fish fillets when they are in a frozen state
JP6723061B2 (ja) 2016-04-15 2020-07-15 キヤノン株式会社 情報処理装置、情報処理装置の制御方法およびプログラム
US20180017501A1 (en) * 2016-07-13 2018-01-18 Sightline Innovation Inc. System and method for surface inspection
US10021369B2 (en) * 2016-07-26 2018-07-10 Qcify Inc. In-flight 3D inspector
CN108122265A (zh) * 2017-11-13 2018-06-05 深圳先进技术研究院 一种ct重建图像优化方法及系统

Also Published As

Publication number Publication date
KR20210041055A (ko) 2021-04-14
ECSP21013708A (es) 2021-04-29
CN113167740A (zh) 2021-07-23
PH12021550342A1 (en) 2021-10-04
WO2020036620A1 (en) 2020-02-20
JP2021535367A (ja) 2021-12-16
MX2021001799A (es) 2021-06-15
JP7324271B2 (ja) 2023-08-09

Similar Documents

Publication Publication Date Title
US11120540B2 (en) Multi-view imaging system and methods for non-invasive inspection in food processing
EP3837533A1 (en) Multi-view imaging system and methods for non-invasive inspection in food processing
US11861889B2 (en) Analysis device
Satat et al. Towards photography through realistic fog
US9818232B2 (en) Color-based depth smoothing of scanned 3D model to enhance geometry in 3D printing
EP1462992B1 (en) System and method for shape reconstruction from optical images
Piron et al. Weed detection in 3D images
US11054370B2 (en) Scanning devices for ascertaining attributes of tangible objects
JP2019200773A (ja) 物体検出システム、それを用いた自律走行車、およびその物体検出方法
WO2012157716A1 (ja) タイヤの欠陥検出方法
Dacal-Nieto et al. Non–destructive detection of hollow heart in potatoes using hyperspectral imaging
FR3039684A1 (fr) Procede optimise d'analyse de la conformite de la surface d'un pneumatique
US20220178841A1 (en) Apparatus for optimizing inspection of exterior of target object and method thereof
CN111122590B (zh) 一种陶瓷表面缺陷检测装置及检测方法
Zhang et al. Computer vision estimation of the volume and weight of apples by using 3d reconstruction and noncontact measuring methods
US20230222645A1 (en) Inspection apparatus, unit selection apparatus, inspection method, and computer-readable storage medium storing an inspection program
EA037743B1 (ru) Обнаружение микроскопических объектов в текучей среде
CN115908257A (zh) 缺陷识别模型训练方法及果蔬缺陷识别方法
CA3143481A1 (en) Machine learning based phone imaging system and analysis method
Banus et al. A deep-learning based solution to automatically control closure and seal of pizza packages
ElMasry et al. Effectiveness of specularity removal from hyperspectral images on the quality of spectral signatures of food products
CN116977341A (zh) 一种尺寸测量方法及相关装置
CN111344553B (zh) 曲面物体的缺陷检测方法及检测系统
RU2737607C1 (ru) Способ оптического контроля качества сельскохозяйственной продукции шарообразной формы при сортировке на конвейере
KR102576213B1 (ko) 학습 모듈을 이용한 품질관리 시스템 및 방법

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210205

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230523

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20231222