WO2020036083A1 - Imaging device for inspection - Google Patents

Imaging device for inspection

Info

Publication number
WO2020036083A1
WO2020036083A1 · PCT/JP2019/030575 · JP2019030575W
Authority
WO
WIPO (PCT)
Prior art keywords
inspection
unit
image
food
imaging
Prior art date
Application number
PCT/JP2019/030575
Other languages
English (en)
Japanese (ja)
Inventor
祥貴 下平
Original Assignee
味の素株式会社 (Ajinomoto Co., Inc.)
Priority date
Filing date
Publication date
Application filed by 味の素株式会社 (Ajinomoto Co., Inc.)
Publication of WO2020036083A1 publication Critical patent/WO2020036083A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/89 Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles

Definitions

  • the present invention relates to an inspection apparatus, an inspection method, an inspection program, and an inspection imaging apparatus that captures an image used in the inspection apparatus, the inspection method, and the inspection program.
  • Non-Patent Document 1 introduces a case where deep learning is used in the inspection of food raw materials.
  • Non-Patent Document 2 introduces the development of a technique for sorting raw materials used in factories by artificial intelligence.
  • The inspection device according to the present invention includes a control unit configured to inspect a raw material used for food, a food, or a container and the food contained in the container.
  • The control unit performs unsupervised transfer learning on feature amount data extracted using a learned model adjusted by Bayesian optimization, with images of non-defective inspection targets as learning data.
  • The control unit includes: transfer learning means for generating a model for classifying the inspection target as non-defective or defective; object recognition means for recognizing the inspection target from an image of the inspection target and cutting out the recognized inspection target region from the image; and object classification means for classifying the inspection target recognized by the object recognition means as non-defective or defective by applying the region cut out by the object recognition means to the model generated by the transfer learning means.
  • The learning data may be images of each of a plurality of types of non-defective inspection targets, and the object classification means may classify the recognized inspection target into any one of a plurality of classes including a non-defective class for each type and a defective class.
  • In the inspection method according to the present invention, the control unit of an inspection device inspects a raw material used for food, a food, or a container and the food contained in the container. The method includes: generating a model for classifying the inspection target as non-defective or defective by performing unsupervised transfer learning on feature data extracted using a trained model adjusted by Bayesian optimization, with images of non-defective inspection targets as learning data; recognizing the inspection target from an image of the inspection target and cutting out the recognized inspection target region from the image; and classifying the recognized inspection target as non-defective or defective by applying the cut-out region to the generated model.
  • The inspection program according to the present invention causes the control unit of an inspection device that inspects a raw material used for food, a food, or a container and the food contained in the container to execute: a step of generating a model for classifying the inspection target as non-defective or defective by performing unsupervised transfer learning on feature data extracted using a trained model adjusted by Bayesian optimization, with images of non-defective inspection targets as learning data; a step of recognizing the inspection target from an image of the inspection target and cutting out the recognized inspection target region from the image; and a step of classifying the recognized inspection target as non-defective or defective by applying the cut-out region to the generated model.
  • The imaging device for inspection according to the present invention captures, as a subject, an inspection target (a raw material used for food, a food, or a container and the food contained therein) that is in the manufacturing stage and is transported by a transport device installed in a food manufacturing factory.
  • The imaging device includes: an imaging unit that captures an image of the subject to generate an image; an illumination unit that emits light; and a housing having an opening, formed of a member that has a property of suppressing light reflection or has been processed to suppress light reflection and that has a light-blocking property. The housing is installed such that the opening is close to and faces the transfer surface of the transport device.
  • The imaging unit is disposed at a position inside the housing at which an image of the subject can be captured through the opening, and the illumination unit is operated by a DC power supply so that the flicker phenomenon does not occur.
  • the imaging unit may be arranged such that an optical axis of the lens passes near the center of the opening and is substantially orthogonal to the transport surface.
  • the illumination unit may be arranged at a position where emitted light does not directly enter a lens of the imaging unit.
  • the member may be subjected to a matte black anodizing process.
  • According to the inspection device, the inspection method, and the inspection program of the present invention, highly accurate food inspection can be realized. Further, the imaging device for inspection according to the present invention contributes to the realization of highly accurate food inspection.
  • FIG. 1 is a diagram illustrating an example of a configuration of the food inspection system 1.
  • FIG. 2 is a diagram illustrating an example of the configuration of the food imaging device 11.
  • FIG. 3 is a diagram illustrating an example of the configuration of the food imaging device 11.
  • FIG. 4 is a diagram illustrating an example of the configuration of the food imaging device 11.
  • FIG. 5 is a diagram illustrating an example of a flowchart relating to the object recognition processing (including the image preprocessing).
  • FIG. 6 is a diagram illustrating an example of a flowchart relating to a model generation process by unsupervised transfer learning.
  • FIG. 7 is a diagram illustrating an example of a flowchart relating to the image inflating process.
  • FIG. 8 is a diagram illustrating an example of a flowchart relating to the inspection processing.
  • FIG. 1 is a diagram illustrating an example of a configuration of a food inspection system 1 according to the present embodiment.
  • The food inspection system 1 inspects, while it is conveyed by a conveyance device CV (e.g., a belt conveyor) installed in a food manufacturing factory, an "ingredient used in food" (e.g., shrimp or leek used in gyoza), a "food" (e.g., gyoza or shumai), or a "container and the food contained therein" (e.g., a food tray in which food is stored) in the manufacturing process (hereinafter, the "inspection target"). "Food" includes not only final products but also foods that are still being manufactured.
  • The food inspection system 1 includes a food imaging device 11 (corresponding to the inspection imaging device according to the present invention), a food inspection server 12 (corresponding to the inspection device according to the present invention), and a network 13 (e.g., the Internet, an intranet, or a wired/wireless LAN).
  • In FIG. 1, the configuration of the food imaging device 11 is simplified: only the imaging unit 11b connected to the network 13 is depicted.
  • the number of the food imaging devices 11 is not limited to one and may be an arbitrary plural number.
  • FIGS. 2 to 4 are diagrams illustrating an example of the configuration of the food imaging device 11.
  • the food imaging device 11 is a device that images an inspection target as a subject for inspection.
  • the food imaging device 11 includes a housing 11a, an imaging unit 11b, a lighting unit 11c, and a power supply unit 11d.
  • The housing 11a is a frame-assembled structure having an opening, formed using four vertical members 11a1, four horizontal members (two horizontal members 11a2 and two horizontal members 11a3), and five rectangular plate members (one ceiling member 11a4, two wall members 11a5, and two wall members 11a6).
  • The opening corresponds to the rectangular region formed by two sides corresponding to the ends 11a51 of the wall members 11a5 (see FIG. 2) and two sides corresponding to the ends 11a61 of the wall members 11a6 (see FIG. 4) (more precisely, that rectangular region minus the cross sections of the vertical members 11a1).
  • the housing 11a is installed so as to straddle or cover a part of the transport device CV, as illustrated. Specifically, the housing 11a is installed such that the opening and the transfer surface CV1 of the transfer device CV approach and face each other.
  • the vertical member 11a1 is a bar-shaped member (frame).
  • An example of the material of the vertical member 11a1 is aluminum, but the material is not particularly limited to this.
  • the horizontal member 11a2 and the horizontal member 11a3 are rod-shaped members (frames). Aluminum is an example of a material for the horizontal members 11a2 and 11a3, but the material is not particularly limited thereto.
  • the horizontal member 11a2 and the horizontal member 11a3 are arranged in a direction substantially parallel to the transport direction of the transport device CV.
  • the top plate member (upper plate) 11a4, the wall member (side plate) 11a5, and the wall member (side plate) 11a6 are plate-shaped members (panels).
  • The top plate member 11a4, the wall member 11a5, and the wall member 11a6 are light-blocking members that have a property of suppressing light reflection or have been processed to suppress light reflection (for example, an aluminum flat plate subjected to a matte black anodizing (alumite) process).
  • the wall member 11a5 is disposed in a direction substantially perpendicular to the transport direction of the transport device CV (see FIG. 3).
  • the wall member 11a6 is disposed in a direction substantially parallel to the transport direction of the transport device CV (see FIG. 3).
  • The vertical lengths of the wall members 11a5 and 11a6 may be set based on the height H from the ground to the transport surface CV1 (see FIGS. 2 and 4), the height of the inspection target, the focal length of the lens forming the imaging unit 11b, the quality of the image captured by the imaging unit 11b, and the like. The vertical lengths of the wall member 11a5 and the wall member 11a6 need not be the same. For example, the vertical length of the wall member 11a6 may be set so that the distance L (see FIG. 4) between the transport surface CV1 and the end 11a61 is almost zero.
  • the imaging unit 11b is, for example, a camera such as a GigE camera or an IoT camera, and captures an inspection target to generate an image.
  • the imaging unit 11b is communicably connected to the food inspection server 12 via the network 13.
  • the imaging unit 11b transfers the generated image to the food inspection server 12 via the network 13.
  • the imaging unit 11b is arranged at a position inside the housing 11a where an image of the inspection target can be imaged through the opening.
  • The imaging unit 11b may be fixedly arranged on the ceiling member 11a4 such that the optical axis OA of the lens forming the imaging unit passes near the center of the opening and is substantially perpendicular to the transport surface CV1 (see FIGS. 2 and 4).
  • the illumination unit 11c is an illumination unit (for example, an LED illumination unit or the like) operated by a DC power supply, and emits light.
  • the lighting unit 11c is connected to a power supply unit 11d that supplies DC power, and operates with the DC power supplied from the power supply unit 11d. With this configuration, the occurrence of the flicker phenomenon can be prevented.
  • the illumination unit 11c is arranged at a position inside the housing 11a where light can be emitted to the inspection target.
  • The illumination unit 11c may be arranged at a position where the light it emits does not directly enter the lens forming the imaging unit 11b.
  • the lighting unit 11c may be fixedly arranged on the ceiling member 11a4.
  • The food inspection server 12 includes: a control unit 12a, such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit), that centrally controls the device; a communication interface unit 12b that communicably connects the device to the network 13 via a communication device such as a router and a wired communication line such as a dedicated line or a wireless communication line; a storage unit 12c that stores various databases, tables, and files; an input/output interface unit 12d connected to an input unit 12e and an output unit 12f; and the input unit 12e and the output unit 12f.
  • Each unit of the food inspection server 12 is communicably connected via an arbitrary communication path.
  • the communication interface unit 12b mediates communication between the food inspection server 12 and the network 13 (or a communication device such as a router). That is, the communication interface unit 12b has a function of communicating data with another terminal via a communication line.
  • the input / output interface unit 12d is connected to the input unit 12e and the output unit 12f.
  • As the output unit 12f, a monitor, a speaker, or a printer can be used.
  • As the input unit 12e, in addition to a keyboard, a mouse, and a microphone, a monitor that realizes a pointing-device function in cooperation with a mouse, a touch panel, or the like can be used.
  • The storage unit 12c is storage means; for example, a memory device such as a RAM or ROM, a fixed disk device such as a hard disk, a flexible disk, or an optical disk can be used.
  • The storage unit 12c may store a computer program that, in cooperation with an OS (Operating System), gives instructions to the CPU or GPU to perform various processes.
  • The storage unit 12c includes, for example, an image storage unit 12c1 that stores images captured by the food imaging device 11 and an image storage unit 12c2 that stores, for example, non-defective learning images and non-defective detection images.
  • The non-defective learning images are obtained by the object recognition process described later from images of non-defective inspection targets captured by the food imaging device 11, and are used as learning data in the transfer learning described later.
  • The non-defective learning images include learning images generated by the inflating process described later.
  • When the transfer learning unit 12a3 generates a model for classifying the inspection target into one of a plurality of classes including non-defective classes of a plurality of types and a defective class, the non-defective learning images may be based on images of each of the plurality of types of non-defective inspection targets.
  • The non-defective detection images are obtained by the object recognition process described later from images of non-defective inspection targets captured by the food imaging device 11, and are used as detection data in the transfer learning described later.
  • When the transfer learning unit 12a3 generates such a multi-class model, the non-defective detection images may likewise be based on images of each of the plurality of types of non-defective inspection targets.
  • the control unit 12a has an internal memory for storing a control program such as an OS, a program defining various processing procedures, and required data, and executes various information processing based on these programs.
  • the control unit 12a conceptually includes an object recognition unit 12a1, an object classification unit 12a2, a transfer learning unit 12a3, and an inflated image generation unit 12a4.
  • The object recognition unit 12a1 is object recognition means that recognizes an inspection target from an image of the inspection target and cuts out the recognized inspection target region from the image. Note that the learning images and detection images used in the model generation process described later, and the inspection target regions (production data) applied to the model in the inspection process described later, are obtained by the processing executed by the object recognition unit 12a1. The specific processing executed by the object recognition unit 12a1 will be described in "2. Processing".
  • The object classification unit 12a2 is object classification means that classifies the inspection target recognized by the object recognition unit 12a1 as non-defective or defective by applying the region cut out by the object recognition unit 12a1 to the model generated by the transfer learning unit 12a3.
  • When the transfer learning unit 12a3 generates a model for classifying the inspection target into one of a plurality of classes including non-defective classes of a plurality of types and a defective class, the object classification unit 12a2 may classify the inspection target recognized by the object recognition unit 12a1 into any one of those classes.
  • In this case, non-defective/defective inspections of a plurality of types of inspection targets (for example, foods such as gyoza and shumai and raw materials such as shrimp) can be performed collectively by one model, so the inspection can be implemented while suppressing an increase in cost.
  • the specific processing executed by the object classification unit 12a2 will be described in “2. Processing”.
  • The transfer learning unit 12a3 is transfer learning means that generates a model for classifying the inspection target as non-defective or defective by performing unsupervised transfer learning on feature amount data extracted using a trained model adjusted by Bayesian optimization, with images of non-defective inspection targets as learning data. Note that the transfer learning unit 12a3 may generate a model for classifying the inspection target into any one of a plurality of classes including non-defective classes of the respective types and a defective class, by performing transfer learning using images of each of a plurality of types of non-defective inspection targets as learning data. The specific processing executed by the transfer learning unit 12a3 will be described in "2. Processing".
  • the inflated image generation unit 12a4 performs predetermined processing on the learning image to inflate the learning image.
  • the specific processing executed by the inflated image generation unit 12a4 will be described in “2. Processing”.
  • FIG. 5 is a diagram illustrating an example of a flowchart relating to the object recognition processing (including the image preprocessing).
  • the object recognizing unit 12a1 adjusts the brightness of the image captured by the food imaging device 11 (step SA1).
  • In step SA1, for example, the luminance may be adjusted so that the image is in a normal state in which no halation or color clipping occurs. Step SA1 need not always be performed.
  • the object recognizing unit 12a1 executes threshold processing by binarization using a binary search algorithm on the single-channel array elements in the image processed at step SA1 (step SA2).
  • the object recognizing unit 12a1 executes a structural element expansion process on the image after the process in step SA2 in order to eliminate fine contours and noise (step SA3).
  • the object recognizing unit 12a1 extracts a contour from the image after the processing in step SA3 to detect an area (step SA4).
  • Based on the contours extracted in step SA4, the object recognition unit 12a1 extracts only regions having a certain size or more from the image processed in step SA3 (step SA5).
  • the object recognizing unit 12a1 extracts, from the region extracted in step SA5, a minimum rectangle in consideration of rotation, surrounding the given two-dimensional point set (step SA6).
  • The object recognition unit 12a1 cuts out the rotated rectangular region extracted in step SA6 from the region extracted in step SA5, and rotates and reshapes the cut-out rectangular region into a square (step SA7).
  • the object recognizing unit 12a1 flattens the histogram of the pixel values for the area after the processing in step SA7 (step SA8).
  • a high-quality learning image and a high-quality detection image used in a model generation process described below can be obtained from the image captured by the food imaging device 11.
  • a high-quality inspection target area (production data) adapted to a model in an inspection process described later can be obtained.
  • the object recognition process may be realized by an object recognition algorithm such as OpenCV (Open Source Computer Vision Library), SegNet, SSD (Single Shot Multibox Detector), and YOLO (You Only Look Once).
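  • The pipeline of steps SA2-SA8 can be sketched in simplified form. The following NumPy-only illustration mirrors the pipeline shape (thresholding, region extraction, histogram equalization); it uses an axis-aligned bounding box instead of the rotated minimum-area rectangle of steps SA6-SA7, which a real implementation would obtain via OpenCV routines such as cv2.findContours and cv2.minAreaRect. All function names and parameter values here are illustrative, not from the present application.

```python
import numpy as np

def binarize(img, thresh=128):
    """Step SA2 analogue: threshold a single-channel image to a binary mask."""
    return (img > thresh).astype(np.uint8)

def largest_region_bbox(mask, min_area=4):
    """Steps SA4-SA6 analogue (simplified): bounding box of the foreground,
    ignored if smaller than min_area. A real implementation would use
    contour extraction and a rotated minimum-area rectangle."""
    ys, xs = np.nonzero(mask)
    if ys.size < min_area:
        return None
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def equalize_hist(img):
    """Step SA8 analogue: histogram equalization of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return cdf[img].astype(np.uint8)

# toy image: dark background with one bright blob as the "inspection target"
img = np.zeros((32, 32), dtype=np.uint8)
img[10:20, 12:24] = 200
mask = binarize(img)
y0, y1, x0, x1 = largest_region_bbox(mask)
crop = equalize_hist(img[y0:y1, x0:x1])
print(crop.shape)  # (10, 12)
```

The crop produced this way plays the role of the learning image, detection image, or production data handed to the later model generation and inspection processes.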
  • FIG. 6 is a diagram illustrating an example of a flowchart relating to a model generation process by unsupervised transfer learning.
  • The transfer learning unit 12a3 applies a learned model (specifically, TensorFlow (registered trademark) InceptionV3, VGG16, VGG19, Xception, ResNet50, InceptionResNetV2, MobileNet, DenseNet, NASNet, MobileNetV2, DCGAN, Efficient GAN (reference: "Efficient GAN-Based Anomaly Detection", Houssam Zenati, Chuan Sheng Foo, Bruno Lecouat, Gaurav Manek, Vijay Ramaseshan Chandrasekhar, submitted on 17 Feb 2018), or the like) from which the final layer has been removed to a learning image, and extracts feature amounts (step SB1: feature extraction process).
  • The model from which the final layer has been removed is used so that the pretrained features can be reused for learning without performing the convolution training anew.
  • Step SB1 is executed for tens of thousands of learning images (obtained by the processing of the object recognition unit 12a1) in which non-defective inspection targets are photographed.
  • The transfer learning unit 12a3 trains, using the tens of thousands of 20588-dimensional feature amounts extracted in step SB1, an outlier (anomaly) detection algorithm (specifically, One-Class SVM (Support Vector Machine) or Isolation Forest) or a model that detects, or can be used to detect, outliers (anomalies) (specifically, AnoGAN, Efficient GAN, BiGAN, BigGAN-deep, or VQ-VAE of TensorFlow (registered trademark)) (step SB2: learning process). In step SB2, the transfer learning unit 12a3 obtains a decision boundary for classifying non-defective and defective products.
  • The transfer learning unit 12a3 then applies, to the trained algorithm or model, feature amounts extracted from a plurality of detection images prepared in advance (obtained by the processing of the object recognition unit 12a1) in which non-defective and defective inspection targets are photographed (step SB3: detection process).
  • The transfer learning unit 12a3 calculates the harmonic mean of precision and recall (the F value) from the correct-answer rate for non-defective products and the correct-answer rate for defective products based on the result obtained in step SB3, and searches for the optimal F value using Bayesian optimization (step SB4).
  • The transfer learning unit 12a3 also calculates the difference between the non-defective acceptance rate and the anomaly determination rate (the break-even point) from the correct-answer rates for non-defective and defective products based on the result obtained in step SB3, and determines an equilibrium point using Bayesian optimization so that the break-even point is minimized (step SB5).
  • It is then determined from the F value obtained in step SB4 and the equilibrium point obtained in step SB5 whether a highly accurate model has been completed; if so, the process proceeds to the following process. This determination may be made, for example, by focusing on how high the F value is or how low the equilibrium point is.
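  • The F value of step SB4 is the ordinary harmonic mean of precision and recall. A minimal sketch with invented counts (all numbers below are hypothetical, for illustration only):

```python
def f_value(precision, recall):
    """Harmonic mean of precision and recall (the F value of step SB4)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# hypothetical detection-process tallies on 100 good and some defective samples
tp = 95          # good items correctly classified as good
fp = 2           # defective items wrongly classified as good
fn = 5           # good items wrongly classified as defective
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f = f_value(precision, recall)
print(f)
```

In step SB4, the hyperparameters of the outlier detector would be tuned by Bayesian optimization so that this F value is maximized on the detection images.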
  • If a highly accurate model has not been completed, the inflated image generation unit 12a4 executes the inflating process on learning images randomly selected from the tens of thousands of learning images prepared in advance (obtained by the processing of the object recognition unit 12a1), increasing the number of learning images about tenfold (step SB6). The details of the inflating process will be described later.
  • After step SB6, the present model generation process is executed again from step SB1.
  • By performing learning in this way, a model that is robust against noise can be obtained.
  • Further, learning can be continuously strengthened by separately and continuously adding learning images.
  • A model for classifying an inspection target as non-defective or defective can thus be generated by the model generation process.
  • the number of learned models to be diverted may be one or more.
  • Since the deep learning layers of the transfer learning model can be changed according to the type of the inspection target, generating a model for each inspection target can be easily realized.
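  • As a rough sketch of steps SB1-SB2, the following fits a One-Class SVM decision boundary on good-only feature vectors. Random vectors stand in for the CNN-extracted feature amounts (the text uses e.g. a truncated InceptionV3 producing 20588-dimensional features), and scikit-learn's OneClassSVM stands in for whichever outlier detector is chosen; dimensions and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# stand-ins for feature vectors extracted from good-only learning images
# by a pretrained model with its final layer removed
good_features = rng.normal(loc=0.0, scale=1.0, size=(200, 32))

# step SB2 analogue: learn a decision boundary around the good-only data
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(good_features)

# a sample far from the training distribution should be flagged as -1
anomaly = np.full((1, 32), 8.0)
print(clf.predict(anomaly)[0])  # -1
```

With `nu=0.05`, roughly 95% of the good-only training samples fall inside the learned boundary; step SB4 would then tune such hyperparameters via Bayesian optimization against the detection images.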
  • FIG. 7 is a diagram illustrating an example of a flowchart relating to the image inflating process.
  • The inflated image generation unit 12a4 processes each learning image randomly selected in step SB6 so that the mean of the input pixel values (feature amounts) over the entire learning image becomes zero (step SC1: zero-centering process).
  • The inflated image generation unit 12a4 normalizes the input pixel values (feature amounts) by the standard deviation of the learning image (step SC2: normalization process).
  • The inflated image generation unit 12a4 executes, in an arbitrary order, one or more processes arbitrarily selected from rotating the learning image (rotation angle: any angle from 0 to 180 degrees), horizontal inversion (left-right flip), and vertical inversion (up-down flip) (step SC3).
  • The inflated image generation unit 12a4 executes zero-phase component analysis whitening (ZCA whitening) on the learning images obtained in step SC3 (step SC4).
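  • Steps SC1-SC4 can be sketched in NumPy as follows; the rotation option of step SC3 is reduced to flips for brevity, and all sizes and parameter values are illustrative assumptions rather than values from the present application.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img):
    """Steps SC1-SC3 analogue: zero-center, scale by the standard
    deviation, then apply randomly chosen flips (a stand-in for the
    rotation/flip menu described in the text)."""
    x = img.astype(np.float64)
    x -= x.mean()                      # SC1: zero-centering
    x /= max(x.std(), 1e-8)            # SC2: normalize by std
    if rng.random() < 0.5:
        x = x[:, ::-1]                 # horizontal (left-right) flip
    if rng.random() < 0.5:
        x = x[::-1, :]                 # vertical (up-down) flip
    return x

def zca_whiten(batch, eps=1e-5):
    """Step SC4 analogue: zero-phase (ZCA) whitening of flattened images."""
    flat = batch.reshape(batch.shape[0], -1)
    flat = flat - flat.mean(axis=0)
    cov = flat.T @ flat / flat.shape[0]
    u, s, _ = np.linalg.svd(cov)
    w = u @ np.diag(1.0 / np.sqrt(s + eps)) @ u.T
    return (flat @ w).reshape(batch.shape)

imgs = rng.integers(0, 256, size=(8, 4, 4)).astype(np.float64)
aug = np.stack([augment(im) for im in imgs])
white = zca_whiten(aug)
print(white.shape)  # (8, 4, 4)
```

Repeating the random flip/rotation choices on each selected image is what multiplies the learning set roughly tenfold in step SB6.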
  • FIG. 8 is a diagram illustrating an example of a flowchart relating to the inspection processing.
  • the object recognition unit 12a1 recognizes the inspection target from the image of the inspection target captured by the food imaging device 11, and cuts out the recognized inspection target region from the image (step SD1). Note that a specific example of the object recognition process executed in step SD1 is described in [2-1. Object recognition processing], and a description thereof will be omitted.
  • The object classification unit 12a2 classifies the inspection target recognized in step SD1 as non-defective or defective by applying the inspection target region cut out in step SD1 to the model generated by the transfer learning unit 12a3 (the one determined to have sufficient accuracy) (step SD2).
  • According to the present embodiment, everything up to the final product immediately before packaging in the final step of the food manufacturing process (for example, a food tray in which the food is stored) can be inspected with high accuracy. That is, a highly accurate inspection can be realized even for products that previously could not be inspection targets in food inspection.
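  • The two inspection steps (SD1: recognize and crop; SD2: classify against the learned boundary) can be wired together as in the following toy sketch. Every function here is a hypothetical stand-in: the real system uses the object recognition of FIG. 5, a truncated pretrained CNN as the feature extractor, and the decision boundary learned in FIG. 6.

```python
import numpy as np

def recognize_and_crop(image):
    """Step SD1 stand-in: return the foreground bounding-box crop."""
    ys, xs = np.nonzero(image > 128)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def extract_features(crop):
    """Stand-in feature extractor (a real one would be e.g. InceptionV3
    with its final layer removed)."""
    return np.array([crop.mean(), crop.std(), float(crop.size)])

def classify(features, boundary):
    """Step SD2 stand-in: threshold a score against a decision boundary."""
    score = features[0]            # toy score: mean brightness of the crop
    return "good" if score >= boundary else "defective"

image = np.zeros((16, 16))
image[4:12, 4:12] = 200            # bright blob as the inspection target
crop = recognize_and_crop(image)
print(classify(extract_features(crop), boundary=150))  # good
```
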
  • All or part of the processes described as being performed automatically can be performed manually, and all or part of the processes described as being performed manually can be performed automatically by a known method.
  • The inspection program is provided by being recorded on a non-transitory computer-readable recording medium containing programmed instructions for causing an information processing apparatus to execute the inspection method according to the present invention, and is read by the apparatus. That is, the storage unit 106, such as a ROM or an HDD, stores a computer program that, in cooperation with the OS, gives instructions to the CPU or GPU to perform various processes. The computer program is executed by being loaded into a RAM, and constitutes the control unit in cooperation with the CPU or GPU.
  • the inspection program according to the present invention may be stored in a non-transitory computer-readable recording medium, or may be configured as a program product.
  • The “recording medium” shall include any “portable physical medium” such as a memory card, USB memory, SD card, flexible disk, magneto-optical disk, ROM, EPROM, EEPROM, CD-ROM, MO, DVD, or Blu-ray (registered trademark) Disc.
  • The distribution and integration of the system are not limited to those illustrated; all or part of the system may be functionally or physically distributed or integrated in arbitrary units.
  • the present invention is extremely useful in many industrial fields, especially in the food manufacturing industry.

Abstract

The present invention addresses the problem of providing an inspection imaging device that can contribute to realizing highly accurate food inspections. According to the present embodiment, a food imaging device 11 comprises: (1) a housing (for example, a three-dimensional frame that has openings and is formed using vertical members, horizontal members, and rectangular members); (2) an imaging unit (for example, a GigE camera, an IoT camera, or the like) that captures an image of an inspection target and transfers the captured image to a food inspection server 12 over a network; (3) an illumination unit (for example, an LED lighting unit or the like) that operates on a direct-current power supply and emits light; and (4) a power supply unit that supplies direct current.
PCT/JP2019/030575 2018-08-15 2019-08-02 Inspection imaging device WO2020036083A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018152831 2018-08-15
JP2018-152831 2018-08-15

Publications (1)

Publication Number Publication Date
WO2020036083A1 true WO2020036083A1 (fr) 2020-02-20

Family

ID=69525461

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/030575 WO2020036083A1 (fr) 2019-08-02 Inspection imaging device

Country Status (2)

Country Link
TW (1) TW202024607A (fr)
WO (1) WO2020036083A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002174592A (ja) * 2000-12-07 2002-06-21 Kubota Corp 評価装置
JP2005241316A (ja) * 2004-02-25 2005-09-08 Nec Corp 金属腐食度測定方法および装置
JP2005257503A (ja) * 2004-03-12 2005-09-22 Yanmar Co Ltd 検査装置
JP2006300943A (ja) * 2005-04-18 2006-11-02 Khs Ag 検査装置
JP2014044157A (ja) * 2012-08-28 2014-03-13 Ricoh Co Ltd 光学センサ及び画像形成装置
JP2015094644A (ja) * 2013-11-12 2015-05-18 株式会社クボタ 卵の外観検査装置および方法
WO2017217035A1 (fr) * 2016-06-14 2017-12-21 パナソニックIpマネジメント株式会社 Élément de visualisation, système de mesure et procédé de mesure

Also Published As

Publication number Publication date
TW202024607A (zh) 2020-07-01

Similar Documents

Publication Publication Date Title
JP7391173B2 (ja) Food inspection assistance system, food inspection assistance device, and computer program
US11393082B2 (en) System and method for produce detection and classification
Saranya et al. Banana ripeness stage identification: a deep learning approach
US10520452B2 (en) Automated quality control and selection
US10157456B2 (en) Information processing apparatus, information processing method, and storage medium utilizing technique for detecting an abnormal state such as a scratch on a target
US11189058B2 (en) Image generating device, inspection apparatus, and learning device
KR20220095216A (ko) Sem 이미지에 대한 bbp 지원 결함 검출 흐름
JP6790160B2 (ja) Network of intelligent machines
WO2020036082A1 (fr) Inspection device, inspection method, and inspection program
US20220178841A1 (en) Apparatus for optimizing inspection of exterior of target object and method thereof
JPWO2019151393A1 (ja) Food inspection system, food inspection program, food inspection method, and food production method
JP6295798B2 (ja) Inspection method
JP2016181098A (ja) Region detection device and region detection method
Stavropoulos et al. A vision-based system for real-time defect detection: a rubber compound part case study
WO2018112218A1 (fr) Dual-energy microfocus radiographic imaging method for meat inspection
Papavasileiou et al. An optical system for identifying and classifying defects of metal parts
Ciora et al. Industrial applications of image processing
JP2021143884A (ja) Inspection device, inspection method, program, learning device, learning method, and trained dataset
Banus et al. A deep-learning based solution to automatically control closure and seal of pizza packages
WO2020036083A1 (fr) Inspection imaging device
Hasan et al. Framework for fish freshness detection and rotten fish removal in Bangladesh using mask R–CNN method with robotic arm and fisheye analysis
US11436716B2 (en) Electronic apparatus, analysis system and control method of electronic apparatus
WO2022005776A1 (fr) Methods and systems for non-destructive testing (NDT) with trained artificial-intelligence-based processing
JP7250301B2 (ja) Inspection device, inspection system, inspection method, inspection program, and recording medium
JP2018066586A (ja) Appearance inspection device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19850272

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19850272

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP