WO2020036082A1 - Inspection device, inspection method, and inspection program - Google Patents

Inspection device, inspection method, and inspection program

Info

Publication number
WO2020036082A1
WO2020036082A1 (PCT/JP2019/030574, JP 2019030574 W)
Authority
WO
WIPO (PCT)
Prior art keywords
inspection
defective
image
inspection target
food
Prior art date
Application number
PCT/JP2019/030574
Other languages
French (fr)
Japanese (ja)
Inventor
祥貴 下平
和之 森
Original Assignee
味の素株式会社 (Ajinomoto Co., Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 味の素株式会社 (Ajinomoto Co., Inc.)
Priority to JP2020537416A (national-phase publication JPWO2020036082A1)
Publication of WO2020036082A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • the present invention relates to an inspection apparatus, an inspection method, an inspection program, and an inspection imaging apparatus that captures an image used in the inspection apparatus, the inspection method, and the inspection program.
  • Non-Patent Document 1 introduces a case where deep learning is used in the inspection of raw materials of food.
  • Non-Patent Document 2 introduces the development of a technique for sorting raw materials used in factories by artificial intelligence.
  • An inspection device according to the present invention includes a control unit configured to inspect, as an inspection target, a raw material used for food, food, or a container and the food contained in the container. The control unit includes: a transfer learning means that generates a model for classifying the inspection target as non-defective or defective by performing unsupervised transfer learning using feature amount data extracted with a trained model adjusted by Bayesian optimization, with images of non-defective inspection targets as learning data; an object recognition means that recognizes the inspection target from an image of the inspection target and cuts out the recognized inspection target region from the image; and an object classification means that classifies the inspection target recognized by the object recognition means as non-defective or defective by applying the region cut out by the object recognition means to the model generated by the transfer learning means.
  • The learning data may be images of each of a plurality of types of non-defective inspection targets, and the object classification means may classify the recognized inspection target into any one of a plurality of classes including a non-defective class for each type.
  • In the inspection method according to the present invention, a control unit of an inspection device inspects, as an inspection target, a raw material used for food, food, or a container and the food contained in the container. The method includes: generating a model for classifying the inspection target as non-defective or defective by performing unsupervised transfer learning using feature amount data extracted with a trained model adjusted by Bayesian optimization, with images of non-defective inspection targets as learning data; recognizing the inspection target from an image of the inspection target and cutting out the recognized inspection target region from the image; and classifying the recognized inspection target as non-defective or defective by applying the cut-out region to the generated model.
  • The inspection program according to the present invention causes a control unit of an inspection device to inspect, as an inspection target, a raw material used for food, food, or a container and the food contained in the container, by executing: a step of generating a model for classifying the inspection target as non-defective or defective by performing unsupervised transfer learning using feature amount data extracted with a trained model adjusted by Bayesian optimization, with images of non-defective inspection targets as learning data; a step of recognizing the inspection target from an image of the inspection target and cutting out the recognized inspection target region from the image; and a step of classifying the recognized inspection target as non-defective or defective by applying the cut-out region to the generated model.
  • The imaging device for inspection according to the present invention captures, as a subject, an inspection target (a raw material used for food, food, or a container and the food contained in the container) that is in a manufacturing stage and is transported by a transport device installed in a food manufacturing factory. The imaging device includes: an imaging unit that captures an image of the subject to generate an image; an illumination unit that emits light; and a housing having an opening, formed of a light-blocking member that has a property of suppressing light reflection or has been processed to suppress light reflection. The housing is installed such that the opening is close to and faces the transfer surface of the transfer device, the imaging unit is disposed at a position inside the housing at which an image of the subject can be captured through the opening, and the illumination unit is operated by a DC power supply so that a flicker phenomenon does not occur.
  • the imaging unit may be arranged such that an optical axis of the lens passes near the center of the opening and is substantially orthogonal to the transport surface.
  • the illumination unit may be arranged at a position where emitted light does not directly enter a lens of the imaging unit.
  • the member may be subjected to a matte black anodizing process.
  • According to the inspection device, the inspection method, and the inspection program of the present invention, highly accurate food inspection can be realized. Further, the imaging device for inspection according to the present invention contributes to the realization of highly accurate food inspection.
  • FIG. 1 is a diagram illustrating an example of a configuration of the food inspection system 1.
  • FIG. 2 is a diagram illustrating an example of the configuration of the food imaging device 11.
  • FIG. 3 is a diagram illustrating an example of the configuration of the food imaging device 11.
  • FIG. 4 is a diagram illustrating an example of the configuration of the food imaging device 11.
  • FIG. 5 is a diagram illustrating an example of a flowchart relating to the object recognition processing (including the image preprocessing).
  • FIG. 6 is a diagram illustrating an example of a flowchart relating to a model generation process by unsupervised transfer learning.
  • FIG. 7 is a diagram illustrating an example of a flowchart relating to the image inflating process.
  • FIG. 8 is a diagram illustrating an example of a flowchart relating to the inspection processing.
  • FIG. 1 is a diagram illustrating an example of a configuration of a food inspection system 1 according to the present embodiment.
  • The food inspection system 1 is a system that inspects, as an inspection target, "a raw material used in food in a manufacturing process (e.g., shrimp or leek used in gyoza)", "food (e.g., gyoza or shumai)", or "a container and the food contained therein (e.g., a food tray in which food is stored)", conveyed by a conveyance device CV (e.g., a belt conveyor) installed in a food manufacturing factory. "Food" includes not only final products but also foods in the course of manufacture.
  • The food inspection system 1 includes a food imaging device 11 (corresponding to the inspection imaging device according to the present invention), a food inspection server 12 (corresponding to the inspection device according to the present invention), and a network 13 such as the Internet, an intranet, or a LAN (wired or wireless) that connects them communicably.
  • In FIG. 1, the configuration of the food imaging device 11 is simplified, and only the imaging unit 11b connected to the network 13 is depicted.
  • the number of the food imaging devices 11 is not limited to one and may be an arbitrary plural number.
  • FIGS. 2 to 4 are diagrams illustrating an example of the configuration of the food imaging device 11.
  • the food imaging device 11 is a device that images an inspection target as a subject for inspection.
  • the food imaging device 11 includes a housing 11a, an imaging unit 11b, a lighting unit 11c, and a power supply unit 11d.
  • The housing 11a includes four vertical members 11a1, four horizontal members (two horizontal members 11a2 and two horizontal members 11a3), and five plate members (one ceiling member 11a4, two wall members 11a5, and two wall members 11a6), and is a frame assembly structure having an opening.
  • The opening corresponds to a rectangular region formed, for example, by two sides corresponding to the ends 11a51 of the wall members 11a5 (see FIG. 2) and two sides corresponding to the ends 11a61 of the wall members 11a6 (see FIG. 4) (more specifically, a region obtained by removing the cross sections of the vertical members 11a1 from that rectangular region).
  • the housing 11a is installed so as to straddle or cover a part of the transport device CV, as illustrated. Specifically, the housing 11a is installed such that the opening and the transfer surface CV1 of the transfer device CV approach and face each other.
  • the vertical member 11a1 is a bar-shaped member (frame).
  • An example of the material of the vertical member 11a1 is aluminum, but the material is not particularly limited to this.
  • the horizontal member 11a2 and the horizontal member 11a3 are rod-shaped members (frames). Aluminum is an example of a material for the horizontal members 11a2 and 11a3, but the material is not particularly limited thereto.
  • the horizontal member 11a2 and the horizontal member 11a3 are arranged in a direction substantially parallel to the transport direction of the transport device CV.
  • the top plate member (upper plate) 11a4, the wall member (side plate) 11a5, and the wall member (side plate) 11a6 are plate-shaped members (panels).
  • The top plate member 11a4, the wall member 11a5, and the wall member 11a6 are light-blocking members that have a property of suppressing light reflection or have been processed to suppress light reflection (for example, an aluminum flat plate subjected to a matte black alumite (anodizing) process).
  • the wall member 11a5 is disposed in a direction substantially perpendicular to the transport direction of the transport device CV (see FIG. 3).
  • the wall member 11a6 is disposed in a direction substantially parallel to the transport direction of the transport device CV (see FIG. 3).
  • The vertical lengths of the wall members 11a5 and 11a6 may be set based on the height H from the ground to the transport surface CV1 (see FIGS. 2 and 4), the height of the inspection target, the focal length of the lens forming the imaging unit 11b, the quality of the image captured by the imaging unit 11b, and the like. Note that the vertical lengths of the wall member 11a5 and the wall member 11a6 need not be the same. For example, the vertical length of the wall member 11a6 may be set so that the distance L (see FIG. 4) between the transport surface CV1 and the end 11a61 is almost zero.
  • the imaging unit 11b is, for example, a camera such as a GigE camera or an IoT camera, and captures an inspection target to generate an image.
  • the imaging unit 11b is communicably connected to the food inspection server 12 via the network 13.
  • the imaging unit 11b transfers the generated image to the food inspection server 12 via the network 13.
  • the imaging unit 11b is arranged at a position inside the housing 11a where an image of the inspection target can be imaged through the opening.
  • The imaging unit 11b may be fixedly arranged on the ceiling member 11a4 such that the optical axis OA of the lens forming the imaging unit passes near the center of the opening and is substantially perpendicular to the transport surface CV1 (see FIGS. 2 and 4).
  • the illumination unit 11c is an illumination unit (for example, an LED illumination unit or the like) operated by a DC power supply, and emits light.
  • the lighting unit 11c is connected to a power supply unit 11d that supplies DC power, and operates with the DC power supplied from the power supply unit 11d. With this configuration, the occurrence of the flicker phenomenon can be prevented.
  • the illumination unit 11c is arranged at a position inside the housing 11a where light can be emitted to the inspection target.
  • the illumination unit 11c may be arranged at a position where the light emitted by the irradiation unit does not directly enter the lens forming the imaging unit 11b.
  • the lighting unit 11c may be fixedly arranged on the ceiling member 11a4.
  • The food inspection server 12 includes: a control unit 12a, such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit), that centrally controls the device; a communication interface unit 12b that communicably connects the device to the network 13 via a communication device such as a router and a wired or wireless communication line such as a dedicated line; a storage unit 12c that stores various databases, tables, and files; an input/output interface unit 12d; an input unit 12e; and an output unit 12f.
  • Each unit of the food inspection server 12 is communicably connected via an arbitrary communication path.
  • the communication interface unit 12b mediates communication between the food inspection server 12 and the network 13 (or a communication device such as a router). That is, the communication interface unit 12b has a function of communicating data with another terminal via a communication line.
  • the input / output interface unit 12d is connected to the input unit 12e and the output unit 12f.
  • As the output unit 12f, a speaker or a printer can be used.
  • As the input unit 12e, in addition to a keyboard, a mouse, or a microphone, a monitor that realizes a pointing device function in cooperation with a mouse, a touch panel, or the like can be used.
  • The storage unit 12c is a storage means; for example, a memory device such as a RAM or ROM, a fixed disk device such as a hard disk, a flexible disk, or an optical disk can be used.
  • The storage unit 12c may store a computer program that, in cooperation with the OS (Operating System), gives instructions to the CPU or the GPU to perform various processes.
  • The storage unit 12c includes, for example, an image storage unit 12c1 that stores images captured by the food imaging device 11 and the like, and an image storage unit 12c2 that stores, for example, non-defective learning images, non-defective detection images, and defective detection images.
  • The non-defective learning image is obtained by the object recognition process described later from an image of a non-defective inspection target captured by the food imaging device 11, and is used as learning data in the transfer learning described later.
  • The non-defective learning images include learning images generated by the inflating process described later.
  • When the transfer learning unit 12a3 generates a model for classifying the inspection target into one of a plurality of classes including non-defective classes of a plurality of types and a defective class, the non-defective learning images may be based on images of each of the plurality of types of non-defective inspection targets.
  • The non-defective detection image is obtained by the object recognition process described later from an image of a non-defective inspection target captured by the food imaging device 11, and is used as detection data in the transfer learning described later.
  • When the transfer learning unit 12a3 generates a model for classifying the inspection target into one of a plurality of classes including non-defective classes of a plurality of types and a defective class, the non-defective detection images may be based on images of each of the plurality of types of non-defective inspection targets.
  • The defective detection image is obtained by the object recognition process described later from an image of a defective inspection target captured by the food imaging device 11, and is used as detection data in the transfer learning described later.
  • When the transfer learning unit 12a3 generates a model for classifying the inspection target into one of a plurality of classes including non-defective classes of a plurality of types and a defective class, the defective detection images may be based on images of defective inspection targets corresponding to the plurality of types.
  • the control unit 12a has an internal memory for storing a control program such as an OS, a program defining various processing procedures, and required data, and executes various information processing based on these programs.
  • the control unit 12a conceptually includes an object recognition unit 12a1, an object classification unit 12a2, a transfer learning unit 12a3, and an inflated image generation unit 12a4.
  • The object recognition unit 12a1 is an object recognition means that recognizes the inspection target from an image of the inspection target and cuts out the recognized inspection target region from the image. Note that the learning images and detection images used in the model generation process described later, and the inspection target region (production data) applied to the model in the inspection process described later, are obtained by the processing executed by the object recognition unit 12a1. The specific processing executed by the object recognition unit 12a1 will be described in "2. Processing".
  • The object classification unit 12a2 is an object classification means that classifies the inspection target recognized by the object recognition unit 12a1 as non-defective or defective by applying the region cut out by the object recognition unit 12a1 to the model generated by the transfer learning unit 12a3.
  • When the transfer learning unit 12a3 generates a model for classifying the inspection target into one of a plurality of classes including non-defective classes of a plurality of types and a defective class, the object classification unit 12a2 may classify the inspection target recognized by the object recognition unit 12a1 into any one of the plurality of classes including the non-defective classes of the respective types and the defective class.
  • In this way, non-defective/defective inspections of a plurality of types of inspection targets (for example, foods such as gyoza and shumai, and raw materials such as shrimp) can be collectively performed by one model, so that the inspection can be implemented while suppressing an increase in cost.
  • the specific processing executed by the object classification unit 12a2 will be described in “2. Processing”.
  • The transfer learning unit 12a3 is a transfer learning means that generates a model for classifying the inspection target as non-defective or defective by performing unsupervised transfer learning using feature amount data extracted with a trained model adjusted by Bayesian optimization, with images of non-defective inspection targets as learning data. Note that the transfer learning unit 12a3 may generate a model for classifying the inspection target into any one of a plurality of classes including non-defective classes of a plurality of types and a defective class, by performing the transfer learning using images of each of the plurality of types of non-defective inspection targets as learning data. The specific processing executed by the transfer learning unit 12a3 will be described in "2. Processing".
  • the inflated image generation unit 12a4 performs predetermined processing on the learning image to inflate the learning image.
  • the specific processing executed by the inflated image generation unit 12a4 will be described in “2. Processing”.
  • FIG. 5 is a diagram illustrating an example of a flowchart relating to the object recognition processing (including the image preprocessing).
  • the object recognizing unit 12a1 adjusts the brightness of the image captured by the food imaging device 11 (step SA1).
  • In step SA1, for example, the luminance may be adjusted so that the image is in a general state in which no halation or color clipping occurs. Step SA1 need not necessarily be performed.
  • the object recognizing unit 12a1 executes threshold processing by binarization using a binary search algorithm on the single-channel array elements in the image processed at step SA1 (step SA2).
  • the object recognizing unit 12a1 executes a structural element expansion process on the image after the process in step SA2 in order to eliminate fine contours and noise (step SA3).
  • the object recognizing unit 12a1 extracts a contour from the image after the processing in step SA3 to detect an area (step SA4).
  • Next, based on the contours extracted in step SA4, the object recognition unit 12a1 extracts only regions having a certain size or more from the image processed in step SA3 (step SA5).
  • the object recognizing unit 12a1 extracts, from the region extracted in step SA5, a minimum rectangle in consideration of rotation, surrounding the given two-dimensional point set (step SA6).
  • Next, the object recognition unit 12a1 cuts out the rotated rectangular region extracted in step SA6 from the region extracted in step SA5, and rotates the cut-out rectangular region to correct its orientation (step SA7).
  • the object recognizing unit 12a1 flattens the histogram of the pixel values for the area after the processing in step SA7 (step SA8).
  • a high-quality learning image and a high-quality detection image used in a model generation process described below can be obtained from the image captured by the food imaging device 11.
  • a high-quality inspection target area (production data) adapted to a model in an inspection process described later can be obtained.
  • the object recognition process may be realized by an object recognition algorithm such as OpenCV (Open Source Computer Vision Library), SegNet, SSD (Single Shot Multibox Detector), and YOLO (You Only Look Once).
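The cropping steps of the object recognition process (binarization, dilation, region extraction, and histogram equalization in steps SA2 to SA8) can be sketched as follows. This is a minimal NumPy-only illustration of the idea (the patent itself suggests libraries such as OpenCV); the threshold, structuring-element size, and minimum-area filter are arbitrary values for the example, not figures from the patent.

```python
import numpy as np

def binarize(img, thresh=128):
    # Step SA2 (simplified): threshold a single-channel image into a binary mask
    return (img > thresh).astype(np.uint8)

def dilate(mask, k=1):
    # Step SA3 (simplified): dilation by OR-ing shifted copies of the mask,
    # which closes fine gaps and suppresses small contours
    out = mask.copy()
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def largest_bbox(mask, min_area=10):
    # Steps SA4-SA6 (simplified): bounding box of the foreground,
    # kept only if it exceeds a minimum area
    ys, xs = np.nonzero(mask)
    if ys.size < min_area:
        return None
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def equalize(img):
    # Step SA8: histogram equalization of the pixel values in the cropped region
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    lo = cdf[cdf > 0].min()
    cdf = (cdf - lo) * 255 // max(cdf[-1] - lo, 1)
    return cdf[img].astype(np.uint8)

# Synthetic 64x64 image with a bright 20x20 "inspection target"
img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 10:30] = 200
mask = dilate(binarize(img))
y0, y1, x0, x1 = largest_bbox(mask)
crop = equalize(img[y0:y1, x0:x1])
print(crop.shape)  # (22, 22)
```

A real implementation would use rotated minimum-area rectangles (as in step SA6) rather than an axis-aligned bounding box; the axis-aligned version is used here only to keep the sketch short.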
  • FIG. 6 is a diagram illustrating an example of a flowchart relating to a model generation process by unsupervised transfer learning.
  • First, the transfer learning unit 12a3 inputs the learning images to a trained model from which the final layer has been removed (specifically, TensorFlow (registered trademark) InceptionV3, VGG16, VGG19, Xception, ResNet50, InceptionResNetV2, MobileNet, DenseNet, NASNet, MobileNetV2, DCGAN, or Efficient GAN (reference: "Efficient GAN-Based Anomaly Detection", Houssam Zenati, Chuan Sheng Foo, Bruno Lecouat, Gaurav Manek, Vijay Ramaseshan Chandrasekhar, submitted on 17 Feb 2018, last revised 1 May)) and extracts feature amounts (step SB1: feature amount extraction process). The model from which the final layer has been removed is used so that the extracted feature amounts can be used for learning without performing further convolution processing.
  • Step SB1 is executed for tens of thousands of learning images (obtained by the processing of the object recognizing unit 12a1) in which non-defective inspection targets are photographed.
  • Next, the transfer learning unit 12a3 inputs the tens of thousands of 20588-dimensional feature amounts extracted in step SB1 to an outlier (anomaly) detection algorithm (specifically, One-Class SVM (Support Vector Machine) or Isolation Forest) or a model that detects outliers (anomalies) or is usable for outlier (anomaly) detection (specifically, AnoGAN, Efficient GAN, BiGAN, BigGAN-deep, or VQ-VAE of TensorFlow (registered trademark)) (step SB2: learning process). In step SB2, the transfer learning unit 12a3 obtains a decision boundary for classifying non-defective products and defective products.
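As an illustration of the learning process in step SB2, the following sketch fits a One-Class SVM on synthetic "non-defective" feature vectors and checks that an obviously anomalous vector falls outside the learned decision boundary. The feature dimensionality, sample count, and SVM parameters here are arbitrary example values, not figures from the patent.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in for step SB1 output: feature vectors of non-defective items,
# clustered around a common mean (dimensionality chosen for the example)
good_features = rng.normal(loc=0.0, scale=0.5, size=(500, 16))

# Step SB2 (sketch): learn a decision boundary around the normal class only
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
clf.fit(good_features)

# A feature vector far from the training distribution should be classified
# as an outlier (-1); most in-distribution vectors as inliers (+1)
anomaly = np.full((1, 16), 8.0)
print(clf.predict(anomaly))  # expected: [-1]
```

Because only non-defective examples are needed for fitting, this matches the unsupervised setting described above: no labeled defective images are required to obtain the decision boundary.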
  • Next, the transfer learning unit 12a3 applies a plurality of detection images prepared in advance (obtained by the processing of the object recognition unit 12a1), in which non-defective inspection targets and defective inspection targets are photographed, to the model obtained in step SB2 (step SB3: detection process).
  • Next, the transfer learning unit 12a3 calculates the harmonic mean (F value) of precision and recall from the correct answer rate for non-defective products and the correct answer rate for defective products based on the results obtained in step SB3, and searches for the optimum F value using Bayesian optimization (step SB4).
  • Next, the transfer learning unit 12a3 calculates the difference (breakeven point) between the non-defective product inclusion rate and the abnormality determination rate from the correct answer rates for non-defective and defective products based on the results obtained in step SB3, and determines an equilibrium point using Bayesian optimization so that the breakeven point is minimized (step SB5).
  • Next, it is determined from the F value obtained in step SB4 and the equilibrium point obtained in step SB5 whether a sufficiently accurate model has been completed; if so, the process proceeds to the following steps. This determination may be made, for example, by focusing on how high the F value is or how low the equilibrium point is.
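The F value evaluated in step SB4 is the standard harmonic mean of precision and recall. A minimal computation follows; the confusion counts are made-up example numbers for illustration, not figures from the patent.

```python
def f_value(precision: float, recall: float) -> float:
    """Harmonic mean (F value) of precision and recall, as used in step SB4."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for a non-defective/defective classifier
tp, fp, fn = 90, 10, 5          # true positives, false positives, false negatives
precision = tp / (tp + fp)      # 0.9
recall = tp / (tp + fn)         # ~0.947
print(round(f_value(precision, recall), 3))  # 0.923
```

In the procedure above, Bayesian optimization would repeatedly adjust the model's hyperparameters, re-evaluate this F value on the detection images, and keep the setting that maximizes it.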
  • The inflated image generation unit 12a4 executes the inflating process on learning images randomly selected from the tens of thousands of learning images prepared in advance (obtained by the processing of the object recognition unit 12a1), increasing the number of learning images about tenfold (step SB6). The details of the inflating process will be described later.
  • After step SB6, the present model generation process is executed again from step SB1.
  • In this way, learning can be performed so that the model becomes robust against noise.
  • Moreover, reinforcement of the learning can be performed continuously by separately and continuously adding learning images.
  • A model for classifying the inspection target as non-defective or defective can be generated by the model generation process.
  • The number of trained models to be diverted may be one or more. Further, since the deep learning layers of the transfer learning model can be changed according to the type of the inspection target, generating a model suited to the inspection target can be easily realized.
  • FIG. 7 is a diagram illustrating an example of a flowchart relating to the image inflating process.
  • First, the inflated image generation unit 12a4 executes, on the learning images randomly selected in step SB6, processing such that the average of the input pixel values (feature amounts) over the entire set of learning images becomes zero (step SC1).
  • the inflated image generation unit 12a4 normalizes the input pixel value (feature amount) by the standard deviation of the learning image (step SC2: normalization processing).
  • Next, the inflated image generation unit 12a4 executes, in an arbitrary order, one or more processes arbitrarily selected from rotation of the learning image (rotation angle: any angle from 0 to 180 degrees), horizontal inversion (left-right flip), and vertical inversion (top-bottom flip) (step SC3).
  • the inflated image generation unit 12a4 executes zero-phase whitening (ZCA whitening) on the learning image obtained in step SC3 (step SC4).
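Steps SC1 to SC4 can be sketched with NumPy as below. This is a generic illustration of centering, normalization, flip-based augmentation, and ZCA (zero-phase) whitening; the batch size, image size, and epsilon are arbitrary values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in batch of learning "images", flattened to vectors (n samples x d pixels)
X = rng.normal(loc=10.0, scale=3.0, size=(500, 16))

# Step SC1: make the mean of the input pixel values zero over the whole set
X = X - X.mean(axis=0)

# Step SC2: normalize by the standard deviation of the learning images
X = X / X.std(axis=0)

# Step SC3 (sketch, on one 4x4 image): rotation / horizontal / vertical flips
img = X[0].reshape(4, 4)
augmented = [np.rot90(img), np.fliplr(img), np.flipud(img)]

# Step SC4: ZCA (zero-phase) whitening of the batch
eps = 1e-5
cov = X.T @ X / X.shape[0]
eigvals, eigvecs = np.linalg.eigh(cov)
W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
X_zca = X @ W

# After whitening, the covariance is approximately the identity matrix
print(np.allclose(X_zca.T @ X_zca / X.shape[0], np.eye(16), atol=1e-2))  # True
```

Unlike PCA whitening, the ZCA transform multiplies back by the eigenvector basis, so the whitened data stays maximally close to the original images, which is why it is commonly preferred for image augmentation pipelines like this one.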
  • FIG. 8 is a diagram illustrating an example of a flowchart relating to the inspection processing.
  • the object recognition unit 12a1 recognizes the inspection target from the image of the inspection target captured by the food imaging device 11, and cuts out the recognized inspection target region from the image (step SD1). Note that a specific example of the object recognition process executed in step SD1 is described in [2-1. Object recognition processing], and a description thereof will be omitted.
  • Next, the object classification unit 12a2 classifies the inspection target recognized in step SD1 as non-defective or defective by applying the inspection target region cut out in step SD1 to the model generated by the transfer learning unit 12a3 (the one determined to have sufficient accuracy) (step SD2).
  • In this way, everything up to the final product immediately before packaging in the final step of the food manufacturing process (for example, a food tray in which the food is stored) can be inspected with high accuracy. That is, it is possible to realize a highly accurate inspection of products that could not be inspection targets in conventional food inspection.
  • All or part of the processes described as being performed automatically can be performed manually, and all or part of the processes described as being performed manually can be performed automatically by a known method.
  • The inspection program is recorded on a non-transitory computer-readable recording medium containing programmed instructions for causing an information processing apparatus to execute the inspection method according to the present invention, and is mechanically read by the inspection device. That is, the storage unit 106, such as a ROM or an HDD, stores a computer program that, in cooperation with the OS, gives instructions to the CPU or the GPU to perform various processes.
  • the computer program is executed by being loaded into the RAM, and configures a control unit in cooperation with the CPU or the GPU.
  • The inspection program according to the present invention may be stored in a non-transitory computer-readable recording medium, or may be configured as a program product.
  • Here, the "recording medium" includes any "portable physical medium" such as a memory card, a USB memory, an SD card, a flexible disk, a magneto-optical disk, a ROM, an EPROM, an EEPROM, a CD-ROM, an MO, a DVD, or a Blu-ray (registered trademark) Disc.
  • The distribution and integration of the system are not limited to those illustrated; all or part of the system may be functionally or physically distributed or integrated in arbitrary units.
  • The present invention is extremely useful in many industrial fields, especially in the food manufacturing industry.

Landscapes

  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Analytical Chemistry (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Processing (AREA)

Abstract

The present invention addresses the problem of providing an inspection device, an inspection method, and an inspection program that make it possible to achieve highly accurate food inspection. According to the present embodiment, a food inspection server 12: (1) performs object recognition, that is, recognizes an inspection target (a raw material used in food, a food, or a container and the food contained therein) from an image of the inspection target captured by a food imaging device 11, and cuts out the region of the recognized inspection target from the image; and (2) classifies the recognized inspection target as a non-defective product or a defective product by applying the cut-out region to a model generated by the food inspection server 12 (a model for classifying inspection targets as non-defective or defective, generated by unsupervised transfer learning that uses images of non-defective inspection targets as learning data and is based on feature data extracted using a trained model tuned by Bayesian optimization).

Description

Inspection device, inspection method, and inspection program
The present invention relates to an inspection device, an inspection method, an inspection program, and an inspection imaging device that captures the images used by them.
Since the occurrence of defective products in the food manufacturing industry is a very serious risk that leads to a loss of customer trust, various food inspection techniques have been developed (for example, Patent Documents 1 to 6, Non-Patent Document 1, and Non-Patent Document 2). In particular, now that artificial intelligence (AI) technology can be expected to catch up with, and even surpass, human abilities, food inspection techniques utilizing AI have been developed rapidly in recent years. For example, Non-Patent Document 1 introduces a case where deep learning is used in the inspection of food raw materials, and Non-Patent Document 2 introduces the development of a technique for sorting raw materials used in factories by artificial intelligence.
In the food manufacturing industry, if inspection techniques are developed that match or exceed the accuracy of human visual inspection, benefits such as quality assurance, labor cost reduction, and productivity improvement (for example, yield improvement) can be expected.
Patent Document 1: JP 2017-211259 A; Patent Document 2: JP 2016-109495 A; Patent Document 3: International Publication No. WO 2017/159620; Patent Document 4: JP 2017-49974 A; Patent Document 5: JP 2014-145639 A; Patent Document 6: JP 2002-205019 A
However, no existing food inspection technique utilizing AI can be said to be sufficiently accurate; as a result, the parts that the technology cannot handle still have to be covered by human visual inspection. One of the reasons for this situation is that it is not always easy to obtain high-quality images of foods and the like while they are moving along a production line in a food manufacturing factory.
The present invention has been made in view of the above problems, and an object thereof is to provide an inspection device, an inspection method, and an inspection program capable of realizing highly accurate food inspection. Another object is to provide an inspection imaging device that can contribute to the realization of highly accurate food inspection.
In order to solve the problems described above and achieve the objects, an inspection device according to the present invention is an inspection device including a control unit, the inspection target being a raw material used in food, a food, or a container and the food contained therein, wherein the control unit includes: transfer learning means for generating a model for classifying an inspection target as a non-defective product or a defective product by performing unsupervised transfer learning on feature data extracted using a trained model tuned by Bayesian optimization, with images of non-defective inspection targets as learning data; object recognition means for recognizing an inspection target from an image of the inspection target and cutting out the region of the recognized inspection target from the image; and object classification means for classifying the inspection target recognized by the object recognition means as a non-defective product or a defective product by applying the region cut out by the object recognition means to the model generated by the transfer learning means.
In the inspection device according to the present invention, the learning data may be images of each of a plurality of types of non-defective inspection targets, and the object classification means may classify the recognized inspection target into any one of a plurality of classes consisting of each type of non-defective product and the defective product.
An inspection method according to the present invention is an inspection method in which the control unit of an inspection device including a control unit, the inspection target being a raw material used in food, a food, or a container and the food contained therein, executes: a step of generating a model for classifying an inspection target as a non-defective product or a defective product by performing unsupervised transfer learning on feature data extracted using a trained model tuned by Bayesian optimization, with images of non-defective inspection targets as learning data; a step of recognizing an inspection target from an image of the inspection target and cutting out the region of the recognized inspection target from the image; and a step of classifying the recognized inspection target as a non-defective product or a defective product by applying the cut-out region to the generated model.
An inspection program according to the present invention causes the control unit of an inspection device including a control unit, the inspection target being a raw material used in food, a food, or a container and the food contained therein, to execute: a step of generating a model for classifying an inspection target as a non-defective product or a defective product by performing unsupervised transfer learning on feature data extracted using a trained model tuned by Bayesian optimization, with images of non-defective inspection targets as learning data; a step of recognizing an inspection target from an image of the inspection target and cutting out the region of the recognized inspection target from the image; and a step of classifying the recognized inspection target as a non-defective product or a defective product by applying the cut-out region to the generated model.
An inspection imaging device according to the present invention is an inspection imaging device that images, as a subject for inspection, a raw material used in food, a food, or a container and the food contained therein, which is in the manufacturing stage and is being conveyed by a conveying device installed in a food manufacturing factory, the inspection imaging device including: an imaging unit that images the subject and generates an image; an illumination unit that emits light; and a housing having an opening, formed of a light-blocking member that has a property of suppressing light reflection or has been processed to suppress light reflection, wherein the housing is installed such that the opening and the conveying surface of the conveying device closely face each other, the imaging unit is arranged at a position inside the housing from which the subject can be imaged through the opening, and the illumination unit operates on a DC power supply so as not to cause a flicker phenomenon, and is arranged at a position inside the housing from which it can irradiate the subject with light.
In the inspection imaging device according to the present invention, the imaging unit may be arranged such that the optical axis of its lens passes near the center of the opening and is substantially orthogonal to the conveying surface. The illumination unit may be arranged at a position such that the light it emits does not directly enter the lens of the imaging unit. The member may be finished with matte black anodizing.
According to the inspection device, the inspection method, and the inspection program of the present invention, highly accurate food inspection can be realized. Further, according to the inspection imaging device of the present invention, it is possible to contribute to the realization of highly accurate food inspection.
FIG. 1 is a diagram illustrating an example of the configuration of the food inspection system 1.
FIG. 2 is a diagram illustrating an example of the configuration of the food imaging device 11.
FIG. 3 is a diagram illustrating an example of the configuration of the food imaging device 11.
FIG. 4 is a diagram illustrating an example of the configuration of the food imaging device 11.
FIG. 5 is a diagram illustrating an example of a flowchart relating to the object recognition processing (including image preprocessing).
FIG. 6 is a diagram illustrating an example of a flowchart relating to the model generation processing by unsupervised transfer learning.
FIG. 7 is a diagram illustrating an example of a flowchart relating to the image inflating processing.
FIG. 8 is a diagram illustrating an example of a flowchart relating to the inspection processing.
Hereinafter, embodiments of the inspection device, the inspection method, the inspection program, and the inspection imaging device according to the present invention will be described in detail with reference to the drawings. The present invention is not limited by these embodiments.
[1. Configuration]
The configuration of the food inspection system 1 according to the present embodiment will be described in detail with reference to FIGS. 1 to 4.
FIG. 1 is a diagram illustrating an example of the configuration of the food inspection system 1 according to the present embodiment. The food inspection system 1 is a system that inspects a raw material used in food (for example, shrimp or Chinese chives used in gyoza), a food (for example, gyoza or shumai), or a container and the food contained therein (for example, a food tray in which food is stored) (hereinafter referred to as the "inspection target") that is in the manufacturing process and is being conveyed by a conveying device CV (for example, a belt conveyor) installed in a food manufacturing factory. Note that "food" is not limited to final products and includes products in the course of manufacture.
The food inspection system 1 includes a food imaging device 11 (corresponding to the inspection imaging device according to the present invention), a food inspection server 12 (corresponding to the inspection device according to the present invention), and a network 13 such as the Internet, an intranet, or a LAN (wired or wireless). In FIG. 1, since the configuration of the food inspection server 12 is mainly illustrated, the configuration of the food imaging device 11 is simplified, showing only the imaging unit 11b connected to the network 13. In the food inspection system 1, the number of food imaging devices 11 is not limited to one and may be any plural number.
FIGS. 2 to 4 are diagrams illustrating an example of the configuration of the food imaging device 11. The food imaging device 11 is a device that images the inspection target as a subject for inspection. The food imaging device 11 includes a housing 11a, an imaging unit 11b, an illumination unit 11c, and a power supply unit 11d.
The housing 11a is a framework structure having an opening, formed using four vertical members 11a1, four horizontal members (two horizontal members 11a2 and two horizontal members 11a3), and five rectangular members (one ceiling member 11a4, two wall members 11a5, and two wall members 11a6). In the illustrated housing, the opening corresponds, for example, to the rectangular region formed by two sides corresponding to the edges 11a51 of the wall members 11a5 (see FIG. 2) and two sides corresponding to the edges 11a61 of the wall members 11a6 (see FIG. 4) (specifically, the region obtained by removing the cross sections of the vertical members 11a1 from that rectangular region). As illustrated, the housing 11a is installed so as to straddle or cover a part of the conveying device CV. Specifically, the housing 11a is installed such that the opening and the conveying surface CV1 of the conveying device CV closely face each other.
The vertical members 11a1 are rod-shaped members (frames). An example of the material of the vertical members 11a1 is aluminum, but the material is not particularly limited to this. The horizontal members 11a2 and 11a3 are also rod-shaped members (frames); aluminum is an example of their material, but the material is not particularly limited. The horizontal members 11a2 and 11a3 are arranged in a direction substantially parallel to the conveying direction of the conveying device CV.
The ceiling member (top plate) 11a4, the wall members (side plates) 11a5, and the wall members (side plates) 11a6 are plate-shaped members (panels). They are light-blocking members that have a property of suppressing light reflection or have been processed to suppress light reflection (for example, aluminum flat plates finished with matte black anodizing). The wall members 11a5 are arranged in a direction substantially orthogonal to the conveying direction of the conveying device CV (see FIG. 3). The wall members 11a6 are arranged in a direction substantially parallel to the conveying direction of the conveying device CV (see FIG. 3). The vertical lengths of the wall members 11a5 and 11a6 may be set in consideration of the height H from the ground to the conveying surface CV1 (see FIGS. 2 and 4), the height of the inspection target, the focal length of the lens of the imaging unit 11b, the quality of images captured by the imaging unit 11b, and the like. The vertical lengths of the wall members 11a5 and 11a6 need not be the same. For example, the vertical length of the wall members 11a6 may be set such that the gap L (see FIG. 4) between the conveying surface CV1 and the edge 11a61 is almost zero.
The imaging unit 11b is a camera such as a GigE camera or an IoT camera, and images the inspection target to generate an image. The imaging unit 11b is communicably connected to the food inspection server 12 via the network 13, and transfers the generated image to the food inspection server 12 via the network 13. The imaging unit 11b is arranged at a position inside the housing 11a from which the inspection target can be imaged through the opening. For example, the imaging unit 11b may be fixed to the ceiling member 11a4 such that the optical axis OA of its lens passes near the center of the opening and is substantially orthogonal to the conveying surface CV1 (see FIGS. 2 and 4).
The illumination unit 11c is an illumination unit (for example, an LED illumination unit) that operates on DC power and emits light. The illumination unit 11c is connected to a power supply unit 11d that supplies DC power, and operates on the DC power supplied from the power supply unit 11d. This configuration prevents the flicker phenomenon from occurring. The illumination unit 11c is arranged at a position inside the housing 11a from which it can irradiate the inspection target with light. For example, the illumination unit 11c may be arranged at a position such that the light it emits does not directly enter the lens of the imaging unit 11b. The illumination unit 11c may be fixed to the ceiling member 11a4.
Returning to FIG. 1, the food inspection server 12 includes: a control unit 12a, such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit), that centrally controls the device; a communication interface unit 12b that communicably connects the device to the network 13 via a communication device such as a router and a wired or wireless communication line such as a dedicated line; a storage unit 12c that stores various databases, tables, files, and the like; an input/output interface unit 12d connected to an input unit 12e and an output unit 12f; the input unit 12e; and the output unit 12f. The units of the food inspection server 12 are communicably connected via an arbitrary communication path.
The communication interface unit 12b mediates communication between the food inspection server 12 and the network 13 (or a communication device such as a router). That is, the communication interface unit 12b has a function of communicating data with other terminals via a communication line.
The input unit 12e and the output unit 12f are connected to the input/output interface unit 12d. As the output unit 12f, a speaker or a printer can be used in addition to a monitor. As the input unit 12e, a keyboard, a mouse, or a microphone can be used, as well as a monitor that realizes a pointing device function in cooperation with a mouse, or a touch panel.
The storage unit 12c is storage means. As the storage unit 12c, for example, a memory device such as a RAM or a ROM, a fixed disk device such as a hard disk, a flexible disk, or an optical disk can be used. The storage unit 12c may store a computer program that, in cooperation with the OS (Operating System), gives instructions to the CPU or the GPU to perform various processes.
The storage unit 12c includes an image storage unit 12c1 that stores, for example, images captured by the food imaging device 11, and an image storage unit 12c2 that stores, for example, learning images of non-defective products, detection images of non-defective products, and detection images of defective products.
The learning images of non-defective products are obtained by the object recognition processing described later, based on images of non-defective inspection targets captured by the food imaging device 11, and are used as learning data in the transfer learning described later. The learning images of non-defective products also include learning images generated by the inflating processing described later. When the transfer learning unit 12a3 generates a model for classifying an inspection target into any one of a plurality of classes consisting of a plurality of types of non-defective products and the defective product, the learning images of non-defective products may be based on images of each of the plurality of types of non-defective inspection targets.
The detection images of non-defective products are obtained by the object recognition processing described later, based on images of non-defective inspection targets captured by the food imaging device 11, and are used as detection data in the transfer learning described later. When the transfer learning unit 12a3 generates a model for classifying an inspection target into any one of a plurality of classes consisting of a plurality of types of non-defective products and the defective product, the detection images of non-defective products may be based on images of each of the plurality of types of non-defective inspection targets.
The detection images of defective products are obtained by the object recognition processing described later, based on images of defective inspection targets captured by the food imaging device 11, and are used as detection data in the transfer learning described later. When the transfer learning unit 12a3 generates a model for classifying an inspection target into any one of a plurality of classes consisting of a plurality of types of non-defective products and the defective product, the detection images of defective products may be based on images of each of a plurality of types of defective inspection targets.
The control unit 12a has an internal memory for storing control programs such as an OS, programs defining various processing procedures, and required data, and executes various information processing based on these programs. In terms of functional concept, the control unit 12a includes an object recognition unit 12a1, an object classification unit 12a2, a transfer learning unit 12a3, and an inflated image generation unit 12a4.
The object recognition unit 12a1 is object recognition means that recognizes an inspection target from an image of the inspection target and cuts out the region of the recognized inspection target from the image. The learning images and detection images used in the model generation processing described later, and the inspection target regions (production data) applied to the model in the inspection processing described later, are obtained by the processing executed by the object recognition unit 12a1. The specific processing executed by the object recognition unit 12a1 is described in "2. Processing".
The object classification unit 12a2 is object classification means that classifies the inspection target recognized by the object recognition unit 12a1 as a non-defective product or a defective product by applying the region cut out by the object recognition unit 12a1 to the model generated by the transfer learning unit 12a3. When the transfer learning unit 12a3 has generated a model for classifying an inspection target into any one of a plurality of classes consisting of a plurality of types of non-defective products and the defective product, the object classification unit 12a2 may classify the inspection target recognized by the object recognition unit 12a1 into any one of those classes. This makes it possible to inspect a plurality of types of inspection targets (for example, foods such as gyoza and shumai, and raw materials such as shrimp) for non-defective/defective products collectively with a single model. In particular, since it is only necessary to distribute a plurality of food imaging devices 11 across the production lines for individual foods, or to install them at any of a plurality of steps in the production line of a specific food, the inspection can be performed while suppressing an increase in cost. The specific processing executed by the object classification unit 12a2 is described in "2. Processing".
The transfer learning unit 12a3 is transfer learning means that generates a model for classifying an inspection target as non-defective or defective by performing unsupervised transfer learning on feature data extracted with a trained model tuned by Bayesian optimization, using images of non-defective inspection targets as learning data. The transfer learning unit 12a3 may also perform transfer learning using images of each of a plurality of types of non-defective inspection targets as learning data, thereby generating a model for classifying the inspection target into one of a plurality of classes consisting of the non-defective types plus defective products. The specific processing executed by the transfer learning unit 12a3 is described in "2. Processing".
The inflated image generation unit 12a4 executes predetermined processing on the learning images to inflate (augment) the learning-image set. The specific processing executed by the inflated image generation unit 12a4 is described in "2. Processing".
[2. Processing]
An example of the various processes executed in the food inspection system 1 will be described with reference to FIGS. 5 to 8.
[2-1. Object recognition processing]
FIG. 5 shows an example of a flowchart of the object recognition processing (including image preprocessing).
First, the object recognition unit 12a1 adjusts the luminance of the image captured by the food imaging device 11 (step SA1). In step SA1, the luminance may be adjusted, for example, so that the image reaches a normal state free of halation or blown-out colors. Step SA1 may also be skipped.
Next, the object recognition unit 12a1 executes threshold processing, binarizing the single-channel array elements of the image processed in step SA1 with a binary search algorithm (step SA2).
Next, the object recognition unit 12a1 executes dilation with a structuring element on the image processed in step SA2 in order to eliminate fine contours, noise, and the like (step SA3).
Next, the object recognition unit 12a1 extracts contours from the image processed in step SA3 in order to detect regions (step SA4).
Next, based on the contours extracted in step SA4, the object recognition unit 12a1 takes from the image processed in step SA3 only regions of at least a certain size (step SA5).
Next, the object recognition unit 12a1 extracts, from each region taken in step SA5, the minimum rectangle, allowing for rotation, that encloses the given set of two-dimensional points (step SA6).
Next, the object recognition unit 12a1 takes the rotated rectangular region extracted in step SA6 out of the region taken in step SA5, and rotates and scales that rectangular region into a square (step SA7).
Next, the object recognition unit 12a1 equalizes the histogram of pixel values in the region processed in step SA7 (step SA8).
Through this object recognition processing, high-quality learning images and detection images to be used in the model generation processing described later, as well as high-quality inspection-target regions (production data) to be fed to the model in the inspection processing described later, can be obtained from the images captured by the food imaging device 11. This object recognition processing may be realized with, for example, OpenCV (Open Source Computer Vision Library), or with an object recognition algorithm such as SegNet, SSD (Single Shot Multibox Detector), or YOLO (You Only Look Once).
[2-2. Model generation processing by unsupervised transfer learning]
FIG. 6 shows an example of a flowchart of the model generation processing by unsupervised transfer learning.
First, the transfer learning unit 12a3 extracts a 25088-dimensional feature (feature vector) from each learning image prepared in advance showing a non-defective inspection target (obtained by the processing of the object recognition unit 12a1), using a trained model from which its final layer (specifically, the rank-4 tensor of the convolutional layer) has been removed (step SB1: feature extraction processing). Specifically, the trained model may be a TensorFlow (registered trademark) model of InceptionV3, VGG16, VGG19, Xception, ResNet50, InceptionResNetV2, MobileNet, DenseNet, NASNet, MobileNetV2, DCGAN, Efficient GAN (see "Efficient GAN-Based Anomaly Detection", Houssam Zenati, Chuan Sheng Foo, Bruno Lecouat, Gaurav Manek, Vijay Ramaseshan Chandrasekhar, submitted on 17 Feb 2018, last revised 1 May 2019), BiGAN (see "Adversarial Feature Learning", Jeff Donahue, Philipp Krahenbuhl, Trevor Darrell, submitted on 31 May 2016, last revised 3 Apr 2017, or "Adversarially Learned Inference", Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, Aaron Courville, submitted on 2 Jun 2016, last revised 21 Feb 2017), BigGAN-deep (see "Large Scale GAN Training for High Fidelity Natural Image Synthesis", Andrew Brock, Jeff Donahue, Karen Simonyan, submitted on 28 Sep 2018, last revised 25 Feb 2019), or VQ-VAE-2 (see "Generating Diverse High-Fidelity Images with VQ-VAE-2", Ali Razavi, Aaron van den Oord, Oriol Vinyals, submitted on 2 Jun 2019). The model with its final layer removed is used so that learning proceeds without the final convolution processing. Step SB1 is executed on tens of thousands of learning images prepared in advance showing non-defective inspection targets (obtained by the processing of the object recognition unit 12a1).
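A minimal sketch of step SB1, assuming VGG16 (one of the listed models) as the transferred trained model: with `include_top=False`, Keras drops the classifier head ("final layer"), so the convolutional output is a 7×7×512 tensor, which flattens to the 25088-dimensional feature vector mentioned above.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

def build_extractor(weights="imagenet"):
    # include_top=False removes the classifier head, leaving the
    # convolutional output (a rank-4 tensor of shape (n, 7, 7, 512))
    return VGG16(weights=weights, include_top=False, input_shape=(224, 224, 3))

def extract_features(model, images):
    """images: (n, 224, 224, 3) array -> (n, 25088) feature matrix."""
    x = preprocess_input(np.asarray(images, dtype="float32"))
    feats = model.predict(x, verbose=0)    # (n, 7, 7, 512)
    return feats.reshape(len(images), -1)  # (n, 25088) = 7 * 7 * 512
```

In practice the ImageNet weights would be loaded (`weights="imagenet"`); the extractor is then run over the tens of thousands of non-defective learning images.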
Next, the transfer learning unit 12a3 trains on the tens of thousands of 25088-dimensional features extracted in step SB1 with an outlier (anomaly) detection algorithm (specifically, One-Class SVM (Support Vector Machine) or Isolation Forest), or with a model intended for, or usable for, outlier (anomaly) detection (specifically, a TensorFlow (registered trademark) model of AnoGAN, Efficient GAN, BiGAN, BigGAN-deep, or VQ-VAE-2) (step SB2: learning processing). In step SB2, the transfer learning unit 12a3 obtains a decision boundary separating non-defective and defective products.
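A minimal sketch of step SB2, assuming scikit-learn's `OneClassSVM` as the outlier detection algorithm. It is fitted on feature vectors of non-defective products only, so the learned decision boundary encloses the good class; the 32-dimensional random features below stand in for the 25088-dimensional vectors of step SB1, and the `nu`/`gamma` values are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Stand-in for features extracted from non-defective learning images
good_features = rng.normal(0.0, 1.0, size=(500, 32))

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
clf.fit(good_features)  # learns the decision boundary around the good class

# predict: +1 = inside the boundary (non-defective),
#          -1 = outside the boundary (defective candidate)
far_away = np.full((1, 32), 8.0)
label = clf.predict(far_away)[0]
```

A feature vector far from the good-product cluster falls outside the boundary and is flagged as an outlier.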
Next, using a plurality of detection images prepared in advance showing non-defective inspection targets and a plurality of detection images prepared in advance showing defective inspection targets (both obtained by the processing of the object recognition unit 12a1), the transfer learning unit 12a3 discriminates non-defective from defective with the model trained in step SB2 (step SB3: detection processing).
Next, the transfer learning unit 12a3 calculates the harmonic mean of precision and recall (F-measure) from the rate of correct answers for non-defective products and the rate of correct answers for defective products based on the results obtained in step SB3, and searches for the optimum F-measure using Bayesian optimization (step SB4).
* Precision = proportion of data predicted to be non-defective that is actually non-defective = TP ÷ (TP + FP)
* Recall = proportion of actually non-defective data that was predicted to be non-defective = TP ÷ (TP + FN)
* F-measure (harmonic mean) = (2 × precision × recall) ÷ (precision + recall)
* TP: True Positive
* FP: False Positive
* FN: False Negative
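The three formulas of step SB4, written out directly (the TP/FP/FN counts are assumed to come from the step SB3 detection results):

```python
def precision(tp, fp):
    # proportion of predicted-good data that is actually good
    return tp / (tp + fp)

def recall(tp, fn):
    # proportion of actually-good data that was predicted good
    return tp / (tp + fn)

def f_measure(tp, fp, fn):
    # harmonic mean of precision and recall
    p, r = precision(tp, fp), recall(tp, fn)
    return (2 * p * r) / (p + r)
```

For example, with TP=8, FP=2, FN=2, both precision and recall are 0.8, so the F-measure is 0.8.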
Next, the transfer learning unit 12a3 calculates the difference between the defective-inclusion rate and the abnormality determination rate (the break-even point) from the rate of correct answers for non-defective products and the rate of correct answers for defective products based on the results obtained in step SB3, and uses Bayesian optimization to find the equilibrium point at which the break-even point is minimized (step SB5).
* Defective-inclusion rate = proportion of data predicted to be defective that is actually defective = TN ÷ (FN + TN)
* Abnormality determination rate = proportion of actually defective data that was predicted to be defective = TN ÷ (FP + TN)
* FP: False Positive
* FN: False Negative
* TN: True Negative
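The two rates of step SB5 and their difference, written out directly (the "difference" is taken here as an absolute difference, an assumption; the counts again come from the step SB3 results):

```python
def defective_inclusion_rate(tn, fn):
    # proportion of predicted-defective data that is actually defective
    return tn / (fn + tn)

def abnormality_determination_rate(tn, fp):
    # proportion of actually defective data that was predicted defective
    return tn / (fp + tn)

def break_even_point(tn, fn, fp):
    # the gap to be minimized by the Bayesian optimization of step SB5
    return abs(defective_inclusion_rate(tn, fn)
               - abnormality_determination_rate(tn, fp))
```

For example, with TN=9, FN=1, FP=3, the rates are 0.9 and 0.75, giving a break-even point of 0.15.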
Then, from the F-measure obtained in step SB4 and the equilibrium point obtained in step SB5, it is determined whether a highly accurate model has been completed. If the accuracy of the model is judged insufficient, the processing proceeds to the following steps. This determination may be made, for example, by looking at how high the F-measure is and how low the equilibrium point is.
Next, the inflated image generation unit 12a4 executes inflation (augmentation) processing on learning images randomly selected from the tens of thousands of learning images prepared in advance (obtained by the processing of the object recognition unit 12a1), increasing the number of learning images roughly tenfold (step SB6). The inflation processing is detailed later.
Then, based on the learning images inflated in step SB6, this model generation processing is executed again from step SB1. This trains the model to be robust against noise. Moreover, by separately and continuously adding new learning images, training can be reinforced on an ongoing basis.
Through this model generation processing, a model (transfer learning model) for classifying an inspection target as non-defective or defective can be generated. One trained model may be repurposed, or several. With transfer learning, the deep learning layers of the transfer learning model can be changed according to the type of inspection target, so that, for example, generating one model that inspects gyoza trays and a separate model that inspects shumai trays can be realized easily.
[2-3. Image inflation processing (data augmentation)]
FIG. 7 shows an example of a flowchart of the image inflation processing.
First, the inflated image generation unit 12a4 processes the learning images randomly selected in step SB6 so that the mean of the input pixel values (features) over the entire learning-image set becomes zero (step SC1).
Next, the inflated image generation unit 12a4 normalizes the input pixel values (features) by the standard deviation of the learning images (step SC2: normalization processing).
Next, the inflated image generation unit 12a4 executes, in arbitrary order, one or more processes arbitrarily selected from rotation (rotation angle: any angle from 0 to 180 degrees), horizontal flipping (left-right reversal), and vertical flipping (up-down reversal) on the learning images (step SC3).
Next, the inflated image generation unit 12a4 executes zero-phase component analysis whitening (ZCA whitening) on the learning images obtained in step SC3 (step SC4).
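Steps SC1 to SC4 map almost one-to-one onto options of Keras's `ImageDataGenerator`. The following is an illustrative sketch under that assumption, not the patent's implementation:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    featurewise_center=True,             # SC1: zero mean over the whole set
    featurewise_std_normalization=True,  # SC2: normalize by the set's std dev
    rotation_range=180,                  # SC3: rotation by 0-180 degrees
    horizontal_flip=True,                # SC3: left-right reversal
    vertical_flip=True,                  # SC3: up-down reversal
    zca_whitening=True,                  # SC4: zero-phase (ZCA) whitening
)
# The featurewise statistics and ZCA matrix must be computed on the
# learning-image set first:
# augmenter.fit(training_images)  # training_images: (n, h, w, c) array
# then batches of inflated images are drawn with augmenter.flow(...)
```

Note that Keras applies the centering/whitening statistics at generation time, so `fit` must be called before `flow`.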
[2-4. Inspection processing]
FIG. 8 shows an example of a flowchart of the inspection processing.
First, the object recognition unit 12a1 recognizes the inspection target from the image of the inspection target captured by the food imaging device 11, and cuts the recognized inspection-target region out of the image (step SD1). A specific example of the object recognition processing executed in step SD1 has already been given in [2-1. Object recognition processing] and is omitted here.
Next, the object classification unit 12a2 classifies the inspection target recognized in step SD1 as non-defective or defective by applying the inspection-target region cut out in step SD1 to the model generated by the transfer learning unit 12a3 (one judged to have sufficient accuracy) (step SD2).
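The two inspection steps SD1 and SD2 can be sketched as a short pipeline. All names here (`crop_fn`, `feature_fn`, `clf`) are hypothetical stand-ins for the object recognition, feature extraction, and fitted outlier-detection model described above:

```python
def inspect(image, crop_fn, feature_fn, clf):
    """Classify every inspection target found in one captured image."""
    results = []
    for region in crop_fn(image):          # SD1: recognize and cut out targets
        feature = feature_fn(region)       # embed with the transferred model
        label = clf.predict([feature])[0]  # +1 inside boundary, -1 outlier
        results.append("non-defective" if label == 1 else "defective")
    return results
```

Each cut-out region is embedded and tested against the learned decision boundary, yielding one non-defective/defective verdict per target.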
The embodiment of the present invention has been described above. According to this embodiment, everything from the raw materials of a food in the initial step of the food manufacturing process up to the final product immediately before packaging in the last step (for example, a food tray with the food placed in it) can be inspected with high accuracy. That is, highly accurate inspection can be realized even for products that conventionally could not be made inspection targets in food inspection.
[3. Other embodiments]
Besides the embodiment described above, the present invention may be carried out in various different embodiments within the scope of the technical idea described in the claims.
For example, among the processes described in the embodiment, all or part of the processes described as being performed automatically may instead be performed manually, and all or part of the processes described as being performed manually may instead be performed automatically by known methods.
Also, for each device, the illustrated components are functional and conceptual, and need not necessarily be physically configured as illustrated.
All or any part of the processing functions of the food inspection server 12, in particular the processing functions performed by the control unit 12a, may be realized by a CPU or GPU and a program interpreted and executed by that CPU or GPU. The program, containing programmed instructions for causing an information processing apparatus to execute the inspection method according to the present invention, is recorded on a non-transitory computer-readable recording medium and is mechanically read by the food inspection server 12 as needed. That is, a storage unit 106 such as a ROM or HDD records a computer program that, in cooperation with the OS, gives instructions to the CPU or GPU to perform the various processes. This computer program is executed by being loaded into RAM and, in cooperation with the CPU or GPU, constitutes the control unit.
The inspection program according to the present invention may also be stored in a non-transitory computer-readable recording medium, and may also be configured as a program product. Here, the "recording medium" includes any "portable physical medium" such as a memory card, USB memory, SD card, flexible disk, magneto-optical disk, ROM, EPROM, EEPROM, CD-ROM, MO, DVD, or Blu-ray (registered trademark) Disc.
The specific form of distribution and integration of the system is not limited to the illustrated one; all or part of it may be functionally or physically distributed or integrated in arbitrary units.
The present invention is extremely useful in many industrial fields, particularly in the food manufacturing industry.
Reference Signs List
1 food inspection system
  11 food imaging device
    11a housing
      11a1 vertical member
      11a2 horizontal member
      11a3 horizontal member
      11a4 ceiling member
      11a5 wall member
      11a6 wall member
    11b imaging unit
    11c illumination unit
    11d power supply unit
  12 food inspection server
    12a control unit
      12a1 object recognition unit
      12a2 object classification unit
      12a3 transfer learning unit
      12a4 inflated image generation unit
    12b communication interface unit
    12c storage unit
      12c1 image storage unit
      12c2 image storage unit
    12d input/output interface unit
    12e input unit
    12f output unit
  13 network

Claims (4)

1.  An inspection device comprising a control unit, the inspection targets being raw materials used for food, foods, or containers and the foods contained in them, wherein
    the control unit comprises:
    transfer learning means for generating a model for classifying an inspection target as non-defective or defective by performing unsupervised transfer learning on feature data extracted with a trained model tuned by Bayesian optimization, using images of non-defective inspection targets as learning data;
    object recognition means for recognizing the inspection target from an image of the inspection target and cutting the recognized inspection-target region out of the image; and
    object classification means for classifying the inspection target recognized by the object recognition means as non-defective or defective by applying the region cut out by the object recognition means to the model generated by the transfer learning means.
2.  The inspection device according to claim 1, wherein
    the learning data is images of each of a plurality of types of non-defective inspection targets, and
    the object classification means classifies the recognized inspection target into any one of a plurality of classes consisting of the non-defective types and defective products.
3.  An inspection method in which the control unit of an inspection device comprising a control unit, the inspection targets being raw materials used for food, foods, or containers and the foods contained in them, executes:
    a step of generating a model for classifying an inspection target as non-defective or defective by performing unsupervised transfer learning on feature data extracted with a trained model tuned by Bayesian optimization, using images of non-defective inspection targets as learning data;
    a step of recognizing the inspection target from an image of the inspection target and cutting the recognized inspection-target region out of the image; and
    a step of classifying the recognized inspection target as non-defective or defective by applying the cut-out region to the generated model.
4.  An inspection program for causing the control unit of an inspection device comprising a control unit, the inspection targets being raw materials used for food, foods, or containers and the foods contained in them, to execute:
    a step of generating a model for classifying an inspection target as non-defective or defective by performing unsupervised transfer learning on feature data extracted with a trained model tuned by Bayesian optimization, using images of non-defective inspection targets as learning data;
    a step of recognizing the inspection target from an image of the inspection target and cutting the recognized inspection-target region out of the image; and
    a step of classifying the recognized inspection target as non-defective or defective by applying the cut-out region to the generated model.
PCT/JP2019/030574 2018-08-15 2019-08-02 Inspection device, inspection method, and inspection program WO2020036082A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020537416A JPWO2020036082A1 (en) 2018-08-15 2019-08-02 Inspection equipment, inspection methods and inspection programs

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018152830 2018-08-15
JP2018-152830 2018-08-15

Publications (1)

Publication Number Publication Date
WO2020036082A1 true WO2020036082A1 (en) 2020-02-20

Family

ID=69525458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/030574 WO2020036082A1 (en) 2018-08-15 2019-08-02 Inspection device, inspection method, and inspection program

Country Status (3)

Country Link
JP (1) JPWO2020036082A1 (en)
TW (1) TW202022358A (en)
WO (1) WO2020036082A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017049974A (en) * 2015-09-04 2017-03-09 キヤノン株式会社 Discriminator generator, quality determine method, and program
WO2017159620A1 (en) * 2016-03-14 2017-09-21 オムロン株式会社 Expandability retention device
JP2017211259A (en) * 2016-05-25 2017-11-30 株式会社シーイーシー Inspection device, inspection method and program
JP2018005640A (en) * 2016-07-04 2018-01-11 タカノ株式会社 Classifying unit generation device, image inspection device, and program
JP2018005773A (en) * 2016-07-07 2018-01-11 株式会社リコー Abnormality determination device and abnormality determination method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021029677A (en) * 2019-08-26 2021-03-01 株式会社 ゼンショーホールディングス Placement state management device, placement state management method, and placement state management program
JP7401995B2 (en) 2019-08-26 2023-12-20 株式会社 ゼンショーホールディングス Placement status management device, placement status management method, and placement status management program
JP2021156653A (en) * 2020-03-26 2021-10-07 株式会社奥村組 Device, method, and program for specifying sewer damage
JP7356941B2 (en) 2020-03-26 2023-10-05 株式会社奥村組 Pipe damage identification device, pipe damage identification method, and pipe damage identification program

Also Published As

Publication number Publication date
TW202022358A (en) 2020-06-16
JPWO2020036082A1 (en) 2021-08-26

Similar Documents

Publication Publication Date Title
US11393082B2 (en) System and method for produce detection and classification
JP7391173B2 (en) Food inspection aid system, food inspection aid device, and computer program
US10157456B2 (en) Information processing apparatus, information processing method, and storage medium utilizing technique for detecting an abnormal state such as a scratch on a target
US11189058B2 (en) Image generating device, inspection apparatus, and learning device
JP6528040B2 (en) Intelligent machine network
JP2010145135A (en) X-ray inspection apparatus
WO2020036082A1 (en) Inspection device, inspection method, and inspection program
KR20220095216A (en) BBP-assisted defect detection flow for SEM images
JPWO2019151393A1 (en) Food inspection system, food inspection program, food inspection method and food production method
US20220178841A1 (en) Apparatus for optimizing inspection of exterior of target object and method thereof
JP2011089920A (en) X-ray inspection method and x-ray inspection apparatus using the same
Hashim et al. Automated vision inspection of timber surface defect: A review
US20230023641A1 (en) Automated detection of chemical component of moving object
US20230145715A1 (en) Inspection device for tofu products, manufacturing system for tofu products, inspection method for tofu products, and program
Ciora et al. Industrial applications of image processing
Papavasileiou et al. An optical system for identifying and classifying defects of metal parts
JP2015203586A (en) inspection method
JP2021143884A (en) Inspection device, inspection method, program, learning device, learning method, and trained dataset
WO2018112218A1 (en) Dual-energy microfocus radiographic imaging method for meat inspection
Gudavalli et al. Real-time biomass feedstock particle quality detection using image analysis and machine vision
WO2020036083A1 (en) Inspection imaging device
WO2022065110A1 (en) X-ray inspection device and x-ray inspection method
US11436716B2 (en) Electronic apparatus, analysis system and control method of electronic apparatus
JP2021149653A (en) Image processing device and image processing system
JP7240780B1 (en) inspection equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19849146; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2020537416; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19849146; Country of ref document: EP; Kind code of ref document: A1)