WO2019176354A1 - Learning data collection method, learning data collection device, abnormality detection system, and computer program - Google Patents


Info

Publication number
WO2019176354A1
WO2019176354A1 (PCT/JP2019/003397)
Authority
WO
WIPO (PCT)
Prior art keywords
data
learning
inspection object
difference
feature extraction
Prior art date
Application number
PCT/JP2019/003397
Other languages
French (fr)
Japanese (ja)
Inventor
勝司 三浦
柿井 俊昭
康 野村
佳孝 上
イヴァンダー
Original Assignee
住友電気工業株式会社 (Sumitomo Electric Industries, Ltd.)
Priority date
Filing date
Publication date
Application filed by 住友電気工業株式会社 (Sumitomo Electric Industries, Ltd.)
Publication of WO2019176354A1 publication Critical patent/WO2019176354A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis

Definitions

  • the present disclosure relates to a learning data collection method, a learning data collection device, an anomaly detection system, and a computer program.
  • This application claims priority based on Japanese Patent Application No. 2018-45727 filed on Mar. 13, 2018, and incorporates all the content described in the above Japanese application.
  • Factory production lines are equipped with defect detection systems that detect product defects.
  • The defect detection system detects defects of an inspection object using a sensor on the factory production line.
  • The sensor is, for example, a camera.
  • In a defect detection system using machine learning such as deep learning, it is possible to learn sensor values and automatically set the determination parameters necessary for defect detection.
  • Non-Patent Document 1 discloses a curriculum learning technique for automatically selecting sample data used for deep neural network learning.
  • Non-Patent Document 2 discloses a technique in which, using SOM (a machine learning technique), additional learning is performed when abnormal data that does not match any learned category is detected, and the abnormal data is learned as a new category.
  • The learning data collection method of the present disclosure collects data for additional learning of a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data.
  • The differences between a plurality of first inspection object data, which were input to the auto encoder in order to train it, and the plurality of first feature extraction data output in response are statistically compared with the differences between a plurality of second inspection object data input to the auto encoder after learning and the plurality of second feature extraction data output in response.
  • If there is a statistical difference, second inspection object data relating to a normal inspection object is selected as data for additional learning.
  • The learning data collection device of the present disclosure collects data for additional learning of a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data.
  • The device includes a comparison unit that statistically compares the differences between a plurality of first inspection object data, which were input to the auto encoder in order to train it, and the plurality of first feature extraction data output in response with the differences between a plurality of second inspection object data input to the auto encoder after learning and the plurality of second feature extraction data output in response.
  • The device further includes a selection unit that selects, when there is a statistical difference, second inspection object data relating to a normal inspection object as data for additional learning.
  • The anomaly detection system of the present disclosure includes a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data, an anomaly detection processing unit that detects an abnormality of the inspection object by comparing the inspection object data input to the auto encoder with the output feature extraction data, the above learning data collection device that collects data for additional learning of the auto encoder, and a learning processing unit that additionally trains the auto encoder based on the data collected by the learning data collection device.
  • The computer program of the present disclosure causes a computer to collect data for additional learning of a learned auto encoder that receives inspection object data for detecting whether there is an abnormality in an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data.
  • FIG. 1 is a schematic diagram showing a configuration example of the anomaly detection system according to Embodiment 1.
  • FIG. 2 is a block diagram showing a configuration example of the anomaly detection device according to Embodiment 1.
  • FIG. 3 is a block diagram showing a configuration example of the anomaly detection device according to Embodiment 1.
  • FIG. 4 is a schematic diagram showing the neural network of the auto encoder and its learning method.
  • FIG. 6 is a schematic diagram showing a method for generating difference data.
  • FIG. 13 is a flowchart showing a collection processing procedure for additional learning data according to Embodiment 2.
  • FIGS. 14A and 14B are schematic diagrams showing a method for accepting selection of learning data.
  • FIG. 15 is a schematic diagram showing a configuration example of the anomaly detection system according to Embodiment 3.
  • The sensor value changes slightly when the lighting environment in the factory changes, the sensor becomes dirty, or the production line is relocated. As a result, the input values to the defect detection system change. Even a defect detection system using machine learning such as deep learning cannot cope with a change in sensor values after learning, and the detection accuracy for defective products may be lowered. There is therefore a demand for a defect detection system that can automatically perform additional learning according to changes in the factory environment.
  • An important point when performing additional learning automatically is the selection of learning data. For example, in an anomaly detection system using an auto encoder, if additional learning is performed using learning data in which inspection object data obtained by imaging non-defective inspection objects and inspection object data obtained by imaging defective inspection objects are mixed, abnormalities of the inspection object can no longer be detected.
  • An object of the present disclosure is to provide a learning data collection method, a learning data collection device, an anomaly detection system, and a computer program that can collect data for additionally training a learned auto encoder for anomaly detection so that it can respond to environmental changes when sensor values change due to changes in the lighting environment in the factory, contamination of the sensor, relocation of the line, and the like.
  • The learning data collection method according to this aspect is a method for collecting data for additional learning of a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data. The differences between a plurality of first inspection object data, which were input to the auto encoder in order to train it, and the plurality of first feature extraction data output in response are statistically compared with the differences between a plurality of second inspection object data input to the auto encoder after learning and the plurality of second feature extraction data output in response. If there is a statistical difference, second inspection object data relating to a normal inspection object is selected as data for additional learning.
  • In this method, the differences between the plurality of first inspection object data and the plurality of first feature extraction data are statistically compared with the differences between the plurality of second inspection object data and the plurality of second feature extraction data.
  • The first inspection object data is the data used to train the auto encoder, that is, the data used at the manufacturing stage.
  • The second inspection object data is the data input to the auto encoder in order to detect an abnormality of the inspection object after learning, that is, during operation.
  • When there is a statistical difference, second inspection object data relating to a normal inspection object is selected as additional learning data from among the plurality of second inspection object data. If the auto encoder were trained using inspection object data relating to an inspection object having an abnormality, such an abnormal inspection object could no longer be detected. By performing the selection as described above, it is possible to collect learning data that allows the auto encoder to adapt, through additional learning, to changes in the anomaly detection environment.
  • In the above aspect, it is preferable that second inspection object data whose difference falls outside the confidence interval is selected as additional learning data from among the plurality of second inspection object data. Since such inspection object data is data in which the influence of the anomaly detection environment appears, selecting it makes it possible to additionally train the auto encoder effectively.
  • It is also preferable that a number of the first inspection object data corresponding to the ratio within the confidence interval and a number of the second inspection object data corresponding to the ratio outside the confidence interval are selected as data for additional learning.
  • In this case, the first inspection object data and the second inspection object data are mixed at an appropriate ratio and selected as additional learning data. By selecting additional learning data in this way, biased, abnormal learning can be avoided and the auto encoder can be additionally trained effectively.
  • It is also preferable to use, as the additional learning data, data obtained by randomly mixing the plurality of first inspection object data and the plurality of second inspection object data relating to normal inspection objects.
  • Further, it is preferable that the second inspection object data relating to a normal inspection object is selected by comparing the locations where the differences between the plurality of first inspection object data and the output first feature extraction data occur with the locations where the differences between the second inspection object data and the output feature extraction data occur.
  • the second inspection object data relating to the normal inspection object can be selected by comparing the locations where the differences occur.
  • This aspect includes both an invention for manually selecting second inspection object data relating to a normal inspection object and an invention for automatically selecting the second inspection object data.
  • By comparing the statistical distance between the occurrence locations of the respective differences with a threshold value, it can be determined whether the second inspection object data relates to a normal inspection object.
  • the second inspection object data relating to the normal inspection object can be automatically selected.
  • the data indicating the difference occurrence location is output to the outside together with the second inspection target data having a statistical difference compared to the first inspection target data. That is, the second inspection target data that is a candidate for additional learning is output.
  • the second inspection object data includes inspection object data related to a normal inspection object and inspection object data related to an abnormal inspection object.
  • Data indicating the location where the difference occurs is output externally so that a person can easily identify a normal inspection object, and the selection of second inspection object data relating to a normal inspection object is then accepted.
  • It is also preferable that the plurality of second inspection object data input to the auto encoder after learning are accumulated, and that the above selection is performed when the amount of accumulated second inspection object data having a statistical difference from the plurality of first inspection object data exceeds a predetermined amount.
  • the inspection object data is image data obtained by imaging the inspection object.
  • the auto encoder uses the image data obtained by imaging the inspection object as inspection object data to detect an abnormality of the inspection object.
  • data suitable for additional learning can be collected from a plurality of second inspection object data that is image data.
  • The learning data collection device according to this aspect collects data for additional learning of a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data. The device includes a comparison unit that statistically compares the differences between a plurality of first inspection object data, which were input to the auto encoder in order to train it, and the plurality of first feature extraction data output in response with the differences between a plurality of second inspection object data input to the auto encoder after learning and the plurality of second feature extraction data output in response, and a selection unit that selects, when there is a statistical difference, second inspection object data relating to a normal inspection object as data for additional learning.
  • the learning data collection device can collect learning data that can be adapted to changes in the anomaly detection environment through additional learning.
  • The anomaly detection system according to this aspect includes a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data, an anomaly detection processing unit that detects an abnormality of the inspection object by comparing the inspection object data input to the auto encoder with the output feature extraction data, the learning data collection device that collects data for additional learning of the auto encoder, and a learning processing unit that additionally trains the auto encoder based on the data collected by the learning data collection device.
  • the anomaly detection system can detect an anomaly of an inspection object using an auto encoder.
  • the learning data collection device can collect learning data that can be adapted to changes in the anomaly detection environment by additional learning.
  • the learning processing unit can additionally learn the auto-encoder using the collected learning data, and can adapt the change to the anomaly detection environment.
  • The computer program according to this aspect causes a computer to collect data for additional learning of a learned auto encoder that receives inspection object data for detecting whether there is an abnormality in an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data.
  • As in the aspect described above, the computer functioning as a learning data collection device can collect learning data that allows adaptation to changes in the anomaly detection environment through additional learning.
  • FIG. 1 is a schematic diagram illustrating a configuration example of the anomaly detection system according to the first embodiment.
  • The anomaly detection system includes an imaging unit 2 that images the inspection object 110, an anomaly detection device 1 that detects an anomaly of the inspection object 110 based on image data obtained by the imaging (hereinafter referred to as inspection object data), and a display unit 3.
  • the inspection object 110 is a connector terminal of the wire harness 100 disposed in the vehicle, for example.
  • FIG. 2 is a block diagram illustrating a configuration example of the anomaly detection device 1 according to the first embodiment.
  • The anomaly detection device 1 is a computer having a calculation unit 1a such as one or more CPUs (Central Processing Units) or a multi-core CPU.
  • A temporary storage unit 1b, an image input unit 1c, an output unit 1d, an input unit 1e, a clock unit 1f, a storage unit 1g, and a data storage unit 1h are connected to the calculation unit 1a via a bus line.
  • the calculation unit 1a controls the operation of each component by executing a computer program 5 described later stored in the storage unit 1g, and detects an abnormality of the inspection object 110 using an auto-encoder type neural network.
  • This neural network constitutes an auto encoder 11 (see FIGS. 3 and 4) described later.
  • The temporary storage unit 1b is a memory such as DRAM (Dynamic RAM) or SRAM (Static RAM), and temporarily stores the computer program 5 read from the storage unit 1g and the various data generated during the arithmetic processing of the calculation unit 1a.
  • the storage unit 1g is a non-volatile memory such as a hard disk, an EEPROM (Electrically Erasable Programmable ROM), or a flash memory.
  • The storage unit 1g stores a computer program 5 that causes the calculation unit 1a to control the operation of each component and thereby execute the anomaly detection processing for the inspection object 110, the collection processing for additional learning data, and the additional learning processing.
  • The storage unit 1g may store the computer program 5 read from a recording medium by a reading device (not shown). The recording medium is, for example, an optical disc such as a CD (Compact Disc)-ROM, DVD (Digital Versatile Disc)-ROM, or BD (Blu-ray (registered trademark) Disc), a flexible disc, a magnetic disc such as a hard disk, a magneto-optical disc, or a semiconductor memory. Further, the computer program 5 according to Embodiment 1 may be downloaded from an external computer (not shown) connected to a communication network (not shown) and stored in the storage unit 1g.
  • the image input unit 1c is an interface to which the imaging unit 2 is connected.
  • The imaging unit 2 includes an image sensor such as a CCD or CMOS that converts an image formed by the lens into an electrical signal, an AD converter that converts the electrical signal from the image sensor into digital image data, and an image processing unit that outputs the image data as inspection object data.
  • the inspection object data output from the imaging unit 2 is input to the anomaly detection device 1 via the image input unit 1c.
  • the inspection target data input to the anomaly detection device 1 is stored in the data storage unit 1h.
  • the imaging unit 2 and the anomaly detection device 1 may be configured to be connected separately by a dedicated cable, or may be configured to be connected via a network such as a LAN (Local Area Network).
  • the inspection target data is digital data in which the pixels arranged in the vertical and horizontal directions are indicated by luminance values of a predetermined gradation. In the present embodiment, description will be made assuming that the image data is monochrome.
  • the output unit 1d is an interface to which the display unit 3 is connected.
  • the display unit 3 is a liquid crystal panel, an organic EL display, electronic paper, a plasma display, or the like.
  • the display unit 3 displays various information corresponding to the image data given from the calculation unit 1a. For example, the contents of the anomaly detection result, the image of the inspection object 110 having a defect, and the like are displayed.
  • the display unit 3 is an example of an external output device that outputs the anomaly detection result, and may be a buzzer, a speaker, a light emitting element, or another notification device.
  • the factory operator can recognize the state of the inspection object 110 as a result of the anomaly detection from the image displayed on the display unit 3.
  • the operation unit 4 such as a keyboard, a mouse, and a touch sensor is connected to the input unit 1e.
  • a signal indicating the operation state of the operation unit 4 is input to the anomaly detection device 1 via the input unit 1e.
  • the calculation unit 1a can recognize the operation state of the operation unit 4 via the input unit 1e.
  • The clock unit 1f keeps the timing for additional learning of the neural network of the auto encoder 11.
  • For example, the clock unit 1f outputs a signal at the timing for confirming the necessity of additional learning of the neural network, such as once a month; the calculation unit 1a then determines whether additional learning is necessary and, if necessary, executes the collection of additional learning data and the additional learning processing.
  • the data storage unit 1h is a non-volatile memory such as a hard disk, an EEPROM, or a flash memory, like the storage unit 1g.
  • the data storage unit 1h stores the inspection target data for learning, the inspection target data in operation, the difference data, and the inspection target data for additional learning.
  • Hereinafter, these inspection object data are distinguished using the reference symbols α and β.
  • The inspection object data α for learning is the image data used when training the neural network for detecting an abnormality of the inspection object 110, that is, the image data used at the manufacturing stage of the anomaly detection device 1.
  • The inspection object data β in operation is the image data input to the anomaly detection device 1 when the inspection object 110 is actually inspected on a factory line or the like.
  • the difference data is data indicating a difference between the inspection target data and the feature extraction data.
  • the feature extraction data is image data obtained by extracting features of the inspection object 110 from the inspection object data in order to detect an abnormality in the inspection object 110. Details of the feature extraction data and the difference data will be described later.
  • the inspection target data for additional learning is data for additionally learning the neural network.
  • the additional learning data is data for adapting to various environmental changes by additionally learning the neural network.
  • FIG. 3 is a block diagram illustrating a configuration example of the anomaly detection device 1 according to the first embodiment.
  • the anomaly detection device 1 includes an auto encoder 11, an anomaly detection processing unit 12, a learning data collection device 13, and a learning processing unit 14 as functional units.
  • the abnormality detection processing unit 12 is further configured by a determination data generation unit 12a and an abnormality determination unit 12b.
  • Each functional unit of the anomaly detection device 1 is realized by hardware such as the arithmetic unit 1a and the data storage unit 1h.
  • The auto encoder 11 is a functional unit that receives the inspection object data output from the imaging unit 2 and outputs feature extraction data obtained by extracting the features of the input inspection object data. Specifically, when inspection object data is input, the auto encoder 11 outputs feature extraction data representing the features of a normal inspection object 110.
  • the image related to the inspection target data may include an image of dust, scratches, shadows, anomalous parts of the inspection target 110 itself, and the like.
  • the feature extraction data is image data that reproduces an ideal inspection object 110 that is obtained when these images are removed and a normal inspection object 110 having no abnormality is imaged.
  • The auto encoder 11 is realized by a neural network.
  • the neural network includes an intermediate layer that dimensionally compresses inspection target data, and inputs and outputs image data having the same number of pixels.
  • FIG. 4 is a schematic diagram showing a neural network of the auto encoder 11 and a learning method.
  • the auto encoder 11 includes an input layer 11a, a convolution layer (CONV layer), a deconvolution layer (DECONV layer), and an output layer 11b.
  • the input layer 11a is a layer to which data of each pixel value related to inspection target data is input.
  • The convolution layer is a layer that dimensionally compresses the inspection object data α for learning. For example, the convolution layer performs dimensional compression by convolution integration.
  • the feature amount of the inspection object 110 is extracted by dimensional compression.
  • the deconvolution layer is a layer that restores data that has been dimensionally compressed in the convolution layer to its original dimension.
  • The deconvolution layer performs deconvolution processing to restore the original dimension. By this restoration, image data representing the original features of the inspection object 110, that is, the features of a normal inspection object 110, is reconstructed.
  • In the example of FIG. 4, the convolution layer and the deconvolution layer each consist of two layers, but one layer or three or more layers may be used.
  • the output layer 11b is a layer that outputs data of each pixel value related to the feature extraction data obtained by extracting the feature of the inspection object 110 in the convolution layer and the deconvolution layer.
  • the auto encoder 11 causes the neural network of the auto encoder 11 to perform machine learning so that the input inspection target data and the output feature extraction data are the same. That is, the neural network is machine-learned so that the input image 110a and the output image 110b are the same.
  • Such machine learning is performed using inspection object data obtained by imaging a normal inspection object 110.
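  • As a concrete illustration of such an auto encoder, the following is a minimal sketch assuming a PyTorch implementation, monochrome 64x64 input images, and two convolution and two deconvolution layers; the layer sizes, activation functions, and training loop are illustrative assumptions, not the configuration disclosed in this embodiment.

```python
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    """Auto encoder with two convolution (CONV) and two deconvolution (DECONV) layers.

    Input and output are monochrome images of the same size; the intermediate
    layers dimensionally compress the inspection object data.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # CONV layers: dimensional compression
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # DECONV layers: restore the original dimension
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),              # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, loader, epochs=10):
    """Train so that the output reproduces the input (normal images only).

    `loader` is assumed to yield batches of normal inspection images shaped (N, 1, 64, 64).
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for batch in loader:
            recon = model(batch)
            loss = loss_fn(recon, batch)   # "input == output" is the learning target
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```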
  • FIG. 5 is a schematic diagram showing the function of the auto encoder 11 that extracts the feature of the inspection object 110 and outputs the feature extraction data.
  • the auto encoder 11 learned as described above receives either inspection object data obtained by imaging a normal inspection object 110 or inspection object data obtained by imaging the inspection object 110 having an abnormality.
  • Image data obtained by imaging the normal inspection object 110 is output as feature extraction data.
  • In FIG. 5, the inspection object data input to the auto encoder 11 from the left side is image data obtained by imaging an inspection object 110 having an abnormality.
  • the image data includes data of the image 110a of the inspection object 110 having the image 111a at the abnormal location.
  • feature extraction data as shown on the right side in FIG. 5 is output.
  • the feature extraction data is image data obtained by imaging a normal inspection object 110.
  • the image data includes data of the image 110b of the inspection object 110 from which the image 111a of the anomalous portion is removed and the original characteristics of the inspection object 110 appear.
  • the feature extraction data output from the auto encoder 11 is input to the determination data generation unit 12a shown in FIG. Further, the inspection object data output from the imaging unit 2 is input to the determination data generation unit 12a.
  • the determination data generation unit 12a calculates the difference between the inspection target data and the feature extraction data, and outputs the difference data obtained by the difference calculation to the anomaly determination unit 12b. That is, the difference between the pixel value of each pixel related to the inspection target data and the pixel value of each corresponding pixel related to the feature extraction data is calculated, and image data having the difference value as the pixel value is output as difference data.
  • the difference data generated by the determination data generation unit 12a is stored in the data storage unit 1h in association with the inspection target data that is the source of the difference data.
  • FIG. 6 is a schematic diagram showing a method for generating difference data.
  • the left figure shows an image 110a related to inspection object data, and the center figure shows an image 110b related to feature extraction data.
  • the image 110a related to the inspection object data includes the image 111a of the anomalous portion. Looking at the image 110b related to the feature extraction data, it can be seen that the image 111a at the anomalous location is removed and the image of the normal inspection object 110 is obtained.
  • the determination data generation unit 12a generates difference data obtained by subtracting the luminance value of each pixel related to the feature extraction data from the luminance value of each pixel related to the inspection target data.
  • the right figure shows difference data, and an image 111a of an abnormal part is extracted as an image of difference data.
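  • A minimal sketch of the difference-data generation described above, assuming the inspection object data and the feature extraction data are monochrome images held as NumPy arrays of equal size; taking the absolute value of the per-pixel subtraction is an illustrative choice.

```python
import numpy as np

def make_difference_data(inspection_img: np.ndarray, feature_img: np.ndarray) -> np.ndarray:
    """Subtract the feature extraction data from the inspection object data pixel by pixel.

    Both arrays hold luminance values of the same shape; in the result, only the
    abnormal portion that the auto encoder did not reproduce remains.
    """
    diff = inspection_img.astype(np.int32) - feature_img.astype(np.int32)
    return np.abs(diff).astype(np.uint8)  # magnitude of the per-pixel difference
```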
  • FIGS. 7A, 7B, and 7C are schematic diagrams illustrating examples of inspection object data, feature extraction data, and difference data obtained by imaging a connector as the inspection object 110.
  • FIGS. 7A, 7B, and 7C show the method for generating difference data using, as a more specific inspection object 110, inspection object data obtained by imaging the connector of the wire harness 100. The basic operation is as described for FIG. 6.
  • FIG. 7A shows the inspection object data; the connector image 110a includes an image 111a of an abnormal location.
  • FIG. 7B shows the feature extraction data, which includes an image 111b of a normal inspection object 110.
  • FIG. 7C shows the difference data, in which the image 111a of the abnormal location has been extracted.
  • the anomaly determination unit 12b determines whether there is an anomaly in the inspection object 110 based on the difference data output from the determination data generation unit 12a.
  • the anomaly determination unit 12b is, for example, a learned one-class support vector machine.
  • the one-class support vector machine is a classifier that classifies difference data obtained from normal inspection target data and difference data obtained from abnormal inspection target data.
  • the one class support vector machine is an example of a classifier, and the determination method is not particularly limited as long as difference data obtained from abnormal inspection target data can be statistically determined as an outlier.
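  • As one possible realization of such a classifier, the sketch below fits scikit-learn's OneClassSVM on feature vectors derived from difference data of normal inspection objects; the feature vector used here (total difference and number of differing pixels) and the input list of normal difference images are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def to_feature(diff_img: np.ndarray) -> np.ndarray:
    """Illustrative feature vector: total difference and number of differing pixels."""
    return np.array([diff_img.sum(), np.count_nonzero(diff_img)], dtype=float)

def fit_anomaly_classifier(normal_diff_images) -> OneClassSVM:
    """Fit a one-class SVM on difference data obtained from normal inspection objects."""
    features = np.stack([to_feature(d) for d in normal_diff_images])
    return OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(features)

def is_abnormal(clf: OneClassSVM, diff_img: np.ndarray) -> bool:
    """OneClassSVM.predict returns -1 for outliers, i.e. difference data of an abnormal object."""
    return clf.predict(to_feature(diff_img).reshape(1, -1))[0] == -1
```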
  • FIG. 8 is a flowchart showing a processing procedure related to collection of additional learning data and additional learning.
  • the arithmetic unit 1a executes the following processing at the additional learning confirmation timing of the auto encoder 11 that arrives periodically.
  • The calculation unit 1a reads a plurality of learning inspection object data α from the data storage unit 1h and inputs them to the auto encoder 11, thereby generating feature extraction data for each inspection object data α (step S11).
  • Next, the calculation unit 1a generates difference data between each inspection object data α and the corresponding feature extraction data, and calculates a difference amount and its frequency distribution (step S12).
  • The difference amount is a scalar quantity indicating the magnitude of the difference.
  • For example, the calculation unit 1a performs threshold processing on the difference data and binarizes it.
  • The calculation unit 1a then performs connected-region extraction on the binarized difference data and integrates the difference values (luminance values) of each connected region.
  • The maximum integrated value among the integrated values of the connected regions included in the image related to the difference data is determined as the difference amount of that difference data. That is, the integrated luminance of the largest difference image portion included in the image related to the difference data is calculated as the difference amount.
  • The difference amount is calculated for each of the plurality of difference data.
  • The calculation unit 1a then calculates the frequency distribution of the difference amounts (see FIG. 9).
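  • A sketch of the difference-amount calculation described above (threshold processing, connected-region extraction, and taking the largest integrated value), assuming SciPy's labelling functions; the threshold value and bin count are illustrative.

```python
import numpy as np
from scipy import ndimage

def difference_amount(diff_img: np.ndarray, threshold: int = 30) -> float:
    """Binarize the difference data, extract connected regions, and return the
    largest integrated luminance among the regions (a scalar difference amount)."""
    mask = diff_img > threshold                      # threshold processing / binarization
    labels, n = ndimage.label(mask)                  # connected-region extraction
    if n == 0:
        return 0.0
    # integrate the difference values inside each connected region
    sums = ndimage.sum(diff_img, labels, index=range(1, n + 1))
    return float(np.max(sums))                       # largest integrated value

def frequency_distribution(diff_imgs, bins=50):
    """Frequency distribution of difference amounts over many difference data."""
    amounts = np.array([difference_amount(d) for d in diff_imgs])
    hist, edges = np.histogram(amounts, bins=bins)
    return amounts, hist, edges
```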
  • Next, the calculation unit 1a reads the difference data obtained during operation from the data storage unit 1h (step S13).
  • This difference data is the data obtained by calculating the difference between the inspection object data β and its feature extraction data.
  • The calculation unit 1a calculates the difference amount of each read difference data and then calculates the frequency distribution of these difference amounts (step S14).
  • The difference amounts and the frequency distribution can be generated in the same manner as in step S12.
  • Next, the calculation unit 1a calculates a test statistic for testing whether there is a statistical difference between the frequency distribution calculated in step S12 and the frequency distribution calculated in step S14 (step S15), and determines whether the two frequency distributions have a statistically significant difference (step S16).
  • FIG. 9 is a graph showing the statistical difference of the difference amount.
  • the horizontal axis indicates the difference amount, and the vertical axis indicates the frequency.
  • a in the figure indicates the frequency distribution calculated in step S12, and B in the figure indicates the frequency distribution calculated in step S14.
  • the computing unit 1a calculates a t value as a test statistic, and performs a t test on the difference between the population means of each frequency distribution.
  • The test method is not particularly limited; an F value may be calculated and the difference in variance tested with an F-test, or other test statistics may of course be used. It is also possible to determine whether the frequency distributions have a statistically significant difference by calculating a similarity or statistical distance between them and checking whether the statistical distance is greater than or equal to a threshold value.
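  • A sketch of the significance test in steps S15 and S16, assuming Welch's t-test on the two sets of difference amounts via SciPy; the significance level is an illustrative choice.

```python
from scipy import stats

def has_significant_difference(amounts_learning, amounts_operation, alpha_level=0.05) -> bool:
    """Test whether the difference-amount distributions A (learning data) and
    B (in-operation data) differ in their population means."""
    t_value, p_value = stats.ttest_ind(amounts_learning, amounts_operation,
                                       equal_var=False)  # Welch's t-test
    return p_value < alpha_level
```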
  • When it is determined in step S16 that there is no statistically significant difference (step S16: NO), the calculation unit 1a ends the processing. If there is no statistically significant difference between the frequency distribution of difference amounts obtained from the learning inspection object data α used when manufacturing the anomaly detection device 1 and the frequency distribution of difference amounts obtained from the inspection object data β captured during operation, it is considered that no environmental change that lowers the anomaly detection accuracy has occurred, and additional learning of the auto encoder 11 is unnecessary.
  • When it is determined in step S16 that there is a statistically significant difference (step S16: YES), the calculation unit 1a executes the processing of steps S17 to S23 to collect and select data for additionally training the auto encoder 11.
  • If the frequency distribution of difference amounts obtained from the learning inspection object data α used when manufacturing the anomaly detection device 1 and the frequency distribution of difference amounts obtained from the inspection object data β captured during operation differ statistically, some change may have occurred, such as a change in the lighting environment in the factory, contamination of the sensor, or relocation of the line. In this case, the anomaly detection accuracy may be lowered, and it is necessary to additionally train the auto encoder 11 to adapt it to the environmental change.
  • The calculation unit 1a selects, from among the plurality of inspection object data β obtained during operation, one inspection object data β whose difference amount falls outside the confidence interval of the frequency distribution A of difference amounts obtained from the learning inspection object data α (step S17).
  • The inspection object data β selected in step S17 may include inspection object data β obtained by imaging an inspection object 110 having an abnormality. Such inspection object data β must be excluded from the additional learning data.
  • The processing of steps S18 and S19 following step S17 serves to sort the inspection object data β obtained by imaging a normal inspection object 110 from the inspection object data β obtained by imaging an inspection object 110 having an abnormality.
  • Having finished the processing of step S17, the calculation unit 1a calculates the difference occurrence locations in the images of the plurality of learning inspection object data α (step S18), and determines whether the calculated difference occurrence locations match the difference occurrence location in the image of the inspection object data β selected in step S17 (step S19). That is, the calculation unit 1a determines, based on the occurrence location, whether the difference is caused not by an abnormality of the inspection object 110 but by an environmental change or the like. This will be specifically described below.
  • FIG. 10 is a contour diagram showing a mixed normal distribution obtained based on learning inspection object data α obtained by imaging normal inspection objects 110.
  • The calculation unit 1a identifies the locations where differences occur based on the difference data calculated in steps S11 and S12.
  • In FIG. 10, reference numeral 6 indicates the difference occurrence locations.
  • The positions of the difference occurrence locations may be estimated using, for example, a mixed normal distribution, where the coordinates of the image related to the difference data are (x, y) and the magnitude of the difference is p(x, y).
  • A mixed normal distribution is represented by a superposition of a plurality of normal distributions.
  • The calculation unit 1a estimates a plurality of normal distributions i that reproduce p(x, y) using the EM algorithm or the like.
  • In the example of FIG. 10, six peaks appear, and in the simplest case the magnitude of the difference p(x, y) can be expressed by a mixed normal distribution obtained by superimposing six normal distributions i. Note that the number of peaks and the number of superimposed normal distributions i do not necessarily match.
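  • A sketch of estimating the difference occurrence locations as a mixed normal distribution with the EM algorithm, here using scikit-learn's GaussianMixture; fitting on the coordinates of pixels whose difference exceeds a threshold is an illustrative simplification of handling p(x, y).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_difference_locations(diff_imgs, n_components=6, threshold=30):
    """Estimate a mixed normal distribution over the (x, y) coordinates where
    differences occur in the learning data, using the EM algorithm."""
    coords = []
    for img in diff_imgs:
        ys, xs = np.nonzero(img > threshold)        # pixels with a noticeable difference
        coords.append(np.column_stack([xs, ys]))
    coords = np.vstack(coords)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(coords)     # EM estimation of the centers (means_) and mixture ratios (weights_)
    return gmm
```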
  • FIG. 11 is a schematic diagram showing difference data obtained during anomaly detection processing after learning.
  • In FIG. 11, the location marked X where the difference occurs is the difference center coordinate μk, and the number of difference centers is one.
  • The calculation unit 1a calculates the statistical distance between each center coordinate μi and the center coordinate μk, and identifies the center coordinate μi closest to μk.
  • As the statistical distance, for example, the L2 norm, that is, the Euclidean distance, may be calculated.
  • The calculation unit 1a statistically determines whether the center coordinate μi, which is a difference occurrence location related to the learning inspection object data α, matches the center coordinate μk, which is a difference occurrence location related to the inspection object data β obtained during operation.
  • Specifically, the calculation unit 1a identifies the peak of the mixed normal distribution, that is, the normal distribution j, whose center is closest to the center coordinate μk of the difference occurrence location related to the inspection object data β obtained during operation. When the value obtained by multiplying the value pj(μk) of the normal distribution j by the reciprocal of the mixture ratio πj is equal to or greater than a predetermined threshold value, the calculation unit 1a determines that the difference occurrence locations match.
  • When the difference occurrence locations do not match, it can be estimated that the inspection object data β selected in step S17 is image data obtained by imaging an inspection object 110 having an abnormality.
  • When the difference occurrence locations match, it can be estimated that the inspection object data β selected in step S17 is image data obtained by imaging a normal inspection object 110 and is data affected by an environmental change.
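  • A sketch of the matching test in step S19 under the assumptions above: the learned component whose center μi is closest to the observed center μk (by Euclidean distance) is identified, and its density at μk multiplied by the reciprocal of its mixture ratio πj is compared with a threshold; the threshold value is illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def locations_match(gmm, mu_k: np.ndarray, threshold: float = 1e-4) -> bool:
    """Return True when the in-operation difference location mu_k coincides with a
    difference occurrence location already present in the learning data."""
    # nearest learned center mu_j by Euclidean (L2) distance
    dists = np.linalg.norm(gmm.means_ - mu_k, axis=1)
    j = int(np.argmin(dists))
    # density of component j at mu_k, multiplied by the reciprocal of its mixture ratio pi_j
    p_j = multivariate_normal(mean=gmm.means_[j], cov=gmm.covariances_[j]).pdf(mu_k)
    return (p_j / gmm.weights_[j]) >= threshold
```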
  • When it is determined in step S19 that the difference occurrence locations match (step S19: YES), the calculation unit 1a selects the inspection object data β selected in step S17 as data for additional learning (step S20). When it is determined that the difference occurrence locations do not match (step S19: NO), the calculation unit 1a excludes the inspection object data β selected in step S17 from the additional learning data (step S21).
  • The calculation unit 1a then determines whether the sorting of all the inspection object data β accumulated during operation as suitable or unsuitable for additional learning data has been completed (step S22). If it is determined that the sorting has not been completed (step S22: NO), the calculation unit 1a returns the processing to step S17.
  • Although an example has been described in which all of the inspection object data β stored in the data storage unit 1h are used, only a part of the stored inspection object data β may be used as candidates for the learning data.
  • When the sorting has been completed (step S22: YES), the calculation unit 1a randomly mixes the learning inspection object data α stored in the data storage unit 1h and the inspection object data β selected in the processing of steps S17 to S21, and preferably stores the mixed inspection object data α and β in the data storage unit 1h as learning data (step S23).
  • The calculation unit 1a then performs additional learning of the auto encoder 11 using the inspection object data α and β mixed in step S23 as data for additional learning (step S24), and ends the processing.
  • The additional learning method is the same as the learning method of the auto encoder 11 at the time of manufacture. That is, the calculation unit 1a trains the neural network constituting the auto encoder 11 so that the inspection object data α and β input to the auto encoder 11 and the output feature extraction data become the same.
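  • A sketch of assembling the additional learning data in step S23, assuming a 95% confidence interval on the difference amounts of the learning data α: data α are drawn in proportion to the ratio inside the interval, the selected data β in proportion to the ratio outside it, and the result is randomly mixed. The interval, sample counts, and helper names are illustrative.

```python
import numpy as np

def mix_additional_learning_data(alpha_data, beta_data, amounts_alpha, rng=None, total=None):
    """Randomly mix learning data (alpha) and selected in-operation data (beta)
    according to the ratios inside and outside the confidence interval."""
    rng = rng or np.random.default_rng()
    total = total or (len(alpha_data) + len(beta_data))
    lo, hi = np.percentile(amounts_alpha, [2.5, 97.5])          # 95% interval (illustrative)
    inside_ratio = np.mean((amounts_alpha >= lo) & (amounts_alpha <= hi))
    n_alpha = int(round(total * inside_ratio))
    n_beta = total - n_alpha
    chosen_a = rng.choice(len(alpha_data), size=min(n_alpha, len(alpha_data)), replace=False)
    chosen_b = rng.choice(len(beta_data), size=min(n_beta, len(beta_data)), replace=False)
    mixed = [alpha_data[i] for i in chosen_a] + [beta_data[i] for i in chosen_b]
    order = rng.permutation(len(mixed))                         # random mixing
    return [mixed[i] for i in order]
```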
  • With the learning data collection method, the learning data collection device 13, the anomaly detection system, and the computer program 5 configured as described above, when the sensor value changes due to a change in the lighting environment in the factory, contamination of the sensor, relocation of the line, or the like, learning data for additionally training the auto encoder 11 so as to respond to the environmental change can be collected automatically.
  • Since the inspection object data β whose difference amounts deviate from the confidence interval (see FIG. 9) obtained based on the learning inspection object data α is selected as data for additional learning, the auto encoder 11 can be additionally trained effectively.
  • Since the learning inspection object data α and the inspection object data β accumulated during operation are randomly mixed at an appropriate ratio and the mixed inspection object data α and β are used as additional learning data, abnormal learning can be avoided and the auto encoder 11 can be additionally trained effectively.
  • In particular, since the inspection object data α and β are mixed according to the ratio of the confidence interval, biased, abnormal learning can be avoided and the auto encoder 11 can be additionally trained effectively.
  • Further, the inspection object data β obtained by imaging a normal inspection object 110 can be selected by comparing the difference occurrence locations related to the inspection object data α and β, so that the auto encoder 11 can be additionally trained appropriately.
  • Moreover, the inspection object data β obtained by imaging a normal inspection object 110 can be selected automatically, and the auto encoder 11 can be additionally trained.
  • In step S16, whether additional learning is necessary can be determined by statistically comparing the difference data of the inspection object data α and β. Accordingly, useless additional learning can be avoided, and the auto encoder 11 is additionally trained only as necessary.
  • The anomaly detection device 1 detects an anomaly of the inspection object 110 using image data obtained by imaging the inspection object 110, and the learning data collection device 13 can select data suitable for additional learning by image analysis of the inspection object data β.
  • In the above description, image inspection object data has been taken as an example to describe the anomaly detection of the inspection object 110, the collection of data for additional learning of the auto encoder 11, and the additional learning.
  • However, the format and contents of the inspection object data are not particularly limited.
  • For example, the inspection object data may be audio data detected by a microphone, vibration data detected by a vibration sensor, voltage data or current data detected by a voltmeter or an ammeter, and the like.
  • In the above description, an example has been given in which the processing shown in FIG. 8 is executed periodically and the additional learning processing is executed when there is a statistical difference between the distribution of difference amounts related to the inspection object data α and the distribution of difference amounts related to the inspection object data β.
  • Alternatively, the calculation unit 1a may determine whether the accumulated amount of inspection object data β having a statistical difference exceeds a predetermined threshold value, and execute the processing of step S17 and subsequent steps only when the threshold value is exceeded.
  • In this case, the processing related to additional learning is performed only after a sufficient amount of inspection object data β, which is a candidate for additional learning data, has been accumulated. Therefore, unnecessary additional learning can be avoided, and the auto encoder can be additionally trained effectively.
  • Embodiment 2 differs from Embodiment 1 in that the operator is involved in determining whether the inspection object data β outside the confidence interval obtained based on the learning inspection object data α is suitable as additional learning data, so that the data is selected semi-automatically. In the following, mainly these differences will be described. Since the other configurations and operational effects are the same as those of Embodiment 1, the corresponding portions are denoted by the same reference numerals and detailed description thereof is omitted.
  • FIG. 13 is a flowchart showing a collection processing procedure of additional learning data according to the second embodiment.
  • The calculation unit 1a, having executed the processing of steps S11 to S16 of Embodiment 1, selects, from among the plurality of inspection object data β obtained during operation, the inspection object data β whose difference amounts fall outside the confidence interval of the frequency distribution A of difference amounts obtained from the learning inspection object data α (step S217).
  • Here, the calculation unit 1a selects all of the inspection object data β that are outside the confidence interval.
  • Next, the calculation unit 1a calculates the difference occurrence locations in the images of the plurality of learning inspection object data α (step S218). Specifically, the calculation unit 1a estimates the difference occurrence locations as a mixed normal distribution, as in Embodiment 1.
  • The calculation unit 1a then calculates the difference occurrence locations in the images of the plurality of inspection object data β selected in step S217 (step S219). Specifically, the calculation unit 1a estimates the difference occurrence locations as a mixed normal distribution in the same manner as in step S218.
  • The calculation unit 1a compares the difference occurrence locations calculated in step S218 with the difference occurrence locations calculated in step S219, and extracts the difference occurrence locations of the inspection object data β that deviate from the difference occurrence locations related to the inspection object data α (step S220). That is, inspection object data β having differences at locations other than the difference occurrence locations normally found in the learning inspection object data α is identified. Since such inspection object data β is highly likely to be abnormal data caused by an abnormality of the inspection object 110, the operator determines whether it is appropriate as learning data.
  • The calculation unit 1a selects the plurality of inspection object data β having a difference at the difference occurrence locations extracted in step S220 (step S221).
  • The calculation unit 1a outputs the inspection object data β selected in step S221, together with a difference occurrence location display image 7 indicating the difference occurrence locations extracted in step S220, to the display unit 3 via the output unit 1d (step S222). The calculation unit 1a then accepts, via the operation unit 4, the selection of suitability as data for additional learning (step S223).
  • FIG. 14A and 14B are schematic diagrams showing a method for accepting selection of learning data.
  • In FIGS. 14A and 14B, the difference occurrence location display image 7 is superimposed and displayed on the image 110a of the inspection object 110 related to the inspection object data β.
  • When there is an abnormality in the inspection object 110, a difference is generated due to the abnormality, and that difference occurrence location is indicated by the difference occurrence location display image 7.
  • When the inspection object 110 is normal, the difference occurrence location display image 7 indicates a difference occurrence location that occurs normally and is not caused by an abnormality.
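  • A display of this kind can be produced, for example, by superimposing a semi-transparent mask of the difference occurrence location on the inspection object image; the sketch below assumes Matplotlib and NumPy arrays, and the colouring and threshold are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_difference_location(inspection_img: np.ndarray, diff_img: np.ndarray,
                             threshold: int = 30) -> None:
    """Overlay the difference occurrence location (display image 7) on the inspection
    object image 110a so the operator can judge suitability as learning data."""
    mask = np.ma.masked_where(diff_img <= threshold, diff_img)  # hide non-difference pixels
    plt.imshow(inspection_img, cmap="gray")
    plt.imshow(mask, cmap="autumn", alpha=0.5)                  # highlighted difference region
    plt.axis("off")
    plt.show()
```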
  • the operator confirms the video as shown in FIG. 14A or FIG. 14B displayed on the display unit 3, and selects the suitability as additional learning data using the operation unit 4.
  • the calculation unit 1a performs selection related to the additional learning data based on the selection result received by the operation unit 4 (step S224).
  • When the operator's selection indicates that the data is suitable, the calculation unit 1a selects, as additional learning data, the inspection object data β having a difference at the difference occurrence location extracted in step S220.
  • When the selection indicates that the data is unsuitable, the inspection object data β having a difference at the difference occurrence location extracted in step S220 is excluded from the additional learning data.
  • The calculation unit 1a also automatically selects, as additional learning data, the inspection object data β having differences only at the other difference occurrence locations, that is, locations also found in the learning inspection object data α.
  • As described above, in Embodiment 2, by having the operator check the suitability as learning data of the inspection object data β that may be abnormal, semi-automatic collection of additional learning data that takes the operator's judgment into account is possible.
  • Therefore, the auto encoder 11 can be additionally trained even more effectively.
  • FIG. 15 is a schematic diagram illustrating a configuration example of the anomaly detection system according to the third embodiment.
  • The anomaly detection system according to Embodiment 3 differs from that of Embodiment 1 in that the learning data collection device 301 and the anomaly detection device 310 that performs anomaly detection are configured separately and are connected via a communication line L. In the following, mainly these differences will be described. Since the other configurations and operational effects are the same as those of Embodiment 1, the corresponding portions are denoted by the same reference numerals and detailed description thereof is omitted.
  • the hardware configuration of the learning data collection device 301 is the same as that of the anomaly detection device 310 of the first embodiment, and further includes a communication unit 1i that communicates with the anomaly detection device 310.
  • the hardware configuration on the anomaly detection device 310 side is the same, and further includes a communication unit 313 that communicates with the learning data collection device 301.
  • During operation, the anomaly detection device 310 transmits the inspection object data β and the difference data to the learning data collection device 301 through the communication unit 313.
  • The learning data collection device 301 receives, via the communication unit 1i, the inspection object data β and the difference data transmitted from the anomaly detection device 310.
  • The learning data collection method is the same as in Embodiment 1: the learning data collection device 301 determines the necessity of additional learning for the anomaly detection device 310 and collects data for additional learning.
  • the learning data collection device 301 transmits the collected learning data to the anomaly detection device 310 via the communication unit 1i.
  • the anomaly detection device 310 receives learning data at the communication unit 313 and performs additional learning based on the received data.
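  • A minimal sketch of this exchange, assuming for illustration an HTTP interface with JSON-encoded image arrays; the endpoint names and transport are assumptions, since the embodiment does not specify the protocol used on the communication line L.

```python
import numpy as np
import requests

COLLECTOR_URL = "http://collector.example/api"   # hypothetical endpoint of device 301

def send_operation_data(inspection_img: np.ndarray, diff_img: np.ndarray) -> None:
    """Anomaly detection device 310: transmit in-operation inspection object data beta
    and its difference data to the learning data collection device 301."""
    payload = {
        "inspection": inspection_img.tolist(),
        "difference": diff_img.tolist(),
    }
    requests.post(f"{COLLECTOR_URL}/operation-data", json=payload, timeout=10)

def fetch_additional_learning_data() -> list:
    """Anomaly detection device 310: receive the collected learning data to be used
    for additional learning of the auto encoder 311."""
    resp = requests.get(f"{COLLECTOR_URL}/learning-data", timeout=10)
    resp.raise_for_status()
    return [np.array(img) for img in resp.json()["images"]]
```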
  • The processing related to the collection of data for additional learning of the auto encoder 311 and the additional learning itself can be distributed as appropriate according to the performance of the hardware, and it does not matter where each process is executed.
  • With the anomaly detection system of Embodiment 3, as in Embodiment 1, when the sensor value changes due to a change in the lighting environment in the factory, contamination of the sensor, relocation of the line, or the like, learning data for additionally training the auto encoder 311 so as to respond to the environmental change can be collected.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Testing And Monitoring For Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Provided is a learning data collection method for collecting data for additional learning of a learned auto encoder to which object-to-be-inspected data for detecting an abnormality in an object to be inspected is input and from which feature extraction data representing extracted features of the input object-to-be-inspected data is output. The method comprises: statistically comparing the differences between a plurality of first object-to-be-inspected data, input to the auto encoder in order to train it, and the feature extraction data output in response with the differences between a plurality of second object-to-be-inspected data, input to the auto encoder after it has learned, and the feature extraction data output in response; and, when a statistical difference is found, selecting second object-to-be-inspected data related to a normal object to be inspected as data for additional learning.

Description

学習用データ収集方法、学習用データ収集装置、異変検知システム及びコンピュータプログラムLearning data collection method, learning data collection device, anomaly detection system, and computer program
The present disclosure relates to a learning data collection method, a learning data collection device, an anomaly detection system, and a computer program.
This application claims priority based on Japanese Patent Application No. 2018-45727 filed on Mar. 13, 2018, and incorporates all the content described in the above Japanese application.
A defect detection system that detects product defects is installed in a factory production line. The defect detection system detects defects of an inspection object using a sensor, for example a camera, on the production line. In a defect detection system using machine learning such as deep learning, sensor values can be learned and the determination parameters necessary for defect detection can be set automatically.
Non-Patent Document 1 discloses a curriculum learning technique for automatically selecting sample data used for training a deep neural network.
Non-Patent Document 2 discloses a technique in which, when abnormal data that do not match any learned category are detected using SOM, one of the machine learning techniques, additional learning is performed and the abnormal data are learned as a new category.
A learning data collection method of the present disclosure is a method for collecting data for additional learning of a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data. The method statistically compares the differences between a plurality of first inspection object data input to the auto encoder in order to train the auto encoder before learning and a plurality of output first feature extraction data, with the differences between a plurality of second inspection object data input to the auto encoder after learning and a plurality of output second feature extraction data, and, when there is a statistical difference, selects the second inspection object data relating to a normal inspection object as data for additional learning.
A learning data collection device of the present disclosure is a device for collecting data for additional learning of a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data. The device includes: a comparison unit that statistically compares the differences between a plurality of first inspection object data input to the auto encoder in order to train the auto encoder before learning and a plurality of output first feature extraction data, with the differences between a plurality of second inspection object data input to the auto encoder after learning and a plurality of output second feature extraction data; and a selection unit that selects the second inspection object data relating to a normal inspection object as data for additional learning when there is a statistical difference.
An anomaly detection system of the present disclosure includes: a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data; an anomaly detection processing unit that detects an abnormality of the inspection object by comparing the inspection object data input to the auto encoder with the output feature extraction data; the above learning data collection device for collecting data for additional learning of the auto encoder; and a learning processing unit that additionally trains the auto encoder based on the data collected by the learning data collection device.
A computer program of the present disclosure causes a computer to collect data for additional learning of a learned auto encoder that receives inspection object data for detecting whether there is an abnormality in an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data. The program causes the computer to execute processing of statistically comparing the differences between a plurality of first inspection object data input to the auto encoder in order to train the auto encoder before learning and a plurality of output first feature extraction data, with the differences between a plurality of second inspection object data input to the auto encoder after learning and a plurality of output second feature extraction data, and selecting, when there is a statistical difference, the second inspection object data relating to a normal inspection object as data for additional learning.
  • A schematic diagram showing a configuration example of the anomaly detection system according to Embodiment 1.
  • A block diagram showing a configuration example of the anomaly detection device according to Embodiment 1.
  • A block diagram showing a configuration example of the anomaly detection device according to Embodiment 1.
  • A schematic diagram showing the neural network of the auto encoder and its learning method.
  • A schematic diagram showing the function of the auto encoder of extracting features of an inspection object and outputting feature extraction data.
  • A schematic diagram showing a method of generating difference data.
  • Schematic diagrams (three figures) showing an example of inspection object data, feature extraction data, and difference data obtained by imaging a connector as the inspection object.
  • A flowchart showing a processing procedure for collecting additional learning data and performing additional learning.
  • A graph showing a statistical difference in difference amounts.
  • A contour map showing a Gaussian mixture distribution obtained based on learning inspection object data obtained by imaging normal inspection objects.
  • A schematic diagram showing difference data obtained during anomaly detection processing after learning.
  • Schematic diagrams (two figures) showing a method of discriminating inspection object data obtained by imaging an inspection object having an abnormality from inspection object data obtained by imaging a normal inspection object.
  • A flowchart showing a procedure for collecting additional learning data according to Embodiment 2.
  • Schematic diagrams (two figures) showing a method of accepting selection of learning data.
  • A schematic diagram showing a configuration example of the anomaly detection system according to Embodiment 3.
[Problems to be solved by the present disclosure]
When the lighting environment in the factory changes, the sensor becomes dirty, or the installation location of the production line is moved, the sensor value changes slightly. As a result, the input values to the defect detection system change. Even a defect detection system using machine learning such as deep learning cannot cope with such changes in sensor values after learning, and the detection accuracy for defective products may decrease. A defect detection system that can automatically perform additional learning in accordance with changes in the factory environment is therefore desired.
What is important when performing additional learning automatically is the selection of the learning data. For example, in an anomaly detection system using an auto encoder, if additional learning is performed with learning data in which inspection object data obtained by imaging non-defective inspection objects and inspection object data obtained by imaging defective inspection objects are mixed, abnormalities of the inspection object can no longer be detected.
An object of the present disclosure is to provide a learning data collection method, a learning data collection device, an anomaly detection system, and a computer program that can collect data for additional learning of a learned auto encoder for anomaly detection so that it adapts to environmental changes when sensor values change due to changes in the lighting environment in the factory, dirt on the sensor, relocation of the line, or the like.
[Effects of the present disclosure]
According to the present disclosure, it is possible to provide a learning data collection method, a learning data collection device, an anomaly detection system, and a computer program that can collect data for additional learning of a learned auto encoder for anomaly detection so that it adapts to environmental changes when sensor values change due to changes in the lighting environment in the factory, dirt on the sensor, relocation of the line, or the like.
[Description of Embodiments of the Present Disclosure]
First, embodiments of the present disclosure will be listed and described. At least some of the embodiments described below may be combined arbitrarily.
(1) A learning data collection method according to this aspect is a method for collecting data for additional learning of a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data. The method statistically compares the differences between a plurality of first inspection object data input to the auto encoder in order to train the auto encoder before learning and a plurality of output first feature extraction data, with the differences between a plurality of second inspection object data input to the auto encoder after learning and a plurality of output second feature extraction data, and, when there is a statistical difference, selects the second inspection object data relating to a normal inspection object as data for additional learning.
In this aspect, the differences between the plurality of first inspection object data and the plurality of first feature extraction data are statistically compared with the differences between the plurality of second inspection object data and the plurality of second feature extraction data. The first inspection object data are the data used to train, i.e. manufacture, the auto encoder. The second inspection object data are the data input to the auto encoder after learning in order to detect abnormalities of inspection objects. By statistically comparing these differences, it can be determined whether the anomaly detection environment of the inspection object has changed, for example due to a change in the lighting environment in the factory, dirt on the sensor, or relocation of the line.
When there is a statistical difference, second inspection object data relating to normal inspection objects are selected from the plurality of second inspection object data as data for additional learning. If the auto encoder is trained using inspection object data relating to an inspection object having an abnormality, inspection objects having abnormalities can no longer be detected. By making the selection described above, it is possible to collect learning data that allow the auto encoder to adapt to changes in the anomaly detection environment through additional learning.
(2) Preferably, second inspection object data that fall outside a predetermined confidence interval statistically obtained from the differences between the plurality of first inspection object data and the plurality of output first feature extraction data are selected as data for additional learning.
In this aspect, among the plurality of second inspection object data, the second inspection object data that fall outside the confidence interval are selected as data for additional learning. Because these data reflect the influence of the changed anomaly detection environment, selecting them allows the auto encoder to be additionally trained effectively.
(3) Preferably, a number of the plurality of first inspection object data corresponding to the proportion within the confidence interval and a number of the plurality of second inspection object data corresponding to the proportion outside the confidence interval are selected as data for additional learning.
In this aspect, the first inspection object data and the second inspection object data are mixed at an appropriate ratio and selected as data for additional learning. By selecting the data for additional learning in this way, abnormal learning can be avoided and the auto encoder can be additionally trained effectively.
(4) Preferably, learning inspection object data obtained by randomly mixing the plurality of first inspection object data and the plurality of second inspection object data relating to normal inspection objects are selected as data for additional learning.
In this aspect, by randomly mixing the first inspection object data and the second inspection object data, biased abnormal learning can be avoided and the auto encoder can be additionally trained effectively.
(5) Preferably, second inspection object data relating to a normal inspection object are selected by comparing the locations where differences occur between the plurality of first inspection object data and the plurality of output first feature extraction data with the locations where differences occur between the second inspection object data and the output feature extraction data.
In this aspect, by comparing the locations where the respective differences occur, second inspection object data relating to a normal inspection object can be selected. This aspect includes both an invention in which the second inspection object data relating to a normal inspection object are selected manually and an invention in which they are selected automatically.
(6) Preferably, when there is no statistical difference between the locations where differences occur between the plurality of first inspection object data and the plurality of output first feature extraction data and the locations where differences occur between the second inspection object data and the output feature extraction data, the second inspection object data are selected as second inspection object data relating to a normal inspection object.
In this aspect, whether given second inspection object data are second inspection object data relating to a normal inspection object can be determined by comparing the statistical distance between the locations where the respective differences occur with a threshold. Through this comparison of statistical distance and threshold, second inspection object data relating to normal inspection objects can be selected automatically.
(7) Preferably, when there is a statistical difference between the locations where differences occur between the plurality of first inspection object data and the plurality of output first feature extraction data and the locations where differences occur between the plurality of second inspection object data and the output feature extraction data, the second inspection object data having a difference at the differing location are output externally together with data indicating that location, and a selection of whether the data are second inspection object data relating to a normal inspection object is accepted.
In this aspect, second inspection object data that show a statistical difference compared to the first inspection object data are output externally together with data indicating where the difference occurs. That is, the second inspection object data that are candidates for additional learning are output. These second inspection object data include both inspection object data relating to normal inspection objects and inspection object data relating to abnormal inspection objects. Data indicating the locations where differences occur are therefore output externally so that a person can easily identify normal inspection objects, and a selection of the second inspection object data relating to normal inspection objects is accepted.
(8) Preferably, the necessity of additional learning of the auto encoder is determined based on the result of statistically comparing the differences between the plurality of first inspection object data and the plurality of output first feature extraction data with the differences between the plurality of second inspection object data and the output feature extraction data.
In this aspect, the necessity of additional learning is determined based on whether there is a statistical difference between the differences for the plurality of first inspection object data and first feature extraction data and the differences for the plurality of second inspection object data and second feature extraction data. Unnecessary additional learning can therefore be avoided.
(9) Preferably, the plurality of second inspection object data input to the auto encoder after learning are accumulated, and additional learning is determined to be necessary when the accumulated amount of second inspection object data that show a statistical difference compared to the plurality of first inspection object data exceeds a threshold.
In this aspect, additional learning is determined to be necessary when a sufficient number of second inspection object data showing a statistical difference compared to the plurality of first inspection object data have been accumulated. Unnecessary additional learning can therefore be avoided, and the auto encoder can be additionally trained effectively.
(10) Preferably, the necessity of additional learning is determined periodically.
In this aspect, the necessity of additional learning is confirmed periodically, and the auto encoder can be additionally trained as needed.
(11) Preferably, the inspection object data are image data obtained by imaging the inspection object.
In this aspect, the auto encoder detects abnormalities of the inspection object using image data obtained by imaging the inspection object as the inspection object data. According to this aspect, data suitable for additional learning can be collected from the plurality of second inspection object data, which are image data.
(12) A learning data collection device according to this aspect is a device for collecting data for additional learning of a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data. The device includes: a comparison unit that statistically compares the differences between a plurality of first inspection object data input to the auto encoder in order to train the auto encoder before learning and a plurality of output first feature extraction data, with the differences between a plurality of second inspection object data input to the auto encoder after learning and a plurality of output second feature extraction data; and a selection unit that selects the second inspection object data relating to a normal inspection object as data for additional learning when there is a statistical difference.
In this aspect, as in aspect (1), the learning data collection device can collect learning data that allow the auto encoder to adapt to changes in the anomaly detection environment through additional learning.
(13) An anomaly detection system according to this aspect includes: a learned auto encoder that receives inspection object data for detecting an abnormality of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data; an anomaly detection processing unit that detects an abnormality of the inspection object by comparing the inspection object data input to the auto encoder with the output feature extraction data; the learning data collection device of aspect (12) for collecting data for additional learning of the auto encoder; and a learning processing unit that additionally trains the auto encoder based on the data collected by the learning data collection device.
In this aspect, the anomaly detection system can detect abnormalities of the inspection object using the auto encoder. As in aspect (1), the learning data collection device can collect learning data that allow adaptation to changes in the anomaly detection environment through additional learning. The learning processing unit can additionally train the auto encoder using the collected learning data and thereby adapt it to changes in the anomaly detection environment.
(14) A computer program according to this aspect causes a computer to collect data for additional learning of a learned auto encoder that receives inspection object data for detecting whether there is an abnormality in an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data. The program causes the computer to execute processing of statistically comparing the differences between a plurality of first inspection object data input to the auto encoder in order to train the auto encoder before learning and a plurality of output first feature extraction data, with the differences between a plurality of second inspection object data input to the auto encoder after learning and a plurality of output second feature extraction data, and selecting, when there is a statistical difference, the second inspection object data relating to a normal inspection object as data for additional learning.
In this aspect, by executing the computer program, the computer can be caused, in the same manner as in aspect (1), to collect learning data that allow the auto encoder to adapt to changes in the anomaly detection environment through additional learning.
[Details of Embodiments of the Present Disclosure]
Specific examples of a learning data collection method, a learning data collection device, an anomaly detection system, and a computer program according to embodiments of the present disclosure are described below with reference to the drawings. The present disclosure is not limited to these examples but is defined by the claims, and is intended to include all modifications within the meaning and scope equivalent to the claims.
(Embodiment 1)
FIG. 1 is a schematic diagram showing a configuration example of the anomaly detection system according to Embodiment 1. The anomaly detection system includes an imaging unit 2 that images an inspection object 110, an anomaly detection device 1 that detects an abnormality of the inspection object 110 based on image data obtained by the imaging (hereinafter referred to as inspection object data), and a display unit 3. The inspection object 110 is, for example, a connector terminal of a wire harness 100 to be installed in a vehicle.
<Hardware configuration of the anomaly detection device 1>
FIG. 2 is a block diagram showing a configuration example of the anomaly detection device 1 according to Embodiment 1. The anomaly detection device 1 is a computer having a calculation unit 1a such as one or more CPUs (Central Processing Units) or a multi-core CPU. A temporary storage unit 1b, an image input unit 1c, an output unit 1d, an input unit 1e, a clock unit 1f, a storage unit 1g, and a data storage unit 1h are connected to the calculation unit 1a via a bus line.
By executing a computer program 5 (described later) stored in the storage unit 1g, the calculation unit 1a controls the operation of each component and executes processing for detecting an abnormality of the inspection object 110 using an auto-encoder type neural network, processing for collecting additional learning data according to the present embodiment, processing for additionally training the neural network using the collected data, and the like. This neural network constitutes the auto encoder 11 described later (see FIGS. 3 and 4).
The temporary storage unit 1b is a memory such as a DRAM (Dynamic RAM) or an SRAM (Static RAM), and temporarily stores the computer program 5 read from the storage unit 1g when the calculation unit 1a performs its arithmetic processing, as well as various data generated by that processing.
The storage unit 1g is a nonvolatile memory such as a hard disk, an EEPROM (Electrically Erasable Programmable ROM), or a flash memory. The storage unit 1g stores the computer program 5 that the calculation unit 1a executes, while controlling the operation of each component, to perform the anomaly detection processing for the inspection object 110, the additional learning data collection processing, and the additional learning processing.
The storage unit 1g may store the computer program 5 read from a recording medium by a reading device (not shown). The recording medium is, for example, an optical disc such as a CD (Compact Disc)-ROM, a DVD (Digital Versatile Disc)-ROM, or a BD (Blu-ray (registered trademark) Disc), a flexible disk, a magnetic disk such as a hard disk, a magneto-optical disc, or a semiconductor memory. The computer program 5 according to Embodiment 1 may also be downloaded from an external computer (not shown) connected to a communication network (not shown) and stored in the storage unit 1g.
The image input unit 1c is an interface to which the imaging unit 2 is connected. The imaging unit 2 includes an image sensor such as a CCD or CMOS sensor that converts an image formed by a lens into an electric signal, and an image processing unit that AD-converts the electric signal from the image sensor into digital image data and outputs the AD-converted image data as inspection object data. The inspection object data output from the imaging unit 2 are input to the anomaly detection device 1 via the image input unit 1c and accumulated in the data storage unit 1h. The imaging unit 2 and the anomaly detection device 1 may be connected individually by a dedicated cable or via a network such as a LAN (Local Area Network). The inspection object data are digital data in which each pixel arranged vertically and horizontally is represented by a luminance value of a predetermined gradation. In the present embodiment, the data are assumed to be monochrome image data.
The output unit 1d is an interface to which the display unit 3 is connected. The display unit 3 is a liquid crystal panel, an organic EL display, electronic paper, a plasma display, or the like. The display unit 3 displays various information corresponding to the image data given from the calculation unit 1a, for example the contents of an anomaly detection result and an image of a defective inspection object 110. The display unit 3 is one example of an external output device that outputs the anomaly detection result; a buzzer, a speaker, a light emitting element, or another notification device may be used instead. A factory operator can recognize the anomaly detection result, the state of the inspection object 110, and the like from the image displayed on the display unit 3.
An operation unit 4 such as a keyboard, a mouse, or a touch sensor is connected to the input unit 1e. A signal indicating the operation state of the operation unit 4 is input to the anomaly detection device 1 via the input unit 1e, so that the calculation unit 1a can recognize the operation state of the operation unit 4.
The clock unit 1f keeps track of the timing for additionally training the neural network of the auto encoder 11. The clock unit 1f outputs a signal at a timing for confirming whether additional learning of the neural network is necessary, for example on a one-month cycle, and the calculation unit 1a then determines whether additional learning is necessary and, as needed, executes the additional learning data collection and additional learning processing.
Like the storage unit 1g, the data storage unit 1h is a nonvolatile memory such as a hard disk, an EEPROM, or a flash memory. The data storage unit 1h stores the inspection object data for learning, the inspection object data obtained during operation, the difference data, and the inspection object data for additional learning.
Hereinafter, for convenience of explanation, when it is necessary to distinguish between the inspection object data for learning used in the learning processing at the time of manufacturing the anomaly detection device 1 and the inspection object data input while the anomaly detection device 1 is operating after learning, the respective inspection object data are denoted by the symbols α and β.
The inspection object data α for learning are the image data used when training the neural network for detecting abnormalities of the inspection object 110, that is, the image data used in the manufacturing stage of the anomaly detection device 1.
The inspection object data β obtained during operation are the image data input to the anomaly detection device 1 when the inspection object 110 is actually inspected on a factory line or the like.
The difference data are data indicating the difference between the inspection object data and the feature extraction data. The feature extraction data are image data obtained by extracting the features of the inspection object 110 from the inspection object data in order to detect an abnormality of the inspection object 110. The feature extraction data and the difference data are described in detail later.
The inspection object data for additional learning are data for additionally training the neural network. When the lighting environment in the factory changes, the lens of the imaging unit 2 becomes dirty, or the installation location of the line is moved, the luminance values of the images obtained by imaging the inspection object 110 change, and the anomaly detection accuracy for the inspection object 110 may decrease. The additional learning data are data for adapting the neural network to such environmental changes through additional learning.
<Functional units of the anomaly detection device 1>
FIG. 3 is a block diagram showing a configuration example of the anomaly detection device 1 according to Embodiment 1. The anomaly detection device 1 has, as functional units, an auto encoder 11, an anomaly detection processing unit 12, a learning data collection device 13, and a learning processing unit 14. The anomaly detection processing unit 12 further includes a determination data generation unit 12a and an anomaly determination unit 12b. Each functional unit of the anomaly detection device 1 is realized by hardware such as the calculation unit 1a and the data storage unit 1h.
The auto encoder 11 is a functional unit that receives the inspection object data output from the imaging unit 2 and outputs feature extraction data obtained by extracting the features of the imaged inspection object 110. Specifically, when inspection object data are input, the auto encoder 11 outputs feature extraction data representing the features of a normal inspection object 110. An image of the inspection object data may contain dust, scratches, shadows, an abnormal portion of the inspection object 110 itself, and the like. The feature extraction data are image data in which these elements are removed and which reproduce the ideal inspection object 110 as it would appear when a normal inspection object 110 without abnormalities is imaged. The auto encoding is realized by a neural network. This neural network includes an intermediate layer that dimensionally compresses the inspection object data, and inputs and outputs image data having the same number of pixels.
FIG. 4 is a schematic diagram showing the neural network of the auto encoder 11 and its learning method. The auto encoder 11 has an input layer 11a, convolution layers (CONV layers), deconvolution layers (DECONV layers), and an output layer 11b. The input layer 11a is the layer to which the data of each pixel value of the inspection object data are input. The convolution layers dimensionally compress the inspection object data α for learning, for example by performing convolution operations, and this dimensional compression extracts the feature quantities of the inspection object 110. The deconvolution layers restore the data dimensionally compressed by the convolution layers to the original dimensions by performing deconvolution processing. Through this restoration, image data representing the original features of the inspection object 110, that is, the features of a normal inspection object 110, are restored. Although an example with two convolution layers and two deconvolution layers is shown, one layer or three or more layers may be used. The output layer 11b is the layer that outputs the data of each pixel value of the feature extraction data in which the features of the inspection object 110 have been extracted by the convolution and deconvolution layers.
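The following is a minimal sketch, in PyTorch, of a convolutional autoencoder of the kind described above (an input layer, two convolution layers, two deconvolution layers, and an output layer operating on monochrome images). It is an illustrative assumption, not the network actually used in this disclosure; the layer widths, kernel sizes, and activations are placeholders.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Illustrative CONV/DECONV autoencoder for monochrome inspection images."""

    def __init__(self):
        super().__init__()
        # Encoder: two convolution layers that dimensionally compress the input.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Decoder: two deconvolution layers that restore the original dimensions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        # Input and output have the same number of pixels.
        return self.decoder(self.encoder(x))
```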
As shown in FIG. 4, the neural network of the auto encoder 11 is machine-learned so that the input inspection object data and the output feature extraction data become the same, that is, so that the input image 110a and the output image 110b become the same. This machine learning is performed using inspection object data obtained by imaging normal inspection objects 110.
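A minimal training sketch for such a network, assuming a tensor normal_images of shape (N, 1, H, W) holding inspection object data of normal objects scaled to [0, 1]. The reconstruction loss makes the output match the input, as described above; the function name and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_autoencoder(model: nn.Module, normal_images: torch.Tensor,
                      epochs: int = 50, lr: float = 1e-3) -> nn.Module:
    """Train so that the output reproduces the input, using normal images only."""
    loader = DataLoader(TensorDataset(normal_images), batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()  # reconstruction error between input and output
    for _ in range(epochs):
        for (batch,) in loader:
            optimizer.zero_grad()
            loss = criterion(model(batch), batch)
            loss.backward()
            optimizer.step()
    return model
```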
FIG. 5 is a schematic diagram showing the function of the auto encoder 11 of extracting the features of the inspection object 110 and outputting feature extraction data. When the auto encoder 11 trained as described above receives either inspection object data obtained by imaging a normal inspection object 110 or inspection object data obtained by imaging an inspection object 110 having an abnormality, it outputs, as the feature extraction data, image data that look as if a normal inspection object 110 had been imaged.
In FIG. 5, the inspection object data input to the auto encoder 11 from the left side are image data obtained by imaging an inspection object 110 having an abnormality; they include the data of an image 110a of the inspection object 110 containing an image 111a of the abnormal portion. When these inspection object data are input to the auto encoder 11, feature extraction data as shown on the right side of FIG. 5 are output. The feature extraction data are image data that look as if a normal inspection object 110 had been imaged; they include the data of an image 110b of the inspection object 110 in which the image 111a of the abnormal portion has been removed and the original features of the inspection object 110 appear.
The feature extraction data output from the auto encoder 11 are input to the determination data generation unit 12a shown in FIG. 3, together with the inspection object data output from the imaging unit 2.
The determination data generation unit 12a calculates the difference between the inspection object data and the feature extraction data and outputs the difference data obtained by this calculation to the anomaly determination unit 12b. That is, it calculates the difference between the pixel value of each pixel of the inspection object data and the pixel value of the corresponding pixel of the feature extraction data, and outputs image data having the difference values as pixel values as the difference data. The difference data generated by the determination data generation unit 12a are stored in the data storage unit 1h in association with the inspection object data from which they were derived.
FIG. 6 is a schematic diagram showing the method of generating the difference data. The left part shows an image 110a of the inspection object data, and the center part shows an image 110b of the feature extraction data. The image 110a of the inspection object data contains an image 111a of an abnormal portion. Looking at the image 110b of the feature extraction data, it can be seen that the image 111a of the abnormal portion has been removed, resulting in an image of a normal inspection object 110. The determination data generation unit 12a generates difference data by subtracting the luminance value of each pixel of the feature extraction data from the luminance value of the corresponding pixel of the inspection object data. The right part shows the difference data, in which the image 111a of the abnormal portion has been extracted as the difference image.
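A minimal sketch of this per-pixel difference calculation, assuming the inspection object data and the feature extraction data are monochrome images held as NumPy arrays of the same shape. Taking the absolute value of the signed difference is an assumption made here for illustration.

```python
import numpy as np

def make_difference_data(inspection_img: np.ndarray,
                         feature_img: np.ndarray) -> np.ndarray:
    """Per-pixel difference between inspection object data and feature extraction data."""
    # Subtract the reconstructed (normal-looking) image from the captured image;
    # the absolute value keeps both brighter and darker deviations visible.
    diff = inspection_img.astype(np.int16) - feature_img.astype(np.int16)
    return np.abs(diff).astype(np.uint8)
```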
FIGS. 7A, 7B, and 7C are schematic diagrams showing an example of inspection object data, feature extraction data, and difference data obtained by imaging a connector as the inspection object 110. They show the method of generating difference data using inspection object data obtained by imaging a connector of the wire harness 100 as a more concrete inspection object 110; the basic operation is as described with reference to FIG. 6. FIG. 7A shows the inspection object data, in which the image 110a of the connector contains an image 111a of an abnormal portion. FIG. 7B shows the feature extraction data, which contain an image 111b of a normal inspection object 110. FIG. 7C shows the image in which the image 111a of the abnormal portion has been extracted as the difference data image.
The anomaly determination unit 12b determines whether the inspection object 110 has an abnormality based on the difference data output from the determination data generation unit 12a. The anomaly determination unit 12b is, for example, a trained one-class support vector machine, which is a classifier that separates difference data obtained from normal inspection object data from difference data obtained from abnormal inspection object data. The one-class support vector machine is only one example of a classifier; the determination method is not particularly limited as long as difference data obtained from abnormal inspection object data can be statistically discriminated as outliers.
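A minimal sketch of such an outlier classifier using scikit-learn's OneClassSVM. Feeding it a single scalar difference amount per image is an assumption made here for illustration; the features and the determination method are not fixed by the description above.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def fit_anomaly_classifier(normal_difference_amounts: np.ndarray) -> OneClassSVM:
    # Fit only on difference amounts obtained from normal inspection object data.
    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
    clf.fit(normal_difference_amounts.reshape(-1, 1))
    return clf

def is_anomalous(clf: OneClassSVM, difference_amount: float) -> bool:
    # predict() returns -1 for outliers (anomalies) and +1 for inliers (normal).
    return clf.predict(np.array([[difference_amount]]))[0] == -1
```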
<Collection of additional learning data>
Next, the additional learning processing of the auto encoder 11 will be described.
FIG. 8 is a flowchart showing the processing procedure for collecting additional learning data and performing additional learning. The calculation unit 1a executes the following processing at the periodically arriving timing for confirming whether additional learning of the auto encoder 11 is necessary. First, the calculation unit 1a reads a plurality of inspection object data α for learning from the data storage unit 1h and inputs them to the auto encoder 11, thereby generating feature extraction data for the inspection object data α (step S11). Then, based on the difference data output from the determination data generation unit 12a, the calculation unit 1a generates a frequency distribution of difference amounts (step S12).
The difference amount is a scalar quantity indicating the magnitude of the difference. For example, the calculation unit 1a applies threshold processing to the difference data to binarize it. The calculation unit 1a then performs connected region extraction on the image of the difference data after noise removal and, for each connected region, calculates the integrated value of the luminance values of the pixels constituting that region. Next, the largest integrated value among the connected regions contained in the image of the difference data is determined as the difference amount of that difference data. In other words, the integrated luminance value of the largest difference image portion contained in the difference data image is calculated as the difference amount. A difference amount is calculated for each of the plurality of difference data. Finally, the calculation unit 1a calculates the frequency distribution of the difference amounts (see FIG. 9).
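A minimal sketch of this difference amount calculation, assuming the difference data are available as a 2-D uint8 NumPy array; the binarization threshold value is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage

def difference_amount(diff_img: np.ndarray, threshold: int = 30) -> float:
    # 1) Threshold / binarize the difference data.
    binary = diff_img >= threshold
    # 2) Extract connected regions from the binarized image.
    labels, num_regions = ndimage.label(binary)
    if num_regions == 0:
        return 0.0
    # 3) Integrate the luminance values of the pixels in each connected region.
    sums = ndimage.sum(diff_img, labels=labels, index=range(1, num_regions + 1))
    # 4) The largest integrated value is the difference amount of this image.
    return float(np.max(sums))
```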
Next, the calculation unit 1a reads from the data storage unit 1h the difference data obtained during operation (step S13). These difference data are obtained by calculating the differences between the inspection object data β and the corresponding feature extraction data. Based on the read difference data, the calculation unit 1a then generates a frequency distribution of difference amounts (step S14). The difference amounts and the frequency distribution can be generated by the same processing as in step S12.
Next, the calculation unit 1a calculates a test statistic for testing the statistical difference between the frequency distribution calculated in step S12 and the frequency distribution calculated in step S14 (step S15), and determines whether the two frequency distributions have a statistically significant difference (step S16).
FIG. 9 is a graph showing a statistical difference in difference amounts. The horizontal axis indicates the difference amount and the vertical axis indicates the frequency. In the figure, A indicates the frequency distribution calculated in step S12 and B indicates the frequency distribution calculated in step S14. The calculation unit 1a calculates a t value as the test statistic and performs a t-test on the difference between the population means of the two frequency distributions. The test method is not particularly limited; an F value may be calculated and an F-test performed on the difference in variance, and other test statistics may of course be used. Alternatively, whether the frequency distributions have a statistically significant difference may be determined by calculating the similarity or statistical distance between the frequency distributions and determining whether the statistical distance is equal to or greater than a threshold.
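A minimal sketch of the statistical comparison in steps S15 and S16, assuming the difference amounts from the learning-time data and from the in-operation data are available as 1-D NumPy arrays. Welch's t-test is used here as one concrete choice; as noted above, an F-test or a distance-based comparison could be used instead.

```python
import numpy as np
from scipy import stats

def has_significant_difference(amounts_learning: np.ndarray,
                               amounts_operation: np.ndarray,
                               significance_level: float = 0.05) -> bool:
    # Welch's t-test on the population means of the two difference-amount samples.
    t_value, p_value = stats.ttest_ind(amounts_learning, amounts_operation,
                                       equal_var=False)
    return p_value < significance_level  # True: distributions differ (step S16: YES)
```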
 When it is determined in step S16 that there is no statistically significant difference (step S16: NO), the arithmetic unit 1a ends the processing. If there is no significant difference between the frequency distribution of difference amounts obtained from the learning inspection object data α used when the anomaly detection device 1 was manufactured and the frequency distribution of difference amounts obtained from the inspection object data β captured during operation, it can be considered that no environmental change that would degrade the anomaly detection accuracy has occurred and that additional learning of the autoencoder 11 is not needed.
 When it is determined that there is a statistically significant difference (step S16: YES), the arithmetic unit 1a executes steps S17 to S23 to collect, or rather select, data for additionally training the autoencoder 11. If the frequency distribution of difference amounts obtained from the learning inspection object data α differs statistically from the frequency distribution obtained from the inspection object data β captured during operation, some change may have occurred, such as a change in the lighting environment of the factory, contamination of the sensor, or relocation of the line. In that case the anomaly detection accuracy may deteriorate, so the autoencoder 11 must be additionally trained to adapt to the environmental change.
 From among the plurality of inspection object data β obtained during operation, the arithmetic unit 1a selects one inspection object data β whose difference amount, obtained from its difference data, falls outside the confidence interval of the frequency distribution A obtained from the learning inspection object data α (step S17). The confidence interval is, for example, 2σ, i.e. approximately 95%.
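 A minimal sketch of step S17: select the operation-time samples whose difference amount falls outside the 2σ (about 95%) interval of the learning-time distribution. Treating the learning-time distribution as Gaussian and parameterizing it by its sample mean and standard deviation is an assumption of this sketch.

    import numpy as np

    def indices_outside_confidence_interval(amounts_train, amounts_run, n_sigma=2.0):
        """Indices of operation-time samples outside mean +/- n_sigma of the learning data."""
        mu = np.mean(amounts_train)
        sigma = np.std(amounts_train, ddof=1)
        lower, upper = mu - n_sigma * sigma, mu + n_sigma * sigma
        amounts_run = np.asarray(amounts_run)
        return np.where((amounts_run < lower) | (amounts_run > upper))[0]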
 The inspection object data β selected in step S17 also include inspection object data β obtained by imaging an inspection object 110 that actually has an anomaly. Such inspection object data β must be excluded from the additional learning data. The processing of steps S18 and S19 following step S17 separates the inspection object data β obtained by imaging normal inspection objects 110 from the inspection object data β obtained by imaging anomalous inspection objects 110.
 Having finished step S17, the arithmetic unit 1a calculates the locations where differences occur in the images of the plurality of learning inspection object data α (step S18), and determines whether those locations match the difference occurrence locations in the image of the inspection object data β selected in step S17 (step S19).
 In other words, the arithmetic unit 1a sorts the data by the location of the difference, judging whether the difference is caused not by an anomaly of the inspection object 110 but by an environmental change or the like. This is described concretely below.
 FIG. 10 is a contour diagram showing a Gaussian mixture distribution obtained from learning inspection object data α captured from normal inspection objects 110. The arithmetic unit 1a identifies the positions where differences occur based on the difference data calculated in steps S11 and S12. In FIG. 10, reference numeral 6 indicates a difference occurrence location. These positions are preferably estimated with a Gaussian mixture model, taking the image coordinates of the difference data as (x, y) and the magnitude of the difference as p(x, y). A Gaussian mixture is represented as a superposition of several normal distributions. Using the EM algorithm or the like, the arithmetic unit 1a estimates the normal distributions i that reproduce p(x, y), calculating the number N of mixed distributions, the center coordinates μi = (μix, μiy) of each distribution, and the mixture ratios πi. In the example of FIG. 10, the center of each peak corresponds to a center coordinate μi, and i (= 1, 2, ..., N) is an integer indexing the mixed normal distributions.
 In the example shown in FIG. 10, six peaks appear; in the simplest case the difference magnitude p(x, y) can be represented by a mixture of six normal distributions i. Note that the number of peaks and the number of superimposed normal distributions i do not necessarily coincide.
 FIG. 11 is a schematic diagram showing difference data obtained during the post-learning anomaly detection processing.
 Based on the difference data selected in step S17, the arithmetic unit 1a calculates the center coordinates μk = (μkx, μky) of the differences (k = 1, 2, ...) using the k-means or k-means++ method or the like. In the example shown in FIG. 11, the location marked X where a difference occurs is the difference center coordinate μk, and the number of difference centers is one.
 FIGS. 12A and 12B are schematic diagrams showing how inspection object data β obtained by imaging an anomalous inspection object 110 is distinguished from inspection object data β obtained by imaging a normal inspection object 110. The arithmetic unit 1a calculates the statistical distance between each center coordinate μi and the center coordinate μk, and identifies the center coordinate μi closest to μk. As the statistical distance, the L2 norm, i.e. the Euclidean distance, may be calculated, for example. The arithmetic unit 1a then determines whether the center coordinate μi, the difference occurrence location of the learning inspection object data α, and the center coordinate μk, the difference occurrence location of the inspection object data β obtained during operation, are statistically close.
 Specifically, the peak of the Gaussian mixture closest to the center coordinate μk of the operation-time difference location, that is, the normal distribution j, is identified. When the value obtained by multiplying the value pj(μk) of that normal distribution j by the reciprocal of the mixture ratio πj is at least a predetermined threshold, the arithmetic unit 1a determines that the difference occurrence locations match.
 In the example shown in FIG. 12A, the closest center coordinate μi and the center coordinate μk are statistically far apart, so the inspection object data β selected in step S17 can be presumed to be image data of an anomalous inspection object 110.
 In the example shown in FIG. 12B, the closest center coordinate μi and the center coordinate μk are statistically close, so the inspection object data β selected in step S17 can be presumed to be image data of a normal inspection object 110 that is affected by an environmental change or the like.
 When it is determined in step S19 that the difference occurrence locations match (step S19: YES), the arithmetic unit 1a selects the inspection object data β chosen in step S17 as additional learning data (step S20). When it is determined that the difference occurrence locations do not match (step S19: NO), the arithmetic unit 1a excludes the inspection object data β chosen in step S17 from the additional learning data (step S21).
 Next, the arithmetic unit 1a determines whether all of the inspection object data β accumulated during operation have been screened for suitability as additional learning data (step S22). If the screening is not finished (step S22: NO), the arithmetic unit 1a returns the processing to step S17.
 Although an example in which all of the inspection object data β stored in the data storage unit 1h is screened has been described, only a part of the stored inspection object data β may be used as candidates for the learning data.
 If it is determined that the screening is finished (step S22: YES), the learning inspection object data α stored in the data storage unit 1h and the inspection object data β selected in steps S17 to S21 are mixed at random (step S23). The mixed inspection object data α, β are preferably stored in the data storage unit 1h as learning data.
 The mixing ratio of the learning inspection object data α to the selected inspection object data β is not particularly limited, but they are preferably mixed at the ratio corresponding to the confidence interval used in step S17. That is, a number of inspection object data α corresponding to the proportion inside the confidence interval is preferably mixed with a number of the selected inspection object data β corresponding to the proportion outside the confidence interval.
 For example, when the confidence interval is 2σ = 95% and the number of inspection object data β selected in step S20 is 500, 9,500 inspection object data α may be read from the data storage unit 1h and mixed in.
 Next, the arithmetic unit 1a performs additional learning of the autoencoder 11 using the inspection object data α, β mixed in step S23 as the additional learning data (step S24), and ends the processing. The additional learning method is the same as the method used to train the autoencoder 11 at the time of manufacture: the arithmetic unit 1a trains the neural network constituting the autoencoder 11 so that the feature extraction data it outputs becomes identical to the inspection object data α, β given as input.
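 A minimal sketch of step S24, assuming a PyTorch implementation of the autoencoder 11: the already-trained network is trained further so that its output reproduces its input on the mixed α/β data. The model interface, optimizer settings, batch size, and epoch count are assumptions; the description only requires input-equals-output training identical to the initial training.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    def additional_training(autoencoder: nn.Module, mixed_images: torch.Tensor,
                            epochs: int = 10, lr: float = 1e-4):
        """mixed_images: float tensor shaped as the autoencoder expects, e.g. (N, C, H, W)."""
        loader = DataLoader(TensorDataset(mixed_images), batch_size=32, shuffle=True)
        optimizer = torch.optim.Adam(autoencoder.parameters(), lr=lr)
        criterion = nn.MSELoss()
        autoencoder.train()
        for _ in range(epochs):
            for (batch,) in loader:
                optimizer.zero_grad()
                reconstruction = autoencoder(batch)       # feature extraction data
                loss = criterion(reconstruction, batch)   # train toward input == output
                loss.backward()
                optimizer.step()
        return autoencoder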
 With the learning data collection method, the learning data collection device 13, the anomaly detection system, and the computer program 5 configured as described above, when the sensor values change because of, for example, a change in the lighting environment of the factory, contamination of the sensor, or relocation of the line, learning data for additionally training the autoencoder 11 to adapt to the environmental change can be collected automatically.
 Moreover, since the inspection object data β that fall outside the confidence interval obtained from the learning inspection object data α (see FIG. 9) are selected as additional learning data, the autoencoder 11 can be additionally trained effectively.
 Furthermore, since the learning inspection object data α and the inspection object data β accumulated during operation are mixed at random in an appropriate ratio and the mixed data α, β are used as additional learning data, learning skewed toward anomalous data is avoided and the autoencoder 11 can be additionally trained effectively.
 Furthermore, since the inspection object data α, β are mixed according to the proportions of the confidence interval, biased learning of anomalous data is avoided and the autoencoder 11 can be additionally trained effectively.
 Furthermore, by comparing the difference occurrence locations of the inspection object data α and β, the inspection object data β obtained by imaging normal inspection objects 110 can be selected, and the autoencoder 11 can be additionally trained appropriately.
 Furthermore, the inspection object data β obtained by imaging normal inspection objects 110 can be selected automatically, and the autoencoder 11 can be additionally trained.
 Furthermore, the statistical determination in step S16 of the difference between the difference data of the inspection object data α and β makes it possible to judge whether additional learning is necessary. Useless additional learning can therefore be avoided, and the autoencoder 11 can be additionally trained only when needed.
 Furthermore, the necessity of additional learning can be checked periodically using the clock unit 1f, and the autoencoder 11 can be additionally trained as needed.
 Furthermore, since the anomaly detection device 1 according to Embodiment 1 detects an anomaly of the inspection object 110 from image data obtained by imaging the inspection object 110, the learning data collection device can select data suitable for additional learning by image analysis of the inspection object data β.
 In Embodiment 1, the anomaly detection of the inspection object 110, the collection of data for additionally training the autoencoder 11, and the additional learning have been described using image data as the inspection object data, but the format and content of the inspection object data are not particularly limited. For example, the inspection object data may be audio data detected by a microphone, vibration data detected by a vibration sensor, or voltage data or current data detected by a voltmeter or an ammeter.
 In Embodiment 1, the processing shown in FIG. 8 is executed periodically and the additional learning processing is executed when there is a statistical difference between the distribution of difference amounts of the inspection object data α and that of the inspection object data β. Alternatively, in step S16 the arithmetic unit 1a may determine whether the accumulated amount of inspection object data β showing a statistical difference exceeds a predetermined threshold, and execute the processing from step S17 onward only when the threshold is exceeded.
 In this case, the processing related to additional learning is executed only after enough inspection object data β, the candidates for additional learning data, has been accumulated, so useless additional learning is avoided and the autoencoder can be additionally trained effectively.
(Embodiment 2)
 Embodiment 2 differs from Embodiment 1 in that, among the inspection object data β outside the confidence interval obtained from the learning inspection object data α, the data suitable as additional learning data are selected semi-automatically, taking the operator's judgment into account. The differences from Embodiment 1 are mainly described below. The other configurations and effects are the same as in Embodiment 1, so corresponding parts are given the same reference symbols and detailed description is omitted.
 FIG. 13 is a flowchart showing the procedure for collecting additional learning data according to Embodiment 2. Having executed steps S11 to S16 of Embodiment 1, the arithmetic unit 1a selects, from among the plurality of inspection object data β obtained during operation, the inspection object data β whose difference amounts fall outside the confidence interval of the frequency distribution A obtained from the learning inspection object data α (step S217). In step S217 of Embodiment 2, the arithmetic unit 1a selects all of the inspection object data β outside the confidence interval.
 The arithmetic unit 1a then calculates the difference occurrence locations in the images of the plurality of learning inspection object data α (step S218). Specifically, as in step S18 of Embodiment 1, the arithmetic unit 1a estimates the difference occurrence locations with a Gaussian mixture distribution.
 Next, the arithmetic unit 1a calculates the difference occurrence locations in the images of the plurality of inspection object data β selected in step S217 (step S219). Specifically, as in step S218, the difference occurrence locations are estimated with a Gaussian mixture distribution.
 Next, the arithmetic unit 1a compares the difference occurrence locations calculated in step S218 with those calculated in step S219, and extracts the difference occurrence locations of the inspection object data β that lie away from the difference occurrence locations of the inspection object data α (step S220). In other words, it extracts the inspection object data β that have differences at locations other than the difference locations usually seen in the learning inspection object data α.
 Since such inspection object data β are highly likely to be anomalous data caused by an anomaly of the inspection object 110, the operator judges whether they are suitable as learning data.
 The arithmetic unit 1a then selects the plurality of inspection object data β that have differences at the difference occurrence locations extracted in step S220 (step S221).
 Next, the arithmetic unit 1a outputs the inspection object data β selected in step S221, together with a difference location display image 7 indicating the difference occurrence locations extracted in step S220, to the display unit 3 via the output unit 1d (step S222). The arithmetic unit 1a then accepts, through the operation unit 4, a judgment on whether the data are suitable as additional learning data (step S223).
 FIGS. 14A and 14B are schematic diagrams showing how the selection of learning data is accepted. In FIG. 14A, the difference location display image 7 is displayed superimposed on an image 110a of the inspection object 110 related to the inspection object data β. In the example of FIG. 14A, the inspection object 110 has an anomaly, a difference arises from that anomaly, and the difference occurrence location is indicated by the difference location display image 7. In the example of FIG. 14B, the inspection object 110 is normal, and the difference location display image 7 indicates a normally occurring difference that is not caused by an anomaly.
 The operator checks the image shown in FIG. 14A or FIG. 14B displayed on the display unit 3 and selects, with the operation unit 4, whether the data are suitable as additional learning data.
 Based on the selection result received through the operation unit 4, the arithmetic unit 1a screens the additional learning data (step S224). When an operation indicating that the data are suitable as additional learning data is received, the arithmetic unit 1a selects the inspection object data β having differences at the difference occurrence locations extracted in step S220 as additional learning data. Conversely, when an operation indicating that the data are unsuitable is received, the arithmetic unit 1a excludes those inspection object data β from the additional learning data.
 When the difference occurrence locations calculated in step S218 and those calculated in step S219 statistically coincide, the arithmetic unit 1a also automatically selects the inspection object data β having differences at those locations as additional learning data.
 According to Embodiment 2, the operator checks whether the inspection object data β that may be anomalous are suitable as learning data, which enables semi-automatic collection of additional image data that takes the operator's judgment into account, so the autoencoder 11 can be additionally trained even more effectively.
(Embodiment 3)
 FIG. 15 is a schematic diagram showing a configuration example of the anomaly detection system according to Embodiment 3.
 The anomaly detection system according to Embodiment 3 differs from Embodiment 1 in that the learning data collection device 301 and the anomaly detection device 310 that performs the anomaly detection are configured as separate units connected via a communication line L. The differences from Embodiment 1 are mainly described below. The other configurations and effects are the same as in Embodiment 1, so corresponding parts are given the same reference symbols and detailed description is omitted.
 The hardware configuration of the learning data collection device 301 is the same as that of the anomaly detection device of Embodiment 1, and further includes a communication unit 1i that communicates with the anomaly detection device 310. The hardware configuration on the anomaly detection device 310 side is similar, and further includes a communication unit 313 that communicates with the learning data collection device 301.
 During operation, the anomaly detection device 310 transmits the inspection object data β and the difference data to the learning data collection device 301 via the communication unit 313. The learning data collection device 301 receives the transmitted inspection object data β and difference data with the communication unit 1i. The method of collecting the learning data is the same as in Embodiment 1: the learning data collection device 301 determines whether additional learning of the anomaly detection device 310 is necessary and collects the data for the additional learning. The learning data collection device 301 then transmits the collected learning data to the anomaly detection device 310 via the communication unit 1i. The anomaly detection device 310 receives the learning data with the communication unit 313 and performs additional learning based on the received data.
 The weight coefficients of the neural network constituting the autoencoder 311 and the various parameters defining its layer structure may also be exchanged between the learning data collection device 301 and the anomaly detection device 310 so that the additional learning itself is executed on the learning data collection device 301 side.
 The collection of data for additionally training the autoencoder 311 and the processing related to the additional learning can be distributed as appropriate according to the performance of the hardware, and there is no particular restriction on where the processing is executed.
 In the anomaly detection system according to Embodiment 3 as well, as in Embodiment 1, when the sensor values change because of a change in the lighting environment of the factory, contamination of the sensor, relocation of the line, or the like, learning data for additionally training the autoencoder 311 to adapt to the environmental change can be collected.
DESCRIPTION OF SYMBOLS
1 anomaly detection device
1a arithmetic unit
1b temporary storage unit
1c image input unit
1d output unit
1e input unit
1f clock unit
1g storage unit
1h data storage unit
2 imaging unit
3 display unit
4 operation unit
5 computer program
7 difference location display image
11 autoencoder
12 anomaly detection processing unit
12a determination data generation unit
12b anomaly determination unit
13 learning data collection device
14 learning processing unit
100 wire harness
110 inspection object
301 learning data collection device

Claims (14)

  1.  A learning data collection method for collecting data for additionally training a trained autoencoder that receives inspection object data for detecting an anomaly of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data, the method comprising:
     statistically comparing a difference between a plurality of first inspection object data input to the autoencoder in order to train the autoencoder before training and a plurality of first feature extraction data output therefrom, with a difference between a plurality of second inspection object data input to the autoencoder after training and a plurality of second feature extraction data output therefrom; and
     when there is a statistical difference, selecting the second inspection object data relating to a normal inspection object as data for additional learning.
  2.  The learning data collection method according to claim 1, wherein the second inspection object data falling outside a predetermined confidence interval obtained statistically from the difference between the plurality of first inspection object data and the plurality of first feature extraction data output therefrom are selected as data for additional learning.
  3.  The learning data collection method according to claim 2, wherein a number of the plurality of first inspection object data corresponding to the proportion inside the confidence interval and a number of the plurality of second inspection object data corresponding to the proportion outside the confidence interval are selected as data for additional learning.
  4.  The learning data collection method according to any one of claims 1 to 3, wherein learning inspection object data obtained by randomly mixing the plurality of first inspection object data and the plurality of second inspection object data relating to normal inspection objects are selected as data for additional learning.
  5.  The learning data collection method according to any one of claims 1 to 4, wherein the second inspection object data relating to a normal inspection object are selected by comparing locations where the difference between the plurality of first inspection object data and the plurality of first feature extraction data output therefrom occurs with locations where the difference between the second inspection object data and the feature extraction data output therefrom occurs.
  6.  The learning data collection method according to claim 5, wherein, when there is no statistical difference between the locations where the difference between the plurality of first inspection object data and the plurality of first feature extraction data output therefrom occurs and the locations where the difference between the second inspection object data and the feature extraction data output therefrom occurs, the second inspection object data are selected as the second inspection object data relating to a normal inspection object.
  7.  The learning data collection method according to claim 5, wherein, when there is a statistical difference between the locations where the difference between the plurality of first inspection object data and the plurality of first feature extraction data output therefrom occurs and the locations where the difference between the plurality of second inspection object data and the feature extraction data output therefrom occurs, the second inspection object data having a difference at the differing occurrence location are externally output together with data indicating that occurrence location, and a selection of whether the data are the second inspection object data relating to a normal inspection object is accepted.
  8.  The learning data collection method according to any one of claims 1 to 7, wherein whether additional learning of the autoencoder is necessary is determined based on the result of statistically comparing the difference between the plurality of first inspection object data and the plurality of first feature extraction data output therefrom with the difference between the plurality of second inspection object data and the feature extraction data output therefrom.
  9.  The learning data collection method according to claim 8, wherein the plurality of second inspection object data input to the autoencoder after training are accumulated, and additional learning is determined to be necessary when the accumulated amount of second inspection object data having a statistical difference from the plurality of first inspection object data exceeds a threshold.
  10.  The learning data collection method according to claim 8 or 9, wherein the necessity of additional learning is determined periodically.
  11.  The learning data collection method according to any one of claims 1 to 10, wherein the inspection object data are image data obtained by imaging the inspection object.
  12.  A learning data collection device for collecting data for additionally training a trained autoencoder that receives inspection object data for detecting an anomaly of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data, the device comprising:
     a comparison unit that statistically compares a difference between a plurality of first inspection object data input to the autoencoder in order to train the autoencoder before training and a plurality of first feature extraction data output therefrom, with a difference between a plurality of second inspection object data input to the autoencoder after training and a plurality of second feature extraction data output therefrom; and
     a selection unit that, when there is a statistical difference, selects the second inspection object data relating to a normal inspection object as data for additional learning.
  13.  An anomaly detection system comprising:
     a trained autoencoder that receives inspection object data for detecting an anomaly of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data;
     an anomaly detection processing unit that detects an anomaly of the inspection object by comparing the inspection object data input to the autoencoder with the feature extraction data output therefrom;
     the learning data collection device according to claim 12, which collects data for additionally training the autoencoder; and
     a learning processing unit that additionally trains the autoencoder based on the data collected by the learning data collection device.
  14.  A computer program for causing a computer to collect data for additionally training a trained autoencoder that receives inspection object data for detecting the presence or absence of an anomaly of an inspection object and outputs feature extraction data obtained by extracting features of the input inspection object data, the program causing the computer to execute processing of:
     statistically comparing a difference between a plurality of first inspection object data input to the autoencoder in order to train the autoencoder before training and a plurality of first feature extraction data output therefrom, with a difference between a plurality of second inspection object data input to the autoencoder after training and a plurality of second feature extraction data output therefrom; and
     when there is a statistical difference, selecting the second inspection object data relating to a normal inspection object as data for additional learning.
PCT/JP2019/003397 2018-03-13 2019-01-31 Learning data collection method, learning data collection device, abnormality detection system, and computer program WO2019176354A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018045727 2018-03-13
JP2018-045727 2018-03-13

Publications (1)

Publication Number Publication Date
WO2019176354A1 true WO2019176354A1 (en) 2019-09-19

Family

ID=67906501

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/003397 WO2019176354A1 (en) 2018-03-13 2019-01-31 Learning data collection method, learning data collection device, abnormality detection system, and computer program

Country Status (1)

Country Link
WO (1) WO2019176354A1 (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06281592A (en) * 1993-03-29 1994-10-07 Sumitomo Metal Ind Ltd Surface inspecting method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IKEDA, YASUHIRO ET AL.: "Retraining anomaly detection model using autoencoder", IEICE TECHN. REPORT, vol. 117, no. 397, 15 January 2018 (2018-01-15), pages 77 - 82 *
TSUKADA, MINETO ET AL.: "Accelerating sequential learning algorithm OS-ELM using FPGA-NIC", IEICE TECHN. REPORT, vol. 117, no. 379, 11 January 2018 (2018-01-11), pages 133 - 138 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021114121A (en) * 2020-01-17 2021-08-05 株式会社日立製作所 Method for monitoring data to be monitored
JP7246330B2 (en) 2020-01-17 2023-03-27 株式会社日立製作所 How to monitor monitored data
WO2021193931A1 (en) * 2020-03-27 2021-09-30 Kyb株式会社 Machine learning device, learning model generation method, and program
WO2022091305A1 (en) * 2020-10-29 2022-05-05 日本電気株式会社 Behavior estimation device, behavior estimation method, route generation device, route generation method, and computer-readable recording medium
WO2022158065A1 (en) * 2021-01-20 2022-07-28 パナソニックIpマネジメント株式会社 Fitting detection method, fitting detection device, and fitting detection system
JP7457908B2 (en) 2021-01-20 2024-03-29 パナソニックIpマネジメント株式会社 Mating detection method, mating detection device, and mating detection system
CN113127305A (en) * 2021-04-22 2021-07-16 北京百度网讯科技有限公司 Abnormality detection method and apparatus
CN113127305B (en) * 2021-04-22 2024-02-13 北京百度网讯科技有限公司 Abnormality detection method and device


Legal Events

121 (EP): The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 19767660; Country of ref document: EP; Kind code of ref document: A1.
NENP: Non-entry into the national phase. Ref country code: DE.
122 (EP): PCT application non-entry in European phase. Ref document number: 19767660; Country of ref document: EP; Kind code of ref document: A1.
NENP: Non-entry into the national phase. Ref country code: JP.