WO2022202456A1 - Appearance inspection method and appearance inspection system - Google Patents

Appearance inspection method and appearance inspection system

Info

Publication number
WO2022202456A1
Authority
WO
WIPO (PCT)
Prior art keywords
learning
evaluation value
extended
evaluation
correct
Prior art date
Application number
PCT/JP2022/011438
Other languages
French (fr)
Japanese (ja)
Inventor
敦 宮本
晟 伊藤
直明 近藤
Original Assignee
株式会社日立製作所
Priority date
Filing date
Publication date
Application filed by 株式会社日立製作所 filed Critical 株式会社日立製作所
Priority to JP2023509035A (JP7549736B2)
Publication of WO2022202456A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • The present disclosure relates to an appearance inspection apparatus and an appearance inspection method based on machine learning. More specifically, disclosed are an apparatus and a method that use an evaluation engine, which has learned in advance the relationship between learning images obtained by imaging inspection objects for learning and their correct evaluation values, to estimate evaluation values from inspection images of actual inspection objects with high accuracy and to automatically evaluate the workmanship of the inspection objects.
  • For example, WO2020/129617 (Patent Document 1) discloses a method of determining shape defects of welded portions using machine learning.
  • In learning the evaluation engine, an image of an inspection object for learning (a learning image) is input, and the internal parameters of the evaluation engine (such as network weights and biases) are updated so that the difference between the estimated evaluation value output from the evaluation engine and the correct evaluation value taught by the inspector becomes small.
  • Data Augmentation is known as a general method for suppressing overfitting.
  • This is a technique of artificially padding the learning images prepared by an inspector by applying processing such as translation and rotation to them. For example, if inspection targets with a certain correct evaluation value happen to appear in the upper left of all learning images, the evaluation engine may erroneously learn that position is an important criterion even though the correct evaluation value and the position of the inspection target in the image are unrelated. Such erroneous learning can be suppressed by using data augmentation to increase the variation in position.
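  • The following is a minimal Python sketch, not part of the original disclosure, of this conventional image-side augmentation: it varies the image (translation and rotation) while leaving the correct evaluation value unchanged. The shift and rotation ranges and all function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift, rotate

def augment_image(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return a randomly translated and rotated copy of one learning image."""
    dy, dx = rng.integers(-10, 11, size=2)      # random translation in pixels (assumed range)
    angle = rng.uniform(-15.0, 15.0)            # random rotation in degrees (assumed range)
    moved = shift(image, (dy, dx), mode="nearest")
    return rotate(moved, angle, reshape=False, mode="nearest")

def augment_dataset(images, labels, copies_per_image=4, seed=0):
    """Conventional augmentation: the correct evaluation value g_i is not changed."""
    rng = np.random.default_rng(seed)
    out_images, out_labels = [], []
    for f_i, g_i in zip(images, labels):
        out_images.append(f_i)
        out_labels.append(g_i)
        for _ in range(copies_per_image):
            out_images.append(augment_image(f_i, rng))
            out_labels.append(g_i)               # label stays the same; only the image varies
    return out_images, out_labels
```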
  • WO2020/129617 (Patent Document 1) also discloses data augmentation such as changing the position of a defectively shaped portion.
  • However, the correct evaluation values taught by inspectors include fluctuations in judgment due to individual differences as well as erroneous teaching.
  • In some cases, one learning sample may have multiple correct evaluation values, and overfitting is considered particularly likely to occur under such circumstances.
  • The above-described padding of inspection images alone cannot sufficiently resolve the variation in correct evaluation values. Therefore, a mechanism is required that maintains the performance of evaluation-value estimation by machine learning even if the quality of the learning samples deteriorates.
  • A visual inspection method and a visual inspection system according to one aspect of the present disclosure: (a) store learning data, which is a set of learning samples, each being a pair of a learning image obtained by imaging an inspection object for learning and a correct evaluation value for that learning image, in a storage resource; (b) change the correct evaluation value of a learning sample included in the learning data according to a changeable predetermined variation distribution, and generate an extended learning sample, which is a learning sample having the changed value as its correct evaluation value; (c) generate extended learning data, which is a set of the extended learning samples; (d) learn the relationship between learning images and evaluation values based on the extended learning data to determine the internal parameters of an evaluation engine; (e) acquire an inspection image of an inspection object; and (f) input the inspection image to the evaluation engine and obtain an estimated evaluation value, which is an estimate of the evaluation value of the inspection image, from the output of the evaluation engine.
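  • As a rough illustration only (not the disclosed implementation), the short Python sketch below walks through steps (a) to (f) with a k-nearest-neighbor classifier standing in for the evaluation engine; the data layout, the engine choice, and all names are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def augment_labels(labels, variation_dist, rng):
    """(b)(c) Draw one extended correct evaluation value per sample from d(g'; g).

    variation_dist maps each correct value g to a probability vector aligned
    with the sorted list of possible values (`classes` below).
    """
    classes = sorted(variation_dist)
    return [rng.choice(classes, p=variation_dist[g]) for g in labels]

def run_inspection(learn_images, learn_labels, variation_dist, test_images, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray([f.ravel() for f in learn_images])            # (a) stored learning data
    g_prime = augment_labels(learn_labels, variation_dist, rng)  # (b)(c) extended labels
    engine = KNeighborsClassifier(n_neighbors=3).fit(X, g_prime) # (d) train the evaluation engine
    X_test = np.asarray([f.ravel() for f in test_images])        # (e) inspection images
    return engine.predict(X_test)                                # (f) estimated evaluation values
```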
  • According to this aspect, in the automation of appearance inspection using machine learning, overfitting of the evaluation engine can be suppressed even against variations in correct evaluation values caused by fluctuations in judgment, erroneous teaching, the existence of multiple correct evaluation values, and the like, and an improvement in the accuracy of the evaluation engine can be expected.
  • FIG. 1 is a diagram showing an automatic visual inspection system and the overall processing sequence. FIG. 2 is a diagram showing expansion of learning data in the learning phase. FIG. 3 is a diagram showing an example of an inspection object. FIG. 4 is a diagram showing an example of a variation distribution of correct evaluation values. FIG. 5 is a diagram showing an example of an inspection object. FIG. 6 is a diagram showing an example of a variation distribution of correct evaluation values. FIG. 7 is a diagram showing an example of a known learning sequence. FIG. 8 is a diagram showing an example of a learning sequence. FIG. 9 is a diagram showing an example of a learning sequence. FIG. 10 is a diagram showing updating of the variation distribution in the learning phase. FIG. 11 is a diagram showing a GUI for inputting and displaying a variation distribution of correct evaluation values. FIG. 12 is a diagram showing the hardware configuration of the automatic visual inspection system.
  • (1) A learning image acquisition step of imaging inspection objects for learning and acquiring learning images {f_i} (i=1,…,Nf, Nf: number of images); a learning data input step of inputting, as learning data, the set {(f_i, g_i)} (i=1,…,Nf) of learning samples, each being a pair (f_i, g_i) of a learning image f_i and the correct evaluation value g_i taught by the user for that learning image; a variation distribution input step of inputting a variation distribution d(g'_i; g_i) of the correct evaluation values; an extended learning data generation step of generating extended learning data consisting of a plurality of extended learning samples (f_i, g'_ij) (j=1,…,NS_i, NS_i: number of extensions) generated by changing the value of the correct evaluation value g_i based on the variation distribution d(g'_i; g_i); a learning step of learning the relationship between the learning images and the evaluation values based on the extended learning data to determine the internal parameters of the evaluation engine; an inspection image acquisition step of imaging inspection objects and acquiring inspection images f''_i (i=1,…,Nf'', Nf'': number of images); and an evaluation step of inputting the inspection image f''_i to the trained evaluation engine and outputting the estimated evaluation value g''^_i.
  • The variation distribution d(g'_i; g_i) given in the variation distribution input step is characterized in that the distribution changes with the correct evaluation value g_i as a parameter.
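  • A minimal sketch, under assumed data structures, of the extended learning data generation step described above: each learning sample (f_i, g_i) yields NS_i extended samples (f_i, g'_ij) whose labels are drawn from d(g'; g_i). The dictionary representation of d is an assumption made only for illustration.

```python
import numpy as np

def generate_extended_learning_data(learning_data, d, ns=6, seed=0):
    """learning_data: list of (f_i, g_i); d: dict mapping g_i -> (values, probabilities)."""
    rng = np.random.default_rng(seed)
    extended = []
    for f_i, g_i in learning_data:
        values, probs = d[g_i]                      # support and probabilities of d(g'; g_i)
        g_prime = rng.choice(values, size=ns, p=probs)
        extended.extend((f_i, g_ij) for g_ij in g_prime)   # S_i = {(f_i, g'_ij)}
    return extended
```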
  • A key issue in this processing is how to give the variation distribution d appropriately.
  • This embodiment is characterized in that the variation distribution d is switched according to the value of the correct evaluation value g_i. That is, the variation distribution is given for each correct evaluation value as a probability distribution d(g'_i; g_i) with the extended correct evaluation value g'_i as a variable and the correct evaluation value g_i as a parameter.
  • g'_ij is the j-th extended correct evaluation value g'_i generated based on the variation distribution d(g'_i; g_i).
  • For the i-th learning sample (f_i, g_i) included in the s-th mini-batch m_s, one extended learning sample (f_i, g'_i) is generated based on the variation distribution d and substituted for the learning sample (f_i, g_i), thereby generating an extended mini-batch m'_s.
  • In general data augmentation, a plurality of extended learning samples are generated from one learning sample, which increases the learning time: because of the increase in the number of learning samples, the number of learning samples included in one mini-batch, the number of mini-batches, or both increase compared with before data augmentation.
  • In this embodiment, each learning sample (f_i, g_i) before extension is simply replaced with one extended learning sample (f_i, g'_i), so neither the number of learning samples nor the number of mini-batches changes. Regarding the variation distribution d as a probability distribution, a random number following this probability distribution is used to generate the extended correct evaluation value g'_i.
  • When the number of generations is small, the extended learning samples may be biased, but as learning proceeds over many epochs, the distribution of extended correct evaluation values in the learned extended learning samples can be expected to approach the variation distribution d.
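  • A possible sketch of this replacement-style expansion (hypothetical names, not the disclosed code): every sample in a mini-batch is swapped for exactly one extended sample, so the sample count and the number of mini-batches stay the same, and because a new g'_i is drawn at every epoch, the labels seen over many epochs follow d.

```python
import numpy as np

def extend_mini_batch(mini_batch, d, rng):
    """mini_batch: list of (f_i, g_i); returns the extended mini-batch m'_s of equal size."""
    extended = []
    for f_i, g_i in mini_batch:
        values, probs = d[g_i]
        g_prime = rng.choice(values, p=probs)   # one draw from d(g'; g_i)
        extended.append((f_i, g_prime))         # replace the sample, do not append to it
    return extended
```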
  • When the evaluation engine is trained iteratively while the extended learning data are generated, the variation distribution d is updated during the iterations, and the extended learning data are generated based on the updated variation distribution.
  • As a concrete example, the reliability R(g'^_i) of the estimated evaluation value g'^_i output when an extended learning sample is input to the evaluation engine during learning is calculated, and the variation distribution is updated during the iterations with the reliability R(g'^_i) as a parameter.
  • Examples of how to calculate the reliability include the difference between the correct evaluation value g_i and the estimated evaluation value g'^_i; the value d(g'^_i; g_i) obtained by regarding the variation distribution d(g'_i; g_i) as a probability distribution and substituting the estimated evaluation value g'^_i (an index of how likely it is that the estimated value becomes g'^_i); and, for classification problems, methods based on the spread of the degrees of membership to each classification class.
  • Regarding the degrees of membership to the classification classes, if the degree of membership to the correct class is outstanding the reliability is high, whereas if there are also degrees of membership to other classes, the reliability decreases accordingly.
  • The variation distribution is changed during learning according to such reliability.
  • By repeatedly updating the variation distribution and the evaluation engine, it is possible to estimate an appropriate variation distribution that could not be assumed before learning, and to obtain a higher-performance evaluation engine through data augmentation based on this variation distribution.
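  • The snippet below is a hedged sketch of one way such a reliability could be computed for a classification engine and fed back into the variation distribution; the disclosure only requires that the distribution be updated with the reliability as a parameter, so the membership-based measure and the blending rule here are assumptions.

```python
import numpy as np

def reliability_from_memberships(membership, correct_index):
    """High when the correct class dominates the engine's class-membership output."""
    m = np.asarray(membership, dtype=float)
    m = m / m.sum()
    return float(m[correct_index])              # 1.0 means confidently correct

def update_variation_distribution(d_g, mean_membership, alpha=0.1):
    """Nudge d(g'; g) toward the engine's average membership profile for class g."""
    d_g = np.asarray(d_g, dtype=float)
    new_d = (1.0 - alpha) * d_g + alpha * np.asarray(mean_membership, dtype=float)
    return new_d / new_d.sum()                  # keep it a probability distribution
```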
  • Note that the embodiments described below do not limit the present invention, and not all of the elements described in the embodiments and their combinations are essential to the solution of the invention.
  • FIG. 1 shows an automatic visual inspection system and overall processing sequence in the present invention.
  • The processing sequence is roughly divided into a learning phase 100 and an inspection phase 101.
  • The learning images are acquired by imaging the surface or interior of the inspection object as digital images with an imaging device such as a CCD camera, an optical microscope, a charged particle microscope, an ultrasonic inspection device, or an X-ray inspection device.
  • As another form of acquisition, it is also possible to simply receive images captured by another system and store them in a storage resource of the automatic visual inspection system.
  • Next, a correct evaluation value g_i is assigned to each learning image f_i (103).
  • The evaluation value is an index for evaluating various aspects of workmanship, such as shape defects, assembly defects, adhesion of foreign matter, internal defects and their criticality, and surface scratches, spots, and stains, and various definitions are possible.
  • The correct evaluation value g_i is assigned with respect to these evaluation criteria based on the inspector's visual judgment or on numerical values analyzed by other inspection devices and methods.
  • Ideally this correct evaluation value g_i is accurate, but it may contain variations caused by fluctuations in the inspector's judgment, erroneous teaching, the existence of multiple correct evaluation values, and the like.
  • This learning data is used to train the evaluation engine (104).
  • An evaluation engine is an estimator based on machine learning that takes an inspection image f_i as an input and outputs an estimated evaluation value g ⁇ _i.
  • Various existing machine learning engines can be used as the evaluation engine, for example deep neural networks typified by the Convolutional Neural Network (CNN), Support Vector Machine (SVM) / Support Vector Regression (SVR), k-nearest neighbor (k-NN), and the like. These evaluation engines can handle classification and regression problems.
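  • For concreteness, a minimal example using one of the engine types named above (SVR, suited to a regression-type evaluation value such as a roughness level); flattening images into feature vectors and the hyperparameters are assumptions made only for illustration.

```python
import numpy as np
from sklearn.svm import SVR

def train_svr_engine(learning_images, correct_values):
    X = np.asarray([f.ravel() for f in learning_images])  # f_i flattened into feature vectors
    y = np.asarray(correct_values, dtype=float)            # g_i, e.g. a roughness level
    return SVR(kernel="rbf", C=1.0).fit(X, y)

def estimate(engine, inspection_images):
    X = np.asarray([f.ravel() for f in inspection_images])
    return engine.predict(X)                                # estimated evaluation values
```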
  • In the learning, the learning samples are used to optimize the internal parameters of the evaluation engine so that an estimated evaluation value g^_i close to the taught correct evaluation value g_i is output.
  • In addition, the variation distribution d of the correct evaluation values g_i is input.
  • In the inspection phase, the inspection image f''_i is input to the trained evaluation engine, and the estimated evaluation value g''^_i is output (107).
  • The estimated evaluation value is checked by an inspector as necessary (108), and if there is a defect, countermeasures are fed back to the manufacturing process.
  • In the above description, each of the learning images f_i and the inspection images f''_i is a single image, but a group of images may also be used as the input to the evaluation engine to estimate an evaluation value. In that case, the learning image group and the inspection image group are denoted f_i and f''_i, respectively.
  • The data augmentation method in this embodiment is shown in FIG. 2.
  • The pair (f_i, g_i) of a learning image f_i and the correct evaluation value g_i taught by the inspector is called a learning sample (in FIG. 2, three learning samples 201, 202, and 203 are shown as examples).
  • The correct evaluation values g_i in the learning data vary due to fluctuations in the inspector's judgment, erroneous teaching, the existence of multiple correct evaluation values, and the like. Such variation in correct evaluation values is inherently present and is difficult to remove completely.
  • Therefore, the variation of the correct evaluation values is given as a variation distribution d (211).
  • From the correct evaluation value g_i, a plurality of extended correct evaluation values {g'_ij} (j=1,…,Ns_i, Ns_i: number of extensions) are generated according to the variation distribution d.
  • The number of extensions Ns_i may be changed for each learning sample.
  • A pair (f_i, g'_ij) of a learning image f_i and an extended correct evaluation value g'_ij is called an extended learning sample (see FIG. 2).
  • g'_ij is the j-th extended correct evaluation value g'_i generated from the learning sample (f_i, g_i).
  • The extended learning data {(F_k, G_k)} is the set of all extended learning samples, that is, the union of the extended learning sample groups S_1,…,S_Nf.
  • The variation distribution d can be regarded as the tendency of the correct evaluation value to be mistaken, or as a probability distribution of the correct evaluation value. By varying the correct evaluation values, even if a taught correct value is somewhat inaccurate, excessive optimization for each learning sample can be suppressed and an improvement in generalization performance can be expected.
  • A key issue in this processing is how to give the variation distribution d appropriately.
  • For example, a plurality of inspectors could assign a correct evaluation value g_i to each learning sample f_i, and the variation distribution d could be obtained from the actual degree of variation among inspectors.
  • However, having every learning sample f_i evaluated by a plurality of inspectors significantly increases the inspection cost and the load on the inspectors.
  • This embodiment is therefore characterized in that the variation distribution d is switched according to the value of the correct evaluation value g_i.
  • That is, the variation distribution is given for each correct evaluation value as a probability distribution d(g'_i; g_i) with the extended correct evaluation value g'_i as a variable and the correct evaluation value g_i as a parameter.
  • g'_ij is the j-th extended correct evaluation value g'_i generated based on the variation distribution d(g'_i; g_i).
  • The variation distribution d(g'_i; g_i) may be given by a histogram as shown in FIG. 4 (described later), by a polygonal line as shown in FIG. 6 (described later), by a combination of parametric functions (a Gaussian distribution, etc.), or by a free curve as shown in FIG. 11 (described later).
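  • As a small illustration (assumed grids and parameter values, not part of the disclosure), each of the representations just listed can be turned into a sampling probability vector over candidate extended values:

```python
import numpy as np

def histogram_dist(counts):
    """Histogram form, e.g. frequencies over defect classes D1..D4."""
    c = np.asarray(counts, dtype=float)
    return c / c.sum()

def polyline_dist(grid, knots_x, knots_y):
    """Polygonal-line form evaluated on a grid of candidate g' values."""
    p = np.interp(grid, knots_x, knots_y)
    return p / p.sum()

def gaussian_dist(grid, mean, sigma):
    """Parametric form: a Gaussian centred on the taught correct value."""
    p = np.exp(-0.5 * ((grid - mean) / sigma) ** 2)
    return p / p.sum()
```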
  • FIG. 3 shows two schematic diagrams of defects belonging to each of the defect classes D1 to D4: FIGS. 3(a) to 3(d) show defects 300 and 301 belonging to defect class D1, defects 302 and 303 belonging to defect class D2, defects 304 and 305 belonging to defect class D3, and defects 306 and 307 belonging to defect class D4, respectively.
  • Variation distributions for the correct evaluation values D1 to D4 are shown in FIGS. 4(a) to 4(d). In this example, the variation distribution is given by a histogram.
  • For a learning sample whose correct evaluation value is D1, D2 also has a relatively high frequency of 2 (401). This is because, as can be seen by comparing FIGS. 3(a) and 3(b), defect classes D1 and D2 are similar in that both are jagged-shaped defects and are easy to confuse in teaching. Therefore, 2/6 ≈ 33% of the extended learning samples in the extended learning sample group S_i have D2 as the extended correct evaluation value.
  • In contrast, the frequency of D3 is 0 (402). This is because, as can be seen by comparing FIGS. 3(a) and 3(c), a jagged defect will not be confused with a round defect.
  • Accordingly, the extended learning sample group S_i does not include an extended learning sample with an extended correct evaluation value of D3.
  • The frequency of D4 is small, at 1, but nonzero (403). Defect classes D1 and D4 are not very similar, but there is some possibility of confusion in that both are bumpy defects. Therefore, 1/6 ≈ 17% of the extended learning samples in the extended learning sample group S_i have D4 as the extended correct evaluation value.
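  • A worked numerical sketch of the FIG. 4(a) histogram described above (the sampling code itself is illustrative, not the disclosed implementation): a sample taught D1 is expanded with probabilities 3/6, 2/6, 0, and 1/6 for D1 to D4.

```python
import numpy as np

classes = ["D1", "D2", "D3", "D4"]
d_given_D1 = np.array([3, 2, 0, 1], dtype=float) / 6.0   # variation distribution d(g'; D1)

rng = np.random.default_rng(0)
extended_labels = rng.choice(classes, size=6, p=d_given_D1)
# Roughly 33% of the extended samples receive D2 and roughly 17% receive D4; D3 never appears.
```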
  • Taking the surface roughness evaluation shown in FIG. 5 as an example of appearance inspection, a specific example of the variation distribution for this case will be described with reference to FIG. 6.
  • This inspection is an example in which the roughness level of the surface of the inspection object is quantified and evaluated on a scale from 1.0 to 3.0.
  • Here, the evaluation value is the roughness level, and the smaller the value, the better the condition.
  • Schematic diagrams (500 to 504) of surface images with roughness levels of 1.0, 1.5, 2.0, 2.5, and 3.0 are shown in FIGS. 5(a) to 5(e), respectively.
  • Although the correct evaluation values g_i of the roughness level were given in increments of 0.5 in the learning samples, the roughness level is a continuous value, and inspection objects with intermediate roughness levels exist. The estimation engine therefore deals with a regression problem.
  • Figures 6(a) to 6(e) show the variation distribution for correct evaluation values of 1.0 to 3.0. In this example, the variation distribution is given by a polygonal line.
  • The polygonal line of d(g'_i; 1.0) indicates the probability distribution that the actual evaluation value of a learning sample whose correct evaluation value g_i is 1.0 is the extended correct evaluation value g'_i on the horizontal axis. The extended correct evaluation value g'_i is therefore generated using the value of the polygonal line as the generation probability.
  • The value of the polygonal line is highest at the extended correct evaluation value 1.0 (601), but as the extended correct evaluation value increases, the value of the polygonal line gradually decreases, reaching 0 at the extended correct evaluation value 2.0 (604).
  • The extended correct evaluation values are not limited to the roughness levels 1.0 and 1.5 included in the learning data (indicated by black circles in FIG. 6(a)); extended learning samples with intermediate roughness levels (indicated by white circles in FIG. 6(a)) can also be generated.
  • For example, extended learning samples with roughness level 1.25 can be generated (602) with a frequency intermediate between the generation frequencies of roughness levels 1.0 and 1.5.
  • Between the roughness levels 1.5 and 2.0 shown in FIGS. 5(b) and 5(c), respectively, there is a discontinuous change in appearance, and the decision line (505) for determining whether or not to ship is set between them. For this reason, decision errors that cross this decision line tend to be less likely to occur, and this tendency was reflected in the variation distribution. FIGS. 6(b) and 6(c) show this tendency clearly: in FIG. 6(b), as the extended correct evaluation value g'_i on the horizontal axis increases from 1.5 toward 2.0, the value of the polygonal line decreases sharply (607 to 608), and a similarly sharp change appears in FIG. 6(c).
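  • An illustrative sketch of this kind of polygonal-line sampling for a regression label; the knot values below are assumptions chosen only to mimic the sharp drop toward the shipping decision line at 2.0 described for FIG. 6(b).

```python
import numpy as np

grid = np.arange(1.0, 3.0 + 1e-9, 0.25)       # candidate extended values g' (includes 1.25, 1.75, ...)
knots_x = [1.0, 1.5, 2.0, 3.0]
knots_y = [0.4, 1.0, 0.02, 0.0]                # peak at the taught value 1.5, sharp drop before 2.0
p = np.interp(grid, knots_x, knots_y)
p = p / p.sum()

rng = np.random.default_rng(0)
g_prime = rng.choice(grid, size=10, p=p)       # intermediate levels such as 1.25 can be drawn
```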
  • Several examples of the learning sequence of the evaluation engine using the extended learning data are possible; the embodiments differ in the number of extensions of the extended learning data, the timing of the extension processing, and the like. Typical examples are specifically described below.
  • In the known learning sequence of FIG. 7, learning is repeated over epochs {e_t} (t=1,…,Ne, Ne: number of epochs). In the first epoch e_1 (700), the learning data are divided into mini-batches; the remaining mini-batches are omitted from the illustration.
  • The same mini-batch division is performed in the second and subsequent epochs e_2 to e_Ne (701 to 703), and the learning samples included in each mini-batch may be shuffled for each epoch.
  • In FIG. 8, 800 to 803 are epochs and 805 is a mini-batch.
  • In the embodiment of FIG. 8, the learning data {(f_i, g_i)} (704) of FIG. 7 are replaced with the extended learning data {(f_i, g'_ij)} (804).
  • Because the number of extended learning samples NF in the extended learning data is larger than the number of learning samples Nf in the learning data (NF > Nf), in the embodiment of FIG. 8 the number of samples included in one extended mini-batch increases, so if the number of epochs remains the same, the learning time increases.
  • In the embodiment of FIG. 9, one extended learning sample (f_1, g'_1) (916) is generated (911) from one learning sample (f_1, g_1) (909) included in the first mini-batch m_1 (905). This is done for all learning samples (909, 910, etc.) included in all mini-batches (905 to 908, etc.) to generate the extended mini-batches (912 to 915, etc.) and extended learning samples (916, 917, etc.).
  • To supplement this feature: in general data augmentation, a plurality of extended learning samples are generated from one learning sample, so the learning time increases; that is, because of the increase in the number of learning samples, the number of learning samples included in one mini-batch, the number of mini-batches, or both increase compared with before data augmentation. In this embodiment, one learning sample (f_i, g_i) before extension is simply replaced with one extended learning sample (f_i, g'_i), so neither the number of learning samples nor the number of mini-batches changes. Regarding the variation distribution d as a probability distribution, a random number following this probability distribution is used to generate the extended correct evaluation value g'_i.
  • Consequently, the value of the extended correct evaluation value g'_i may change from epoch to epoch.
  • When the number of generations is small, the extended learning samples may be biased, but as learning proceeds over many epochs, the distribution of extended correct evaluation values in the learned extended learning samples can be expected to approach the variation distribution d.
  • Since the number of epochs is unchanged, the extended learning data reflecting the information of the variation distribution can be learned in the same time as learning the learning data {(f_i, g_i)}.
  • Variation distribution change during learning (basic processing): when the evaluation engine is learned iteratively while the extended learning data are generated, the variation distribution d is updated during the iterations, and the extended learning data are generated based on the updated variation distribution.
  • Specifically, the reliability R(g'^_i) of the estimated evaluation value g'^_i output when an extended learning sample is input to the evaluation engine during learning is calculated, and the variation distribution is updated during the iterations with the reliability R(g'^_i) as a parameter.
  • FIG. 10 incorporates a mechanism for changing the variation distribution during learning into the learning sequence described with reference to FIG. 9.
  • The same mechanism for changing the variation distribution during learning can also be incorporated into, and applied to, the learning sequence described with reference to FIG. 8 and other learning sequences.
  • In FIG. 10, the epoch corresponding to the t-th epoch e_t in FIG. 9 is indicated at 1000, but the same applies to the other epochs.
  • Each learning image f_i is input to the evaluation engine (1013) being trained, the estimated evaluation value g'^_i is output, and the reliability R(g'^_i) of g'^_i is calculated (1014 to 1016).
  • Examples of how to calculate the reliability include the difference between the correct evaluation value g_i and the estimated evaluation value g'^_i; the value d(g'^_i; g_i) obtained by regarding the variation distribution d(g'_i; g_i) as a probability distribution and substituting the estimated evaluation value g'^_i (an index of how likely it is that the estimated value becomes g'^_i); and, for classification problems, methods based on the spread of the degrees of membership to each classification class.
  • The variation distribution (1017) is changed during learning according to such reliability. The timing of the change may be each mini-batch learning or each epoch. In the next mini-batch or epoch, the extended learning samples (f_i, g'_i) (1010 to 1012) are generated (1007) based on the changed variation distribution (1017) and used to train the evaluation engine (1013).
  • In the embodiment described so far, the variation distribution is switched according to the value of the correct evaluation value g_i and is given by d(g'_i; g_i) with g_i as a parameter; that is, the variation distribution is changed based on the reliability of the estimated evaluation values g'^_i for the extended learning samples whose correct evaluation value is g_i.
  • Alternatively, the variation distribution may be switched for each learning sample (f_i, g_i) and given by d(g'_i; (f_i, g_i)) with (f_i, g_i) as a parameter. Since the reliability can be calculated for each extended learning sample (f_i, g'_ij), the validity of the variation distribution can be evaluated for each learning sample (f_i, g_i) from which the extended learning samples are generated, and the variation distribution can therefore be changed for each learning sample (f_i, g_i). Manually giving a variation distribution for each learning sample would require an enormous amount of work, and it would also be difficult to give an appropriate distribution. By updating the variation distribution based on the reliability so as to improve the performance of the evaluation engine, the variation distribution can be optimized for each learning sample, in parallel with the optimization of the internal parameters of the evaluation engine, without human intervention.
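  • A hedged sketch of such a per-sample update (the exponential-moving-average rule and the bin layout are assumptions, not the disclosed procedure): each learning sample keeps its own distribution, which is nudged toward extended values that the engine estimates with high reliability.

```python
import numpy as np

def update_per_sample_distribution(d_i, extended_values, reliabilities, grid, alpha=0.2):
    """d_i: current probabilities over `grid`; reliabilities: R(g'^_ij) per extended sample."""
    target = np.zeros_like(d_i)
    for g_ij, r_ij in zip(extended_values, reliabilities):
        target[np.argmin(np.abs(grid - g_ij))] += r_ij   # weight bins by estimation reliability
    if target.sum() > 0:
        target = target / target.sum()
        d_i = (1.0 - alpha) * d_i + alpha * target       # move d_i toward reliably estimated values
    return d_i / d_i.sum()
```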
  • The variation distribution may also be changed based on the evaluation results for verification data.
  • In machine learning, data called verification data are generally prepared separately from the data used for learning (called learning data in this disclosure) in order to obtain internal parameters with high generalization performance for unlearned data.
  • During learning, the internal parameters are successively updated so as to improve the estimation results for the learning data, but the internal parameters finally adopted are those that give good estimation results for the verification data (unlearned data) not used in the learning. In this embodiment, the variation distribution may likewise be selected based on the reliability of the estimated evaluation values for the verification data instead of the learning data.
  • A heuristic method may also be used to optimize the variation distribution.
  • The variation distribution given before learning is used as the initial value, the distribution is slightly changed during learning, and updates to a variation distribution that improves the performance of the evaluation engine (the accuracy rate and the reliability of the estimated evaluation values) are kept.
  • Such a method makes it possible to obtain a heuristically appropriate variation distribution without using an analytical approach.
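  • The following is an illustrative sketch of such a heuristic search (the Gaussian perturbation, the accept-if-better rule, and the train_and_score callback are all assumptions): the distribution is perturbed, the engine is retrained or further trained, and the perturbation is kept only if the score on the verification data improves.

```python
import numpy as np

def heuristic_distribution_search(d0, train_and_score, n_trials=20, step=0.05, seed=0):
    """train_and_score(d) -> validation score (e.g. accuracy or mean reliability); higher is better."""
    rng = np.random.default_rng(seed)
    best_d = np.asarray(d0, dtype=float)
    best_score = train_and_score(best_d)
    for _ in range(n_trials):
        cand = best_d + step * rng.standard_normal(best_d.shape)   # small random change
        cand = np.clip(cand, 0.0, None)
        cand = cand / cand.sum()                                    # keep it a probability distribution
        score = train_and_score(cand)
        if score > best_score:                                      # keep only improving updates
            best_d, best_score = cand, score
    return best_d
```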
  • FIG. 11 shows an example of a graphical user interface (GUI) for a user such as an inspector to specify and confirm the variation distribution (1100).
  • This GUI can display the variation distribution for each correct evaluation value (1102 to 1104 in 1101).
  • The variation distribution to be displayed can be switched, using radio buttons or the like (1105), between the distribution initially specified by the user and the distribution updated during learning. In the latter case, by specifying the ID of an epoch (1106), the variation distribution being updated at any epoch can be displayed.
  • The variation distribution can be given by a histogram (e.g., FIG. 4), a polygonal line (e.g., FIG. 6), a combination of parametric functions, a free curve (e.g., 1102 or 1103 in FIG. 11), etc., and the method of specification can be selected with radio buttons (1107).
  • In addition, the information of the learning samples (f_i, g_i) can be displayed (1108) as material for judging the validity of the processing results or for deciding the variation distribution.
  • Several learning samples can be displayed side by side and compared (1109, 1121, 1133).
  • Taking the information "display 1" (1109) of one learning sample as an example, the details of the display contents are explained.
  • The learning sample to be displayed can be specified by the ID of the learning image (1110) and can be filtered by the correct evaluation value g_i (1111).
  • The inspection image f_i, the estimated evaluation value g^_i for the learning sample (f_i, g_i), and the reliability R(g^_i) can be displayed (1112, 1113, 1114).
  • An inspection image (1112) in this example is a defect image inside a semiconductor device captured by an ultrasonic inspection apparatus.
  • Information on the extended learning samples {(f_i, g'_ij)} generated from the learning sample (f_i, g_i) can also be displayed (1115).
  • Information of multiple extended samples can also be displayed side by side (1116, 1117).
  • The displayed contents include the extended correct evaluation value g'_ij, the estimated evaluation value g'^_ij for the extended learning sample (f_i, g'_ij), and the reliability R(g'^_ij) (1118, 1119, 1120).
  • The information "display 2" (1121) of another learning sample is shown in the same way as "display 1"; that is, 1122 to 1132 correspond to 1110 to 1120.
  • In this example, information on learning samples is displayed in displays 1109 and 1121, but information on inspection samples (the inspection image f''_i, the estimated evaluation value g''^_i, etc.) can also be displayed in the same way.
  • In the automation of appearance inspection using machine learning, this embodiment can suppress overfitting of the evaluation engine against variations in correct evaluation values caused by fluctuations in judgment, erroneous teaching, and the existence of multiple correct evaluation values. As a practical matter, it is difficult to improve the quality of learning samples only through the efforts of inspectors, and the variation in correct evaluation values cannot be sufficiently resolved by padding the inspection images alone. For this reason, the present embodiment provides a mechanism that maintains the performance of evaluation-value estimation by machine learning even if the quality of the learning samples deteriorates. This makes it possible to estimate the evaluation value from an inspection image of an inspection object with high accuracy and to automatically evaluate the workmanship of the inspection object.
  • In this embodiment, two-dimensional image data are used as input information, but the technique of this embodiment can also be applied when one-dimensional signals such as received ultrasonic waves, or three-dimensional volume data acquired by a laser range finder or the like, are used as input information.
  • The method of this embodiment can also be applied when there are multiple input images and multiple types of estimated evaluation values (that is, when the evaluation engine has multiple inputs and multiple outputs).
  • FIG. 12 shows an automatic visual inspection system that implements the visual inspection method described in the above embodiments.
  • The automatic visual inspection system is composed of the imaging device described above and a computer. Examples of the imaging device have already been described.
  • The computer is the component that executes the visual inspection method described in this embodiment and includes the following.
  • * Processor: examples of processors include a CPU, GPU, and FPGA, but other components may be used as long as they can process the visual inspection method.
  • * Storage resource: examples of storage resources include RAM, ROM, HDD, and non-volatile memory (flash memory, etc.). The storage resource may also include volatile memory (the aforementioned RAM is one example).
  • The storage resource may store a program (referred to as the visual inspection program) that causes the processor to execute the visual inspection method described in the above embodiments. The storage resource may also store data referred to or generated by the visual inspection program.
  • * GUI device: examples of GUI devices include a display and a projector, but other devices may be used as long as they can display a GUI.
  • * Input device: examples of input devices include a keyboard, a mouse, and a touch panel, but other devices may be used as long as they can accept operations from the user. The input device and the GUI device may also be an integrated device.
  • * Communication interface device: examples of communication interfaces include USB, Ethernet, and Wi-Fi.
  • Besides these, any interface device that can receive images directly from the imaging device, or that allows the user to send images to the computer, may be used.
  • Images may also be stored in the computer via a portable non-volatile storage medium (for example, flash memory, a DVD, a CD-ROM, or a Blu-ray disc).
  • The automatic visual inspection system may include a plurality of computers and a plurality of imaging devices.
  • The aforementioned visual inspection program may be stored in the computer through the following paths: * the visual inspection program is stored in a portable non-volatile storage medium, and the program is distributed to the computer by connecting the medium to the communication interface; * a program distribution server distributes the visual inspection program to the computer.
  • The program distribution server has a storage resource that stores the visual inspection program, a processor that performs distribution processing for distributing the visual inspection program, and a communication interface device that can communicate with the communication interface device of the computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The following occur during an appearance inspection according to the present invention: training data, which is a set of training samples in which training images capturing an item to be inspected for training purposes are paired with correct evaluation values for the training images, is stored in a storage resource; correct evaluation values of training samples included in the training data are altered in accordance with a modifiable prescribed variation distribution; expanded training samples, which are training samples in which the altered values are set as the correct evaluation values, are generated; expanded training data, which is a set of expanded training samples, is generated; the relationship between the training images and the evaluation values is learned, on the basis of the expanded training data, to determine internal parameters of an evaluation engine; an inspection image capturing the item to be inspected is acquired; the inspection image is input into the evaluation engine; and an estimated evaluation value, which is an estimate of the evaluation value of the inspection image, is acquired from the output of the evaluation engine. (Selected Drawing: FIG. 2)

Description

APPEARANCE INSPECTION METHOD AND APPEARANCE INSPECTION SYSTEM
 The present disclosure relates to an appearance inspection apparatus and an appearance inspection method based on machine learning. More specifically, disclosed are an apparatus and a method that use an evaluation engine, which has learned in advance the relationship between learning images obtained by imaging inspection objects for learning and their correct evaluation values, to estimate evaluation values from inspection images of actual inspection objects with high accuracy and to automatically evaluate the workmanship of the inspection objects.
 In many industrial products, including machinery, metals, chemicals, foods, and textiles, appearance inspection that evaluates various aspects of workmanship, such as shape defects, assembly defects, adhesion of foreign matter, internal defects and their criticality, and surface scratches, spots, and stains, is widely performed based on inspection images. Conventionally, many of these appearance inspections have been performed by the visual judgment of inspectors. Meanwhile, with the growing demand for mass production and quality improvement, inspection costs and the burden on inspectors are increasing. Sensory inspections based on human senses in particular require a high level of experience and skill. Dependence on the individual and poor reproducibility are also issues: evaluation values differ from inspector to inspector, and results differ each time an inspection is performed. To address such issues of inspection cost, skill, and individual dependence, there is a strong demand for automation of inspection.
 In recent years, the performance of machine learning has dramatically improved owing to the proposal of deep network models typified by the Convolutional Neural Network (CNN) (for example, Non-Patent Document 1). Many appearance inspection methods utilizing evaluation engines based on machine learning have been proposed; for example, WO2020/129617 (Patent Document 1) discloses a method of determining shape defects of welded portions using machine learning.
 In learning the evaluation engine, an image of an inspection object for learning (a learning image) is input, and the internal parameters of the evaluation engine (such as network weights and biases) are updated so that the difference between the estimated evaluation value output from the evaluation engine and the correct evaluation value taught by the inspector becomes small. As for the timing of updating the internal parameters, instead of learning all the learning samples at once, it is common to divide the learning data into sets called mini-batches and to update the internal parameters for each mini-batch. This is called mini-batch learning; when all mini-batches have been learned, all learning samples have been used for learning. Learning all of these mini-batches once is called one epoch. The internal parameters are optimized by repeating epochs many times. The learning samples included in the mini-batches may also be shuffled for each epoch.
 In machine learning, so-called overfitting, in which the evaluation engine is excessively optimized for specific learning samples and generalization performance declines, is a problem. One of its causes is learning from a small number of learning samples. If a huge number of learning samples can be prepared, optimization of internal parameters with high generalization performance can be expected without over-optimization for specific samples. On the other hand, collection of learning samples is often performed manually by inspectors, and it is often difficult to prepare a large number of learning samples.
 Data augmentation is known as a general method for suppressing overfitting. This is a technique of artificially padding the learning images prepared by an inspector by applying processing such as translation and rotation to them. For example, if inspection targets with a certain correct evaluation value happen to appear in the upper left of all learning images, the evaluation engine may erroneously learn that position is an important criterion even though the correct evaluation value and the position of the inspection target in the image are unrelated. Such erroneous learning can be suppressed by using data augmentation to increase the variation in position. WO2020/129617 (Patent Document 1) also discloses data augmentation such as changing the position of a defectively shaped portion.
International Publication No. WO2020/129617
 As described above, it is generally desirable that inspection images have many variations; however, image collection and teaching of correct evaluation values are often performed manually by inspectors, and preparing many learning samples requires a great deal of labor. In addition, the correct evaluation values taught by inspectors include fluctuations in judgment due to individual differences as well as erroneous teaching. In some cases, one learning sample may have multiple correct evaluation values. Overfitting is considered particularly likely to occur under such circumstances. On the other hand, as a practical matter, it is difficult to improve the quality of learning samples only through the efforts of inspectors, and the variation in correct evaluation values cannot be sufficiently resolved by the above-described padding of inspection images alone. Therefore, a mechanism is required that maintains the performance of evaluation-value estimation by machine learning even if the quality of the learning samples deteriorates.
 A visual inspection method and a visual inspection system according to one aspect of the present disclosure:
 (a) store learning data, which is a set of learning samples, each being a pair of a learning image obtained by imaging an inspection object for learning and a correct evaluation value for that learning image, in a storage resource;
 (b) change the correct evaluation value of a learning sample included in the learning data according to a changeable predetermined variation distribution, and generate an extended learning sample, which is a learning sample having the changed value as its correct evaluation value;
 (c) generate extended learning data, which is a set of the extended learning samples;
 (d) learn the relationship between learning images and evaluation values based on the extended learning data to determine the internal parameters of an evaluation engine;
 (e) acquire an inspection image of an inspection object; and
 (f) input the inspection image to the evaluation engine, and obtain an estimated evaluation value, which is an estimate of the evaluation value of the inspection image, from the output of the evaluation engine.
 According to this aspect, in the automation of appearance inspection using machine learning, overfitting of the evaluation engine can be suppressed even against variations in correct evaluation values caused by fluctuations in judgment, erroneous teaching, the existence of multiple correct evaluation values, and the like, and an improvement in the accuracy of the evaluation engine can be expected.
 FIG. 1 is a diagram showing an automatic visual inspection system and the overall processing sequence. FIG. 2 is a diagram showing expansion of learning data in the learning phase. FIG. 3 is a diagram showing an example of an inspection object. FIG. 4 is a diagram showing an example of a variation distribution of correct evaluation values. FIG. 5 is a diagram showing an example of an inspection object. FIG. 6 is a diagram showing an example of a variation distribution of correct evaluation values. FIG. 7 is a diagram showing an example of a known learning sequence. FIG. 8 is a diagram showing an example of a learning sequence. FIG. 9 is a diagram showing an example of a learning sequence. FIG. 10 is a diagram showing updating of the variation distribution in the learning phase. FIG. 11 is a diagram showing a GUI for inputting and displaying a variation distribution of correct evaluation values. FIG. 12 is a diagram showing the hardware configuration of the automatic visual inspection system.
 The present embodiment will be described with reference to the drawings. This embodiment has the following features; however, the features included in this embodiment are not limited to those shown below.
 (1) A learning image acquisition step of imaging inspection objects for learning and acquiring learning images {f_i} (i=1,…,Nf, Nf: number of images); a learning data input step of inputting, as learning data, the set {(f_i, g_i)} (i=1,…,Nf) of learning samples, each being a pair (f_i, g_i) of a learning image f_i and the correct evaluation value g_i taught by the user for the learning image f_i; a variation distribution input step of inputting a variation distribution d(g'_i; g_i) of the correct evaluation values; an extended learning data generation step of generating extended learning data consisting of a plurality of extended learning samples (f_i, g'_ij) (j=1,…,NS_i, NS_i: number of extensions) generated by changing the value of the correct evaluation value g_i in the learning data (f_i, g_i) based on the variation distribution d(g'_i; g_i); a learning step of learning the relationship between the learning images and the evaluation values based on the extended learning data to determine the internal parameters of the evaluation engine; an inspection image acquisition step of imaging inspection objects and acquiring inspection images f''_i (i=1,…,Nf'', Nf'': number of images); and an evaluation step of inputting the inspection image f''_i to the trained evaluation engine and outputting the estimated evaluation value g''^_i. The variation distribution d(g'_i; g_i) given in the variation distribution input step is characterized in that the distribution changes with the correct evaluation value g_i as a parameter.
 To supplement this feature: variation in correct evaluation values is inherently present and is difficult to remove completely. Therefore, in this embodiment, data augmentation is performed based on the statistical variation distribution d of the correct evaluation values. That is, from the correct evaluation value g_i given by the inspector, a plurality of extended correct evaluation values {g'_ij} (j=1,…,Ns_i, Ns_i: number of extensions) are generated according to the variation distribution d, and these are learned as the extended learning sample group {(f_i, g'_ij)}. By varying the correct evaluation value, even if the taught correct value is somewhat inaccurate, excessive optimization for each learning sample can be suppressed and an improvement in generalization performance can be expected.
 A key issue in this processing is how to give the variation distribution d appropriately. In the unlikely event that an incorrect variation distribution is given, a large amount of false learning data would be generated, and the estimation performance of the evaluation values could instead deteriorate. This embodiment is characterized in that the variation distribution d is switched according to the value of the correct evaluation value g_i. That is, the variation distribution is given for each correct evaluation value as a probability distribution d(g'_i; g_i) with the extended correct evaluation value g'_i as a variable and the correct evaluation value g_i as a parameter. g'_ij is the j-th extended correct evaluation value g'_i generated based on the variation distribution d(g'_i; g_i).
 (2) In the extended learning data generation step and the learning step, for the i-th learning sample (f_i, g_i) included in the s-th mini-batch m_s, one extended learning sample (f_i, g'_i) is generated based on the variation distribution d and substituted for the learning sample (f_i, g_i), thereby generating an extended mini-batch m'_s.
 To supplement this feature: in general data augmentation methods, a plurality of extended learning samples are generated from one learning sample, so the learning time increases; that is, because of the increase in the number of learning samples, the number of learning samples included in one mini-batch, the number of mini-batches, or both increase compared with before data augmentation. In this embodiment, each learning sample (f_i, g_i) before extension is simply replaced with one extended learning sample (f_i, g'_i), so neither the number of learning samples nor the number of mini-batches changes. Regarding the variation distribution d as a probability distribution, a random number following this probability distribution is used to generate the extended correct evaluation value g'_i. When the number of generations is small, the extended learning samples may be biased, but as learning proceeds over many epochs, the distribution of extended correct evaluation values in the learned extended learning samples can be expected to approach the variation distribution d.
 (3) In the extended learning data generation step and the learning step, when the evaluation engine is learned iteratively while the extended learning data are generated, the variation distribution d is updated during the iterations, and the extended learning data are generated based on the updated variation distribution. As a concrete example, the reliability R(g'^_i) of the estimated evaluation value g'^_i output when an extended learning sample is input to the evaluation engine during learning is calculated, and the variation distribution is updated during the iterations with the reliability R(g'^_i) as a parameter.
 To supplement this feature: there is a limit to giving a highly accurate variation distribution from the beginning. Therefore, the variation distribution is changed during learning to obtain a more appropriate distribution. As a method of changing the variation distribution, it can be changed adaptively according to the learning state of the evaluation engine. For example, the learning image f_i is input to the evaluation engine during learning, the estimated evaluation value g'^_i is output, and the reliability R(g'^_i) of g'^_i is calculated. Examples of how to calculate the reliability include the difference between the correct evaluation value g_i and the estimated evaluation value g'^_i; the value d(g'^_i; g_i) obtained by regarding the variation distribution d(g'_i; g_i) as a probability distribution and substituting the estimated evaluation value g'^_i (an index of how likely it is that the estimated value becomes g'^_i); and, for classification problems, methods based on the spread of the degrees of membership to each classification class. Regarding the degrees of membership to the classification classes, if the degree of membership to the correct class is outstanding the reliability is high, whereas if there are also degrees of membership to other classes, the reliability decreases accordingly.
 The variation distribution is changed during learning according to such reliability. By repeatedly updating the variation distribution and the evaluation engine, it is possible to estimate an appropriate variation distribution that could not be assumed before learning, and to obtain a higher-performance evaluation engine through data augmentation based on this variation distribution.
 Note that the embodiments described below do not limit the present invention, and not all of the elements described in the embodiments and their combinations are essential to the solution of the invention.
1. Automatic Visual Inspection System and Overall Processing Sequence
 FIG. 1 shows the automatic visual inspection system and the overall processing sequence in the present invention. The processing sequence is broadly divided into a learning phase 100 and an inspection phase 101.
 In the learning phase, inspection objects for learning are imaged to acquire learning images {f_i} (i=1,...,Nf, Nf: number of images) (102). A learning image is acquired by imaging the surface or interior of the inspection object as a digital image with an imaging device such as a CCD camera, an optical microscope, a charged particle microscope, an ultrasonic inspection device, or an X-ray inspection device. As another example of "acquisition", an image captured by another system may simply be received and stored in a storage resource of the automatic visual inspection system.
 Next, a correct evaluation value g_i is assigned to each learning image f_i (103). An evaluation value is an index for evaluating various aspects of workmanship, such as shape defects, assembly defects, adhesion of foreign matter, internal defects of the inspection object and their criticality, and surface scratches, spots, and dirt, and various definitions are possible. For example, the criticality of a defect or the surface condition may be quantified and used as the evaluation value (g_i=[g_min,g_max]), or a label classifying the type of defect that occurred may be used as the evaluation value (g_i={class1,...,classN}). For these evaluation criteria, the correct evaluation value g_i is assigned based on the visual judgment of an inspector or on values analyzed by other inspection devices or methods. Of course, this correct evaluation value g_i should be accurate, but it may contain variations caused by fluctuations in the inspector's judgment, erroneous teaching, the existence of multiple correct evaluation values, and so on.
 A pair (f_i, g_i) of a learning image f_i and a correct evaluation value g_i is called a learning sample, and the set of learning samples {(f_i, g_i)} (i=1,...,Nf) is called learning data. The evaluation engine is trained using this learning data (104). The evaluation engine is a machine-learning-based estimator that takes an inspection image f_i as input and outputs an estimated evaluation value g^_i. Various existing machine learning engines can be used as the evaluation engine, for example deep neural networks typified by the Convolutional Neural Network (CNN), Support Vector Machine (SVM) / Support Vector Regression (SVR), and k-nearest neighbor (k-NN). These evaluation engines can handle classification problems and regression problems.
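 While the patent does not prescribe a particular implementation, a minimal sketch of one such evaluation engine follows, using a k-nearest-neighbor classifier on flattened images; scikit-learn and NumPy are assumed to be available, and all function names here are illustrative, not taken from the disclosure.

```python
# Minimal sketch of an evaluation engine (k-NN classifier) trained on
# flattened learning images; names are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_evaluation_engine(learning_images, correct_values, k=3):
    """learning_images: array of shape (Nf, H, W); correct_values: length-Nf labels g_i."""
    X = np.asarray(learning_images).reshape(len(learning_images), -1)  # flatten each f_i
    engine = KNeighborsClassifier(n_neighbors=k)
    engine.fit(X, correct_values)                                      # learn (f_i, g_i)
    return engine

def estimate(engine, inspection_images):
    """Return estimated evaluation values g^_i for inspection images f''_i."""
    X = np.asarray(inspection_images).reshape(len(inspection_images), -1)
    return engine.predict(X)
```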
 In general, the internal parameters of the evaluation engine are optimized using the learning samples so that an estimated evaluation value g^_i close to the taught correct evaluation value g_i is output. In this embodiment, the variation distribution d of the correct evaluation values g_i is input. Based on this variation distribution d, extended learning data consisting of a plurality of extended learning samples (f_i, g'_ij) (j=1,...,NS_i, NS_i: number of extensions), generated by varying the correct evaluation value g_i of each learning sample (f_i, g_i), is generated (105), and the evaluation engine is trained based on the extended learning data to optimize its internal parameters (106).
 In the inspection phase, actual inspection objects are imaged to acquire inspection images f''_i (i=1,...,Nf'', Nf'': number of images) (102). The inspection image f''_i is input to the trained evaluation engine, and the estimated evaluation value g''^_i is output (107). The estimated evaluation value is checked by an inspector as necessary (108), and if there is a defect or the like, countermeasures are fed back to the manufacturing process.
 In the embodiment described above, the learning image f_i and the inspection image f''_i are each a single image, but a plurality of images (an image group) may be captured, for example by changing the imaging direction or the imaging device, and the evaluation value may be estimated with the image group as the input to the evaluation engine. In this case, the learning image group and the inspection image group become f_i and f''_i, respectively. Similarly, there may be multiple types of evaluation values. Combining the two, an evaluation engine with multiple inputs and multiple outputs may be used.
2. Data Augmentation
2.1 Data Augmentation Method
 The data augmentation method in this embodiment is shown in FIG. 2. As described above, a pair (f_i, g_i) of a learning image f_i and the correct evaluation value g_i taught by an inspector is called a learning sample (FIG. 2 shows three learning samples 201, 202, and 203 as an example), and the set of learning samples {(f_i, g_i)} (i=1,...,Nf) is called learning data (200). The correct evaluation values g_i in the learning data contain variations caused by fluctuations in the inspector's judgment, erroneous teaching, the existence of multiple correct evaluation values, and so on. Such variations in the correct evaluation values are inherently present and are difficult to remove completely. Therefore, in this embodiment, the variation of the correct evaluation values is given as a variation distribution d (211). For each correct evaluation value g_i, a plurality of extended correct evaluation values {g'_ij} (j=1,...,Ns_i, Ns_i: number of extensions) are generated according to this variation distribution d. The number of extensions Ns_i may be changed for each learning sample. However, since a learning sample with a large number of extensions may influence the training of the evaluation engine more than a learning sample with a small number of extensions, a weight w_i (i=1,...,Nf) (212) can be applied to each extended sample group S_i during training (for example, the reciprocal of the number of extensions is used as the weight). A pair (f_i, g'_ij) of a learning image f_i and an extended correct evaluation value g'_ij is called an extended learning sample (FIG. 2 shows, as an example, three extended learning samples 208, 209, and 210 generated from the learning sample (f_1, g_1)), and the set of extended learning samples generated from the i-th learning sample (f_i, g_i) is called an extended learning sample group S_i={(f_i, g'_ij)} (FIG. 2 shows three extended learning sample groups 205, 206, and 207 as an example). g'_ij is the j-th extended correct evaluation value g'_i generated from the learning sample (f_i, g_i). Furthermore, all the extended learning sample groups are collectively called extended learning data {(F_k, G_k)} (k=1,...,NF, NF=ΣNS_i) (204). That is, {(F_k, G_k)}={S_1,...,S_Nf}. The variation distribution d can be regarded as a tendency of how easily the correct evaluation value is mistaken, or as a probability distribution of the correct evaluation value. By varying the correct evaluation values according to this distribution, even if the taught correct values are somewhat inaccurate, excessive optimization to each individual learning sample is suppressed and improved generalization performance can be expected.
 The issue in this processing is how to give the variation distribution d appropriately. If an erroneous variation distribution were given, a large amount of false learning data would be generated, and the estimation performance for the evaluation value could instead deteriorate. One option is to have multiple inspectors assign the correct evaluation value g_i to each learning sample f_i and to obtain the variation distribution d from the actual degree of variation between inspectors. However, having multiple inspectors evaluate every learning sample f_i greatly increases the inspection cost and the inspectors' workload. This embodiment is characterized by switching the variation distribution d according to the value of the correct evaluation value g_i. That is, the variation distribution is given for each correct evaluation value as a probability distribution d(g'_i;g_i) with the extended correct evaluation value g'_i as the variable and the correct evaluation value g_i as a parameter. g'_ij is the j-th extended correct evaluation value g'_i generated based on the variation distribution d(g'_i;g_i). The variation distribution d(g'_i;g_i) may be given as a histogram as in FIG. 4 described later, as a polyline as in FIG. 6 described later, as a combination of parametric functions (Gaussian distributions, etc.), or as a free curve as in FIG. 11 described later.
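 As a minimal sketch of this augmentation step (the function and variable names are assumptions for illustration and are not part of the disclosure), the per-correct-value distribution can be wrapped in a sampling callback and used to build the extended learning data with per-group weights:

```python
# Minimal sketch: build extended learning data from a variation distribution
# d(g' ; g) given per correct evaluation value.  NumPy is assumed.
import numpy as np

rng = np.random.default_rng(0)

def extend_learning_data(learning_data, sample_g_prime, n_ext=6):
    """learning_data: list of (f_i, g_i); sample_g_prime(g_i, rng) draws one g'
    from d(g' ; g_i).  Returns extended samples and per-sample weights w_i."""
    extended, weights = [], []
    for f_i, g_i in learning_data:
        S_i = [(f_i, sample_g_prime(g_i, rng)) for _ in range(n_ext)]  # extended sample group S_i
        extended.extend(S_i)
        weights.extend([1.0 / n_ext] * n_ext)   # e.g. reciprocal of the number of extensions
    return extended, weights
```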
2.2 Specific Example 1 of Data Augmentation
 Taking the defect classification shown in FIG. 3 as an example of visual inspection, a concrete example of the variation distribution for this case is described with reference to FIG. 4. This inspection classifies foreign matter adhering to the inspection object into four defect classes D1 to D4. FIGS. 3(a) to 3(d) each show two schematic examples of defects belonging to defect classes D1 to D4. That is, FIGS. 3(a) to 3(d) show defects 300 and 301 belonging to defect class D1, defects 302 and 303 belonging to defect class D2, defects 304 and 305 belonging to defect class D3, and defects 306 and 307 belonging to defect class D4, respectively. The variation distributions for the correct evaluation values D1 to D4 are shown in FIGS. 4(a) to 4(d). In this example, the variation distributions are given as histograms.
 The variation distribution d(g'_i;g_i) (g_i="D1") shown in FIG. 4(a) is described in detail. In d(g'_i;"D1"), the frequency on the vertical axis is highest, 3, at the extended correct evaluation value D1 on the horizontal axis (400). This indicates that a learning sample taught as D1 is most likely to have D1 as its actual correct evaluation value as well. Therefore, when the extended learning sample group S_i={(f_i, g'_ij)} is generated from the learning sample (f_i, g_i), extended learning samples whose extended correct evaluation value g'_i is D1 are generated most often. Following the ratios of the histogram frequencies exactly, 3/6=50% of the extended learning samples in the extended learning sample group S_i have the extended correct evaluation value D1. The frequency of D2 is also relatively high at 2 (401). This is because, as can be seen by comparing FIGS. 3(a) and 3(b), defect classes D1 and D2 are similar in that both are jagged defects and are therefore easily confused during teaching. Accordingly, 2/6≈33% of the extended learning samples in the extended learning sample group S_i have the extended correct evaluation value D2. On the other hand, the frequency of D3 is 0 (402). This is because, as can be seen by comparing FIGS. 3(a) and 3(c), a jagged defect cannot be confused with a round defect. Therefore, the extended learning sample group S_i contains no extended learning sample whose extended correct evaluation value is D3. The frequency of D4 is small, 1, but nonzero (403). This is because, as can be seen by comparing FIGS. 3(a) and 3(d), defect classes D1 and D4 are not very similar, but there is some possibility of confusion in that both are uneven defects. Accordingly, 1/6≈17% of the extended learning samples in the extended learning sample group S_i have the extended correct evaluation value D4.
 The variation distributions in FIGS. 4(b) to 4(d) are explained in the same way. Incidentally, since defect class D3 is hardly ever confused with the other defect classes, the variation distribution d(g'_i;"D3") shown in FIG. 4(c) has a nonzero frequency only at D3 (404). In other words, learning samples whose correct evaluation value is D3 do not need to be augmented.
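 Assuming the histogram of FIG. 4(a) is given by the frequencies described above (3, 2, 0, 1 for D1 to D4), a sketch of a histogram-based sampler that could serve as sample_g_prime in the previous sketch is:

```python
# Sketch of a histogram-based variation distribution for the defect classes.
import numpy as np

CLASSES = ["D1", "D2", "D3", "D4"]
# Frequencies per taught correct value; the "D1" row is (3, 2, 0, 1) as in FIG. 4(a).
# The other rows are placeholders to be filled from FIGS. 4(b)-(d).
HIST = {
    "D1": [3, 2, 0, 1],
    "D3": [0, 0, 1, 0],   # D3 is never confused, so it needs no augmentation
}

def sample_g_prime(g_i, rng):
    freq = np.asarray(HIST[g_i], dtype=float)
    return rng.choice(CLASSES, p=freq / freq.sum())   # draw g' in proportion to the histogram
```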
2.3 Specific Example 2 of Data Augmentation
 Taking the surface roughness evaluation shown in FIG. 5 as an example of visual inspection, a concrete example of the variation distribution for this case is described with reference to FIG. 6. This inspection quantifies and evaluates the roughness level of the surface of the inspection object on a scale from 1.0 to 3.0. The evaluation value is the roughness level, and a smaller value indicates a better condition. FIGS. 5(a) to 5(e) show schematic surface images (500 to 504) with roughness levels of 1.0, 1.5, 2.0, 2.5, and 3.0, respectively. In the learning samples, the correct evaluation value g_i of the roughness level was given in increments of 0.5, but the roughness level is a continuous value, and inspection objects with intermediate roughness levels also exist. The evaluation engine therefore handles a regression problem. The variation distributions for the correct evaluation values 1.0 to 3.0 are shown in FIGS. 6(a) to 6(e). In this example, the variation distributions are given as polylines.
 The variation distribution d(g'_i;g_i) (g_i=1.0) shown in FIG. 6(a) is described in detail. The polyline of d(g'_i;1.0) shows the probability distribution that the actual evaluation value of a learning sample whose correct evaluation value g_i is 1.0 is the extended correct evaluation value g'_i on the horizontal axis. Extended correct evaluation values g'_i are therefore generated using the polyline values as generation probabilities. The polyline value is highest at the extended correct evaluation value 1.0 (601), gradually decreases as the extended correct evaluation value increases, and becomes 0 at the extended correct evaluation value 2.0 (604). The roughness level changes continuously, and it is likely that variations in the correct evaluation value also occur continuously. Therefore, when the extended learning sample group S_i={(f_i, g'_ij)} is generated from the learning sample (f_i, g_i), extended learning samples with a roughness level of 1.0 are generated most often (601), some extended learning samples with a roughness level of 1.5 are also generated (603), but no extended learning samples with a roughness level of 2.0 or higher are generated (604 to 606). In addition, extended learning samples whose extended correct evaluation values are intermediate roughness levels, not only the roughness levels 1.0 and 1.5 contained in the learning data (black circles in FIG. 6(a)), may also be generated according to the polyline frequency (white circles in FIG. 6(a)). For example, extended learning samples with roughness level 1.25 can be generated at a frequency between the generation frequencies of roughness levels 1.0 and 1.5 (602).
 The variation distributions in FIGS. 6(b) to 6(e) are explained in the same way. As a general tendency, in every variation distribution the probability mass lies around the correct evaluation value g_i, and the value of the probability distribution becomes smaller with increasing distance from g_i. In this example, however, there is a discontinuous change in the probability distribution between roughness levels 1.5 and 2.0. As shown in FIG. 5, a roughness level of 1.5 or less is within the normal range and the product can be shipped, whereas a roughness level of 2.0 or more indicates poor quality and the product cannot be shipped. The inspection images of roughness levels 1.5 and 2.0 shown in FIGS. 5(b) and 5(c) also differ discontinuously in appearance, and the pass/fail decision line for shipping (505) was set between them. For that reason, judgment errors crossing this decision line tended not to occur, and this tendency was reflected in the variation distribution. It can be seen clearly in FIGS. 6(b) and 6(c). In FIG. 6(b), as the extended correct value g'_i on the horizontal axis goes from 1.5 to 2.0, the polyline value drops sharply (607→608). Similarly, in FIG. 6(c), as the extended correct value g'_i on the horizontal axis goes from 2.0 to 1.5, the polyline value drops sharply (610→609). In this way, the tendency of how easily correct evaluation values are mistaken can easily be reflected in the variation distribution.
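 A sketch of how such a polyline could be sampled is shown below; the breakpoint heights are illustrative stand-ins for FIG. 6(a), and evaluating the polyline on a fine grid is only one possible way to draw continuous extended values:

```python
# Sketch: sample a continuous extended roughness level from a polyline-shaped
# variation distribution d(g' ; g_i=1.0).  Breakpoint values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Polyline for g_i = 1.0: highest at 1.0, decreasing, zero from 2.0 upward.
breakpoints = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
heights     = np.array([1.0, 0.4, 0.0, 0.0, 0.0])

def sample_roughness(rng, n=1):
    grid = np.linspace(1.0, 3.0, 201)                # fine grid over the roughness range
    density = np.interp(grid, breakpoints, heights)  # piecewise-linear interpolation of the polyline
    p = density / density.sum()
    return rng.choice(grid, size=n, p=p)             # e.g. 1.25 is drawn at an intermediate frequency
```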
2.4 Effect of Data Augmentation
 FIGS. 4 and 6 are merely examples of variation distributions, and various other variation distributions are conceivable depending on the inspection object and the evaluation items. As these examples show, however, the variation distribution is in many cases highly dependent on the correct evaluation value g_i. For this reason, switching the distribution according to the value of the correct evaluation value g_i was considered effective. The variation in the learning data reflects a tendency of how easily mistakes are made, obtained from experience in the field, and can be regarded as so-called "domain knowledge". According to this embodiment, such domain knowledge can be incorporated effectively and efficiently into the training of the evaluation engine. That is, defining a variation distribution for every learning sample, for example, would require an enormous amount of work, while conversely defining a single uniform variation distribution for all learning samples would impair accuracy and could generate a large number of erroneous extended learning samples. In contrast, according to this embodiment, the properties of the inspection object can be reflected in the variation distribution relatively easily, as shown by the two specific examples above. It was found that, by defining the variation distribution d with the correct evaluation value g_i given by the inspector as a parameter, accuracy and work cost can both be achieved even if erroneous teaching or the like is included.
3. Learning Sequence (Generation Timing and Allocation of Extended Learning Data)
 In the learning phase, several examples of the learning sequence of the evaluation engine using the extended learning data according to this embodiment are possible. The examples differ in the number of extensions of the extended learning data, the timing at which the extension processing is performed, and so on. Representative examples are described concretely below.
3.1 Learning Sequence 1
 First, a general learning sequence of the evaluation engine using the learning data {(f_i, g_i)} is described with reference to FIG. 7. In training the evaluation engine, the image of the inspection object for learning (learning image f_i) is input, and the internal parameters of the evaluation engine (network weights, biases, etc.) are updated so that the difference between the estimated evaluation value g^_i output from the evaluation engine and the correct evaluation value g_i taught by the inspector becomes small. As for the timing of updating the internal parameters, rather than learning all the learning samples at once, it is common to divide the learning data (704) into several sets called mini-batches {m_s} (s=1,...,Nm, Nm: number of mini-batches, m_s⊂{(f_i,g_i)}) (705) and to update the internal parameters for each mini-batch. This is called mini-batch learning, and once all mini-batches have been learned, all learning samples have been used for learning. Learning all the mini-batches once is called one epoch, and the internal parameters are optimized by repeating epochs many times. The epochs are denoted {e_t} (t=1,...,Ne, Ne: number of epochs). FIG. 7 illustrates the mini-batch division within the first epoch e_1 (700). Although not illustrated, the same mini-batch division is performed within the second and subsequent epochs e_2 to e_Ne (701 to 703). The learning samples contained in the mini-batches may also be shuffled for each epoch.
 An example of the learning sequence of the evaluation engine using the extended learning data {(f_i, g'_ij)} in this embodiment is described with reference to FIG. 8. 800 to 803 are epochs and 805 is a mini-batch. In this example, the learning data {(f_i, g_i)} (704) in FIG. 7 is replaced by the extended learning data {(f_i, g'_ij)} (804). The mini-batches generated by dividing the extended learning data are called extended mini-batches {m'_s} (s=1,...,Nm', Nm': number of extended mini-batches, m'_s⊂{(f_i,g'_ij)}). In general, the number of extended learning samples NF in the extended learning data is larger than the number of learning samples Nf in the learning data (NF>Nf), so compared with FIG. 7, in the example of FIG. 8 the number of mini-batches Nm' or the number of samples contained in one extended mini-batch increases. Therefore, for the same number of epochs, the learning time increases.
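 A rough sketch of learning sequence 1 follows; the optimizer is abstracted behind an update_engine callback, and all names are illustrative assumptions rather than the patent's notation:

```python
# Sketch of learning sequence 1: build the extended learning data once,
# then run ordinary mini-batch training over it every epoch.
import numpy as np

rng = np.random.default_rng(0)

def train_sequence1(learning_data, sample_g_prime, update_engine,
                    n_ext=6, batch_size=32, n_epochs=10):
    # Extended learning data {(F_k, G_k)}: every sample is extended up front,
    # so the data set becomes roughly n_ext times larger (NF > Nf).
    extended = [(f, sample_g_prime(g, rng))
                for f, g in learning_data for _ in range(n_ext)]
    for epoch in range(n_epochs):
        order = rng.permutation(len(extended))           # shuffle each epoch
        for start in range(0, len(extended), batch_size):
            batch = [extended[k] for k in order[start:start + batch_size]]
            update_engine(batch)                         # one internal-parameter update per extended mini-batch
```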
3.2 Learning Sequence 2
 Another example of the learning sequence of the evaluation engine using the extended learning data {(f_i, g'_ij)} in this embodiment is described with reference to FIG. 9. This example is characterized in that, for the i-th learning sample (f_i, g_i) contained in the s-th mini-batch m_s obtained by dividing the learning data (904) given by the inspector, one extended learning sample (f_i, g'_i) is generated based on the variation distribution d and substituted for the learning sample (f_i, g_i), thereby generating the extended mini-batch m'_s. For example, one extended learning sample (f_1, g'_1) (916) is generated (911) from one learning sample (f_1, g_1) (909) contained in the first mini-batch m_1 (905). This is done for all learning samples (909, 910, etc.) contained in all mini-batches (905 to 908, etc.) to generate the extended mini-batches (912 to 915, etc.) and the extended learning samples (916, 917, etc.).
 To supplement this feature: in general data augmentation, a plurality of extended learning samples are generated from one learning sample, so the learning time increases. That is, because the number of learning samples increases, the number of learning samples contained in one mini-batch before augmentation, the number of mini-batches, or both increase compared to before data augmentation. In this embodiment, one learning sample (f_i, g_i) before augmentation is simply replaced with one extended learning sample (f_i, g'_i), so neither the number of learning samples nor the number of mini-batches changes. The variation distribution d is treated as a probability distribution, and the extended correct evaluation value g'_i is generated with a random number that follows this probability distribution. This random generation of the extended correct evaluation value g'_i is performed in every epoch, so even for the same learning sample the value of g'_i can change from epoch to epoch. When the number of generations is small, the extended learning samples may be biased, but as learning progresses over many repeated epochs, the distribution of extended correct evaluation values in the extended learning samples that have been learned is expected to approach the variation distribution d. As a result, for the same number of epochs, extended learning data that reflects the information of the variation distribution can be learned in the same time as learning the original learning data {(f_i, g_i)}. This can also be viewed as allocating the extended learning data, whose number of samples has been increased by data augmentation, so that it is learned over the whole set of epochs, with the result that the variation distribution of the correct evaluation values is taken into account without increasing the learning cost.
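 A corresponding sketch of learning sequence 2 is shown below (same assumptions as the previous sketch); note that each mini-batch keeps its original size and the extended correct value is redrawn every epoch:

```python
# Sketch of learning sequence 2: per epoch, replace each (f_i, g_i) in each
# mini-batch with one extended sample (f_i, g'_i) drawn from d(g' ; g_i).
import numpy as np

rng = np.random.default_rng(0)

def train_sequence2(learning_data, sample_g_prime, update_engine,
                    batch_size=32, n_epochs=10):
    n = len(learning_data)
    for epoch in range(n_epochs):
        order = rng.permutation(n)                       # optional shuffling per epoch
        for start in range(0, n, batch_size):
            minibatch = [learning_data[k] for k in order[start:start + batch_size]]
            # Extended mini-batch m'_s: same size as m_s, with a fresh g'_i every epoch.
            extended_minibatch = [(f_i, sample_g_prime(g_i, rng)) for f_i, g_i in minibatch]
            update_engine(extended_minibatch)            # one internal-parameter update
```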
4. Changing the Variation Distribution During Learning
4.1 Basic Processing
 In this embodiment, when the evaluation engine is trained iteratively while generating extended learning data, the variation distribution d is updated during the iterations, and extended learning data is generated based on the updated variation distribution. As a specific example, the extended learning samples are input to the evaluation engine being trained, the reliability R(g'^_i) of the output estimated evaluation value g'^_i is obtained, and the variation distribution is updated during the iterations with the reliability R(g'^_i) as a parameter.
 To supplement this feature: there is a limit to how accurate a variation distribution can be given from the start. Therefore, the variation distribution is changed during learning so as to obtain a more appropriate distribution. As a method of changing the variation distribution, it is adapted according to the learning state of the evaluation engine.
 One concrete example of changing the variation distribution during learning is described with reference to FIG. 10. FIG. 10 incorporates a mechanism for changing the variation distribution during learning into the learning sequence described with FIG. 9. However, the learning sequence into which such a mechanism can be incorporated is not limited to that of FIG. 9; the change of the variation distribution during learning can also be applied, for example, to the learning sequence described with FIG. 8 and to other learning sequences. The epoch corresponding to the t-th epoch e_t in FIG. 9 is shown at 1000, and the other epochs are handled in the same way. First, mini-batches {m_s} (1002, 1003) are generated by dividing the learning data (1001), and for each learning sample (f_i, g_i) (1004 to 1006) contained in a mini-batch, an extended learning sample (f_i, g'_i) (1010 to 1012) is generated (1007) based on the variation distribution d (1017) and substituted for the learning sample (f_i, g_i), thereby generating the extended mini-batches {m'_s} (1008, 1009). The extended mini-batches {m'_s} are then learned to update the internal parameters of the evaluation engine (1013). The processing up to this point is the same as in FIG. 9. Next, each learning image f_i is input to the evaluation engine being trained (1013), the estimated evaluation value g'^_i is output, and the reliability R(g'^_i) (1014 to 1016) of g'^_i is calculated. Examples of ways to calculate the reliability include the difference between the correct evaluation value g_i and the estimated evaluation value g'^_i; the value d(g'^_i;g_i) obtained by treating the variation distribution d(g'_i;g_i) as a probability distribution and substituting the estimated evaluation value g'^_i (an indicator of how likely an estimate of g'^_i is); and, for classification problems, a method based on the spread of the degrees of membership to the classification classes. Regarding the degrees of membership, if the degree of membership to the correct classification class stands out, the reliability is high; if there is also membership to other classification classes, the reliability becomes lower according to its extent. The variation distribution (1017) is changed during learning according to such reliability. The change may be made after each mini-batch is learned or after each epoch. In the next mini-batch or epoch, extended learning samples (f_i, g'_i) (1010 to 1012) are again generated (1007) based on the changed variation distribution (1017), and the evaluation engine (1013) is trained with them.
 By repeatedly updating the variation distribution and the evaluation engine, an appropriate variation distribution that could not be assumed before learning can be estimated, and data augmentation based on this variation distribution makes it possible to obtain a higher-performance evaluation engine.
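 The patent lists several reliability measures but leaves the update rule open; the following sketch assumes a membership-based reliability and a simple blend of the current distribution row toward the engine's class probabilities, purely for illustration:

```python
# Sketch: compute a reliability for a learning sample from the engine's
# class probabilities and nudge a categorical variation-distribution row.
import numpy as np

def reliability_from_membership(probs, correct_idx):
    """High when the correct class dominates; lower when mass is spread out."""
    probs = np.asarray(probs, dtype=float)
    others = np.delete(probs, correct_idx)
    return float(probs[correct_idx] - others.max())       # value in [-1, 1]

def update_distribution(d_row, engine_probs, reliability, lr=0.1):
    """Blend the row of d(g' ; g) toward the engine's estimate when reliability
    is low; keep it unchanged when reliability is high."""
    d_row = np.asarray(d_row, dtype=float)
    weight = lr * max(0.0, -reliability)                   # move only when unreliable
    new_row = (1.0 - weight) * d_row + weight * np.asarray(engine_probs, dtype=float)
    return new_row / new_row.sum()
```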
4.2 Processing Variations
 Variations of the basic processing for changing the variation distribution described in 4.1 are explained here. In 4.1, the variation distribution was changed based on the reliability R(g'^_i) of the estimated evaluation value g'^_i of the extended learning sample. If the reliability is high, the variation distribution that generated that extended learning sample was probably appropriate; conversely, if the reliability is low, the variation distribution needs to be changed. The variation distribution may be changed (A) for each correct evaluation value g_i or (B) for each learning sample (f_i, g_i). In addition to the reliability R(g'^_i), (C) evaluation results on verification data or (D) a heuristic method may be used as the cue for changing the variation distribution.
 First, the variations of the parameter with which the variation distribution is changed are described in detail. In case (A) above, the variation distribution is switched according to the value of the correct evaluation value g_i, and the variation distribution is given as d(g'_i;g_i) with g_i as a parameter. That is, the variation distribution is changed based on the reliability of the estimated evaluation values g'^_i for the extended learning samples whose correct evaluation value is g_i.
 In case (B) above, the variation distribution is switched for each learning sample (f_i, g_i), and the variation distribution is given as d(g'_i;(f_i,g_i)) with (f_i, g_i) as a parameter. Since the reliability can be calculated for each extended learning sample (f_i, g'_ij), the appropriateness of the variation distribution can be evaluated for each learning sample (f_i, g_i) from which the extended learning samples were generated, and therefore the variation distribution can be changed for each learning sample (f_i, g_i). Giving a variation distribution manually for each learning sample would require an enormous amount of work, and it is also difficult to give an appropriate one. By updating the variation distribution based on the reliability of the learning samples so that the performance of the evaluation engine improves, the variation distribution can be optimized for each learning sample without human intervention, in parallel with the optimization of the internal parameters of the evaluation engine.
 Next, the variations of the cue used to change the variation distribution are described in detail. In case (C) above, the variation distribution is changed based on evaluation results on verification data. In training an evaluation engine, it is common to prepare data called verification data separately from the training data used for learning (called learning data in this disclosure) in order to obtain internal parameters with high generalization performance for unlearned data. During learning, the internal parameters are updated successively so that the estimation results for the learning data improve, but the internal parameters finally adopted may be those for which the estimation results on the verification data (unlearned data) not used for training are good. In this embodiment, the variation distribution may be selected based on the reliability of the estimated evaluation values of the verification data rather than of the learning data.
 In case (D) above, a heuristic method may be used to optimize the variation distribution. That is, with the variation distribution given before learning as the initial value, the variation distribution is slightly perturbed during learning and updated to a variation distribution that, as a result, improves the performance of the evaluation engine (the accuracy rate or reliability of the estimated evaluation values). With such a method, an appropriate variation distribution can be obtained heuristically, without using an analytical approach.
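 A rough sketch of such a heuristic search is given below; the perturbation size and the accept-if-improved rule are illustrative assumptions, and score() stands for any measure of evaluation-engine performance obtained with a candidate distribution:

```python
# Sketch: heuristic update of a categorical variation-distribution row.
import numpy as np

rng = np.random.default_rng(0)

def heuristic_update(d_row, score, step=0.05, n_trials=10):
    """score(row) returns engine performance (e.g. accuracy) when training with `row`.
    Keep a small random perturbation only if it improves the score."""
    best_row = np.asarray(d_row, dtype=float)
    best_score = score(best_row)
    for _ in range(n_trials):
        candidate = best_row + step * rng.normal(size=best_row.shape)
        candidate = np.clip(candidate, 0.0, None)
        candidate /= candidate.sum()                      # keep it a probability distribution
        s = score(candidate)
        if s > best_score:
            best_row, best_score = candidate, s
    return best_row
```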
5. GUI
 FIG. 11 shows an example of a Graphical User Interface (GUI) with which a user such as an inspector specifies and confirms the variation distributions in this embodiment (1100). This GUI can display the variation distribution for each correct evaluation value (1102 to 1104 in 1101). With radio buttons or the like, the displayed variation distribution can be switched between the distribution initially specified by the user and the distribution updated during learning (1105). In the latter case, by specifying the ID of an epoch (1106), the variation distribution being updated at an arbitrary epoch can be displayed. The variation distribution can be given as a histogram (e.g., FIG. 4), a polyline (e.g., FIG. 6), a combination of parametric functions, a free curve (e.g., 1102 and 1103 in FIG. 11), etc., and the form can be selected with radio buttons or the like (1107).
 As material for deciding on a variation distribution or for judging the validity of the processing results, information on the learning samples (f_i, g_i) can be displayed (1108). Several learning samples can be displayed side by side and compared (1109, 1121, 1133). Taking the information "Display 1" (1109) of one learning sample as an example, the display contents are described in detail. The learning sample to be displayed can be specified by the ID of the learning image (1110) and can also be filtered by the correct evaluation value g_i (1111). The inspection image f_i, the estimated evaluation value g^_i for the learning sample (f_i, g_i), and the reliability R(g^_i) can be displayed (1112, 1113, 1114). The inspection image (1112) in this example is an image of a defect inside a semiconductor device captured by an ultrasonic inspection apparatus. Information on the extended learning samples {(f_i, g'_ij)} generated from the learning sample (f_i, g_i) can also be displayed (1115), and the information of a plurality of extended samples can be displayed side by side (1116, 1117). The displayed contents include the extended correct evaluation value g'_ij, the estimated evaluation value g'^_ij for the extended learning sample (f_i, g'_ij), and the reliability R(g'^_ij) (1118, 1119, 1120). The information "Display 2" (1121) of another learning sample is the same as "Display 1"; that is, 1122 to 1132 correspond to 1110 to 1120. In this example, information on the learning samples is displayed in displays 1109 and 1121, but information on inspection samples (the inspection image f''_i, the estimated evaluation value g''^_i, etc.) can be displayed in the same way.
 According to this embodiment, in the automation of visual inspection using machine learning, overfitting of the evaluation engine can be suppressed even against variations in the correct evaluation values caused by fluctuations in judgment, erroneous teaching, the existence of multiple correct evaluation values, and so on. As a practical matter, it is difficult to improve the quality of learning samples through the efforts of inspectors alone, and variations in the correct evaluation values cannot be sufficiently resolved merely by padding the inspection images. This embodiment therefore provides a mechanism that maintains the evaluation value estimation performance of machine learning even if the quality of the learning samples deteriorates. This makes it possible to estimate the evaluation value from the inspection image of an inspection object with high accuracy and to evaluate the workmanship of the inspection object automatically.
 In this embodiment, two-dimensional image data was handled as the input information, but the method of this embodiment can also be applied when the input information is a one-dimensional signal such as a received ultrasonic wave or three-dimensional volume data acquired with a laser range finder or the like. The method of this embodiment can also be applied when there are multiple input images and multiple types of estimated evaluation values (an evaluation engine with multiple inputs and multiple outputs).
6. Hardware Configuration of the Automatic Visual Inspection System
 FIG. 12 shows an automatic visual inspection system that implements the visual inspection method described in the above embodiments. The automatic visual inspection system is composed of the imaging device described above and a computer. Examples of the imaging device are as already described.
 The computer is the component that processes the visual inspection method described in this embodiment, and has the following:
*Processor: Examples of the processor include a CPU, a GPU, and an FPGA, but other components may be used as long as they can process the visual inspection method.
*Storage resource: Examples of the storage resource include RAM, ROM, HDD, and non-volatile memory (flash memory, etc.). The storage resource may also include volatile memory (the aforementioned RAM is one example). The storage resource may store a program (referred to as the visual inspection program) that causes the processor to execute the visual inspection method described in the above embodiments. The storage resource may also store data referred to or generated by the visual inspection program. Examples of data stored in the storage resource are:
**learning images, correct evaluation values, learning data,
**extended learning samples, extended learning data,
**internal parameters of the evaluation engine,
**inspection images,
**estimated evaluation values.
*GUI device: Examples of the GUI device include a display and a projector, but other devices may be used as long as they can display the GUI.
*Input device: Examples of the input device include a keyboard, a mouse, and a touch panel, but other devices may be used as long as they can accept operations from the user. The input device and the GUI device may also be an integrated device.
*Communication interface device: Examples of the communication interface include USB, Ethernet, and Wi-Fi; any other interface device may be used as long as it can receive images directly from the imaging device or allows the user to send the images to the computer. A portable non-volatile storage medium (for example, a flash memory, DVD, CD-ROM, or Blu-ray disc) storing the images may also be connected to the communication interface, and the images stored in the computer.
The above is the hardware configuration of the computer. The automatic visual inspection system may include a plurality of computers and a plurality of imaging devices.
 The aforementioned visual inspection program may be stored in the computer through the following paths:
*The visual inspection program is stored in a portable non-volatile storage medium, and the program is distributed to the computer by connecting the medium to the communication interface.
*A program distribution server distributes the visual inspection program to the computer. The program distribution server has a storage resource that stores the visual inspection program, a processor that performs distribution processing for distributing the visual inspection program, and a communication interface device that can communicate with the communication interface device of the computer.
This concludes the description of the embodiments. As stated above, the embodiments described so far do not limit the claimed invention, and not all of the elements described in the embodiments and their combinations are necessarily essential to the solution of the invention.
This application claims the benefit of priority based on Japanese Patent Application No. 2021-47889 filed on March 22, 2021, the entire disclosure of which is incorporated herein by reference.
 100... Learning phase 101... Inspection phase 201, 202, 203... Learning samples 205, 206, 207... Extended learning sample groups 208, 209, 210... Extended learning samples

Claims (10)

  1. An appearance inspection method comprising:
     (a) storing, in a storage resource, learning data that is a set of learning samples, each being a pair of a learning image obtained by imaging an inspection object for learning and a correct evaluation value for the learning image;
     (b) varying the correct evaluation value of a learning sample included in the learning data according to a changeable predetermined variation distribution and generating an extended learning sample that is a learning sample having the varied value as its correct evaluation value;
     (c) generating extended learning data that is a set of the extended learning samples;
     (d) learning a relationship between learning images and evaluation values based on the extended learning data to determine internal parameters of an evaluation engine;
     (e) acquiring an inspection image obtained by imaging an inspection object; and
     (f) inputting the inspection image to the evaluation engine and acquiring, from an output of the evaluation engine, an estimated evaluation value that is an estimate of an evaluation value of the inspection image.
  2. The appearance inspection method according to claim 1, wherein
     in (b), an extended learning sample is generated from each of the learning samples included in the learning data by varying the correct evaluation value with a random number according to the variation distribution,
     in (c), the extended learning data is generated by replacing each learning sample of the learning data with the extended learning sample generated from that learning sample,
     in (d), the internal parameters of the evaluation engine are updated based on the extended learning data, and
     with (b) to (d) as one epoch, a plurality of epochs are repeated while the correct evaluation value is regenerated in (b) for each epoch.
  3.  The appearance inspection method according to claim 2, wherein:
     in (a), the learning data is further divided into a plurality of mini-batches that are subsets of the learning data;
     in (b), an extended learning sample is generated from each of the learning samples included in each mini-batch by changing its correct evaluation value using a random number according to the variation distribution;
     in (c), a plurality of extended mini-batches, each being the set of the extended learning samples generated in units of a mini-batch, are generated, and the extended learning data having the plurality of extended mini-batches as its subsets is generated; and
     in (d), the extended learning data is learned for each extended mini-batch, and the internal parameters are updated every time an extended mini-batch is learned.
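Claim 3 moves the perturbation to the mini-batch level, so each parameter update sees a freshly extended mini-batch. A sketch under the same assumptions, using torch.utils.data for the split:

```python
from torch.utils.data import DataLoader, TensorDataset

# (a) divide the learning data into mini-batches
loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)
for epoch in range(30):
    for batch_images, batch_labels in loader:
        # (b)-(c) build the extended mini-batch from this mini-batch's correct values
        extended_batch_labels = batch_labels + torch.randn_like(batch_labels) * sigma
        # (d) one parameter update per extended mini-batch
        optimizer.zero_grad()
        loss = F.mse_loss(engine(batch_images).squeeze(1), extended_batch_labels)
        loss.backward()
        optimizer.step()
```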
  4.  The appearance inspection method according to claim 1, further comprising:
     (g) displaying the variation distribution by means of a GUI so that a user can check it.
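Claim 4 only requires that the variation distribution be shown to the user. A minimal stand-in, continuing the sketches above, with a matplotlib window in place of the GUI described in the specification and the Gaussian shape again assumed:

```python
import numpy as np
import matplotlib.pyplot as plt

# (g) plot the assumed Gaussian variation distribution for visual confirmation
delta = np.linspace(-3 * sigma, 3 * sigma, 200)
density = np.exp(-delta**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
plt.plot(delta, density)
plt.xlabel("change applied to the correct evaluation value")
plt.ylabel("probability density")
plt.title(f"variation distribution (sigma = {sigma})")
plt.show()
```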
  5.  The appearance inspection method according to claim 2, wherein, in (d) during the repetition of the epochs, the learning images are input to the evaluation engine being trained, and the variation distribution used in (b) is changed based on the estimated evaluation values output from the evaluation engine.
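Claim 5 feeds the learning images back through the engine while the epochs repeat and lets the resulting estimates reshape the variation distribution. The concrete update rule below (sigma proportional to the mean absolute gap between estimated and taught values) is an assumption for illustration only; the claim does not fix one. Continuing the claim-1 sketch:

```python
for epoch in range(30):
    # (b)-(d) as in claim 2
    extended_labels = labels + torch.randn_like(labels) * sigma
    optimizer.zero_grad()
    loss = F.mse_loss(engine(images).squeeze(1), extended_labels)
    loss.backward()
    optimizer.step()

    # (d) -> (b): estimates on the learning images steer the variation distribution
    with torch.no_grad():
        estimates = engine(images).squeeze(1)
    sigma = 0.5 * (estimates - labels).abs().mean().item()
```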
  6.  The appearance inspection method according to claim 5, wherein a reliability is calculated based on the estimated evaluation values output from the evaluation engine, and the variation distribution used in (b) is changed based on the reliability.
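Claim 6 inserts a reliability between the estimates and the distribution change. One plausible reading, continuing the same sketch, with the reliability formula and the 0.1 scale assumed purely for illustration:

```python
# Per-sample reliability: taught values the engine already reproduces are trusted more,
# so their variation distribution stays narrow; disputed values get a wider one.
with torch.no_grad():
    estimates = engine(images).squeeze(1)
residual = (estimates - labels).abs()
reliability = 1.0 / (1.0 + residual)            # close agreement -> high reliability
per_sample_sigma = 0.1 * (1.0 - reliability)    # low reliability -> wider distribution
extended_labels = labels + torch.randn_like(labels) * per_sample_sigma
```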
  7.  The appearance inspection method according to claim 3, wherein, in (d) during the repetition of the epochs, every time an extended mini-batch is learned, the learning images are input to the evaluation engine being trained, and the variation distribution used in (b) is changed based on the estimated evaluation values output from the evaluation engine.
  8.  The appearance inspection method according to claim 7, wherein a reliability is calculated based on the estimated evaluation values output from the evaluation engine, and the variation distribution used in (b) is changed based on the reliability.
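Claims 7 and 8 apply the same feedback at mini-batch granularity. A sketch continuing the claim-3 setup, again with an assumed reliability formula and update rule:

```python
for epoch in range(30):
    for batch_images, batch_labels in loader:
        # (b)-(d) extend and learn one mini-batch
        extended_batch_labels = batch_labels + torch.randn_like(batch_labels) * sigma
        optimizer.zero_grad()
        loss = F.mse_loss(engine(batch_images).squeeze(1), extended_batch_labels)
        loss.backward()
        optimizer.step()

        # after each extended mini-batch: re-estimate, derive a reliability,
        # and change sigma before the next mini-batch is extended
        with torch.no_grad():
            estimates = engine(batch_images).squeeze(1)
        reliability = 1.0 / (1.0 + (estimates - batch_labels).abs().mean().item())
        sigma = 0.1 * (1.0 - reliability)
```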
  9.  An appearance inspection system having a processor and a storage resource, wherein the processor:
     (a) stores, in the storage resource, learning data that is a set of learning samples, each being a pair of a learning image obtained by imaging an inspection object for learning and a correct evaluation value for that learning image;
     (b) changes the correct evaluation value of a learning sample included in the learning data according to a changeable predetermined variation distribution, and generates an extended learning sample, namely a learning sample whose correct evaluation value is the changed value;
     (c) generates extended learning data that is the set of the extended learning samples;
     (d) determines internal parameters of an evaluation engine by learning the relationship between learning images and evaluation values based on the extended learning data;
     (e) acquires an inspection image obtained by imaging an inspection object; and
     (f) inputs the inspection image to the evaluation engine and obtains, from the output of the evaluation engine, an estimated evaluation value that is an estimate of the evaluation value of the inspection image.
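Claim 9 recasts the method as a system with a processor and a storage resource. A minimal object-oriented sketch, continuing the claim-1 imports, with the storage resource reduced to an in-memory dictionary and all class and method names hypothetical:

```python
class AppearanceInspectionSystem:
    def __init__(self, engine, sigma=0.05):
        self.storage = {}          # stands in for the storage resource
        self.engine = engine       # the evaluation engine
        self.sigma = sigma         # parameter of the assumed Gaussian variation distribution

    def store_learning_data(self, images, labels):          # (a)
        self.storage["learning_data"] = (images, labels)

    def train(self, epochs=30, lr=1e-3):                    # (b)-(d)
        images, labels = self.storage["learning_data"]
        optimizer = torch.optim.Adam(self.engine.parameters(), lr=lr)
        for _ in range(epochs):
            extended = labels + torch.randn_like(labels) * self.sigma
            optimizer.zero_grad()
            loss = F.mse_loss(self.engine(images).squeeze(1), extended)
            loss.backward()
            optimizer.step()

    def inspect(self, inspection_image):                     # (e)-(f)
        with torch.no_grad():
            return self.engine(inspection_image).item()
```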
  10.  An appearance inspection program for causing a computer to execute:
     (a) storing, in a storage resource, learning data that is a set of learning samples, each being a pair of a learning image obtained by imaging an inspection object for learning and a correct evaluation value for that learning image;
     (b) changing the correct evaluation value of a learning sample included in the learning data according to a changeable predetermined variation distribution, and generating an extended learning sample, namely a learning sample whose correct evaluation value is the changed value;
     (c) generating extended learning data that is the set of the extended learning samples;
     (d) determining internal parameters of an evaluation engine by learning the relationship between learning images and evaluation values based on the extended learning data;
     (e) acquiring an inspection image obtained by imaging an inspection object; and
     (f) inputting the inspection image to the evaluation engine and obtaining, from the output of the evaluation engine, an estimated evaluation value that is an estimate of the evaluation value of the inspection image.

PCT/JP2022/011438 2021-03-22 2022-03-14 Appearance inspection method and appearance inspection system WO2022202456A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023509035A JP7549736B2 (en) 2021-03-22 2022-03-14 Visual inspection method and visual inspection system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-047889 2021-03-22
JP2021047889 2021-03-22

Publications (1)

Publication Number Publication Date
WO2022202456A1 (en)

Family

ID=83397152

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/011438 WO2022202456A1 (en) 2021-03-22 2022-03-14 Appearance inspection method and appearance inspection system

Country Status (2)

Country Link
JP (1) JP7549736B2 (en)
WO (1) WO2022202456A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015176175A (en) * 2014-03-13 2015-10-05 日本電気株式会社 Information processing apparatus, information processing method and program
WO2019222734A1 (en) * 2018-05-18 2019-11-21 Google Llc Learning data augmentation policies
US20210056417A1 (en) * 2019-08-22 2021-02-25 Google Llc Active learning via a sample consistency assessment

Also Published As

Publication number Publication date
JPWO2022202456A1 (en) 2022-09-29
JP7549736B2 (en) 2024-09-11

Similar Documents

Publication Publication Date Title
Tomani et al. Post-hoc uncertainty calibration for domain drift scenarios
US10818000B2 (en) Iterative defect filtering process
EP3620990A1 (en) Capturing network dynamics using dynamic graph representation learning
Talvitie Model Regularization for Stable Sample Rollouts.
KR20180130925A (en) Artificial intelligent device generating a learning image for machine running and control method thereof
WO2014186488A2 (en) Tuning hyper-parameters of a computer-executable learning algorithm
Serret et al. Solving optimization problems with Rydberg analog quantum computers: Realistic requirements for quantum advantage using noisy simulation and classical benchmarks
JP7477608B2 (en) Improving the accuracy of classification models
TWI763451B (en) System, method, and non-transitory computer readable medium utilizing automatic selection of algorithmic modules for examination of a specimen
Plaza et al. Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty
CN112633461A (en) Application assistance system and method, and computer-readable recording medium
JP2021149842A (en) Machine learning system and machine learning method
CN114169460A (en) Sample screening method, sample screening device, computer equipment and storage medium
WO2022202456A1 (en) Appearance inspection method and appearance inspection system
US11688175B2 (en) Methods and systems for the automated quality assurance of annotated images
Papadopoulos et al. Reliable Confidence Intervals for Software Effort Estimation.
EP3696771A1 (en) System for processing an input instance, method, and medium
Thorström Applying machine learning to key performance indicators
Farhad et al. Keep your distance: determining sampling and distance thresholds in machine learning monitoring
Hall et al. Bias amplification in image classification
WO2023166776A1 (en) Appearance analysis system, appearance analysis method, and program
JP2011145905A (en) Prediction function generation device and method, and program
KR20220010516A (en) Inspection Device, Inspection Method and Inspection Program, and Learning Device, Learning Method and Learning Program
US20240127153A1 (en) Systems and methods for automated risk assessment in machine learning
WO2023166773A1 (en) Image analysis system, image analysis method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22775247

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023509035

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22775247

Country of ref document: EP

Kind code of ref document: A1