WO2019176988A1 - Inspection system, identification system, and discriminator evaluation device - Google Patents


Info

Publication number
WO2019176988A1
WO2019176988A1 (PCT application PCT/JP2019/010178)
Authority
WO
WIPO (PCT)
Prior art keywords
data
discriminator
evaluation
learning
classifier
Prior art date
Application number
PCT/JP2019/010178
Other languages
English (en)
Japanese (ja)
Inventor
真嗣 栗田
Original Assignee
オムロン株式会社 (Omron Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by オムロン株式会社 (Omron Corporation)
Publication of WO2019176988A1 publication Critical patent/WO2019176988A1/fr

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84: Systems specially adapted for particular applications
    • G01N 21/88: Investigating the presence of flaws or contamination
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • The present invention relates to an inspection system, an identification system, and a discriminator evaluation apparatus.
  • Patent Document 1 proposes an inspection apparatus that determines whether an inspection object shown in an image is normal or abnormal based on a learned first neural network and, when the object is determined to be abnormal, classifies the type of abnormality based on a learned second neural network.
  • The present inventor found the following problems in the conventional technique of determining the quality of a product from image data using a discriminator such as a learned neural network, as in Patent Document 1.
  • The discriminator is constructed so as to discriminate the quality of products shown in pre-collected image data, and therefore cannot cope with an unknown case such as a new type of defect. To deal with an unknown case, additional image data capturing that case is collected, and re-learning or additional learning of the discriminator is performed using the collected additional image data. As a result, the updated discriminator can cope with the unknown case learned from the additional image data.
  • However, the performance of the discriminator may deteriorate under the influence of learning to discriminate such special cases.
  • In AI technology such as neural networks, over-learning (over-fitting) may occur due to excessive adaptation to the good/bad cases appearing in the learning data, and the performance of the discriminator may deteriorate as a result.
  • When the performance of the discriminator deteriorates due to re-learning or additional learning, the reliability of the product quality determination by the discriminator is impaired. The present inventor found this problem.
  • This problem is not unique to scenes where product quality is judged. Similar problems may occur in any scene where a discriminator is updated by re-learning or additional learning, such as a scene where some state of a subject is identified from image data, or a scene where a characteristic is identified from data other than image data. That is, even when re-learning or additional learning of the classifier is performed using collected additional data so that the updated classifier can cope with the unknown cases learned from that data, the performance of the classifier in identifying the features appearing in the target data may nevertheless deteriorate.
  • In one aspect, the present invention has been made in view of such circumstances, and its object is to provide a technique for preventing the reliability of determinations made with a discriminator from being impaired even when the discrimination performance of the discriminator deteriorates due to re-learning or additional learning.
  • The present invention adopts the following configurations in order to solve the above-described problems.
  • The inspection system according to one aspect of the present invention is an inspection system that inspects the quality of a product, and includes: an evaluation data acquisition unit that acquires a plurality of evaluation data sets, each composed of a combination of evaluation image data showing the product and correct answer data indicating the correct answer to the quality determination of the product shown in the evaluation image data; a first evaluation unit that, for each evaluation data set, inputs the evaluation image data to a first discriminator constructed by machine learning using first learning data composed of image data for learning the quality of the product, and evaluates the determination performance of the first discriminator by comparing the result of determining the quality of the product shown in the evaluation image data based on the output obtained from the first discriminator with the correct answer indicated by the correct answer data; a second evaluation unit that, for each evaluation data set, inputs the evaluation image data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional image data for learning the quality of the product, and evaluates the determination performance of the second discriminator by comparing the result of determining the quality of the product shown in the evaluation image data based on the output obtained from the second discriminator with the correct answer indicated by the correct answer data; a performance determination unit that determines, based on the results of the evaluations of the first discriminator and the second discriminator, whether or not the determination performance of the second discriminator is worse than that of the first discriminator; a target data acquisition unit that acquires target image data showing the product to be inspected; and a pass/fail determination unit that determines the quality of the product shown in the target image data using the first discriminator when it is determined that the determination performance of the second discriminator is worse than that of the first discriminator, and determines the quality of the product shown in the target image data using the second discriminator when it is determined that the determination performance of the second discriminator is not worse than that of the first discriminator.
  • In this configuration, the determination performance of the first discriminator and the second discriminator is evaluated using a plurality of evaluation data sets, each composed of a combination of evaluation image data and correct answer data. The first discriminator is constructed by machine learning using the first learning data, while the second discriminator is constructed by machine learning using second learning data composed of the first learning data and additional image data. That is, in the relationship between the first discriminator and the second discriminator, the first discriminator is the discriminator before re-learning or additional learning, and the second discriminator is the discriminator after re-learning or additional learning.
  • With this configuration, it is possible to perform re-learning or additional learning of a discriminator so as to handle unknown cases, while monitoring whether the performance of the discriminator has deteriorated due to that re-learning or additional learning. As a result, even if the performance of the discriminator deteriorates due to re-learning or additional learning, the reliability of the determinations made with the discriminator can be kept from being impaired by not using the post-update discriminator (the second discriminator).
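  • The selection logic described in this configuration can be sketched as follows. This is a minimal illustration rather than the patent's embodiment: the lambda discriminators, the toy numeric "image data", and the plain correct-answer-rate comparison are hypothetical placeholders for a real learned model and its evaluation.

```python
def accuracy(classifier, evaluation_sets):
    """Correct-answer rate of `classifier` over (data, correct_answer) pairs."""
    hits = sum(1 for data, answer in evaluation_sets if classifier(data) == answer)
    return hits / len(evaluation_sets)

def select_discriminator(first, second, evaluation_sets):
    """Keep the pre-update discriminator if the updated one performs worse."""
    if accuracy(second, evaluation_sets) < accuracy(first, evaluation_sets):
        return first   # re-learning/additional learning degraded performance
    return second      # updated discriminator is safe to deploy

# Toy evaluation sets: a pixel-sum threshold stands in for a real pass/fail model.
evaluation_sets = [(10, "ok"), (90, "ng"), (20, "ok"), (85, "ng")]
first = lambda x: "ng" if x > 50 else "ok"     # discriminator before update
second = lambda x: "ng" if x > 15 else "ok"    # update that over-flags defects
chosen = select_discriminator(first, second, evaluation_sets)
print(chosen is first)  # True: the updated discriminator misjudges two sets
```

Here the first discriminator is retained because the second one fails on evaluation data that the original handled correctly, which is exactly the monitoring behavior the configuration describes.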
  • The "product" is not particularly limited and may be appropriately selected according to the embodiment.
  • The "product" may be, for example, a product conveyed on a production line, such as an electronic component or an automobile component.
  • The electronic component is, for example, a substrate, a chip capacitor, a liquid crystal, a relay winding, or the like.
  • The automobile parts are, for example, connecting rods, shafts, engine blocks, power window switches, panels, and the like. "Determining whether or not the product is good" may be simply determining whether or not the product is defective, or may further include identifying the type of the defect in addition to determining whether or not the product is defective.
  • The defects are, for example, scratches, dirt, cracks, dents, dust, burrs, color unevenness, and the like.
  • The "discriminator" may be configured by a learning model that can acquire the ability to perform a predetermined inference by machine learning, such as a neural network, a support vector machine, a self-organizing map, or a reinforcement learning model.
  • The first discriminator may itself be a discriminator after re-learning or additional learning in relation to some other, earlier discriminator.
  • "Re-learning" means performing machine learning again using the original learning data (the first learning data) used for the machine learning of the discriminator, whereas "additional learning" means updating the discriminator by machine learning using new learning data.
  • "Re-learning or additional learning" may include both updating an existing discriminator by these forms of machine learning and constructing a new discriminator by machine learning using the original learning data together with new learning data.
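  • As a toy illustration of how the training data is assembled in each case (a hypothetical sketch: the nearest-centroid `train` function stands in for any machine-learning model and is not part of the patent):

```python
def train(samples):
    """'Machine learning': fit one centroid per label from (value, label) pairs,
    then classify by nearest centroid."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    centroids = {label: sums[label] / counts[label] for label in sums}
    return lambda x: min(centroids, key=lambda lab: abs(x - centroids[lab]))

first_learning_data = [(1.0, "ok"), (2.0, "ok"), (8.0, "ng"), (9.0, "ng")]
additional_data = [(6.0, "ng"), (7.0, "ng")]   # newly collected unknown cases

# Re-learning: run machine learning again from the original learning data only.
relearned = train(first_learning_data)
# Second discriminator: machine learning on the second learning data,
# i.e. the first learning data combined with the additional data.
second_discriminator = train(first_learning_data + additional_data)

print(relearned(4.8), second_discriminator(4.8))  # ok ng
```

The value 4.8 is classified differently by the two models: only the discriminator trained with the additional data recognizes the new defect region, which is the motivation for updating, while the risk of such updates is what the evaluation scheme guards against.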
  • In the inspection system according to the above aspect, the first evaluation unit may evaluate the determination performance of the first discriminator by calculating, over the plurality of evaluation data sets, the ratio at which the result of determining the quality of the product shown in the evaluation image data based on the output obtained from the first discriminator matches the correct answer indicated by the correct answer data. Likewise, the second evaluation unit may evaluate the determination performance of the second discriminator by calculating, over the plurality of evaluation data sets, the ratio at which the result of determining the quality of the product based on the output obtained from the second discriminator matches the correct answer indicated by the correct answer data.
  • According to this configuration, the determination performance of each discriminator can be appropriately evaluated by the ratio at which its determination results match the correct answer data, that is, by the correct answer rate on the evaluation data sets. This makes it possible to appropriately determine whether or not the performance of the discriminator has deteriorated due to re-learning or additional learning.
  • In the inspection system according to the above aspect, a weight indicating the degree of contribution to the determination performance may be set for each evaluation data set.
  • According to this configuration, it is possible to detect that the second discriminator constructed by re-learning or additional learning makes pass/fail errors on evaluation image data of relatively high importance. Therefore, when the second discriminator misjudges evaluation image data of relatively high importance, the reliability of the pass/fail determination can be kept from being impaired by not using the second discriminator.
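  • One way to realize such weights is a weighted correct-answer rate, where misjudging a high-weight evaluation data set pulls the score down disproportionately. This is a hypothetical sketch; the patent does not prescribe a particular formula, and the data and classifiers below are illustrative only.

```python
def weighted_accuracy(classifier, weighted_sets):
    """Correct-answer rate where each (data, answer, weight) triple contributes
    its weight; a miss on a high-weight set hurts the score disproportionately."""
    total = sum(w for _, _, w in weighted_sets)
    hit = sum(w for data, answer, w in weighted_sets if classifier(data) == answer)
    return hit / total

# Third field is the weight: the 90 case is a critical defect example.
weighted_sets = [(10, "ok", 1.0), (90, "ng", 5.0), (20, "ok", 1.0)]
miss_critical = lambda x: "ok"                      # misses the high-weight defect
miss_minor = lambda x: "ng" if x >= 20 else "ok"    # misses one low-weight set
print(weighted_accuracy(miss_critical, weighted_sets))  # 2/7, heavily penalized
print(weighted_accuracy(miss_minor, weighted_sets))     # 6/7, mild penalty
```

Both classifiers miss exactly one evaluation set, but the one that misses the weighted defect case scores far lower, so the performance determination unit would reject it.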
  • In the inspection system according to the above aspect, the plurality of evaluation data sets may include contraindication data sets, for which the quality of the product shown in the evaluation image data must not be erroneously determined. The performance determination unit may determine that the determination performance of the second discriminator is worse than that of the first discriminator when the result of determining the quality of the product shown in the evaluation image data of a contraindication data set based on the output obtained from the first discriminator matches the correct answer indicated by the correct answer data, but the corresponding result based on the output obtained from the second discriminator does not match that correct answer.
  • According to this configuration, when the second discriminator misjudges the evaluation image data of a contraindication data set, the reliability of the pass/fail determination can be kept from being impaired by not using the second discriminator. In this case, the quality determination for the evaluation data sets other than the contraindication data sets may be omitted. This reduces the calculation cost of the process of evaluating the determination performance of the first and second discriminators and lightens the processing load on the arithmetic unit.
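  • The contraindication check can be sketched as a hard gate that runs before (or instead of) any aggregate scoring. The names and data here are illustrative only; the patent does not fix a concrete interface.

```python
def second_is_worse(first, second, contraindicated_sets):
    """The second discriminator is judged worse the moment it misjudges any
    contraindication data set that the first discriminator gets right;
    the remaining evaluation data sets need not be scored at all."""
    for data, answer in contraindicated_sets:
        if first(data) == answer and second(data) != answer:
            return True
    return False

# A defect pattern that must never be passed as a good product.
contraindicated_sets = [(95, "ng")]
first = lambda x: "ng" if x > 50 else "ok"   # pre-update discriminator
second = lambda x: "ok"                      # update regressed on the critical case
print(second_is_worse(first, second, contraindicated_sets))  # True
```

Because the gate returns as soon as one contraindicated case regresses, the cost saving mentioned above (skipping the other evaluation data sets) falls out naturally.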
  • In the inspection system according to the above aspect, the evaluation data acquisition unit may acquire the evaluation image data in the environment where the quality of the product is actually inspected. According to this configuration, the determination performance of each discriminator can be appropriately evaluated using evaluation image data obtained in the environment where the quality of the product is inspected. This makes it possible to appropriately determine whether or not the performance of the discriminator has deteriorated due to re-learning or additional learning.
  • From the inspection system according to each of the above aspects, a part may be extracted, such as the part that evaluates the determination performance of each discriminator or the part that determines the quality of the product using the first discriminator or the second discriminator, to configure an apparatus according to another aspect.
  • For example, the discriminator evaluation device according to one aspect of the present invention includes: an evaluation data acquisition unit that acquires a plurality of evaluation data sets, each composed of a combination of evaluation image data showing a product and correct answer data indicating the correct answer to the quality determination of the product shown in the evaluation image data; a first evaluation unit that, for each evaluation data set, inputs the evaluation image data to a first discriminator constructed by machine learning using first learning data composed of image data for learning the quality of the product, and evaluates the determination performance of the first discriminator by comparing the result of determining the quality of the product shown in the evaluation image data based on the output obtained from the first discriminator with the correct answer indicated by the correct answer data; a second evaluation unit that, for each evaluation data set, inputs the evaluation image data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional image data for learning the quality of the product, and evaluates the determination performance of the second discriminator by comparing the result of determining the quality of the product shown in the evaluation image data based on the output obtained from the second discriminator with the correct answer indicated by the correct answer data; a performance determination unit that determines, based on the results of the evaluations of the first discriminator and the second discriminator, whether or not the determination performance of the second discriminator is worse than that of the first discriminator; and an output unit that outputs the result of determining whether or not the determination performance of the second discriminator is worse than that of the first discriminator.
  • The inspection system according to each of the above aspects may be modified so that it can be applied to any scene in which a discriminator is updated by re-learning or additional learning, such as a scene in which some characteristic is identified from image data other than image data showing a product, or a scene in which a characteristic is identified from data including data other than image data.
  • For example, the identification system according to one aspect of the present invention includes: an evaluation data acquisition unit that acquires a plurality of evaluation data sets, each composed of a combination of evaluation image data and correct answer data indicating the correct answer to the identification of the state of the subject shown in the evaluation image data; a first evaluation unit that, for each evaluation data set, inputs the evaluation image data to a first discriminator constructed by machine learning using first learning data composed of image data for learning to identify the state of the subject, and evaluates the determination performance of the first discriminator by comparing the result of determining the state of the subject shown in the evaluation image data based on the output obtained from the first discriminator with the correct answer indicated by the correct answer data; a second evaluation unit that, for each evaluation data set, inputs the evaluation image data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional image data for learning to identify the state of the subject, and evaluates the determination performance of the second discriminator by comparing the result of determining the state of the subject shown in the evaluation image data based on the output obtained from the second discriminator with the correct answer indicated by the correct answer data; a performance determination unit that determines, based on the results of the evaluations of the first discriminator and the second discriminator, whether or not the determination performance of the second discriminator is worse than that of the first discriminator; a target data acquisition unit that acquires target image data showing the subject whose state is to be identified; and a state determination unit that determines the state of the subject shown in the target image data using the first discriminator when it is determined that the determination performance of the second discriminator is worse than that of the first discriminator, and using the second discriminator when it is determined that the determination performance of the second discriminator is not worse than that of the first discriminator.
  • The "subject" and the "state" of the subject to be identified need not be particularly limited, and may be appropriately selected according to the embodiment.
  • The "subject" may be, for example, the face of a person, the body of a person, a workpiece to be worked on, or the like.
  • When the subject is a face, the state to be identified may be, for example, the type of facial expression, the state of facial parts, or the identity of the person to whom the face belongs.
  • When the subject is a body, the state to be identified may be, for example, the pose of the body.
  • When the subject is a workpiece, the state to be identified may be, for example, the position and posture of the workpiece.
  • The identification system according to one aspect of the present invention includes: an evaluation data acquisition unit that acquires a plurality of evaluation data sets, each composed of a combination of evaluation data and correct answer data indicating the correct answer to the identification of a feature included in the evaluation data; a first evaluation unit that, for each evaluation data set, inputs the evaluation data to a first discriminator constructed by machine learning using first learning data composed of data for learning to identify the feature, and evaluates the determination performance of the first discriminator by comparing the result of determining the feature included in the evaluation data based on the output obtained from the first discriminator with the correct answer indicated by the correct answer data; a second evaluation unit that, for each evaluation data set, inputs the evaluation data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional data for learning to identify the feature, and evaluates the determination performance of the second discriminator by comparing the result of determining the feature included in the evaluation data based on the output obtained from the second discriminator with the correct answer indicated by the correct answer data; a performance determination unit that determines, based on the results of the evaluations of the first discriminator and the second discriminator, whether or not the determination performance of the second discriminator is worse than that of the first discriminator; a target data acquisition unit that acquires target data including the feature to be identified; and a feature determination unit that determines the feature included in the target data using the first discriminator when it is determined that the determination performance of the second discriminator is worse than that of the first discriminator, and using the second discriminator when it is determined that the determination performance of the second discriminator is not worse than that of the first discriminator.
  • The "data" may include any kind of data that can be analyzed by a discriminator and may be, for example, sound data (voice data), numerical data, text data, or the like, in addition to image data. Likewise, the "feature" may include any feature identifiable from the data.
  • When the "data" is sound data, the "feature" may be, for example, whether or not a specific sound (for example, an abnormal noise of a machine) is included.
  • When the "data" is numerical data or text data related to biological information such as blood pressure or activity amount, the "feature" may be, for example, the state of the subject.
  • When the "data" is numerical data or text data such as the driving amount of a machine, the "feature" may be, for example, the state of the machine.
  • In the identification system according to the above aspect, the first evaluation unit may evaluate the determination performance of the first discriminator by calculating, over the plurality of evaluation data sets, the ratio at which the result of determining the feature included in the evaluation data based on the output obtained from the first discriminator matches the correct answer indicated by the correct answer data, and the second evaluation unit may evaluate the determination performance of the second discriminator by calculating, over the plurality of evaluation data sets, the ratio at which the result of determining the feature included in the evaluation data based on the output obtained from the second discriminator matches the correct answer indicated by the correct answer data.
  • In the identification system according to the above aspect, a weight indicating the degree of contribution to the determination performance may be set for each evaluation data set.
  • In the identification system according to the above aspect, the plurality of evaluation data sets may include contraindication data sets, for which the feature included in the evaluation data must not be erroneously determined. The performance determination unit may determine that the determination performance of the second discriminator is worse than that of the first discriminator when the result of determining the feature included in the evaluation data of a contraindication data set based on the output obtained from the first discriminator matches the correct answer indicated by the correct answer data, but the corresponding result based on the output obtained from the second discriminator does not match that correct answer.
  • In the identification system according to the above aspect, the evaluation data acquisition unit may acquire the evaluation data in the environment where the feature included in the target data is determined.
  • From the identification system according to each of the above aspects, an apparatus according to another aspect may be configured by extracting a part.
  • For example, the discriminator evaluation apparatus according to one aspect of the present invention includes: an evaluation data acquisition unit that acquires a plurality of evaluation data sets, each composed of a combination of evaluation image data and correct answer data indicating the correct answer to the identification of the state of the subject shown in the evaluation image data; a first evaluation unit that, for each evaluation data set, inputs the evaluation image data to a first discriminator constructed by machine learning using first learning data composed of image data for learning to identify the state of the subject, and evaluates the determination performance of the first discriminator by comparing the result of determining the state of the subject shown in the evaluation image data based on the output obtained from the first discriminator with the correct answer indicated by the correct answer data; a second evaluation unit that, for each evaluation data set, inputs the evaluation image data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional image data for learning to identify the state of the subject, and evaluates the determination performance of the second discriminator by comparing the result of determining the state of the subject shown in the evaluation image data based on the output obtained from the second discriminator with the correct answer indicated by the correct answer data; a performance determination unit that determines, based on the results of the evaluations of the first discriminator and the second discriminator, whether or not the determination performance of the second discriminator is worse than that of the first discriminator; and an output unit that outputs the result of that determination.
  • The classifier evaluation apparatus according to one aspect of the present invention includes: an evaluation data acquisition unit that acquires a plurality of evaluation data sets, each composed of a combination of evaluation data and correct answer data indicating the correct answer to the identification of a feature included in the evaluation data; a first evaluation unit that, for each evaluation data set, inputs the evaluation data to a first discriminator constructed by machine learning using first learning data composed of data for learning to identify the feature, and evaluates the determination performance of the first discriminator by comparing the result of determining the feature included in the evaluation data based on the output obtained from the first discriminator with the correct answer indicated by the correct answer data; a second evaluation unit that, for each evaluation data set, inputs the evaluation data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional data for learning to identify the feature, and evaluates the determination performance of the second discriminator by comparing the result of determining the feature included in the evaluation data based on the output obtained from the second discriminator with the correct answer indicated by the correct answer data; a performance determination unit that determines, based on the results of the evaluations of the first discriminator and the second discriminator, whether or not the determination performance of the second discriminator is worse than that of the first discriminator; and an output unit that outputs the result of that determination.
  • One aspect of the present invention may also be an information processing method that realizes all or a part of each of the above configurations, a program, or a storage medium that stores such a program and that can be read by a computer, another device, a machine, or the like. Here, the computer-readable storage medium is a medium that stores information such as a program by electrical, magnetic, optical, mechanical, or chemical action.
  • For example, the inspection method according to one aspect of the present invention is an information processing method for inspecting the quality of a product, in which a computer executes: a step of acquiring a plurality of evaluation data sets, each composed of a combination of evaluation image data showing the product and correct answer data indicating the correct answer to the quality determination of the product shown in the evaluation image data; a step of, for each evaluation data set, inputting the evaluation image data to a first discriminator constructed by machine learning using first learning data composed of image data for learning the quality of the product, and evaluating the determination performance of the first discriminator by comparing the result of determining the quality of the product shown in the evaluation image data based on the output obtained from the first discriminator with the correct answer indicated by the correct answer data; a step of, for each evaluation data set, inputting the evaluation image data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional image data for learning the quality of the product, and evaluating the determination performance of the second discriminator in the same manner; a step of determining, based on the results of the evaluations of the first discriminator and the second discriminator, whether or not the determination performance of the second discriminator is worse than that of the first discriminator; a step of acquiring target image data showing the product to be inspected; and a step of determining the quality of the product shown in the target image data using the first discriminator when it is determined that the determination performance of the second discriminator is worse than that of the first discriminator, and using the second discriminator when it is determined that the determination performance of the second discriminator is not worse than that of the first discriminator.
  • The evaluation method according to one aspect of the present invention is an information processing method in which a computer executes: a step of acquiring a plurality of evaluation data sets, each composed of a combination of evaluation image data showing the product and correct answer data indicating the correct answer to the quality determination of the product shown in the evaluation image data; a step of, for each evaluation data set, inputting the evaluation image data to a first discriminator constructed by machine learning using first learning data composed of image data for learning the quality of the product, and evaluating the determination performance of the first discriminator by comparing the result of determining the quality of the product shown in the evaluation image data based on the output obtained from the first discriminator with the correct answer indicated by the correct answer data; a step of, for each evaluation data set, inputting the evaluation image data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional image data for learning the quality of the product, and evaluating the determination performance of the second discriminator in the same manner; a step of determining, based on the results of the evaluations of the first discriminator and the second discriminator, whether or not the determination performance of the second discriminator is worse than that of the first discriminator; and a step of outputting the result of that determination.
  • The evaluation program according to one aspect of the present invention is a program for causing a computer to execute: a step of acquiring a plurality of evaluation data sets, each composed of a combination of evaluation image data showing the product and correct answer data indicating the correct answer to the quality determination of the product shown in the evaluation image data; a step of, for each evaluation data set, inputting the evaluation image data to a first discriminator constructed by machine learning using first learning data composed of image data for learning the quality of the product, and evaluating the determination performance of the first discriminator by comparing the result of determining the quality of the product shown in the evaluation image data based on the output obtained from the first discriminator with the correct answer indicated by the correct answer data; a step of, for each evaluation data set, inputting the evaluation image data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional image data for learning the quality of the product, and evaluating the determination performance of the second discriminator in the same manner; a step of determining, based on the results of the evaluations of the first discriminator and the second discriminator, whether or not the determination performance of the second discriminator is worse than that of the first discriminator; and a step of outputting the result of determining whether or not the determination performance of the second discriminator is worse than that of the first discriminator.
  • according to one aspect of the present invention, a computer executes: a step of acquiring a plurality of evaluation data sets, each configured by a combination of evaluation image data and correct data indicating the correct answer for identifying the state of the subject shown in the evaluation image data; a step of, for each evaluation data set, inputting the evaluation image data to a first discriminator constructed by machine learning using first learning data composed of image data for performing learning for identifying the state of the subject, and evaluating the determination performance of the first discriminator by collating the result of determining the state of the subject shown in the evaluation image data, based on the output obtained from the first discriminator, with the correct answer indicated by the correct data; a step of, for each evaluation data set, inputting the evaluation image data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional image data for performing learning for identifying the state of the subject, and evaluating the determination performance of the second discriminator by collating the result of determining the state of the subject shown in the evaluation image data, based on the output obtained from the second discriminator, with the correct answer indicated by the correct data; a step of determining, based on the results of the evaluation for the first discriminator and the second discriminator, whether or not the determination performance of the second discriminator has deteriorated compared with the first discriminator; a step of acquiring target image data in which the subject to be identified is captured; a step of determining the state of the subject shown in the target image data using the first discriminator when it is determined that the determination performance of the second discriminator has deteriorated compared with the first discriminator; and a step of determining the state of the subject shown in the target image data using the second discriminator when it is determined that the determination performance of the second discriminator has not deteriorated compared with the first discriminator.
  • the classifier evaluation method according to one aspect of the present invention causes a computer to execute: a step of acquiring a plurality of evaluation data sets, each configured by a combination of evaluation image data and correct data indicating the correct answer for identifying the state of the subject shown in the evaluation image data; a step of, for each evaluation data set, inputting the evaluation image data to a first discriminator constructed by machine learning using first learning data composed of image data for performing learning for identifying the state of the subject, and evaluating the determination performance of the first discriminator by collating the result of determining the state of the subject shown in the evaluation image data, based on the output obtained from the first discriminator, with the correct answer indicated by the correct data; and a step of, for each evaluation data set, inputting the evaluation image data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional image data for performing learning for identifying the state of the subject, and evaluating the determination performance of the second discriminator by collating the result of determining the state of the subject shown in the evaluation image data, based on the output obtained from the second discriminator, with the correct answer indicated by the correct data.
  • the evaluation program according to one aspect of the present invention causes a computer to execute: a step of acquiring a plurality of evaluation data sets, each configured by a combination of evaluation image data and correct data indicating the correct answer for identifying the state of the subject shown in the evaluation image data; a step of, for each evaluation data set, inputting the evaluation image data to a first discriminator constructed by machine learning using first learning data composed of image data for performing learning for identifying the state of the subject, and evaluating the determination performance of the first discriminator by collating the result of determining the state of the subject shown in the evaluation image data, based on the output obtained from the first discriminator, with the correct answer indicated by the correct data; and a step of, for each evaluation data set, inputting the evaluation image data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional image data for performing learning for identifying the state of the subject, and evaluating the determination performance of the second discriminator by collating the result of determining the state of the subject shown in the evaluation image data, based on the output obtained from the second discriminator, with the correct answer indicated by the correct data.
  • according to one aspect of the present invention, a computer executes: a step of acquiring a plurality of evaluation data sets, each configured by a combination of evaluation data and correct data indicating the correct answer for identifying a feature included in the evaluation data; a step of, for each evaluation data set, inputting the evaluation data to a first discriminator constructed by machine learning using first learning data composed of data for performing learning for identifying the feature, and evaluating the determination performance of the first discriminator by collating the result of determining the feature included in the evaluation data, based on the output obtained from the first discriminator, with the correct answer indicated by the correct data; a step of, for each evaluation data set, inputting the evaluation data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional data for performing learning for identifying the feature, and evaluating the determination performance of the second discriminator by collating the result of determining the feature included in the evaluation data, based on the output obtained from the second discriminator, with the correct answer indicated by the correct data; a step of determining, based on the results of the evaluation, whether or not the determination performance of the second discriminator has deteriorated compared with the first discriminator; a step of acquiring target data including the feature to be identified; and a step of determining the feature included in the target data using the first discriminator when it is determined that the determination performance of the second discriminator has deteriorated compared with the first discriminator.
  • according to another aspect of the present invention, a computer executes: a step of acquiring a plurality of evaluation data sets, each configured by a combination of evaluation data and correct data indicating the correct answer for identifying a feature included in the evaluation data; a step of, for each evaluation data set, inputting the evaluation data to a first discriminator constructed by machine learning using first learning data composed of data for performing learning for identifying the feature, and evaluating the determination performance of the first discriminator; and a step of, for each evaluation data set, inputting the evaluation data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional data for performing learning for identifying the feature, and evaluating the determination performance of the second discriminator based on the output obtained from the second discriminator.
  • the evaluation program according to another aspect of the present invention causes a computer to execute: a step of acquiring a plurality of evaluation data sets, each configured by a combination of evaluation data and correct data indicating the correct answer for identifying a feature included in the evaluation data; a step of, for each evaluation data set, inputting the evaluation data to a first discriminator constructed by machine learning using first learning data composed of data for performing learning for identifying the feature, and evaluating the determination performance of the first discriminator by collating the result of determining the feature included in the evaluation data, based on the output obtained from the first discriminator, with the correct answer indicated by the correct data; and a step of, for each evaluation data set, inputting the evaluation data to a second discriminator constructed by machine learning using second learning data composed of the first learning data and additional data for performing learning for identifying the feature, and evaluating the determination performance of the second discriminator based on the output obtained from the second discriminator.
  • According to the present invention, it is possible to provide a technique for preventing the reliability of determination using a discriminator from being impaired even when the discrimination performance of the discriminator deteriorates due to re-learning or additional learning.
  • FIG. 1 schematically illustrates an example of a scene to which the present invention is applied.
  • FIG. 2 schematically illustrates an example of the hardware configuration of the classifier evaluation apparatus according to the embodiment.
  • FIG. 3 schematically illustrates an example of a hardware configuration of the learning device according to the embodiment.
  • FIG. 4 schematically illustrates an example of a hardware configuration of the inspection apparatus according to the embodiment.
  • FIG. 5 schematically illustrates an example of the software configuration of the classifier evaluation apparatus according to the embodiment.
  • FIG. 6 schematically illustrates an example of the software configuration of the learning device according to the embodiment.
  • FIG. 7 schematically illustrates an example of the software configuration of the inspection apparatus according to the embodiment.
  • FIG. 8 illustrates an example of a processing procedure of the learning device according to the embodiment.
  • FIG. 9 illustrates an example of the processing procedure of the classifier evaluation apparatus according to the embodiment.
  • FIG. 10 illustrates an example of a processing procedure of the inspection apparatus according to the embodiment.
  • FIG. 11 schematically illustrates an example of a software configuration of a discriminator evaluation apparatus according to another embodiment.
  • FIG. 12 schematically illustrates an example of a hardware configuration of an identification device according to another embodiment.
  • FIG. 13 schematically illustrates an example of the software configuration of the identification device according to another embodiment.
  • FIG. 14 schematically illustrates an example of a software configuration of a discriminator evaluation apparatus according to another embodiment.
  • FIG. 15 schematically illustrates an example of a software configuration of a learning device according to another embodiment.
  • FIG. 16 schematically illustrates an example of a hardware configuration of an identification device according to another embodiment.
  • FIG. 17 schematically illustrates an example of the software configuration of the identification device according to another embodiment.
  • Hereinafter, this embodiment will be described with reference to the drawings.
  • The embodiment described below is merely an example of the present invention in every respect. It goes without saying that various improvements and modifications can be made without departing from the scope of the present invention. That is, in implementing the present invention, a specific configuration according to the embodiment may be adopted as appropriate.
  • Although data appearing in this embodiment is described in natural language, more specifically it is specified by a pseudo-language, commands, parameters, a machine language, or the like that can be recognized by a computer.
  • a plurality of evaluation data sets each composed of a combination of evaluation data and correct answer data are acquired.
  • Next, the evaluation data is input to the first classifier before re-learning or additional learning, and the determination performance of the first classifier is evaluated by collating the result of determining the feature included in the evaluation data, based on the output obtained from the first classifier, with the correct answer indicated by the correct answer data.
  • Similarly, the evaluation data is input to the second classifier after re-learning or additional learning, and the determination performance of the second classifier is evaluated by collating the result of determining the feature included in the evaluation data, based on the output obtained from the second classifier, with the correct answer indicated by the correct data. Then, based on the results of the evaluation for the first classifier and the second classifier, it is determined whether the determination performance of the second classifier has deteriorated compared with the first classifier.
  • Thereby, it is possible to monitor whether the performance of the constructed classifier has deteriorated, for some reason such as over-fitting, compared with before re-learning or additional learning. Therefore, in one example of the present invention, even when the performance of the classifier has deteriorated due to re-learning or additional learning, this can be made visible, so that a classifier with deteriorated performance is not used and the reliability of the determination can be prevented from being impaired.
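The monitoring described above amounts to scoring both classifiers on the same held-out evaluation data sets and flagging degradation when the re-learned classifier scores lower. A minimal sketch in Python, assuming hypothetical classifier objects with a `predict` method (the names `first_clf`, `second_clf`, and `predict` are illustrative, not from the patent):

```python
def accuracy(classifier, evaluation_sets):
    """Fraction of evaluation data sets the classifier answers correctly."""
    correct = 0
    for evaluation_data, correct_answer in evaluation_sets:
        # Collate the classifier's output with the correct answer data.
        if classifier.predict(evaluation_data) == correct_answer:
            correct += 1
    return correct / len(evaluation_sets)


def has_degraded(first_clf, second_clf, evaluation_sets):
    """True when the second (re-learned) discriminator performs worse
    than the first on the shared evaluation data sets."""
    return accuracy(second_clf, evaluation_sets) < accuracy(first_clf, evaluation_sets)
```

The strict `<` comparison is one possible criterion; a tolerance threshold could equally be used.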
  • FIG. 1 schematically illustrates an example of a scene in which the present invention is applied to an appearance inspection of a product R.
  • the application range of the present invention is not limited to the example of visual inspection exemplified below.
  • the present invention can be applied to any scene where the classifier is updated by re-learning or additional learning.
  • the inspection system 100 illustrated in FIG. 1 includes a discriminator evaluation device 1, a learning device 2, and an inspection device 3 connected via a network, and is configured to inspect the quality of the product.
  • the type of network between the classifier evaluation device 1, the learning device 2, and the inspection device 3 may be appropriately selected from the Internet, a wireless communication network, a mobile communication network, a telephone network, a dedicated network, and the like.
  • the classifier evaluation device 1, the learning device 2, and the inspection device 3 are separate computers.
  • the configuration of the inspection system 100 may not be limited to such an example.
  • at least one pair among the classifier evaluation device 1, the learning device 2, and the inspection device 3 may be implemented as a single integrated computer.
  • Each of the classifier evaluation device 1, the learning device 2, and the inspection device 3 may be configured by a plurality of computers.
  • the learning device 2 is a computer configured to construct the first discriminator 5 and the second discriminator 6 used in the inspection device 3. Specifically, the learning device 2 constructs a learned first discriminator 5 that has acquired the ability to determine the quality of the product by performing machine learning using the first learning data 221 composed of the image data 222 for learning the quality of the product. In addition, the learning device 2 constructs a learned second discriminator 6 that has acquired the ability to determine the quality of the product by performing machine learning using the second learning data 226 composed of the first learning data 221 and the additional image data 227 for learning the quality of the product.
  • the learning device 2 can construct the first discriminator 5 and the second discriminator 6 used in the inspection device 3.
  • the inspection device 3 can acquire the learned first discriminator 5 and second discriminator 6 constructed by the learning device 2 via a network, for example.
  • Here, the first discriminator 5 is a discriminator before re-learning or additional learning, and the second discriminator 6 is a discriminator after re-learning or additional learning.
  • However, "before" and "after" re-learning or additional learning merely indicate a relative relationship between the first discriminator 5 and the second discriminator 6.
  • the first discriminator 5 may be a discriminator after relearning or additional learning in relation to other discriminators other than the second discriminator 6.
  • the second discriminator 6 may be a discriminator before re-learning or additional learning in the relationship with other discriminators other than the first discriminator 5.
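As described above, the second learning data is simply the first learning data augmented with additional labeled image data, and each discriminator is built by machine learning on its respective data. The following is an illustrative sketch only, using a toy stand-in learner (the patent does not prescribe any particular learning algorithm, and all names here are assumptions):

```python
from collections import Counter


def build_second_learning_data(first_learning_data, additional_data):
    """Second learning data = first learning data + additional image data."""
    return list(first_learning_data) + list(additional_data)


class MajorityClassifier:
    """Toy stand-in for a learned discriminator: predicts the most common
    label seen for an input during training, else the overall majority."""

    def __init__(self, learning_data):
        self._by_input = {}
        labels = []
        for data, label in learning_data:
            self._by_input.setdefault(data, Counter())[label] += 1
            labels.append(label)
        self._default = Counter(labels).most_common(1)[0][0]

    def predict(self, data):
        counts = self._by_input.get(data)
        if counts is None:
            return self._default
        return counts.most_common(1)[0][0]
```

In practice the learner would be a neural network or similar model; the point of the sketch is only the relationship between the two learning data sets.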
  • the discriminator evaluation device 1 is a computer configured to monitor the performance of the second discriminator 6 constructed by the learning device 2.
  • the discriminator evaluation apparatus 1 acquires a plurality of evaluation data sets 121, each composed of a combination of evaluation image data 122 showing a product and correct data 123 indicating the correct answer for the quality determination of the product shown in the evaluation image data 122.
  • the discriminator evaluation apparatus 1 inputs the evaluation image data 122 to the first discriminator 5 for each evaluation data set 121, and evaluates the determination performance of the first discriminator 5 by collating the result of determining the quality of the product shown in the evaluation image data 122, based on the output obtained from the first discriminator 5, with the correct answer indicated by the correct data 123.
  • Similarly, the classifier evaluation apparatus 1 inputs the evaluation image data 122 to the second classifier 6 for each evaluation data set 121, and evaluates the determination performance of the second discriminator 6 by collating the result of determining the quality of the product shown in the evaluation image data 122, based on the output obtained from the second classifier 6, with the correct answer indicated by the correct data 123.
  • Then, the discriminator evaluation apparatus 1 determines whether or not the determination performance of the second discriminator 6 has deteriorated compared with the first discriminator 5, based on the evaluation results for the first discriminator 5 and the second discriminator 6. Thereby, the discriminator evaluation device 1 can monitor the performance of the second discriminator 6 constructed by the learning device 2. The discriminator evaluation apparatus 1 appropriately outputs the result of this determination.
  • the inspection device 3 is a computer configured to inspect the quality of the product R. Specifically, the inspection apparatus 3 acquires target image data 321 in which the product R to be inspected is captured. In the present embodiment, the inspection apparatus 3 is connected to the camera 41, and acquires the target image data 321 by photographing the product R with the camera 41.
  • the inspection device 3 uses the first discriminator 5 or the second discriminator 6 to determine the quality of the product R shown in the target image data 321. That is, when it is determined that the determination performance of the second discriminator 6 has deteriorated compared with the first discriminator 5, the inspection device 3 determines the quality of the product R shown in the target image data 321 using the first discriminator 5. On the other hand, when it is determined that the determination performance of the second discriminator 6 has not deteriorated compared with the first discriminator 5, the inspection device 3 determines the quality of the product R shown in the target image data 321 using the second discriminator 6.
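The selection logic just described can be summarized compactly: fall back to the first discriminator whenever the second has been judged degraded, otherwise use the second. A hedged sketch, with illustrative names (the `second_degraded` flag would come from the discriminator evaluation device):

```python
def judge_product(target_image_data, first_clf, second_clf, second_degraded):
    """Return the pass/fail judgment for the product in the target image,
    using the first discriminator when the second has degraded."""
    chosen = first_clf if second_degraded else second_clf
    return chosen.predict(target_image_data)
```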
  • As described above, the learning device 2 constructs the second discriminator 6 by machine learning using the second learning data 226, which is obtained by adding new image data 227 to the first learning data 221 used for the machine learning of the first discriminator 5. Thereby, the learning device 2 can construct a second discriminator 6 that can also handle cases in which the quality of the product R could not be determined by the first discriminator 5.
  • Further, the discriminator evaluation apparatus 1 evaluates the determination performance of the first discriminator 5 and the second discriminator 6 using the plurality of evaluation data sets 121, and, by comparing the evaluation results, can monitor whether the determination performance of the second discriminator 6 has deteriorated compared with the first discriminator 5. Thereby, when the performance of the second discriminator 6 constructed by the learning device 2 has deteriorated, the use of that second discriminator 6 can be avoided, so that the reliability of the inspection is not impaired.
  • the product R to be subjected to the appearance inspection may not be particularly limited, and may be appropriately selected according to the embodiment.
  • the product R may be, for example, a product that is conveyed on a production line such as an electronic component or an automobile component.
  • the electronic component is, for example, a substrate, a chip capacitor, a liquid crystal, a relay winding, or the like.
  • the automobile parts are, for example, connecting rods, shafts, engine blocks, power window switches, panels, and the like.
  • The determination of pass/fail may be simply determining whether or not the product R has a defect, and may additionally include identifying the type of the defect.
  • the defects are, for example, scratches, dirt, cracks, dents, dust, burrs, color unevenness, and the like.
  • FIG. 2 schematically illustrates an example of a hardware configuration of the classifier evaluation apparatus 1 according to the present embodiment.
  • the classifier evaluation apparatus 1 is a computer in which a control unit 11, a storage unit 12, a communication interface 13, an input device 14, an output device 15, and a drive 16 are electrically connected.
  • the communication interface is described as “communication I / F”.
  • the control unit 11 includes a CPU (Central Processing Unit), which is a hardware processor, a RAM (Random Access Memory), a ROM (Read Only Memory), and the like, and is configured to execute information processing based on programs and various data.
  • the storage unit 12 is an example of a memory, and includes, for example, a hard disk drive or a solid state drive. In the present embodiment, the storage unit 12 stores various types of information such as an evaluation program 81, a plurality of evaluation data sets 121, first learning result data 224, second learning result data 229, and the like.
  • the evaluation program 81 is a program for causing the discriminator evaluation apparatus 1 to execute information processing (FIG. 9), described later, for evaluating the determination performance of the first discriminator 5 and the second discriminator 6, and includes a series of instructions for that information processing.
  • the plurality of evaluation data sets 121 are used to evaluate the determination performance of the first discriminator 5 and the second discriminator 6.
  • Each of the plurality of evaluation data sets 121 is composed of a combination of evaluation image data 122 showing a product and correct answer data 123 indicating correct answers for determination of quality of the product shown in the evaluation image data 122.
  • the first learning result data 224 is data for setting the learned first discriminator 5.
  • the second learning result data 229 is data for setting the learned second discriminator 6. Details will be described later.
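The learning result data above are described only abstractly, as data for setting up a learned discriminator. As one possible illustration (the JSON layout and field names are assumptions, not the patent's specification), such data might bundle learned parameters with the settings needed to reconstruct the discriminator:

```python
import json


def save_learning_result(path, weights, settings):
    """Store learned parameters and reconstruction settings as one file."""
    with open(path, "w") as f:
        json.dump({"weights": weights, "settings": settings}, f)


def load_learning_result(path):
    """Read back the data needed to set up the learned discriminator."""
    with open(path) as f:
        result = json.load(f)
    return result["weights"], result["settings"]
```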
  • the communication interface 13 is, for example, a wired LAN (Local Area Network) module, a wireless LAN module, or the like, and is an interface for performing wired or wireless communication via a network.
  • the classifier evaluation apparatus 1 can perform data communication via the network with other information processing apparatuses (for example, the learning apparatus 2 and the inspection apparatus 3) by using the communication interface 13.
  • The input device 14 is a device for performing input, such as a mouse or a keyboard.
  • The output device 15 is a device for performing output, such as a display or a speaker. The operator can operate the discriminator evaluation apparatus 1 by using the input device 14 and the output device 15.
  • the drive 16 is, for example, a CD drive, a DVD drive, or the like, and is a drive device for reading a program stored in the storage medium 91.
  • the type of the drive 16 may be appropriately selected according to the type of the storage medium 91.
  • At least one of the evaluation program 81, the plurality of evaluation data sets 121, the first learning result data 224, and the second learning result data 229 may be stored in the storage medium 91.
  • The storage medium 91 is a medium that accumulates information such as programs by electrical, magnetic, optical, mechanical, or chemical action so that a computer or other device or machine can read the recorded information.
  • The discriminator evaluation apparatus 1 may acquire at least one of the evaluation program 81, the plurality of evaluation data sets 121, the first learning result data 224, and the second learning result data 229 from the storage medium 91.
  • a disk-type storage medium such as a CD or a DVD is illustrated.
  • the type of the storage medium 91 is not limited to the disk type and may be other than the disk type.
  • Examples of the storage medium other than the disk type include a semiconductor memory such as a flash memory.
  • the control unit 11 may include a plurality of hardware processors.
  • the hardware processor may be constituted by a microprocessor, an FPGA (field-programmable gate array), a DSP (digital signal processor), or the like.
  • the storage unit 12 may be configured by a RAM and a ROM included in the control unit 11. At least one of the communication interface 13, the input device 14, the output device 15, and the drive 16 may be omitted.
  • the discriminator evaluation apparatus 1 may be composed of a plurality of computers. In this case, the hardware configurations of the computers may or may not match.
  • the classifier evaluation apparatus 1 may be a general-purpose server apparatus, a PC (Personal Computer), or the like, in addition to an information processing apparatus designed exclusively for the provided service.
  • FIG. 3 schematically illustrates an example of a hardware configuration of the learning device 2 according to the present embodiment.
  • the learning device 2 is a computer in which a control unit 21, a storage unit 22, a communication interface 23, an input device 24, an output device 25, and a drive 26 are electrically connected.
  • the communication interface is described as “communication I / F”.
  • the control unit 21 to the drive 26 of the learning device 2 may be configured in the same manner as the control unit 11 to the drive 16 of the classifier evaluation device 1, respectively. That is, the control unit 21 includes a CPU, which is a hardware processor, a RAM, a ROM, and the like, and is configured to execute various types of information processing based on programs and data.
  • the storage unit 22 is configured by, for example, a hard disk drive, a solid state drive, or the like.
  • the storage unit 22 includes a learning program 82, image data 222 for learning product quality, additional image data 227 for learning product quality, first learning result data 224, second learning result data 229, and the like. Various types of information are stored.
  • the learning program 82 is a program for causing the learning device 2 to execute machine learning information processing (FIG. 8), described later, for constructing the first discriminator 5 and the second discriminator 6, and for generating the first learning result data 224 and the second learning result data 229 as a result.
  • the learning program 82 includes a series of instructions for the information processing.
  • the image data 222 constitutes first learning data 221 used for machine learning of the first discriminator 5.
  • the first learning data 221 and the additional image data 227 constitute second learning data 226 used for machine learning of the second discriminator 6. Details will be described later.
  • the communication interface 23 is, for example, a wired LAN module, a wireless LAN module, or the like, and is an interface for performing wired or wireless communication via a network.
  • the learning device 2 can perform data communication via the network with other information processing devices (for example, the discriminator evaluation device 1 and the inspection device 3) by using the communication interface 23.
  • The input device 24 is a device for performing input, such as a mouse or a keyboard.
  • The output device 25 is a device for performing output, such as a display or a speaker. The operator can operate the learning device 2 by using the input device 24 and the output device 25.
  • the drive 26 is, for example, a CD drive, a DVD drive, or the like, and is a drive device for reading a program stored in the storage medium 92. At least one of the learning program 82, the image data 222, and the additional image data 227 may be stored in the storage medium 92. The learning device 2 may acquire at least one of the learning program 82, the image data 222, and the additional image data 227 from the storage medium 92.
  • the control unit 21 may include a plurality of hardware processors.
  • the hardware processor may be configured by a microprocessor, FPGA, DSP, or the like.
  • the storage unit 22 may be configured by a RAM and a ROM included in the control unit 21. At least one of the communication interface 23, the input device 24, the output device 25, and the drive 26 may be omitted.
  • the learning device 2 may be composed of a plurality of computers. In this case, the hardware configurations of the computers may or may not match. Further, the learning device 2 may be a general-purpose server device, a general-purpose PC, or the like, in addition to an information processing device designed exclusively for the provided service.
  • FIG. 4 schematically illustrates an example of the hardware configuration of the inspection apparatus 3 according to the present embodiment.
  • the inspection device 3 is a computer in which a control unit 31, a storage unit 32, a communication interface 33, an external interface 34, an input device 35, an output device 36, and a drive 37 are electrically connected.
  • In FIG. 4, the communication interface and the external interface are described as "communication I / F" and "external I / F", respectively.
  • the control unit 31 to the communication interface 33 and the input device 35 to the drive 37 of the inspection device 3 may be configured similarly to the control unit 11 to the drive 16 of the classifier evaluation device 1, respectively. That is, the control unit 31 includes a CPU, which is a hardware processor, a RAM, a ROM, and the like, and is configured to execute various types of information processing based on programs and data.
  • the storage unit 32 is configured by, for example, a hard disk drive, a solid state drive, or the like.
  • the storage unit 32 stores various types of information such as the inspection program 83, the first learning result data 224, the second learning result data 229, and the like.
  • the inspection program 83 uses the first discriminator 5 or the second discriminator 6 to cause the inspection apparatus 3 to execute information processing (FIG. 10) described later for determining the quality of the product R shown in the target image data 321. And includes a series of instructions for the information processing. Details will be described later.
  • the communication interface 33 is, for example, a wired LAN module, a wireless LAN module, or the like, and is an interface for performing wired or wireless communication via a network.
  • the inspection device 3 can perform data communication via the network with other information processing devices (for example, the discriminator evaluation device 1 and the learning device 2).
  • the external interface 34 is, for example, a USB (Universal Serial Bus) port, a dedicated port, or the like, and is an interface for connecting to an external device.
  • the type and number of external interfaces 34 may be appropriately selected according to the type and number of external devices to be connected.
  • the inspection apparatus 3 is connected to the camera 41 via the external interface 34.
  • the camera 41 is used to acquire the target image data 321 by photographing the product R.
  • the type and location of the camera 41 may not be particularly limited, and may be determined as appropriate according to the embodiment.
  • a known camera such as a digital camera or a video camera may be used.
  • the camera 41 may be disposed in the vicinity of the production line on which the product R is conveyed.
  • the inspection apparatus 3 may be connected to the camera 41 via the communication interface 33 instead of the external interface 34.
• the input device 35 is a device for performing input, such as a mouse or a keyboard.
• the output device 36 is a device for performing output, such as a display or a speaker. The operator can operate the inspection device 3 by using the input device 35 and the output device 36.
  • the drive 37 is, for example, a CD drive, a DVD drive, or the like, and is a drive device for reading a program stored in the storage medium 93. At least one of the inspection program 83, the first learning result data 224, and the second learning result data 229 may be stored in the storage medium 93. In addition, the inspection apparatus 3 may acquire at least one of the inspection program 83, the first learning result data 224, and the second learning result data 229 from the storage medium 93.
  • the control unit 31 may include a plurality of hardware processors.
  • the hardware processor may be configured by a microprocessor, FPGA, DSP, or the like.
  • the storage unit 32 may be configured by a RAM and a ROM included in the control unit 31.
  • At least one of the communication interface 33, the external interface 34, the input device 35, the output device 36, and the drive 37 may be omitted.
  • the inspection device 3 may be composed of a plurality of computers. In this case, the hardware configurations of the computers may or may not match.
  • the inspection device 3 may be a general-purpose server device, a general-purpose desktop PC, a notebook PC, a tablet PC, a mobile phone including a smartphone, or the like in addition to an information processing device designed exclusively for the provided service.
  • FIG. 5 schematically illustrates an example of the software configuration of the classifier evaluation apparatus 1 according to the present embodiment.
• the control unit 11 of the discriminator evaluation apparatus 1 expands the evaluation program 81 stored in the storage unit 12 in the RAM. Then, the control unit 11 interprets and executes the evaluation program 81 expanded in the RAM, and controls each component. Accordingly, as illustrated in FIG. 5, the classifier evaluation apparatus 1 according to the present embodiment operates as a computer including an evaluation data acquisition unit 111, a first evaluation unit 112, a second evaluation unit 113, a performance determination unit 114, and an output unit 115 as software modules. That is, in the present embodiment, each software module is realized by the control unit 11 (CPU).
• the evaluation data acquisition unit 111 acquires a plurality of evaluation data sets 121, each composed of a combination of evaluation image data 122 showing a product and correct answer data 123 indicating the correct answer for the quality determination of the product shown in the evaluation image data 122.
  • the first evaluation unit 112 includes a first discriminator 5 constructed by machine learning using the first learning data 221.
• the first evaluation unit 112 inputs the evaluation image data 122 to the first discriminator 5 for each evaluation data set 121, and acquires an output value from the first discriminator 5 by executing the arithmetic processing of the first discriminator 5.
• the output value obtained from the first discriminator 5 corresponds to the result of determining the quality of the product shown in the input evaluation image data 122. Therefore, the first evaluation unit 112 determines the quality of the product shown in the evaluation image data 122 based on the output value obtained from the first discriminator 5, and collates this determination with the correct answer indicated by the correct answer data 123 associated with the input evaluation image data 122. Thereby, the first evaluation unit 112 evaluates the determination performance of the first discriminator 5.
  • the second evaluation unit 113 includes a second discriminator 6 constructed by machine learning using the second learning data 226.
• the second evaluation unit 113 inputs the evaluation image data 122 to the second discriminator 6 for each evaluation data set 121, and acquires an output value from the second discriminator 6 by executing the arithmetic processing of the second discriminator 6.
• the output value obtained from the second discriminator 6 corresponds to the result of determining the quality of the product shown in the input evaluation image data 122. Therefore, the second evaluation unit 113 determines the quality of the product shown in the evaluation image data 122 based on the output value obtained from the second discriminator 6, and collates this determination with the correct answer indicated by the correct answer data 123 associated with the input evaluation image data 122. Thereby, the second evaluation unit 113 evaluates the determination performance of the second discriminator 6.
• based on the results of evaluating the determination performance of the first discriminator 5 and the second discriminator 6, the performance determination unit 114 determines whether or not the determination performance of the second discriminator 6 is worse than that of the first discriminator 5. In other words, it determines whether the determination performance of the second discriminator 6 is lower than the determination performance of the first discriminator 5. Then, the output unit 115 outputs the result of determining whether or not the determination performance of the second discriminator 6 is worse than that of the first discriminator 5.
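The evaluation and comparison flow described above can be sketched as follows. This is a minimal illustration, not the embodiment's implementation: the callable stand-in discriminators, the 0.5 decision threshold, and the use of accuracy as the performance measure are all assumptions.

```python
def evaluate(classifier, evaluation_sets):
    """Fraction of evaluation data sets for which the classifier's
    pass/fail determination matches the correct answer data."""
    correct = 0
    for image_data, correct_answer in evaluation_sets:
        output = classifier(image_data)            # arithmetic processing of the discriminator
        verdict = "ok" if output >= 0.5 else "ng"  # derive pass/fail from the output value
        if verdict == correct_answer:              # collate with the correct answer
            correct += 1
    return correct / len(evaluation_sets)

# Hypothetical discriminators standing in for the first and second classifiers.
first_discriminator = lambda x: 0.9 if x > 0.0 else 0.1
second_discriminator = lambda x: 0.9 if x > 1.0 else 0.1

evaluation_sets = [(2.0, "ok"), (-1.0, "ng"), (0.5, "ok"), (-2.0, "ng")]
first_score = evaluate(first_discriminator, evaluation_sets)
second_score = evaluate(second_discriminator, evaluation_sets)

# The performance determination unit's check corresponds to this comparison.
second_is_worse = second_score < first_score
```

Under these toy inputs, the second discriminator misjudges one sample that the first handles, so the comparison flags it as worse.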
  • the first discriminator 5 is configured by a neural network.
  • the first discriminator 5 is configured by a multilayer neural network used for so-called deep learning, and includes an input layer 51, an intermediate layer (hidden layer) 52, and an output layer 53.
• the neural network constituting the first discriminator 5 includes one intermediate layer 52; the output of the input layer 51 is input to the intermediate layer 52, and the output of the intermediate layer 52 is input to the output layer 53.
  • the number of intermediate layers 52 may not be limited to one layer.
  • the first discriminator 5 may include two or more intermediate layers 52.
  • Each layer 51 to 53 includes one or a plurality of neurons.
  • the number of neurons in the input layer 51 may be set according to input image data (evaluation image data 122, target image data 321).
  • the number of neurons in the intermediate layer 52 may be set as appropriate according to the embodiment.
  • the number of neurons in the output layer 53 may be set according to the output format of the pass / fail determination result.
• Neurons in adjacent layers are appropriately connected to each other, and a weight (connection load) is set for each connection.
  • each neuron is connected to all neurons in adjacent layers.
  • the neuron connection may not be limited to such an example, and may be appropriately set according to the embodiment.
  • a threshold is set for each neuron, and basically, the output of each neuron is determined by whether or not the sum of products of each input and each weight exceeds the threshold.
  • the first evaluation unit 112 inputs the evaluation image data 122 to the input layer 51 of the first discriminator 5, and performs firing determination of each neuron included in each layer in order from the input side as a calculation process of the neural network. Thereby, the first evaluation unit 112 can acquire an output value corresponding to the result of determining the quality of the product shown in the input evaluation image data 122 from the output layer 53.
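The firing determination described above, where each neuron's output depends on whether the weighted sum of its inputs exceeds its threshold, can be sketched with a toy network. The step-function activation, network size, and all numeric values are illustrative assumptions; an actual discriminator would operate on image data with learned parameters.

```python
def neuron_fires(inputs, weights, threshold):
    """Firing determination: output 1.0 when the sum of products of each
    input and each weight exceeds the neuron's threshold, else 0.0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if weighted_sum > threshold else 0.0

def forward(values, layers):
    """Propagate through the layers in order from the input side.
    Each layer is a list of (weights, threshold) pairs, one per neuron."""
    for layer in layers:
        values = [neuron_fires(values, w, t) for (w, t) in layer]
    return values

# Toy network: 2 inputs -> 2 intermediate neurons -> 1 output neuron.
hidden_layer = [([1.0, 1.0], 0.5), ([-1.0, 1.0], 0.0)]
output_layer = [([1.0, -1.0], 0.5)]
result = forward([1.0, 0.0], [hidden_layer, output_layer])
```

The value obtained from the final layer plays the role of the output value from the output layer 53.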
  • the second discriminator 6 is also configured by a neural network, like the first discriminator 5.
  • the second discriminator 6 may be configured similarly to the first discriminator 5. That is, the input layer 61, the intermediate layer (hidden layer) 62, and the output layer 63 may be configured similarly to the layers 51 to 53 of the first discriminator 5.
  • the structure of the neural network of the second discriminator 6 may not match the first discriminator 5.
  • the number of layers of the neural network constituting the second discriminator 6, the number of neurons in each layer, and the connection relationship between the neurons may be different from those of the neural network constituting the first discriminator 5.
  • the second evaluation unit 113 inputs the evaluation image data 122 to the input layer 61 of the second discriminator 6, and performs firing determination of each neuron included in each layer in order from the input side as a calculation process of the neural network. Thereby, the second evaluation unit 113 can acquire an output value corresponding to the result of determining the quality of the product R shown in the input evaluation image data 122 from the output layer 63.
• Information indicating the configuration of the first discriminator 5 (neural network) (for example, the number of layers in the network, the number of neurons in each layer, the connection relationship between neurons, and the transfer function of each neuron), the weight of the connection between the neurons, and the threshold value of each neuron is included in the first learning result data 224.
  • the first evaluation unit 112 sets the first discriminator 5 with reference to the first learning result data 224.
• Information indicating the configuration of the second discriminator 6 (neural network) (for example, the number of layers in the network, the number of neurons in each layer, the connection relationship between neurons, and the transfer function of each neuron), the weight of the connection between the neurons, and the threshold value of each neuron is included in the second learning result data 229.
  • the second evaluation unit 113 sets the second classifier 6 with reference to the second learning result data 229.
  • FIG. 6 schematically illustrates an example of the software configuration of the learning device 2 according to the present embodiment.
  • the control unit 21 of the learning device 2 expands the learning program 82 stored in the storage unit 22 in the RAM. Then, the control unit 21 interprets and executes the learning program 82 expanded in the RAM, and controls each component. Accordingly, as illustrated in FIG. 6, the learning device 2 according to the present embodiment is configured as a computer including the learning data acquisition unit 211 and the learning processing unit 212 as software modules. That is, in this embodiment, each software module is realized by the control unit 21 (CPU).
  • the learning data acquisition unit 211 acquires first learning data 221 composed of image data 222 for learning the quality of a product.
  • the learning processing unit 212 constructs the learned first discriminator 5 that has acquired the ability to determine the quality of the product by performing machine learning using the acquired first learning data 221.
  • the learning data acquisition unit 211 acquires the second learning data 226 including the first learning data 221 and additional image data 227 for learning the quality of the product.
  • the learning processing unit 212 constructs the learned second discriminator 6 that has acquired the ability to determine the quality of the product by performing machine learning using the acquired second learning data 226.
  • the second classifier 6 is a classifier after re-learning or additional learning in relation to the first classifier 5.
  • the learning model of each discriminator is configured by a neural network. Therefore, correct data 223 indicating the correct answer to the quality determination of the product shown in the image data 222 is assigned to the image data 222.
  • the additional image data 227 is provided with correct answer data 228 indicating a correct answer to the quality determination of the product shown in the additional image data 227.
  • the first learning data 221 is composed of a plurality of data sets each including a combination of the image data 222 and the correct answer data 223.
  • the second learning data 226 includes a plurality of data sets each including a combination of additional image data 227 and correct answer data 228, and first learning data 221.
• For each data set included in the first learning data 221, the learning processing unit 212 performs machine learning of the first discriminator 5 so that, when the image data 222 is input to the input layer 51, an output value corresponding to the correct answer data 223 associated with the input image data 222 is output from the output layer 53. Thereby, the learning processing unit 212 constructs the learned first discriminator 5 that has acquired the ability to judge the quality of the product. Then, the learning processing unit 212 stores information indicating the configuration of the learned first discriminator 5, the weight of the connection between the neurons, and the threshold value of each neuron in the storage unit 22 as the first learning result data 224.
• For each data set included in the second learning data 226, the learning processing unit 212 performs machine learning of the second discriminator 6 so that, when the image data (222, 227) is input to the input layer 61, an output value corresponding to the correct answer data (223, 228) associated with the input image data (222, 227) is output from the output layer 63.
  • the learning processing unit 212 constructs the learned second discriminator 6 that has acquired the ability to determine the quality of the product.
  • the learning processing unit 212 stores information indicating the configuration of the learned second discriminator 6, the weight of the connection between the neurons, and the threshold value of each neuron as the second learning result data 229 in the storage unit 22.
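As a concrete picture of what such learning result data might contain, the sketch below serializes a network's configuration, connection weights, and neuron thresholds to a file and restores them. The JSON format and the dictionary layout are assumptions made for illustration; the embodiment does not specify a storage format.

```python
import json

# Hypothetical learning result data: network configuration plus learned parameters.
learning_result = {
    "neurons_per_layer": [2, 3, 1],  # input, intermediate, and output layers
    "weights": [
        [[0.5, 0.3], [-0.2, 0.8], [0.1, -0.4]],  # input -> intermediate (per neuron)
        [[1.2, -0.7, 0.5]],                       # intermediate -> output
    ],
    "thresholds": [[0.1, 0.0, -0.2], [0.3]],      # intermediate and output neurons
}

with open("learning_result.json", "w") as f:
    json.dump(learning_result, f)

# A discriminator can later be set up by restoring the stored data.
with open("learning_result.json") as f:
    restored = json.load(f)
```

Restoring this data is what allows the evaluation units and the inspection apparatus to reconstruct the same learned discriminator.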
  • FIG. 7 schematically illustrates an example of the software configuration of the inspection apparatus 3 according to the present embodiment.
  • the control unit 31 of the inspection apparatus 3 expands the inspection program 83 stored in the storage unit 32 in the RAM. Then, the control unit 31 interprets and executes the inspection program 83 expanded in the RAM, and controls each component. Accordingly, as illustrated in FIG. 7, the inspection apparatus 3 according to the present embodiment is configured as a computer including the target data acquisition unit 311, the pass / fail determination unit 312, and the output unit 313 as software modules. That is, in this embodiment, each software module is realized by the control unit 31 (CPU).
  • the target data acquisition unit 311 acquires target image data 321 in which the product R to be inspected is captured.
  • the target data acquisition unit 311 acquires target image data 321 by photographing the product R with the camera 41.
  • the quality determination unit 312 determines the quality of the product R shown in the target image data 321 by using the first discriminator 5 or the second discriminator 6.
  • the output unit 313 outputs the result of determining the quality of the product R, that is, the result of the appearance inspection.
• the pass / fail judgment unit 312 includes at least one of the learned first discriminator 5 and the learned second discriminator 6. When it is determined that the determination performance of the second discriminator 6 is worse than that of the first discriminator 5, the pass / fail determination unit 312 uses the first discriminator 5 to determine the quality of the product R shown in the target image data 321. Specifically, the quality determination unit 312 sets the learned first discriminator 5 with reference to the first learning result data 224. Next, the pass / fail judgment unit 312 inputs the acquired target image data 321 to the first discriminator 5 and acquires an output value from the first discriminator 5 by executing the arithmetic processing of the first discriminator 5. Then, the quality determination unit 312 determines the quality of the product R shown in the target image data 321 based on the output value acquired from the first discriminator 5.
• On the other hand, when it is not determined that the determination performance of the second discriminator 6 is worse than that of the first discriminator 5, the pass / fail determination unit 312 uses the second discriminator 6 to determine the quality of the product R shown in the target image data 321.
  • the pass / fail judgment unit 312 sets the learned second discriminator 6 with reference to the second learning result data 229.
• the pass / fail judgment unit 312 inputs the acquired target image data 321 to the second discriminator 6 and acquires an output value from the second discriminator 6 by executing the arithmetic processing of the second discriminator 6.
  • the quality determination unit 312 determines quality of the product R shown in the target image data 321 based on the output value acquired from the second discriminator 6.
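The selection between the two discriminators described in the preceding paragraphs can be condensed into a short sketch. The callable stand-in discriminators and the 0.5 decision rule are assumptions for illustration, not part of the embodiment.

```python
def judge_quality(target_image_data, first_discriminator, second_discriminator,
                  second_is_worse):
    """Use the first discriminator when the second was judged worse;
    otherwise use the re-learned second discriminator."""
    discriminator = first_discriminator if second_is_worse else second_discriminator
    output = discriminator(target_image_data)  # arithmetic processing of the discriminator
    return "ok" if output >= 0.5 else "ng"     # quality determination from the output value

first = lambda data: 0.9   # hypothetical stand-in discriminators
second = lambda data: 0.2
verdict_fallback = judge_quality(None, first, second, second_is_worse=True)
verdict_updated = judge_quality(None, first, second, second_is_worse=False)
```

The fallback to the first discriminator keeps inspection quality from degrading when re-learning has made the second discriminator worse.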
  • each software module of the classifier evaluation device 1, the learning device 2, and the inspection device 3 is realized by a general-purpose CPU.
  • some or all of the above software modules may be implemented by one or more dedicated processors.
  • software modules may be omitted, replaced, and added as appropriate according to the embodiment.
  • FIG. 8 is a flowchart illustrating an example of a processing procedure of the learning device 2 according to this embodiment.
  • the processing procedure described below is merely an example, and each processing may be changed as much as possible. Further, in the processing procedure described below, steps can be omitted, replaced, and added as appropriate according to the embodiment.
• the learning device 2 can construct the first discriminator 5 and the second discriminator 6 by the same processing procedure, except that the learning data used differs. The processing procedure for constructing the first discriminator 5 is described first below.
  • step S101 Construction of first discriminator
• the control unit 21 operates as the learning data acquisition unit 211, and acquires the first learning data 221 composed of image data 222 for learning the quality of the product.
• since the learning model of the first discriminator 5 is configured by a neural network, the control unit 21 acquires the first learning data 221 including a plurality of data sets each composed of a combination of image data 222 and correct answer data 223.
  • the method for acquiring the first learning data 221 is not particularly limited, and may be appropriately determined according to the embodiment.
• For example, a camera is prepared, and products of the same type as the product R to be inspected, both products with defects (defective products) and products without defects (non-defective products), are photographed under various conditions using the prepared camera.
• a learning data set can be created by combining the obtained image data 222 with correct answer data 223 indicating the pass / fail status (correct answer) of the product shown in the image data 222.
  • the specific content of the correct answer data 223 may be appropriately determined according to the output format of each discriminator (5, 6).
• When each discriminator (5, 6) is configured such that the output value obtained from it indicates the quality of the product itself or the probability thereof, the content of the correct answer data 223 may be determined as appropriate so as to indicate the quality, or the probability thereof, of the product shown in the image data 222.
• When each discriminator (5, 6) is configured such that the output value obtained from it indicates an index of product quality, the content of the correct answer data 223 may be determined as appropriate so as to indicate the quality index of the product shown in the image data 222.
  • This index may be associated with each of the product having no defect (the product is a non-defective product) and the type of defect present in the product.
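One way to realize such an index, given purely as an assumed example, is to reserve index 0 for "no defect" and assign one index per defect type. The defect-type names below are hypothetical.

```python
# Hypothetical index assignment; the defect types are illustrative only.
CLASSES = ["no_defect", "scratch", "dent", "stain"]

def answer_index(label):
    """Encode the correct answer data as the index for the given label."""
    return CLASSES.index(label)

def interpret_output(index):
    """Derive the pass/fail judgment from a discriminator's index output:
    index 0 means non-defective, any other index names the defect type."""
    label = CLASSES[index]
    return ("ok" if label == "no_defect" else "ng", label)
```

With this encoding, a single output index yields both the pass/fail judgment and, for defective products, the type of defect.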
  • the creation of the first learning data 221 may be performed by the learning device 2.
  • the control unit 21 may create the first learning data 221 according to the operation of the input device 24 by the operator. Further, the control unit 21 may automatically create the first learning data 221 by the processing of the learning program 82. By executing this creation process, the control unit 21 can acquire the first learning data 221 in this step S101.
  • the creation of the first learning data 221 may be performed by an information processing device other than the learning device 2.
  • the first learning data 221 may be created manually by an operator or automatically by program processing.
  • the control unit 21 may acquire the first learning data 221 created by another information processing apparatus via the network, the storage medium 92, or the like.
• the number of data sets constituting the first learning data 221 is not particularly limited, and may be determined as appropriate, for example, to an extent that allows machine learning of the first discriminator 5 to be performed. When the first learning data 221 is acquired, the control unit 21 advances the processing to the next step S102.
• In step S102, the control unit 21 operates as the learning processing unit 212, and constructs the learned first discriminator 5 that has acquired the ability to determine the quality of the product by executing machine learning using the first learning data 221 acquired in step S101.
• Specifically, using the data sets constituting the first learning data 221, the control unit 21 performs machine learning of the neural network so that, when the image data 222 is input to the input layer 51, an output value corresponding to the correct answer indicated by the correct answer data 223 is output from the output layer 53.
  • the control unit 21 prepares a neural network (first discriminator 5 before learning) to be subjected to learning processing.
  • Each parameter such as the configuration of the neural network to be prepared, the initial value of the connection weight between the neurons, and the initial value of the threshold value of each neuron may be given by a template or by an operator input.
• Alternatively, the control unit 21 may prepare the neural network before learning based on the learning result data of another discriminator.
• Next, the control unit 21 executes the learning process of the neural network using, as input data, the image data 222 included in each data set of the first learning data 221 acquired in step S101, and using the correct answer data 223 as teacher data. A stochastic gradient descent method or the like may be used for this learning process.
• the control unit 21 inputs the image data 222 to the input layer 51 and performs firing determination of each neuron included in each of the layers 51 to 53 in order from the input side. Thereby, the control unit 21 obtains an output value from the output layer 53. Next, the control unit 21 calculates the error between the output value obtained from the output layer 53 and the value corresponding to the correct answer indicated by the correct answer data 223. Subsequently, using the error back-propagation method, the control unit 21 calculates, from the calculated output error, the error in the weight of each connection between neurons and the error in the threshold value of each neuron. Then, the control unit 21 updates the value of the weight of each connection between neurons and the threshold value of each neuron based on the calculated errors.
• the control unit 21 repeats this series of processing for each data set until the output value obtained from the output layer 53 when the image data 222 is input to the input layer 51 matches the value corresponding to the correct answer indicated by the correct answer data 223 associated with that image data 222.
• Thereby, the control unit 21 can construct a learned neural network (that is, the first discriminator 5) that, when the image data 222 is input, outputs an output value corresponding to the correct answer indicated by the correct answer data 223.
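The forward pass, error computation, back-propagation, and parameter-update cycle described in the preceding paragraphs can be sketched with a single-neuron "network". The sigmoid activation, squared-error objective, learning rate, and toy data are all assumptions; the embodiment specifies only the general cycle and leaves such details open.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Single-neuron stand-in: weights w and a threshold b, output = sigmoid(w.x - b).
w, b = [0.0, 0.0], 0.0
learning_rate = 0.5  # assumed value

# Toy learning data: (input, correct answer: 1.0 = non-defective, 0.0 = defective).
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]

for _ in range(200):  # repeat the series of processing
    for x, t in data:
        y = sigmoid(sum(xi * wi for xi, wi in zip(x, w)) - b)  # forward pass
        err = y - t                                            # output error
        grad = err * y * (1.0 - y)                             # back-propagated error
        w = [wi - learning_rate * grad * xi for wi, xi in zip(w, x)]  # update weights
        b = b + learning_rate * grad                           # update threshold

out_good = sigmoid(sum(xi * wi for xi, wi in zip([1.0, 0.0], w)) - b)
out_bad = sigmoid(sum(xi * wi for xi, wi in zip([0.0, 1.0], w)) - b)
# After training, out_good approaches 1 and out_bad approaches 0.
```

In practice the exact-match stopping criterion is usually relaxed to an error tolerance, since a sigmoid output only approaches the target values asymptotically.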
  • the control unit 21 proceeds to the next step S103.
• In step S103, the control unit 21 operates as the learning processing unit 212, and stores information indicating the configuration of the first discriminator 5 constructed by machine learning, the weight of the connection between the neurons, and the threshold value of each neuron in the storage unit 22 as the first learning result data 224. Thereby, the control unit 21 completes the learning process of the first discriminator 5.
• the learning device 2 can construct the second discriminator 6 in the same manner as the first discriminator 5, except that the learning data used differs.
• That is, in step S101, the control unit 21 operates as the learning data acquisition unit 211, and acquires the second learning data 226 including the first learning data 221 and additional image data 227 for learning the quality of the product.
• Since the learning model of the second discriminator 6 is configured by a neural network, the control unit 21 acquires the second learning data 226 composed of the first learning data 221 and a plurality of data sets each including a combination of additional image data 227 and correct answer data 228.
  • the additional image data 227 and correct answer data 228 of the second learning data 226 may be acquired by the same method as the first learning data 221.
• the control unit 21 can acquire the second learning data 226 by executing the above data set creation process. Alternatively, when the second learning data 226 is created by another information processing device, the control unit 21 can acquire the second learning data 226 created by the other information processing device via the network, the storage medium 92, or the like. When the second learning data 226 is acquired, the control unit 21 proceeds to the next step S102.
• In step S102, the control unit 21 operates as the learning processing unit 212, and constructs the learned second discriminator 6 that has acquired the ability to determine the quality of the product by executing machine learning using the second learning data 226 acquired in step S101.
• Specifically, using each data set constituting the second learning data 226, the control unit 21 performs machine learning of the neural network so that, when the image data (222, 227) is input to the input layer 61, an output value corresponding to the correct answer data (223, 228) is output from the output layer 63.
  • parameters such as the configuration of the neural network to be prepared (second discriminator 6 before learning), the initial value of the connection weight between neurons, and the initial value of the threshold value of each neuron may be given by the template. It may be given by an operator input. Further, when performing re-learning or additional learning of the first discriminator 5, the control unit 21 may prepare a neural network before learning based on the first learning result data 224.
  • the control unit 21 uses the image data (222, 227) included in each data set of the second learning data 226 acquired in step S101 as input data, and uses the correct data (223, 228) as teacher data. To execute the above learning process of the neural network.
• When performing re-learning or additional learning of the first discriminator 5, the control unit 21 may omit machine learning using the first learning data 221 (the image data 222 and the correct answer data 223). In this way, the control unit 21 can construct a learned neural network (that is, the second discriminator 6) that, when the image data (222, 227) is input, outputs an output value corresponding to the correct answer indicated by the correct answer data (223, 228) associated with the input image data (222, 227). When the learning process of the second discriminator 6 is completed, the control unit 21 proceeds to the next step S103.
• In step S103, the control unit 21 operates as the learning processing unit 212, and stores information indicating the configuration of the second discriminator 6 constructed by machine learning, the weight of the connection between the neurons, and the threshold value of each neuron in the storage unit 22 as the second learning result data 229. Thereby, the control unit 21 completes the learning process of the second discriminator 6.
• After the process of step S103 is completed, the control unit 21 may transfer the created first learning result data 224 and second learning result data 229 to the discriminator evaluation apparatus 1 and the inspection apparatus 3.
  • the control unit 21 may store the created first learning result data 224 and second learning result data 229 in a data server such as NAS (Network Attached Storage).
  • the classifier evaluation device 1 and the inspection device 3 may acquire the first learning result data 224 and the second learning result data 229 from the data server, respectively.
  • FIG. 9 is a flowchart illustrating an example of a processing procedure of the discriminator evaluation apparatus 1 according to this embodiment.
  • the processing procedure described below is an example of the classifier evaluation method of the present invention. However, the processing procedure described below is merely an example, and each processing may be changed as much as possible. Further, in the processing procedure described below, steps can be omitted, replaced, and added as appropriate according to the embodiment.
• In step S201, the control unit 11 operates as the evaluation data acquisition unit 111, and acquires a plurality of evaluation data sets 121, each composed of a combination of evaluation image data 122 showing a product and correct answer data 123 indicating the correct answer for the quality determination of the product shown in the evaluation image data 122.
• the evaluation data set 121 for each case can be created in the same manner as the learning data sets. That is, a camera is prepared, and products of the same type as the product R to be inspected, both products with defects (defective products) and products without defects (non-defective products), are photographed under various conditions using the prepared camera. As a result, evaluation image data 122 showing products whose quality is known can be obtained. Then, an evaluation data set 121 for each case can be created by combining the obtained evaluation image data 122 with correct answer data 123 indicating the pass / fail status (correct answer) of the product shown in the evaluation image data 122.
• the correct answer data 123 may be set so as to correspond to the output value of each discriminator (5, 6). Specifically, when the output value obtained from each discriminator (5, 6) indicates the quality of the product itself or the probability thereof, the correct answer data 123 may be set to indicate the quality, or the probability thereof, of the product shown in the evaluation image data 122. When the output value obtained from each discriminator (5, 6) indicates a product quality index, the correct answer data 123 may be set so as to indicate the quality index of the product shown in the evaluation image data 122.
  • Alternatively, the content of the correct answer data 123 may be set to correspond to the result of determining the quality of the product based on the output value of each discriminator (5, 6). Specifically, when the output value obtained from each discriminator (5, 6) indicates the quality of the product itself or its probability, the correct answer data 123 may be set to indicate whether the product shown in the evaluation image data 122 has a defect. When the output value obtained from each discriminator (5, 6) indicates a quality index of the product, the correct answer data 123 may be set to indicate that the product shown in the evaluation image data 122 has no defect or to indicate the type of defect present in the product.
  • The control unit 11 may acquire the plurality of evaluation data sets 121 by repeatedly executing the same process as the data set creation process. Alternatively, when the plurality of evaluation data sets 121 are created by another information processing apparatus, the control unit 11 can acquire, via a network, the storage medium 91, or the like, the plurality of evaluation data sets 121 created by the other information processing apparatus. When the plurality of evaluation data sets 121 have been acquired, the control unit 11 advances the processing to the next step S202.
  • Preferably, each evaluation data set 121 is acquired in the environment in which product quality is actually inspected.
  • In the present embodiment, the inspection apparatus 3 inspects the quality of the product R. Therefore, it is preferable that each evaluation data set 121 be acquired in the environment in which the inspection apparatus 3 is used.
  • For example, at least the evaluation image data 122 of each evaluation data set 121 may be acquired by the inspection apparatus 3. In that case, each evaluation data set 121 can be created by obtaining the evaluation image data 122 via the inspection apparatus 3 and adding the correct answer data 123 to the obtained evaluation image data 122.
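The pairing of evaluation image data with correct answer data described above can be sketched as follows; the Python names (`EvalDataSet`, `make_eval_sets`), the flat pixel-list representation, and the 0/1 label encoding are illustrative assumptions and do not appear in the embodiment.

```python
from dataclasses import dataclass

@dataclass
class EvalDataSet:
    image: list          # evaluation image data 122 (here: flat pixel values)
    correct_answer: int  # correct answer data 123 (1 = non-defective, 0 = defective)
    weight: float = 1.0  # optional weight indicating contribution to the evaluation

def make_eval_sets(images, labels):
    """Combine captured images with their pass/fail labels into evaluation data sets."""
    return [EvalDataSet(image=img, correct_answer=lab) for img, lab in zip(images, labels)]

# toy example: two "photographs" of products of known quality
sets_ = make_eval_sets([[0.1, 0.9], [0.8, 0.2]], [1, 0])
```

The same structure would serve whether the data sets are created on the evaluation apparatus itself or received from another information processing apparatus.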
  • (Step S202) In step S202, the control unit 11 operates as the first evaluation unit 112, inputs the evaluation image data 122 of each evaluation data set 121 to the first classifier 5, and acquires an output value from the first classifier 5 by executing the arithmetic processing of the first classifier 5.
  • Specifically, the control unit 11 sets the learned first discriminator 5 with reference to the first learning result data 224. Subsequently, the control unit 11 inputs the evaluation image data 122 of each evaluation data set 121 to the input layer 51 of the first discriminator 5 and fires each neuron included in the layers 51 to 53 sequentially from the input side. Thereby, the control unit 11 acquires, from the output layer 53 of the first discriminator 5, an output value corresponding to the result of determining the quality of the product shown in the input evaluation image data 122.
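The layer-by-layer firing described above can be illustrated with a minimal fully connected network; the 2-2-1 layout, the concrete weights, and the sigmoid activation are arbitrary assumptions standing in for the layers 51 to 53 of the first discriminator, whose actual parameters are given by the learning result data.

```python
import math

def forward(x, layers):
    """Fire each neuron sequentially from the input side: each layer is a
    (weights, biases) pair; the sigmoid activation here is an assumption,
    the embodiment does not fix a particular activation function."""
    for weights, biases in layers:
        x = [1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(row, x)) + b)))
             for row, b in zip(weights, biases)]
    return x

# toy 2-2-1 network standing in for a learned discriminator
layers = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.0, 0.1]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                   # output layer
]
out = forward([0.4, 0.6], layers)  # output value for one evaluation image
```

With a sigmoid output the resulting value lies in (0, 1) and can be read as the probability that the product is non-defective.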
  • Next, the control unit 11 collates the result of determining the quality of the product shown in the evaluation image data 122, based on the output value obtained from the first discriminator 5, with the correct answer indicated by the correct answer data 123 associated with the input evaluation image data 122. Thereby, the control unit 11 evaluates the determination performance of the first discriminator 5.
  • Here, a method of collating the determination result of the first discriminator 5 with the correct answer indicated by the correct answer data 123 will be described.
  • The method for collating the determination result with the correct answer may be determined appropriately according to the format of the correct answer data 123. For example, when the content of the correct answer data 123 is set to correspond to the output value of each discriminator (5, 6), the control unit 11 may determine whether the output value obtained from the first discriminator 5 matches or approximates the value corresponding to the correct answer indicated by the correct answer data 123.
  • When the output value obtained from the first discriminator 5 matches or approximates the value corresponding to the correct answer, the control unit 11 may determine that the result of determining the quality of the product shown in the evaluation image data 122 based on that output value is consistent with the correct answer indicated by the correct answer data 123. That is, the control unit 11 may determine that the quality determination by the first discriminator 5 for the target evaluation data set 121 is correct. On the other hand, when the output value obtained from the first discriminator 5 does not match or approximate the value corresponding to the correct answer, the control unit 11 may determine that the quality determination by the first discriminator 5 for the target evaluation data set 121 is incorrect.
  • In addition, when the content of the correct answer data 123 is set to correspond to the result of judging the quality of the product based on the output value of each discriminator (5, 6), the control unit 11 may first determine the quality of the product shown in the evaluation image data 122 based on the output value obtained from the first discriminator 5. When the output value obtained from each discriminator (5, 6) indicates the quality of the product itself or its probability, the control unit 11 can determine the quality of the product shown in the evaluation image data 122 by comparing the output value obtained from the first discriminator 5 with a threshold value.
  • Also, when the output value obtained from each discriminator (5, 6) indicates a quality index of the product, the discriminator evaluation apparatus 1 may hold, in the storage unit 12, reference information (not shown), such as in a table format, in which the output values obtained from each discriminator (5, 6) are associated with the quality of the product or the types of defects. In this case, by referring to the reference information, the control unit 11 can determine the quality of the product shown in the evaluation image data 122 according to the output value obtained from the first discriminator 5.
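The two determination aids just described, comparison with a threshold value and lookup in table-format reference information, might be sketched as follows; the threshold of 0.5, the label strings, and the defect-type table are illustrative assumptions rather than values fixed by the embodiment.

```python
def judge_by_threshold(output_value, threshold=0.5):
    """When the output value indicates the probability that the product is
    non-defective, compare it with a threshold (the value 0.5 is an assumption)."""
    return "non-defective" if output_value >= threshold else "defective"

# Hypothetical table-format reference information associating output values
# (here: class indices) with product quality or defect types.
REFERENCE = {0: "no defect", 1: "scratch", 2: "stain"}

def judge_by_reference(output_index):
    """When the output value indicates a quality index, look it up in the table."""
    return REFERENCE[output_index]
```

Either function yields a pass/fail determination that can then be collated against the correct answer data.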
  • Then, the control unit 11 may determine whether the result of determining the quality of the product shown in the evaluation image data 122 based on the output value obtained from the first discriminator 5 matches the correct answer indicated by the correct answer data 123. When the determination result of the first discriminator 5 matches the correct answer indicated by the correct answer data 123, the control unit 11 may determine that the quality determination by the first discriminator 5 for the target evaluation data set 121 is correct. On the other hand, when they do not match, the control unit 11 may determine that the quality determination by the first discriminator 5 for the target evaluation data set 121 is incorrect.
  • The determination performance of the first discriminator 5 may be evaluated as appropriate based on the result of comparing the determination results with the correct answers. For example, the control unit 11 may evaluate the determination performance of the first discriminator 5 by calculating, over the plurality of evaluation data sets 121, the ratio at which the result of determining the quality of the product shown in the evaluation image data 122 based on the output value obtained from the first discriminator 5 matches the correct answer indicated by the correct answer data 123 (hereinafter also simply referred to as the “correct answer rate”).
  • That is, the control unit 11 may calculate the correct answer rate of the pass/fail determinations by the first discriminator 5 for the plurality of evaluation data sets 121 as the evaluation result of the determination performance of the first discriminator 5.
  • Here, the “correct answer rate” is the quotient obtained by dividing the number of evaluation data sets 121 for which the determination by the first discriminator 5 was correct (that is, the number of correct answers) by the total number of evaluation data sets 121. The evaluation result may also include the number of correct answers.
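The correct answer rate defined above is a simple quotient; a minimal sketch, assuming the determinations and correct answers are given as parallel lists (the function name and list representation are assumptions):

```python
def correct_answer_rate(determinations, correct_answers):
    """Quotient of the number of correct determinations over the total number
    of evaluation data sets; also return the number of correct answers."""
    matches = sum(1 for d, c in zip(determinations, correct_answers) if d == c)
    return matches / len(correct_answers), matches

# four evaluation data sets, one of which the discriminator gets wrong
rate, n_correct = correct_answer_rate([1, 0, 1, 1], [1, 0, 0, 1])
```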
  • Further, a weight indicating the degree of contribution to the determination performance may be set for each evaluation data set 121, and the determination performance of the first discriminator 5 may be evaluated according to this weight.
  • As an example, the control unit 11 may calculate, as the evaluation result of the determination performance of the first discriminator 5, the quotient (hereinafter also referred to as the “correct answer rate including weight”) obtained by dividing the sum of the weights set for the evaluation data sets 121 for which the determination by the first discriminator 5 was correct by the sum of the weights set for all of the evaluation data sets 121.
  • The evaluation result may also include the total weight set for the evaluation data sets 121 for which the determination by the first discriminator 5 was correct.
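The correct answer rate including weight described above can be sketched in the same style; the parallel-list representation and function name are assumptions.

```python
def weighted_correct_answer_rate(determinations, correct_answers, weights):
    """Sum of the weights of correctly determined evaluation data sets,
    divided by the sum of the weights of all evaluation data sets."""
    correct_weight = sum(w for d, c, w in zip(determinations, correct_answers, weights)
                         if d == c)
    return correct_weight / sum(weights)

# the first (correctly determined) set contributes more to the evaluation
rate_w = weighted_correct_answer_rate([1, 0], [1, 1], [3.0, 1.0])
```

With all weights equal to 1.0, this reduces to the plain correct answer rate.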
  • In addition, the plurality of evaluation data sets 121 may include contraindication data sets, which are set such that the quality of the product shown in the evaluation image data 122 must not be determined erroneously.
  • In this case, the control unit 11 may evaluate the determination performance of the first discriminator 5 based on whether the quality determination by the first discriminator 5 for the contraindication data sets is incorrect, or based on the number of contraindication data sets for which the quality determination by the first discriminator 5 is incorrect. Specifically, the control unit 11 may evaluate the determination performance of the first discriminator 5 as low when the pass/fail determination for a contraindication data set is erroneous, and as high when the pass/fail determinations for the contraindication data sets are not erroneous.
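Counting erroneous determinations on the contraindication data sets, as described above, might be expressed as follows; the boolean flag marking contraindication sets is an illustrative assumption.

```python
def contraindication_errors(determinations, correct_answers, is_contraindicated):
    """Return the number of contraindication data sets for which the
    discriminator's determination differs from the correct answer; any
    nonzero count marks the determination performance as low."""
    return sum(1 for d, c, f in zip(determinations, correct_answers, is_contraindicated)
               if f and d != c)

# the second set is contraindicated and misjudged; the first is an ordinary set
n_errors = contraindication_errors([1, 0, 1], [1, 1, 1], [False, True, True])
```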
  • As described above, the control unit 11 can evaluate the determination performance of the first discriminator 5 by collating the result of determining the quality of the product shown in the evaluation image data 122, based on the output value obtained from the first discriminator 5, with the correct answer indicated by the correct answer data 123 associated with the input evaluation image data 122. When the evaluation of the determination performance of the first discriminator 5 is completed, the control unit 11 advances the processing to the next step S203.
  • (Step S203) In step S203, the control unit 11 operates as the second evaluation unit 113, inputs the evaluation image data 122 of each evaluation data set 121 to the second classifier 6, and acquires an output value from the second classifier 6 by executing the arithmetic processing of the second classifier 6.
  • As in step S202, the control unit 11 sets the learned second classifier 6 with reference to the second learning result data 229. Subsequently, the control unit 11 inputs the evaluation image data 122 of each evaluation data set 121 to the input layer 61 of the second discriminator 6 and fires each neuron included in the layers 61 to 63 sequentially from the input side. Thereby, the control unit 11 acquires, from the output layer 63 of the second discriminator 6, an output value corresponding to the result of determining the quality of the product shown in the input evaluation image data 122.
  • Next, the control unit 11 collates the result of determining the quality of the product shown in the evaluation image data 122, based on the output value obtained from the second discriminator 6, with the correct answer indicated by the correct answer data 123 associated with the input evaluation image data 122. Thereby, the control unit 11 evaluates the determination performance of the second discriminator 6.
  • the method for checking the determination result and the correct answer may be the same as in step S202.
  • That is, the control unit 11 may determine whether the quality determination by the second discriminator 6 for the target evaluation data set 121 is correct based on whether the output value obtained from the second discriminator 6 matches or approximates the value corresponding to the correct answer indicated by the correct answer data 123. Alternatively, the control unit 11 may determine whether the quality determination by the second discriminator 6 for the target evaluation data set 121 is correct based on whether the result of determining the quality of the product based on the output value obtained from the second discriminator 6 matches the correct answer indicated by the correct answer data 123.
  • the evaluation method of the judgment performance may be the same method as in the above step S202.
  • For example, the control unit 11 may evaluate the determination performance of the second discriminator 6 by calculating, over the plurality of evaluation data sets 121, the ratio (correct answer rate) at which the result of determining the quality of the product shown in the evaluation image data 122 based on the output value obtained from the second discriminator 6 matches the correct answer indicated by the correct answer data 123. That is, the control unit 11 may calculate the correct answer rate of the pass/fail determinations by the second discriminator 6 for the plurality of evaluation data sets 121 as the evaluation result of the determination performance of the second discriminator 6.
  • control unit 11 may evaluate the determination performance of the second discriminator 6 according to the weight set in each evaluation data set 121. As an example, the control unit 11 calculates the sum of the weights set in the evaluation data set 121 that is determined to be correct by the second discriminator 6 as the weight set in each evaluation data set 121. A quotient (correct answer rate including weight) divided by the sum of the above may be calculated as an evaluation result of the determination performance of the second discriminator 6.
  • the plurality of evaluation data sets 121 may include contraindication data sets that are set so as not to erroneously determine the quality of the product shown in the evaluation image data 122.
  • In this case, the control unit 11 may evaluate the determination performance of the second discriminator 6 based on whether the quality determination by the second discriminator 6 for the contraindication data sets is incorrect, or based on the number of contraindication data sets for which the pass/fail determination by the second discriminator 6 is incorrect.
  • As described above, the control unit 11 can evaluate the determination performance of the second discriminator 6 in the same manner as that of the first discriminator 5. When the evaluation of the determination performance of the second discriminator 6 is completed, the control unit 11 advances the processing to the next step S204. Note that this step S203 does not necessarily need to be executed after step S202; the timing for executing the processes of steps S202 and S203 may be determined appropriately according to the embodiment.
  • (Step S204) In step S204, based on the evaluation results of the determination performance of the first discriminator 5 and the second discriminator 6 obtained in steps S202 and S203, the control unit 11 determines whether the determination performance of the second discriminator 6 has deteriorated compared to the first discriminator 5, in other words, whether the determination performance of the second discriminator 6 is lower than that of the first discriminator 5.
  • The method for determining whether the determination performance of the second discriminator 6 is worse than that of the first discriminator 5 may be determined appropriately according to the methods used to evaluate the determination performance of the first discriminator 5 and the second discriminator 6.
  • For example, the control unit 11 compares the correct answer rate of the first discriminator 5 with the correct answer rate of the second discriminator 6. When the correct answer rate of the second discriminator 6 is lower than that of the first discriminator 5, the control unit 11 may determine that the determination performance of the second discriminator 6 has deteriorated compared to the first discriminator 5. On the other hand, when the correct answer rate of the second discriminator 6 is not lower than that of the first discriminator 5, the control unit 11 may determine that the determination performance of the second discriminator 6 has not deteriorated compared to the first discriminator 5.
  • Similarly, the control unit 11 may compare the correct answer rate including the weight of the first discriminator 5 with the correct answer rate including the weight of the second discriminator 6. When the correct answer rate including the weight of the second discriminator 6 is lower than that of the first discriminator 5, the control unit 11 may determine that the determination performance of the second discriminator 6 has deteriorated compared to the first discriminator 5. On the other hand, when it is not lower, the control unit 11 may determine that the determination performance of the second discriminator 6 has not deteriorated compared to the first discriminator 5.
  • Further, the control unit 11 may determine whether the determination performance of the second discriminator 6 has deteriorated compared to the first discriminator 5 according to the correctness of the determinations for the contraindication data sets. For example, when the result of determining the quality of the product shown in the evaluation image data 122 of a contraindication data set based on the output value obtained from the first discriminator 5 matches the correct answer indicated by the correct answer data 123, but the result based on the output value obtained from the second discriminator 6 does not, the control unit 11 may determine that the determination performance of the second discriminator 6 has deteriorated compared to the first discriminator 5. That is, when the second discriminator 6 erroneously makes a pass/fail determination for a contraindication data set that the first discriminator 5 can determine correctly, the control unit 11 may determine that the determination performance of the second discriminator 6 is worse than that of the first discriminator 5. Otherwise, the control unit 11 may determine that the determination performance of the second discriminator 6 has not deteriorated compared to the first discriminator 5, or may determine whether it has deteriorated based on the other indicators described above.
  • When the determination performance of each discriminator (5, 6) is evaluated in this way, with emphasis on the pass/fail determinations for the contraindication data sets, the pass/fail determinations by each discriminator (5, 6) for the evaluation data sets 121 other than the contraindication data sets may be omitted in steps S202 and S203. This can reduce the calculation cost of the processing of steps S202 to S204 and the processing load on the control unit 11 (CPU).
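The comparison performed in step S204 might be condensed as follows; treating a contraindication error unique to the second discriminator as decisive before falling back to the (possibly weighted) correct answer rates is one possible reading of the rules above, and the function name and parameters are assumptions.

```python
def second_deteriorated(rate_first, rate_second,
                        contra_errors_first=0, contra_errors_second=0):
    """Return True when the second discriminator is judged worse: it misjudges
    a contraindication data set that the first handled correctly, or its
    (possibly weighted) correct answer rate is lower than the first's."""
    if contra_errors_second > 0 and contra_errors_first == 0:
        return True
    return rate_second < rate_first
```

Depending on the embodiment, only one of the two criteria might be used.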
  • As described above, the control unit 11 can determine whether the determination performance of the second discriminator 6 is worse than that of the first discriminator 5. When the determination is completed, the control unit 11 advances the processing to the next step S205.
  • (Step S205) In step S205, the control unit 11 outputs the result of the determination made in step S204 as to whether the determination performance of the second discriminator 6 is worse than that of the first discriminator 5.
  • the output format of the determined result is not particularly limited, and may be appropriately selected according to the embodiment.
  • For example, the control unit 11 may output the result of determining whether the determination performance of the second discriminator 6 is worse than that of the first discriminator 5 to the output device 15 as it is. In this case, when the determination performance of the second discriminator 6 is worse than that of the first discriminator 5, the control unit 11 may output, from the output device 15, a warning prompting the user of the inspection device 3 not to use the second discriminator 6.
  • Further, the control unit 11 may execute predetermined information processing according to the determined result. As an example, assume a first case in which the inspection system 100 according to the present embodiment is configured such that the second discriminator 6 (second learning result data 229) is distributed from the learning device 2 to the inspection device 3. In this first case, when it is determined that the determination performance of the second discriminator 6 is worse than that of the first discriminator 5, the control unit 11 may transmit, to the learning device 2, a command prohibiting the distribution of the second discriminator 6 to the inspection device 3 as the output process of step S205. On the other hand, when it is determined that the determination performance of the second discriminator 6 has not deteriorated compared to the first discriminator 5, the control unit 11 may transmit, to the learning device 2, a command permitting the distribution of the second discriminator 6 to the inspection device 3.
  • That is, the control unit 11 may distribute, to the learning device 2, the result of determining whether the determination performance of the second discriminator 6 is worse than that of the first discriminator 5, and let the learning device 2 determine whether to distribute the second discriminator 6 to the inspection device 3. Thereby, the second discriminator 6 whose determination performance has deteriorated can be prevented from being distributed to the inspection device 3, and hence from being used in the inspection device 3.
  • Next, assume a second case in which the inspection system 100 according to the present embodiment is configured such that the inspection device 3 already holds the second discriminator 6 (second learning result data 229).
  • In this second case, when it is determined that the determination performance of the second discriminator 6 is worse than that of the first discriminator 5, the control unit 11 may transmit, to the inspection device 3, a command prohibiting the use of the second discriminator 6 as the output process of step S205. On the other hand, when it is determined that the determination performance of the second discriminator 6 has not deteriorated compared to the first discriminator 5, the control unit 11 may transmit, to the inspection device 3, a command permitting the use of the second discriminator 6.
  • That is, the control unit 11 may distribute, to the inspection device 3, the result of determining whether the determination performance of the second discriminator 6 is worse than that of the first discriminator 5, and let the inspection device 3 determine whether to use the second discriminator 6. Thereby, the second discriminator 6 whose determination performance has deteriorated can be prevented from being used in the inspection device 3.
  • In addition, assume a third case in which the inspection system 100 is configured such that the second discriminator 6 (second learning result data 229) is distributed from the learning device 2 to the inspection device 3 via the discriminator evaluation device 1. In this third case, when it is determined that the determination performance of the second discriminator 6 is worse than that of the first discriminator 5, the control unit 11 may, as the output process of step S205, prohibit the transfer of the second discriminator 6 received from the learning device 2 to the inspection device 3 and transmit, to the inspection device 3, a command to use the first discriminator 5. On the other hand, when it is determined that the determination performance of the second discriminator 6 has not deteriorated compared to the first discriminator 5, the control unit 11 may, as the output process of step S205, transfer the second discriminator 6 received from the learning device 2 to the inspection device 3.
  • control unit 11 ends the processing according to this operation example.
  • The control unit 11 may execute the series of processes in steps S201 to S205 each time the second discriminator 6 is constructed in the learning device 2. Thereby, the determination performance of the second discriminator 6 constructed in the learning device 2 can be evaluated each time it is constructed.
  • FIG. 10 is a flowchart illustrating an example of a processing procedure of the inspection apparatus 3.
  • the processing procedure described below is merely an example, and each processing may be changed as much as possible. Further, in the processing procedure described below, steps can be omitted, replaced, and added as appropriate according to the embodiment.
  • (Step S301) In step S301, the control unit 31 operates as the target data acquisition unit 311 and acquires target image data 321 showing the product R to be inspected.
  • In the present embodiment, the inspection apparatus 3 is connected to the camera 41 via the external interface 34. Therefore, the control unit 31 acquires the target image data 321 from the camera 41.
  • the target image data 321 may be moving image data or still image data.
  • the control unit 31 advances the processing to the next step S302.
  • However, the route for acquiring the target image data 321 is not limited to this example and may be selected appropriately according to the embodiment. For example, another information processing apparatus different from the inspection apparatus 3 may be connected to the camera 41. In this case, the control unit 31 may acquire the target image data 321 by receiving it from the other information processing apparatus.
  • (Step S302) In step S302, the control unit 31 operates as the quality determination unit 312 and determines the quality of the product R shown in the target image data 321 using the first discriminator 5 or the second discriminator 6. Which discriminator is used for the pass/fail determination is decided based on the result of the performance evaluation in step S204.
  • When it is determined in step S204 that the determination performance of the second discriminator 6 is worse than that of the first discriminator 5, the control unit 31 determines the quality of the product R shown in the target image data 321 using the first discriminator 5. The control unit 31 can identify that the discriminator to be used for determining the quality of the product R is the first discriminator 5 from the output process of step S205 or by accessing the discriminator evaluation apparatus 1.
  • the control unit 31 sets the learned first discriminator 5 with reference to the first learning result data 224.
  • control unit 31 acquires the output value from the first discriminator 5 by inputting the acquired target image data 321 to the first discriminator 5 and executing the arithmetic processing of the first discriminator 5. .
  • the calculation process of the first discriminator 5 may be executed in the same manner as in steps S202 and S203.
  • Then, the control unit 31 determines the quality of the product R shown in the target image data 321 based on the output value acquired from the first discriminator 5. The quality determination may be performed in the same manner as in steps S202 and S203. That is, when the output value obtained from the first discriminator 5 indicates the quality of the product itself or its probability, the control unit 31 can determine the quality of the product R shown in the target image data 321 by comparing the output value obtained from the first discriminator 5 with a threshold value. When the output value obtained from the first discriminator 5 indicates a quality index of the product, the inspection device 3 may hold, in the storage unit 32, reference information, such as in a table format, in which the output values obtained from the first discriminator 5 are associated with the quality of the product or the types of defects. In this case, by referring to the reference information, the control unit 31 can determine the quality of the product R shown in the target image data 321 according to the output value obtained from the first discriminator 5.
  • On the other hand, when it is determined in step S204 that the determination performance of the second discriminator 6 has not deteriorated compared to the first discriminator 5, the control unit 31 determines the quality of the product R shown in the target image data 321 using the second discriminator 6. The control unit 31 can identify that the discriminator to be used for determining the quality of the product R is the second discriminator 6 from the output process of step S205 or by accessing the discriminator evaluation apparatus 1.
  • That is, the control unit 31 acquires an output value from the second discriminator 6 by setting the learned second discriminator 6 with reference to the second learning result data 229 and executing its arithmetic processing.
  • control unit 31 determines the quality of the product R shown in the target image data 321 based on the output value acquired from the second discriminator 6.
  • the calculation process and pass / fail determination of the second discriminator 6 may be executed in the same manner as steps S202 and S203.
  • As described above, the control unit 31 can determine the quality of the product R shown in the target image data 321 using the first discriminator 5 or the second discriminator 6 based on the result of the evaluation in step S204. When the determination of the quality of the product R is completed, the control unit 31 advances the processing to the next step S303.
  • Note that the learning result data incorporated in the inspection apparatus 3 may be selected based on the result of the evaluation in step S204. Specifically, when it is determined that the determination performance of the second discriminator 6 is worse than that of the first discriminator 5, the first learning result data 224 may be incorporated in the inspection apparatus 3 in advance. On the other hand, when it is determined that the determination performance of the second discriminator 6 has not deteriorated compared to the first discriminator 5, the second learning result data 229 may be incorporated in the inspection device 3 in advance. Thereby, in this step S302, the control unit 31 can utilize the discriminator corresponding to the result of the evaluation in step S204 by referring to the learning result data incorporated in advance.
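The selection performed in step S302 can be sketched as a simple fallback; the callables standing in for the two learned discriminators and the function name are placeholders, not elements of the embodiment.

```python
def select_discriminator(second_has_deteriorated, first, second):
    """Use the second discriminator only when its determination performance has
    not deteriorated compared to the first; otherwise fall back to the first."""
    return first if second_has_deteriorated else second

# placeholder callables standing in for the learned discriminators 5 and 6
first = lambda image: "first-result"
second = lambda image: "second-result"

# evaluation in step S204 found the second discriminator deteriorated
chosen = select_discriminator(True, first, second)
```

The inspection device would then apply `chosen` to each piece of target image data 321.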
  • (Step S303) In step S303, the control unit 31 operates as the output unit 313 and outputs the result of determining the quality of the product R in step S302.
  • The output format of the result of determining the quality of the product R is not particularly limited and may be selected appropriately according to the embodiment.
  • For example, the control unit 31 may output the determination result for the product R to the output device 36 as it is. In addition, if it is determined in step S302 that the product R has a defect, the control unit 31 may output a warning notifying that a defect has been found as the output process of step S303. Further, when the inspection apparatus 3 is connected to a production line that transports products, and it is determined that the product R has a defect, the control unit 31 may perform, as the output process of step S303, a process of transmitting to the production line a command to transport the defective product R along a route different from that of products without defects.
  • control unit 31 ends the process according to this operation example.
  • control unit 31 may execute a series of processes in steps S301 to S303 each time the product R conveyed on the production line enters the imaging range of the camera 41. Thereby, the inspection apparatus 3 can perform the appearance inspection of the product R conveyed on the production line.
  • the learning device 2 constructs the first discriminator 5 by executing machine learning using the first learning data 221 through a series of processes in steps S101 to S103.
  • the learning device 2 constructs the second discriminator 6 by executing machine learning using the second learning data 226 including the first learning data 221 and the additional image data 227.
  • Further, by preparing unknown cases not included in the first learning data 221 as the additional image data 227, the learning device 2 can construct a second discriminator 6 that can cope with cases in which the first discriminator 5 cannot determine whether the product R is good or bad. Therefore, the inspection device 3 can determine the quality of the product R by using the second discriminator 6 even for cases for which the first discriminator 5 cannot determine the quality.
  • Then, the discriminator evaluation apparatus 1 uses the plurality of evaluation data sets 121 in the processing of steps S202 to S204 to monitor whether the determination performance of the second discriminator 6 has deteriorated compared to the first discriminator 5. Thereby, when the performance of the second discriminator 6 constructed by re-learning or additional learning has deteriorated compared to the first discriminator 5, the second discriminator 6 with deteriorated performance can be prevented from being used in the inspection apparatus 3. Therefore, according to the present embodiment, it is possible to prevent the reliability of the quality determination of the product R from being impaired.
  • each discriminator (5, 6) is configured by a fully connected neural network having a multilayer structure.
  • the configuration of each discriminator (5, 6) may not be limited to such an example, and may be appropriately selected according to the embodiment.
  • each discriminator (5, 6) may be configured by a convolutional neural network, a recursive neural network, or the like.
  • a neural network is adopted as a learning model for each classifier (5, 6).
  • the learning model of each discriminator (5, 6) may not be limited to such an example, and may be appropriately selected according to the embodiment.
  • as the learning model of each discriminator (5, 6), for example, a support vector machine, a self-organizing map, or a learning model that performs machine learning by reinforcement learning may be employed.
  • each correct answer data (223, 228) may be omitted in each learning data (221, 226).
  • each learning result data (224, 229) includes information indicating the configuration of the neural network.
  • the configuration of each learning result data (224, 229) need not be limited to such an example, and may be determined appropriately according to the embodiment as long as the learning result data (224, 229) can be used to set each learned classifier (5, 6). For example, when the configuration of the neural network to be used is shared by each device, each learning result data (224, 229) may not include information indicating the configuration of the neural network.
  • FIG. 11 schematically illustrates an example of the software configuration of the discriminator evaluation apparatus 1A according to this modification.
  • FIGS. 12 and 13 schematically illustrate an example of the hardware configuration and software configuration of the identification device 3A according to the present modification.
  • the identification system according to this modification is configured by a classifier evaluation device 1A, the learning device 2, and the identification device 3A.
  • the identification system according to this modification may be configured in the same manner as the inspection system 100, except that the data to be processed is changed from image data showing a product to image data showing a subject.
  • the discriminator evaluation device 1A has the same hardware configuration as that of the discriminator evaluation device 1.
  • the storage unit 12 of the classifier evaluation apparatus 1A stores various information such as the first learning result data 224A and the second learning result data 229A.
  • the first learning result data 224A is data for setting the learned first discriminator 5A.
  • the second learning result data 229A is data for setting the learned second discriminator 6A.
  • the first discriminator 5A is constructed by machine learning using first learning data composed of image data for performing learning for identifying the state of the subject.
  • the second discriminator 6A is constructed by machine learning using second learning data composed of the first learning data and additional image data for performing learning for identifying the state of the subject.
  • the learning model of each discriminator (5A, 6A) is configured by a neural network as in the above embodiment. Machine learning of each discriminator (5A, 6A) may be performed in the same manner as in the above embodiment.
  • the state of the subject and the subject to be identified need not be particularly limited, and may be appropriately selected according to the embodiment.
  • the subject may be, for example, the subject's face, the subject's body, the work target work, or the like.
  • the state to be identified may be, for example, the type of facial expression, the state of the facial parts, the individual who owns the face, and the like. Identification of the person who owns the face may be performed to perform face authentication.
  • the state to be identified may be, for example, a body pose.
  • the state to be identified may be, for example, the position and posture of the work.
  • the discriminator evaluation device 1A, like the discriminator evaluation device 1, operates as a computer including the evaluation data acquisition unit 111, the first evaluation unit 112, the second evaluation unit 113, the performance determination unit 114, and the output unit 115 as software modules.
  • the evaluation data acquisition unit 111 acquires a plurality of evaluation data sets 121A, each configured by a combination of evaluation image data 122A and correct answer data 123A indicating the correct answer to the identification of the state of the subject in the evaluation image data 122A.
  • the content of the correct answer data 123A may be set in the same manner as the correct answer data 123.
  • for each evaluation data set 121A, the first evaluation unit 112 inputs the evaluation image data 122A to the first discriminator 5A, and collates the result of determining the state of the subject shown in the evaluation image data 122A, based on the output value obtained from the first discriminator 5A, with the correct answer indicated by the correct answer data 123A. Thereby, the first evaluation unit 112 evaluates the determination performance of the first discriminator 5A.
  • for each evaluation data set 121A, the second evaluation unit 113 inputs the evaluation image data 122A to the second discriminator 6A, and collates the result of determining the state of the subject shown in the evaluation image data 122A, based on the output value obtained from the second discriminator 6A, with the correct answer indicated by the correct answer data 123A. Thereby, the second evaluation unit 113 evaluates the determination performance of the second discriminator 6A.
  • the performance determination unit 114 determines, based on the evaluation results for the first discriminator 5A and the second discriminator 6A, whether or not the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A.
  • the output unit 115 outputs a result of determining whether the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A.
  • the identification device 3A has a hardware configuration similar to that of the inspection device 3.
  • the storage unit 32 of the identification device 3A stores various types of information such as an identification program 83A, first learning result data 224A, and second learning result data 229A.
  • the identification program 83A is a program for causing the identification device 3A to perform information processing for determining the state of the subject by the same processing procedure as the inspection device 3, and includes a series of instructions for the information processing.
  • the identification device 3A is connected to the camera 41 via the external interface 34 in the same manner as the inspection device 3.
  • the camera 41 is appropriately arranged at a place where the subject whose state is to be determined can be taken.
  • the camera 41 may be disposed at a place where the person serving as the subject may be present.
  • the camera 41 may be arranged toward a place where the work can exist.
  • the identification device 3A operates as a computer including the target data acquisition unit 311, the state determination unit 312A, and the output unit 313 as software modules by the control unit 31 executing the identification program 83A.
  • the target data acquisition unit 311 acquires target image data 321A in which a subject to be identified is captured.
  • when it is determined that the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A, the state determination unit 312A determines the state of the subject shown in the target image data 321A using the first discriminator 5A.
  • on the other hand, when it is determined that the determination performance of the second discriminator 6A has not deteriorated compared to the first discriminator 5A, the state determination unit 312A determines the state of the subject shown in the target image data 321A using the second discriminator 6A.
  • the output unit 313 outputs the result of determining the state of the subject.
  • the identification system according to the present modification operates according to a processing procedure substantially similar to that of the inspection system 100.
  • the control unit 21 of the learning device 2 constructs the first discriminator 5A through the processing of steps S101 to S103 by executing machine learning using first learning data composed of image data for performing learning for identifying the state of the subject, and stores information indicating the configuration and the like of the constructed first discriminator 5A in the storage unit 22 as first learning result data 224A.
  • similarly, the control unit 21 of the learning device 2 constructs the second discriminator 6A through the processing of steps S101 to S103 by executing machine learning using second learning data composed of the first learning data and additional image data for performing learning for identifying the state of the subject, and stores information indicating the configuration and the like of the constructed second discriminator 6A in the storage unit 22 as second learning result data 229A.
  • in step S201, the control unit 11 of the discriminator evaluation apparatus 1A operates as the evaluation data acquisition unit 111, and acquires a plurality of evaluation data sets 121A, each configured by a combination of evaluation image data 122A and correct answer data 123A indicating the correct answer to the identification of the state of the subject in the evaluation image data 122A.
  • each evaluation data set 121A may be created by the same method as the evaluation data set 121.
  • in step S202, the control unit 11 operates as the first evaluation unit 112 and sets the learned first discriminator 5A with reference to the first learning result data 224A. Subsequently, for each evaluation data set 121A, the control unit 11 inputs the evaluation image data 122A to the first discriminator 5A and executes the arithmetic processing of the first discriminator 5A, thereby acquiring an output value from the first discriminator 5A. Next, based on the output value obtained from the first discriminator 5A, the control unit 11 collates the result of determining the state of the subject shown in the evaluation image data 122A with the correct answer indicated by the correct answer data 123A. The collation between the determination result and the correct answer may be performed by the same method as in the above embodiment. Thereby, the control unit 11 evaluates the determination performance of the first discriminator 5A.
  • for example, the control unit 11 may evaluate the determination performance of the first discriminator 5A by calculating, over the plurality of evaluation data sets 121A, the ratio (correct answer rate) at which the result of determining the state of the subject shown in the evaluation image data 122A based on the output value obtained from the first discriminator 5A matches the correct answer indicated by the correct answer data 123A. That is, the control unit 11 may calculate the correct answer rate of the subject-state determination by the first discriminator 5A over the plurality of evaluation data sets 121A as the evaluation result of the determination performance of the first discriminator 5A.
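The plain correct answer rate just described can be sketched as follows. The names (`discriminator`, `eval_sets`) and the pairing of each evaluation datum with its correct answer are illustrative assumptions, not identifiers from this disclosure.

```python
# Minimal sketch of the correct answer rate: the fraction of evaluation data
# sets for which the discriminator's determination matches the correct answer.

def correct_answer_rate(discriminator, eval_sets):
    hits = sum(1 for data, answer in eval_sets if discriminator(data) == answer)
    return hits / len(eval_sets)
```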
  • a weight indicating the degree of contribution to the determination performance may be set in each evaluation data set 121A.
  • the control unit 11 may evaluate the determination performance of the first discriminator 5A according to the weight set in each evaluation data set 121A.
  • for example, the control unit 11 may calculate, as the evaluation result of the determination performance of the first discriminator 5A, the quotient (correct answer rate including weight) obtained by dividing the sum of the weights set in the evaluation data sets 121A for which the determination by the first discriminator 5A is correct by the sum of the weights set in all the evaluation data sets 121A.
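The weighted variant can be sketched in the same way; here each evaluation data set carries a weight indicating its degree of contribution to the determination performance. The names and the triple layout of `weighted_eval_sets` are assumptions for illustration.

```python
# Sketch of the "correct answer rate including weight": the sum of the
# weights of correctly judged evaluation data sets divided by the sum of
# all weights. Names and data layout are illustrative assumptions.

def weighted_correct_answer_rate(discriminator, weighted_eval_sets):
    total = sum(weight for _, _, weight in weighted_eval_sets)
    correct = sum(weight for data, answer, weight in weighted_eval_sets
                  if discriminator(data) == answer)
    return correct / total
```

With every weight equal, this reduces to the plain correct answer rate.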
  • the plurality of evaluation data sets 121A may include contraindication data sets that are set so that the determination of the state of the subject in the evaluation image data 122A should not be erroneous.
  • in this case, the control unit 11 may evaluate the determination performance of the first discriminator 5A based on whether or not the determination of the state of the subject by the first discriminator 5A is incorrect for any contraindication data set, or based on the number of contraindication data sets for which the determination of the state of the subject by the first discriminator 5A is incorrect.
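The contraindication check above can be sketched by counting how many must-not-miss evaluation data sets the discriminator gets wrong. The names are illustrative assumptions, not identifiers from this disclosure.

```python
# Sketch of evaluating against contraindication data sets, i.e. evaluation
# data sets whose determination should never be erroneous.

def contraindication_misses(discriminator, contraindication_sets):
    """Number of contraindication data sets judged incorrectly."""
    return sum(1 for data, answer in contraindication_sets
               if discriminator(data) != answer)
```

A single miss (a count greater than zero) may already be treated as disqualifying, or the count itself may be compared between the two discriminators.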
  • in step S203, the control unit 11 operates as the second evaluation unit 113 and sets the learned second discriminator 6A with reference to the second learning result data 229A. Subsequently, for each evaluation data set 121A, the control unit 11 inputs the evaluation image data 122A to the second discriminator 6A and executes the arithmetic processing of the second discriminator 6A, thereby acquiring an output value from the second discriminator 6A. Next, based on the output value obtained from the second discriminator 6A, the control unit 11 collates the result of determining the state of the subject shown in the evaluation image data 122A with the correct answer indicated by the correct answer data 123A. The collation between the determination result and the correct answer may be performed by the same method as in the above embodiment. Thereby, the control unit 11 evaluates the determination performance of the second discriminator 6A.
  • for example, the control unit 11 may evaluate the determination performance of the second discriminator 6A by calculating, over the plurality of evaluation data sets 121A, the ratio (correct answer rate) at which the result of determining the state of the subject shown in the evaluation image data 122A based on the output value obtained from the second discriminator 6A matches the correct answer indicated by the correct answer data 123A.
  • in addition, the control unit 11 may calculate, as the evaluation result of the determination performance of the second discriminator 6A, the quotient (correct answer rate including weight) obtained by dividing the sum of the weights set in the evaluation data sets 121A for which the determination by the second discriminator 6A is correct by the sum of the weights set in all the evaluation data sets 121A. Further, the control unit 11 may evaluate the determination performance of the second discriminator 6A based on whether or not the determination of the state of the subject by the second discriminator 6A is incorrect for any contraindication data set, or based on the number of contraindication data sets for which the determination of the state of the subject by the second discriminator 6A is incorrect.
  • in step S204, the control unit 11 operates as the performance determination unit 114, and determines, based on the evaluation results for the first discriminator 5A and the second discriminator 6A, whether or not the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A.
  • the comparison between the determination performance of the first discriminator 5A and the determination performance of the second discriminator 6A may be performed by the same method as in the above embodiment.
  • for example, when the correct answer rate of the second discriminator 6A is lower than the correct answer rate of the first discriminator 5A, the control unit 11 may determine that the determination performance of the second discriminator 6A has deteriorated compared to the first discriminator 5A. On the other hand, when the correct answer rate of the second discriminator 6A is not lower than the correct answer rate of the first discriminator 5A, the control unit 11 may determine that the determination performance of the second discriminator 6A is not worse than that of the first discriminator 5A.
  • similarly, when the correct answer rate including weight of the second discriminator 6A is lower than the correct answer rate including weight of the first discriminator 5A, the control unit 11 may determine that the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A. On the other hand, when the correct answer rate including weight of the second discriminator 6A is not lower than the correct answer rate including weight of the first discriminator 5A, the control unit 11 may determine that the determination performance of the second discriminator 6A has not deteriorated compared to the first discriminator 5A.
  • also, when the number of contraindication data sets for which the determination of the state of the subject by the second discriminator 6A is incorrect is larger than that for the first discriminator 5A, the control unit 11 may determine that the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A.
  • furthermore, when the second discriminator 6A erroneously determines the state of the subject for a contraindication data set, the control unit 11 may determine that the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A. On the other hand, if this is not the case, the control unit 11 may determine that the determination performance of the second discriminator 6A has not deteriorated compared to the first discriminator 5A, or may determine, based on other indicators, whether or not the determination performance of the second discriminator 6A has deteriorated compared to the first discriminator 5A.
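One possible combination of the step S204 criteria can be sketched as follows. Which criteria are used, and in what order, is a design choice left open by the description; the function below is an illustrative assumption, not the method fixed by this disclosure.

```python
# Hypothetical combination of the step S204 criteria: the second discriminator
# is treated as deteriorated if its (possibly weighted) correct answer rate is
# lower, or if it misjudges more contraindication data sets than the first.

def performance_deteriorated(rate_first, rate_second,
                             misses_first=0, misses_second=0):
    if rate_second < rate_first:
        return True      # lower correct answer rate (plain or weighted)
    if misses_second > misses_first:
        return True      # more contraindication data sets judged incorrectly
    return False
```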
  • in step S205, the control unit 11 operates as the output unit 115, and outputs the result of determining whether or not the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A.
  • the output of the determined result may be performed in the same manner as in the above embodiment.
  • the control unit 11 may output the result of determining whether or not the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A to the output device 15 as it is.
  • for example, according to the result of determining whether or not the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A, the control unit 11 may transmit to the learning device 2 a command for prohibiting or permitting distribution of the second discriminator 6A to the identification device 3A.
  • alternatively, the control unit 11 may distribute the result of determining whether or not the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A to the learning device 2, and let the learning device 2 determine whether to distribute the second discriminator 6A to the identification device 3A.
  • in addition, according to the result of determining whether or not the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A, the control unit 11 may transmit to the identification device 3A a command for prohibiting or permitting the use of the second discriminator 6A.
  • alternatively, the control unit 11 may distribute the result of determining whether or not the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A to the identification device 3A, and let the identification device 3A determine whether or not to use the second discriminator 6A.
  • for example, when it is determined that the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A, the control unit 11 may prohibit transfer of the second discriminator 6A received from the learning device 2 to the identification device 3A, and may transmit to the identification device 3A a command for using the first discriminator 5A.
  • on the other hand, when it is determined that the determination performance of the second discriminator 6A has not deteriorated, the control unit 11 may transfer the second discriminator 6A received from the learning device 2 to the identification device 3A.
  • in step S301, the control unit 31 of the identification device 3A operates as the target data acquisition unit 311, and acquires target image data 321A in which a subject to be identified is captured.
  • in step S302, the control unit 31 operates as the state determination unit 312A, and determines the state of the subject shown in the target image data 321A using the first discriminator 5A or the second discriminator 6A.
  • the discriminator to be used is determined based on the performance evaluation result in step S204. That is, when it is determined that the determination performance of the second discriminator 6A is worse than that of the first discriminator 5A, the control unit 31 determines the state of the subject shown in the target image data 321A using the first discriminator 5A. On the other hand, when it is determined that the determination performance of the second discriminator 6A has not deteriorated compared to the first discriminator 5A, the control unit 31 determines the state of the subject shown in the target image data 321A using the second discriminator 6A.
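The selection in step S302 can be sketched as follows; the names and the boolean flag are illustrative assumptions standing in for the evaluation result of step S204.

```python
# Sketch of step S302: fall back to the first discriminator whenever the
# second one was judged to have deteriorated in step S204.

def determine_state(target_image, first_clf, second_clf, second_deteriorated):
    classifier = first_clf if second_deteriorated else second_clf
    return classifier(target_image)
```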
  • the control unit 31 sets the learned first discriminator 5A with reference to the first learning result data 224A. Subsequently, the control unit 31 inputs the acquired target image data 321A to the first discriminator 5A and executes the arithmetic processing of the first discriminator 5A, thereby acquiring an output value from the first discriminator 5A. Then, the control unit 31 determines the state of the subject shown in the target image data 321A based on the output value acquired from the first discriminator 5A. The determination of the state of the subject may be performed similarly to the above embodiment.
  • similarly, the control unit 31 sets the learned second discriminator 6A with reference to the second learning result data 229A. Subsequently, the control unit 31 inputs the acquired target image data 321A to the second discriminator 6A and executes the arithmetic processing of the second discriminator 6A, thereby acquiring an output value from the second discriminator 6A. Then, the control unit 31 determines the state of the subject shown in the target image data 321A based on the output value acquired from the second discriminator 6A.
  • in step S303, the control unit 31 operates as the output unit 313 and outputs the result of determining the state of the subject.
  • the output format of the result of determining the state of the subject is not particularly limited, and may be appropriately selected according to the embodiment.
  • the control unit 31 may output the result of determining the state of the subject to the output device 36 as it is.
  • the control unit 31 may execute a predetermined output process according to the state of the subject.
  • for example, depending on the determined state of the subject, the control unit 31 may perform transmission of an e-mail notifying this state as the output process in step S303.
  • FIG. 14 schematically illustrates an example of the software configuration of the classifier evaluation apparatus 1B according to the present modification.
  • FIG. 15 schematically illustrates an example of the software configuration of the learning device 2B according to the present modification.
  • FIGS. 16 and 17 schematically illustrate an example of the hardware configuration and software configuration of the identification device 3B according to this modification.
  • the identification system according to this modification includes a classifier evaluation device 1B, a learning device 2B, and a classification device 3B.
  • the identification system according to this modification may be configured in the same manner as the inspection system 100, except that the data to be processed is changed from image data showing a product to another type of data containing some feature.
  • the discriminator evaluation device 1B has the same hardware configuration as the discriminator evaluation device 1 described above.
  • the storage unit 12 of the classifier evaluation apparatus 1B stores various information such as the first learning result data 224B and the second learning result data 229B.
  • the first learning result data 224B is data for setting the learned first discriminator 5B.
  • the second learning result data 229B is data for setting the learned second discriminator 6B.
  • the data to be processed may be any kind of data that can be analyzed by the classifier.
  • similarly, the feature identified from the target data may be any feature that can be identified from the data.
  • when the target data is sound data, the identified feature may be, for example, whether or not a specific sound (for example, mechanical noise) is included.
  • when the target data is numerical data or text data related to biological data such as activity amount data, the identified feature may be, for example, the state of the subject (for example, whether or not the subject is healthy).
  • when the target data is numerical data or text data such as the driving amount of a machine, the identified feature may be, for example, the state of the machine (for example, whether or not the machine is in a predetermined state).
  • the discriminator evaluation apparatus 1B, like the discriminator evaluation apparatus 1, operates as a computer including the evaluation data acquisition unit 111, the first evaluation unit 112, the second evaluation unit 113, the performance determination unit 114, and the output unit 115 as software modules.
  • the evaluation data acquisition unit 111 acquires a plurality of evaluation data sets 121B each composed of a combination of evaluation data 122B and correct data 123B indicating correct answers for identification of features included in the evaluation data 122B.
  • the content of the correct answer data 123B may be appropriately set according to the evaluation data 122B and the features to be identified.
  • for each evaluation data set 121B, the first evaluation unit 112 inputs the evaluation data 122B to the first discriminator 5B, and collates the result of determining the feature included in the evaluation data 122B, based on the output value obtained from the first discriminator 5B, with the correct answer indicated by the correct answer data 123B. Thereby, the first evaluation unit 112 evaluates the determination performance of the first discriminator 5B.
  • for each evaluation data set 121B, the second evaluation unit 113 inputs the evaluation data 122B to the second discriminator 6B, and collates the result of determining the feature included in the evaluation data 122B, based on the output value obtained from the second discriminator 6B, with the correct answer indicated by the correct answer data 123B. Thereby, the second evaluation unit 113 evaluates the determination performance of the second discriminator 6B.
  • the performance determination unit 114 determines, based on the evaluation results for the first discriminator 5B and the second discriminator 6B, whether or not the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B.
  • the output unit 115 outputs a result of determining whether or not the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B.
  • the learning device 2B has the same hardware configuration as the learning device 2 described above. As illustrated in FIG. 15, the learning device 2B operates as a computer including the learning data acquisition unit 211 and the learning processing unit 212 as software modules, similarly to the learning device 2.
  • the learning data acquisition unit 211 acquires first learning data 221B composed of data 222B for learning for identifying features.
  • the learning processing unit 212 performs the machine learning using the acquired first learning data 221B, thereby constructing the learned first discriminator 5B that has acquired the ability to determine the feature included in the target data.
  • the learning data acquisition unit 211 acquires the second learning data 226B configured by the first learning data 221B and additional data 227B for performing learning for identifying features.
  • the learning processing unit 212 performs machine learning using the acquired second learning data 226B, thereby constructing the learned second discriminator 6B that has acquired the ability to determine the feature included in the target data.
  • the learning model of each discriminator is configured by a neural network, as in the above embodiment. Therefore, correct data 223B indicating the correct answer to the identification of the feature included in the data 222B is given to the data 222B. Similarly, correct data 228B indicating the correct answer to the feature identification included in the additional data 227B is given to the additional data 227B.
  • when the learning processing unit 212 inputs the data 222B to the input layer 51 for each data set included in the first learning data 221B, machine learning of the first discriminator 5B is performed so that an output value corresponding to the correct answer indicated by the correct answer data 223B associated with the input data 222B is output from the output layer 53. Thereby, the learning processing unit 212 constructs the learned first discriminator 5B that has acquired the ability to determine the feature included in the target data. Then, the learning processing unit 212 stores information indicating the configuration of the learned first discriminator 5B, the weights of the connections between the neurons, and the threshold value of each neuron in the storage unit 22 as the first learning result data 224B.
  • similarly, when the learning processing unit 212 inputs the data (222B, 227B) to the input layer 61 for each data set included in the second learning data 226B, machine learning of the second discriminator 6B is performed so that an output value corresponding to the correct answer indicated by the correct answer data (223B, 228B) associated with the input data (222B, 227B) is output from the output layer 63.
  • the learning processing unit 212 constructs the learned second discriminator 6B that has acquired the ability to determine the characteristics included in the target data.
  • the learning processing unit 212 stores, in the storage unit 22, information indicating the configuration of the learned second discriminator 6B, the weight of connection between the neurons, and the threshold value of each neuron as the second learning result data 229B.
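The supervised training just described, in which a model is adjusted so that its output matches the correct answer data for each data set, can be illustrated with a deliberately simplified stand-in: a single artificial neuron trained with the perceptron rule. The multilayer networks 5B and 6B of this disclosure would instead be trained with backpropagation; all names below are illustrative assumptions, and the stored result (weights plus a threshold) loosely mirrors the learning result data described above.

```python
# Much-simplified stand-in for the machine learning described above: a single
# neuron whose connection weights and threshold are adjusted so that its
# output matches the correct answer for each data set.

def train(data_sets, epochs=20, lr=1.0):
    """data_sets: list of (features, correct_answer) pairs, answers in {0, 1}."""
    n = len(data_sets[0][0])
    weights, threshold = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, answer in data_sets:
            output = 1 if sum(w * x for w, x in zip(weights, features)) > threshold else 0
            error = answer - output
            # Perceptron rule: nudge weights and threshold toward the correct answer.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            threshold -= lr * error
    return weights, threshold

def predict(model, features):
    weights, threshold = model
    return 1 if sum(w * x for w, x in zip(weights, features)) > threshold else 0
```

The returned `(weights, threshold)` pair plays the role of the learning result data: it is all that is needed to set up the learned model again later.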
  • the identification device 3B has the same hardware configuration as that of the inspection device 3.
  • the storage unit 32 of the identification device 3B stores various information such as the identification program 83B, the first learning result data 224B, and the second learning result data 229B.
  • the identification program 83B is a program for causing the identification device 3B to execute information processing for determining the feature included in the target data by the same processing procedure as the inspection device 3, and includes a series of instructions for the information processing.
  • the identification device 3B is connected to the measurement device 41B via the external interface 34.
  • the measuring device 41B is appropriately configured so as to be able to acquire target data.
  • the type of the measuring device 41B may be appropriately determined according to the data to be processed.
  • the measurement device 41B is, for example, a microphone.
  • the measurement device 41B is a device configured to be able to measure biological information, such as an activity meter and a blood pressure monitor.
  • the measuring device 41B is a device configured to be able to measure a target physical quantity such as an encoder, for example.
  • the arrangement of the measurement device 41B may be appropriately determined according to the embodiment.
  • the identification device 3B operates as a computer including the target data acquisition unit 311, the feature determination unit 312B, and the output unit 313 as software modules by the control unit 31 executing the identification program 83B.
  • the target data acquisition unit 311 acquires target data 321B including a feature to be identified.
  • when it is determined that the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B, the feature determination unit 312B determines the features included in the target data 321B using the first discriminator 5B.
  • when it is determined that the determination performance of the second discriminator 6B has not deteriorated compared to the first discriminator 5B, the feature determination unit 312B determines the features included in the target data 321B using the second discriminator 6B.
  • the output unit 313 outputs the result of determining the features included in the target data 321B.
  • the identification system according to the present modification operates according to a processing procedure substantially similar to that of the inspection system 100.
  • in steps S101 and S102, the control unit 21 of the learning device 2B constructs the first discriminator 5B by performing machine learning using the first learning data 221B, which is composed of the data 222B for learning to identify the features.
  • the control unit 21 stores, in the storage unit 22, information indicating the configuration and other parameters of the constructed first discriminator 5B as the first learning result data 224B.
  • similarly, in steps S101 and S102, the control unit 21 constructs the second discriminator 6B by performing machine learning using the second learning data 226B, which includes the first learning data 221B and the additional data 227B for learning to identify the features.
  • the control unit 21 stores, in the storage unit 22, information indicating the configuration and other parameters of the constructed second discriminator 6B as the second learning result data 229B.
  • in step S201, the control unit 11 of the discriminator evaluation apparatus 1B operates as the evaluation data acquisition unit 111 and acquires a plurality of evaluation data sets 121B, each composed of a combination of evaluation data 122B and correct data 123B indicating the correct answer to the feature identification included in the evaluation data 122B.
  • these evaluation data sets 121B may be acquired by the same method as in the above embodiment, with the camera used to acquire the evaluation image data 122 replaced by an apparatus configured to acquire the evaluation data 122B.
  • in step S202, the control unit 11 operates as the first evaluation unit 112 and sets the learned first discriminator 5B with reference to the first learning result data 224B. Subsequently, for each evaluation data set 121B, the control unit 11 inputs the evaluation data 122B to the first discriminator 5B and executes the arithmetic processing of the first discriminator 5B, thereby acquiring an output value from the first discriminator 5B. Next, based on the output value obtained from the first discriminator 5B, the control unit 11 collates the result of determining the characteristics included in the evaluation data 122B with the correct answer indicated by the correct data 123B. The collation between the determination result and the correct answer may be performed by the same method as in the above embodiment. Thereby, the control unit 11 evaluates the determination performance of the first discriminator 5B.
  • for example, the control unit 11 may evaluate the determination performance of the first discriminator 5B by calculating, over the plurality of evaluation data sets 121B, the ratio (correct answer rate) at which the determination of the characteristics included in the evaluation data 122B based on the output value obtained from the first discriminator 5B matches the correct answer indicated by the correct data 123B. That is, the control unit 11 may calculate the correct answer rate of the feature determination by the first discriminator 5B for the plurality of evaluation data sets 121B as the evaluation result of the determination performance of the first discriminator 5B.
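The correct-answer-rate evaluation described above can be pictured with a minimal Python sketch. The discriminator is modeled as a plain callable mapping data to an answer; all names and the toy data are illustrative assumptions, not taken from the source.

```python
# Minimal sketch of the correct answer rate: the fraction of evaluation data
# sets whose features the discriminator judged correctly. Names are illustrative.
def correct_answer_rate(discriminator, evaluation_sets):
    """evaluation_sets: list of (evaluation_data, correct_answer) pairs."""
    n_correct = sum(
        1 for data, answer in evaluation_sets if discriminator(data) == answer
    )
    return n_correct / len(evaluation_sets)

# Toy discriminator: labels a number "high" above 0.5, otherwise "low".
toy = lambda x: "high" if x > 0.5 else "low"
sets = [(0.9, "high"), (0.2, "low"), (0.7, "low"), (0.1, "low")]
rate = correct_answer_rate(toy, sets)  # 3 of the 4 sets are judged correctly
```

The same function evaluates either discriminator, which is what lets the two rates in step S204 be compared directly.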
  • a weight indicating the degree of contribution to the determination performance may be set in each evaluation data set 121B.
  • the control unit 11 may evaluate the determination performance of the first discriminator 5B according to the weight set in each evaluation data set 121B.
  • in this case, the control unit 11 may calculate, as the evaluation result of the determination performance of the first discriminator 5B, the quotient (correct answer rate including weight) obtained by dividing the sum of the weights set for the evaluation data sets 121B whose features the first discriminator 5B determined correctly by the sum of the weights set for all the evaluation data sets 121B.
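The correct answer rate including weight can be sketched in the same style. The triple layout and the function name below are assumptions for illustration, not from the source.

```python
# Minimal sketch of the "correct answer rate including weight": the sum of the
# weights of correctly judged evaluation data sets divided by the total weight.
def weighted_correct_rate(discriminator, weighted_sets):
    """weighted_sets: list of (evaluation_data, correct_answer, weight) triples."""
    total_weight = sum(w for _, _, w in weighted_sets)
    correct_weight = sum(
        w for data, answer, w in weighted_sets if discriminator(data) == answer
    )
    return correct_weight / total_weight

toy = lambda x: "defect" if x >= 1 else "ok"
triples = [(2, "defect", 3.0), (0, "ok", 1.0), (1, "ok", 2.0)]
wrate = weighted_correct_rate(toy, triples)  # (3.0 + 1.0) / 6.0
```

With uniform weights this reduces to the plain correct answer rate; heavier weights let a few important evaluation data sets dominate the score.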
  • the plurality of evaluation data sets 121B may include contraindication data sets for which the determination of the characteristics included in the evaluation data 122B must not be mistaken.
  • in this case, the control unit 11 may evaluate the determination performance of the first discriminator 5B based on whether the feature determination by the first discriminator 5B for any contraindication data set is incorrect, or on the number of contraindication data sets for which the feature determination by the first discriminator 5B is incorrect.
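The contraindication check admits two readings in the text: fail as soon as any contraindication set is misjudged, or fail only beyond some number of misses. A hedged Python sketch covering both follows; the `allowed_misses` parameter and all other names are illustrative assumptions.

```python
# Minimal sketch of the contraindication check: certain evaluation data sets
# must never be misjudged. allowed_misses=0 fails on any miss; a larger value
# fails only beyond that count.
def contraindication_misses(discriminator, contraindication_sets):
    """Count contraindication data sets whose features were judged incorrectly."""
    return sum(
        1 for data, answer in contraindication_sets if discriminator(data) != answer
    )

def passes_contraindication(discriminator, contraindication_sets, allowed_misses=0):
    return contraindication_misses(discriminator, contraindication_sets) <= allowed_misses

toy = lambda x: x >= 0
contra = [(1, True), (-1, False), (-2, True)]
misses = contraindication_misses(toy, contra)  # only the third set is misjudged
```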
  • in step S203, the control unit 11 operates as the second evaluation unit 113 and sets the learned second discriminator 6B with reference to the second learning result data 229B. Subsequently, for each evaluation data set 121B, the control unit 11 inputs the evaluation data 122B to the second discriminator 6B and executes the arithmetic processing of the second discriminator 6B, thereby acquiring an output value from the second discriminator 6B. Next, based on the output value obtained from the second discriminator 6B, the control unit 11 collates the result of determining the features included in the evaluation data 122B with the correct answer indicated by the correct data 123B. The collation between the determination result and the correct answer may be performed by the same method as in the above embodiment. Thereby, the control unit 11 evaluates the determination performance of the second discriminator 6B.
  • the evaluation of the determination performance of the second discriminator 6B may be performed in the same manner as the first discriminator 5B.
  • for example, the control unit 11 may evaluate the determination performance of the second discriminator 6B by calculating, over the plurality of evaluation data sets 121B, the ratio (correct answer rate) at which the determination of the characteristics included in the evaluation data 122B based on the output value obtained from the second discriminator 6B matches the correct answer indicated by the correct data 123B.
  • likewise, the control unit 11 may calculate, as the evaluation result of the determination performance of the second discriminator 6B, the quotient (correct answer rate including weight) obtained by dividing the sum of the weights set for the evaluation data sets 121B whose features the second discriminator 6B determined correctly by the sum of the weights set for all the evaluation data sets 121B. Further, the control unit 11 may evaluate the determination performance of the second discriminator 6B based on whether the feature determination by the second discriminator 6B for any contraindication data set is incorrect, or on the number of contraindication data sets for which the feature determination by the second discriminator 6B is incorrect.
  • in step S204, the control unit 11 operates as the performance determination unit 114 and determines, based on the evaluation results for the first discriminator 5B and the second discriminator 6B, whether or not the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B.
  • the comparison between the determination performance of the first discriminator 5B and the determination performance of the second discriminator 6B may be performed by the same method as in the above embodiment.
  • for example, when the correct answer rate of the second discriminator 6B is lower than the correct answer rate of the first discriminator 5B, the control unit 11 may determine that the determination performance of the second discriminator 6B has deteriorated compared to the first discriminator 5B. On the other hand, when the correct answer rate of the second discriminator 6B is not lower than the correct answer rate of the first discriminator 5B, the control unit 11 may determine that the determination performance of the second discriminator 6B has not deteriorated compared to the first discriminator 5B.
  • similarly, when the correct answer rate including weight of the second discriminator 6B is lower than that of the first discriminator 5B, the control unit 11 may determine that the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B. On the other hand, when the correct answer rate including weight of the second discriminator 6B is not lower than that of the first discriminator 5B, the control unit 11 may determine that the determination performance of the second discriminator 6B has not deteriorated compared to the first discriminator 5B.
  • further, when the feature determination by the second discriminator 6B for a contraindication data set is incorrect, the control unit 11 may determine that the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B. On the other hand, if this is not the case, the control unit 11 may determine that the determination performance of the second discriminator 6B has not deteriorated compared to the first discriminator 5B, or may determine whether or not the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B based on the other indicators described above.
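The step S204 decision can be pictured by combining the indicators above. The text does not fix a priority among them; treating a contraindication failure as decisive and checking the correct answer rate including weight before the plain rate is an assumption made for illustration, as are all names below.

```python
# Minimal sketch of the degradation decision in step S204. Any indicator may be
# used alone; the ordering here is one possible combination, not the only one.
def second_discriminator_degraded(rate_first, rate_second,
                                  weighted_first=None, weighted_second=None,
                                  contra_failed_second=False):
    if contra_failed_second:
        return True   # misjudging a contraindication data set is disqualifying
    if weighted_first is not None and weighted_second is not None:
        if weighted_second < weighted_first:
            return True   # correct answer rate including weight dropped
    return rate_second < rate_first   # plain correct answer rate dropped
```

When the function returns `True`, the system falls back to the first discriminator 5B; otherwise the retrained second discriminator 6B is adopted.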
  • in step S205, the control unit 11 operates as the output unit 115 and outputs the result of determining whether or not the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B.
  • the output of the determined result may be performed in the same manner as in the above embodiment.
  • the control unit 11 may output the result of determining whether or not the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B to the output device 15 as it is.
  • further, depending on the result of determining whether or not the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B, the control unit 11 may transmit to the learning device 2B a command prohibiting or permitting the distribution of the second discriminator 6B to the identification device 3B.
  • alternatively, the control unit 11 may distribute to the learning device 2B the result of determining whether or not the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B, and the learning device 2B may determine whether to distribute the second discriminator 6B to the identification device 3B.
  • further, according to the result of determining whether or not the determination performance of the second discriminator 6B has deteriorated compared to the first discriminator 5B, the control unit 11 may transmit to the identification device 3B a command prohibiting or permitting the use of the second discriminator 6B.
  • alternatively, the control unit 11 may distribute to the identification device 3B the result of determining whether or not the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B, and the identification device 3B may determine whether or not to use the second discriminator 6B.
  • when it is determined that the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B, the control unit 11 may prohibit the transfer of the second discriminator 6B received from the learning device 2B to the identification device 3B, and may transmit to the identification device 3B a command for using the first discriminator 5B.
  • on the other hand, when it is determined that the determination performance of the second discriminator 6B has not deteriorated, the control unit 11 may transfer the second discriminator 6B received from the learning device 2B to the identification device 3B.
  • in step S301, the control unit 31 of the identification device 3B operates as the target data acquisition unit 311 and acquires target data 321B including the features to be identified.
  • the control unit 31 acquires the target data 321B from the measurement device 41B via the external interface 34.
  • however, the route for acquiring the target data 321B need not be limited to this example and may be appropriately selected according to the embodiment.
  • in step S302, the control unit 31 operates as the feature determination unit 312B and determines the features included in the target data 321B using the first discriminator 5B or the second discriminator 6B.
  • the discriminator to be used is determined based on the performance evaluation result in step S204. That is, when it is determined that the determination performance of the second discriminator 6B is worse than that of the first discriminator 5B, the control unit 31 determines the features included in the target data 321B using the first discriminator 5B. On the other hand, when it is determined that the determination performance of the second discriminator 6B has not deteriorated compared to the first discriminator 5B, the control unit 31 determines the features included in the target data 321B using the second discriminator 6B.
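The discriminator switch described in this step amounts to a one-line selection, sketched below. The callables and their return strings are illustrative placeholders, not from the source.

```python
# Minimal sketch of step S302: the retrained second discriminator is used only
# when the evaluation in step S204 found no deterioration.
def determine_feature(target_data, first_discriminator, second_discriminator,
                      second_degraded):
    chosen = first_discriminator if second_degraded else second_discriminator
    return chosen(target_data)

first = lambda data: "answer from first discriminator"
second = lambda data: "answer from retrained second discriminator"
```

Keeping the selection outside the discriminators themselves means neither network needs to know about the evaluation result.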
  • when using the first discriminator 5B, the control unit 31 sets the learned first discriminator 5B with reference to the first learning result data 224B. Subsequently, the control unit 31 acquires an output value from the first discriminator 5B by inputting the acquired target data 321B to the first discriminator 5B and executing the arithmetic processing of the first discriminator 5B. Then, the control unit 31 determines the features included in the target data 321B based on the output value acquired from the first discriminator 5B. The determination may be performed in the same manner as in the above embodiment.
  • when using the second discriminator 6B, the control unit 31 sets the learned second discriminator 6B with reference to the second learning result data 229B. Subsequently, the control unit 31 acquires an output value from the second discriminator 6B by inputting the acquired target data 321B to the second discriminator 6B and executing the arithmetic processing of the second discriminator 6B. Then, the control unit 31 determines the features included in the target data 321B based on the output value acquired from the second discriminator 6B.
  • in step S303, the control unit 31 operates as the output unit 313 and outputs the result of determining the features included in the target data 321B.
  • the output format of the determined result is not particularly limited and may be appropriately selected according to the embodiment.
  • the control unit 31 may output the result of determining the feature included in the target data 321B as it is from the output device 36. Further, for example, the control unit 31 may execute a predetermined output process according to the determination result.
  • for example, the control unit 31 may determine whether machine noise is included in the target data 321B. As described above, according to this modification, even when the determination performance of a discriminator has deteriorated due to relearning or additional learning, it is possible to prevent the reliability of the process for determining the features appearing in the target data from being impaired.
  • ... learning data acquisition unit, 212 ... learning processing unit, 221 ... first learning data, 222 ... image data, 223 ... correct data, 224 ... first learning result data, 226 ... second learning data, 227 ... additional image data, 228 ... correct data, 229 ... second learning result data, 3 ... inspection device, 31 ... control unit, 32 ... storage unit, 33 ... communication interface, 34 ... external interface, 35 ... input device, 36 ... output device, 37 ... drive, 83 ... inspection program, 93 ... storage medium, 311 ... target data acquisition unit, 312 ... pass/fail judgment unit, 313 ... output unit, 321 ... target image data, 41 ... camera, 5 ... first discriminator, 51 ... input layer, 52 ... intermediate layer (hidden layer), 53 ... output layer, 6 ... second discriminator, 61 ... input layer, 62 ... intermediate layer (hidden layer), 63 ... output layer

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The present invention prevents a decrease in the reliability of a determination obtained using a discriminator, even when the identification performance of the discriminator has deteriorated due to relearning or additional learning. An inspection system according to one aspect of the present invention uses evaluation data sets to evaluate the determination performance of a first discriminator before relearning or additional learning and of a second discriminator after relearning or additional learning, and determines, based on the evaluation results, whether the determination performance of the second discriminator has deteriorated compared with the first discriminator. If the determination performance of the second discriminator has deteriorated compared with the first discriminator, the inspection system determines the quality of a product using the first discriminator instead of the second discriminator.
PCT/JP2019/010178 2018-03-14 2019-03-13 Inspection system, identification system, and discriminator evaluation device WO2019176988A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018047255A JP7056259B2 (ja) 2018-03-14 2018-03-14 Inspection system, identification system, and discriminator evaluation device
JP2018-047255 2018-03-14

Publications (1)

Publication Number Publication Date
WO2019176988A1 true WO2019176988A1 (fr) 2019-09-19

Family

ID=67907914

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/010178 WO2019176988A1 (fr) 2018-03-14 2019-03-13 Inspection system, identification system, and discriminator evaluation device

Country Status (2)

Country Link
JP (1) JP7056259B2 (fr)
WO (1) WO2019176988A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6930411B2 (ja) * 2017-12-15 2021-09-01 コニカミノルタ株式会社 Information processing device and information processing method
JP7442550B2 (ja) 2019-12-20 2024-03-04 京東方科技集團股▲ふん▼有限公司 Inference computation device, model training device, and inference computation system
JP7016179B2 (ja) * 2020-02-21 2022-02-04 株式会社 システムスクエア Inspection device and program

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6067535A (en) * 1997-01-21 2000-05-23 Nortel Networks Corporation Monitoring and retraining neural network
JP2005227054A (ja) * 2004-02-12 2005-08-25 Jfe Steel Kk Method and apparatus for automatically designing flaw type discrimination logic, and surface defect meter
WO2010050334A1 (fr) * 2008-10-30 2010-05-06 コニカミノルタエムジー株式会社 Information processing device
JP2012026982A (ja) * 2010-07-27 2012-02-09 Panasonic Electric Works Sunx Co Ltd Inspection device
JP2014153906A (ja) * 2013-02-08 2014-08-25 Honda Motor Co Ltd Inspection device, inspection method, and program
JP2016109495A (ja) * 2014-12-03 2016-06-20 タカノ株式会社 Classifier generation device, appearance inspection device, classifier generation method, and program
WO2017023416A1 (fr) * 2015-07-31 2017-02-09 Northrop Grumman Systems Corporation System and method for in-situ classifier retraining for malware identification and model heterogeneity
WO2017216980A1 (fr) * 2016-06-16 2017-12-21 株式会社日立製作所 Machine learning device

Also Published As

Publication number Publication date
JP7056259B2 (ja) 2022-04-19
JP2019158684A (ja) 2019-09-19

Similar Documents

Publication Publication Date Title
WO2019176990A1 (fr) Inspection device, image discrimination device, discrimination device, inspection method, and inspection program
JP6924413B2 (ja) Data generation device, data generation method, and data generation program
WO2019176993A1 (fr) Inspection system, image recognition system, recognition system, discriminator generation system, and learning data generation device
WO2019214309A1 (fr) Model testing method and device
CN110619618A (zh) Surface defect detection method and device, and electronic apparatus
CN109060817B (zh) Artificial intelligence recheck system and method
WO2019176988A1 (fr) Inspection system, identification system, and discriminator evaluation device
WO2019176989A1 (fr) Inspection system, discrimination system, and learning data generator
US20210398674A1 (en) Method for providing diagnostic system using semi-supervised learning, and diagnostic system using same
JP2020060879A (ja) Learning device, image generation device, learning method, and learning program
US11783471B2 (en) Method and device for determining whether object includes defect
JP2021086379A (ja) Information processing device, information processing method, program, and learning model generation method
JP2020135051A (ja) Defect inspection device, defect inspection method, defect inspection program, learning device, and learned model
CN113554645A (zh) WGAN-based industrial anomaly detection method and device
JP7059889B2 (ja) Learning device, image generation device, learning method, and learning program
CN115048290A (zh) Software quality evaluation method and device, storage medium, and computer equipment
CN114355234A (zh) Intelligent quality inspection method and system for power supply modules
CN115564702A (zh) Model training method, system, device, storage medium, and defect detection method
JP7345006B1 (ja) Learning model generation method and inspection device
JP7459696B2 (ja) Anomaly detection system, learning device, anomaly detection program, learning program, anomaly detection method, and learning method
CN111160454B (zh) Fast-varying signal detection method and device
US20230274409A1 (en) Method for automatic quality inspection of an aeronautical part
CN113557536B (zh) Learning system, data generation device, data generation method, and storage medium
WO2022215446A1 (fr) Image determination device, image determination method, and program
US20210004954A1 (en) Neural network-type image processing device, appearance inspection apparatus and appearance inspection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19767334

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19767334

Country of ref document: EP

Kind code of ref document: A1