WO2022065273A1 - Model creation device and visual inspection device for visual inspection - Google Patents
Model creation device and visual inspection device for visual inspection
- Publication number
- WO2022065273A1 (PCT/JP2021/034499)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- abnormal
- data
- image
- model
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0008—Industrial image inspection checking presence/absence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8883—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges involving the calculation of gauges, generating models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
Definitions
- the present invention relates to a model creation device and a visual inspection device for visual inspection.
- Conventionally, many images of normal products and many images of abnormal products are collected in advance, and machine learning is performed using the collected images.
- For images of abnormal products, it is often desired to identify which part of the image contains the abnormality. In such cases, label images indicating the abnormal portion in each image of an abnormal product must be prepared in advance.
- The model creation device automatically creates an image of an abnormal product and a label image indicating the abnormal location from an image of a normal product. Then, using the created image of the abnormal product and the original image of the normal product, a single model used for visual inspection is created.
- The model to be created estimates, from an image of an abnormal product, at least one of the original normal-product image, an image showing the abnormal portion, and an image showing the abnormality probability.
- A learning method that makes the best use of the characteristics of representation learning is used. More specifically, in addition to the usual label image, other images related to the label image are added to the output data of the learning model so that learning is efficient and effective.
- By executing the above steps, the model creation device can automatically create a model used for visual inspection from images of one or more normal products.
- One aspect of the present invention is a model creation device that creates a model used for visual inspection, comprising: a data acquisition unit that acquires normal image data; an abnormal image creation unit that creates abnormal image data by processing the image in the normal image data; a label image creation unit that creates label image data indicating an abnormal portion based on the processing performed by the abnormal image creation unit; and a learning command unit that creates teacher data in which the abnormal image data is the input data and the normal image data and the label image data are the output data, and instructs a machine learning device to create a learning model by performing machine learning based on the teacher data.
- Another aspect of the present invention is a visual inspection device that inspects the appearance of a product based on an image of the product, comprising: a data acquisition unit that acquires image data of the product; and an estimation command unit that, using a learning model for estimating repair image data and label image data indicating an abnormal location from abnormal image data, instructs a machine learning device to estimate the repair image data and the label image data from the image data of the product, and outputs the estimated repair image data and label image data.
- According to the present invention, the cost of collecting images of abnormal products can be significantly reduced, and therefore the cost of creating the model used for visual inspection can be significantly reduced.
- A schematic hardware configuration diagram of a model creation device according to an embodiment.
- A schematic functional block diagram of a model creation device according to an embodiment.
- An example of creating abnormal image data using a predetermined geometric figure.
- A schematic functional block diagram of a model creation device according to a modified example.
- A schematic functional block diagram of a model creation device according to another modified example.
- A schematic functional block diagram of a visual inspection device according to an embodiment.
- FIG. 1 is a schematic hardware configuration diagram showing a main part of a model creating apparatus according to an embodiment of the present invention.
- The model creation device 1 according to the present embodiment can be implemented, for example, as a control device that controls an industrial machine based on a control program, or can be mounted on a personal computer attached to such a control device, on a personal computer connected to the control device via a wired/wireless network, on a cell computer, on a fog computer 6, or on a cloud server 7. In the present embodiment, the model creation device 1 is mounted on a personal computer connected to the control device via a network.
- the CPU 11 included in the model creation device 1 is a processor that controls the model creation device 1 as a whole.
- the CPU 11 reads the system program stored in the ROM 12 via the bus 22 and controls the entire model creation device 1 according to the system program. Temporary calculation data, display data, various data input from the outside, and the like are temporarily stored in the RAM 13.
- the non-volatile memory 14 is composed of, for example, a memory backed up by a battery (not shown), an SSD (Solid State Drive), or the like, and the storage state is maintained even when the power of the model creation device 1 is turned off.
- The non-volatile memory 14 stores data read from the external device 72 via the interface 15, data input via the input device 71, data acquired from the industrial machine 3 via the network 5, and the like.
- the stored data may include image data of the product captured by the sensor 4 such as a visual sensor attached to the industrial machine 3, for example.
- the data stored in the non-volatile memory 14 may be expanded in the RAM 13 at the time of execution / use. Further, various system programs such as a known analysis program are written in the ROM 12 in advance.
- The interface 15 connects the CPU 11 of the model creation device 1 to an external device 72 such as a USB device. Data related to products manufactured by each industrial machine (for example, image data of normal products and CAD data indicating product shapes) can be read from the external device 72, and data edited in the model creation device 1 can be stored via the external device 72 in external storage means such as a CF card.
- The interface 20 connects the CPU 11 of the model creation device 1 to the wired or wireless network 5. An industrial machine 3, a fog computer, a cloud server, and the like are connected to the network 5 and exchange data with the model creation device 1.
- Data read into memory, data obtained by executing programs, data output from the machine learning device 2 described later, and the like are output to and displayed on the display device 70 via the interface 17. The input device 71, composed of a keyboard, a pointing device, and the like, passes commands, data, and the like based on operator operations to the CPU 11 via the interface 18.
- the interface 21 is an interface for connecting the CPU 11 and the machine learning device 2.
- The machine learning device 2 includes a processor 201 that controls the entire machine learning device 2, a ROM 202 that stores system programs, a RAM 203 for temporary storage in each process related to machine learning, and a non-volatile memory 204 used to store learning models and the like.
- The machine learning device 2 can observe, via the interface 21, data that the model creation device 1 acquires (for example, normal image data, abnormal image data, and label data). The model creation device 1 acquires processing results output from the machine learning device 2 via the interface 21, and stores, displays, or transmits them to other devices via the network 5 or the like.
- FIG. 2 shows as a schematic block diagram the functions included in the model creation device 1 according to the embodiment of the present invention.
- Each function of the model creation device 1 according to the present embodiment is realized by the CPU 11 shown in FIG. 1 executing a system program and controlling the operation of each part of the model creation device 1.
- The model creation device 1 of the present embodiment includes a data acquisition unit 100, an abnormal image creation unit 110, a preprocessing unit 120, a label image creation unit 130, and a learning command unit 140. The machine learning device 2 connected to the model creation device 1 includes a learning unit 206. An acquisition data storage unit 300 is prepared in advance in the RAM 13 or non-volatile memory 14 of the model creation device 1 as an area for storing data that the data acquisition unit 100 acquires from the industrial machine 3 and elsewhere, and a learning model storage unit 210 is prepared in advance in the RAM 203 or non-volatile memory 204 of the machine learning device 2 as an area for storing the learning model 212 created by the learning unit 206.
- The data acquisition unit 100 is realized by the CPU 11 of the model creation device 1 shown in FIG. 1 executing a system program read from the ROM 12, performing arithmetic processing mainly using the RAM 13 and the non-volatile memory 14, and performing input control processing via the interfaces 15, 18, and 20.
- The data acquisition unit 100 may acquire image data of products captured by the sensor 4 attached to the industrial machine 3, may acquire data directly from the industrial machine 3 via the network 5, or may acquire data stored in the external device 72, the fog computer 6, the cloud server 7, and the like.
- the data acquired by the data acquisition unit 100 includes at least image data of a normal product (hereinafter referred to as normal image data).
- The data acquired by the data acquisition unit 100 may also include image data of abnormal products (hereinafter referred to as abnormal image data); in that case, it is desirable that the acquired image data carry labels indicating whether each item is normal image data or abnormal image data.
- The data acquisition unit 100 may acquire, based on operator operations, normal image data that the operator has visually confirmed. Operator input may also be accepted for attaching labels that mark acquired image data as normal image data or abnormal image data.
- the image data of the product acquired by the data acquisition unit 100 is stored in the acquisition data storage unit 300.
- The abnormal image creation unit 110 is realized by the CPU 11 of the model creation device 1 shown in FIG. 1 executing a system program read from the ROM 12 and performing arithmetic processing mainly using the RAM 13 and the non-volatile memory 14.
- The abnormal image creation unit 110 creates abnormal image data based on the normal image data stored in the acquisition data storage unit 300.
- The abnormal image creation unit 110 may create abnormal image data by superimposing a predetermined figure on part of the product image in the normal image data; by processing part of the image, for example changing its hue, saturation, or lightness, or applying a mosaic; or by adding a predetermined figure to, or removing one from, the product image in the normal image data (deforming it).
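The superimposition approach can be sketched as follows with NumPy. This is an illustrative sketch only: the image size, the square figure, the dark gray color, and the random-position scheme are assumptions, not details from the patent.

```python
import numpy as np

def superimpose_figure(normal_img, rng, size=8, color=(40, 40, 40)):
    """Create abnormal image data by drawing a square figure at a random
    position on a copy of the normal product image (H x W x 3, uint8)."""
    abnormal = normal_img.copy()
    h, w = abnormal.shape[:2]
    y = int(rng.integers(0, h - size))   # random position, as in the text
    x = int(rng.integers(0, w - size))
    abnormal[y:y + size, x:x + size] = color  # simulated defect region
    # Return the region as well, so a label image can be derived from it.
    return abnormal, (y, x, size)

rng = np.random.default_rng(0)
normal = np.full((64, 64, 3), 200, dtype=np.uint8)  # plain "normal product"
abnormal, region = superimpose_figure(normal, rng)
```

The returned region is exactly the processing content the label image creation unit would later need.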
- FIG. 3 is an example of creating abnormal image data by superimposing a predetermined figure (geometric figure) on a part of the image of the product in the normal image data.
- The predetermined figure to be superimposed may be stored in advance in the RAM 13 or non-volatile memory 14 of the model creation device 1, or a geometric figure may be generated as the predetermined figure at the time the abnormal image data is created.
- The color of the predetermined figure may even be close to the color of the product, as long as it differs from it. The position on the product image at which the figure is superimposed may be determined, for example, by generating a random number.
- A figure added in this way represents, for example, a portion where the machining quality of the product has deteriorated or a portion where the product is defective.
- When superimposing the predetermined figure, the abnormal image creation unit 110 may composite it semi-transparently with a predetermined transparency or, instead of superimposing an opaque figure, may change the hue, saturation, or lightness within the region of the figure, or apply a mosaic there. Either method can express a portion processed differently from a normal product (a portion of degraded quality).
- The abnormal image creation unit 110 may also mask the predetermined figure in consideration of the shape of the product. For example, as illustrated in FIG. 4, when part of the predetermined figure protrudes beyond the product image, the figure may be masked so that only the portion overlapping the product image is drawn.
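The masking step can be sketched as a boolean intersection, assuming the product silhouette has already been extracted as a mask (the shapes and colors below are illustrative assumptions):

```python
import numpy as np

def superimpose_masked(normal_img, product_mask, fig_mask, color=(40, 40, 40)):
    """Apply the figure only where it overlaps the product silhouette.

    product_mask and fig_mask are boolean H x W arrays; the part of the
    figure protruding beyond the product is masked out, as in FIG. 4.
    """
    abnormal = normal_img.copy()
    effective = fig_mask & product_mask      # clip the figure to the product
    abnormal[effective] = color
    return abnormal, effective

# Product occupies the left half; the figure straddles the product edge.
normal = np.full((32, 32, 3), 200, dtype=np.uint8)
product = np.zeros((32, 32), dtype=bool)
product[:, :16] = True
figure = np.zeros((32, 32), dtype=bool)
figure[10:20, 12:22] = True                  # protrudes past column 16
abnormal, effective = superimpose_masked(normal, product, figure)
```

Only the overlap (columns 12–15 here) is modified; the protruding part of the figure leaves the background untouched.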
- The region of the product image in the normal image data may be extracted by a known method using edge processing and the like, or by matching against CAD data or the like.
- FIG. 5 is an example of creating abnormal image data by adding a predetermined figure to the product image in the normal image data. It is desirable that the added figure be arranged adjacent to the product image.
- The shape of the figure to be added may be stored in advance in the RAM 13 or non-volatile memory 14 of the model creation device 1, or a geometric figure may be generated as the predetermined figure at the time the abnormal image data is created.
- The color of the added figure may be similar to the color of the product. The position on the product image at which the figure is added may be determined, for example, by generating a random number.
- the predetermined figure added in this way expresses uncut parts of the product, large burrs, and the like.
- FIG. 6 is an example of creating abnormal image data by removing a predetermined figure from the product image in the normal image data.
- The shape of the figure to be removed may be stored in advance in the RAM 13 or non-volatile memory 14 of the model creation device 1, or a geometric figure may be generated at the time the abnormal image data is created. It is desirable that the removed figure cut into the edge of the product image in the normal image data.
- The color of the removed region may be similar to the background color of the normal image data. The position on the product image from which the figure is removed may be determined, for example, by generating a random number.
- A figure removed in this way expresses a missing portion of the product, excessive cutting, or the like.
- The abnormal image creation unit 110 may create abnormal image data from each of a plurality of normal image data stored in the acquisition data storage unit 300, or may create a plurality of abnormal image data from a single normal image data by varying the shape and position of the superimposed, added, or removed figure.
- The abnormal image creation unit 110 creates enough abnormal images for the machine learning device 2 to learn the abnormal portions of products in the abnormal image data; this number may be set in advance by the operator.
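Producing an operator-set number of variants from a single normal image, varying the figure's shape and position each time, could look like this (the size range, count, and zeroed-out figure are illustrative assumptions):

```python
import numpy as np

def make_abnormal_set(normal_img, n_images, rng):
    """Generate n_images abnormal variants of one normal image by varying
    the size and position of the superimposed figure (count set by the
    operator in advance)."""
    h, w = normal_img.shape[:2]
    variants = []
    for _ in range(n_images):
        size = int(rng.integers(4, 12))      # vary the figure shape
        y = int(rng.integers(0, h - size))   # vary its position
        x = int(rng.integers(0, w - size))
        img = normal_img.copy()
        img[y:y + size, x:x + size] = 0      # dark square as the "defect"
        variants.append(img)
    return variants

rng = np.random.default_rng(1)
normal = np.full((48, 48, 3), 180, dtype=np.uint8)
dataset = make_abnormal_set(normal, n_images=100, rng=rng)
```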
- The preprocessing unit 120 is realized by the CPU 11 of the model creation device 1 shown in FIG. 1 executing a system program read from the ROM 12 and performing arithmetic processing mainly using the RAM 13 and the non-volatile memory 14.
- The preprocessing unit 120 performs predetermined image processing on the normal image data stored in the acquisition data storage unit 300 and on the abnormal image data created by the abnormal image creation unit 110.
- The predetermined image processing performed by the preprocessing unit 120 includes at least processing that makes the features of the normal and abnormal image data easier to extract.
- For example, the preprocessing unit 120 may perform edge enhancement so that the contours of objects and abnormal portions in the normal and abnormal image data are easier to identify. It may perform two-dimensional or three-dimensional rotation so that the posture and orientation of the objects in the image data become substantially the same. It may also adjust brightness and saturation so that the extent of each part of the objects in the image data becomes clear.
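As one concrete example of such preprocessing, edge enhancement can be done by adding a Laplacian high-pass response to a grayscale image. This NumPy sketch is an assumption of one possible implementation; the patent does not specify the filter.

```python
import numpy as np

def enhance_edges(gray):
    """Simple edge enhancement: add a 4-neighbour Laplacian response so
    that contours of the product and abnormal portions stand out."""
    g = gray.astype(np.float32)
    p = np.pad(g, 1, mode="edge")  # replicate borders; no SciPy needed
    lap = 4 * g - p[:-2, 1:-1] - p[2:, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]
    return np.clip(g + lap, 0, 255).astype(np.uint8)

gray = np.full((16, 16), 100, dtype=np.uint8)
gray[4:12, 4:12] = 200               # bright square with sharp edges
enhanced = enhance_edges(gray)
```

Flat regions are unchanged, while pixels at the square's border are pushed toward the extremes, sharpening the contour.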
- Although the preprocessing unit 120 is not indispensable, providing it can reduce the amount of data required for image-based learning.
- The label image creation unit 130 is realized by the CPU 11 of the model creation device 1 shown in FIG. 1 executing a system program read from the ROM 12 and performing arithmetic processing mainly using the RAM 13 and the non-volatile memory 14.
- The label image creation unit 130 creates label image data indicating the abnormal portion based on the predetermined figure used to create the abnormal image data. As illustrated in FIG. 7, the label image data can be created as image data in which the portion occupied by the predetermined figure is a first color (for example, white) and the remainder is a second color (for example, black). In label image data created this way, the portion shown in the first color indicates the abnormal portion of the product image.
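The two-color label image described above reduces to painting the figure's mask white on a black canvas; a minimal sketch (array sizes are illustrative assumptions):

```python
import numpy as np

def make_label_image(shape, fig_mask):
    """Create label image data: the figure region in a first color (white)
    and everything else in a second color (black), as in FIG. 7."""
    label = np.zeros(shape[:2], dtype=np.uint8)  # second color: black
    label[fig_mask] = 255                        # first color: white
    return label

fig_mask = np.zeros((32, 32), dtype=bool)
fig_mask[8:16, 8:16] = True                      # where the figure was drawn
label = make_label_image((32, 32), fig_mask)
```

Because the abnormal image creation unit knows exactly where it drew the figure, the label comes for free, with no manual annotation.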
- The learning command unit 140 is realized by the CPU 11 of the model creation device 1 shown in FIG. 1 executing a system program read from the ROM 12, performing arithmetic processing mainly using the RAM 13 and the non-volatile memory 14, and performing input/output control processing via the interface 21.
- The learning command unit 140 instructs the machine learning device 2 to create a learning model by performing learning based on the abnormal image data preprocessed by the preprocessing unit 120, the normal image data from which that abnormal image data was created, and the label image data created by the label image creation unit 130. For example, the learning command unit 140 creates a plurality of teacher data in which the abnormal image data is the input data and the originating normal image data and the label image data are the output data, and instructs the machine learning device 2 to perform learning based on the created teacher data.
- Here, not only the label image data but also the normal image data from which the abnormal image data was created is used as output data. A model that infers only a label image from abnormal image data could be created, but instead a learning model is created that estimates the normal image data and the label image data in parallel from the abnormal image data. As a result, a structure that extracts the feature representation of the normal image data forms inside the learning model and is also used when estimating the label image data, so the label image data indicating the abnormal portion can be estimated accurately.
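The teacher-data layout described here (abnormal image as input; normal image and label image as parallel outputs through a shared feature-extraction structure) can be sketched with a toy model. Everything below is an illustrative assumption, not the patent's implementation: a shared linear encoder stands in for the feature-extraction structure, two linear heads estimate the normal image and the label image, the data is synthetic, and training is plain gradient descent rather than a multi-layer neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, N = 64, 16, 200              # flattened 8x8 image, feature size, samples

normal_out = np.full((N, D), 0.8)  # output data 1: the original normal images
X = normal_out.copy()              # input data: abnormal images
Y_label = np.zeros((N, D))         # output data 2: label images
for i in range(N):
    j = int(rng.integers(0, D))
    X[i, j] = 0.0                  # one simulated defect pixel per sample
    Y_label[i, j] = 1.0            # label image is "white" at that pixel

W_enc = rng.normal(0, 0.1, (D, H))   # shared feature-extraction structure
W_img = rng.normal(0, 0.1, (H, D))   # head estimating the normal image
W_lab = rng.normal(0, 0.1, (H, D))   # head estimating the label image

def loss():
    F = X @ W_enc
    return float(((F @ W_img - normal_out) ** 2).mean()
                 + ((F @ W_lab - Y_label) ** 2).mean())

loss_before = loss()
lr = 0.005
for _ in range(300):               # plain gradient descent on both objectives
    F = X @ W_enc
    E_img = F @ W_img - normal_out
    E_lab = F @ W_lab - Y_label
    W_img -= lr * F.T @ E_img / N
    W_lab -= lr * F.T @ E_lab / N
    # The encoder gradient mixes both errors: the shared structure is
    # shaped jointly by the normal-image and label-image objectives.
    W_enc -= lr * X.T @ (E_img @ W_img.T + E_lab @ W_lab.T) / N
loss_after = loss()
pred_label = (X @ W_enc) @ W_lab
```

The point of the sketch is the gradient line for `W_enc`: both output heads update the same encoder, which is the mechanism by which estimating the normal image helps the label-image estimate.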
- This makes it possible to create a learning model that can estimate the label image data indicating an abnormal portion using only the image data captured by the industrial machine 3 during actual visual inspection as input, even though it is not known whether that image data is normal or abnormal.
- As an alternative, there is a method of placing normal image data on the input data side when estimating label image data. With a learning model created that way, however, label image data indicating an abnormal portion cannot be estimated unless normal image data captured from a normal part is prepared in addition to the image data captured by the industrial machine 3. In this respect as well, the learning model created in this embodiment is superior.
- The learning unit 206 of the machine learning device 2 is realized by the processor 201 shown in FIG. 1 executing a system program read from the ROM 202 and performing arithmetic processing mainly using the RAM 203 and the non-volatile memory 204.
- the learning unit 206 creates a learning model 212 by performing machine learning using teacher data in response to a command received from the learning command unit 140.
- the learning unit 206 stores the created learning model 212 in the learning model storage unit 210.
- The machine learning performed by the learning unit 206 is known supervised learning. The learning model 212 may be, for example, a multi-layer neural network.
- The learning model 212 created by the learning unit 206 has learned the correlation between the abnormal image data on one hand and the original normal image data and the label image data indicating the abnormal portion on the other.
- That is, the learning model 212 estimates, from abnormal image data, the original normal image data and the label image data indicating the abnormal portion. Since estimating one image from another with machine learning techniques is already well known, detailed description is omitted in this specification.
- The model creation device 1 described above creates abnormal image data from easily collectable normal image data and automatically creates the model used for visual inspection by performing machine learning based on the created abnormal image data. Therefore, the cost of collecting images of abnormal products for machine learning can be significantly reduced, and machine learning can proceed efficiently.
- The model creation device 1 creates a single learning model that can estimate both the normal image data and the label image data indicating the abnormal portion from the abnormal image data. In such a learning model, a structure for estimating the normal image data and a structure for estimating the label image data naturally form during learning, and the feature representations each estimates are used by the other, so learning proceeds efficiently and effectively. Computer resources such as the processor, memory, and non-volatile memory are also used efficiently.
- As a modification of the model creation device 1, the label image creation unit 130 may create, as the label image data, an image expressing, by differences in color, the probability that each pixel of the abnormal image data belongs to a predetermined label or category (in the present invention, an abnormal portion of the product image).
- In this case, the learning command unit 140 creates teacher data in which the abnormal image data is the input data, and the originating normal image data and the label image data indicating the abnormality probability of the abnormal portion are the output data, and instructs the machine learning device 2 to perform learning based on the created teacher data.
- The learning model 212 may be implemented using a known semantic segmentation technique, which associates labels or categories (here, abnormal portions of the product image) with the pixels of the image data. Alternatively, the learning model 212 may perform known image regression analysis so as to indicate, for each pixel, the probability that it belongs to an abnormal portion of the product image. Since these machine learning techniques are already well known, detailed description is omitted in this specification.
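Rendering a per-pixel abnormality probability as a label image with color differences can be sketched as follows. The color scheme (black for probability 0 through full red for probability 1) and array sizes are assumptions for illustration; the patent only specifies that color expresses the probability.

```python
import numpy as np

def probability_label(prob_map):
    """Render a per-pixel abnormality probability (0..1) as label image
    data in which color intensity expresses the probability."""
    p = np.clip(prob_map, 0.0, 1.0)
    label = np.zeros(p.shape + (3,), dtype=np.uint8)
    label[..., 0] = (p * 255).astype(np.uint8)   # red channel carries p
    return label

prob = np.zeros((8, 8))
prob[2:4, 2:4] = 0.5      # uncertain region: half intensity
prob[5, 5] = 1.0          # certain abnormality: full intensity
label = probability_label(prob)
```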
- the model creation device 1 creates a model that, based on the image data captured by the industrial machine 3, estimates label image data in which the abnormal portion in the image data is rendered in a predetermined color according to its abnormality probability. Even in this case, placing the normal image data on the output data side allows learning to proceed efficiently and effectively.
- the label image creating unit 130 may create, as the label image data, both label image data indicating the abnormal portion and label image data indicating the abnormality probability of the abnormal portion in the image.
- the learning command unit 140 creates a plurality of teacher data in which the abnormal image data is used as the input data, and the normal image data from which the abnormal image data was created, the label image data indicating the abnormal portion, and the label image data indicating the abnormality probability of the abnormal portion are used as the output data, and instructs the machine learning device 2 to perform learning based on the created teacher data.
- in the learning process of the learning model 212, a structure for estimating the normal image data, a structure for estimating the label image data indicating the abnormal portion, and a structure for estimating the label image data indicating the abnormality probability of the abnormal portion are naturally formed, and the feature representations estimated by each are mutually utilized. Therefore, learning is expected to proceed even more efficiently and effectively.
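The three-output teacher data described above can be sketched as a simple bundling step. The following NumPy illustration is hedged: the helper name `make_teacher_datum` and the dictionary layout are assumptions for this sketch, not terminology from the patent:

```python
import numpy as np

def make_teacher_datum(abnormal_img, normal_img, defect_mask, prob_map):
    """Bundle one supervised example: the abnormal image is the input,
    while the normal image, the label image indicating the abnormal
    portion, and the per-pixel abnormality-probability image together
    form the output side of the teacher data."""
    shapes = {abnormal_img.shape, normal_img.shape,
              defect_mask.shape, prob_map.shape}
    if len(shapes) != 1:
        raise ValueError("all images must share the same shape")
    return {"input": abnormal_img,
            "output": (normal_img, defect_mask, prob_map)}

h, w = 8, 8
datum = make_teacher_datum(
    np.ones((h, w)) * 0.3,         # abnormal image
    np.ones((h, w)) * 0.5,         # normal image it was created from
    np.zeros((h, w), dtype=bool),  # label image: abnormal-portion mask
    np.zeros((h, w)),              # label image: abnormality probability
)
```

A collection of such tuples would then be handed to the machine learning device for training.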
- the model creation device 1 may include a machine learning device 2. Further, as illustrated in FIG. 9, the model creation device 1 and the machine learning device 2 may be connected to each other via the network 5. In the latter case, the machine learning device 2 may be mounted in a computer such as a fog computer 6 or a cloud server 7. By doing so, the machine learning device 2 can be shared and used among a plurality of operators, and the introduction cost of the machine learning device 2 can be reduced.
- FIG. 10 shows as a schematic block diagram the functions provided by the visual inspection device 9 that inspects the visual appearance of the product using the learning model 212 created by the model creating device 1 of the present invention.
- Similar to the model creation device 1, the visual inspection device 9 according to the present embodiment can be mounted on a control device, a personal computer, a cell computer, the fog computer 6, the cloud server 7, or the like. In the following, as with the model creation device 1, the visual inspection device 9 will be described as being mounted on a personal computer equipped with the hardware shown in FIG. 1.
- the visual inspection device 9 of the present embodiment includes a data acquisition unit 100, a preprocessing unit 120, and an estimation command unit 160. Further, the machine learning device 2 connected to the visual inspection device 9 includes an estimation unit 207. Further, a learning model storage unit 210 is prepared in advance in the RAM 203 or the non-volatile memory 204 of the machine learning device 2 as an area for storing the learning model 212 created by the model creation device 1.
- the preprocessing unit 120 included in the visual inspection device 9 according to the present embodiment has the same functions as the preprocessing unit 120 included in the model creation device 1 described above.
- the data acquisition unit 100 included in the visual inspection device 9 is realized by the CPU 11 of the model creation device 1 shown in FIG. 1 executing a system program read from the ROM 12, mainly through arithmetic processing performed by the CPU 11 using the RAM 13 and the non-volatile memory 14 and input control processing via the interfaces 15, 18, and 20.
- the data acquisition unit 100 may acquire image data of the product captured by the sensor 4 attached to the industrial machine 3, or may acquire data directly from the industrial machine 3 via the network 5. Alternatively, it may acquire data that has been acquired and stored by the external device 72, the fog computer 6, the cloud server 7, or the like.
- the data acquired by the data acquisition unit 100 may include image data of a normal product and image data of an abnormal product.
- the estimation command unit 160 is realized by the CPU 11 of the model creation device 1 shown in FIG. 1 executing a system program read from the ROM 12, mainly through arithmetic processing performed by the CPU 11 using the RAM 13 and the non-volatile memory 14 and input/output control processing via the interface 21.
- based on the image data acquired by the data acquisition unit 100, the estimation command unit 160 instructs the machine learning device 2 to estimate normal image data in which the abnormal portion of the image data has been repaired (hereinafter referred to as repaired image data) and label image data indicating the abnormal portion in the image data.
- in response to the command, the estimation command unit 160 receives the repaired image data and the label image data indicating the abnormal portion estimated by the machine learning device 2. The received repaired image data and label image data are then displayed and output on the display device 70.
- the estimation command unit 160 may transmit and output the estimated repaired image data and the label image data indicating the abnormal portion to another computer via the network.
- the estimation unit 207 included in the machine learning device 2 is realized by the processor 201 of the machine learning device 2 shown in FIG. 1 executing a system program read from the ROM 202, mainly through arithmetic processing performed by the processor 201 using the RAM 203 and the non-volatile memory 204.
- the estimation unit 207 executes estimation processing using the learning model 212 based on the image data, in response to a command received from the estimation command unit 160. For example, the estimation unit 207 feeds the image data received from the estimation command unit 160 into the learning model 212 as input data, and outputs the repaired image data and the label image data indicating the abnormal portion produced by the learning model 212 to the estimation command unit 160 as the estimation result.
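The one-step estimation flow can be sketched as follows — a minimal Python illustration in which `fake_model` is a hypothetical stand-in for the trained learning model 212; the function names and the simple thresholding rule are assumptions for this sketch, not details from the patent:

```python
import numpy as np

def inspect(image, model, threshold=0.5):
    """Run one-step estimation: `model` maps a product image to a
    (repaired_image, label_probability_map) pair; the product is flagged
    abnormal when any pixel's abnormality probability reaches `threshold`."""
    repaired, label_prob = model(image)
    is_abnormal = bool((label_prob >= threshold).any())
    return repaired, label_prob, is_abnormal

def fake_model(img):
    # Hypothetical stand-in: reports one defective pixel at (2, 3).
    prob = np.zeros_like(img)
    prob[2, 3] = 0.9
    return img.copy(), prob

repaired, prob, flag = inspect(np.zeros((5, 5)), fake_model)
```

The single call replaces the two-stage pipeline of first repairing the image and then separately localizing the defect.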
- the visual inspection apparatus 9 estimates a normal product image and an image showing an abnormal part from the product image by a one-step process using a learning model.
- the model used for the estimation is created by the model creation device 1 described above, so its creation does not require the labor of collecting abnormal image data, and the cost of creating the model is significantly reduced compared with conventional approaches. This means that a model capable of performing highly accurate visual inspection can be created at a relatively early stage when manufacturing a new part or the like. Therefore, accurate visual inspection using machine learning can be started from the initial stage of product development.
- 1 Model creation device
- 2 Machine learning device
- 3 Industrial machine
- 4 Sensor
- 5 Network
- 6 Fog computer
- 7 Cloud server
- 9 Visual inspection device
- 11 CPU
- 12 ROM
- 13 RAM
- 14 Non-volatile memory
- 15, 17, 18, 20, 21 Interface
- 22 Bus
- 70 Display device
- 71 Input device
- 72 External device
- 100 Data acquisition unit
- 110 Abnormal image creation unit
- 120 Preprocessing unit
- 130 Label image creation unit
- 140 Learning command unit
- 160 Estimation command unit
- 201 Processor
- 202 ROM
- 203 RAM
- 204 Non-volatile memory
- 206 Learning unit
- 207 Estimation unit
- 210 Learning model storage unit
- 212 Learning model
- 300 Acquisition data storage unit
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Pathology (AREA)
- Signal Processing (AREA)
- Analytical Chemistry (AREA)
- Immunology (AREA)
- Chemical & Material Sciences (AREA)
- Quality & Reliability (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biochemistry (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Therefore, a technique for easily creating a model used for visual inspection based on images of normal products is desired.
FIG. 1 is a schematic hardware configuration diagram showing the main parts of a model creation device according to an embodiment of the present invention.
The model creation device 1 according to the present embodiment can be implemented, for example, as a control device that controls an industrial machine based on a control program, or it can be mounted on a personal computer installed alongside such a control device, on a personal computer connected to the control device via a wired/wireless network, or on a cell computer, the fog computer 6, or the cloud server 7. The present embodiment shows an example in which the model creation device 1 is mounted on a personal computer connected to the control device via a network.
Each function of the model creation device 1 according to the present embodiment is realized by the CPU 11 of the model creation device 1 shown in FIG. 1 executing a system program and controlling the operation of each part of the model creation device 1.
The predetermined figure to be superimposed may be stored in advance in the RAM 13 or the non-volatile memory 14 of the model creation device 1, or a figure having a geometric shape may be generated as the predetermined figure at the stage of creating the abnormal image data. The color of the predetermined figure may be a color similar to that of the product, as long as it differs from the product's color. The position in the product image at which the predetermined figure is superimposed may be determined, for example, by calculating a random value. A predetermined figure added in this way represents a portion where the machining quality of the product has deteriorated, a missing portion inside the product, or the like.
The predetermined figure to be added is desirably placed adjacent to the image of the product in the normal image data. The shape of the figure may be stored in advance in the RAM 13 or the non-volatile memory 14 of the model creation device 1, or a figure having a geometric shape may be generated as the predetermined figure at the stage of creating the abnormal image data. The color of the added figure may be a color similar to that of the product. The position in the product image at which the figure is added may be determined, for example, by calculating a random value. A predetermined figure added in this way represents uncut material left on the product, a large burr, or the like.
The shape of the predetermined figure used for removal may be stored in advance in the RAM 13 or the non-volatile memory 14 of the model creation device 1, or a figure having a geometric shape may be generated as the predetermined figure at the stage of creating the abnormal image data. The figure used for removal desirably removes an edge portion of the image of the product in the normal image data. Its color may be a color similar to the background color in the normal image data. The position in the product image to be removed may be determined, for example, by calculating a random value. A portion removed in this way represents a chipped portion of the product, an over-machined portion, or the like.
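The figure-superimposing operation can be sketched in a few lines. The following minimal NumPy illustration is an assumption for explanatory purposes — the helper name `make_abnormal` and the square "defect" patch are invented here; the patent allows arbitrary geometric figures — showing how a patch pasted at a random position on a normal image yields the matching abnormal-portion label mask from the same edit:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def make_abnormal(normal_img, patch_size=3, patch_value=1.0):
    """Create synthetic abnormal image data by superimposing a small
    'defect' square at a random position on a copy of the normal image,
    and return the matching label mask marking the abnormal portion."""
    h, w = normal_img.shape
    y = int(rng.integers(0, h - patch_size + 1))
    x = int(rng.integers(0, w - patch_size + 1))
    abnormal = normal_img.copy()
    abnormal[y:y + patch_size, x:x + patch_size] = patch_value
    mask = np.zeros_like(normal_img, dtype=bool)
    mask[y:y + patch_size, x:x + patch_size] = True
    return abnormal, mask

normal = np.zeros((16, 16))  # stand-in for a captured normal image
abnormal, mask = make_abnormal(normal)
```

Because the label mask is derived from the same random placement used to edit the image, no manual annotation of the abnormal portion is needed — which is the cost saving the device aims for.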
Note here that not only the label image data but also the normal image data from which the abnormal image data was created is used as output data. In ordinary learning, it would suffice to create a model that infers the label image from the abnormal image data. In such learning, however, simply proceeding with training often fails to discover the features of the abnormal portions in the abnormal image data. Therefore, by adding the normal image data to the label image data indicating the abnormal portion as output data, a learning model is created that estimates the normal image data and the label image data in parallel from the abnormal image data. In this way, a structure that extracts the feature representation of the normal image data is formed inside the learning model, and that structure is also used when estimating the label image data. As a result, the label image data indicating the abnormal portion can be estimated with high accuracy. Moreover, by placing the normal image data on the output data side, a learning model can be created that, when an actual visual inspection is performed, can estimate the label image data indicating the abnormal portion using only the image data captured by the industrial machine 3 (image data for which it is unknown whether it is normal or abnormal) as input. When normal image data is used to estimate the label image data, it is also possible, for example, to place the normal image data on the input data side; however, a learning model created in that way cannot estimate the label image data indicating the abnormal portion unless normal image data captured from a normal part is prepared in addition to the image data captured by the industrial machine 3. In this respect as well, the learning model created by the present embodiment is considered superior.
Further, as illustrated in FIG. 9, the model creation device 1 and the machine learning device 2 may be connected via the network 5. In the latter case, the machine learning device 2 may be mounted in a computer such as the fog computer 6 or the cloud server 7. In this way, the machine learning device 2 can be shared among a plurality of operators, and the introduction cost of the machine learning device 2 can be reduced.
2 Machine learning device
3 Industrial machine
4 Sensor
5 Network
6 Fog computer
7 Cloud server
9 Visual inspection device
11 CPU
12 ROM
13 RAM
14 Non-volatile memory
15, 17, 18, 20, 21 Interface
22 Bus
70 Display device
71 Input device
72 External device
100 Data acquisition unit
110 Abnormal image creation unit
120 Preprocessing unit
130 Label image creation unit
140 Learning command unit
160 Estimation command unit
201 Processor
202 ROM
203 RAM
204 Non-volatile memory
206 Learning unit
207 Estimation unit
210 Learning model storage unit
212 Learning model
300 Acquisition data storage unit
Claims (4)
- A model creation device that creates a model used for visual inspection, comprising:
a data acquisition unit that acquires normal image data;
an abnormal image creation unit that creates abnormal image data by applying image processing to the normal image data;
a label image creation unit that creates label image data indicating an abnormal portion based on the content of the image processing performed by the abnormal image creation unit; and
a learning command unit that creates teacher data using the abnormal image data as input data and the normal image data and the label image data as output data, and instructs a machine learning device to create a learning model by performing machine learning based on the teacher data. - The model creation device according to claim 1, further comprising the machine learning device, wherein the machine learning device comprises a learning unit that creates the learning model in response to a command from the learning command unit. - The model creation device according to claim 1, wherein the label image data is label image data indicating an abnormality probability of the abnormal portion. - A visual inspection device that performs a visual inspection of a product based on an image of the product, comprising:
a data acquisition unit that acquires image data of the product; and
an estimation command unit that instructs a machine learning device to estimate, from the image data of the product, repaired image data and label image data indicating an abnormal portion, using a learning model for estimating repaired image data and label image data indicating an abnormal portion from abnormal image data, and that outputs the estimated repaired image data and the label image data indicating the abnormal portion.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE112021005018.4T DE112021005018T5 (de) | 2020-09-25 | 2021-09-21 | Modellerstellungsvorrichtung für sichtprüfung und sichtprüfgerät |
JP2022551976A JP7428819B2 (ja) | 2020-09-25 | 2021-09-21 | 外観検査のためのモデル作成装置及び外観検査装置 |
CN202180063827.5A CN116194954A (zh) | 2020-09-25 | 2021-09-21 | 用于外观检查的模型生成装置以及外观检查装置 |
US18/044,993 US20230386014A1 (en) | 2020-09-25 | 2021-09-21 | Model generation device for visual inspection and visual inspection device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020160596 | 2020-09-25 | ||
JP2020-160596 | 2020-09-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022065273A1 true WO2022065273A1 (ja) | 2022-03-31 |
Family
ID=80846493
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/034499 WO2022065273A1 (ja) | 2020-09-25 | 2021-09-21 | 外観検査のためのモデル作成装置及び外観検査装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230386014A1 (ja) |
JP (1) | JP7428819B2 (ja) |
CN (1) | CN116194954A (ja) |
DE (1) | DE112021005018T5 (ja) |
WO (1) | WO2022065273A1 (ja) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018205163A (ja) * | 2017-06-06 | 2018-12-27 | 株式会社デンソー | 外観検査装置、変換データ生成装置、及びプログラム |
US20190287230A1 (en) * | 2018-03-19 | 2019-09-19 | Kla-Tencor Corporation | Semi-supervised anomaly detection in scanning electron microscope images |
US20200175665A1 (en) * | 2018-12-03 | 2020-06-04 | Samsung Electronics Co., Ltd. | Semiconductor wafer fault analysis system and operation method thereof |
JP2020125918A (ja) * | 2019-02-01 | 2020-08-20 | 株式会社キーエンス | 画像検査装置 |
JP2020125919A (ja) * | 2019-02-01 | 2020-08-20 | 株式会社キーエンス | 画像検査装置 |
WO2020184069A1 (ja) * | 2019-03-08 | 2020-09-17 | 日本電気株式会社 | 画像処理方法、画像処理装置、プログラム |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014190821A (ja) | 2013-03-27 | 2014-10-06 | Dainippon Screen Mfg Co Ltd | 欠陥検出装置および欠陥検出方法 |
JP7356292B2 (ja) | 2019-03-15 | 2023-10-04 | 日鉄テックスエンジ株式会社 | 画像処理装置、画像処理方法及び画像処理プログラム |
-
2021
- 2021-09-21 CN CN202180063827.5A patent/CN116194954A/zh active Pending
- 2021-09-21 JP JP2022551976A patent/JP7428819B2/ja active Active
- 2021-09-21 US US18/044,993 patent/US20230386014A1/en active Pending
- 2021-09-21 WO PCT/JP2021/034499 patent/WO2022065273A1/ja active Application Filing
- 2021-09-21 DE DE112021005018.4T patent/DE112021005018T5/de active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018205163A (ja) * | 2017-06-06 | 2018-12-27 | 株式会社デンソー | 外観検査装置、変換データ生成装置、及びプログラム |
US20190287230A1 (en) * | 2018-03-19 | 2019-09-19 | Kla-Tencor Corporation | Semi-supervised anomaly detection in scanning electron microscope images |
US20200175665A1 (en) * | 2018-12-03 | 2020-06-04 | Samsung Electronics Co., Ltd. | Semiconductor wafer fault analysis system and operation method thereof |
JP2020125918A (ja) * | 2019-02-01 | 2020-08-20 | 株式会社キーエンス | 画像検査装置 |
JP2020125919A (ja) * | 2019-02-01 | 2020-08-20 | 株式会社キーエンス | 画像検査装置 |
WO2020184069A1 (ja) * | 2019-03-08 | 2020-09-17 | 日本電気株式会社 | 画像処理方法、画像処理装置、プログラム |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022065273A1 (ja) | 2022-03-31 |
CN116194954A (zh) | 2023-05-30 |
JP7428819B2 (ja) | 2024-02-06 |
US20230386014A1 (en) | 2023-11-30 |
DE112021005018T5 (de) | 2023-07-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200210702A1 (en) | Apparatus and method for image processing to calculate likelihood of image of target object detected from input image | |
JP6693938B2 (ja) | 外観検査装置 | |
US9471057B2 (en) | Method and system for position control based on automated defect detection feedback | |
KR100532635B1 (ko) | 외형 검사를 위한 이미지 프로세싱 방법 | |
CN106770332A (zh) | 一种基于机器视觉的电子模切料缺陷检测实现方法 | |
CN110599441A (zh) | 接缝检查装置 | |
US11562479B2 (en) | Inspection apparatus, inspection method, and non-volatile storage medium | |
WO2022065273A1 (ja) | 外観検査のためのモデル作成装置及び外観検査装置 | |
US20230024820A1 (en) | Analysis device and analysis method | |
WO2022065272A1 (ja) | 外観検査のためのモデル作成装置及び外観検査装置 | |
WO2022065271A1 (ja) | 画像作成装置 | |
CN116452493A (zh) | 用于检测成型金属部件中的缺陷的方法 | |
JP7415046B2 (ja) | 画像処理装置、及びコンピュータが読み取り可能な記憶媒体 | |
JP7384000B2 (ja) | 協調作業システム、解析収集装置および解析プログラム | |
CN116724224A (zh) | 加工面判定装置、加工面判定程序、加工面判定方法、加工系统、推论装置及机器学习装置 | |
WO2023218537A1 (ja) | 対象領域抽出装置、方法、及びシステム | |
WO2023181277A1 (ja) | 外観検査装置、外観検査方法、及びコンピュータ読み取り可能な記録媒体 | |
CN111709991A (zh) | 一种铁路工机具的检测方法、系统、装置和存储介质 | |
WO2022024985A1 (ja) | 検査装置 | |
WO2022181304A1 (ja) | 検査システムおよび検査プログラム | |
US20230316492A1 (en) | Teacher data generating method and generating device | |
US20200082281A1 (en) | Verification device | |
EP4141780A1 (en) | Method and device for generating training data to generate synthetic real-world-like raw depth maps for the training of domain-specific models for logistics and manufacturing tasks | |
JP2023039521A (ja) | 特定方法、特定装置、および、特定システム | |
JP2021131853A (ja) | Arオーバーレイを使用した変化検出方法及びシステム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21872402 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022551976 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18044993 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21872402 Country of ref document: EP Kind code of ref document: A1 |