WO2021225016A1 - Visual Inspection System (外観検査システム) - Google Patents
- Publication number
- WO2021225016A1 (PCT/JP2021/002419)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- machine learning
- image
- defective
- learning model
- inspection unit
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/956—Inspecting patterns on the surface of objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- This disclosure relates to an image-based visual inspection system.
- In automatic visual inspection, it is common to capture an image of an object with a camera built on a precision optical system and to judge defects automatically by computer image processing.
- Typically, a determination is made by converting the degree of defect in the acquired image into a numerical value and comparing that value with a preset threshold.
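The threshold comparison described above can be sketched as follows; the score value and cutoff are illustrative assumptions, not values taken from this disclosure.

```python
# Minimal sketch of numerical defect judgment, assuming the degree of
# defect has already been reduced to a single score. THRESHOLD is an
# invented placeholder value.

THRESHOLD = 0.35  # preset cutoff for the numerical degree of defect

def judge(defect_score: float) -> str:
    """Judge defective ('NG') when the score exceeds the threshold."""
    return "NG" if defect_score > THRESHOLD else "OK"
```

Any item whose feature amount can be scored this way is inspected against the same cutoff.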
- Any item whose feature amount can be captured in an image, such as surface scratches, stains, shape, and orientation, can be inspected.
- The detection accuracy is strongly influenced by the inspection parameters, for example the imaging conditions such as the illumination method at the time of inspection, and by the quality of the captured image, such as variations in the color and shape of the object. The detection accuracy also changes depending on where the cutoff point (threshold value) for defect determination is set.
- Depending on the threshold setting, over-judgment, in which a product that is actually good is judged to be defective, is likely to occur, and the amount of over-judgment increases.
- The purpose of the present disclosure is to suppress the decrease in productivity caused by the occurrence of over-judged products in visual inspection.
- The first aspect of the present disclosure is a visual inspection system comprising: a primary inspection unit (10) that determines defects, without using machine learning, based on an image of an object; and a secondary inspection unit (20) that separates true defective products from over-judged products using a first machine learning model (21), based on the image of an object determined to be defective by the primary inspection unit (10).
- According to the first aspect, the secondary inspection unit (20) re-determines, using the first machine learning model (21) and based on the image of the object determined to be defective by the primary inspection unit (10), whether the object is an over-judged product. Since the occurrence of over-judged products can thereby be suppressed, the man-hours for manually re-inspecting over-judged products at the final stage of the inspection can be reduced, and the decrease in productivity can be suppressed.
- The second aspect of the present disclosure is a visual inspection system in which, in the first aspect, machine learning using non-defective images as teacher data is performed in advance on the first machine learning model (21), and the secondary inspection unit (20) separates true defective products from over-judged products based on the difference between the image of an object determined to be defective and the non-defective image generated from that image by the first machine learning model (21).
- According to the second aspect, when the difference between the image of the object determined to be defective and the non-defective image generated by the first machine learning model (21) exceeds a predetermined value, the object can be determined to be a truly defective product. Defects can therefore be determined accurately without being sensitive to slight changes (such as positional deviation) in the image of the object.
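A sketch of this re-judgment criterion, assuming the first machine learning model has already produced the non-defective counterpart image; the difference metric (mean absolute pixel difference) and the threshold value are invented placeholders.

```python
import numpy as np

def is_true_defect(actual: np.ndarray, generated_good: np.ndarray,
                   diff_threshold: float = 10.0) -> bool:
    """Re-judge: a large pixel-wise difference between the actual image
    and the generated non-defective image indicates a true defect;
    a small difference indicates an over-judged (actually good) product."""
    diff = np.abs(actual.astype(np.float64) - generated_good.astype(np.float64))
    return bool(diff.mean() > diff_threshold)
```

Because the comparison is against a generated image rather than a fixed template, small positional deviations that survive generation do not push the difference over the threshold.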
- The third aspect of the present disclosure is a visual inspection system in which, in the second aspect, the secondary inspection unit (20) evaluates the difference between the whole of the image of the object determined to be defective, and/or a specific part of that image, and the non-defective image generated by the first machine learning model (21).
- According to the third aspect, it is possible to evaluate both the difference between the entire image (macroscopic part) and the non-defective image, for items such as the mounting position of parts, and the difference between a specific part of the image (microscopic part) and the non-defective image, for items such as the solder joints of leads. As a result, microscopic parts that are easily buried in the macroscopic part can be inspected collectively, so the detection power is improved.
- The fourth aspect of the present disclosure is a visual inspection system that, in the second or third aspect, further includes a learning data generation unit (30) that generates a large number of non-defective images for training the first machine learning model (21) using a second machine learning model (31), based on line drawing information including at least contours.
- According to the fourth aspect, it is not necessary to prepare a large number of real images of the object for training the first machine learning model (21). For example, since the first machine learning model (21) can be trained even when no actual product yet exists before trial production, the inspection quality of prototypes can be improved and the trial production period shortened.
- The fifth aspect of the present disclosure is a visual inspection system that, in the second or third aspect, further includes a learning data generation unit (30) that generates a large number of non-defective images for training the first machine learning model (21) using a second machine learning model (31), based on at least one real image.
- According to the fifth aspect, the first machine learning model (21) can be trained using a small number of real images.
- The sixth aspect of the present disclosure is a visual inspection system in which, in the fourth aspect, machine learning is performed in advance on the second machine learning model (31) so as to generate a texture from hint information given for each part of the line drawing information, and the learning data generation unit (30) generates a large number of non-defective images for training by adding noise to the textures generated by the second machine learning model (31).
- According to the sixth aspect, non-defective images for training the first machine learning model (21) can be obtained for a large number of new models. Since the training of the first machine learning model (21) can therefore be completed by the time the system is put into use, convenience and responsiveness for the system user are improved.
- The seventh aspect of the present disclosure is a visual inspection system in which, in any one of the first to sixth aspects, the first machine learning model (21) is composed of a plurality of learning models machine-learned according to inspection items, and the secondary inspection unit (20) is configured to use the plurality of learning models in a hierarchical combination.
- According to the seventh aspect, by using the plurality of learning models machine-learned according to the inspection items in a hierarchical combination, true defective products can be separated from over-judged products accurately and efficiently.
- The eighth aspect of the present disclosure is a visual inspection system in which a plurality of first machine learning models (21) are prepared in advance, one for each category of object; the primary inspection unit (10) determines the category of the object from the image of the object, and the secondary inspection unit (20) uses the first machine learning model (21) corresponding to the category of the object determined by the primary inspection unit (10).
- According to the eighth aspect, the training of the first machine learning model (21) can be optimized for each category grouping objects of similar shape. True defective products can therefore be discriminated from over-judged products with high accuracy.
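The per-category model selection of the eighth aspect can be sketched as a simple lookup, keyed by the category that the primary inspection determined. The category names and model labels below are invented for illustration; in a real system the values would be trained model objects.

```python
# Hypothetical sketch: one trained first machine learning model per
# object category, selected by the category name reported by the
# primary inspection unit. All names here are illustrative.

models_by_category = {
    "chip_component": "model_chip",   # stand-in for a trained model
    "lead_component": "model_lead",
}

def select_model(category: str) -> str:
    """Pick the first machine learning model matching the category."""
    return models_by_category[category]
```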
- FIG. 1 is an overall configuration diagram of a visual inspection system according to an embodiment.
- FIG. 2 is a diagram showing a good product and a defective product of the solder joint portion of the electronic component lead mounted on the printed circuit board.
- FIG. 3 is a diagram showing how the visual inspection system according to the embodiment evaluates the difference between the entire image of the object and the specific portion of the image and the non-defective image.
- FIG. 4 is a flow chart of a visual inspection by the visual inspection system according to the embodiment.
- FIG. 5 is an overall configuration diagram of the visual inspection system according to the first modification.
- FIG. 6 is a diagram showing how a plurality of non-defective images are generated based on line drawing information by the learning data generation unit of the visual inspection system according to the first modification.
- FIG. 7 is a diagram showing how a plurality of non-defective images are generated based on real images by the learning data generation unit of the visual inspection system according to the first modification.
- FIG. 8 is a flow chart of good product image generation by the learning data generation unit of the visual inspection system according to the first modification.
- FIG. 9 is an overall configuration diagram of the visual inspection system according to the second modification.
- FIG. 10 is an overall configuration diagram of the visual inspection system according to the third modification.
- FIG. 11 is a diagram showing how machine learning of the learning models is performed for each category in the visual inspection system according to the third modification.
- The re-determination of over-judged products may be performed using a learning model.
- The machine learning of the learning models may be optimized according to the inspection item or the category of the object, and these learning models may be used in a hierarchical combination or selectively. As a result, over-judged products can be re-determined accurately even when a plurality of inspection items are inspected collectively for objects having various shapes.
- The feature amounts of the respective defect modes appear separately in the macroscopic portion, which is the whole (or most) of the image, and in the microscopic portion, which is a part of the image.
- Therefore, the macroscopic portion and the microscopic portion may be separated and judged separately. As a result, a situation in which the feature amount of the microscopic portion is buried in the feature amount of the macroscopic portion and cannot be detected can be avoided.
- A required number of training images may be generated from line drawing information of the object (for example, design drawing information) and a small number of real images, using a learning model different from the learning model that re-determines over-judged products. As a result, even when the inspection target changes due to the introduction of a new model or the like, a considerable amount of training data can be prepared in advance. Further, when a unique texture exists for each part made of a different material, the texture may be generated by another learning model from hint (label) information given for each part of the line drawing information. As a result, training images close to real images can be generated.
- As shown in FIG. 1, the visual inspection system (100) includes a primary inspection unit (10) that makes a defect determination, without using machine learning, based on an image of the object (1), and a secondary inspection unit (20) that, based on the image of an object judged to be defective by the primary inspection unit (10), separates true defective products from over-judged products using the first machine learning model (21).
- For the primary inspection unit (10), a general optical visual inspection device that does not use a machine learning model can be used for the defect determination itself.
- the primary inspection unit (10) has, for example, an imaging unit (11) such as a camera.
- The primary inspection unit (10) may have a storage unit, such as a hard disk, for storing data such as captured images.
- The primary inspection unit (10) does not use a learning model for the defect determination itself, but may use machine learning when determining the inspection threshold and inspection parameters.
- the primary inspection unit (10) and the secondary inspection unit (20) each have a processing unit for determining a defect.
- Each processing unit is composed of, for example, a processor and a memory for storing programs and information for operating the processor.
- the primary inspection unit (10) and the secondary inspection unit (20) are configured to be able to exchange data such as images with each other.
- The secondary inspection unit (20) may have a storage unit such as a hard disk for storing data such as images transmitted from the primary inspection unit (10). Further, in the visual inspection system (100), the storage units of the primary inspection unit (10) and the secondary inspection unit (20) may be shared.
- The first machine learning model (21) used in the secondary inspection unit (20) may be, for example, a neural network such as a multi-layer perceptron, a support vector machine, a discriminant function, or a Bayesian network, but is not particularly limited to these.
- When the first machine learning model (21) is configured as a neural network, it has, for example, three layers: an input layer, an intermediate layer, and an output layer. Each layer contains one or more neurons; each neuron in the input layer is connected to each neuron in the intermediate layer, and each neuron in the intermediate layer is connected to each neuron in the output layer. An image of an object determined to be defective by the primary inspection unit (10) is input to the neurons of the input layer.
- The secondary inspection unit (20) or another information processing device performs the machine learning of the first machine learning model (21), that is, the calculation of model data, for example, model data indicating the settings of the neural network.
- The model data includes, for example, the number of layers in the neural network, the number of neurons (nodes) contained in each layer, and the coupling coefficients (connection weights) between neurons.
- The secondary inspection unit (20) sets up the first machine learning model (21) using the model data, and re-judges over-judged products using the first machine learning model (21) (that is, the trained model) after the setup.
- In the machine learning, the teacher data for the input information is input to the input layer, and the data is propagated from the input layer to the output layer.
- Then, the coupling coefficients between the input layer and the output layer, and the biases assigned to the neurons in the intermediate layer, are calculated; for example, the coupling coefficients and biases are adjusted so that the difference between the output information and the teacher data becomes small.
- model data including the adjusted coupling coefficient and bias is generated.
- the model data includes, for example, the number of layers in the neural network, the number of neurons belonging to each layer, the coupling coefficient and the bias.
- the generated model data is stored, for example, in the storage unit of the secondary inspection unit (20).
- Before re-judging over-judged products, the secondary inspection unit (20) sets up the neural network based on the stored model data. That is, the secondary inspection unit (20) sets the number of layers, the number of neurons, the coupling coefficients, and the biases of the neural network constituting the first machine learning model (21) to the values specified in the model data. In this way, the secondary inspection unit (20) re-judges over-judged products with the first machine learning model (21) using the model data.
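The network reconstructed from the stored model data can be sketched as lists of coupling-coefficient matrices and bias vectors, one pair per layer. The plain NumPy forward pass below stands in for whatever framework the real system would use, and the tanh activation is an assumption, since the disclosure does not specify one.

```python
import numpy as np

def forward(x, weights, biases):
    """Propagate an input vector through fully connected layers whose
    coupling coefficients (weights) and biases come from model data."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:   # activation on hidden layers only
            x = np.tanh(x)
    return x
```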
- Machine learning using non-defective images as teacher data is performed in advance on the first machine learning model (21), and, based on the image of an object determined to be defective by the primary inspection unit (10), the first machine learning model (21) may generate a non-defective image corresponding to that image. In this case, for example, a generative adversarial network (GAN) may be used as the first machine learning model (21). For a truly defective product, the difference between the actual image and the non-defective image generated by the first machine learning model (21) is large; by measuring this difference and setting an arbitrary threshold, the secondary inspection unit (20) can separate true defective products from over-judged products.
- A case where a printed circuit board on which a large number of electronic components are mounted by soldering or the like is used as the object (1), and defects are determined by the visual inspection system (100) of the present embodiment, will be described as an example.
- In this case, the mounting state of a plurality of electronic components, each several hundred to several thousand μm in size and of different shapes (several hundred or more per board), is inspected over several tens of seconds per board.
- In such an inspection, the inspection accuracy and the amount of over-judgment are in a trade-off relationship; however, by re-judging over-judged products (secondary inspection) using a learning model as in the present embodiment, the amount of over-judgment can be effectively reduced while the inspection accuracy is maintained.
- FIG. 2 is a diagram showing good and defective lead solder joints in electronic components mounted on a printed circuit board.
- Compared with the non-defective states (a) and (b), various defective states (c) to (l) can occur.
- In the defective state (g), pinholes (blowholes) (6) occur in the solder (5), and in the defective state (h), foreign matter (7) is mixed into the solder (5), resulting in a defective product.
- In a further defective state, another lead (8) is bonded to the solder (5), and in the defective state (l), a solder ball (9) is generated.
- The solder joint portion of a lead as shown in FIG. 2 is a specific portion of the image (hereinafter sometimes referred to as the "microscopic portion").
- When a feature amount that appears in the macroscopic portion, such as misalignment of parts, and a feature amount that appears in the microscopic portion, such as a solder joint failure of a lead, appear at the same time, the feature amount of the microscopic portion may be buried in the feature amount of the macroscopic portion and go undetected.
- Therefore, as shown in FIG. 3, for each of the macroscopic portion and the microscopic portion of the image of the object (1) determined to be defective by the primary inspection unit (10), the difference (difference image) from the non-defective image generated by the first machine learning model (21) is evaluated.
- FIG. 3 also shows the results (frequencies) of determining defects (NG) for a large number of objects (1) based on the difference amount (degree of abnormality) for each of the macroscopic and microscopic portions. By performing the defect determination separately for the macroscopic portion and the microscopic portion in this way, the degree of abnormality of the microscopic portion, which tends to be buried in that of the macroscopic portion, can be evaluated.
- the difference from the non-defective image was evaluated for both the macroscopic part and the microscopic part of the image of the object (1) judged to be defective by the primary inspection unit (10).
- the difference from the non-defective image may be evaluated for either the macroscopic portion or the microscopic portion.
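The separate macroscopic/microscopic evaluation described above can be sketched as computing the same difference metric over the whole image and over a crop. The crop coordinates, thresholds, and the mean-absolute-difference metric are all illustrative assumptions.

```python
import numpy as np

def region_anomaly(actual, good, box=None):
    """Mean absolute difference over the whole image, or over a crop
    (the microscopic portion) given as (top, bottom, left, right)."""
    if box is not None:
        top, bottom, left, right = box
        actual = actual[top:bottom, left:right]
        good = good[top:bottom, left:right]
    return float(np.abs(actual - good).mean())

def judge_macro_micro(actual, good, micro_box, macro_th=5.0, micro_th=5.0):
    """Defective if either the macroscopic or the microscopic portion
    differs noticeably from the generated non-defective image."""
    return (region_anomaly(actual, good) > macro_th
            or region_anomaly(actual, good, micro_box) > micro_th)
```

Averaging over the whole image dilutes a small local defect, which is exactly why the crop is evaluated separately.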
- The position recognition of each microscopic portion in the image of the object (1) may be performed by a separately prepared machine learning model. This is because, even if the dimensional accuracy of each electronic component is good, when the same component is arranged differently on the board, the background circuit pattern and the printed characters on the board surface also differ, making it difficult to define the microscopic portion manually.
- Moreover, the fact that the position cannot be recognized by the separately prepared learning model itself indicates an abnormality such as misalignment of the part. Consequently, automatically recognizing the microscopic portion using the first machine learning model (21) improves the inspection accuracy.
- FIG. 4 is a flow chart of a visual inspection by the visual inspection system (100) of the present embodiment.
- In step S1, it is determined whether or not the defect determination has been completed for all the components (parts and portions to be inspected) of the object (1). If the defect determination is completed for all the components, the process ends.
- If not, in step S2, the primary inspection unit (10) determines whether the component to be judged is defective. If the component has no defect, it is judged non-defective in step S9, and the process returns to step S1. If the component is defective, in step S3 the secondary inspection unit (20) acquires the image of the object (1) including the component from the primary inspection unit (10). When the primary inspection unit (10) has a storage unit, the secondary inspection unit (20) may acquire images for all the components of the object at once from the primary inspection unit (10), instead of acquiring an image each time a component is judged.
- Having acquired the image of the object (1), the secondary inspection unit (20) may select (switch to) the first machine learning model (21) corresponding to the component in step S4.
- the selection of this first machine learning model (21) will be described in detail in Modification 3 below.
- In step S5, the secondary inspection unit (20) uses the first machine learning model (21) to evaluate a defect of the component (for example, misalignment) in the macroscopic portion of the image of the object (1), based on the difference from the non-defective image described above; if there is a defect, the component is judged defective in step S8, and the process returns to step S1. If there is no defect in step S5, in step S6 the secondary inspection unit (20) uses the first machine learning model (21) to perform position recognition of the microscopic portion associated with the component (for example, a solder joint) in the image of the object (1).
- If the position of the microscopic portion could not be recognized in step S6, the component is judged defective in step S8, and the process returns to step S1.
- If the position was recognized, the secondary inspection unit (20) uses the first machine learning model (21) in step S7 to evaluate whether the component has a defect in the microscopic portion (for example, a solder joint defect), based on the difference from the non-defective image described above; if there is a defect, the component is judged defective in step S8, and the process returns to step S1. If there is no defect in step S7, the secondary inspection unit (20) judges the component non-defective in step S9 and returns to step S1.
- Steps S2 to S9 described above are repeated for each component until it is determined in step S1 that the defect determination has been completed for all components.
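One way to sketch steps S1 to S9 above is a loop over the components, with the four checks passed in as callables. Every name here is illustrative and not part of the original text.

```python
# Hedged sketch of the FIG. 4 flow: primary judgment, then macroscopic
# re-judgment, microscopic position recognition, and microscopic
# re-judgment, assuming each check is available as a function.

def inspect(components, primary_ng, macro_ng, find_micro, micro_ng):
    results = {}
    for comp in components:                 # S1: loop until all done
        if not primary_ng(comp):            # S2: primary inspection
            results[comp] = "OK"            # S9: non-defective
            continue
        if macro_ng(comp):                  # S5: macroscopic re-judgment
            results[comp] = "NG"            # S8: defective
            continue
        if find_micro(comp) is None:        # S6: locate microscopic part
            results[comp] = "NG"            # S8: position not recognized
        elif micro_ng(comp):                # S7: microscopic re-judgment
            results[comp] = "NG"
        else:
            results[comp] = "OK"            # S9: over-judged, actually good
    return results
```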
- As described above, the secondary inspection unit (20) re-determines, using the first machine learning model (21) and based on the image of the object (1) determined to be defective by the primary inspection unit (10), whether the object is an over-judged product. Since the occurrence of over-judged products can thereby be suppressed, the man-hours for re-inspecting over-judged products at the final stage of inspection can be reduced, and the decrease in productivity due to over-judged products can be suppressed. This is particularly effective when the amount of over-judgment is large.
- The re-inspection is often a visual operation; in the present embodiment, fatigue, oversights, and assumptions by the visual inspection personnel caused by the repetition of simple operations can be suppressed, so a decrease in detection power can be constantly prevented.
- Machine learning using non-defective images as teacher data is performed in advance on the first machine learning model (21); a non-defective image may be generated by the first machine learning model (21) from the image of the object (1) determined to be defective by the primary inspection unit (10), and true defective products may be separated from over-judged products based on the difference between the generated non-defective image and the image of the object (1) determined to be defective. In this way, when the difference between the image of the object (1) determined to be defective and the corresponding non-defective image exceeds a predetermined value, the object can be determined to be a truly defective product. Defects can therefore be determined accurately without sensitivity to slight changes (such as positional deviation) in the image of the object (1).
- The secondary inspection unit (20) may evaluate, for the macroscopic portion (the whole or most of the image) and/or the microscopic portion (a specific portion) of the image of the object (1) determined to be defective, the difference from the non-defective image generated by the first machine learning model (21). For the mounting position of parts, the difference between the macroscopic portion of the image and the non-defective image is evaluated; for the solder joints of leads and the like, the difference between the microscopic portion of the image and the non-defective image is evaluated.
- Modification 1 will be described with reference to the drawings.
- As shown in FIG. 5, the visual inspection system (100) according to the present modification differs from the embodiment shown in FIG. 1 in that it includes a learning data generation unit (30) that generates a large number of non-defective images for training the first machine learning model (21) using a second machine learning model (31).
- the same components as those in the embodiment shown in FIG. 1 are designated by the same reference numerals.
- The second machine learning model (31) may generate non-defective images based on line drawing information including at least contours (for example, design drawing information), as shown in FIG. 6, or may generate non-defective images based on at least one real image, as shown in FIG. 7.
- Since electronic parts are industrial products, the dimensions and shape of each part type, and the board surface at the assembly destination, are basically the same, and the texture (hue, brightness, etc.) determined by the material of each part is easy to define. Therefore, when the second machine learning model (31) generates non-defective images based on line drawing information in this modification, machine learning may be performed in advance so that the second machine learning model (31) generates a texture from the hint (label) information given for each part of the line drawing information. By adding a controlled amount of noise to the textures generated by the second machine learning model (31), a large number of non-defective images close to real images can be generated.
- FIG. 8 is an example of a flow diagram of non-defective image generation by the learning data generation unit (30) of the visual inspection system (100) of this modification.
- In FIG. 8, the "line drawing", "label", "texture", and "generated image" are shown schematically so as to correspond to the steps of the flow diagram.
- step S11 the learning data generation unit (30) acquires line drawing information (original image) including at least the outline.
- step S12 label information is given to the learning data generation unit (30) for each part of the line drawing information, and the second machine learning model (31) generates a texture for each part from the label information.
- the texture may be generated in advance, and the obtained texture may be stored in the database D1.
- step S13 the learning data generation unit (30) reads out the texture for each part stored in the database D1 and adds noise to the read texture.
- In step S14, the learning data generation unit (30) generates a large number of non-defective images by combining the noise-added textures for the respective parts, and outputs the generated non-defective images in step S15.
- The output non-defective images are stored in, for example, the storage unit of the secondary inspection unit (20) or of another information processing device, and are used as teacher data in the machine learning of the first machine learning model (21) described in the above embodiment.
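Steps S11 to S15 above can be sketched as follows. The flat per-label texture levels stand in for the textures the second machine learning model would produce from the label hints (and which database D1 would store); the label names, grey levels, and noise scale are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the per-part textures stored in database D1; in the
# disclosure these are generated by the second machine learning model
# from label hints. Values are arbitrary grey levels.
TEXTURES = {"board": 40.0, "body": 80.0, "lead": 200.0}

def generate_good_images(label_map, n_images, noise_sigma=3.0):
    """Compose training images by filling each labelled region with its
    texture level plus random noise (steps S12 to S14)."""
    images = []
    for _ in range(n_images):
        img = np.zeros(label_map.shape, dtype=np.float64)
        for label, level in TEXTURES.items():
            mask = label_map == label
            img[mask] = level + rng.normal(0.0, noise_sigma, int(mask.sum()))
        images.append(img)
    return images
```

Varying only the noise yields many distinct but plausible non-defective images from a single line drawing.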
- As described above, in this modification, the second machine learning model (31) generates a large number of non-defective images for training the first machine learning model (21) based on line drawing information including at least contours. It is therefore unnecessary to prepare a large number of real images of the object (1) for training the first machine learning model (21). For example, the first machine learning model (21) can be trained even when no actual product yet exists before trial production, so the inspection quality can be improved for prototypes with few actual products or at the initial stage of mass production, and the time to complete training can be shortened.
- Alternatively, the learning data generation unit (30) may use the second machine learning model (31) to generate a large number of non-defective images for training the first machine learning model (21) based on at least one real image. In that case it is likewise unnecessary to prepare a large number of real images of the object; in other words, the first machine learning model (21) can be trained from a small number of real images.
- The second machine learning model (31) is trained in advance to generate a texture from the hint information given for each part (that is, each basic part) of the line drawing information.
- Non-defective images for training the first machine learning model (21) can thus be obtained for a large number of new product models. Since training of the first machine learning model (21) can already be under way when the system is first put into use, convenience and responsiveness can be guaranteed to the end user of the system from the start.
- Without the learning data generation unit (30), it may be impossible to prepare images to serve as teacher data for the first machine learning model (21) when the inspection target is new or does not yet exist, or only a limited amount of training data can be prepared. As a result, a first machine learning model (21) that can handle parts newly adopted in a new product model cannot be trained in advance, or its training is insufficient, and the re-judgment accuracy decreases.
- Modification 2 will be described with reference to the drawings.
- The appearance inspection system (100) according to this modification differs from the embodiment shown in FIG. 1 in that the first machine learning model (21) is composed of a plurality of learning models, each machine-trained for a different inspection item (for example, a macroscopic part and a microscopic part), and the secondary inspection unit (20) uses these learning models in a hierarchical combination.
- the same components as those in the embodiment shown in FIG. 1 are designated by the same reference numerals.
- When the secondary inspection unit (20) receives an image (15) of an object (1) determined to be defective by the primary inspection unit (10), it performs defect determination for inspection items 1 to N in sequence, using the learning model optimized for each item. Only when no defect is found in any item is the judgment result (25) of "non-defective" output.
- In addition to the effects of the embodiment, this modification provides the following effects. By hierarchically combining a plurality of learning models, each optimally machine-trained for its inspection item, true defective products can be separated from over-judged products accurately and efficiently. Further, when a defect is found in one item, "defective (NG)" can be confirmed and the inspection completed efficiently without performing defect determination for the remaining items. Conversely, the system can also be programmed so that one or more inspection items are arbitrarily selected according to the target, so that the desired result information is collected.
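The hierarchical judgment described above can be sketched as follows; the item names and the callable-model interface are illustrative assumptions, not the actual implementation.

```python
def hierarchical_rejudge(image, item_models):
    """Run per-item defect models in order; stop at the first NG.

    item_models: list of (item_name, model) pairs, where model(image)
    returns True if the item is OK. Returns ("good", None) when every
    item passes, or ("defective", failing_item) at the first failure,
    skipping the remaining items.
    """
    for name, model in item_models:
        if not model(image):
            return ("defective", name)  # confirm NG without checking later items
    return ("good", None)
```

Selecting only some inspection items, as the text mentions, would amount to passing a filtered `item_models` list.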
- Modification 3 will be described with reference to the drawings.
- The appearance inspection system (100) according to this modification differs from the embodiment shown in FIG. 1 as follows. A plurality of first machine learning models (21) are prepared in advance, one for each category into which objects of similar shape are grouped. The primary inspection unit (10) determines the category of the object (1) from its captured image, and the secondary inspection unit (20) uses the first machine learning model (21) corresponding to the category determined by the primary inspection unit (10).
- the same components as those in the embodiment shown in FIG. 1 are designated by the same reference numerals.
- When the secondary inspection unit (20) receives an image (15) of an object (1) determined to be defective by the primary inspection unit (10), the selection unit (22) selects the learning model (judgment model) corresponding to the category information (16), the selected judgment model is used to make the defect judgment, and the judgment result (25) is output.
- Judgment models 1 to N are prepared, one for each of the N categories of component groups 1 to N. Further, as shown in FIG. 11, each of judgment models 1 to N is generated by training in advance on the learning data (image data set) prepared for its category.
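The category-based selection performed by the selection unit (22) can be sketched as follows; the class name, category keys, and callable judgment models are illustrative assumptions.

```python
class CategorySelector:
    """Sketch of the selection unit (22): pick the judgment model that
    matches the category information (16) supplied by the primary
    inspection unit, then apply it to the image."""

    def __init__(self, models_by_category):
        # models_by_category: dict mapping category name -> judgment model,
        # where each model is a callable image -> judgment result.
        self.models = models_by_category

    def judge(self, image, category):
        model = self.models[category]  # judgment model trained for this category
        return model(image)
```

Each entry in `models_by_category` corresponds to one of judgment models 1 to N, trained on the data set for its component group.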
- In this way, the training of the first machine learning model (21) can be optimized for each category of similarly shaped objects, so true defective products can be discriminated from over-judged products with high accuracy.
- The appearance inspection system (100) of the above-described embodiment may also be used for the shipping inspection of fresh vegetables as the object (1).
- In such an inspection, it may be desirable to inspect both characteristics appearing in the vegetable as a whole, such as the leaf color indicating freshness, and characteristics appearing in a specific part, such as the stump diameter indicating the growing period.
Abstract
Description
In the appearance inspection system according to the embodiments described below, based on an image of an object determined to be defective by a conventional automatic visual inspection apparatus (primary inspection unit) that does not use a learning model such as a neural network for the judgment itself, a secondary inspection unit using a learning model performs, in real time, a secondary judgment that separates true defective products from over-judged products.
An embodiment will be described with reference to the drawings. As shown in FIG. 1, the appearance inspection system (100) according to this embodiment includes a primary inspection unit (10) that performs defect determination, without using machine learning, based on an image of an object (1), and a secondary inspection unit (20) that, based on the image of an object determined to be defective by the primary inspection unit (10), separates true defective products from over-judged products using a first machine learning model (21).
According to this embodiment, based on the image of an object (1) determined to be defective by the primary inspection unit (10), the secondary inspection unit (20) re-judges, using the first machine learning model (21), whether the object is an over-judged product. This suppresses the occurrence of over-judged products and reduces the man-hours needed to re-inspect for over-judgment at the final stage of inspection, so the drop in productivity caused by over-judged products can be suppressed; this is particularly effective when the amount of over-judgment is large.
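The two-stage flow described above can be sketched as follows; both judges are injected as callables, and all names are illustrative assumptions rather than the actual implementation.

```python
def two_stage_inspection(image, primary_is_defective, secondary_is_true_defect):
    """Sketch of the two-stage flow: a non-ML primary check followed,
    only for primary rejects, by an ML-based re-judgment that separates
    true defects from over-detections."""
    if not primary_is_defective(image):
        return "good"          # passed the primary inspection outright
    if secondary_is_true_defect(image):
        return "defective"     # confirmed as a true defect by the ML model
    return "good"              # over-detection, rescued by the re-judgment
```

Only images rejected by the primary stage ever reach the machine-learning model, which matches the division of labor between units (10) and (20).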
Modification 1 will be described with reference to the drawings. As shown in FIG. 5, the appearance inspection system (100) according to this modification differs from the embodiment shown in FIG. 1 in that it includes a learning data generation unit (30) that uses a second machine learning model (31) to generate a large number of non-defective product images for training the first machine learning model (21). In FIG. 5, the same components as in the embodiment shown in FIG. 1 are designated by the same reference numerals.
This modification provides the following effects in addition to those of the embodiment.
Modification 2 will be described with reference to the drawings. As shown in FIG. 9, the appearance inspection system (100) according to this modification differs from the embodiment shown in FIG. 1 in that the first machine learning model (21) is composed of a plurality of learning models, each machine-trained for a different inspection item (for example, a macroscopic part and a microscopic part), and the secondary inspection unit (20) uses these learning models in a hierarchical combination. In FIG. 9, the same components as in the embodiment shown in FIG. 1 are designated by the same reference numerals.
This modification provides the following effects in addition to those of the embodiment. By hierarchically combining a plurality of learning models, each optimally machine-trained for its inspection item, true defective products can be separated from over-judged products accurately and efficiently. Further, when a defect is found in one item, "defective (NG)" can be confirmed and the inspection completed efficiently without performing defect determination for the remaining items. Conversely, the system can also be programmed so that one or more inspection items are arbitrarily selected according to the target, so that the desired result information is collected.
Modification 3 will be described with reference to the drawings. As shown in FIG. 10, the appearance inspection system (100) according to this modification differs from the embodiment shown in FIG. 1 as follows. A plurality of first machine learning models (21) are prepared in advance, one for each category into which objects of similar shape are grouped; the primary inspection unit (10) determines the category of the object (1) from its captured image; and the secondary inspection unit (20) uses the first machine learning model (21) corresponding to the category determined by the primary inspection unit (10). In FIG. 10, the same components as in the embodiment shown in FIG. 1 are designated by the same reference numerals.
This modification provides the following effects in addition to those of the embodiment. When the inspected objects (1) have various shapes, the training of the first machine learning model (21) can be optimized for each category of similarly shaped objects, so true defective products can be discriminated from over-judged products with high accuracy.
The above embodiment (including the modifications; the same applies below) was illustrated with a printed circuit board on which many electronic components are mounted by soldering or the like as the object (1), but the object (1) is of course not limited to this. For example, the appearance inspection system (100) of the embodiment may be used for the shipping inspection of fresh vegetables as the object (1). In such an inspection, it may be desirable to inspect both characteristics appearing in the vegetable as a whole, such as the leaf color indicating freshness, and characteristics appearing in a specific part, such as the stump diameter indicating the growing period. By using the appearance inspection system (100) of the embodiment, the whole-vegetable characteristics and the part-specific characteristics can be assigned to the macroscopic and microscopic parts, respectively, and inspected accurately and efficiently.
20 secondary inspection unit
21 first machine learning model
30 learning data generation unit
31 second machine learning model
100 appearance inspection system
Claims (8)
- An appearance inspection system comprising:
a primary inspection unit (10) that performs defect determination, without using machine learning, based on an image of an object; and
a secondary inspection unit (20) that, based on the image of an object determined to be defective by the primary inspection unit (10), separates true defective products from over-judged products using a first machine learning model (21).
- The appearance inspection system of claim 1, wherein
machine learning using non-defective product images as teacher data is performed in advance on the first machine learning model (21), and
the secondary inspection unit (20) separates true defective products from over-judged products based on the difference between the image of the object determined to be defective and the non-defective image generated from that image by the first machine learning model (21).
- The appearance inspection system of claim 2, wherein
the secondary inspection unit (20) evaluates, for the whole and/or each specific part of the image of the object determined to be defective, the difference from the non-defective image generated by the first machine learning model (21).
- The appearance inspection system of claim 2 or 3, further comprising
a learning data generation unit (30) that, based on line drawing information including at least a contour, generates a large number of non-defective images for training the first machine learning model (21) using a second machine learning model (31).
- The appearance inspection system of claim 2 or 3, further comprising
a learning data generation unit (30) that, based on at least one real image, generates a large number of non-defective images for training the first machine learning model (21) using a second machine learning model (31).
- The appearance inspection system of claim 4, wherein
machine learning is performed in advance on the second machine learning model (31) so that a texture is generated from hint information given for each part of the line drawing information, and
the learning data generation unit (30) generates a large number of non-defective training images by adding noise to the textures generated by the second machine learning model (31).
- The appearance inspection system of any one of claims 1 to 6, wherein
the first machine learning model (21) is composed of a plurality of learning models machine-trained according to inspection items, and
the secondary inspection unit (20) uses the plurality of learning models in a hierarchical combination.
- The appearance inspection system of any one of claims 1 to 7, wherein
a plurality of first machine learning models (21) are prepared in advance for each category of object,
the primary inspection unit (10) determines the category of an object from its captured image, and
the secondary inspection unit (20) uses the first machine learning model (21) corresponding to the category determined by the primary inspection unit (10).
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180029817.XA CN115427791A (zh) | 2020-05-08 | 2021-01-25 | 外观检查系统 |
BR112022021095A BR112022021095A2 (pt) | 2020-05-08 | 2021-01-25 | Sistema de inspeção de aparência |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020-082753 | 2020-05-08 | ||
JP2020082753A JP7410402B2 (ja) | 2020-05-08 | 2020-05-08 | 外観検査システム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021225016A1 true WO2021225016A1 (ja) | 2021-11-11 |
Family
ID=78409411
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/002419 WO2021225016A1 (ja) | 2020-05-08 | 2021-01-25 | 外観検査システム |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP7410402B2 (ja) |
CN (1) | CN115427791A (ja) |
BR (1) | BR112022021095A2 (ja) |
WO (1) | WO2021225016A1 (ja) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023233489A1 (ja) * | 2022-05-30 | 2023-12-07 | 日本電信電話株式会社 | 情報処理装置、検出方法及び検出プログラム |
JP7270314B1 (ja) * | 2022-11-07 | 2023-05-10 | タクトピクセル株式会社 | 検査方法、検査システム、ニューラルネットワーク |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08136466A (ja) * | 1994-11-10 | 1996-05-31 | Dainippon Screen Mfg Co Ltd | 画像パターン検査装置 |
JP2004294360A (ja) * | 2003-03-28 | 2004-10-21 | Hitachi High-Technologies Corp | 欠陥分類方法及び装置 |
JP2018205123A (ja) * | 2017-06-05 | 2018-12-27 | 学校法人梅村学園 | 画像検査システムの性能調整のための検査用画像を生成する画像生成装置及び画像生成方法 |
US20190318471A1 (en) * | 2018-04-13 | 2019-10-17 | Taiwan Semiconductor Manufacturing Co., Ltd. | Hot spot defect detecting method and hot spot defect detecting system |
WO2020031984A1 (ja) * | 2018-08-08 | 2020-02-13 | Blue Tag株式会社 | 部品の検査方法及び検査システム |
JP6653929B1 (ja) * | 2019-07-18 | 2020-02-26 | Jeインターナショナル株式会社 | 自動判別処理装置、自動判別処理方法、検査システム、プログラム、および記録媒体 |
JP2020030145A (ja) * | 2018-08-23 | 2020-02-27 | 東京エレクトロンデバイス株式会社 | 検査装置及び検査システム |
WO2020071234A1 (ja) * | 2018-10-05 | 2020-04-09 | 日本電産株式会社 | 画像処理装置、画像処理方法、外観検査システムおよびコンピュータプログラム |
CN110992329A (zh) * | 2019-11-28 | 2020-04-10 | 上海微创医疗器械(集团)有限公司 | 一种产品表面缺陷检测方法、电子设备及可读存储介质 |
JP2020187657A (ja) * | 2019-05-16 | 2020-11-19 | 株式会社キーエンス | 画像検査装置 |
-
2020
- 2020-05-08 JP JP2020082753A patent/JP7410402B2/ja active Active
-
2021
- 2021-01-25 WO PCT/JP2021/002419 patent/WO2021225016A1/ja active Application Filing
- 2021-01-25 BR BR112022021095A patent/BR112022021095A2/pt unknown
- 2021-01-25 CN CN202180029817.XA patent/CN115427791A/zh active Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08136466A (ja) * | 1994-11-10 | 1996-05-31 | Dainippon Screen Mfg Co Ltd | 画像パターン検査装置 |
JP2004294360A (ja) * | 2003-03-28 | 2004-10-21 | Hitachi High-Technologies Corp | 欠陥分類方法及び装置 |
JP2018205123A (ja) * | 2017-06-05 | 2018-12-27 | 学校法人梅村学園 | 画像検査システムの性能調整のための検査用画像を生成する画像生成装置及び画像生成方法 |
US20190318471A1 (en) * | 2018-04-13 | 2019-10-17 | Taiwan Semiconductor Manufacturing Co., Ltd. | Hot spot defect detecting method and hot spot defect detecting system |
WO2020031984A1 (ja) * | 2018-08-08 | 2020-02-13 | Blue Tag株式会社 | 部品の検査方法及び検査システム |
JP2020030145A (ja) * | 2018-08-23 | 2020-02-27 | 東京エレクトロンデバイス株式会社 | 検査装置及び検査システム |
WO2020071234A1 (ja) * | 2018-10-05 | 2020-04-09 | 日本電産株式会社 | 画像処理装置、画像処理方法、外観検査システムおよびコンピュータプログラム |
JP2020187657A (ja) * | 2019-05-16 | 2020-11-19 | 株式会社キーエンス | 画像検査装置 |
JP6653929B1 (ja) * | 2019-07-18 | 2020-02-26 | Jeインターナショナル株式会社 | 自動判別処理装置、自動判別処理方法、検査システム、プログラム、および記録媒体 |
CN110992329A (zh) * | 2019-11-28 | 2020-04-10 | 上海微创医疗器械(集团)有限公司 | 一种产品表面缺陷检测方法、电子设备及可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
BR112022021095A2 (pt) | 2022-12-06 |
JP2021177154A (ja) | 2021-11-11 |
CN115427791A (zh) | 2022-12-02 |
JP7410402B2 (ja) | 2024-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7004145B2 (ja) | 欠陥検査装置、欠陥検査方法、及びそのプログラム | |
TWI653605B (zh) | 利用深度學習的自動光學檢測方法、設備、電腦程式、電腦可讀取之記錄媒體及其深度學習系統 | |
KR102171491B1 (ko) | 딥러닝을 이용한 양품 선별 방법 | |
CN110658198B (zh) | 光学检测方法、光学检测装置及光学检测系统 | |
WO2021225016A1 (ja) | 外観検査システム | |
WO2018006180A1 (en) | System and method for combined automatic and manual inspection | |
CN109840900A (zh) | 一种应用于智能制造车间的故障在线检测系统及检测方法 | |
JP2006220648A (ja) | 基板検査装置並びにその検査ロジック設定方法および検査ロジック設定装置 | |
TW202113345A (zh) | 影像辨識裝置、影像辨識方法及其電腦程式產品 | |
CN112308816B (zh) | 影像辨识装置、影像辨识方法及其存储介质 | |
JP2020112456A (ja) | 検査装置及び検査方法 | |
JP4814116B2 (ja) | 実装基板外観検査方法 | |
KR20210008352A (ko) | 촬상된 품목의 결함을 검출하기 위한 시스템 및 방법 | |
Kefer et al. | An intelligent robot for flexible quality inspection | |
US20220067914A1 (en) | Method and apparatus for the determination of defects during a surface modification method | |
Thielen et al. | Clustering of Image Data to Enhance Machine Learning Based Quality Control in THT Manufacturing | |
Devasena et al. | AI-Based Quality Inspection of Industrial Products | |
JP2006078285A (ja) | 基板検査装置並びにそのパラメータ設定方法およびパラメータ設定装置 | |
JP2006284543A (ja) | 実装回路基板検査方法および実装回路基板検査装置 | |
US20240096059A1 (en) | Method for classifying images and method for optically examining an object | |
Tuncalp et al. | Automated Image Processing System Design for Engineering Education: The Case of Automatic Inspection for Printed Circuit Boards | |
US20230005120A1 (en) | Computer and Visual Inspection Method | |
US20210055235A1 (en) | Method to Automatically Inspect Parts Using X-Rays | |
JP7345764B2 (ja) | 検査システムおよび検査プログラム | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21800256 Country of ref document: EP Kind code of ref document: A1 |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112022021095 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112022021095 Country of ref document: BR Kind code of ref document: A2 Effective date: 20221018 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21800256 Country of ref document: EP Kind code of ref document: A1 |