WO2013103032A1 - Inspection region setting method for an image inspection apparatus - Google Patents
- Publication number
- WO2013103032A1 (PCT/JP2012/071758)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- inspection
- inspection area
- image
- region
- user
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8806—Specially adapted optical and illumination features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/143—Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/174—Segmentation; Edge detection involving the use of two or more images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
Definitions
- The present invention relates to an image inspection apparatus that performs an appearance inspection using an image.
- Image inspection apparatuses that perform appearance inspections using images are in wide use.
- The basic configuration is to photograph the inspection object with an image sensor (camera), extract the portion that will serve as the inspection area from the obtained image, and perform the target inspection (for example, pass/fail determination, sorting, or information acquisition) by analyzing and evaluating the features of the image within the inspection area.
- As an inspection region extraction method, binarization and color gamut extraction are known: a pixel group corresponding to a preset luminance range or color gamut is extracted from the image and used as the inspection region.
- This technique is effective when the contrast in brightness or color between the portion to be extracted as the inspection area (the foreground) and the remainder (the background) is high; it is used, for example, to extract only the article from an image of articles conveyed on a belt conveyor. It solves, to some extent, the problems mentioned above of handling complicated shapes and simplifying the setting work, but the following problems remain.
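The binarization-based extraction described above can be sketched minimally as follows. The function name, image data, and thresholds are illustrative, not taken from the patent; the point is only that each pixel is classified independently by a preset luminance range.

```python
# Sketch of inspection-region extraction by simple binarization:
# pixels whose luminance falls in a preset range become the region.
# All names and thresholds here are illustrative.

def extract_region_by_binarization(image, lo, hi):
    """Return a binary mask (1 = inspection region) for a 2-D
    luminance image, keeping pixels with lo <= value <= hi."""
    return [[1 if lo <= px <= hi else 0 for px in row] for row in image]

image = [
    [ 10,  12, 200, 210],
    [ 11, 205, 220,  13],
    [ 12,  14,  15, 198],
]
mask = extract_region_by_binarization(image, 190, 255)
# Bright pixels (a hypothetical foreground part) are selected; this
# works only when the foreground/background contrast is high, which
# is exactly the limitation discussed next.
```

Because each pixel is judged in isolation, a single noisy pixel flips its own label, which illustrates the noise sensitivity noted below.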
- When the foreground portion to be extracted as the inspection area is shaded by the influence of lighting or the like, when the foreground consists of various brightnesses or colors, or when colors close to the foreground exist in the background, it is difficult to accurately extract only the foreground by binarization or color gamut extraction.
- Inspection content has also become more sophisticated and subdivided; for example, one may wish to perform surface inspection on only one cut surface of a molded part, or on only one component on a printed circuit board on which many components are mounted. In such cases there is often almost no color difference between the background and the foreground.
- Because binarization and color gamut extraction are performed pixel by pixel, they are easily affected by noise and lighting fluctuations, and erroneous pixel selection lowers the inspection accuracy.
- Patent Document 1 discloses, as methods for setting an inspection area, a method of setting the position and size of the inspection area from CAD data of the component to be inspected, and a method of recognizing the region to be inspected by taking the difference between two images captured before and after component mounting. These methods make it possible to set the inspection area automatically, but they are limited in the objects to which they apply and lack generality.
- The present invention has been made in view of the above circumstances, and its object is to provide a technique that makes it possible to set an inspection area simply and with high accuracy even when the object has a complicated or special shape, or when the foreground and background colors are similar.
- The gist of the invention is to set the inspection area automatically or semi-automatically.
- Specifically, the present invention is directed to a method of setting the inspection area for an image inspection apparatus that extracts the portion serving as the inspection area from an original image of the inspection object as an inspection area image and inspects the object by analyzing that image.
- The method comprises: an acquisition step in which a computer acquires a sample image of the inspection object; and an inspection region search step in which, for a plurality of candidate regions that are candidate solutions for the inspection region, the computer evaluates, based on the color or luminance information of each pixel in the sample image and the edge information contained in the sample image, both the pixel separation degree (the degree of color or luminance separation between the inside and outside of each candidate region) and the edge overlap degree (the degree of overlap between the contour of each candidate region and the edges in the sample image), thereby obtaining an optimal solution for the inspection region from among the plurality of candidate regions.
- Compared with the conventional approach of manually inputting the inspection area, the setting time and work load are greatly reduced, and the method can be applied to complex and special shapes.
- Because edge information is used in addition to color or luminance information, and both the pixel separation degree between the inside and outside of the inspection area and the edge overlap degree of its contour are evaluated comprehensively, the extraction accuracy of the region can be improved over conventional methods such as binarization and color gamut extraction.
- By looking at the inspection region displayed on the display device, the user can easily confirm whether the desired region has been selected. If the inspection area is not appropriate, the user can adjust the parameters and immediately confirm the recalculated result on the screen, making it easy to converge on the desired inspection area.
- It is preferable that a balance parameter for adjusting the balance between the pixel separation degree and the edge overlap degree be received from the user as one of the parameters, and that in the inspection region search step the weights used in evaluating the pixel separation degree and the edge overlap degree be adjusted according to the balance parameter input by the user.
- By making the balance parameter adjustable by the user in this way, a desired inspection area can be set easily and quickly even for images in which the foreground and background are difficult to separate automatically.
- In the inspection region search step, the pixel separation degree is preferably a value evaluating the foreground-likeness of the color or luminance of each pixel inside the candidate region with respect to the representative color or luminance of the foreground, a value evaluating the background-likeness of the color or luminance of each pixel outside the candidate region with respect to the representative color or luminance of the background, or a combination of both values.
- This greatly increases the possibility of reaching an appropriate solution. When calculating the foreground-likeness value, all pixels inside the candidate region may be used, or only some of them; similarly, when calculating the background-likeness value, all pixels outside the candidate region may be used, or only some of them.
- It is also preferable to adjust the weights used in evaluating the pixel separation degree and the edge overlap degree so that the weight of the pixel separation degree increases as the difference between the representative color or luminance of the foreground and that of the background increases, and the weight of the edge overlap degree increases as that difference decreases.
- In this configuration the balance is not adjusted by the user but is set automatically to an appropriate value, so that the possibility of reaching an appropriate solution is increased even without user assistance.
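The automatic adjustment just described can be sketched as a simple monotone mapping from the foreground/background representative-color distance to the edge-term weight. The patent gives no concrete formula, so the linear interpolation, parameter names, and ranges below are all assumptions for illustration only.

```python
import math

# Illustrative rule (not the patent's formula): map the distance between
# foreground and background representative colors to a weight `lam` for
# the edge-overlap term. Small color difference => rely more on edges
# (large lam); large difference => rely more on color (small lam).

def auto_balance(fg_color, bg_color, lam_min=0.1, lam_max=10.0, scale=100.0):
    diff = math.dist(fg_color, bg_color)      # Euclidean RGB distance
    t = min(diff / scale, 1.0)                # normalize to [0, 1]
    return lam_max + (lam_min - lam_max) * t  # interpolate between extremes

# Clearly distinct colors -> small lam (trust the color model).
lam_far = auto_balance((255, 0, 0), (0, 0, 255))
# Nearly identical colors -> large lam (trust the edges instead).
lam_near = auto_balance((120, 120, 120), (125, 122, 119))
```

Any monotone decreasing mapping would serve; what matters is the direction of the dependence stated in the text above.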
- If, in the parameter receiving step, the user inputs the representative color or luminance of the foreground and/or the background as one of the parameters, the possibility of reaching an appropriate solution is further increased.
- It is preferable to display the sample image on a display device, let the user designate a portion to be treated as foreground or background on the displayed sample image, and obtain the representative color or luminance from the color or luminance of the designated portion. With this configuration, the representative color or luminance can be specified easily and reliably.
- Parameters other than those described above may also be given, as long as they can affect the search for the optimal solution of the inspection region.
- For example, information indicating characteristics of the inspection area, such as its shape, size, position in the image, texture, topology, adjacent elements, and contained elements, may be given as parameters.
- The solution search may then be performed so that the similarity between the inspection region and the characteristics given by these parameters is also high. By imposing various characteristics of the inspection region as constraint conditions in this way, the possibility of reaching an appropriate solution is further increased.
- It is also preferable to have an inspection region correction step of displaying the inspection region obtained in the inspection region search step on the display device and correcting the shape of the inspection region in accordance with a correction instruction input by the user.
- Various operation systems for correcting the inspection area can be considered.
- For example, all or part of the contour of the inspection area may be approximated by a Bezier-curve or spline-curve path, and the path may be corrected by the user. In this way the outline of the inspection area can easily be corrected to a desired shape.
- Also preferable are: an operation system in which the user draws a free curve and the free curve is combined with the inspection region so that it becomes part of the region's outline; an operation system in which the user designates a section of the outline and the contour of that section is replaced with a straight line or an arc; and an operation system in which pixels designated by the user are added to or excluded from the inspection region.
- The present invention can also be regarded as an image inspection apparatus having at least one of the above means, or as an inspection-region setting apparatus for an image inspection apparatus having at least one of the above means related to inspection-region setting.
- The present invention can further be understood as an image inspection method or inspection-region setting method that executes at least one of the above processes, as a program that causes a computer to execute such a method, or as a storage medium storing that program.
- According to the present invention, the inspection area can be set easily and accurately even when the object has a complicated or special shape, or when the foreground and background colors are similar.
- FIG. 1 schematically shows the configuration of the image inspection apparatus.
- FIG. 4 is a flowchart showing the flow of processing for setting an inspection region using the setting tool 103.
- Embodiments described below relate to an image inspection apparatus that performs an appearance inspection using an image, and more particularly, to an inspection area setting apparatus that supports an operation of setting an inspection area for an image inspection apparatus.
- This image inspection apparatus is suitably used for applications in which a large number of articles are continuously or automatically inspected in an FA production line or the like.
- The type of article to be inspected is not limited, but the image inspection apparatus according to the present embodiment performs inspection by extracting a predetermined inspection area from the original image captured by the image sensor, and it is assumed that the position and shape of the inspection area are fixed.
- the inspection area setting device of the present embodiment can be suitably applied to any inspection.
- In the present embodiment, the inspection region setting device is implemented as one function (the setting tool) of the image inspection device, but the image inspection device and the inspection region setting device may be configured separately.
- FIG. 1 schematically shows the configuration of the image inspection apparatus.
- the image inspection apparatus 1 is a system that performs an appearance inspection of an inspection object 2 that is transported on a transport path.
- the image inspection apparatus 1 includes hardware such as an apparatus main body 10, an image sensor 11, a display device 12, a storage device 13, and an input device 14.
- the image sensor 11 is a device for taking a color or monochrome still image or moving image into the apparatus main body 10, and for example, a digital camera can be suitably used. However, when a special image (such as an X-ray image or a thermo image) other than a visible light image is used for inspection, a sensor that matches the image may be used.
- the display device 12 is a device for displaying an image captured by the image sensor 11, an inspection result, a GUI screen related to inspection processing and setting processing, and for example, a liquid crystal display can be used.
- the storage device 13 is a device that stores various setting information (inspection area definition information, inspection logic, etc.), inspection results, and the like that the image inspection apparatus 1 refers to in the inspection processing.
- An HDD, SSD, flash memory, network storage, or the like can be used.
- the input device 14 is a device that is operated by a user to input an instruction to the device main body 10.
- a mouse, a keyboard, a touch panel, a dedicated console, or the like can be used.
- the apparatus main body 10 can be configured as a computer including a CPU (Central Processing Unit), a main storage (RAM), and an auxiliary storage (ROM, HDD, SSD, etc.) as hardware.
- The apparatus main body 10 includes an inspection processing unit 101, an inspection region extraction unit 102, and a setting tool 103.
- The inspection processing unit 101 and the inspection region extraction unit 102 are functions related to the inspection processing, while the setting tool 103 supports the user's work of setting the information needed for the inspection processing. These functions are realized by loading a computer program stored in the auxiliary storage device or the storage device 13 into the main storage device and executing it on the CPU.
- the apparatus main body 10 may be configured by a computer such as a personal computer or a slate type terminal, or may be configured by a dedicated chip or an on-board computer.
- FIG. 2 is a flowchart showing a flow of the inspection process
- FIG. 3 is a diagram for explaining an inspection region extraction process in the inspection process.
- The flow of the inspection process will be described taking as an example the inspection (detection of scratches and color unevenness) of the panel surface of a mobile-phone casing component.
- step S20 the inspection object 2 is photographed by the image sensor 11, and the image data is taken into the apparatus main body 10.
- the captured image (original image) is displayed on the display device 12 as necessary.
- the upper part of FIG. 3 shows an example of the original image.
- The casing component 2 to be inspected is shown in the center of the original image, and parts of adjacent casing components on the conveyance path are shown on its left and right.
- the inspection area extraction unit 102 reads necessary setting information from the storage device 13.
- the setting information includes at least inspection area definition information and inspection logic.
- the inspection area definition information is information that defines the position / shape of the inspection area to be extracted from the original image.
- the format of the inspection area definition information is arbitrary. For example, a bit mask in which the label is changed between the inside and outside of the inspection area, vector data in which the outline of the inspection area is expressed by a Bezier curve or a spline curve, or the like can be used.
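The bit-mask form of the inspection area definition information can be sketched as follows: a mask labels each pixel inside (1) or outside (0) the region, and extraction keeps the inside pixels while blanking the rest. The function name and toy data are illustrative, not from the patent.

```python
# Sketch of applying bit-mask inspection-area definition information
# to an original image, as in the extraction of step S22.

def apply_region_mask(image, mask, blank=0):
    """Keep pixels where mask == 1; replace the rest with `blank`."""
    return [
        [px if m == 1 else blank for px, m in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

original = [[9, 8, 7], [6, 5, 4]]
region   = [[0, 1, 1], [0, 1, 0]]   # 1 = inside the inspection area
inspection_image = apply_region_mask(original, region)
```

The vector (Bezier/spline) representation mentioned above would be rasterized to such a mask before extraction.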
- the inspection logic is information that defines the content of the inspection process, and includes, for example, the type of feature amount used for inspection, the determination method, the parameter used in feature amount extraction and determination processing, the threshold value, and the like.
- the inspection region extraction unit 102 extracts a portion to be an inspection region from the original image according to the inspection region definition information.
- the middle part of FIG. 3 shows a state in which the inspection area (indicated by cross hatching) 30 defined by the inspection area definition information is superimposed on the original image. It can be seen that the inspection region 30 just overlaps the panel surface of the casing component 2.
- the lower part of FIG. 3 shows a state in which an image of the portion of the inspection region 30 (inspection region image 31) is extracted from the original image.
- In the inspection area image 31, the conveyance path and the adjacent components that appeared around the casing component 2 have been removed, as have the hinge portion 20 and the button portion 21, which are excluded from the surface inspection.
- the inspection area image 31 obtained in this way is delivered to the inspection processing unit 101.
- step S23 the inspection processing unit 101 extracts a necessary feature amount from the inspection region image 31 according to the inspection logic.
- the color of each pixel of the inspection region image 31 and its average value are extracted as the feature amount for inspecting the surface for scratches and color unevenness.
- In step S24, the inspection processing unit 101 determines the presence or absence of scratches or color unevenness according to the inspection logic. For example, a pixel group whose color difference from the average value obtained in step S23 exceeds a threshold can be determined to be a scratch or color unevenness.
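The judgment in steps S23 to S24 can be sketched on a 1-D list of luminance values: compute the mean over the inspection-region image, then flag pixels whose deviation from that mean exceeds a threshold. The threshold and data are illustrative.

```python
# Sketch of the scratch / color-unevenness judgment: flag pixels whose
# deviation from the region's mean value exceeds a threshold.

def find_defect_pixels(region_pixels, threshold):
    mean = sum(region_pixels) / len(region_pixels)
    return [i for i, v in enumerate(region_pixels)
            if abs(v - mean) > threshold]

# Mostly uniform panel with one dark pixel (a hypothetical scratch).
pixels = [200, 201, 199, 202, 120, 200, 198]
defects = find_defect_pixels(pixels, threshold=40)
```

This also makes the point of the following paragraph concrete: any background pixels left inside the region would distort the mean and trigger false detections.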
- step S25 the inspection processing unit 101 displays the inspection result on the display device 12 or records it in the storage device 13.
- the inspection process for one inspection object 2 is completed.
- the processing of steps S20 to S25 in FIG. 2 is repeated in synchronization with the timing at which the inspection object 2 is conveyed within the angle of view of the image sensor 11.
- If the inspection area image 31 includes background or extraneous portions (in the example of FIG. 3, the hinge portion 20 or the button portion 21), those pixels become noise and can reduce the inspection accuracy; conversely, if the inspection area image 31 is smaller than the range to be inspected, inspection omissions may occur. Therefore, the image inspection apparatus 1 of the present embodiment provides the setting tool 103 for easily creating inspection area definition information that cuts out an accurate inspection area image.
- FIG. 4 is a flowchart showing a flow of processing for setting an inspection region using the setting tool 103
- FIG. 5 is a diagram showing an example of an inspection region setting screen.
- FIG. 5 shows the setting screen displayed by the setting tool 103. This setting screen provides an image window 50, an image capture button 51, a foreground designation button 52, a background designation button 53, a priority adjustment slider 54, and a confirm button 55. Operations such as button selection and slider movement are performed with the input device 14. This screen is merely an example; any UI may be used as long as the parameter input and inspection-region confirmation described below can be performed.
- the setting tool 103 captures a sample of the inspection object by the image sensor 11 (step S40).
- As the sample, a non-defective inspection object (in the above example, a casing component) is used, and imaging is preferably performed under the same conditions (relative position of the image sensor 11 and the sample, illumination, etc.) as in the actual inspection process.
- the obtained sample image data is taken into the apparatus main body 10.
- Alternatively, the setting tool 103 may read sample image data from the auxiliary storage device or the storage device 13.
- the sample image acquired in step S40 is displayed in the image window 50 of the setting screen as shown in FIG. 5 (step S41).
- step S42 the user inputs foreground and background representative colors (representative brightness in the case of a monochrome image).
- the foreground refers to a portion to be extracted as an inspection region
- the background refers to a portion other than the inspection region.
- the user presses the foreground designation button 52 on the setting screen to enter the foreground designation mode, and then designates a portion to be the foreground on the sample image displayed in the image window 50. Since the designation here is for the purpose of picking up a representative color of the foreground, in the example of FIG. 5, a part of pixels or a group of pixels on the panel surface of the casing component may be appropriately selected.
- If the foreground contains patterns, shadows, or portions of greatly differing color, it is advisable to select pixels from each such portion.
- Likewise, the background designation button 53 is pressed to switch to the background designation mode when specifying background colors. Note that inputting both foreground and background representative colors is not essential: only one of them may be input, and when the representative colors are known in advance or can be computed automatically from the color distribution of the sample image, step S42 may be omitted.
- In step S43, based on the foreground/background representative colors specified in step S42, the setting tool 103 separates (segments) the sample image into foreground and background and selects the foreground portion as the inspection region.
- The edge information contained in the sample image is also used: for a plurality of candidate regions that are candidate solutions for the inspection region, both the degree of color separation between foreground and background (that is, between the inside and outside of the candidate region; called the pixel separation degree) and the degree of overlap between the foreground/background boundary (that is, the contour of the candidate region) and the edges in the sample image (called the edge overlap degree) are evaluated comprehensively, and an optimal solution that increases both is searched for. The detailed calculation method is described later.
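At region level, the search in step S43 amounts to scoring each candidate region on (a) how well inside/outside colors match the foreground/background representative colors and (b) how well the region boundary coincides with image edges. The 1-D toy data, weight, and exhaustive comparison below are illustrative assumptions; the patent formulates the real search as an energy minimization, described later.

```python
# Region-level sketch of candidate evaluation: each candidate is a
# binary mask over a 1-D pixel row (for brevity).

def score(pixels, edges, mask, fg, bg, w_edge):
    # (a) pixel separation: inside pixels near fg, outside near bg
    sep = -sum(abs(p - fg) if m else abs(p - bg)
               for p, m in zip(pixels, mask))
    # (b) edge overlap: boundary positions where the label changes
    boundary = {i for i in range(1, len(mask)) if mask[i] != mask[i - 1]}
    overlap = sum(1 for i in boundary if i in edges)
    return sep + w_edge * overlap

pixels = [10, 12, 11, 200, 205, 198]   # dark background, bright part
edges  = {3}                           # strong edge between index 2 and 3
candidates = [
    [0, 0, 0, 1, 1, 1],                # boundary exactly at the edge
    [0, 0, 1, 1, 1, 1],                # boundary one pixel too early
]
best = max(candidates,
           key=lambda m: score(pixels, edges, m, fg=200, bg=11, w_edge=50))
```

Enumerating candidates explicitly is only feasible for toy inputs; the graph-cut formulation below searches the same trade-off efficiently.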
- step S44 the inspection area calculated in step S43 is displayed on the image window 50 of the setting screen.
- the user can confirm whether or not a desired area is selected as the inspection area by looking at the inspection area displayed on the setting screen. At this time, it is preferable to overlay and display the inspection area on the sample image, because comparison between the inspection object and the inspection area becomes easy.
- the setting tool 103 waits for input from the user (step S45).
- the confirm button 55 is pressed, the setting tool 103 generates inspection area definition information for the current inspection area and stores it in the storage device 13 (step S46).
- If the inspection area is not appropriate, the user can adjust the parameters by operating the foreground designation button 52, the background designation button 53, and the priority adjustment slider 54 (step S47). Re-designating the representative color of the foreground or background affects the evaluation of the pixel separation degree described above, and changing the priority between color information and edge information with the priority adjustment slider 54 changes the balance (weight) used in evaluating the pixel separation degree and the edge overlap degree.
- When the setting tool 103 receives a parameter input (change) from the user, it recalculates the optimal solution for the inspection region using the new parameters as constraint conditions and displays the recalculated inspection region on the screen (steps S47 → S43, S44). This function makes it possible to repeat the calculation, adjusting the parameters as appropriate, until the desired result is obtained.
- FIG. 6 shows an example of the process of driving the inspection area by parameter adjustment.
- the inspection area 30 obtained by the first calculation is shown in the upper part.
- The hinge portion 20 and the button portion 21 of the casing component are also included in the inspection region 30, but since the purpose here is to inspect the panel surface for scratches and color unevenness, the hinge portion 20 and the button portion 21 should be excluded from the inspection region (see FIG. 3). The user therefore first presses the background designation button 53 to switch to the background designation mode and additionally designates the color of the button portion 21 in the sample image as a representative color of the background. As a result, the button portion 21 is removed from the inspection region 30, as in the image example shown in the middle.
- The hinge portion 20, whose color differs little from the panel surface, is dealt with by adjusting the balance parameter: focusing on the edge produced at the step between the hinge portion 20 and the panel surface, the user raises the priority of the edge information with the priority adjustment slider 54. As a result, as in the image example shown in the lower part, the contour of the inspection region 30 snaps to the edge between the hinge portion 20 and the panel surface, yielding the desired inspection region 30.
- As described above, the setting tool 103 obtains the optimal solution for the inspection region by comprehensively evaluating both the pixel separation degree between foreground and background and the edge overlap degree of the foreground/background boundary.
- This calculation can be regarded as an optimization problem that minimizes (or maximizes) an objective function comprising a function that evaluates the pixel separation degree based on color information and a function that evaluates the edge overlap degree based on edge information.
- Hereinafter, a method for solving this inspection-region optimization problem with the graph cut algorithm is described. Since the graph cut algorithm is a known technique (see Non-Patent Document 1), the description of its basic concept is omitted in this specification, and the following description focuses on the parts specific to this embodiment.
- As the objective function, an energy function of the following form is defined, and the solution L that minimizes the energy E for a given image I is obtained: E(L) = Σ_{i∈Ω} U(l_i, I) + λ Σ_{(i,j)∈N} V(l_i, l_j, I).
- I is a sample image
- L is a label (that is, an inspection region) indicating whether it is foreground or background.
- i and j are pixel indexes.
- Ω is the set of pixels in the image I.
- N is the set of adjacent pixel pairs in the image I.
- l_i and l_j are the labels assigned to pixels i and j, respectively; label 1 denotes the foreground and label 0 the background.
- the first term on the right side is called a data term and gives a constraint condition for the target pixel i.
- the second term on the right-hand side is called a smoothing term and gives a constraint on the pixels i and j adjacent to each other.
- λ is a balance parameter that determines the weight (balance) of the data term and the smoothing term.
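Putting the definitions above together, the energy can be sketched numerically. The following is an illustrative implementation only (all function and parameter names are ours, not the patent's), with the per-pixel data costs and the pairwise weights assumed to be precomputed:

```python
import numpy as np

def energy(labels, cost_fg, cost_bg, weight_h, weight_v, lam):
    """E(L) = sum_i U(l_i, I) + lam * sum_{(i,j) in N} V(l_i, l_j, I).

    labels            : (H, W) array of 0/1 labels (1 = foreground)
    cost_fg, cost_bg  : (H, W) data costs for labelling a pixel fg / bg
    weight_h          : (H, W-1) pairwise weights for horizontal neighbours
    weight_v          : (H-1, W) pairwise weights for vertical neighbours
    lam               : balance parameter weighting the smoothing term
    """
    # Data term: each pixel pays the cost of its assigned label.
    data = np.where(labels == 1, cost_fg, cost_bg).sum()
    # Smoothing term: a pairwise weight is paid only where adjacent
    # labels differ, i.e. along the foreground/background boundary.
    cut_h = ((labels[:, :-1] != labels[:, 1:]) * weight_h).sum()
    cut_v = ((labels[:-1, :] != labels[1:, :]) * weight_v).sum()
    return float(data + lam * (cut_h + cut_v))
```

The graph cut algorithm then searches over all labelings L for the one minimizing this value.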
- the data term is defined by a function that evaluates pixel separation based on the color information described above.
- the evaluation function U of the data term may be defined by the following formula.
- as the probability density function for the foreground likelihood, one estimated from the foreground representative color is used (for example, the color distribution of the foreground representative color approximated by a Gaussian mixture model).
- likewise, as the probability density function for the background likelihood, one estimated from the background representative color is used (for example, the color distribution of the background representative color approximated by a Gaussian mixture model). That is, the data term is the sum of the foreground likelihood over the foreground pixels and the background likelihood over the background pixels. The closer the colors of the foreground pixels are to the foreground representative color, and the closer the colors of the background pixels are to the background representative color, the lower the energy; conversely, the energy increases as the foreground pixel colors move away from the foreground representative color and as the background pixel colors move away from the background representative color.
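As a concrete sketch, the data term can be written as a sum of negative log-likelihoods. Here a single Gaussian per class stands in for the Gaussian mixture model mentioned above, purely to keep the illustration short; the function names and the variance value are our assumptions:

```python
import numpy as np

def neg_log_gauss(pixels, mean, var):
    """Per-pixel -log N(pixel | mean, var*I) with a diagonal covariance."""
    d = np.asarray(pixels, float) - np.asarray(mean, float)
    return 0.5 * np.sum(d * d / var + np.log(2 * np.pi * var), axis=-1)

def data_term(image, labels, fg_mean, bg_mean, var=25.0):
    """Sum of foreground costs over foreground pixels and background costs
    over background pixels: low when fg pixels resemble the foreground
    representative color and bg pixels resemble the background one."""
    cost_fg = neg_log_gauss(image, fg_mean, var)
    cost_bg = neg_log_gauss(image, bg_mean, var)
    return float(np.where(labels == 1, cost_fg, cost_bg).sum())
```

A labeling that assigns near-foreground-colored pixels to the foreground yields a lower data term than one that swaps the assignment.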
- the smoothing term is defined by a function that evaluates the degree of edge overlap based on the edge information described above.
- the evaluation function V of the smoothing term can be defined by the following equation.
- Ii and Ij are the pixel values (color or luminance) of the pixels i and j, respectively, and β is a coefficient.
- ∥Ii − Ij∥² represents the difference (distance) between the pixel values in a predetermined color space, that is, the contrast between the pixels.
- for a pixel pair whose labels differ, the energy is high when the contrast between pixels i and j is low, and low when the contrast is high.
- a portion where the contrast between adjacent pixels is high is a portion where the color or luminance of the image changes sharply, that is, an edge in the image. Thus, under the above formula, the energy decreases as the boundary between the foreground and the background (the pixel pairs with differing labels) coincides with edges in the image.
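One common form of such an edge-sensitive smoothing weight, consistent with the definitions of β and ∥Ii − Ij∥² above, is exp(−β∥Ii − Ij∥²). We show it as an assumption about the shape of the term, not as the patent's exact equation:

```python
import math

def pairwise_weight(a, b, beta=1.0):
    """exp(-beta * ||a - b||^2): near 1 for low-contrast neighbours
    (cutting between them is expensive) and near 0 across a strong edge
    (cutting there is cheap), so the optimal boundary follows edges."""
    sq = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-beta * sq)
```

The weight is paid only when the two pixels receive different labels, which is why boundaries are pulled onto high-contrast pixel pairs.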
- the energy function described above has a global minimum solution when a certain mathematical condition (submodularity) is satisfied; likewise, a global minimum can still be obtained when constraint terms are added, as long as those terms also satisfy submodularity. A known search algorithm may be used to compute the global minimum efficiently, so a detailed description is omitted here.
- the "foreground representative color" and "background representative color" affect the value of the data term.
- the "priority of color information and edge information" corresponds to the balance parameter λ. That is, if the user raises the priority of color information, the value of λ is decreased so that the weight of the data term increases; if the user raises the priority of edge information, the value of λ is increased so that the weight of the smoothing term increases. Note that the value of λ can also be determined automatically by the computer (setting tool 103).
- the setting tool 103 calculates the difference between the foreground representative color and the background representative color; when the difference is large, the value of λ is decreased and the weight of the data term is increased, because a clear difference between foreground and background colors suggests that the data term is reliable. Conversely, when the difference is small, λ is increased and the weight of the smoothing term is increased, because when the foreground and background colors are not clearly separated, region segmentation based on edge information tends to give better results than segmentation based on color information.
- the initial value of the balance parameter λ is determined automatically by the above method, and the user then adjusts λ (the priority of color information versus edge information) starting from that initial value. The more valid the initial value, the fewer trial-and-error iterations the user needs, which reduces the workload of parameter adjustment.
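The automatic initial value can be sketched as a simple monotone mapping from the foreground/background color difference to λ. The exact mapping below (exponential decay with a scale constant) is our assumption; the patent only specifies the direction of the relationship:

```python
import math

def initial_lambda(fg_color, bg_color, lam_min=0.1, lam_max=10.0, scale=100.0):
    """Heuristic initial balance parameter: a large foreground/background
    colour difference means the data term is trustworthy, so lambda is
    small; similar colours push lambda toward lam_max (edge-driven)."""
    dist = math.dist(fg_color, bg_color)
    t = math.exp(-dist / scale)          # 1 when identical, -> 0 when far
    return lam_min + (lam_max - lam_min) * t
```

With clearly distinct representative colors the returned λ is small (color-driven segmentation); with similar colors it is large (edge-driven segmentation).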
- in the above example, the sum of the foreground likelihood of the foreground pixels and the background likelihood of the background pixels is used as the data term.
- instead, a product of foreground likelihood and background likelihood, a weighted sum, a weighted product, a sum or product of nonlinear functions of them, or the like may be used.
- a monotonically increasing function may be used as the nonlinear function.
- only the foreground or background evaluation can be used as the pixel separation degree.
- for example, in place of Equation (2), Equation (4), which is a function that evaluates only the foreground likelihood, may be used.
- foreground-likeness calculation may use all foreground pixels (that is, all pixels inside the candidate area) or only some foreground pixels.
- similarly, the background likelihood may be calculated using all background pixels (that is, all pixels outside the candidate area) or only some of the background pixels.
- the calculation time can be shortened by excluding the pixels for which the label is determined from the calculation, or by using only the pixels within a predetermined distance from the outline of the candidate area for the calculation.
- the function for evaluating the foreground quality or the background quality is not limited to the expression (2).
- a likelihood ratio that is a ratio between the foreground likelihood and the background likelihood can be used as in the following equation.
- alternatively, the histogram of the pixel group designated by the user as the foreground representative color may be used directly (without estimating a probability density function): the foreground likelihood is evaluated as the similarity of each pixel's color to this foreground representative color histogram, or conversely, the background likelihood is evaluated as the dissimilarity of each pixel's color to that histogram.
- likewise, the background likelihood may be evaluated as the similarity to the histogram of the pixel group designated by the user as the background representative color (the background representative color histogram), or the foreground likelihood as the dissimilarity from the background representative color histogram.
- the similarity or dissimilarity between the foreground histogram obtained from the foreground pixels of a candidate area (or the background histogram obtained from its background pixels) and the foreground or background representative color histogram may be calculated using a predetermined function or distance measure.
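The histogram-based variant can be sketched directly: the user-designated pixels are binned, and a pixel's foreground likelihood is read off the normalized histogram. The bin count and function names below are our choices:

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalised bins-per-channel histogram of an (N, 3) array of
    0-255 colour values, built directly from user-designated pixels."""
    idx = np.asarray(pixels) * bins // 256
    hist = np.zeros((bins, bins, bins))
    for r, g, b in idx.astype(int):
        hist[r, g, b] += 1
    return hist / hist.sum()

def foreground_likelihood(pixel, fg_hist, bins=8):
    """Likelihood of one pixel = mass of its colour bin in the histogram."""
    r, g, b = (np.asarray(pixel) * bins // 256).astype(int)
    return float(fg_hist[r, g, b])
```

A pixel whose color falls in a well-populated bin of the representative-color histogram gets a high likelihood; a color never seen among the designated pixels gets zero.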
- three parameters have been described so far: the foreground representative color, the background representative color, and the priority between color information and edge information. Besides these, however, any parameter may be used as long as it can influence the search for the optimal inspection region.
- since the objects of visual inspection are mainly industrial products, there are many cases in which the shape, texture, or topology of the portion serving as the inspection region, the elements adjacent to the inspection region, the elements contained in it, and so on are characteristic.
- moreover, since the image sensor is installed so that the inspection object fits neatly within the angle of view, the size of the portion serving as the inspection region and its position in the image can be predicted to some extent. Therefore, by having the user input information expressing such features of the inspection region as parameters, and by adding to the objective function constraint conditions that evaluate the degree of similarity between the features given by those parameters and the features of a candidate region, the possibility that a valid inspection region is found can be further increased.
- as shape information representing features of the inspection region's shape, the basic shape of the inspection region (circle, quadrilateral, triangle, star, etc.) and features of its outline (straight outline, rounded outline, jagged outline, etc.) can be used.
- as a UI for inputting shape information, basic shape templates and outline features may be displayed in a list, and the user may select the applicable one from the list.
- when a basic shape template is designated, for example, the following expression may be inserted as a constraint condition.
- li is a designated label of pixel i
- ti is a label of a point corresponding to pixel i on the template.
- T () represents affine transformation.
- the above expression represents an operation of performing template matching on the candidate region while enlarging / reducing, rotating, and deforming the designated template, and calculating the minimum score. That is, by adding this constraint condition, the energy of the region having a shape close to the basic shape designated by the user is reduced, and the optimum solution is preferentially selected.
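The matching score inside the minimum can be sketched for a single template placement. The affine search over T() (scaling, rotation, deformation) is omitted here, and the function name is ours:

```python
import numpy as np

def template_mismatch(labels, template):
    """sum_i |l_i - t_i| for one placement of the template: 0 when the
    candidate region matches the template exactly.  The constraint in
    the text takes the minimum of this score over affine transforms
    of the designated template."""
    a = np.asarray(labels, int)
    t = np.asarray(template, int)
    return int(np.abs(a - t).sum())
```

Adding the (minimized) mismatch to the energy makes regions shaped like the designated template cheaper, so they are preferred as the optimal solution.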
- when jaggedness or smoothness is designated as the contour feature, the following expression may be inserted as a constraint condition.
- S is a point on the contour of the foreground region
- θ is the gradient angle of the contour
- ∂θ/∂S represents the change of the gradient angle along the contour of the foreground region
- C is a constant indicating the jaggedness (smoothness) specified by the user, and the greater the jaggedness, the larger C, and the smoother, the smaller C.
- the above equation is a function for evaluating whether or not the total value of the change amount of the gradient angle of the contour of the foreground region (representing the jaggedness of the contour) is close to the value C (representing the specified jaggedness). That is, by adding this constraint condition, a region having a contour feature close to the jaggedness specified by the user is preferentially selected as the optimum solution.
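The total gradient-angle change Σ|∂θ/∂S| can be sketched discretely on a closed polygonal contour; the discretization below is our own illustration:

```python
import math

def jaggedness(contour):
    """Total absolute turning of the tangent angle along a closed polygon
    given as a list of (x, y) points; a jagged contour turns more."""
    total = 0.0
    n = len(contour)
    for k in range(n):
        x0, y0 = contour[k]
        x1, y1 = contour[(k + 1) % n]
        x2, y2 = contour[(k + 2) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = abs(a2 - a1)
        total += min(d, 2 * math.pi - d)   # wrap the angle difference
    return total
```

The constraint term would then penalize |jaggedness(contour) − C|, so contours whose total turning matches the user-designated degree C are preferred.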
- as size information representing features of the inspection region's size, the area of the inspection region, its length and width, and the like can be used.
- the following expression may be inserted as a constraint condition.
- C is the area (number of pixels) of the foreground region designated by the user. Since the foreground label is 1 and the background label is 0, Σli represents the total number of foreground pixels, that is, the area of the foreground region. Therefore, the above expression is a function that evaluates whether the area of the foreground region is close to the designated area C.
- a region having a size close to the user-specified area is preferentially selected as the optimum solution.
- as position information representing features of the inspection region's position in the image, the barycentric coordinates of the inspection region, the possible location of the inspection region (top, bottom, right, left, center, ...), and the like can be used.
- when barycentric coordinates are input as the position information, for example, the following expression may be inserted as a constraint condition.
- w is the barycentric coordinate of the foreground area
- C is the barycentric coordinate specified by the user.
- the above expression is a function for evaluating whether or not the barycentric coordinates of the foreground region are close to the designated coordinates C.
- a region having a center of gravity at a position close to user-specified coordinates is preferentially selected as the optimum solution.
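Both the size and position constraints reduce to simple penalty terms added to the energy; a minimal sketch (function names are ours):

```python
import numpy as np

def area_penalty(labels, target_area):
    """|sum_i l_i - C|: the 0/1 label sum is the foreground pixel count."""
    return abs(int(np.asarray(labels).sum()) - target_area)

def centroid_penalty(labels, target_xy):
    """Distance between the foreground centroid w and the designated C."""
    ys, xs = np.nonzero(np.asarray(labels))
    w = np.array([xs.mean(), ys.mean()])
    return float(np.linalg.norm(w - np.asarray(target_xy, float)))
```

Each penalty is zero when the candidate region exactly meets the user-designated area or centroid, so such regions are energetically preferred.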
- as texture information representing features of the inspection region's texture, information representing the pattern, color shading, unevenness, material, and so on within the inspection region can be used.
- various texture templates may be displayed in a list and the user may select a corresponding one from the list.
- the following expression may be inserted as a constraint condition.
- I is a sample image
- E is a texture template designated by the user.
- h_{l=1}() represents the color histogram of the foreground pixels
- f () is a function indicating the similarity of the histograms.
- the above expression is a function that evaluates whether the color histogram of the foreground region in the sample image is similar to the color histogram of the designated texture. By adding this constraint condition, a region whose texture is similar to the user-designated texture is preferentially selected as the optimal solution.
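The similarity function f() is not pinned down here; histogram intersection is one common choice for comparing normalized histograms, shown purely as an assumption:

```python
import numpy as np

def histogram_similarity(h1, h2):
    """Histogram intersection: sums the overlapping mass of two
    normalised histograms; 1.0 for identical, 0.0 for disjoint."""
    return float(np.minimum(h1, h2).sum())
```

The texture constraint would then reward candidate regions whose foreground color histogram has high intersection with the designated texture template's histogram.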
- the setting tool 103 allows the user to choose freely on the setting screen whether color information or edge information is given priority. For example, for an image containing many pseudo-contours, such as a patterned foreground or background, better results are likely if color and luminance information is prioritized over edge information; for an image in which foreground and background colors are similar, better results are likely if edge information is prioritized. For such images, in which the foreground and background are hard to separate, it is very difficult to reach the correct answer fully automatically.
- as examples of the inspection area correction functions provided by the setting tool of this embodiment, (1) the contour correction tool, (2) the contour drawing tool, (3) the arc conversion tool, (4) the straight-line conversion tool, and (5) the draw tool are described below. These tools may be activated, for example, from the setting screen of FIG.
- the configuration of the image inspection apparatus, the operation of the inspection process, the operation of the automatic calculation (optimization) of the inspection area, and the like are the same as those in the first embodiment, and thus description thereof is omitted.
- FIG. 7 is a diagram for explaining an operation example of the contour correction tool.
- (A) shows an image of the inspection object (sample) 70
- (b) shows an automatic calculation result of the inspection area 71. It is assumed that there is a deviation as shown in the figure between the contour of the inspection object 70 and the contour of the inspection region 71.
- when the user activates the contour correction tool, it first approximates the contour of the inspection area 71 with a path 72 consisting of a Bezier curve or spline curve, and displays the path 72 together with its control points 73 on the screen. The path 72 and the control points 73 are overlaid on the image of the inspection object 70.
- the user can freely correct the shape of the path 72 by correcting, adding, or deleting the control point 73 using the input device 14.
- the result corrected by the user is immediately reflected in the screen display. Therefore, the user can easily adjust the shape of the path 72 to the contour of the inspection area 71 while confirming on the screen.
- (D) shows the path 72 after correction.
- the contour correction tool converts the area surrounded by the path 72 into the inspection area 71. Thereby, the inspection area 71 having the shape intended by the user is obtained.
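A cubic Bezier segment of the kind that can make up the path 72 is sampled from its four control points; moving a control point and resampling is what makes the correction interactive. This is a generic sketch, not the tool's actual code:

```python
import numpy as np

def bezier(p0, p1, p2, p3, n=50):
    """Sample n points on the cubic Bezier segment with control points
    p0..p3 (p0 and p3 are the endpoints, p1 and p2 shape the curve)."""
    p0, p1, p2, p3 = (np.asarray(p, float) for p in (p0, p1, p2, p3))
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
```

A closed path is a chain of such segments; the region enclosed by the sampled polyline becomes the corrected inspection area.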
- FIG. 8 is a diagram for explaining an operation example of the outline drawing tool.
- (A) is an enlarged view of an image of the inspection object 70 and a part of the automatically calculated inspection area 71. Assume that the contour of the inspection object 70 and the contour of the inspection region 71 are shifted as shown in the figure.
- when the contour drawing tool is activated, the mode switches to contour drawing mode, and a free curve 74 can be drawn on the image using the input device 14.
- for example, when a mouse is used as the input device 14, the trajectory of the mouse cursor between pressing and releasing the mouse button is drawn as the free curve 74. If drawing the free curve 74 fails, the user simply exits contour drawing mode and starts again from the beginning.
- the contour drawing tool then combines the free curve 74 with the inspection region 71 so that the free curve 74 becomes part of the contour of the inspection region 71. (c) shows the inspection region 71 after synthesis.
- a method of combining the free curve 74 and the inspection area 71 is arbitrary. For example, a smoothing process may be applied to smooth the connecting portion between the free curve 74 and the contour of the inspection region 71 and the shape of the free curve 74.
- the free curve 74 and the contour of the inspection region 71 may be joined at their closest points, or interpolation may be performed so that the free curve 74 and the contour of the inspection region 71 connect smoothly.
- FIG. 9 is a diagram for explaining an operation example of the arc conversion tool.
- (A) is an enlarged view of an image of the inspection object 70 and a part of the automatically calculated inspection area 71. Assume that the contour of the inspection object 70 and the contour of the inspection region 71 are shifted as shown in the figure.
- the input device 14 can be used to input an arc on the image.
- specifically, as shown in (b), the user moves the mouse cursor and clicks at three locations: two points (1, 2) on the contour of the inspection area 71 and one passing point (3) of the arc.
- the points 1 and 2 are set as the start point and end point of the arc, and the arc passing through the point 3 is calculated and displayed as an overlay on the image.
- if the shape of the arc differs from the intended shape, the position of each point may be corrected, or the user may temporarily exit arc input mode and repeat the operation from the beginning.
- here the arc is designated by three points (the arc's start point, end point, and a passing point), but the arc may of course be input by other designation methods.
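The arc through the three clicked points is determined by their circumcircle. The construction below is standard geometry (not code from the patent):

```python
def circle_through(p1, p2, p3):
    """Centre and radius of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((x1 - ux) ** 2 + (y1 - uy) ** 2) ** 0.5
    return (ux, uy), r
```

The arc drawn on screen is the portion of this circle running from the start point (1) to the end point (2) through the passing point (3).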
- when the arc is confirmed, the arc conversion tool replaces the section of the contour of the inspection area 71 between the start point (1) and the end point (2) with the arc.
- the arc and the contour of the inspection region 71 may be joined at their closest points, or interpolation may be performed so that the arc and the contour of the inspection region 71 connect smoothly.
- FIG. 10 is a diagram for explaining an operation example of the line conversion tool.
- (A) is an enlarged view of an image of the inspection object 70 and a part of the automatically calculated inspection area 71. Assume that the contour of the inspection object 70 and the contour of the inspection region 71 are shifted as shown in the figure.
- the input device 14 can be used to input a straight line segment on the image.
- specifically, as shown in the figure, the user moves the mouse cursor and clicks at two points (1, 2) on the contour of the inspection area 71. A line segment with points 1 and 2 as its start and end points is then calculated and displayed as an overlay on the image. If the shape of the line segment differs from the intended shape, the position of each point may be corrected, or the user may exit straight-line input mode and restart the operation from the beginning.
- a straight line is designated by two points of a start point and an end point, but it is of course possible to input a straight line by other designation methods.
- when the line segment is confirmed, the straight-line conversion tool replaces the section of the contour of the inspection area 71 between the start point (1) and the end point (2) with the line segment.
- the line segment and the contour of the inspection region 71 may be joined at their closest points, or interpolation may be performed so that the line segment and the contour of the inspection region 71 connect smoothly.
- FIG. 11 is a diagram illustrating an operation example of the draw tool.
- (A) is an enlarged view of the image of the inspection object 70 and part of the automatically calculated inspection area 71. Since the draw tool corrects the inspection area 71 in units of pixels, FIG. 11 shows a pixel grid for convenience of explanation. The contour of the inspection object 70 and the contour of the inspection area 71 are shifted as shown in the figure: it is assumed that the inspection area 71 is too small at the top of the figure and too large on the right side.
- when the draw tool is activated, the mode switches to drawing mode, and the input device 14 can be used to specify, on the image, pixels to be added to the inspection area 71 and pixels to be deleted from it.
- (B) shows pixels being added to the inspection area. For example, when a mouse is used as the input device 14, the area (pixel group) 75 to be added to the inspection area 71 can be specified by selecting the pixels one by one or by moving the mouse cursor while holding a predetermined button.
- (c) shows a state in which pixels are deleted from the inspection area.
- the designation of the area (pixel group) 76 to be deleted from the inspection area 71 can also be performed in the same manner as the addition.
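Pixel-level addition and deletion amount to editing a boolean region mask; a minimal sketch (function and parameter names are ours):

```python
import numpy as np

def apply_strokes(mask, add_pixels=(), remove_pixels=()):
    """Return a copy of a boolean inspection-region mask with the given
    (row, col) pixels added to or removed from the region."""
    out = np.asarray(mask, bool).copy()
    for y, x in add_pixels:
        out[y, x] = True
    for y, x in remove_pixels:
        out[y, x] = False
    return out
```

The mouse stroke simply accumulates the visited grid cells into `add_pixels` or `remove_pixels` before the mask is updated.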
- according to the configuration of the present embodiment described above, providing functions for reworking the shape of the inspection region allows the parts that are difficult for the computer to extract automatically to be supplemented with the user's assistance; as a result, an optimal inspection area (that is, the shape the user intends) can be obtained in a short time.
- five correction functions, (1) to (5), have been described, but the setting tool need not include all of them; it suffices to provide at least one. The setting tool may also provide other correction functions.
- the above-described embodiments show specific examples of the present invention, and are not intended to limit the scope of the present invention to these specific examples.
- in the embodiments above, the color information of the image is used, but luminance information may be used instead of the color information.
- the graph cut algorithm is used for optimization, but other methods such as a level set algorithm can also be used.
- the inspection area can be calculated with high accuracy by using the color information (luminance information) and the edge information. Also in this case, it is preferable that the priority of color information (luminance information) and edge information can be changed by the user.
- Reference numerals: 1: image inspection device; 2: inspection object (casing part); 10: device main body; 11: image sensor; 12: display device; 13: storage device; 14: input device; 101: inspection processing unit; 102: inspection region extraction unit; 103: setting tool; 20: hinge portion; 21: button portion; 30: inspection area; 31: inspection area image; 50: image window; 51: image capture button; 52: foreground designation button; 53: background designation button; 54: priority adjustment slider; 55: confirm button; 70: inspection object; 71: inspection area; 72: path; 73: control point; 74: free curve
Description
The embodiments described below relate to an image inspection apparatus that performs visual inspection using images, and more particularly to an inspection region setting apparatus for supporting the work of setting an inspection region for the image inspection apparatus. This image inspection apparatus is suitably used for applications such as continuously inspecting a large number of articles, automatically or semi-automatically, on an FA production line. The type of article to be inspected does not matter; however, since the image inspection apparatus of this embodiment performs inspection by extracting a predetermined inspection region from an original image captured by an image sensor, it is assumed that the position and shape of the inspection region in the original image are fixed. Although visual inspection has various purposes and inspection items, the inspection region setting apparatus of this embodiment can be suitably applied to any of them. In this embodiment, the inspection region setting apparatus is implemented as one function (a setting tool) of the image inspection apparatus, but the image inspection apparatus and the inspection region setting apparatus may be configured separately.
(Image inspection apparatus)
FIG. 1 schematically shows the configuration of the image inspection apparatus. This image inspection apparatus 1 is a system that performs visual inspection of an inspection object 2 conveyed on a conveyance path.
The operations of the image inspection apparatus 1 related to inspection processing will be described with reference to FIGS. 2 and 3. FIG. 2 is a flowchart showing the flow of the inspection processing, and FIG. 3 is a diagram for explaining the process of extracting the inspection region in the inspection processing. Here, for convenience of explanation, the flow of the inspection processing is described taking as an example the inspection of the panel surface of a mobile phone casing part (detection of scratches and color unevenness).
The functions and operation of the setting tool 103 will be described with reference to FIGS. 4 and 5. FIG. 4 is a flowchart showing the flow of the process of setting an inspection region using the setting tool 103, and FIG. 5 is a diagram showing an example of the inspection region setting screen.
The method of calculating the inspection region in step S43 of FIG. 4 will now be described.
Here, i and j are pixel indexes, Ω is the set of pixels in image I, and N is the set of adjacent pixel pairs in image I. li and lj are the labels assigned to pixels i and j, respectively; the label "1" is given for the foreground and "0" for the background. The first term on the right side is called the data term and gives a constraint on the target pixel i. The second term on the right side is called the smoothing term and gives a constraint on mutually adjacent pixels i and j. λ is a balance parameter that determines the weight (balance) of the data term and the smoothing term.
Here, Ii and Ij are the pixel values (color or luminance) of pixels i and j, respectively, and β is a coefficient. ∥Ii − Ij∥² represents the difference (distance) between pixel values in a predetermined color space, that is, the contrast between the pixels.
In the above example, three parameters were described: the foreground representative color, the background representative color, and the priority between color information and edge information. Besides these, however, any parameter may be used as long as it can influence the search for the optimal solution of the inspection region. For example, since the objects of visual inspection are mainly industrial products, there are many cases in which the shape, texture, or topology of the portion serving as the inspection region, the elements adjacent to the inspection region, the elements contained in it, and so on are characteristic. Moreover, since the image sensor is installed so that the inspection object fits neatly within the angle of view, the size of the portion serving as the inspection region and its position in the image can be predicted to some extent. Therefore, by having the user input information expressing such features of the inspection region as parameters and adding to the objective function constraint conditions that evaluate the degree of similarity between the features given by those parameters and the features of a candidate region, the possibility that a valid inspection region is found can be further increased.
Here, li is the label assigned to pixel i, and ti is the label of the point on the template corresponding to pixel i. T() represents an affine transformation. The above expression represents the operation of performing template matching against the candidate region while enlarging/reducing, rotating, and deforming the designated template, and computing the minimum score. That is, by adding this constraint condition, a region whose shape is close to the basic shape designated by the user has lower energy and is preferentially selected as the optimal solution.
Here, S is a point on the contour of the foreground region, θ is the gradient angle of the contour, and ∂θ/∂S represents the change of the gradient angle along the contour of the foreground region. C is a constant indicating the degree of jaggedness (smoothness) designated by the user: the more jagged, the larger C, and the smoother, the smaller C. The above expression is a function that evaluates whether the total change of the gradient angle along the contour of the foreground region (representing the jaggedness of the contour) is close to the value C (representing the designated jaggedness). That is, by adding this constraint condition, a region whose contour features are close to the user-designated jaggedness is preferentially selected as the optimal solution.
Here, C is the area (number of pixels) of the foreground region designated by the user. Since the foreground label is 1 and the background label is 0, Σli represents the total number of foreground pixels, that is, the area of the foreground region. The above expression is therefore a function that evaluates whether the area of the foreground region is close to the designated area C. By adding this constraint condition, a region whose size is close to the user-designated area is preferentially selected as the optimal solution.
Here, w is the barycentric coordinate of the foreground region, and C is the barycentric coordinate designated by the user. The above expression is a function that evaluates whether the barycentric coordinate of the foreground region is close to the designated coordinate C. By adding this constraint condition, a region whose center of gravity lies close to the user-designated coordinate is preferentially selected as the optimal solution.
Here, I is the sample image, and E is the texture template designated by the user. h_{l=1}() represents the color histogram of the foreground pixels, and f() is a function indicating the similarity of histograms. That is, the above expression is a function that evaluates whether the color histogram of the foreground region in the sample image is similar to the color histogram of the designated texture. By adding this constraint condition, a region whose texture is similar to the user-designated texture is preferentially selected as the optimal solution.
According to the setting tool 103 of this embodiment described above, the position and shape of the inspection region are determined by an optimal-solution search using a sample image. Compared with the conventional approach of manually entering the inspection region with simple figures, this greatly reduces setting time and workload, and it can also be applied to complex or special shapes. In addition, by using edge information together with color/luminance information and comprehensively evaluating both the pixel separation degree of color or luminance between the inside and outside of the inspection region and the degree of overlap between the contour of the inspection region and edges, the accuracy of region extraction can be improved compared with conventional methods such as binarization or color gamut extraction.
Next, a second embodiment of the present invention will be described. In the setting tool of the first embodiment, the inspection region can be refined by adjusting parameters such as the representative colors of the foreground and background and the priority between color and edge information. However, it is possible that such parameter adjustment alone cannot reach the shape of the inspection region the user intends (some error remains), or that trial and error with the parameters takes time. The setting tool of the second embodiment therefore provides inspection region correction functions that allow the user to interactively rework the shape of the inspection region after it has been obtained by calculation.
FIG. 7 is a diagram explaining an operation example of the contour correction tool. (a) shows an image of the inspection object (sample) 70, and (b) shows the automatically calculated result of the inspection region 71. It is assumed that there is a deviation, as shown in the figure, between the contour of the inspection object 70 and the contour of the inspection region 71.
FIG. 8 is a diagram explaining an operation example of the contour drawing tool. (a) shows an enlarged view of the image of the inspection object 70 and part of the automatically calculated inspection region 71. It is assumed that the contour of the inspection object 70 and the contour of the inspection region 71 deviate as shown in the figure.
FIG. 9 is a diagram explaining an operation example of the arc conversion tool. (a) shows an enlarged view of the image of the inspection object 70 and part of the automatically calculated inspection region 71. It is assumed that the contour of the inspection object 70 and the contour of the inspection region 71 deviate as shown in the figure.
FIG. 10 is a diagram explaining an operation example of the straight-line conversion tool. (a) shows an enlarged view of the image of the inspection object 70 and part of the automatically calculated inspection region 71. It is assumed that the contour of the inspection object 70 and the contour of the inspection region 71 deviate as shown in the figure.
FIG. 11 is a diagram explaining an operation example of the draw tool. (a) shows an enlarged view of the image of the inspection object 70 and part of the automatically calculated inspection region 71. Since the draw tool corrects the inspection region 71 in units of pixels, FIG. 11 shows a pixel grid for convenience of explanation. The contour of the inspection object 70 and the contour of the inspection region 71 deviate as shown in the figure; it is assumed that the inspection region 71 is too small at the top of the figure and too large on the right side.
2: inspection object (casing part)
10: device main body, 11: image sensor, 12: display device, 13: storage device, 14: input device, 101: inspection processing unit, 102: inspection region extraction unit, 103: setting tool
20: hinge portion, 21: button portion
30: inspection region, 31: inspection region image
50: image window, 51: image capture button, 52: foreground designation button, 53: background designation button, 54: priority adjustment slider, 55: confirm button
70: inspection object, 71: inspection region, 72: path, 73: control point, 74: free curve
Claims (18)
- 1. An inspection region setting method for setting inspection region definition information that defines an inspection region for an image inspection apparatus that extracts, as an inspection region image, a portion serving as the inspection region from an original image obtained by imaging an inspection object, and inspects the inspection object by analyzing the inspection region image, the method comprising: an acquisition step in which a computer acquires a sample image obtained by imaging a sample of the inspection object; an inspection region search step in which the computer, based on color or luminance information of each pixel in the sample image and information on edges contained in the sample image, obtains the optimal solution of the inspection region from among a plurality of candidate regions that are candidate solutions for the inspection region, by evaluating, for each candidate region, both a pixel separation degree, which is the degree of separation in color or luminance between the inside and the outside of the candidate region, and an edge overlap degree, which is the degree of overlap between the contour of the candidate region and edges in the sample image; and a setting step in which the computer sets, for the image inspection apparatus, inspection region definition information defining the position and shape, within the image, of the inspection region obtained in the inspection region search step.
- 2. The inspection region setting method according to claim 1, further comprising a parameter reception step in which the computer receives parameter input from a user, wherein each time parameter input is received from the user in the parameter reception step, the computer recalculates the optimal solution of the inspection region by executing the inspection region search step using the input parameter as a constraint condition, and displays the recalculated inspection region on a display device.
- 3. The inspection region setting method according to claim 2, wherein in the parameter reception step, the user is caused to input, as one of the parameters, a balance parameter for adjusting the balance between the pixel separation degree and the edge overlap degree, and in the inspection region search step, the weights used in evaluating the pixel separation degree and the edge overlap degree are adjusted according to the balance parameter input by the user.
- 4. The inspection region setting method according to claim 2 or 3, wherein in the inspection region search step, the pixel separation degree is a value obtained by evaluating the foreground likelihood of the color or luminance of each pixel inside a candidate region with respect to a representative color or representative luminance of the foreground, or a value obtained by evaluating the background likelihood of the color or luminance of each pixel outside the candidate region with respect to a representative color or representative luminance of the background, or a value combining both.
- 5. The inspection region setting method according to claim 4, wherein in the inspection region search step, the weights used in evaluating the pixel separation degree and the edge overlap degree are adjusted such that the larger the difference between the representative color or luminance of the foreground and that of the background, the larger the weight of the pixel separation degree, and the smaller the difference, the larger the weight of the edge overlap degree.
- 6. The inspection region setting method according to claim 4 or 5, wherein in the parameter reception step, the user is caused to input, as one of the parameters, the representative color or representative luminance of the foreground, the background, or both.
- 7. The inspection region setting method according to claim 6, wherein in the parameter reception step, the sample image is displayed on a display device, the user is caused to designate, on the displayed sample image, a portion to be treated as foreground or background, and the color or luminance of the designated portion is acquired as the representative color or representative luminance.
- 8. The inspection region setting method according to any one of claims 2 to 7, wherein in the parameter reception step, the user is caused to input, as one of the parameters, shape information representing a feature of the shape of the inspection region, and in the inspection region search step, the optimal solution of the inspection region is obtained such that, in addition to the pixel separation degree and the edge overlap degree, the degree of similarity between the shape of the inspection region and the shape represented by the shape information also becomes high.
- 9. The inspection region setting method according to any one of claims 2 to 8, wherein in the parameter reception step, the user is caused to input, as one of the parameters, size information representing a feature of the size of the inspection region, and in the inspection region search step, the optimal solution of the inspection region is obtained such that, in addition to the pixel separation degree and the edge overlap degree, the degree of similarity between the size of the inspection region and the size represented by the size information also becomes high.
- 10. The inspection region setting method according to any one of claims 2 to 9, wherein in the parameter reception step, the user is caused to input, as one of the parameters, position information representing a feature of the position of the inspection region in the image, and in the inspection region search step, the optimal solution of the inspection region is obtained such that, in addition to the pixel separation degree and the edge overlap degree, the degree of similarity between the position of the inspection region in the sample image and the position represented by the position information also becomes high.
- 11. The inspection region setting method according to any one of claims 2 to 10, wherein in the parameter reception step, the user is caused to input, as one of the parameters, texture information representing a feature of the texture of the image within the inspection region, and in the inspection region search step, the optimal solution of the inspection region is obtained such that, in addition to the pixel separation degree and the edge overlap degree, the degree of similarity between the texture of the image within the inspection region and the texture represented by the texture information also becomes high.
- 12. The inspection region setting method according to any one of claims 1 to 11, further comprising an inspection region correction step in which the computer displays the inspection region obtained in the inspection region search step on a display device and corrects the shape of the inspection region in accordance with correction instructions input by the user.
- 13. The inspection region setting method according to claim 12, wherein the inspection region correction step approximates all or part of the contour of the inspection region with a path consisting of a Bezier curve or spline curve and causes the user to correct the path.
- 14. The inspection region setting method according to claim 12 or 13, wherein the inspection region correction step causes the user to draw a free curve and combines the free curve with the inspection region so that the free curve becomes part of the contour of the inspection region.
- 15. The inspection region setting method according to any one of claims 12 to 14, wherein the inspection region correction step causes the user to designate a section of the contour of the inspection region and replaces the contour of the designated section with a straight line or an arc.
- 16. The inspection region setting method according to any one of claims 12 to 15, wherein the inspection region correction step adds pixels designated by the user to the inspection region or removes them from the inspection region.
- 17. A program causing a computer to execute the steps of the inspection region setting method according to any one of claims 1 to 16.
- 18. An inspection region setting apparatus that sets inspection region definition information defining an inspection region for an image inspection apparatus that extracts, as an inspection region image, a portion serving as the inspection region from an original image obtained by imaging an inspection object, and inspects the inspection object by analyzing the inspection region image, the apparatus comprising: acquisition means for acquiring a sample image obtained by imaging a sample of the inspection object; inspection region search means for obtaining, based on color or luminance information of each pixel in the sample image and information on edges contained in the sample image, the optimal solution of the inspection region from among a plurality of candidate regions that are candidate solutions for the inspection region, by evaluating, for each candidate region, both a pixel separation degree, which is the degree of separation in color or luminance between the inside and the outside of the candidate region, and an edge overlap degree, which is the degree of overlap between the contour of the candidate region and edges in the sample image; and setting means for setting, for the image inspection apparatus, inspection region definition information defining the position and shape, within the image, of the inspection region obtained by the inspection region search means.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020147015679A KR101626231B1 (ko) | 2012-01-05 | 2012-08-29 | 화상 검사 장치의 검사 영역 설정 방법 |
EP12864252.7A EP2801815B1 (en) | 2012-01-05 | 2012-08-29 | Inspection area setting method for image inspecting device |
CN201280061701.5A CN103988069B (zh) | 2012-01-05 | 2012-08-29 | 图像检查装置的检查区域设定方法 |
US14/363,340 US9269134B2 (en) | 2012-01-05 | 2012-08-29 | Inspection area setting method for image inspecting device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-000588 | 2012-01-05 | ||
JP2012000588A JP5874398B2 (ja) | 2012-01-05 | 2012-01-05 | 画像検査装置の検査領域設定方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013103032A1 true WO2013103032A1 (ja) | 2013-07-11 |
Family
ID=48745100
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/071758 WO2013103032A1 (ja) | 2012-01-05 | 2012-08-29 | Inspection area setting method for image inspection device |
Country Status (6)
Country | Link |
---|---|
US (1) | US9269134B2 (ja) |
EP (1) | EP2801815B1 (ja) |
JP (1) | JP5874398B2 (ja) |
KR (1) | KR101626231B1 (ja) |
CN (1) | CN103988069B (ja) |
WO (1) | WO2013103032A1 (ja) |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5929238B2 (ja) * | 2012-01-27 | 2016-06-01 | オムロン株式会社 | Image inspection method and image inspection device |
JP2014102685A (ja) * | 2012-11-20 | 2014-06-05 | Sony Corp | Information processing device, information processing method, and program |
US9213896B2 (en) * | 2013-03-05 | 2015-12-15 | Mitsubishi Electric Research Laboratories, Inc. | Method for detecting and tracking objects in image sequences of scenes acquired by a stationary camera |
US9734797B2 (en) * | 2013-08-06 | 2017-08-15 | Crackle, Inc. | Selectively adjusting display parameter of areas within user interface |
JP6478492B2 (ja) * | 2014-06-27 | 2019-03-06 | キヤノン株式会社 | Image processing apparatus and method therefor |
WO2016021030A1 (ja) | 2014-08-07 | 2016-02-11 | 株式会社ニコン | X-ray device and method for manufacturing a structure |
CN107076684B (zh) | 2014-09-02 | 2021-04-02 | 株式会社尼康 | Measurement processing device, measurement processing method, and measurement processing program |
JP6763301B2 (ja) * | 2014-09-02 | 2020-09-30 | 株式会社ニコン | Inspection device, inspection method, inspection processing program, and method for manufacturing a structure |
US10445612B2 (en) * | 2015-10-26 | 2019-10-15 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and storage medium |
JP6333871B2 (ja) * | 2016-02-25 | 2018-05-30 | ファナック株式会社 | Image processing device for displaying an object detected from an input image |
CN109690625A (zh) * | 2016-05-03 | 2019-04-26 | 莱尼电缆有限公司 | Vision system using color segmentation for operator-enhanced viewing |
CN106682693A (zh) * | 2016-12-23 | 2017-05-17 | 浙江大学 | Recognition method for overlapping images of plastic bottle bodies |
CN106846302B (zh) * | 2016-12-30 | 2024-03-15 | 河南中烟工业有限责任公司 | Detection method for correct tool picking and assessment bench based on the method |
US10943374B2 (en) * | 2017-02-03 | 2021-03-09 | Microsoft Technology Licensing, Llc | Reshaping objects on a canvas in a user interface |
JP6894725B2 (ja) * | 2017-03-09 | 2021-06-30 | キヤノン株式会社 | Image processing device, control method therefor, program, and storage medium |
JP6864549B2 (ja) * | 2017-05-09 | 2021-04-28 | 株式会社キーエンス | Image inspection device |
JP6931552B2 (ja) * | 2017-05-09 | 2021-09-08 | 株式会社キーエンス | Image inspection device |
JP6919982B2 (ja) * | 2017-05-09 | 2021-08-18 | 株式会社キーエンス | Image inspection device |
SE541493C2 (en) * | 2017-10-26 | 2019-10-15 | Ivisys Aps | System and method for optical inspection of an object |
JP6693938B2 (ja) | 2017-11-17 | 2020-05-13 | ファナック株式会社 | Appearance inspection device |
CN109991232B (zh) * | 2017-12-29 | 2022-02-15 | 上海微电子装备(集团)股份有限公司 | Chip edge-chipping defect detection method |
CN111630561B (zh) * | 2018-01-17 | 2024-04-02 | 株式会社富士 | Component shape data generation system for image processing and component shape data generation method for image processing |
JP2019185204A (ja) * | 2018-04-04 | 2019-10-24 | 富士電機株式会社 | Image processing device, robot system, and image processing method |
IL260417B (en) | 2018-07-04 | 2021-10-31 | Tinyinspektor Ltd | System and method for automatic visual inspection |
JP7299002B2 (ja) | 2018-08-23 | 2023-06-27 | ファナック株式会社 | Discrimination device and machine learning method |
JP6795562B2 (ja) | 2018-09-12 | 2020-12-02 | ファナック株式会社 | Inspection device and machine learning method |
JP6823025B2 (ja) | 2018-09-12 | 2021-01-27 | ファナック株式会社 | Inspection device and machine learning method |
JP7214432B2 (ja) * | 2018-10-22 | 2023-01-30 | キヤノン株式会社 | Image processing method, image processing program, recording medium, image processing device, production system, and method of manufacturing an article |
JP7166189B2 (ja) * | 2019-02-15 | 2022-11-07 | 東京エレクトロン株式会社 | Image generation device, inspection device, and image generation method |
EP4104100A4 (en) | 2020-02-13 | 2024-01-10 | Inspekto A M V Ltd | USER INTERFACE DEVICE FOR AUTONOMOUS ARTIFICIAL VISION INSPECTION |
TWI742733B (zh) * | 2020-06-19 | 2021-10-11 | 倍利科技股份有限公司 | Image conversion method |
CN112351247A (zh) * | 2020-10-16 | 2021-02-09 | 国电大渡河枕头坝发电有限公司 | Electro-optical flash detection method in a hydropower plant based on image processing |
US20230011330A1 (en) * | 2021-07-09 | 2023-01-12 | At&T Intellectual Property I, L.P. | Device condition determination |
JP2023071276A (ja) * | 2021-11-11 | 2023-05-23 | 日立Astemo株式会社 | Inspection method and inspection device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09140397A (ja) * | 1995-11-18 | 1997-06-03 | Dennou Giken:Kk | Identification method for identifying the number of colonies represented by one connected region, and colony counting device using the same |
JP2004354064A (ja) * | 2003-05-27 | 2004-12-16 | Hitachi High-Tech Electronics Engineering Co Ltd | Defect inspection method and defect inspection device for magnetic heads using optically measured images |
JP2006058284A (ja) | 2004-07-21 | 2006-03-02 | Omron Corp | Method for determining setting conditions of a board inspection window, board inspection method, method for creating inspection data for board inspection, and board inspection device |
JP2007509724A (ja) * | 2003-11-03 | 2007-04-19 | シーメンス コーポレイト リサーチ インコーポレイテツド | Rendering for visualization of coronary vessels |
JP2009500752A (ja) * | 2005-07-01 | 2009-01-08 | マイクロソフト コーポレーション | Video object cut and paste |
JP2009080660A (ja) * | 2007-09-26 | 2009-04-16 | Rakuten Inc | Object region extraction program, object region extraction device, and object region extraction method |
JP2009198514A (ja) * | 2009-06-01 | 2009-09-03 | Hitachi High-Technologies Corp | Pattern inspection method and device therefor |
JP2011212301A (ja) * | 2010-03-31 | 2011-10-27 | Fujifilm Corp | Projection image generation device and method, and program |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07280537A (ja) * | 1994-04-11 | 1995-10-27 | Sekisui Chem Co Ltd | Imaging-type inspection method and device |
US7130039B2 (en) | 2002-04-18 | 2006-10-31 | Kla-Tencor Technologies Corporation | Simultaneous multi-spot inspection and imaging |
JP2006038582A (ja) * | 2004-07-26 | 2006-02-09 | Dainippon Screen Mfg Co Ltd | Defect detection by image region segmentation |
JP2006260401A (ja) * | 2005-03-18 | 2006-09-28 | Toshiba Corp | Image processing device, method, and program |
JP2009506339A (ja) | 2005-08-30 | 2009-02-12 | カムテック エルティーディー. | Inspection system and method for inspecting defects based on a reference frame |
JP5159373B2 (ja) * | 2008-03-06 | 2013-03-06 | オリンパス株式会社 | Board inspection method |
CN101256156B (zh) * | 2008-04-09 | 2011-06-08 | 西安电子科技大学 | Precision measurement method for slots of a planar slot antenna |
JP5445452B2 (ja) * | 2008-05-22 | 2014-03-19 | 凸版印刷株式会社 | Non-inspection area restriction verification method and correction method, program, and device |
US7623229B1 (en) | 2008-10-07 | 2009-11-24 | Kla-Tencor Corporation | Systems and methods for inspecting wafers |
JP5353566B2 (ja) * | 2009-08-31 | 2013-11-27 | オムロン株式会社 | Image processing device and image processing program |
JP5152231B2 (ja) * | 2010-03-12 | 2013-02-27 | オムロン株式会社 | Image processing method and image processing device |
US20130329987A1 (en) * | 2012-06-11 | 2013-12-12 | Genesis Group Inc. | Video segmentation method |
2012
- 2012-01-05 JP JP2012000588A patent/JP5874398B2/ja active Active
- 2012-08-29 WO PCT/JP2012/071758 patent/WO2013103032A1/ja active Application Filing
- 2012-08-29 EP EP12864252.7A patent/EP2801815B1/en active Active
- 2012-08-29 CN CN201280061701.5A patent/CN103988069B/zh active Active
- 2012-08-29 US US14/363,340 patent/US9269134B2/en active Active
- 2012-08-29 KR KR1020147015679A patent/KR101626231B1/ko active IP Right Grant
Non-Patent Citations (1)
Title |
---|
Y. BOYKOV; M.-P. JOLLY: "Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images", ICCV 2001, vol. 1, 2001, pages 105-112 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112740271A (zh) * | 2019-06-17 | 2021-04-30 | 大日本印刷株式会社 | Determination device, control method of determination device, determination system, control method of determination system, and program |
CN112740271B (zh) * | 2019-06-17 | 2024-04-26 | 大日本印刷株式会社 | Determination device, control method of determination device, determination system, control method of determination system, and medium |
CN112419298A (zh) * | 2020-12-04 | 2021-02-26 | 中冶建筑研究总院(深圳)有限公司 | Bolt gusset plate corrosion detection method, device, equipment, and storage medium |
CN112419298B (zh) * | 2020-12-04 | 2024-01-19 | 中冶建筑研究总院(深圳)有限公司 | Bolt gusset plate corrosion detection method, device, equipment, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
JP2013140090A (ja) | 2013-07-18 |
KR20140088220A (ko) | 2014-07-09 |
EP2801815A4 (en) | 2015-10-07 |
EP2801815A1 (en) | 2014-11-12 |
KR101626231B1 (ko) | 2016-05-31 |
CN103988069A (zh) | 2014-08-13 |
CN103988069B (zh) | 2016-10-05 |
EP2801815B1 (en) | 2018-06-27 |
US20140314302A1 (en) | 2014-10-23 |
JP5874398B2 (ja) | 2016-03-02 |
US9269134B2 (en) | 2016-02-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5874398B2 (ja) | Inspection area setting method for image inspection device | |
JP5929238B2 (ja) | Image inspection method and image inspection device | |
US9892504B2 (en) | Image inspection method and inspection region setting method | |
US10896493B2 (en) | Intelligent identification of replacement regions for mixing and replacing of persons in group portraits | |
JP5546317B2 (ja) | Appearance inspection device, device and method for generating a discriminator for appearance inspection, and computer program for generating a discriminator for appearance inspection | |
CN111833303B (zh) | Product inspection method and device, electronic apparatus, and storage medium | |
US8780223B2 (en) | Automatic determination of compliance of a part with a reference drawing | |
US11222418B2 (en) | System and method for automated surface assessment | |
US20210390282A1 (en) | Training data increment method, electronic apparatus and computer-readable medium | |
JP7214432B2 (ja) | Image processing method, image processing program, recording medium, image processing device, production system, and method of manufacturing an article | |
JP2023515520A (ja) | Computer program, method, and device for generating a virtual defect image using an artificial intelligence model generated based on user input | |
JP6405124B2 (ja) | Inspection device, inspection method, and program | |
CN111696079A (zh) | Surface defect detection method based on multi-task learning | |
JP2015004641A (ja) | Wafer appearance inspection device | |
JP6049052B2 (ja) | Wafer appearance inspection device and sensitivity threshold setting method for wafer appearance inspection device | |
EP2691939B1 (en) | Automatic determination of compliance of a part with a reference drawing | |
US9230309B2 (en) | Image processing apparatus and image processing method with image inpainting | |
US20230394797A1 (en) | Data creation system, learning system, estimation system, processing device, evaluation system, data creation method, and program | |
JP2022114462A (ja) | Inspection device and inspection method | |
JP2014126388A (ja) | Information processing device, control method therefor, and program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12864252 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20147015679 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012864252 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14363340 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |