WO2012092132A2 - Determining the uniqueness of a model for machine vision - Google Patents

Determining the uniqueness of a model for machine vision

Info

Publication number
WO2012092132A2
WO2012092132A2 (PCT/US2011/066883)
Authority
WO
WIPO (PCT)
Prior art keywords
model
training image
poses
quality metric
modified
Prior art date
Application number
PCT/US2011/066883
Other languages
French (fr)
Other versions
WO2012092132A3 (en)
Inventor
Xiaoguang Wang
Lowell Jacobson
Original Assignee
Cognex Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US12/981,268 external-priority patent/US8542905B2/en
Priority claimed from US12/981,275 external-priority patent/US8542912B2/en
Application filed by Cognex Corporation filed Critical Cognex Corporation
Publication of WO2012092132A2 publication Critical patent/WO2012092132A2/en
Publication of WO2012092132A3 publication Critical patent/WO2012092132A3/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/217 - Validation; Performance evaluation; Active pattern learning techniques
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • the description generally relates to computer-implemented methods and systems, including machine vision systems and computer program products, for determining the uniqueness of a model for machine vision.
  • Machine vision generally relates to finding and/or locating patterns in images, where the patterns generally correspond to and/or represent real-world objects in the field of view of an imaging device, whether based on an image of the object or a simulated representation of the object, such as a CAD drawing.
  • Pattern location methods and systems are of particular importance in industrial automation, where they are used, for example, to guide automation equipment and for quality control, where the objects might include, for example, semiconductor wafers, automotive parts, pharmaceuticals, etc.
  • Basic machine vision systems include one or more cameras (typically having solid-state charge-coupled devices (CCDs) as imaging elements) directed at an area of interest, appropriate illumination on the area of interest, frame grabber/image processing elements that capture and/or transmit CCD images, and one or more computer processing units and/or displays for running the machine vision software application and manipulating or analyzing the captured images.
  • Typical machine vision systems include a training stage and a run-time stage.
  • Training typically involves being provided or receiving a digital image of an example object (e.g., a training image).
  • the objective of training is to learn an object's pattern in an image by generating a model that can be used to find similarly-appearing patterns on production objects or in run-time images at run-time.
  • Run-time typically involves being provided or receiving a digital image of a production object (e.g., a run-time image).
  • the objective of run-time processing is (1) to determine whether the pattern exists in the run-time image (called pattern recognition), and (2) if the pattern is found, to determine where the pattern is located, with respect to one or more degrees of freedom (DOF), within the run-time image.
  • the pattern's location can be called the object or pattern's pose in the image.
  • One way to represent a pose is as a transformation matrix mapping between coordinates in the model and coordinates in the run-time image or vice versa. Determining whether a pattern is located in an image can establish the location of the production object so that, for example, it can be operated on by automation equipment.
  • Training is one of the more important and challenging aspects of any industrial pattern inspection/location system.
  • a model can be used tens of thousands of times every hour, and any errors or imperfections in the model can potentially affect every single use.
  • the challenge of training arises from several factors. Production objects can vary significantly in appearance from any given example object used in training, due to imperfections in the example object and ordinary manufacturing variations in the production objects and/or the lighting conditions.
  • the model should be such that the production objects can be found reliably and accurately, while at the same time rejecting objects that do not match the pattern.
  • models are typically trained by human operators (e.g., by drawing a box on a training image with a mouse) whose time is expensive and who are not generally experts in the underlying machine vision technology.
  • machine vision systems also allow models to be defined synthetically (e.g., by using a CAD tool).
  • Each of these training implementations suffers from drawbacks that can decrease the effectiveness of the generated models. For example, manually-selected and synthetically-generated models can result in degenerate models (e.g., straight lines) and other non-unique model features. Training machine vision systems based on object images is also typically time-consuming.
  • FIGS. 1A-1C illustrate examples of manually trained models from training images and the resulting mis-detections during run-time using the manually-generated models.
  • FIG. 1A illustrates a training image 110 and a user-selected region of interest 111 from which a model 112 is generated (e.g., the region of interest is passed through an edge detection unit to generate a model representing edges).
  • model 112 contains only straight lines 112a and 112b in the same direction, which are degenerate and non-unique features, resulting in a secondary result 115 being detected in a run-time image 116.
  • the detected result 115 matches the model features, but is translated and therefore does not represent what the user intended to find.
  • FIG. 1B illustrates the training image 110 with a different user-selected region of interest 117 from which a model 118 is generated.
  • Model 118, while non-degenerate, can still result in a degenerate match 119 in the run-time image 120 even though a portion of the model 118 does not appear in the run-time image 120.
  • FIG. 1C illustrates a training image 130 and two user-selected regions of interest 131 from which a model 132 is generated.
  • models such as model 132 that include a circle 132a and/or short straight lines 132b oriented in a single direction can be non-unique by rotation, resulting in a secondary result 135 being detected in run-time image 136 (where the highlight box 137 illustrates the rotation of the secondary result 135).
  • models can be non-unique due to the existence of background features.
  • One approach to determining the uniqueness of a model involves analyzing all of the results returned during run-time application of the model (e.g., determining how many mis-detections occur). The drawback of this approach is failure to detect secondary results (e.g., results that are not the highest scoring or do not surpass a certain threshold score) in a robust manner due to the existence of non-linearities of the machine vision tools used. Other approaches include simple alerts based on whether a model consists of a single straight line or a single circle. The drawback of these approaches is that they do not address the general issue of how unique a model is in a given search range.
  • One approach to determining uniqueness of a model involves calculating and evaluating a quality metric of the model.
  • One approach to calculate a quality metric is to perturb the training image and evaluate the perturbed results. Evaluation of the perturbed results can be based on a statistical analysis of the secondary scores associated with the perturbed results.
  • a computerized method for determining a quality metric of a model of an object in a machine vision application includes receiving a training image, generating a model of an object based on the training image, generating a modified training image based on the training image, determining a set of poses that represent possible instances of the model in the modified training image, and computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
  • the computer program product is tangibly embodied in a machine-readable storage device and includes instructions being operable to cause data processing apparatus to receive a training image, generate a model of an object based on the training image, generate a modified training image based on the training image, determine a set of poses that represent possible instances of the model in the modified training image, and compute a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
  • Another aspect features a system for determining a quality metric of a model of an object in a machine vision application.
  • the system includes interface means for receiving a training image, model generating means for generating a model of an object based on the training image, image modifying means for generating a modified training image based on the training image, processor means for determining a set of poses that represent possible instances of the model in the modified training image, and processor means for computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
  • Another aspect features a system for determining a quality metric of a model of an object in a machine vision application.
  • the system includes an interface for receiving a training image, a model generating module for generating a model of an object based on the training image, an image processing module for generating a modified training image based on the training image, a run-time module for determining a set of poses that represent possible instances of the model in the modified training image, and a quality-metric module for computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
  • the method can further include computing at least a primary score and a secondary score for at least a portion of the set of poses.
  • Computing the quality metric can be based on the primary score and the secondary score.
  • Computing the quality metric can be based on a distribution of the computed scores for the portion of the set of poses.
  • the model can include a geometric description of the object in the training image.
  • the model can include a portion of the training image.
  • Generating the modified training image can include adding noise to the training image.
  • the noise can include amplifier noise, salt-and-pepper noise, shot noise, quantization noise, film grain noise, non-isotropic noise, or any combination thereof.
  • the noise can be added to one or more pixels in the training image.
  • Generating the modified training image can include transforming the training image by one or more degrees-of-freedom of rotation, translation, scale, skew, aspect ratio, or any combination thereof.
  • Generating the modified training image can include changing the resolution of the training image.
  • the method further includes generating a plurality of modified training images based on the training image.
  • the method can further include determining a set of poses for each of the plurality of modified training images. Each set of poses can represent possible instances of the model in one of the plurality of modified training images.
  • Computing the quality metric can be further based on an evaluation of the sets of poses, determined from the plurality of modified training images, with respect to the modified training image.
  • the method can further include computing at least primary scores and secondary scores for at least a portion of the set of poses and at least portions of each of the sets of poses computed from the plurality of modified training images. Computing the quality metric of the model can be based on the secondary scores.
  • Computing the quality metric of the model can be based on a distribution of the computed scores for the portion of the set of poses and distributions of the computed scores for the portions of the sets of poses computed from the plurality of modified training images.
  • the method can further include modifying a baseline model parameter. Generating the model can be based on the modified baseline model parameter.
  • the baseline model parameter can include an elasticity parameter, a grain limit or granularity, a coarse-value acceptance fraction, a contrast threshold, an edge-value threshold, a trainClientFromPattern parameter, or any combination thereof.
  • Another approach to calculating a quality metric of a model is to perturb model parameters in lieu of or in addition to perturbing a training image and evaluate the perturbed results.
  • a computerized method for determining a quality metric of a model of an object in a machine vision application includes receiving a training image and a first set of model parameters, generating a first model of an object, generating a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters, determining a set of poses that represent possible instances of the second model in the training image, and computing a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
  • the computer program product is tangibly embodied in a machine-readable storage device and includes instructions being operable to cause data processing apparatus to receive a training image and a first set of model parameters, generate a first model of an object, generate a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters, determine a set of poses that represent possible instances of the second model in the training image, and compute a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
  • Another aspect features a system for determining a quality metric of a model of an object in a machine vision application.
  • the system includes interface means for receiving a training image and a first set of model parameters, model generating means for generating a first model of an object, model generating means for generating a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters, processor means for determining a set of poses that represent possible instances of the second model in the training image, and processor means for computing a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
  • Another aspect features a system for determining a quality metric of a model of an object in a machine vision application.
  • the system includes an interface for receiving a training image and a first set of model parameters, a model generating module for generating a first model of an object and for generating a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters, a run-time module for determining a set of poses that represent possible instances of the second model in the training image, and a quality-metric module for computing a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
  • any of the aspects above can include one or more of the following features.
  • Modifying the first set of model parameters to produce the second set of model parameters can include perturbing one or more values in the first set of model parameters.
  • the model can include a geometric description of the object in the training image.
  • the model can include a portion of the training image.
  • the method can further include computing at least a primary score and a secondary score for at least a portion of the set of poses.
  • Computing the quality metric of the model can be based on the primary score and the secondary score.
  • Computing the quality metric of the model can be based on a distribution of the computed scores for the portion of the set of poses.
  • the method further includes generating a plurality of models based on the training image and a plurality of different sets of model parameters.
  • the plurality of different sets of model parameters can be based on modifications to the first set of model parameters.
  • the method can further include determining a set of poses for each of the plurality of models. Each set of poses can represent possible instances of one of the plurality of models in the training image. Computing the quality metric can be further based on an evaluation of the sets of poses, for each of the plurality of models, with respect to the training image.
  • the method can further include computing at least primary scores and secondary scores for at least a portion of the first set of poses and at least portions of the sets of poses computed for each of the plurality of models.
  • Determining the quality metric of the model can be based on the secondary scores. Determining the quality metric of the model can be based on a distribution of the computed scores for the portion of the first set of poses and distributions of the computed scores for the portions of the sets of poses computed for each of the plurality of models.
  • the method further includes modifying the training image.
  • Modifying the received training image can include adding noise to the received training image.
  • Determining the set of poses can include modifying one or more search parameters.
  • the one or more search parameters can include a starting pose value, one or more search range values, or any combination thereof.
  • Any of the above implementations can realize one or more of the following advantages. Simulating a variety of run-time-like applications of a generated model on modified training images and/or using perturbed training parameters advantageously allows the uniqueness of the model to be determined in a non-time sensitive manner, e.g., during training. Providing models with a higher degree of uniqueness provides greater reliability (e.g., minimizes error rates, mis-detection and spurious results during run-time).
  • Automatically determining the uniqueness of a model also helps naïve users, who otherwise may have picked non-unique models, to pick models likely to result in fewer errors at runtime.
  • any of the features above relating to a method can be performed by a system, and/or a controller of the system, configured to or having means for performing the method.
  • any of the features above relating to a method can be performed by a computer program product including instructions being operable to cause data processing apparatus to perform the method.
  • Any of the above aspects can include any of the above embodiments. In one implementation, any of the above-aspects includes the features of each of the embodiments.
  • FIGS. 1A-1C illustrate examples of manually trained models from training images and the resulting mis-detections during run-time using the manually-generated models.
  • FIG. 2 illustrates a high-level block diagram of a machine vision tool.
  • FIGS. 3A-3B illustrate flowcharts depicting general process flows for determining a quality metric of a model of an object in a machine vision application by perturbing a training image.
  • FIG. 4 illustrates a flowchart depicting a general process flow for determining a quality metric of a model of an object in a machine vision application by perturbing model parameters.
  • FIG. 5 illustrates a flowchart depicting a general process flow for determining a quality metric of a model of an object in a machine vision application by perturbing both a training image and model parameters.
  • FIGS. 6A-6B illustrate graphs depicting examples of score distributions for a set of poses resulting from different models.
  • FIG. 2 illustrates a high-level block diagram of a machine vision tool 200.
  • the machine vision tool 200 includes a training module 210 and a run-time module 220.
  • Training module 210 can be provided with and receives as input a digital image 230 (e.g., a training image).
  • Training image 230 includes the object 235 that is of interest (in this example, a circle containing a cross or fiducial mark that can be used for alignment and location purposes).
  • Training module 210 also receives as input one or more training parameters 240.
  • Run-time module 220 can be provided with and receives as input a run-time image 250 and a model 260 generated by training module 210.
  • training module 210 implements the process of learning the pattern to be found and generating a model 260 for use by run-time module 220.
  • FIG. 2 illustrates model 260 being trained from a training image 230, but other processes can also be used.
  • models 260 can be generated synthetically from any description of a real-world object such as from CAD data describing an idealized appearance of the real-world object.
  • the pattern 235 that is of interest is processed into a usable model 260.
  • model 260 can include a shape-based description of the shape of the pattern (e.g., a geometric description).
  • a geometric model 260 can be generated using the teachings of U.S. Patent No.
  • model 260 can include an image-based description of the shape of the pattern (e.g., a two-dimensional array of pixel brightness values as used in, e.g., normalized correlation-type applications or a set of regions of interest in an image).
  • run-time images 250 are analyzed to produce inspection and/or localization information 270.
  • localization information 270 can include a pose of the pattern 255 within the run-time image 250.
  • the pose of a pattern, e.g., the location of the pattern within the run-time image, specifies how the pattern is positioned with respect to one or more degrees-of-freedom (DOF) (e.g., translational and/or generalized DOF).
  • Translational DOFs refer, for example, to the horizontal and vertical location of the pattern in the run-time image.
  • Generalized DOFs refer, for example, to the rotation, aspect ratio, and/or skew of the pattern in the run-time image.
  • the pose of the pattern in the run-time image specifies the transformation operations (e.g., x-coordinate translation, y-coordinate translation, rotation, etc.) that are performed on the model to match the run-time pattern, and/or vice versa.
  • the pose can also be used to transform from run-time coordinates to model coordinates (or vice versa).
  • PatMax™ and/or PatQuick™ tools sold by Cognex Corp., Natick, MA, are used in cooperation with the run-time module 220.
  • FIG. 3A illustrates a flowchart 300a depicting a general process flow for determining a quality metric of a model of an object in a machine vision application by perturbing a training image.
  • the elements of flowchart 300a are described using the exemplary machine vision tool 200 of FIG. 2.
  • generating a model of an object based on the training image (320) includes, for example, manually generating the model (e.g., by drawing a box on a training image 230 with a mouse). In some embodiments, generating a model of an object based on the training image (320) includes, for example, processing the training image 230 to generate an edge-based representation of the object (either in image-based form or shape-based form).
  • FIG. 3B illustrates a flowchart 300b depicting an exemplary process flow for generating a modified training image that includes receiving a noise model and/or DOF parameters (331), adding noise to the received training image based on the noise model and/or transforming the training image based on the DOF parameters (332), determining whether more modified training images should be generated (e.g., the number of different modified training images created can range from 20 to 1000) (333), and/or outputting the modified training images.
  • Adding noise (332) can include, for example, Gaussian noise (e.g., Gaussian white noise with a specified constant mean and/or variance), local variance noise (e.g., zero-mean Gaussian white noise with an intensity-dependent variance), and/or Poisson noise (e.g., Poisson noise generated from the data instead of adding artificial noise).
  • noise can be added to one or more pixels in the training image.
  • the amount of noise can be related to brightness values (e.g., plus or minus 5 grey levels) or a percentage of brightness (e.g., 2%). In some embodiments, the amount of noise that most closely simulates the expected noise introduced during run-time can be used.
  • Transforming (332) the training image 230 can be with respect to one or more DOFs (e.g., rotation, translation, scale, skew, aspect ratio, or any combination thereof) and/or can include changing the resolution of the training image 230.
  • modified training images are transformed within an expected range of real-life transformations that occur during run-time (e.g., the received (331) DOF parameters can specify a range of up to 360 degrees for rotation, +/- 20 pixels for translation, 2% for scaling, etc.).
  • Modifying training images (e.g., through the addition of noise and/or transformation of the training image) and then determining poses of possible instances of the model in the modified training images advantageously simulates the run-time uniqueness of the model during the training stage.
  • the modified training images can effectively be treated as artificial or simulated run-time images, thereby allowing a user to perform a full analysis of the effectiveness of the model without any of the time constraints associated with acquiring a set of run-time images.
  • modification of the training images can increase the chance for secondary results to be detected by run-time module 220.
  • Determining a set of poses that represent possible instances of the model in the modified training image(s) (340) can include providing the generated model (320) and the modified training image(s) (330) to run-time module 220 in order to generate a set of poses 270 for each modified training image.
  • the set of poses represent possible instances of the generated model (320) found by the run-time module 220 in a respective modified training image.
  • a set of poses can include each pose of a pattern found by run-time module 220 that satisfies a predetermined criterion (e.g., exceeds a threshold score).
  • the set of poses can be evaluated by assigning scores to one or more of the poses in the set of poses.
  • the score of a particular pose can be calculated based on the similarity of the run-time object and the model at that pose.
  • similarity can be, roughly, the ratio of the number of matched model edge features of the run-time object to the total number of model edge features; other similarity measures can also be used (a rough sketch of such a score appears after this list).
  • Each set of poses can include a first pose that is calculated to have a primary score (e.g., the highest calculated score from at least a portion of the set of poses) and a second pose that is calculated to have a secondary score (e.g., the second-highest calculated score from the portion of the set of poses).
  • Computing a quality metric of the model can be based on the score evaluations of the set of poses (350).
  • the quality metric is based on the primary score (e.g., the quality metric is the primary score) and/or the secondary score (e.g., the quality metric is the secondary score or the difference between the primary score and the secondary score).
  • the quality metric can be based on a plurality of scores such as, for example, a distribution of the computed scores for a portion of the set of poses (e.g., the quality metric can be the average of scores or a standard deviation of the scores).
  • a low quality metric indicates a model that is not unique, while a high quality metric indicates a model that is unique.
  • Providing feedback with respect to the quality metric (e.g., uniqueness) of the model (360) can include, for example, alerting the user via a user interface (e.g., a pop-up window) that the current model selected by the user is not a unique model or fails to satisfy a uniqueness criterion.
  • feedback can be provided in an automated system that generates models automatically. If the quality metric fails to satisfy a predetermined criterion, a new model can be generated based on the training image (320).
  • FIG. 4 illustrates a flowchart 400 depicting a general process flow for determining a quality metric of a model of an object in a machine vision application by perturbing model parameters.
  • the elements of flowchart 400 are described using the exemplary machine vision tool 200 of FIG. 2.
  • Determining a quality metric of a model of an object in a machine vision application can include receiving a first set of model parameters 410 and a training image (230), generating a first model of an object based on the training image and a first set of model parameters (420), generating a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters (430), determining a set of poses that represent possible instances of the second model in the training image (440), computing a quality metric of the first model based on an evaluation of the set of poses with respect to the training image (450), and/or providing feedback with respect to the uniqueness of the model (460).
  • a plurality of models of the object can be generated based on the training image and sets of model parameters modified in different ways from the first set of model parameters.
  • generating a first model of an object based on the training image (420) includes, for example, manually generating the model (e.g., by drawing a box on a training image 230 with a mouse).
  • generating a model of an object based on the training image (420) includes, for example, processing the training image 230 to generate an edge-based representation of the object (either in image-based form or shape-based form).
  • when generating the second model of the object (430), if the first model is based on a manually-selected region of interest of the training image, the second model can also be based on the same region of interest but using a different set of model parameters.
  • training parameters 240 are used by training module 210 to generate the model 260.
  • model parameters can include an elasticity parameter, a grain limit or granularity, a coarse-value acceptance fraction, a contrast threshold, an edge-value threshold, a trainClientFromPattern parameter, or any combination thereof. Table I below summarizes the model parameters and examples of how the parameters affect the generation of the models.
  • the first set of model parameters can be assigned default values.
  • the first set of model parameters can be modified from default values or values 240 provided to training module 210.
  • Modifying a set of model parameters can include changing one or more values associated with one or more of the model parameters in the set. For example, some thresholds can be made slightly different from the default (e.g., the edge-value threshold can be changed by +/-5 gray levels), which can make the candidates in the run-time image look different (e.g., fewer or more edges may be used in matching) than when the default is used, causing changed matching results.
  • the trainClientFromPattern parameter can be changed slightly (e.g., +/-0.5 pixel in translation and +/-0.1 degree in rotation), making the candidates look slightly different and causing different matching results.
  • Modifying model parameters and then determining poses of possible instances, in the training image, of models generated from the different sets of model parameters advantageously simulates the run-time uniqueness of the model during the training stage. Modification of the model parameters can increase the chance for secondary results to be detected by run-time module 220.
  • Determining a set of poses that represent possible instances of the generated models in the training image (440) can include providing the generated models (e.g., the second model) and the training image to run-time module 220 in order to generate a set of poses 270 for each generated model.
  • the set of poses represent possible instances of a generated model found by the run-time module 220 in the training image.
  • a set of poses can include each pose of a pattern found by run-time module 220 that satisfies a predetermined criterion (e.g., surpasses a threshold score).
  • Computing a quality metric of the first model can be based on the score evaluations of the set of poses (450), similar to the computation step (350) of FIG. 3.
  • Providing feedback with respect to the quality metric (e.g., uniqueness) of the model (460) can include, for example, alerting the user via a user interface (e.g., pop-up window) that the current model selected by the user is not a unique model or fails to satisfy a uniqueness criterion.
  • feedback can be provided in an automated system that generates models automatically. If the quality metric fails to satisfy a predetermined criterion, a new first model can be generated based on the training image (420).
  • FIG. 5 illustrates a flowchart depicting a general process flow 500 for determining a quality metric of a model of an object in a machine vision application by perturbing both a training image and model parameters.
  • One or more training images 501a-c can be modified (step 330 of FIG. 3) based on a baseline training image 230.
  • one or more models 502a-b can be generated (step 430 of FIG. 4) based on training image 230 and a set of model parameters (not shown).
  • Model # 1 260 can be an original model for which a quality metric is being evaluated.
  • a set of poses 510 of possible instances of the respective models 260 and/or 502a-b in the respective training images 501 a-c can be determined (step 340)/(step 440).
  • for example, the poses 510 for a single modified training image (e.g., modified training image #1 501a), the poses 510 for a single model (e.g., model #2 502a), or any predetermined or random set of poses 510 can be used to compute (350)/(450) the quality metric for model 260.
  • Different search parameters (e.g., a starting pose or search range parameters for one or more DOFs) used by run-time module 220 can also be modified.
  • a single training image 230 and a model 260 based on the training image 230 can be provided to runtime module 220, which performs multiple test runs against randomly perturbed search parameters. Modification of the search parameters can increase the chance for secondary results to be detected by run-time module 220.
  • sets of poses of possible instances of models can be determined for any combination of modified training images, different sets of model parameters used to generate the respective models, and/or different sets of search parameters used to perform the simulated run-time analyses.
  • FIGS. 6A-6B illustrate graphs depicting examples of score distributions 600a-600b for a set of poses resulting from different models.
  • Score distributions 600a-600b can be determined, for example, by steps (340) or (440) of FIGS. 3 and 4, respectively.
  • Score distribution 600a illustrates an example of scores for a set of poses associated with a non- unique model.
  • a low score of the "best" result 610 and/or a small gap g1 between the "best" result 610 and the "second best" result 620 can indicate a poor uniqueness quality for the model used in determining these pose scores.
  • the bunched distribution of the scores 630 near the highest scoring pose 610 can indicate an increased risk of mis-detections and spurious results if the model were to actually be used during run-time.
  • the small gap g1 between the highest scoring pose 610 and the secondary results 630 also can indicate an increased risk of mis-detections and spurious results.
  • Gap g1 can be calculated, for example, as the difference between the highest score 610 and the next-highest score 620, between the highest score 610 and an average of one or more secondary scores 630, or as another combination of scores.
  • score distribution 600b illustrates an example of scores for a set of poses associated with a unique model.
  • a high score of the "best" result 650 and/or a large gap g2 between the "best" result 650 and the "second best" result 660 can indicate a good uniqueness quality for the model used in determining these pose scores.
  • the large gap g2 between the highest scoring pose 650 and the second-highest score 660 can indicate a decreased risk of mis-detections and spurious results if the model were to actually be used during run-time.
  • the above-described techniques can be implemented in digital and/or analog electronic circuitry, or in computer hardware, firmware, software, or in combinations of them.
  • the implementation can be as a computer program product, e.g., a computer program tangibly embodied in a machine-readable storage device, for execution by, or to control the operation of, a data processing apparatus, e.g., a programmable processor, a computer, and/or multiple computers.
  • a computer program can be written in any form of computer or programming language, including source code, compiled code, interpreted code and/or machine code, and the computer program can be deployed in any form, including as a standalone program or as a subroutine, element, or other unit suitable for use in a computing environment.
  • a computer program can be deployed to be executed on one computer or on multiple computers at one or more sites.
  • Method steps can be performed by one or more processors executing a computer program to perform functions of the invention by operating on input data and/or generating output data. Method steps can also be performed by, and an apparatus can be implemented as, special purpose logic circuitry, e.g., an FPGA (field-programmable gate array), an FPAA (field-programmable analog array), a CPLD (complex programmable logic device), a PSoC (Programmable System-on-Chip), an ASIP (application-specific instruction-set processor), or an ASIC (application-specific integrated circuit), or the like.
  • Subroutines can refer to portions of the stored computer program and/or the processor, and/or the special circuitry that implement one or more functions.
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital or analog computer.
  • a processor receives instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and/or data.
  • Memory devices such as a cache, can be used to temporarily store data. Memory devices can also be used for long-term data storage.
  • a computer also includes, or is operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • a computer can also be operatively coupled to a communications network in order to receive instructions and/or data from the network and/or to transfer instructions and/or data to the network.
  • Computer-readable storage mediums suitable for embodying computer program instructions and data include all forms of volatile and nonvolatile memory, including by way of example semiconductor memory devices, e.g., DRAM, SRAM, EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and optical disks, e.g., CD, DVD, HD- DVD, and Blu-ray disks.
  • the processor and the memory can be supplemented by and/or incorporated in special purpose logic circuitry.
  • the above-described techniques can be implemented on a computer in communication with a display device, e.g., a CRT (cathode ray tube), plasma, or LCD (liquid crystal display) monitor, for displaying information to the user, and a keyboard and a pointing device, e.g., a mouse, a trackball, a touchpad, or a motion sensor, by which the user can provide input to the computer (e.g., interact with a user interface element).
  • feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, and/or tactile input.
  • the above described techniques can be implemented in a distributed computing system that includes a back-end component.
  • the back-end component can, for example, be a data server, a middleware component, and/or an application server.
  • the above described techniques can be implemented in a distributed computing system that includes a front-end component.
  • the front-end component can, for example, be a client computer having a graphical user interface, a Web browser through which a user can interact with an example implementation, and/or other graphical user interfaces for a transmitting device.
  • the above described techniques can be implemented in a distributed computing system that includes any combination of such back-end, middleware, or front-end components.
  • Communication networks can include one or more packet-based networks and/or one or more circuit-based networks in any configuration.
  • Packet-based networks can include, for example, an Ethernet-based network (e.g., traditional Ethernet as defined by the IEEE or Carrier Ethernet as defined by the Metro Ethernet Forum (MEF)), an ATM-based network, a carrier Internet Protocol (IP) network (LAN, WAN, or the like), a private IP network, an IP private branch exchange (IPBX), a wireless network (e.g., a Radio Access Network (RAN)), and/or other packet-based networks.
  • Circuit-based networks can include, for example, the Public Switched Telephone Network (PSTN), a legacy private branch exchange (PBX), a wireless network (e.g., a RAN), and/or other circuit-based networks.
  • Carrier Ethernet can be used to provide point-to-point connectivity (e.g., new circuits and TDM replacement), point-to-multipoint (e.g., IPTV and content delivery), and/or multipoint-to-multipoint (e.g., Enterprise VPNs and Metro LANs).
  • Carrier Ethernet advantageously provides for a lower cost per megabit and more granular bandwidth options.
  • Carrier Ethernet shares the same basic MAC addressing and frame structure as classic Ethernet, but also can leverage certain physical layer specifications and components (e.g., 10 and 100 Megabit, 1 and 10 Gigabit copper and optical interfaces).
  • other Carrier Ethernet aspects (e.g., the tagging scheme, resiliency design, and operations, administration and management (OAM)) can differ from those of classic Ethernet.
  • Devices of the computing system can include, for example, a computer, a computer with a browser device, a telephone, an IP phone, a mobile device (e.g., cellular phone, personal digital assistant (PDA) device, laptop computer, electronic mail device), and/or other communication devices.
  • the browser device includes, for example, a computer (e.g., desktop computer, laptop computer) with a world wide web browser (e.g., Microsoft® Internet Explorer® available from Microsoft Corporation, Mozilla® Firefox available from Mozilla Corporation).
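As a rough illustration of the edge-feature similarity score described in the list above (the ratio of matched model edge features to the total number of model edge features), the following Python sketch scores a candidate pose. The feature representation, the distance-tolerance matching rule, and all function names are illustrative assumptions, not the patent's actual scoring method:

```python
import numpy as np

def similarity_score(model_edges, runtime_edges, pose, dist_tol=2.0):
    """Approximate pose score: the fraction of model edge points that land
    within dist_tol pixels of some run-time edge point under the pose.

    model_edges, runtime_edges: (N, 2) and (M, 2) arrays of (x, y) locations.
    pose: 3x3 homogeneous transform from model to run-time coordinates.
    The tolerance-based matching here is an assumption for illustration.
    """
    ones = np.ones((len(model_edges), 1))
    projected = (pose @ np.hstack([model_edges, ones]).T).T[:, :2]
    # Distance from each projected model edge point to its nearest run-time edge.
    dists = np.linalg.norm(
        projected[:, None, :] - runtime_edges[None, :, :], axis=2).min(axis=1)
    # Ratio of matched model edge features to total model edge features.
    return float((dists <= dist_tol).mean())
```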

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Described are methods and apparatuses, including computer program products, for determining model uniqueness with a quality metric of a model of an object in a machine vision application. Determining uniqueness involves receiving a training image, generating a model of an object based on the training image, generating a modified training image based on the training image, determining a set of poses that represent possible instances of the model in the modified training image, and computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.

Description

DETERMINING THE UNIQUENESS OF A MODEL FOR MACHINE VISION
TECHNICAL FIELD
[0001] The description generally relates to computer-implemented methods and systems, including machine vision systems and computer program products, for determining the uniqueness of a model for machine vision.
BACKGROUND
[0002] Machine vision generally relates to finding and/or locating patterns in images, where the patterns generally correspond to and/or represent real-world objects in the field of view of an imaging device, whether based on an image of the object or a simulated representation of the object, such as a CAD drawing. Pattern location methods and systems are of particular importance in industrial automation, where they are used, for example, to guide automation equipment and for quality control, where the objects might include, for example,
semiconductor wafers, automotive parts, pharmaceuticals, etc. Machine vision enables quicker, more accurate and repeatable results to be obtained in the production of both mass-produced and custom products. Basic machine vision systems include one or more cameras (typically having solid-state charge-coupled devices (CCDs) as imaging elements) directed at an area of interest, appropriate illumination on the area of interest, frame grabber/image processing elements that capture and/or transmit CCD images, and one or more computer processing units and/or displays for running the machine vision software application and manipulating or analyzing the captured images.
[0003] Typical machine vision systems include a training stage and a run-time stage.
Training typically involves being provided or receiving a digital image of an example object (e.g., a training image). The objective of training is to learn an object's pattern in an image by generating a model that can be used to find similarly-appearing patterns on production objects or in run-time images at run-time. Run-time typically involves being provided or receiving a digital image of a production object (e.g., a run-time image). The objective of run-time processing is (1) to determine whether the pattern exists in the run-time image (called pattern recognition), and (2) if the pattern is found, to determine where the pattern is located, with respect to one or more degrees of freedom (DOF), within the run-time image. The pattern's location, as defined by the DOFs, can be called the object or pattern's pose in the image. One way to represent a pose is as a transformation matrix mapping between coordinates in the model and coordinates in the run-time image or vice versa. Determining whether a pattern is located in an image can establish the location of the production object so that, for example, it can be operated on by automation equipment.
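As a concrete illustration of a pose represented as a transformation matrix (not taken from the patent), a 2-D pose with translation, rotation, and scale can be written as a 3x3 homogeneous matrix that maps model coordinates to run-time image coordinates, with its inverse mapping back; a minimal NumPy sketch:

```python
import numpy as np

def pose_matrix(tx, ty, angle_deg, scale=1.0):
    """Homogeneous 2-D transform mapping model coordinates to
    run-time image coordinates (translation, rotation, scale DOFs)."""
    a = np.deg2rad(angle_deg)
    c, s = scale * np.cos(a), scale * np.sin(a)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

pose = pose_matrix(tx=42.0, ty=17.5, angle_deg=30.0)   # a found pose
model_pt = np.array([10.0, 5.0, 1.0])                  # homogeneous model point
runtime_pt = pose @ model_pt                           # location in run-time image
model_pt_back = np.linalg.inv(pose) @ runtime_pt       # and back to model coordinates
```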
[0004] Training is one of the more important and challenging aspects of any industrial pattern inspection/location system. In a typical production application, for example, a model can be used tens of thousands of times every hour, and any errors or imperfections in the model can potentially affect every single use. The challenge of training arises from several factors. Production objects can vary significantly in appearance from any given example object used in training, due to imperfections in the example object and ordinary
manufacturing variations in the production objects and/or the lighting conditions.
Nevertheless, the model should be such that the production objects can be found reliably and accurately, while at the same time rejecting objects that do not match the pattern.
[0005] In addition, models are typically trained by human operators (e.g., by drawing a box on a training image with a mouse) whose time is expensive and who are not generally experts in the underlying machine vision technology. Alternatively, machine vision systems also allow models to be defined synthetically (e.g., by using a CAD tool). Each of these training implementations suffers from drawbacks that can decrease the effectiveness of the generated models. For example, manually-selected and synthetically-generated models can result in degenerate models (e.g., straight lines) and other non-unique model features. Training machine vision systems based on object images is also typically time-consuming. This becomes especially problematic for manufacturing processes, where there may be a wide variety of products and/or objects that need to be inspected and/or localized using machine vision inspection. Furthermore, product designs may frequently change. Even a minor revision to an object, for example to its shape, may require retraining.
[0006] FIGS. 1A-1C illustrate examples of manually trained models from training images and the resulting mis-detections during run-time using the manually-generated models. FIG. 1A illustrates a training image 110 and a user-selected region of interest 111 from which a model 112 is generated (e.g., the region of interest is passed through an edge detection unit to generate a model representing edges). However, model 112 contains only straight lines 112a and 112b in the same direction, which are degenerate and non-unique features, resulting in a secondary result 115 being detected in a run-time image 116. The detected result 115 matches the model features, but is translated and therefore does not represent what the user intended to find. Similarly, FIG. 1B illustrates the training image 110 with a different user-selected region of interest 117 from which a model 118 is generated. Model 118, while non-degenerate, still can result in a degenerate match 119 in the run-time image 120 even though a portion of the model 118 does not appear in the run-time image 120. FIG. 1C illustrates a training image 130 and two user-selected regions of interest 131 from which a model 132 is generated. However, models such as model 132 that include a circle 132a and/or short straight lines 132b oriented in a single direction can be non-unique by rotation, resulting in a secondary result 135 being detected in run-time image 136 (where the highlight box 137 illustrates the rotation of the secondary result 135). In other examples, models can be non-unique due to the existence of background features. [0007] One approach to determining the uniqueness of a model involves analyzing all of the results returned during run-time application of the model (e.g., determining how many mis-detections occur). The drawback of this approach is failure to detect secondary results (e.g., results that are not the highest scoring or do not surpass a certain threshold score) in a robust manner due to the existence of non-linearities of the machine vision tools used. Other approaches include simple alerts based on whether a model consists of a single straight line or a single circle. The drawback of these approaches is that they do not address the general issue of how unique a model is in a given search range.
SUMMARY
[0008] Existing solutions to training machine vision systems do not allow for quality control in generating robust models and, thus, provide an incomplete and unsatisfactory commercial solution. Thus, an approach to training in machine vision systems and methods that determines the uniqueness of a model is desirable. It is also desirable to facilitate quick choices regarding the uniqueness of a model to be made with little or no required judgment by an operating user.
[0009] One approach to determining uniqueness of a model involves calculating and evaluating a quality metric of the model. One approach to calculate a quality metric is to perturb the training image and evaluate the perturbed results. Evaluation of the perturbed results can be based on a statistical analysis of the secondary scores associated with the perturbed results.
[0010] In one aspect, there is a computerized method for determining a quality metric of a model of an object in a machine vision application. The method includes receiving a training image, generating a model of an object based on the training image, generating a modified training image based on the training image, determining a set of poses that represent possible instances of the model in the modified training image, and computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
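The five steps of this method can be sketched end to end. In the Python sketch below, train_model and find_poses are hypothetical stand-ins for a pattern-training tool and a pattern-finding (run-time) tool; the Gaussian pixel noise and the gap-based metric are one plausible reading of the claim, not a definitive implementation:

```python
import numpy as np

def model_quality_metric(training_image, train_model, find_poses,
                         n_perturb=50, rng=None):
    """Sketch of the claimed method: generate a model, perturb the training
    image, find poses in each perturbed image, and score model uniqueness.

    train_model(image) -> model and find_poses(model, image) -> list of
    (pose, score) pairs are assumed interfaces to a machine vision tool.
    """
    rng = rng or np.random.default_rng(0)
    model = train_model(training_image)            # generate model from training image
    gaps = []
    for _ in range(n_perturb):
        modified = training_image + rng.normal(0.0, 2.0, training_image.shape)
        scores = sorted((s for _, s in find_poses(model, modified)), reverse=True)
        if len(scores) >= 2:
            gaps.append(scores[0] - scores[1])     # primary minus secondary score
        elif scores:
            gaps.append(scores[0])                 # no secondary result found
    return float(np.mean(gaps)) if gaps else 0.0   # larger gap suggests more uniqueness
```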
[0011] In another aspect, there is a computer program product. The computer program product is tangibly embodied in a machine-readable storage device and includes instructions being operable to cause data processing apparatus to receive a training image, generate a model of an object based on the training image, generate a modified training image based on the training image, determine a set of poses that represent possible instances of the model in the modified training image, and compute a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
[0012] Another aspect features a system for determining a quality metric of a model of an object in a machine vision application. The system includes interface means for receiving a training image, model generating means for generating a model of an object based on the training image, image modifying means for generating a modified training image based on the training image, processor means for determining a set of poses that represent possible instances of the model in the modified training image, and processor means for computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
[0013] Another aspect features a system for determining a quality metric of a model of an object in a machine vision application. The system includes an interface for receiving a training image, a model generating module for generating a model of an object based on the training image, an image processing module for generating a modified training image based on the training image, a run-time module for determining a set of poses that represent possible instances of the model in the modified training image, and a quality-metric module for computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
[0014] In other examples, any of the aspects above can include one or more of the following features. The method can further include computing at least a primary score and a secondary score for at least a portion of the set of poses. Computing the quality metric can be based on the primary score and the secondary score. Computing the quality metric can be based on a distribution of the computed scores for the portion of the set of poses. The model can include a geometric description of the object in the training image. The model can include a portion of the training image. Generating the modified training image can include adding noise to the training image. The noise can include amplifier noise, salt-and-pepper noise, shot noise, quantization noise, film grain noise, non-isotropic noise, or any combination thereof. The noise can be added to one or more pixels in the training image. Generating the modified training image can include transforming the training image by one or more degrees-of-freedom of rotation, translation, scale, skew, aspect ratio, or any combination thereof.
Generating the modified training image can include changing the resolution of the training image.
[0015] In some embodiments, the method further includes generating a plurality of modified training images based on the training image. The method can further include determining a set of poses for each of the plurality of modified training images. Each set of poses can represent possible instances of the model in one of the plurality of modified training images. Computing the quality metric can be further based on an evaluation of the sets of poses, determined from the plurality of modified training images, with respect to the modified training image. The method can further include computing at least primary scores and secondary scores for at least a portion of the set of poses and at least portions of each of the sets of poses computed from the plurality of modified training images. Computing the quality metric of the model can be based on the secondary scores. Computing the quality metric of the model can be based on a distribution of the computed scores for the portion of the set of poses and distributions of the computed scores for the portions of the sets of poses computed from the plurality of modified training images. The method can further include modifying a baseline model parameter. Generating the model can be based on the modified baseline model parameter. The baseline model parameter can include an elasticity parameter, a grain limit or granularity, a coarse-value acceptance fraction, a contrast threshold, an edge-value threshold, trainClientFromPattern, or any combination thereof.
[0016] Another approach to calculating a quality metric of a model is to perturb model parameters in lieu of or in addition to perturbing a training image and evaluate the perturbed results. In one aspect, there is a computerized method for determining a quality metric of a model of an object in a machine vision application. The method includes receiving a training image and a first set of model parameters, generating a first model of an object, generating a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters, determining a set of poses that represent possible instances of the second model in the training image, and computing a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
[0017] In another aspect, there is a computer program product. The computer program product is tangibly embodied in a machine-readable storage device and includes instructions being operable to cause data processing apparatus to receive a training image and a first set of model parameters, generate a first model of an object, generate a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters, determine a set of poses that represent possible instances of the second model in the training image, and compute a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
[0018] Another aspect features a system for determining a quality metric of a model of an object in a machine vision application. The system includes interface means for receiving a training image and a first set of model parameters, model generating means for generating a first model of an object, model generating means for generating a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters, processor means for determining a set of poses that represent possible instances of the second model in the training image, and processor means for computing a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
[0019] Another aspect features a system for determining a quality metric of a model of an object in a machine vision application. The system includes an interface for receiving a training image and a first set of model parameters, a model generating module for generating a first model of an object and for generating a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters, a run-time module for determining a set of poses that represent possible instances of the second model in the training image, and a quality-metric module for computing a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
[0020] In other examples, any of the aspects above can include one or more of the following features. Modifying the first set of model parameters to produce the second set of model parameters can include perturbing one or more values in the first set of model parameters. The model can include a geometric description of the object in the training image. The model can include a portion of the training image. The method can further include computing at least a primary score and a secondary score for at least a portion of the set of poses. Computing the quality metric of the model can be based on the primary score and the secondary score. Computing the quality metric of the model can be based on a distribution of the computed scores for the portion of the set of poses.
[0021] In some embodiments, the method further includes generating a plurality of models based on the training image and a plurality of different sets of model parameters. The plurality of different sets of model parameters can be based on modifications to the first set of model parameters. The method can further include determining a set of poses for each of the plurality of models. Each set of poses can represent possible instances of one of the plurality of models in the training image. Computing the quality metric can be further based on an evaluation of the sets of poses, for each of the plurality of models, with respect to the training image. The method can further include computing at least primary scores and secondary scores for at least a portion of the first set of poses and at least portions of the sets of poses computed for each of the plurality of models. Determining the quality metric of the model can be based on the secondary scores. Determining the quality metric of the model can be based on a distribution of the computed scores for the portion of the first set of poses and distributions of the computed scores for the portions of the sets of poses computed for each of the plurality of models.
[0022] In some embodiments, the method further includes modifying the training image. Modifying the received training image can include adding noise to the received training image. Determining the set of poses can include modifying one or more search parameters. The one or more search parameters can include a starting pose value, one or more search range values, or any combination thereof.

[0023] Any of the above implementations can realize one or more of the following advantages. Simulating a variety of run-time-like applications of a generated model on modified training images and/or using perturbed training parameters advantageously allows the uniqueness of the model to be determined in a non-time-sensitive manner, e.g., during training. Providing models with a higher degree of uniqueness provides greater reliability (e.g., minimizes error rates, mis-detections, and spurious results during run-time).
Automatically determining the uniqueness of a model also helps naïve users, who otherwise may have picked non-unique models, to pick models likely to result in fewer errors at run-time.
[0024] In other examples, any of the features above relating to a method can be performed by a system, and/or a controller of the system, configured to or having means for performing the method. In addition, any of the features above relating to a method can be performed by a computer program product including instructions being operable to cause data processing apparatus to perform the method. Any of the above aspects can include any of the above embodiments. In one implementation, any of the above-aspects includes the features of each of the embodiments.
[0025] The details of one or more examples are set forth in the accompanying drawings and the description below. Further features, aspects, and advantages of the invention will become apparent from the description, the drawings, and the claims. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS
[0026] The foregoing and other objects, features, and advantages of the present invention, as well as the invention itself, will be more fully understood from the following description of various embodiments, when read together with the accompanying drawings.
[0027] FIGS. 1A-1C illustrate examples of manually trained models from training images and the resulting mis-detections during run-time using the manually-generated models.
[0028] FIG. 2 illustrates a high-level block diagram of a machine vision tool.
[0029] FIGS. 3A-3B illustrate flowcharts depicting general process flows for determining a quality metric of a model of an object in a machine vision application by perturbing a training image.
[0030] FIG. 4 illustrates a flowchart depicting a general process flow for determining a quality metric of a model of an object in a machine vision application by perturbing model parameters.
[0031] FIG. 5 illustrates a flowchart depicting a general process flow for determining a quality metric of a model of an object in a machine vision application by perturbing both a training image and model parameters.
[0032] FIGS. 6A-6B illustrate graphs depicting examples of score distributions for a set of poses resulting from different models.
DETAILED DESCRIPTION
[0033] FIG. 2 illustrates a high-level block diagram of a machine vision tool 200. The machine vision tool 200 includes a training module 210 and a run-time module 220. Training module 210 can be provided with and receives as input a digital image 230 (e.g., a training image). Training image 230 includes the object 235 that is of interest (in this example, a circle containing a cross or fiducial mark that can be used for alignment and location purposes). Training module 210 also receives as input one or more training parameters 240. Run-time module 220 can be provided with and receives as input a run-time image 250 and a model 260 generated by training module 210.
[0034] Generally, training module 210 implements the process of learning the pattern to be found and generating a model 260 for use by run-time module 220. FIG. 2 illustrates model 260 being trained from a training image 230, but other processes can also be used. For example, models 260 can be generated synthetically from any description of a real-world object, such as from CAD data describing an idealized appearance of the real-world object. In the training module 210, the pattern 235 that is of interest is processed into a usable model 260. In some embodiments, model 260 can include a shape-based description of the shape of the pattern (e.g., a geometric description). In some embodiments, a geometric model 260 can be generated using the teachings of U.S. Patent No. 7,016,539 "Method for Fast, Robust, Multi-Dimensional Pattern Recognition," and/or U.S. Patent No. 7,088,862 "Fast High-Accuracy Multi-Dimensional Pattern Inspection," each of which is herein incorporated by reference in its entirety. In some embodiments, model 260 can include an image-based description of the shape of the pattern (e.g., a two-dimensional array of pixel brightness values as used in, e.g., normalized correlation-type applications, or a set of regions of interest in an image).
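To make the shape-based form of a model concrete, the following Python sketch extracts a simple edge-based description from a user-selected region of interest. The gradient-magnitude edge detector and the contrast_threshold parameter name are illustrative assumptions; they stand in for, and do not reproduce, the training methods of the incorporated patents.

```python
import numpy as np

def train_edge_model(training_image, roi, contrast_threshold=20.0):
    """Illustrative sketch of model generation: extract edge points and
    directions from a region of interest. The thresholded gradient
    magnitude stands in for a real edge-extraction stage."""
    x0, y0, x1, y1 = roi
    patch = training_image[y0:y1, x0:x1].astype(float)
    # Central-difference gradients as a simple edge detector.
    gy, gx = np.gradient(patch)
    magnitude = np.hypot(gx, gy)
    mask = magnitude > contrast_threshold
    # The model is a shape-based description: edge locations plus directions.
    return {"points": np.argwhere(mask),
            "angles": np.arctan2(gy, gx)[mask]}
```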
[0035] In the run-time module 220, run-time images 250 are analyzed to produce inspection and/or localization information 270. For example, localization information 270 can include a pose of the pattern 255 within the run-time image 250. The pose of a pattern, e.g., the location of the pattern within the run-time image, specifies how the pattern is positioned with respect to one or more degrees-of-freedom (DOF) (e.g., translational and/or generalized DOF). Translational DOFs refer, for example, to the horizontal and vertical location of the pattern in the run-time image. Generalized DOFs refer, for example, to the rotation, aspect ratio, and/or skew of the pattern in the run-time image. Given a model, the pose of the pattern in the run-time image specifies the transformation operations (e.g., x-coordinate translation, y-coordinate translation, rotation, etc.) that are performed on the model to match the run-time pattern, and/or vice versa. Similarly, the pose can also be used to transform from run-time coordinates to model coordinates (or vice versa). In some embodiments, the PatMax™ and/or PatQuick™ tools, sold by Cognex Corp., Natick, MA, are used in cooperation with the run-time module 220.
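As a worked illustration of a pose as a transformation, the sketch below maps model coordinates into run-time image coordinates under translation, rotation, and uniform scale. The four-element pose representation is an assumption chosen for brevity; real tools expose additional generalized DOFs such as skew and aspect ratio.

```python
import numpy as np

def apply_pose(model_points, pose):
    """Map (x, y) model coordinates into run-time coordinates for a pose
    (tx, ty, theta, s). The pose tuple is an assumed representation,
    not the actual API of any particular tool."""
    tx, ty, theta, s = pose
    c, k = np.cos(theta), np.sin(theta)
    rotation = s * np.array([[c, -k],
                             [k,  c]])
    # Rotate/scale about the model origin, then translate.
    return model_points @ rotation.T + np.array([tx, ty])
```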
[0036] FIG. 3A illustrates a flowchart 300a depicting a general process flow for determining a quality metric of a model of an object in a machine vision application by perturbing a training image. The elements of flowchart 300a are described using the exemplary machine vision tool 200 of FIG. 2. Determining a quality metric of a model of an object in a machine vision application can include receiving a training image (310), generating a model of an object based on the training image (320), generating a modified training image based on the training image (330), determining a set of poses that represent possible instances of the model in the modified training image (340), computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image (350), and/or providing feedback with respect to the uniqueness of the model (360).
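The overall control flow of flowchart 300a can be summarized in a short Python skeleton. This is only a sketch of steps (310)-(350): the callables make_model, modify, and find_poses stand in for the training module, the image perturbation of FIG. 3B, and the run-time module, and the score attribute on returned poses is an assumed convention.

```python
def model_quality_metric(training_image, make_model, modify, find_poses,
                         n_trials=100):
    """Skeleton of flowchart 300a. make_model, modify, and find_poses
    are placeholder callables; pose.score is an assumed attribute."""
    model = make_model(training_image)            # step 320
    gaps = []
    for _ in range(n_trials):
        modified = modify(training_image)         # step 330
        poses = find_poses(model, modified)       # step 340
        scores = sorted((pose.score for pose in poses), reverse=True)
        if len(scores) >= 2:
            # Difference between primary and secondary scores.
            gaps.append(scores[0] - scores[1])
    # Step 350: one possible quality metric -- the worst-case gap observed.
    return min(gaps) if gaps else None
```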
[0037] In some embodiments, generating a model of an object based on the training image (320) includes, for example, manually generating the model (e.g., by drawing a box on a training image 230 with a mouse). In some embodiments, generating a model of an object based on the training image (320) includes, for example, processing the training image 230 to generate an edge-based representation of the object (either in image-based form or shape-based form).

[0038] FIG. 3B illustrates a flowchart 300b depicting an exemplary process flow for generating a modified training image that includes receiving a noise model and/or DOF parameters (331), adding noise to the received training image based on the noise model and/or transforming the training image based on the DOF parameters (332), determining whether more modified training images should be generated (e.g., the number of different modified training images created can range from 20 to 1000) (333), and/or outputting the modified training images. Noise models can include Gaussian noise (e.g., Gaussian white noise with a specified constant mean and/or variance), local variance noise (e.g., zero-mean Gaussian white noise with an intensity-dependent variance), Poisson noise (e.g., Poisson noise generated from the data instead of adding artificial noise to the data), salt-and-pepper noise (e.g., on and off pixels based on a specified noise density), speckle noise (e.g., multiplicative noise using the equation: modified image = original image + n × original image, where n is uniformly distributed random noise with mean 0 and variance v), amplifier noise, shot noise, quantization noise, film grain noise, non-isotropic noise, other randomly generated noise, or any combination thereof. In some embodiments, noise can be added to one or more pixels in the training image. The amount of noise can be related to brightness values (e.g., plus or minus 5 grey levels) or a percentage of brightness (e.g., 2%). In some embodiments, the amount of noise that most closely simulates the noise expected during run-time can be used.
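For illustration, the following NumPy sketch generates a modified training image under three of the noise models listed above. The default magnitudes (roughly 5 grey levels of Gaussian noise, a 2% salt-and-pepper density) follow the ranges suggested in the text; the function and parameter names are assumptions.

```python
import numpy as np

def add_noise(image, kind="gaussian", sigma=5.0, density=0.02, rng=None):
    """Step 332 (noise branch): return a noise-perturbed copy of an
    8-bit training image. Only three noise models are sketched here."""
    rng = rng or np.random.default_rng()
    img = image.astype(float)
    if kind == "gaussian":
        out = img + rng.normal(0.0, sigma, img.shape)
    elif kind == "salt_and_pepper":
        out = img.copy()
        mask = rng.random(img.shape) < density
        out[mask] = rng.choice([0, 255], size=mask.sum())
    elif kind == "speckle":
        # modified image = original + n * original, n uniform with mean 0.
        n = rng.uniform(-0.1, 0.1, img.shape)
        out = img + n * img
    else:
        raise ValueError(f"unknown noise model: {kind}")
    return np.clip(out, 0, 255).astype(np.uint8)
```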
[0039] Transforming (332) the training image 230 can be with respect to one or more DOFs (e.g., rotation, translation, scale, skew, aspect ratio, or any combination thereof) and/or can include changing the resolution of the training image 230. In some embodiments, modified training images are transformed within an expected range of the real-life transformations that occur during run-time (e.g., the received (331) DOF parameters can specify a range of up to 360 degrees for rotation, +/- 20 pixels for translation, 2% for scaling, etc.).

[0040] Modifying training images (e.g., through the addition of noise and/or transformation of the training image) and then determining poses of possible instances of the model in the training image advantageously simulates the run-time uniqueness of the model during the training stage. For example, the modified training images can effectively be treated as artificial or simulated run-time images, thereby allowing a user to perform a full analysis of the effectiveness of the model without any of the time constraints associated with acquiring a set of run-time images. In general, modification of the training images can increase the chance for secondary results to be detected by run-time module 220.
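A transformation step (332) along these lines might look like the sketch below, which uses OpenCV's affine warp to perturb a training image within run-time-like DOF ranges. The specific ranges echo the examples in the text (up to 360 degrees of rotation, +/-20 pixels of translation, ~2% scale); the implementation itself is illustrative, not the tool's code.

```python
import numpy as np
import cv2  # assumed available; any affine-warp routine would do

def transform_image(image, max_angle=360.0, max_shift=20.0, max_scale=0.02,
                    rng=None):
    """Step 332 (transform branch): apply a random rotation, scale, and
    translation within run-time-like ranges."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    angle = rng.uniform(0.0, max_angle)
    scale = 1.0 + rng.uniform(-max_scale, max_scale)
    # 2x3 affine matrix rotating/scaling about the image center.
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, scale)
    # Add a random translation to the last column (tx, ty).
    M[:, 2] += rng.uniform(-max_shift, max_shift, size=2)
    return cv2.warpAffine(image, M, (w, h))
```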
[0041] Determining a set of poses that represent possible instances of the model in the modified training image(s) (340) can include providing the generated model (320) and the modified training image(s) (330) to run-time module 220 in order to generate a set of poses 270 for each modified training image. The set of poses represents the possible instances of the generated model (320) found by the run-time module 220 in a respective modified training image. For example, a set of poses can include each pose of a pattern found by run-time module 220 that satisfies a predetermined criterion (e.g., exceeds a threshold score).
[0042] The set of poses can be evaluated by assigning scores to one or more of the poses in the set of poses. The score of a particular pose can be calculated from the similarity between the run-time object and the model at that pose. There are many ways to define similarity. For example, similarity can be, roughly, the ratio of the number of model edge features matched by the run-time object to the total number of model edge features. Other similarity measures can be used. Each set of poses can include a first pose that is calculated to have a primary score (e.g., the highest calculated score from at least a portion of the set of poses) and a second pose that is calculated to have a secondary score (e.g., the second-highest calculated score from the portion of the set of poses).

[0043] Computing a quality metric of the model can be based on the score evaluations of the set of poses (350). In some embodiments, the quality metric is based on the primary score (e.g., the quality metric is the primary score) and/or the secondary score (e.g., the quality metric is the secondary score or the difference between the primary score and the secondary score). The quality metric can be based on a plurality of scores such as, for example, a distribution of the computed scores for a portion of the set of poses (e.g., the quality metric can be the average of the scores or the standard deviation of the scores). In general, a low quality metric indicates a model that is not unique, and a high quality metric indicates a model that is unique.
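The scoring and metric computation of steps (340)-(350) can be illustrated with a short sketch. Given the list of pose scores for one modified training image, it reports several of the candidate metrics discussed above; which one (or which combination) to use is a design choice, and the names here are assumptions.

```python
import numpy as np

def evaluate_pose_scores(scores):
    """Summarize one set of pose scores into candidate quality metrics:
    the primary score, the secondary score, their gap, and the spread
    of the secondary distribution."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]
    primary = scores[0]
    secondary = scores[1] if scores.size > 1 else 0.0
    # A high primary score and a large primary/secondary gap both
    # indicate a unique model; a bunched secondary distribution does not.
    return {"primary": primary,
            "secondary": secondary,
            "gap": primary - secondary,
            "secondary_spread": scores[1:].std() if scores.size > 2 else 0.0}
```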
[0044] Providing feedback with respect to the quality metric (e.g., uniqueness) of the model (360) can include, for example, alerting the user via a user interface (e.g., a pop-up window) that the current model selected by the user is not a unique model or fails to satisfy a uniqueness criterion. In some embodiments, feedback can be provided in an automated system that generates models automatically. If the quality metric fails to satisfy a predetermined criterion, a new model can be generated based on the training image (320).
[0045] FIG. 4 illustrates a flowchart 400 depicting a general process flow for determining a quality metric of a model of an object in a machine vision application by perturbing model parameters. The elements of flowchart 400 are described using the exemplary machine vision tool 200 of FIG. 2. Determining a quality metric of a model of an object in a machine vision application can include receiving a training image 230 and a first set of model parameters (410), generating a first model of an object based on the training image and the first set of model parameters (420), generating a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters (430), determining a set of poses that represent possible instances of the second model in the training image (440), computing a quality metric of the first model based on an evaluation of the set of poses with respect to the training image (450), and/or providing feedback with respect to the uniqueness of the model (460). A plurality of models of the object can be generated based on the training image and sets of model parameters modified in different ways from the first set of model parameters.
[0046] In some embodiments, generating a first model of an object based on the training image (420) includes, for example, manually generating the model (e.g., by drawing a box on a training image 230 with a mouse). In some embodiments, generating a model of an object based on the training image (420) includes, for example, processing the training image 230 to generate an edge-based representation of the object (either in image-based form or shape-based form). With respect to generating the second model of the object (430), if the first model is based on a manually-selected region of interest of the training image, the second model can also be based on the same region of interest but using a different set of model parameters.
[0047] In general, training parameters 240 are used by training module 210 to generate the model 260. For example, in some embodiments, model parameters can include an elasticity parameter, a grain limit or granularity, a coarse-value acceptance fraction, a contrast threshold, an edge-value threshold, a trainClientFromPattern parameter, or any combination thereof. Table I below summarizes the model parameters and examples of how the parameters affect the generation of the models.
Table I
Model Parameters
In some embodiments, the first set of model parameters can be assigned default values. In some embodiments, the first set of model parameters can be modified from the default values or from values 240 provided to training module 210. Modifying a set of model parameters can include changing one or more values associated with one or more of the model parameters in the set. For example, some thresholds can be made slightly different from the default (e.g., the edge-value threshold can be changed by +/-5 gray levels), which can make the candidates in the run-time image look different (e.g., fewer or more edges may be used in matching) than when the default is used, causing changed matching results. The trainClientFromPattern parameter can be changed slightly (e.g., +/-0.5 pixel in translation and +/-0.1 degree in rotation), making the candidates look slightly different and causing different matching results. Modifying model parameters and then determining poses of possible instances, in the training image, of models generated from the different sets of model parameters
advantageously simulates the run-time uniqueness of the first model during the training stage. In general, modification of the model parameters can increase the chance for secondary results to be detected by run-time module 220.
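A sketch of this parameter-perturbation loop (flowchart 400) is shown below. The train and find_poses callables stand in for the training and run-time modules, find_poses is assumed to return pose scores directly, and the +/-5 grey-level jitter of the edge-value threshold follows the example above; the parameter key name is hypothetical.

```python
import numpy as np

def parameter_perturbation_metric(training_image, train, find_poses,
                                  base_params, n_trials=50, rng=None):
    """Skeleton of flowchart 400: re-train models under slightly
    perturbed parameters (step 430), search the unmodified training
    image (step 440), and collect primary/secondary gaps (step 450)."""
    rng = rng or np.random.default_rng()
    gaps = []
    for _ in range(n_trials):
        params = dict(base_params)
        # e.g., jitter the edge-value threshold by up to +/-5 grey levels.
        params["edge_value_threshold"] += int(rng.integers(-5, 6))
        model = train(training_image, params)
        scores = sorted(find_poses(model, training_image), reverse=True)
        if len(scores) >= 2:
            gaps.append(scores[0] - scores[1])
    return min(gaps) if gaps else None
```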
[0048] Determining a set of poses that represent possible instances of the generated models in the training image (440) can include providing the generated models (e.g., the second model) and the training image to run-time module 220 in order to generate a set of poses 270 for each generated model. The set of poses represent possible instances of a generated model found by the run-time module 220 in the training image. For example, a set of poses can include each pose of a pattern found by run-time module 220 that satisfies a predetermined criterion (e.g., surpasses a threshold score).
[0049] Computing a quality metric of the first model can be based on the score evaluations of the set of poses (450), similar to the computation step (350) of FIG. 3A. Providing feedback with respect to the quality metric (e.g., uniqueness) of the model (460) can include, for example, alerting the user via a user interface (e.g., a pop-up window) that the current model selected by the user is not a unique model or fails to satisfy a uniqueness criterion. In some embodiments, feedback can be provided in an automated system that generates models automatically. If the quality metric fails to satisfy a predetermined criterion, a new first model can be generated based on the training image (420).
[0050] In yet further embodiments, aspects of flowcharts 300a-300b and 400 can be combined. FIG. 5 illustrates a flowchart depicting a general process flow 500 for determining a quality metric of a model of an object in a machine vision application by perturbing both a training image and model parameters. One or more training images 501a-c can be modified (step 330 of FIG. 3A) based on a baseline training image 230. In addition, one or more models 502a-b can be generated (step 430 of FIG. 4) based on training image 230 and a set of model parameters (not shown). Model #1 260 can be an original model for which a quality metric is being evaluated. A set of poses 510 of possible instances of the respective models 260 and/or 502a-b in the respective training images 501a-c can be determined (step 340)/(step 440). In some embodiments, the poses 510 for a single modified training image (e.g., modified training image #1 501a) are determined for one or more models 260 and/or 502a-b. In other embodiments, the poses 510 for a single model (e.g., model #2 502a) are determined for one or more modified training images 501a-c. In general, any predetermined or random set of poses 510 can be used to compute (350)/(450) the quality metric for model 260.
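One way to schedule the combined evaluation of FIG. 5 is the cross-product sketch below: every parameter-perturbed model is run against every modified training image, and the resulting pose sets 510 are collected for the metric computation. The callables and the full cross-product schedule are assumptions; as the text notes, any predetermined or random subset of these combinations can be used instead.

```python
def combined_pose_sets(training_image, modifiers, models, find_poses):
    """FIG. 5 combined flow: pair perturbed images (cf. 501a-c) with
    perturbed-parameter models (cf. 260, 502a-b) and gather pose
    sets 510 for the quality-metric computation (350)/(450)."""
    pose_sets = []
    for modify in modifiers:            # image perturbations (step 330)
        image = modify(training_image)
        for model in models:            # parameter-perturbed models (step 430)
            pose_sets.append(find_poses(model, image))
    return pose_sets
```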
[0051] Different search parameters (e.g., a starting pose or search range parameters for one or more DOFs) provided to run-time module 220 can be modified. For example, a single training image 230 and a model 260 based on the training image 230 can be provided to run-time module 220, which performs multiple test runs against randomly perturbed search parameters. Modification of the search parameters can increase the chance for secondary results to be detected by run-time module 220. In general, sets of poses of possible instances of models can be determined for any combination of modified training images, different sets of model parameters used to generate the respective models, and/or different sets of search parameters used to perform the simulated run-time analyses.
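Perturbing the search parameters before each simulated run might look like the following sketch. The field names (start_x, start_y, angle_range) and jitter magnitudes are purely illustrative assumptions, not the run-time module's actual interface.

```python
import numpy as np

def perturb_search_params(base, rng=None):
    """Randomly jitter run-time search parameters (starting pose and a
    DOF search range) before a simulated search run."""
    rng = rng or np.random.default_rng()
    jittered = dict(base)
    jittered["start_x"] = jittered["start_x"] + rng.uniform(-2.0, 2.0)
    jittered["start_y"] = jittered["start_y"] + rng.uniform(-2.0, 2.0)
    # Widen or narrow the rotation search range by up to 5%.
    jittered["angle_range"] = jittered["angle_range"] * (1.0 + rng.uniform(-0.05, 0.05))
    return jittered
```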
[0052] FIGS. 6A-6B illustrate graphs depicting examples of score distributions 600a-600b for a set of poses resulting from different models. Score distributions 600a-600b can be determined, for example, by steps (340) or (440) of FIGS. 3A and 4, respectively. Score distribution 600a illustrates an example of scores for a set of poses associated with a non-unique model. A low score of the "best" result 610 and/or a small gap g1 between the "best" result 610 and the "second best" result 620 can indicate poor uniqueness quality for the model used in determining these pose scores. For example, the bunched distribution of the scores 630 near the highest-scoring pose 610 can indicate an increased risk of mis-detections and spurious results if the model were actually to be used during run-time. In addition, the small gap g1 between the highest-scoring pose 610 and the secondary results 630 can also indicate an increased risk of mis-detections and spurious results. Gap g1 can be calculated, for example, as the difference between the highest score 610 and the next-highest score 620, between the highest score 610 and an average of one or more secondary scores 630, or as another combination of scores.
[0053] In contrast, score distribution 600b illustrates an example of scores for a set of poses associated with a unique model. A high score of the "best" result 650 and/or a large gap g2 between the "best" result 650 and the "second best" result 660 can indicate good uniqueness quality for the model used in determining these pose scores. For example, the large gap g2 between the highest-scoring pose 650 and the second-highest score 660 can indicate a decreased risk of mis-detections and spurious results if the model were actually to be used during run-time.
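As a worked example of reading these distributions, the sketch below classifies a score distribution as FIG. 6A-like (non-unique) or FIG. 6B-like (unique) from the primary score and the gap g between the two best results. The thresholds are assumptions; in practice they would be tuned to the scoring scale of the run-time tool.

```python
def is_unique(scores, min_primary=0.8, min_gap=0.3):
    """Return True for a FIG. 6B-like distribution: a high best score
    and a large gap between the best and second-best results."""
    ordered = sorted(scores, reverse=True)
    primary = ordered[0]
    gap = primary - ordered[1] if len(ordered) > 1 else primary
    return primary >= min_primary and gap >= min_gap
```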
[0054] The above-described techniques can be implemented in digital and/or analog electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The implementation can be as a computer program product, e.g., a computer program tangibly embodied in a machine-readable storage device, for execution by, or to control the operation of, a data processing apparatus, e.g., a programmable processor, a computer, and/or multiple computers. A computer program can be written in any form of computer or programming language, including source code, compiled code, interpreted code and/or machine code, and the computer program can be deployed in any form, including as a standalone program or as a subroutine, element, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one or more sites.
[0055] Method steps can be performed by one or more processors executing a computer program to perform functions of the invention by operating on input data and/or generating output data. Method steps can also be performed by, and an apparatus can be implemented as, special purpose logic circuitry, e.g., an FPGA (field-programmable gate array), an FPAA (field-programmable analog array), a CPLD (complex programmable logic device), a PSoC (Programmable System-on-Chip), an ASIP (application-specific instruction-set processor), an ASIC (application-specific integrated circuit), or the like. Subroutines can refer to portions of the stored computer program and/or the processor, and/or the special circuitry that implements one or more functions.
[0056] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital or analog computer. Generally, a processor receives instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and/or data. Memory devices, such as a cache, can be used to temporarily store data. Memory devices can also be used for long-term data storage. Generally, a computer also includes, or is operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. A computer can also be operatively coupled to a communications network in order to receive instructions and/or data from the network and/or to transfer instructions and/or data to the network. Computer-readable storage media suitable for embodying computer program instructions and data include all forms of volatile and nonvolatile memory, including by way of example semiconductor memory devices, e.g., DRAM, SRAM, EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and optical disks, e.g., CD, DVD, HD-DVD, and Blu-ray disks. The processor and the memory can be supplemented by and/or incorporated in special purpose logic circuitry.
[0057] To provide for interaction with a user, the above described techniques can be implemented on a computer in communication with a display device, e.g., a CRT (cathode ray tube), plasma, or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse, a trackball, a touchpad, or a motion sensor, by which the user can provide input to the computer (e.g., interact with a user interface element). Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, and/or tactile input.
[0058] The above described techniques can be implemented in a distributed computing system that includes a back-end component. The back-end component can, for example, be a data server, a middleware component, and/or an application server. The above described techniques can be implemented in a distributed computing system that includes a front-end component. The front-end component can, for example, be a client computer having a graphical user interface, a Web browser through which a user can interact with an example implementation, and/or other graphical user interfaces for a transmitting device. The above described techniques can be implemented in a distributed computing system that includes any combination of such back-end, middleware, or front-end components.
[0059] Communication networks can include one or more packet-based networks and/or one or more circuit-based networks in any configuration. Packet-based networks can include, for example, an Ethernet-based network (e.g., traditional Ethernet as defined by the IEEE or Carrier Ethernet as defined by the Metro Ethernet Forum (MEF)), an ATM-based network, a carrier Internet Protocol (IP) network (LAN, WAN, or the like), a private IP network, an IP private branch exchange (IPBX), a wireless network (e.g., a Radio Access Network (RAN)), and/or other packet-based networks. Circuit-based networks can include, for example, the Public Switched Telephone Network (PSTN), a legacy private branch exchange (PBX), a wireless network (e.g., a RAN), and/or other circuit-based networks.
[0060] Carrier Ethernet can be used to provide point-to-point connectivity (e.g., new circuits and TDM replacement), point-to-multipoint (e.g., IPTV and content delivery), and/or multipoint-to-multipoint (e.g., Enterprise VPNs and Metro LANs). Carrier Ethernet advantageously provides for a lower cost per megabit and more granular bandwidth options. Carrier Ethernet shares the same basic MAC addressing and frame structure as classic Ethernet, but also can leverage certain physical layer specification and components (e.g., 10 and 100 Megabit, 1 and 10 Gigabit copper and optical interfaces). Other Carrier Ethernet aspects (e.g., tagging scheme, resiliency design, operations, administration and management (OAM)) have been optimized for carrier design requirements and operational practices. The result is a cost effective, flexible technology that can support building highly scalable and robust networks.
[0061] Devices of the computing system can include, for example, a computer, a computer with a browser device, a telephone, an IP phone, a mobile device (e.g., cellular phone, personal digital assistant (PDA) device, laptop computer, electronic mail device), and/or other communication devices. The browser device includes, for example, a computer (e.g., desktop computer, laptop computer) with a world wide web browser (e.g., Microsoft® Internet Explorer® available from Microsoft Corporation, Mozilla® Firefox available from Mozilla Corporation).

[0062] One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The foregoing embodiments are therefore to be considered in all respects illustrative rather than limiting of the invention described herein. The scope of the invention is thus indicated by the appended claims, rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

CLAIMS

What is claimed is:
1. A computerized method for determining a quality metric of a model of an object in a machine vision application, the method comprising:
receiving a training image;
generating a model of an object based on the training image;
generating a modified training image based on the training image;
determining a set of poses that represent possible instances of the model in the modified training image; and
computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
2. The method of claim 1 further comprising computing at least a primary score and a secondary score for at least a portion of the set of poses.
3. The method of claim 2 wherein computing the quality metric is based on the primary score and the secondary score.
4. The method of claim 2 wherein computing the quality metric is based on a distribution of the computed scores for the portion of the set of poses.
5. The method of claim 1 wherein the model comprises a geometric description of the object in the training image.
6. The method of claim 1 wherein the model comprises a portion of the training image.
7. The method of claim 1 wherein generating the modified training image comprises adding noise to the training image.
8. The method of claim 7 wherein the noise comprises amplifier noise, salt-and-pepper noise, shot noise, quantization noise, film grain noise, non-isotropic noise, or any combination thereof.
9. The method of claim 7 wherein the noise is added to one or more pixels in the training image.
10. The method of claim 1 wherein generating the modified training image comprises transforming the training image by one or more degrees-of-freedom of rotation, translation, scale, skew, aspect ratio, or any combination thereof.
11. The method of claim 1 wherein generating the modified training image comprises changing the resolution of the training image.
12. The method of claim 1 further comprising:
generating a plurality of modified training images based on the training image; and
determining a set of poses for each of the plurality of modified training images, each set of poses representing possible instances of the model in one of the plurality of modified training images;
wherein computing the quality metric is further based on an evaluation of the sets of poses, determined from the plurality of modified training images, with respect to the modified training image.
13. The method of claim 12 further comprising computing at least primary scores and secondary scores for at least a portion of the set of poses and at least portions of each of the sets of poses computed from the plurality of modified training images.
14. The method of claim 13 wherein computing the quality metric of the model is based on the secondary scores.
15. The method of claim 13 wherein computing the quality metric of the model is based on a distribution of the computed scores for the portion of the set of poses and distributions of the computed scores for the portions of the sets of poses computed from the plurality of modified training images.
16. The method of claim 1 further comprising modifying a baseline model parameter, wherein generating the model is based on the modified baseline model parameter.
17. The method of claim 16 wherein the baseline model parameter comprises an elasticity parameter, a grain limit or granularity, a coarse-value acceptance fraction, a contrast threshold, an edge-value threshold, trainClientFromPattern, or any combination thereof.
18. A computer program product, tangibly embodied in a machine-readable storage device, the computer program product including instructions being operable to cause a data processing apparatus to:
receive a training image;
generate a model of an object based on the training image;
generate a modified training image based on the training image;
determine a set of poses that represent possible instances of the model in the modified training image; and
compute a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
19. A system for determining a quality metric of a model of an object in a machine vision application, the system comprising:
interface means for receiving a training image;
model generating means for generating a model of an object based on the training image;
image modifying means for generating a modified training image based on the training image;
processor means for determining a set of poses that represent possible instances of the model in the modified training image; and
processor means for computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
20. A machine vision system for determining a quality metric of a model of an object in a machine vision application, the system comprising:
an interface for receiving a training image;
a model generating module for generating a model of an object based on the training image;
an image processing module for generating a modified training image based on the training image;
a run-time module for determining a set of poses that represent possible instances of the model in the modified training image; and
a quality-metric module for computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
21. A computerized method for determining a quality metric of a model of an object in a machine vision application, the method comprising:
receiving a training image and a first set of model parameters;
generating a first model of an object;
generating a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters;
determining a set of poses that represent possible instances of the second model in the training image; and
computing a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
22. The method of claim 21 wherein modifying the first set of model parameters to produce the second set of model parameters comprises perturbing one or more values in the first set of model parameters.
23. The method of claim 21 wherein the model comprises a geometric description of the object in the training image.
24. The method of claim 21 wherein the model comprises a portion of the training image.
25. The method of claim 21 further comprising computing at least a primary score and a secondary score for at least a portion of the set of poses.
26. The method of claim 25 wherein computing the quality metric of the model is based on the primary score and the secondary score.
27. The method of claim 25 wherein computing the quality metric of the model is based on a distribution of the computed scores for the portion of the set of poses.
28. The method of claim 21 further comprising:
generating a plurality of models based on the training image and a plurality of different sets of model parameters, the plurality of different sets of model parameters being based on modifications to the first set of model parameters; and
determining a set of poses for each of the plurality of models, each set of poses representing possible instances of one of the plurality of models in the training image;
wherein computing the quality metric is further based on an evaluation of the sets of poses, for each of the plurality of models, with respect to the training image.
29. The method of claim 28 further comprising computing at least primary scores and secondary scores for at least a portion of the first set of poses and at least portions of the sets of poses computed for each of the plurality of models.
30. The method of claim 29 wherein determining the quality metric of the model is based on the secondary scores.
31. The method of claim 29 wherein determining the quality metric of the model is based on a distribution of the computed scores for the portion of the first set of poses and distributions of the computed scores for the portions of the sets of poses computed for each of the plurality of models.
32. The method of claim 21 further comprising modifying the training image.
33. The method of claim 32 wherein modifying the received training image comprises adding noise to the received training image.
34. The method of claim 21 wherein determining the set of poses comprises modifying one or more search parameters.
35. The method of claim 34 wherein the one or more search parameters comprise a starting pose value, one or more search range values, or any combination thereof.
36. A computer program product, tangibly embodied in a machine-readable storage device, the computer program product including instructions being operable to cause a data processing apparatus to:
receive a training image and a first set of model parameters;
generate a first model of an object;
generate a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters;
determine a set of poses that represent possible instances of the second model in the training image; and
compute a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
37. A system for determining a quality metric of a model of an object in a machine vision application, the system comprising:
interface means for receiving a training image and a first set of model parameters;
model generating means for generating a first model of an object;
model generating means for generating a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters;
processor means for determining a set of poses that represent possible instances of the second model in the training image; and
processor means for computing a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
38. A machine vision system for determining a quality metric of a model of an object in a machine vision application, the system comprising:
an interface for receiving a training image and a first set of model parameters;
a model generating module for generating a first model of an object and for generating a second model of the object based on the training image and a second set of model parameters modified from the first set of model parameters;
a run-time module for determining a set of poses that represent possible instances of the second model in the training image; and
a quality-metric module for computing a quality metric of the first model based on an evaluation of the set of poses with respect to the training image.
PCT/US2011/066883 2010-12-29 2011-12-22 Determining the uniqueness of a model for machine vision WO2012092132A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US12/981,268 US8542905B2 (en) 2010-12-29 2010-12-29 Determining the uniqueness of a model for machine vision
US12/981,268 2010-12-29
US12/981,275 2010-12-29
US12/981,275 US8542912B2 (en) 2010-12-29 2010-12-29 Determining the uniqueness of a model for machine vision

Publications (2)

Publication Number Publication Date
WO2012092132A2 true WO2012092132A2 (en) 2012-07-05
WO2012092132A3 WO2012092132A3 (en) 2012-10-04

Family

ID=45476683

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/066883 WO2012092132A2 (en) 2010-12-29 2011-12-22 Determining the uniqueness of a model for machine vision

Country Status (1)

Country Link
WO (1) WO2012092132A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020166813A (en) * 2019-03-11 2020-10-08 キヤノン株式会社 Medical image processing device, medical image processing method, and program
WO2022066378A1 (en) * 2020-09-24 2022-03-31 Kla Corporation Methods and systems for determining quality of semiconductor measurements
US11922601B2 (en) 2018-10-10 2024-03-05 Canon Kabushiki Kaisha Medical image processing apparatus, medical image processing method and computer-readable medium

Citations (2)

Publication number Priority date Publication date Assignee Title
US7016539B1 (en) 1998-07-13 2006-03-21 Cognex Corporation Method for fast, robust, multi-dimensional pattern recognition
US7088862B1 (en) 1997-11-26 2006-08-08 Cognex Corporation Fast high-accuracy multi-dimensional pattern inspection

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US7734071B2 (en) * 2003-06-30 2010-06-08 Honda Motor Co., Ltd. Systems and methods for training component-based object identification systems
US7640145B2 (en) * 2005-04-25 2009-12-29 Smartsignal Corporation Automated model configuration and deployment system for equipment health monitoring

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US7088862B1 (en) 1997-11-26 2006-08-08 Cognex Corporation Fast high-accuracy multi-dimensional pattern inspection
US7016539B1 (en) 1998-07-13 2006-03-21 Cognex Corporation Method for fast, robust, multi-dimensional pattern recognition

Cited By (5)

Publication number Priority date Publication date Assignee Title
US11922601B2 (en) 2018-10-10 2024-03-05 Canon Kabushiki Kaisha Medical image processing apparatus, medical image processing method and computer-readable medium
JP2020166813A (en) * 2019-03-11 2020-10-08 キヤノン株式会社 Medical image processing device, medical image processing method, and program
JP7297628B2 (en) 2019-03-11 2023-06-26 キヤノン株式会社 MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING METHOD AND PROGRAM
WO2022066378A1 (en) * 2020-09-24 2022-03-31 Kla Corporation Methods and systems for determining quality of semiconductor measurements
US11530913B2 (en) 2020-09-24 2022-12-20 Kla Corporation Methods and systems for determining quality of semiconductor measurements

Also Published As

Publication number Publication date
WO2012092132A3 (en) 2012-10-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11808113

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11808113

Country of ref document: EP

Kind code of ref document: A2