EP1435065A1 - Automatic determination of geometric models for partial optical recognition - Google Patents
Automatic determination of geometric models for partial optical recognition (original title: Determination automatique de modeles geometriques pour des reconnaissances optiques partielles)
- Publication number
- EP1435065A1 (application EP02774450A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- model
- groups
- features
- describing
- similarity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
Definitions
- The invention relates to the generation of a model that represents a class of image objects and thus serves as a recognition model for new members of this class.
- Optical and/or acoustic methods are used to inspect workpieces in production, in quality control, or in identity recognition.
- The methods used must be highly adaptive and stable, since physical variations in the products to be tested (for example poor quality, changed orientation, or damage) as well as different lighting conditions alter the properties of the object under examination.
- Recognition of objects or patterns is known to be carried out by means of digital image and signal recording technology, combined with image and signal processing routines that classify the objects or patterns.
- The routines use methods in which the image objects occurring in the digital images are analyzed on the basis of shape features such as gray-value contours, texture, edges, corners, and straight lines.
- The particularly characteristic, reliable, and descriptive shape features of an object are combined to form a model. Different shape features lead to different models.
- The object is placed under a camera and recorded.
- The resulting images are first analyzed for the shape features of the objects in the image.
- Shape features such as straight lines, corners, circles, lines, or parts of surfaces are recognized, extracted from the image, and combined into a model.
- The selection of the shape features suitable for the models is based on a statistical assessment of all features extracted from many images.
- The measured values of the features initially scatter randomly around an average value due to lighting differences, object differences, and camera noise.
- The complications introduced here are currently compensated for by an interactive process in which a specialist generally creates and tests the models.
- The formation of groups used in the invention results from a similarity determination on the shape features extracted from the images.
- The formation of groups used to create a model is based on similarity criteria such as feature type, length, angle, form factor, area, or brightness. Similar features are grouped together. In this method, for example, a group could represent surfaces of a certain size and brightness, or an edge shape of a certain intensity. Further features from a new image, such as a similar brightness distribution or edge shape, are then added to the existing groups, as illustrated by the sketch below.
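As a rough illustration of this grouping step, here is a minimal Python sketch. The `Feature` attributes, the tolerance values, and the first-match assignment rule are illustrative assumptions and are not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    kind: str          # e.g. "edge", "corner", "area" (assumed feature types)
    length: float      # geometric extent of the feature
    angle: float       # orientation in degrees
    brightness: float  # mean gray value

@dataclass
class Group:
    members: list = field(default_factory=list)

    def mean(self, attr: str) -> float:
        # The mean over all members serves as the group's representative value.
        return sum(getattr(f, attr) for f in self.members) / len(self.members)

def is_similar(f: Feature, g: Group, tol: dict) -> bool:
    # A candidate must have the same feature type and lie within the
    # selectable tolerances of the group means for each attribute.
    return (f.kind == g.members[0].kind and
            all(abs(getattr(f, a) - g.mean(a)) <= tol[a]
                for a in ("length", "angle", "brightness")))

def group_features(features, groups, tol):
    # Assign each extracted feature to the first sufficiently similar
    # group; otherwise the feature founds a new group.
    for f in features:
        for g in groups:
            if is_similar(f, g, tol):
                g.members.append(f)
                break
        else:
            groups.append(Group(members=[f]))
    return groups
```

With, say, `tol = {"length": 2.0, "angle": 5.0, "brightness": 10.0}`, features extracted from successive images accumulate in groups whose means stabilize as members are added.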
- The invention is therefore based on the object of specifying a method for the automatic generation of models in which the automatic detection of object-describing features yields models that are highly representative of the objects to be recognized, thus enabling cost-effective adaptivity of a recognition system.
- The invention essentially consists in the fact that groups of shape features, which can be properties of an image object, are completed by adding further shape features that are similar to the shape features already in the groups; this addition is controlled by selectable threshold values and by the inclusion of additional information obtained under changed circumstances, so that the groups approximately represent an object class to be recognized.
- The model generated in this way can be used in a first step for optical partial recognition.
- The features of the object recognized by the partial recognition can lead to an expansion and completion of the model.
- A selection of image signal information is collected in an object-describing group that contains object-describing shape features.
- Similarity criteria first lead to the decision whether an object-describing feature can be assigned to the group.
- A selectable threshold enables the decision whether a group becomes part of the recognition model; at least the strong groups are used in a model for partial recognition of an object. The strength of a group is determined by the number of its features. After the generation of a first model, further image recordings are carried out, from which new object-describing features can be obtained.
- Partial recognition is understood to mean, in particular, a recognition of a part of an image object that clearly has the most important features of the object or a specific feature.
- The method is advantageously carried out in such a way that further object-describing features are added to the existing groups in accordance with a similarity determination until the groups no longer change significantly.
- These statistical values can be mean values and/or maximum values; in addition, scatter measures can be stored for each object-describing feature, these values being used to characterize a model, as in the sketch below.
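A minimal sketch of the statistics named above (mean, maximum, and a scatter measure per feature); the dictionary layout and the choice of standard deviation as the scatter measure are assumptions:

```python
import statistics

def characterize(values):
    # Statistical values stored per object-describing feature and used
    # to characterize a model: mean and/or maximum plus a scatter measure.
    return {
        "mean": statistics.fmean(values),
        "max": max(values),
        "scatter": statistics.pstdev(values),  # 0.0 for a single member
    }

# Example: lengths of one group's members, collected over many images.
print(characterize([10.2, 9.8, 10.1, 10.4]))
```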
- Based on a first partial recognition of an object that has been displaced from the optical image acquisition axis, transformation coefficients for the displaced object position are determined; a reverse transformation is applied to the shape features of the displaced object, and the transformed features, given sufficient similarity, are added to the corresponding already existing groups, as a result of which larger groups can be generated.
- The transformation coefficients describe a change in size and/or a change in position of the object.
- A further step in the generation of a robust geometric model is achieved by setting up imaging equations for determining the relative position of an object feature, taking into account the image recording technique and the perspective distortion at a given object position.
- An object-describing model can be generated at a central location in the recording field, and this model can be used for the partial recognition of suitably displaced objects in order to generate a more extensive model for at least one further object location; the displacement is carried out in all directions, and the model is adjusted at each step.
- Figure 2 shows a process for expanding a model taking into account difficult image recording circumstances
- Figure 3 shows a process for expanding a model taking into account perspective differences and properties of the recording electronics
- FIG. 1 shows the process for developing a geometric model with the help of thresholds and similarity determinations.
- The preliminary procedure for generating a first geometric model is indicated by A.
- The image acquisition of an object takes place in step 1 and is followed by feature extraction in step 2.
- The extent of the desired similarity is determined by thresholds for the similarity of each feature in step 3. Since features have to be extracted from many images, the above steps are carried out several times.
- The recorded shape features scatter; this scatter initially characterizes a group of similar features in the form of feature mean values and scatter measures. These mean values and scatter measures serve as a further basis for evaluating the similarity of a candidate to be newly admitted to the group, for example from a newly recorded image. These statistical values can be saved, at the latest in step 9.
- The storing of a group of shape features is represented in FIG. 1 as step 4. The fact that this step lies outside both frames A and B shows that the groups are used both in process A and in process B. The number of members assigned to a group is saved as the group strength.
- By appropriate selection of the feature similarity thresholds, similar new features are added to a group, i.e., the group grows in its number of members and thus in group strength. For example, the distance of a new feature to the calculated mean of the previously accepted members of a group can be used as a similarity value; a lower and/or upper threshold for this distance would then serve as the similarity threshold in this example. A further threshold can require a minimum number of object-describing features, each assigned to corresponding groups. Less similar features are excluded from the group. A larger group contains more information about the object, which is described more precisely by the group and its scatter values, as sketched below.
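The following sketch shows the membership test just described: the distance of a new feature parameter to the group mean is compared against selectable lower/upper thresholds, and only sufficiently strong groups enter the model. The function names and the minimum-strength rule are assumptions:

```python
def within_thresholds(candidate: float, group_values: list,
                      lower: float, upper: float) -> bool:
    # Similarity value: distance of the new feature's parameter to the
    # calculated mean of the previously accepted group members.
    mean = sum(group_values) / len(group_values)
    return lower <= abs(candidate - mean) <= upper

def strong_groups(groups: list, min_strength: int) -> list:
    # Group strength = number of members; only groups that reach a
    # minimum strength are merged into the model for partial recognition.
    return [g for g in groups if len(g) >= min_strength]

# Example: a candidate length of 10.6 against a group with mean 10.125.
print(within_thresholds(10.6, [10.2, 9.8, 10.1, 10.4], 0.0, 1.0))  # True
```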
- The average of the set of all features contained in the group is suitable for the description of a model, for example for the representation of brightness distributions.
- A maximum of the set of all features contained in the group would be suitable in order to recognize straight lines of maximum length in future images.
- A particular advantage of the above-mentioned method is that the larger the number of group members, the more precisely an ideal mean value can be calculated and the geometry of the object to be recognized can be described.
- The strong groups represent those shape features that are extracted particularly reliably from the images and are therefore well suited for describing the object for partial recognition.
- In step 5, groups are merged into a model for partial recognition. It is preferred to use the strong groups from step 4, since these represent the shape features that are extracted particularly reliably and repeatably from the recorded images and are therefore optimally suited for describing the object or model for at least one partial recognition.
- The model is used for a first partial recognition or position determination of the object to be recognized. However, this model is not sufficient to perform partial recognition with high accuracy under difficult circumstances. Based on this model, as explained below, a more robust model can be generated.
- Differences between captured images must be taken into account, e.g., differences due to camera noise, lighting changes, or changed camera perspectives.
- After the generation of the first model, further image recordings are made in step 6 under changed circumstances, and the descriptive shape features of these images are extracted in step 7. These are then compared with the existing groups from step 4 and are included in the groups if they are sufficiently similar. Thresholds that may have been adapted to the new circumstances can be applied in step 8. Even very small groups (which did not contribute to the first model) can therefore continue to grow. Then, in step 10, a further model is derived from the groups, which represents a more complete and reliable description of the object. This process is repeated a few times until the groups no longer change significantly; a sketch of this refinement loop follows.
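A sketch of this refinement loop (steps 6 to 10), with the capture, extraction, grouping, and model-building operations left as hypothetical callbacks, since the patent does not prescribe concrete routines:

```python
def refine_model(groups, capture_image, extract_features, add_if_similar,
                 build_model, max_rounds=10, eps=0.01):
    # Repeat image recording under changed circumstances until the
    # groups no longer change significantly.
    model = build_model(groups)
    for _ in range(max_rounds):
        members_before = sum(len(g.members) for g in groups)
        for f in extract_features(capture_image()):   # steps 6 and 7
            add_if_similar(f, groups)                 # step 8 (thresholds)
        members_after = sum(len(g.members) for g in groups)
        model = build_model(groups)                   # step 10
        if members_after - members_before < eps * members_after:
            break                                     # groups are stable
    return model
```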
- The change in position and size of the object is determined on the basis of an existing model by the partial recognition in step 1 of FIG. 2, since at least a partial recognition is possible even with perspectively changed shape features.
- The differences between the new position thus determined and the position of the model (which contains the first, undistorted object position) define the coefficients necessary for a reverse transformation. This evaluation of the differences is shown as step 2.
- The object recognized by the partial recognition is shown in the left graphic and the distorted object in the right graphic.
- The transformation is shown by the dashed arrow and the changed coordinates x → x' and y → y'.
- The results of the transformation can first be saved in a step 3; a sketch of such a reverse transformation follows.
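A minimal sketch of such a reverse transformation, assuming the coefficients are a translation (dx, dy) and a scale factor, matching the change in size and/or position mentioned above; rotation is omitted for brevity:

```python
def reverse_transform(x: float, y: float,
                      dx: float, dy: float, scale: float):
    # Undo the displacement and size change estimated by the partial
    # recognition, mapping a feature of the displaced object back onto
    # the model coordinates (the reversal of x -> x', y -> y').
    return (x - dx) / scale, (y - dy) / scale

# Example: the object was found shifted by (12, -5) px and enlarged 1.1x.
print(reverse_transform(120.0, 45.0, dx=12.0, dy=-5.0, scale=1.1))
```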
- The scatter parameters of the groups show how strongly the feature parameters will scatter during partial recognition under the different image recording conditions.
- For each feature, measurement values are stored in the recognition model which characterize this scatter and which, during recognition, allow small deviations between the parameters of the model features and those of the features newly generated from the image recordings to be tolerated. These scatter measurement values are called tolerances and are derived from the scatter parameters of the groups during model generation.
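A sketch of how such tolerances might be applied during recognition; taking the tolerance as a multiple k of the stored scatter measure is an assumption for illustration:

```python
def matches_model_feature(value: float, model_mean: float,
                          scatter: float, k: float = 3.0) -> bool:
    # Small deviations between a model feature parameter and a newly
    # extracted feature parameter are tolerated if they stay within a
    # tolerance derived from the group scatter at model-generation time.
    return abs(value - model_mean) <= k * scatter

print(matches_model_feature(10.9, model_mean=10.2, scatter=0.4))  # True
```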
- The recognition model must take these differences in distance into account, so that the influence of the object position in the image on the mutual position of the shape features, i.e., the influence of perspective distortion, can be accounted for.
- Recognition models can thus be generated automatically for different object positions in the image, in which the mutual position of the shape features in the image (2-D) differs due to the perspective distortion.
- This process is shown in Figure 3.
- The perspective imaging of the camera K and the optics is modeled, and a system of imaging equations is created for each object position; this can be done, for example, with an evaluation unit A.
- The system of equations is then solved for the unknown distances of the features.
- These distances can also be specified relative to a base distance (e.g., relative to the table plane on which the object is moved); they are then called feature heights. A worked example of such an imaging equation follows.
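As a minimal worked example of such an imaging equation, assume a downward-looking pinhole camera at height H above the table; the specific model and all numbers are assumptions, not from the patent:

```python
def feature_height(u1: float, u2: float, delta: float,
                   f: float, H: float) -> float:
    # Pinhole imaging equation: u = f * X / (H - h) for a feature at
    # lateral position X and height h above the table, camera at height H.
    # Shifting the object by a known delta gives u2 - u1 = f*delta/(H - h),
    # which can be solved for the feature height h.
    return H - f * delta / (u2 - u1)

# Example: f = 1000 px, camera 800 mm above the table, object shifted by
# 50 mm, feature image moved from 100 px to 170 px.
print(feature_height(100.0, 170.0, delta=50.0, f=1000.0, H=800.0))  # ~85.7 mm
```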
- A model is first created in the middle of the image. Then the object is shifted toward the edge of the image in small steps, the model being adapted to the new object position, i.e., to the new perspective distortion, with regard to its position parameters after each shift step and after the partial recognition with position calculation. After several of these adjustment steps, a distance from the optical center (or a relative height above the plane of displacement) can be calculated for each shape feature by comparison with the original model for the center of the image. This shifting is carried out in different directions (e.g., toward the four corners of the image). The distance from the optical center can then be determined for each shape feature with the greatest accuracy by means of a compensation calculation over all displacement steps, as sketched below.
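The compensation calculation over all displacement steps can be read as a least-squares fit; under the same assumed pinhole model as above, the image displacement of a feature is proportional to the object shift, and fitting the slope over all steps yields the feature height. This is an illustrative reading, not the patent's prescribed computation:

```python
def fit_feature_height(deltas, dus, f: float, H: float) -> float:
    # du_i = m * delta_i with slope m = f / (H - h); the least-squares
    # slope over all displacement steps gives the most accurate height.
    m = sum(d * u for d, u in zip(deltas, dus)) / sum(d * d for d in deltas)
    return H - f / m

deltas = [10.0, 20.0, 30.0, 40.0]   # object shifts in mm
dus = [14.1, 27.9, 42.2, 55.8]      # measured image displacements in px
print(fit_feature_height(deltas, dus, f=1000.0, H=800.0))  # ~85 mm
```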
- The invention is optimally used in industrial production systems for automatic optical part recognition.
- The invention has the task of determining the position or the installation location of objects, parts, or workpieces in the production process and/or of recognizing their type or identity.
- The invention can be used in quality control to check completeness, manufacturing errors, damage, or other quality defects of objects.
- The images could be recorded using a camera, suitable robotics, and a computer arrangement.
- The robotics would ensure that the objects to be captured are placed under the camera under different circumstances.
- On commands from the computer, the camera would first record image areas, which can be stored and then evaluated by a suitable stored computer program according to the inventive method.
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
According to the invention, groups of shape features, where these features can constitute properties of an image object, are completed by adding further shape features that are similar to shape features of existing groups, by means of a check against selectable threshold values and the acquisition of additional information under changed circumstances of the object; these groups thus approximately represent a class of objects to be recognized. The model produced in this way can be used in a first step for partial optical recognition. In a second step, the features of the object recognized by this partial recognition can lead to an extension and completion of the model. The advantage is that interactive supervision of the recognition system by a specialist is no longer necessary and that models with high representativeness can be produced.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE10150105A DE10150105A1 (de) | 2001-10-11 | 2001-10-11 | Automatische Ermittlung von geometrischen Modellen für optische Teilerkennungen |
DE10150105 | 2001-10-11 | ||
PCT/DE2002/003814 WO2003034327A1 (fr) | 2001-10-11 | 2002-10-09 | Determination automatique de modeles geometriques pour des reconnaissances optiques partielles |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1435065A1 (fr) | 2004-07-07 |
Family
ID=7702121
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP02774450A Withdrawn EP1435065A1 (fr) | 2001-10-11 | 2002-10-09 | Determination automatique de modeles geometriques pour des reconnaissances optiques partielles |
Country Status (4)
Country | Link |
---|---|
US (1) | US20040258311A1 (fr) |
EP (1) | EP1435065A1 (fr) |
DE (1) | DE10150105A1 (fr) |
WO (1) | WO2003034327A1 (fr) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8488011B2 (en) | 2011-02-08 | 2013-07-16 | Longsand Limited | System to augment a visual data stream based on a combination of geographical and visual information |
US8447329B2 (en) | 2011-02-08 | 2013-05-21 | Longsand Limited | Method for spatially-accurate location of a device using audio-visual information |
US8392450B2 (en) * | 2011-02-08 | 2013-03-05 | Autonomy Corporation Ltd. | System to augment a visual data stream with user-specific content |
US8493353B2 (en) | 2011-04-13 | 2013-07-23 | Longsand Limited | Methods and systems for generating and joining shared experience |
US9430876B1 (en) | 2012-05-10 | 2016-08-30 | Aurasma Limited | Intelligent method of determining trigger items in augmented reality environments |
US9064326B1 (en) | 2012-05-10 | 2015-06-23 | Longsand Limited | Local cache of augmented reality content in a mobile computing device |
US9066200B1 (en) | 2012-05-10 | 2015-06-23 | Longsand Limited | User-generated content in a virtual reality environment |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0848347A1 (fr) * | 1996-12-11 | 1998-06-17 | Sony Corporation | Méthode d'extraction des mesures caractéristiques d'objets |
AU8116498A (en) * | 1997-06-17 | 1999-01-04 | British Telecommunications Public Limited Company | Generating an image of a three-dimensional object |
US6266442B1 (en) * | 1998-10-23 | 2001-07-24 | Facet Technology Corp. | Method and apparatus for identifying objects depicted in a videostream |
JP4392886B2 (ja) * | 1999-01-22 | 2010-01-06 | キヤノン株式会社 | 画像抽出方法及び装置 |
GB0028491D0 (en) * | 2000-11-22 | 2001-01-10 | Isis Innovation | Detection of features in images |
US6834288B2 (en) * | 2001-04-13 | 2004-12-21 | Industrial Technology Research Institute | Content-based similarity retrieval system for image data |
US7337093B2 (en) * | 2001-09-07 | 2008-02-26 | Purdue Research Foundation | Systems and methods for collaborative shape and design |
JP2003132090A (ja) * | 2001-10-26 | 2003-05-09 | Olympus Optical Co Ltd | 類似データ検索装置および方法 |
US7043474B2 (en) * | 2002-04-15 | 2006-05-09 | International Business Machines Corporation | System and method for measuring image similarity based on semantic meaning |
- 2001-10-11 DE DE10150105A patent/DE10150105A1/de not_active Ceased
- 2002-10-09 WO PCT/DE2002/003814 patent/WO2003034327A1/fr not_active Application Discontinuation
- 2002-10-09 EP EP02774450A patent/EP1435065A1/fr not_active Withdrawn
- 2004-04-12 US US10/822,165 patent/US20040258311A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
See references of WO03034327A1 * |
Also Published As
Publication number | Publication date |
---|---|
DE10150105A1 (de) | 2003-04-30 |
US20040258311A1 (en) | 2004-12-23 |
WO2003034327A1 (fr) | 2003-04-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE19521346C2 (de) | Bilduntersuchungs/-Erkennungsverfahren, darin verwendetes Verfahren zur Erzeugung von Referenzdaten und Vorrichtungen dafür | |
DE69027616T2 (de) | Gerät und Verfahren zum Bestimmen von Konturen und Linien | |
DE10081029B4 (de) | Bildbearbeitung zur Vorbereitung einer Texturnalyse | |
DE60307967T2 (de) | Bildverarbeitungsverfahren für die untersuchung des erscheinungsbildes | |
DE602005002176T2 (de) | Vorrichtung und Verfahren zur dreidimensionalen Bildvermessung | |
DE102017001366A1 (de) | Geometrie-messsystem, geometrie-messapparat und geometrie-messverfahren | |
DE102004004528A1 (de) | Verfahren, Vorrichtung und Programm zur Verarbeitung eines Stereobildes | |
DE4418217A1 (de) | Formerkennungsverfahren | |
DE19746939A1 (de) | System und Verfahren zur Messung des Herzmuskels in Herzbildern | |
EP1882232B1 (fr) | Procede et dispositif pour determiner des delimitations de matiere d'un objet a tester | |
DE3505331A1 (de) | Verfahren und geraet zur vermessung des bei der eindringhaertepruefung in einer probe hinterlassenen eindrucks | |
DE102009051925A1 (de) | Verfahren zur Bestimmung von Maschendaten und Verfahren zur Korrektur von Modelldaten | |
DE19633693C1 (de) | Verfahren und Vorrichtung zur Erfassung von Targetmustern in einer Textur | |
EP2753897A1 (fr) | Procédé et dispositif de détection d'écarts d'une surface d'un objet | |
EP3649614B1 (fr) | Procédé de détermination d'incertitudes dans des données de mesure à partir d'une mesure d'un objet | |
DE112019006855T5 (de) | Simulationsvorrichtung und simulationsverfahren | |
DE102005025220B4 (de) | Gerät, Verfahren und Programm zum Beseitigen von Poren | |
DE102019131693A1 (de) | Messgerät zur untersuchung einer probe und verfahren zum bestimmen einer höhenkarte einer probe | |
DE19951146A1 (de) | Verfahren zum Reduzieren des Rauschens in einem durch Abbildung erhaltenen Signal | |
DE102017110339A1 (de) | Computerimplementiertes Verfahren zur Vermessung eines Objekts aus einer digitalen Darstellung des Objekts | |
EP1435065A1 (fr) | Determination automatique de modeles geometriques pour des reconnaissances optiques partielles | |
DE102014103137A1 (de) | Verfahren zur Bestimmung und Korrektur von Oberflächendaten zur dimensionellen Messung mit einer Computertomografiesensorik | |
DE102009056467A1 (de) | Verfahren zur Bestimmung von Oberflächen in Voxeldaten | |
DE102021201031A1 (de) | Programmerstellungsvorrichtung, Objekterkennungssystem, Ankersetzverfahren und Ankersetzprogramm | |
DE112019007961T5 (de) | Modellgenerator und aufnahmeroboter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
| 17P | Request for examination filed | Effective date: 20040406 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR |
| STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 20050503 |