WO2006002320A2 - System and method for 3d object recognition using range and intensity - Google Patents

System and method for 3d object recognition using range and intensity Download PDF

Info

Publication number
WO2006002320A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
pose
class
invariant
feature descriptors
Prior art date
Application number
PCT/US2005/022294
Other languages
French (fr)
Other versions
WO2006002320A3 (en)
Inventor
Gregory Hager
Eliot Wegbreit
Original Assignee
Strider Labs, Inc.
Priority date
Filing date
Publication date
Application filed by Strider Labs, Inc. filed Critical Strider Labs, Inc.
Priority to EP05763226A priority Critical patent/EP1766552A2/en
Publication of WO2006002320A2 publication Critical patent/WO2006002320A2/en
Publication of WO2006002320A3 publication Critical patent/WO2006002320A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/647 Three-dimensional objects by matching two-dimensional images to three-dimensional objects

Definitions

  • the present invention relates generally to the field of computer vision and, in particular, to recognizing objects and instances of visual classes.
  • the object recognition problem is to determine which, if any, of a set of known objects is present in an image of a scene observed by a video camera system.
  • the first step in object recognition is to build a database of known objects. Information used to build the database may come from controlled observation of known objects, or it may come from an aggregation of objects observed in scenes without formal supervision.
  • the second step in object recognition is to match a new observation of a previously viewed object with its representation in the database.
  • the difficulties with object recognition are manifold, but generally relate to the fact that objects may appear very differently when viewed from a different perspective, in a different context, or under different lighting. More specifically, three categories of problems can be identified: (1) difficulties related to changes in object orientation and position relative to the observing camera (collectively referred to as "pose"); (2) difficulties related to change in object appearance due to lighting ("photometry"); and (3) difficulties related to the fact that other objects may intercede and obscure portions of known objects ("occlusion").
  • Class recognition is concerned with recognizing instances of a class, to determine which, if any, of a set of known object classes is present in a scene.
  • a general object class may be defined in many ways. For example, if it is defined by function then the general class of chairs contains both rocking chairs and club chairs.
  • Visual object class recognition is then done by visual class recognition of the sub-class, followed by semantic association to find the general class containing the sub-class.
  • In the case of chairs, an instance of a rocking chair might be recognized based on its visual characteristics, and then a database lookup might find the higher-level class of chair. A key part of this activity is visual class recognition.
  • the first step in visual class recognition is to build a database of known visual classes.
  • information used to build the database may come from controlled observation of designated objects or it may come from an aggregation, over time, of objects observed in scenes without formal supervision.
  • the second step in visual class recognition is to match new observations with their visual classes as represented in the database. It is convenient to adopt the shorthand "object class" in place of the longer "visual object class."
  • geometry-based approaches rely on matching the geometric structure of an object.
  • Appearance-based approaches rely on using the intensity values of one or more spectral bands in the camera image; this may be grey- scale, color, or other image values.
  • Geometry-based approaches recognize objects by recording aspects of three- dimensional geometry of the object in question.
  • Another system of this type is described in Johnson and Hebert, "Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 5, May 1999, pp. 433-449. Another such system is described in Frome et al., "Recognizing Objects in Range Data Using Regional Point Descriptors", Proceedings of the European Conference on Computer Vision, May 2004, pp. 224-237. These systems rely on the fact that certain aspects of object geometry do not change with changes in object pose. Examples of these aspects include the distance between vertices of the object, the angles between faces of an object, or the distribution of surface points about some distinguished point. Geometry-based approaches are insensitive to pose by their choice of representation, and they are insensitive to photometry because they do not use intensity information.
  • the method takes advantage of the fact that small areas of the object surface are less prone to occlusion and are less sensitive to illumination changes. There are many variations on the method. In general terms, the method consists of the following steps: detecting significant local regions, constructing descriptors for these local regions, and using these local regions in matching. [0015] Most of these methods build a database of object models from 2D images and recognize acquired scenes as 2D images. There are many papers using this approach.
  • a surface feature viewed at a small distance looks different when viewed from a large distance.
  • the principal difficulty in feature-based object recognition is to find a representation of local features that is insensitive to changes in distance and viewing direction so that objects may be accurately detected from many points of view.
  • Currently available methods do not have a practical means for creating such feature representations.
  • Several of the above methods provide limited allowance for viewpoint change; however, the ambiguity inherent in a 2D image means that in general it is not possible to achieve viewpoint invariance.
  • [0018] A third approach to object recognition combines 3D and 2D images in the context of face recognition.
  • a survey of this work is given in Bowyer et al., "A Survey of Approaches to Three-Dimensional Face Recognition", International Conference on Pattern Recognition (ICPR), 2004, pp. 358-361.
  • This group of techniques is generally referred to as "multi-modal.”
  • the multi-modal approach uses variations of a common technique, which is that a 3D geometry recognition result and a 2D intensity recognition result are each produced without reference to the other modality, and then the recognition results are combined by some voting mechanism. Hence, the information about the 3D location of intensity data is not available for use in recognition.
  • part in the various training images. There are several difficulties with this general approach. The most important limitation is that, since the geometric relationship of the parts is not represented, considerable important information is lost. An object with its parts jumbled into random locations will be recognized just as well as the object itself.
  • Another line of research represents a class as a constellation of parts with 2D structure. Each part is represented by a model for the local intensity appearance of that part, generalized over all instances of the class, while the geometric relationship of the parts is represented by a model in which spatial location is generalized over all instances of the class. Two papers applying this approach are Burl et al., "A probabilistic approach to object recognition using local photometry and global geometry", Proc.
  • SUMMARY The present invention provides a system and method for performing object and class recognition that allows for wide changes of viewpoint and distance of objects. This is accomplished by combining various aspects of the 2D and 3D methods of the prior art in a novel fashion.
  • the present invention provides a system and method for choosing pose-invariant interest points of a three-dimensional (3D) image, and for computing pose-invariant feature descriptors of the image.
  • the system and method also allows for the construction of three-dimensional (3D) object and class models from the pose-invariant interest points and feature descriptors of previously obtained scenes.
  • Interest points and feature descriptors of a newly acquired scene may be compared to the object and/or class models to identify the presence of an object or member of the class in the new scene.
  • the present invention discloses a method for recognizing objects in an observed scene, comprising the steps of: acquiring a three-dimensional (3D) image of the scene; choosing pose-invariant interest points in the image; computing pose-invariant feature descriptors of the image at the interest points, each feature descriptor comprising a function of the local intensity component of the 3D image as it would appear if it were viewed in a standard pose with respect to a camera; constructing a database comprising 3D object models, each object model comprising a set of pose-invariant feature descriptors of one or more images of an object; and comparing the pose-invariant feature descriptors of the scene image to pose-invariant feature descriptors of the object models.
  • Embodiments of the system and the other methods, and possible alternatives and variations, are also disclosed.
  • FIG. 1 is a symbolic diagram showing the principal elements of a system for acquiring a 3D description of a scene according to an embodiment of the invention.
  • FIG. 2 is a symbolic diagram showing the principal steps of constructing a pose-invariant feature descriptor according to an embodiment of this invention.
  • FIG. 3 is a symbolic diagram showing the principal elements of a system for database construction according to an embodiment of the invention.
  • FIG. 4 is a symbolic diagram showing the principal components of a system for recognition according to an embodiment of the invention.
  • FIG. 5 is a symbolic diagram showing the primary steps of recognition according to an embodiment of the method of the invention.
  • FIG. 6 illustrates the effects of frontal transformation according to an embodiment of the invention.
  • FIG. 1 is a symbolic diagram showing the principal physical components of a system for acquiring a 3D description of a scene configured in accordance with an embodiment of the invention.
  • a set of two or more cameras 101 and a projector of patterned light 102 are used to acquire images of an object 103.
  • a computer 104 is used to compute the 3D position of points in the image using stereo correspondence.
  • a preferred embodiment of the stereo system is disclosed in U.S. Patent Application Serial No. 10/703,831, filed 11/7/03, which is incorporated herein by reference.
  • the 3D description is referred to as a "range image”.
  • This range image is placed into correspondence with the intensity image to produce a "registered range and intensity image", sometimes referred to as the "registered image” and sometimes as a "3D image".
  • each image location has one or more intensity values, and a corresponding 3D coordinate giving its location in space relative to the observing stereo ranging system.
  • The set of intensity values is referred to as the "intensity component" of the 3D image. The set of 3D coordinates is referred to as the "range component" of the 3D image.
  • the local surface normal can be computed and, using this, it is possible to remove the effects of slant and tilt. As a result, it is possible to compute local features that are insensitive to all possible changes in the pose of the object relative to the observing camera. Since the
  • FIG. 2 is a symbolic diagram showing the principal steps of a method of constructing a pose-invariant feature descriptor according to an embodiment of this invention.
  • a registered range and intensity image is given as input at step 201.
  • the image is locally transformed at step 202 to a standard pose with respect to the camera, producing a set of transformed images. This transformation is possible because the image contains both range and intensity information.
  • Interest points on the transformed image are chosen at step 203. At each interest point, a feature descriptor is computed in step 204.
  • the feature descriptor includes a function of the local image intensity about the interest point. Additionally, the feature descriptor may also include a function of the local surface geometry about the interest point.
  • the result is a set of pose-invariant feature descriptors 205. This method is explained in detail below, as are various embodiments and elaborations of these steps. Alternatively, it is possible to combine steps; for example, one may incorporate the local transformation into interest point detection, or into the computation of feature descriptors, or into both. This is entirely equivalent to a transformation step followed by interest point detection or feature descriptor computation. [0041] In general terms, recognition using these pose-invariant features has two parts: database construction and recognition per se.
  • FIG. 3 is a symbolic diagram showing the principal components of database construction according to an embodiment of the invention.
  • An imaging system 301 acquires registered images of objects 302 on a horizontal planar surface 306.
  • FIG. 4 is a symbolic diagram showing the principal components of a recognition system according to an embodiment of this invention.
  • An imaging system 401 acquires registered images of a scene 402 and a computer 403 uses the database 404 to recognize objects or instances of object classes in the scene.
  • the database 404 of FIG. 4 is the database 304 shown as being constructed in FIG. 3.
  • FIG. 5 is a symbolic diagram showing the primary steps of recognition according to an embodiment of the invention.
  • a database is constructed containing 3D models, each model comprising a set of descriptors.
  • the models are object models and the descriptors are pose-invariant feature descriptors; in the case of class recognition, the models are class models and the descriptors are class descriptors.
  • a registered range and intensity image is acquired at step 502. The image is locally transformed in step 503 to a standard pose with respect to the camera, producing a set of transformed images. Interest points on the transformed images are chosen at step 504. Pose-invariant feature descriptors are computed at the interest points in step 505. Pose-invariant feature descriptors of the observed scene are compared to descriptors of the object models at step 506. In step 507, the set of objects present in the scene is identified. [0044] A system or method utilizing the present invention is able to detect and represent features in a pose-invariant manner; this ability is conferred to both flat and curved objects. An additional property is the use of both range and intensity information to detect and represent said features.
  • Gaussian functions (“Gaussians”) that have a different spread, controlled by using the variance parameter of the Gaussian function.
  • the spread of a Gaussian function is referred to as the "scale” of the operator, and roughly corresponds to choosing a level of detail at which the afore-mentioned image information is computed.
  • Given a neighborhood of pixels it is possible to first compute the image gradient for each pixel location, and then to compute a 2 by 2 matrix consisting of the sum of the outer product of each gradient vector with itself, divided by the number of pixels in the region.
  • the eigenvector associated with the largest eigenvalue is referred to as the "dominant gradient direction" for that neighborhood.
  • the ratio of the smallest eigenvalue to the largest eigenvalue is referred to as the "eigenvalue ratio."
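  • A minimal NumPy sketch of the neighborhood computation just described is given below: it forms the per-pixel gradients of a small intensity neighborhood, averages their outer products into a 2x2 covariance (structure tensor), and reads off the dominant gradient direction and the eigenvalue ratio. The function name and the use of np.gradient for the image gradient are illustrative choices, not taken from the patent.

```python
import numpy as np

def gradient_statistics(patch):
    """Dominant gradient direction and eigenvalue ratio of a neighborhood.

    patch: 2D array of intensity values for the neighborhood of pixels.
    """
    gy, gx = np.gradient(np.asarray(patch, dtype=float))    # per-pixel image gradients
    g = np.stack([gx.ravel(), gy.ravel()], axis=1)           # N x 2 gradient vectors
    C = g.T @ g / g.shape[0]                                 # sum of outer products / N
    evals, evecs = np.linalg.eigh(C)                         # eigenvalues in ascending order
    dominant_direction = evecs[:, -1]                        # eigenvector of largest eigenvalue
    eigenvalue_ratio = evals[0] / max(evals[-1], 1e-12)      # smallest / largest
    return dominant_direction, eigenvalue_ratio
```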
  • the present invention also uses a range image that is registered to the intensity image.
  • the fact that the range image is registered to the intensity image means that each location in the intensity image has a corresponding 3D location. It is important to realize that these 3D locations are relative to the camera viewing location, so a change in viewing location will cause both the intensity image and the range image of an object to change.
  • the points that are visible in both views can be related by a single change of coordinates consisting of a translation vector and a rotation matrix.
  • the points in the two images can be merged and/or compared with each other.
  • the process of computing the translation and rotation between views, thus placing points in those two views in a common coordinate system, is referred to as "aligning" the views.
  • All of the preceding concepts can be found in standard undergraduate textbooks on digital signal processing or computer vision.
  • Locally Warping Images
  • the present invention makes use of range information to aid in the location and description of regions of an image that are indicative of an object or class of objects.
  • Such regions are referred to as "features.”
  • the algorithm that locates features in an image is referred to as an "interest operator.”
  • An interest operator is said to be “pose-invariant” if the detection of features is insensitive to a large range of changes in object pose.
  • a feature is represented in a manner that facilitates matching against features detected in other range and intensity images.
  • the representation of a feature is referred to as a "feature descriptor.”
  • a feature descriptor is said to be “pose-invariant” if the descriptor is insensitive to a large range of changes in object pose.
  • the present invention achieves this result in part by using information in the range image to produce new images of surfaces as viewed from a standard pose with respect to the camera.
  • the standard pose is chosen so that the camera axis is aligned with the surface normal at each feature and the surface appears as it would when imaged at a fixed nominal distance.
  • a portion of a surface modeled in this form is referred to as a "surface patch.”
  • the values of t_x and t_y do not depend on the position or orientation of the observed surface.
  • the values of t_x and t_y, with associated directions e_x and e_y, can be computed or approximated in a number of ways from range images.
  • smooth connected surfaces are extracted from the range data by first choosing a set of locations, known as seed locations, and subsequently fitting analytic surfaces to the range image in neighborhoods about these seed locations. For seed locations where the surface fits well, the size of the neighborhood is increased, whereas neighborhoods where the surface fits poorly are reduced or removed entirely. This process is iterated until all areas of the range image are described by some analytic surface patch.
  • Methods for computing quadric surfaces from range data are well- established in the computer vision literature and can be found in a variety of references, e.g., Petitjean, "A survey of methods for recovering quadrics in triangle meshes", ACM Computing Surveys, Vol. 34, No. 2, June 2002, pp. 211-262.
  • Methods for iterative segmentation of range images are well established and can be found in a variety of references, e.g., A.
  • the area of the surface represented in a patch is invariant to changes in object pose, and thus the appearance of features on the object surface is likewise invariant up to the sample spacing of the camera system.
  • s* = s · (d/f). For example, with s = 0.0045 mm/pixel, d = 1000 mm, and f = 12.5 mm, this gives s* = 0.36 mm/pixel.
  • FIG. 6 shows the result of frontal warping. 601 is a surface shown tilted away from the camera axis by a significant angle, while 602 is the corresponding surface transformed to be frontal normal.
  • a combined range and intensity image containing several objects may be segmented into a collection of smaller areas that may be modeled as quadric patches, each of which is transformed to appear in a canonical frontal pose. Additionally, the size of each patch may be restricted to ensure a limited range of surface normal directions within the patch.
  • [0061] More specifically, patches are chosen such that no surface normal at any sample point in the patch makes an angle larger than θ_max with n. This implies that the range of x and y values within the local coordinate system of the patch falls within an elliptical region defined by a value β, with β^2 = sec(θ_max)^2 − 1, such that t_x^2 x^2 + t_y^2 y^2 ≤ β^2. Thus, a patch will have the desired range of surface normals if x_max ≤ β/t_x and y_max ≤ β/t_y.
  • An image patch with this property will be referred to as a "restricted viewing angle patch.”
  • the values x_max and y_max are used to determine the number of sampling locations needed to completely sample a restricted viewing angle patch. In the x direction, the number will be 2*x_max/s* and in y it will be 2*y_max/s*.
  • the value of θ_max is chosen to be 20 degrees, although other embodiments may use other values of θ_max.
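  • The bound above can be made concrete with a short sketch, assuming t_x and t_y are available from the fitted surface; the function name, the NumPy usage, and the default values (θ_max of 20 degrees and s* of 0.36 mm/pixel from the worked example earlier) are illustrative.

```python
import numpy as np

def restricted_patch_extent(t_x, t_y, theta_max_deg=20.0, s_star=0.36):
    """Half-extents and sampling counts of a restricted viewing angle patch.

    beta^2 = sec(theta_max)^2 - 1, and the patch keeps its surface normals
    within theta_max of n when x_max <= beta/t_x and y_max <= beta/t_y.
    s_star is the resampled pixel spacing (mm/pixel) after frontal warping.
    """
    theta = np.deg2rad(theta_max_deg)
    beta = np.sqrt(1.0 / np.cos(theta) ** 2 - 1.0)   # sqrt(sec^2 - 1)
    x_max, y_max = beta / t_x, beta / t_y            # half-extents in the patch frame
    n_x = int(np.ceil(2 * x_max / s_star))           # sampling locations in x
    n_y = int(np.ceil(2 * y_max / s_star))           # sampling locations in y
    return x_max, y_max, n_x, n_y
```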
  • Surface patches that do not satisfy the restricted viewing angle property are subdivided into smaller patches until they are restricted viewing angle patches, or a minimal patch size is reached.
  • the new patches are chosen to overlap at their boundaries to ensure that no image locations (and hence interest points) fall directly on, or directly adjacent to, a patch boundary in all patches.
  • Patches are divided by choosing the coordinate direction (x or y) over which the range of normal directions is the largest, and creating two patches equally divided in this coordinate direction.
  • the restricted viewing angle patches are warped as described above, where the warping is performed on the intensity image.
  • Interest points are located on the warped patches by executing the following steps:
  • 1. Compute the eigenvalues of the gradient image covariance matrix at every pixel location and for several scales of the aforementioned gradient operator. Let minE and maxE denote the minimum and maximum eigenvalues so computed, and let r denote their eigenvalue ratio. 2. Compute a list L1 of potential interest points by finding all locations where minE is maximal in the image at some scale. 3. Remove from L1 all locations where the ratio r is less than a specified threshold. In the first and second embodiments, the threshold is 0.2, although other embodiments may use other values. 4.
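  • Steps 1 through 3 above can be sketched as follows, using SciPy's Gaussian derivative and maximum filters; the scale set, the 3x3 local-maximum window, and the function name are assumptions, while the 0.2 ratio threshold comes from the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def interest_points(intensity, scales=(1.0, 2.0, 4.0), ratio_threshold=0.2):
    """Locate interest points on a frontally warped intensity patch (sketch)."""
    intensity = np.asarray(intensity, dtype=float)
    points = []
    for s in scales:
        gy = gaussian_filter(intensity, s, order=(1, 0))     # d/dy at scale s
        gx = gaussian_filter(intensity, s, order=(0, 1))     # d/dx at scale s
        # entries of the gradient covariance matrix, averaged over a neighborhood
        a = gaussian_filter(gx * gx, s)
        b = gaussian_filter(gx * gy, s)
        c = gaussian_filter(gy * gy, s)
        tr, det = a + c, a * c - b * b
        disc = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
        min_e, max_e = tr / 2.0 - disc, tr / 2.0 + disc       # step 1: eigenvalues
        ratio = min_e / np.maximum(max_e, 1e-12)
        is_local_max = (min_e == maximum_filter(min_e, size=3)) & (min_e > 0)  # step 2
        keep = is_local_max & (ratio >= ratio_threshold)                        # step 3
        ys, xs = np.nonzero(keep)
        points.extend((x, y, s) for x, y in zip(xs, ys))
    return points
```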
  • the ratio of surface curvatures min(t_x, t_y)/max(t_x, t_y) is compared to E. If E is larger than the surface curvature ratio, the rotation matrix R_L is computed from e_x, e_y, and n as described previously. Otherwise the rotation matrix R is computed from the eigenvectors of E and the surface normal n as follows. A zero is appended to the end of both of the eigenvectors of E. These vectors are then multiplied by the rotation matrix R originally computed when the patch was frontally warped.
  • P is similarly warped, producing a canonical local range image D'.
  • a patch size of 1 cm by 1 cm is used (creating an image patch of size 28 pixels by 28 pixels), although other embodiments may use other patch sizes.
  • P' is normalized by subtracting its mean intensity, and dividing by the square root of the sum of the squares of the resulting intensity values. Thus, changes in brightness and contrast do not affect the appearance of P'.
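  • The normalization of P' amounts to the following short routine (a sketch; the function name is illustrative):

```python
import numpy as np

def normalize_patch(patch):
    """Subtract the mean intensity, then divide by the L2 norm of the result,
    so that changes in brightness and contrast do not affect the patch."""
    p = np.asarray(patch, dtype=float)
    p = p - p.mean()
    norm = np.sqrt((p ** 2).sum())
    return p / norm if norm > 0 else p
```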
  • the geometric descriptor specifies the location of a feature; the appearance descriptor specifies the local appearance; and the qualitative descriptor is a summary of the salient aspects of the local appearance.
  • Frontal warping ensures that the locations of the features and their appearance have been corrected for distance, slant, and tilt. Hence, the features are pose invariant and are referred to as "pose-invariant features”. Additionally, their construction makes them invariant to changes in brightness and contrast.
  • An object model O is a collection of pose-invariant feature descriptors expressed in a common geometric coordinate system.
  • Let F be the collection of pose-invariant feature descriptors observed in the scene. The object likelihood ratio is L(F, O) = P(F | O) / P(F | ¬O), where P(F | O) is the probability of the feature descriptors F given that the object is present in the scene and P(F | ¬O) is the probability of the feature descriptors F given that the object is not present in the scene.
  • the object O is considered to be present in the scene if L(F, O) is greater than a threshold τ.
  • the threshold τ is empirically determined.
  • each feature is composed of an appearance descriptor, a qualitative descriptor, and a geometric descriptor.
  • Let F_A denote the appearance descriptors of a set of observed features.
  • Let O_A denote the appearance descriptors of a model object O.
  • Let F_X and O_X denote the corresponding observed and model geometric descriptors.
  • Let F_Q and O_Q denote the corresponding observed and model qualitative descriptors.
  • F_A(k) is the appearance descriptor of the kth feature in the set and O_A(h(k)) is the appearance descriptor of the corresponding feature of the model.
  • F_X(k) is the geometric descriptor of the kth feature of the set and O_X(h(k), γ) is the geometric descriptor of the corresponding feature of the model when the model is in the pose γ.
  • Feature geometry descriptors are conditionally independent given h and ⁇ . Also, each feature's appearance descriptor is approximately independent of other features.
  • P(F | O, h, γ) / P(F | ¬O) = Π_k L_A(F, O, h, k) · L_X(F, O, h, γ, k), where L_A(F, O, h, k) = P(F_A(k) | O_A(h(k))) / P(F_A(k) | ¬O) and L_X(F, O, h, γ, k) = P(F_X(k) | O_X(h(k), γ)) / P(F_X(k) | ¬O).
  • L_A is subsequently referred to as the "appearance likelihood ratio" and L_X as the "geometry likelihood ratio."
  • the numerators of these expressions are referred to as the "appearance likelihood function" and the "geometry likelihood function," respectively.
  • the denominator of L_A can be approximated by observing that the set of detected features in the object database provides an empirical model for the set of all features that might be detected in images.
  • a feature is highly distinctive if it differs from all other features on all other objects. For such features, L_A is large. Conversely, a feature is not distinctive if it occurs generically on several objects. For such features, L_A is close to 1.
  • L(F, O) contains two additional terms, P(h | O, γ) and P(γ | O).
  • the latter is the probability of an object appearing in a specific pose. In the first and second embodiments, this is taken to be a uniform distribution.
  • P(h | O, γ) is the probability of the hypothesis h given that the object O is in a given pose γ. It can be viewed as a "discount factor" for missing matches. That is, for a given pose γ of object O, there is a set of features that are potentially visible. If every expected (based on visibility) feature on the object were observed, P(h
  • After performing the visibility computation, the first embodiment expects some number N of features to be visible. P(h
  • the first and second embodiments make use of the fact that the likelihoods introduced above may be evaluated more efficiently by taking their natural logarithms.
  • the likelihood functions described above may take many forms.
  • the first and second embodiments assume additive noise in the measurements and thus the probability value P(f
  • σ_f is empirically determined for several different feature distances and slant and tilt angles. Features observed at a larger distance and at higher angles have correspondingly larger values in σ_f than those observed at a smaller distance and frontally. The value of σ_m is determined as the object model is acquired. [0085] Subsequently disclosed aspects of the invention apply and/or make further refinements to the object likelihood ratio, the appearance likelihood ratio, the qualitative likelihood ratio, the geometry likelihood ratio, and the methods of probability calculation described above. [0086] Two possible embodiments of this invention are now described. A first embodiment deals with object recognition. A second embodiment deals with class recognition. There are many possible variations on each of these, and some of these variations are described in the section on Alternative Embodiments.
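  • Under the additive Gaussian noise assumption above, a log-likelihood term for one descriptor can be sketched as below; the per-dimension standard deviations stand in for the empirically determined noise values, and the function name and the independence across dimensions are illustrative assumptions.

```python
import numpy as np

def log_gaussian_likelihood(f, m, sigma):
    """log P(f | m) for independent Gaussian noise in each descriptor dimension.

    f: observed descriptor, m: corresponding model descriptor,
    sigma: per-dimension noise standard deviations (larger for features seen
    at greater distance or higher slant/tilt).
    """
    f, m, sigma = (np.asarray(v, dtype=float) for v in (f, m, sigma))
    z = (f - m) / sigma
    return float(-0.5 * np.sum(z ** 2) - np.sum(np.log(sigma))
                 - 0.5 * f.size * np.log(2.0 * np.pi))
```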
  • FIG. 3 is a symbolic diagram showing the principal components of database construction. For each object to be recognized, several views of the object are obtained under controlled conditions. The scene contains a single foreground object 302 on a horizontal planar surface 306 at a known height. The background is a simple collection of planar surfaces of known pose with uniform color and texture. An imaging system 301 acquires registered range and intensity images. [0089] For each view of the object, registered range and intensity images are acquired, frontally warped patches are computed, interest points are located, and a feature descriptor is computed for each interest point.
  • each view of an object has associated with it a set of features of the form <X, Q, A> where X is the 3D pose of the feature, Q denotes the qualitative descriptor, and A is the appearance descriptor.
  • the views are taken under controlled conditions, so that each view also has a pose expressed relative to a fixed base coordinate system associated with it.
  • the process of placing points in two or more views into a common coordinate system is referred to as "aligning" the views.
  • views are aligned as follows. Since the pose of each view is known, an initial transformation aligning the observed pose-invariant features in the two images is also known.
  • (2) The view aligns with a single segment and contains new information. This occurs when the viewpoint is partly novel and partly shared with views already accounted for in that segment. In this case, the new features are added to the segment description. Matching pose-invariant feature descriptors are averaged to reduce noise.
  • (3) The view aligns with two or more segments. This occurs when the viewpoint is partly novel and partly shared with viewpoints already accounted for in the database entry for that object. In this case, the segments are geometrically aligned and merged into one unified representation. Matching pose-invariant feature descriptors are averaged to reduce noise.
  • [0096] (4) The view does not match. This occurs when the viewpoint is entirely novel and shares nothing with viewpoints of the database entry for that object. In this case, a new segment description is created and initialized with the observed features. [0097] In the typical case, sufficient views of an object are obtained that the several segments are aligned and merged, resulting in a single, integrated model of the object.
  • FIG. 4 is a symbolic diagram showing the principal components of a recognition system. Unlike database creation, scenes are acquired under uncontrolled conditions. A scene may contain none, one, or more than one known object. If an object is present, it may be present once or more than once.
  • An object may be partially occluded and may be in contact with other objects.
  • the goal of recognition is to locate known objects in the scene. [00100]
  • the first step of recognition is to find smooth connected surfaces as described previously.
  • the next step is to process each surface to identify interest points and extract a set of scene features as described above.
  • Each scene feature is of the form <X, Q, A>, where X is the 3D pose of the feature, Q is the qualitative descriptor, and A is the appearance descriptor.
  • Object recognition is accomplished by matching scene features with model features, and evaluating the resulting match using the object likelihood ratio. The first step in this process is to locate plausible matches in the model features for each scene feature. For each scene feature, the qualitative descriptor is used to look up only those model features with qualitative descriptors closely matching the candidate scene feature. The lookup is done as follows. An ordered list is constructed for each qualitative feature component. Suppose there are N qualitative feature components, so there are N ordered lists. The elements of each list are the corresponding elements for all feature descriptors in the model database.
  • a binary search is used to locate those values within a range of each qualitative feature component; from these, the matching model features are identified.
  • N sets of model feature identifiers are formed, one for each of the N qualitative feature components.
  • the N sets are then merged to produce a set of candidate pairs {<f, g>}, where f is a feature from the scene and g is a feature in the model database.
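  • A sketch of this lookup is shown below. It assumes each model feature carries a vector of N qualitative components under a key named "qual", builds one sorted list per component, binary-searches each list for values within a tolerance of the scene feature's component, and treats the merge of the N identifier sets as an intersection (requiring all components to match closely). The key name, the tolerance parameter, and the intersection reading of "merged" are assumptions.

```python
import bisect

def build_qualitative_index(model_features, n_components):
    """One sorted (value, feature_id) list per qualitative component."""
    index = []
    for i in range(n_components):
        entries = sorted((feat["qual"][i], fid) for fid, feat in enumerate(model_features))
        index.append(entries)
    return index

def candidate_model_features(scene_qual, index, tolerance):
    """Identifiers of model features whose every qualitative component lies
    within `tolerance` of the scene feature's corresponding component."""
    surviving = None
    for i, entries in enumerate(index):
        values = [v for v, _ in entries]
        lo = bisect.bisect_left(values, scene_qual[i] - tolerance)
        hi = bisect.bisect_right(values, scene_qual[i] + tolerance)
        ids = {fid for _, fid in entries[lo:hi]}
        surviving = ids if surviving is None else surviving & ids
    return surviving if surviving is not None else set()
```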
  • the appearance likelihood is computed and stored in a table M, in the position (f, g). In this table, the scene features form the rows, and the candidate matching model features form the columns.
  • M(f, g) denotes the appearance likelihood value for matching scene feature f to a model object feature g.
  • a table L is constructed holding the appearance likelihood ratio for each pair <f, g> identified above.
  • An initial alignment of the model with a scene feature is obtained. To do this, the pair <f*, g*> with the maximal value in table L is located.
  • Let O_g* be the object model associated with the feature g*.
  • an aligning transformation γ is computed.
  • the transformation γ places the model into a position and orientation that is consistent with the scene feature; hence, γ is taken as the initial pose of the model.
  • the set of potentially visible model features of object O g* is computed. These potentially visible model features are now considered to see if they can be matched against the scene.
  • the method is as follows: If a visible model feature k appears in a row j of table M, the geometry likelihood ratio for matching j and k is computed using the previously described approximation method. The appearance likelihood ratio is taken from the table L. The product of the appearance and geometry likelihood ratios of matching j and k is then computed. The product of the appearance and geometry likelihood ratios is then compared to an empirically determined threshold. If this threshold is exceeded, the feature pair <j, k> is considered a match.
  • the aligning pose is recomputed including the new feature matches and the process above repeated until no new matches are found.
  • the initial match between f* and g* is disallowed as an initial match.
  • the process then repeats using the next-best feature match from the table L. [00108] This process continues until all matches between observed features and model features with an appearance likelihood ratio above a match threshold have been considered.
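  • The hypothesize-and-extend loop described in the last few paragraphs can be sketched as follows. The callables compute_alignment, visible_features, and geometry_likelihood_ratio are placeholders for steps described in the text but not implemented here, and the dictionary forms of tables M and L are assumptions.

```python
def match_object(M, L, compute_alignment, visible_features,
                 geometry_likelihood_ratio, feature_threshold, match_threshold):
    """Match scene features to one object model (sketch).

    M, L: dicts mapping candidate pairs (f, g) to appearance likelihoods and
    appearance likelihood ratios. The three callables stand in for: aligning
    the model to a set of matches, listing potentially visible model features
    under a pose, and evaluating a pair's geometry likelihood ratio.
    """
    hypotheses, tried = [], set()
    while True:
        # next-best untried seed pair from table L above the match threshold
        seeds = [(v, fg) for fg, v in L.items()
                 if fg not in tried and v > match_threshold]
        if not seeds:
            break
        _, (f_star, g_star) = max(seeds, key=lambda t: t[0])
        tried.add((f_star, g_star))
        matches = {(f_star, g_star)}
        pose = compute_alignment(matches)
        grew = True
        while grew:
            grew = False
            for k in visible_features(g_star, pose):
                for (j, kk) in M:                 # rows j whose candidate column is k
                    if kk != k or (j, k) in matches:
                        continue
                    if L[(j, k)] * geometry_likelihood_ratio(j, k, pose) > feature_threshold:
                        matches.add((j, k))
                        grew = True
            if grew:
                pose = compute_alignment(matches)  # realign with the new matches
        hypotheses.append((matches, pose))
    return hypotheses
```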
  • the second embodiment modifies the operation of the first embodiment to perform class-based object recognition.
  • class-based recognition offers many advantages over distinct object recognition. For example, a newly encountered coffee mug can be recognized as such even though it has not been seen previously.
  • properties of the coffee mug class e.g. the presence and use of the handle
  • the second embodiment is described in two parts: database construction and object recognition.
  • Database Construction [00112] The second embodiment builds on the database of object descriptors constructed as described in the first embodiment.
  • the second embodiment processes a set of model object descriptors to produce a class descriptor comprising: 1) An appearance model consisting of a statistical description of the appearance elements of the pose-invariant feature descriptors of objects belonging to the class; 2) A qualitative model summarizing appearance aspects of the features; 3) A geometry model consisting of a statistical description of geometry elements of the pose-invariant features in a common object reference system, together with statistical information indicating the variability of feature location; and 4) A model of the co-occurrence of appearance features and geometry features. These are each dealt with separately and in turn.
  • the second embodiment builds semi-parametric statistical models for the appearance of the pose-invariant features of objects belonging to the class. This process is performed independently on the intensity and range components of the appearance element of a pose-invariant feature.
  • the statistical model used by the second embodiment is a Gaussian Mixture Model.
  • Each of the Gaussian distributions is referred to as a "cluster".
  • the number of clusters K needs to be chosen. There are various possible methods for making this choice.
  • the second embodiment uses a simple one as described below. Alternative embodiments may choose K according to other techniques.
  • Let N_k denote the number of features in the kth object.
  • the second embodiment chooses K to be N_max, the largest of the N_k. [00116] An appearance model with K components is computed to capture the commonly appearing intensity and range properties of the class. It is computed using established methods for statistical data modeling as described in Lu, Hager, and Younes, "A Three-tiered Approach to Articulated Object Action Modeling and Recognition", Neural Information Processing and Systems, Vancouver, B.C., Canada, Dec. 2004. The method operates as follows. [00117] A set of K cluster centers is chosen. This is done in a greedy, i.e. no look-ahead, fashion by randomly choosing an initial feature as a cluster center, and then iteratively choosing additional points that are as far from already chosen points as possible. Once the cluster centers are chosen, the k-means algorithm is applied to adjust the centers. This procedure is repeated several times and the result with the tightest set of clusters in the nearest neighbor sense is taken. That is, for each feature vector f_i, the closest (in the
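  • A minimal sketch of the greedy seeding and k-means adjustment is given below, with the features as an N x D NumPy array and K chosen as described above; the iteration count is an assumption, and the repetition over several seedings (keeping the tightest clustering) is omitted for brevity.

```python
import numpy as np

def greedy_seeds(features, K, rng):
    """Greedy (no look-ahead) seeding: start from a random feature, then
    repeatedly add the feature farthest from the centers chosen so far."""
    centers = [features[rng.integers(len(features))]]
    for _ in range(K - 1):
        dists = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[int(np.argmax(dists))])
    return np.array(centers)

def kmeans_clusters(features, K, iters=50, seed=0):
    """Adjust the greedy seeds with Lloyd's k-means iterations (sketch)."""
    features = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)
    centers = greedy_seeds(features, K, rng)
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)                      # assign to nearest center
        for k in range(K):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    return centers, labels
```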
  • the within-class and between-class variances are computed. This is processed using linear discriminant analysis to produce a projection matrix ⁇ .
  • the feature descriptors are projected into a new feature space by multiplying by the matrix ⁇ .
  • the likelihood of any data item i belonging to cluster j can be computed.
  • These weights replace the membership function in the linear discriminant analysis algorithm, a new projection matrix ⁇ is computed, and the steps above repeated. This iteration is continued to convergence.
  • the full range of descriptor values can be represented as a vector of intervals I_k bounded by two extremal qualitative descriptors Q_k^- and Q_k^+.
  • I_k is stored with each cluster as an index.
  • Constructing a Class Model for Geometry [00125] Finally, a geometric model is computed. Recall that the database in the first embodiment produces a set of pose-invariant features for each model, together with a geometric registration of those features to a common reference frame. The second embodiment preferably makes use of the fact that the model for each member of a class is created starting from a consistent canonical pose.
  • the first step in developing a class-based geometric model is to normalize for differences in size and scale of the objects in the class. This is performed by the following steps:
  • the value o is computed to represent the local orientation of the feature.
  • a semi-parametric model for these features is then computed as described above.
  • the resulting geometric model has two components: a Gaussian Mixture Model GMM_S(·) that models the variation in the location and orientation of pose-invariant feature descriptors across the class given a nominal pose and scale normalization, and a distribution P_S(s_0) over the scale normalization s_0.
  • the latter is taken to be a Gaussian distribution with mean and variance as computed in step 5 above.
  • the class C is considered to be present in the scene if L_C(F, C) is greater than a threshold τ.
  • the threshold τ is empirically determined for each class as follows. Several independent images of the class in normally occurring scenes are acquired. For several values of τ, the number of times the class is incorrectly recognized as present when it is not (false positives) and the number of times the class is incorrectly stated as not present when it is (false negatives) are tabulated. The value of τ is taken as that at which the number of false positives equals the number of false negatives.
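  • A sketch of this threshold selection is shown below; it assumes the class likelihood ratio has been evaluated on a set of images where the class is present and another where it is absent, and it picks the candidate τ that brings the false positive and false negative counts closest to equality. The function and variable names are illustrative.

```python
import numpy as np

def choose_threshold(scores_present, scores_absent, candidate_taus):
    """Pick tau where false positives and false negatives balance (sketch)."""
    scores_present = np.asarray(scores_present, dtype=float)
    scores_absent = np.asarray(scores_absent, dtype=float)
    best_tau, best_gap = None, None
    for tau in candidate_taus:
        false_pos = int(np.sum(scores_absent > tau))     # absent but recognized
        false_neg = int(np.sum(scores_present <= tau))   # present but not recognized
        gap = abs(false_pos - false_neg)
        if best_gap is None or gap < best_gap:
            best_tau, best_gap = tau, gap
    return best_tau
```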
  • P(X | CG_j, a) represents the probability that the feature pose is taken from cluster j of the GMM modeling geometry. It is computed by aligning the observed feature to the model, first transforming the observed features using the pose component of the alignment a and then scaling using the value s_0.
  • the resulting scaled translation values correspond to T' above.
  • the observed value of the local orientation after alignment, o, is also computed.
  • the second embodiment takes the observed feature value as having zero variance.
  • the probability value comes directly from the associated Gaussian mixture component for the cluster CG_j.
  • P(h | C, a) can be computed from the appearance/geometry co-occurrence table computed during the database construction and the probability that the object would appear in the image given the class aligned with transform a, as detailed below.
  • the cases of interest are those in which an observed scene feature has a well- defined correspondence with an appearance and geometry cluster.
  • the correspondence hypothesis vector h relates an observed scene feature to a pair of an appearance cluster and a geometry cluster, so that h(k) is the pair [ha(k),hg(k)], where ha(k) is a class appearance cluster and hg(k) is a class geometry cluster.
  • P(h | C, a) is computed as the product of an appearance term P_app(h | C, a) and, over the matched features k, co-occurrence terms P_co(ha(k) | C, hg(k)), with P_co(ha(k) | C, hg(k)) obtained from the joint co-occurrence P_co(ha(k), hg(k) | C).
  • P app is an appearance model computed using a binomial distribution based on the number of correspondences in h and the number of geometric clusters that should be detectable in the scene under the alignment a.
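  • One way to realize the binomial term just described is sketched below: each geometric cluster expected to be detectable under the alignment a is treated as matched independently with some probability. The detection probability p_detect is an assumed parameter, not given in the text.

```python
from math import comb

def p_app(n_matched, n_detectable, p_detect):
    """Binomial appearance term (sketch): probability of exactly n_matched
    correspondences among n_detectable detectable geometric clusters, each
    matched independently with probability p_detect."""
    if not 0 <= n_matched <= n_detectable:
        return 0.0
    return comb(n_detectable, n_matched) * (p_detect ** n_matched) \
        * ((1.0 - p_detect) ** (n_detectable - n_matched))
```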
  • a geometric cluster is considered to be detectable as follows.
  • Let T represent the mean location of geometric cluster c when the object class is geometrically aligned with the observing camera system (using a).
  • Let Γ denote the location of the origin of the class coordinate system when the object class is geometrically aligned with the observing camera system.
  • Let φ denote the angle the vector T − Γ makes with the optical axis of the camera system.
  • the total angle the geometric cluster makes with the camera optical axis then falls in the range φ − δ to φ + δ.
  • Let φ_max represent the maximum detection angle for a feature.
  • the denominator of the geometry likelihood ratio is taken as a constant value as in the first embodiment.
  • Class recognition is performed as follows. The first phase is to find smooth connected surfaces, identify interest points and extract a set of scene features, as previously described. The second phase is to match scene features with class models and evaluate the resulting match using the class likelihood ratio. The second phase is accomplished in the following steps. [00147] First, for each observed scene feature, the qualitative feature descriptors are used to look up only those database appearance clusters with qualitative characteristics closely matching the candidate observed feature. Specifically, if a feature descriptor has qualitative descriptor Q, then all appearance clusters k with Q ∈ I_k are returned from the lookup.
  • Let {<f, c>} be the set of feature pairs returned from the lookup on qualitative feature descriptors, where f is a feature observed in the scene and c is a potentially matching model appearance cluster.
  • the appearance likelihood is computed and stored in a table M, in position (f, c).
  • M(f, c) denotes the appearance likelihood value for matching observed feature f to a model appearance cluster c.
  • An approximation to the appearance likelihood ratio is computed as L(f, g) ≈ M(f, g) / max_k M(f, k), where k comes from a different class than g.
  • a table L is constructed holding the appearance likelihood ratio for each pair <f, g> identified above.
  • four or more feature/cluster matches are located that have maximal values of L and belong to the same class model C.
  • a model geometry cluster k is chosen for which P_co(g
  • an alignment a is computed between the scene and the class model using the feature locations T_f and corresponding cluster centers X_c.
  • This alignment is computed by the following steps for n feature/cluster matches: 1) The mean value of the feature locations T_f is subtracted from each feature location. 2) The mean value of the cluster centers X_c is subtracted from each cluster center. 3) Let y_i represent the location of feature i after mean subtraction. Let x_i denote the corresponding cluster center after mean subtraction. Compute the dimensionless scale s
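  • A standard way to realize this alignment (mean subtraction, a least-squares scale, a rotation from an SVD of the cross-covariance, and a translation) is sketched below, so that a cluster center x maps to approximately R(s·x) + t; the SVD construction and the particular scale formula are assumptions rather than the patent's own formulas.

```python
import numpy as np

def align_similarity(feature_locs, cluster_centers):
    """Similarity alignment (scale, rotation, translation) of class geometry
    cluster centers onto scene feature locations.

    feature_locs, cluster_centers: n x 3 arrays of corresponding points.
    """
    Y = np.asarray(feature_locs, dtype=float)
    X = np.asarray(cluster_centers, dtype=float)
    y_mean, x_mean = Y.mean(axis=0), X.mean(axis=0)
    Yc, Xc = Y - y_mean, X - x_mean                           # steps 1 and 2: mean subtraction
    s = np.sqrt((Yc ** 2).sum() / (Xc ** 2).sum())            # dimensionless scale (step 3)
    U, _, Vt = np.linalg.svd(Yc.T @ (s * Xc))                 # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflection
    R = U @ D @ Vt                                            # least-squares rotation
    t = y_mean - R @ (s * x_mean)                             # translation
    return s, R, t
```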
  • the geometry likelihood ratio is computed using this aligning transformation.
  • the feature likelihood ratio is computed as the product of the appearance likelihood ratio and the geometry likelihood ratio.
  • Let k be the index of a scene feature; let i be the index of an appearance cluster, and j be the index of a geometry cluster such that the feature likelihood ratio exceeds a threshold.
  • h(k) = [i, j] is added to the vector h, thereby associating scene feature k with the appearance/geometry pair [i, j].
  • the aligning transformation is recomputed including the new geometry feature/cluster matches and the process above repeated until no new matches are found.
  • range and co-located image intensity information is acquired by a stereo system, as described above.
  • range and co-located image intensity information may be acquired in a variety of ways.
  • a stereo system may be used, but of a different implementation. Active lighting may or may not be used. If used, the active lighting may project a 2-dimensional pattern, or a light stripe, or other structured lighting. For the purposes of this invention, it suffices that the stereo system acquires a range image with acceptable density and accuracy.
  • the multiple images used for the stereo computation may be obtained by moving one or more cameras. This has the practical advantage that it increases the effective baseline to the distance of camera motion.
  • range and image intensity may be acquired by different sensors and registered to provide co-located range and intensity. For example, range might be acquired by a laser range finder and image intensity by a camera.
  • the images may be in any part of the electro-magnetic spectrum or may be obtained by combinations of other imaging modalities such as infra-red imaging or ultraviolet imaging, ultra-sound, radar, or lidar.
  • images are locally transformed so they appear as if they were viewed along the surface normal at a fixed distance.
  • other standard orientations or distances could be used. Multiple standard orientations or distances could be used, or the standard orientation and distance may be adapted to the imaging situation or the sampling limitations of the sensing device.
  • images are transformed using a second order approximation, as described above.
  • local transformation may be performed in other ways. For example, a first-order approximation could be used, so that the local region is represented as a flat surface. Alternatively, a higher order approximation could be used.
  • the local transformation may be incorporated into interest point detection, or into the computation of feature descriptors. For example, in
  • the image is locally transformed, and then interest points are found by computing the eigenvalues of the gradient image covariance matrix.
  • An alternative embodiment may omit an explicit transformation step and instead compute the eigenvalues of the gradient image covariance matrix as if the image were transformed.
  • One way to do so is to integrate transformation with the computation of the gradient by using the chain rule applied to the composition of the image function and the transformation function.
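  • As a small illustration of that chain-rule integration, the gradient of the composed function I(W(x)) can be formed from the Jacobian of the warp and the gradient of the original image, without resampling the image; the function and argument names below are illustrative.

```python
import numpy as np

def warped_gradient(grad_I_at, warp, warp_jacobian, x):
    """Gradient of I(W(x)) by the chain rule: J_W(x)^T * grad I(W(x)).

    grad_I_at(p): 2-vector image gradient at image point p.
    warp(x): maps transformed-patch coordinates x to original image coordinates.
    warp_jacobian(x): 2x2 Jacobian of the warp at x.
    """
    return np.asarray(warp_jacobian(x)).T @ np.asarray(grad_I_at(warp(x)))
```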
  • Such techniques in which the transformation step is incorporated into interest point detection or into feature descriptor computation, are equivalent to a transformation step followed by interest point detection or feature descriptor computation. Hence, when transformation is described, it will be understood that this may be accomplished by a separate step or may be incorporated into other procedures.
  • interest points are found by computing the eigenvalues of the gradient image covariance matrix, as described above.
  • interest points may be found by various alternative techniques. Several interest point detectors are described in Mikolajczyk et al, "A Comparison of Affine Region Detectors", to appear in International Journal of Computer Vision. There are other interest point detectors as well. For such a technique to be suitable, it suffices that points found by a technique be invariant or nearly invariant to substantial changes in rotation about the optical axis and illumination. [00166] In the first and second embodiments, a single technique was described to find interest points. In alternative embodiments, multiple techniques may be applied
  • PA2777US - 50 - simultaneously.
  • an alternative embodiment may use both a Harris-style corner detector and a Harris-Laplace interest point detector.
  • interest points were computed solely from intensity or from range.
  • a combination of both may be used.
  • intensity features located along occluding contours may be detected.
  • specialized feature detectors may be employed.
  • feature detectors may be specifically designed to detect written text.
  • feature detectors for specialized geometries may be employed, for example a detector for handles.
  • Alternative embodiments may also employ specialized feature detectors that locate edges. These edges may be located in the intensity component of the 3D image, the range component of the 3D image, or where the intensity and range components are both consistent with an edge.
  • the intensity image is transformed before computing interest point locations. This carries a certain computational cost.
  • Alternative embodiments may initially locate interest points in the original image and subsequently transform the neighborhood of the image patch to refine the interest point location and compute the feature descriptor. This speeds up the computation, but may result in less repeatability in interest point detection.
  • several interest point detectors may be constructed that implicitly locate features at a specific slant or tilt angle. For example, derivatives may be computed at different scales in the x and y directions to account for the slant or tilt of the surface rather than explicitly transforming the surface. Surfaces may be classified into several classes of slant and tilt, and the detector appropriate for that class applied to the image in that region.
  • the first phase of interest point detection in the untransformed image may be used as an initial filter. In this case, the neighborhood of the image patch is transformed and the transformed neighborhood is retested for an interest point, possibly with a more discriminative interest point detector. Only those interest points that pass the retest step are accepted. In this way, it may be possible to enhance the selectivity or stability of interest points.
  • the location of an interest point is computed to the nearest pixel.
  • the location of an interest point may be refined to sub-pixel accuracy.
  • In this case, interest points are associated with image locations rather than integer pixel locations. Typically, this will improve matching because it establishes a localization that is less sensitive to sampling effects and change of viewpoint.
  • Choosing Interest Points to Reduce the Effects of Clutter [00174]
  • interest points may be chosen anywhere on an object. In particular, interest points may be chosen on the edge of an object. When this occurs, the appearance about the interest point in an observed scene may not be stable, because different backgrounds may cause the local appearance to change. In alternative embodiments, such unstable interest points may be eliminated in many situations, as follows. From the range data, it is possible to compute range discontinuities, which generally correspond to object discontinuities. Any interest point that lies on a large range discontinuity is eliminated.
  • An alternative embodiment employing this refinement may have interest points that are more stable in cluttered backgrounds.
  • Determining Local Orientation at an Interest Point [00175]
  • the local orientation at an interest point is found as described above.
  • the local orientation may be computed by alternative techniques. For example, a histogram may be computed of the values of the gradient orientation and peaks of the histogram used for local orientations.
  • Standard Viewing Direction [00176]
  • the local image in the neighborhood of an interest point is transformed so it appears as if it were viewed along the surface normal.
  • each feature descriptor includes a geometric descriptor, an appearance descriptor, and a qualitative descriptor.
  • Alternative embodiments may have feature descriptors with fewer or more elements.
  • Some alternatives may have no qualitative descriptor; such alternatives omit the initial filtering step during recognition and all the features in the model database are considered as candidate matches.
  • Other alternatives may omit some of the elements in the qualitative features described in the first and second embodiments.
  • Still other alternatives may include additional elements in the qualitative descriptor.
  • Various functions of the appearance descriptor may be advantageously used. For example, the first K components of a principal component analysis may be included, or a histogram of appearance values may be included.
  • Some alternatives may have no geometric descriptor. In such cases, recognition is based on appearance.
  • Other alternatives may expand the model to include inter-feature relationships.
  • For example, each feature may have associated with it the distances to its K nearest features, or the K angles between the feature normal and the vectors to those K nearest features. These relationships are pose-invariant; other pose-invariant relationships between two or more features may also be included in the object model.
  • Such inter-feature relationships may be used in recognition, particularly in the filtering step.
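The following is a minimal sketch of how such pose-invariant inter-feature relationships could be computed, assuming each feature contributes a 3D position and a unit surface normal; the function name and K are illustrative.

```python
import numpy as np

def pose_invariant_relations(positions, normals, k=3):
    """For each feature, compute distances to its k nearest features and the
    angles between the feature normal and the vectors to those features.

    positions: (N, 3) array of 3D feature locations.
    normals:   (N, 3) array of unit surface normals, one per feature.
    Both quantities are unchanged by a rigid transformation of the object,
    so they are pose-invariant.
    """
    n = len(positions)
    dists = np.zeros((n, k))
    angles = np.zeros((n, k))
    for i in range(n):
        offsets = positions - positions[i]
        d = np.linalg.norm(offsets, axis=1)
        nearest = np.argsort(d)[1:k + 1]          # skip the feature itself
        dists[i] = d[nearest]
        unit = offsets[nearest] / d[nearest, None]
        angles[i] = np.arccos(np.clip(unit @ normals[i], -1.0, 1.0))
    return dists, angles
```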
  • the appearance descriptor is the local intensity image and the local range image, each transformed so it appears to be viewed frontally centered.
  • appearance descriptors may be various functions of the local intensity image and local range image. Various functions may be chosen for various purposes such as speed of computation, compactness of storage and the like.
  • One group of such functions is distribution-based appearance descriptors, which use a histogram or equivalent technique to represent appearance as a distribution of values.
  • Another group is spatial-frequency descriptors, which use frequency components.
  • Another group of functions is differential feature descriptors, which use a set of derivatives.
  • appearance descriptors may be explicitly constructed to have special properties desirable for a particular application. For example, appearance descriptors may be constructed to be invariant to rotation about the camera axis. One way of doing this is to use radial histograms. In this case, an appearance descriptor may consist of histograms for each circular ring about an interest point.
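A minimal sketch of a radial-histogram descriptor follows, assuming a square intensity patch centered on the interest point whose values have been scaled to a known range; the number of rings and bins, and the value range, are illustrative assumptions.

```python
import numpy as np

def radial_histogram_descriptor(patch, num_rings=4, num_bins=8,
                                value_range=(0.0, 1.0)):
    """Build an appearance descriptor from concentric rings about the center.

    Because each ring is symmetric under rotation about the camera axis, the
    concatenated ring histograms do not change when the patch is rotated in
    the image plane, giving invariance to rotation about the camera axis.
    """
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)
    ring_edges = np.linspace(0.0, radius.max() + 1e-6, num_rings + 1)
    descriptor = []
    for r0, r1 in zip(ring_edges[:-1], ring_edges[1:]):
        ring = patch[(radius >= r0) & (radius < r1)]
        hist, _ = np.histogram(ring, bins=num_bins, range=value_range)
        # Normalize each ring so the descriptor is insensitive to the number
        # of pixels falling in that ring.
        descriptor.append(hist / max(hist.sum(), 1))
    return np.concatenate(descriptor)
```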
  • Appearance Descriptors Based on Geometry There are additional appearance descriptors based on local geometry information that have the desired invariance properties.
  • One class of such geometry-based appearance descriptors is represented by spin images, as described in Johnson and Hebert, "Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 5, May 1999, pp. 433-449.
  • An alternative embodiment using geometry-based descriptors may fit analytic surface patches to the range data, growing each patch to be as large as possible consistent with an acceptably good fit to the data. It would classify each surface patch as to quadric type, e.g., plane, elliptic cylinder, elliptic cone, elliptic paraboloid, ellipsoid, etc. Each interest point on a surface would have an appearance descriptor constructed from the surface on which it is found. The descriptor would consist of two levels, lexicographically ordered: the quadric type would serve as the first-level descriptor, while the parameters of the quadric would serve as the second-level descriptor.
  • the appearance descriptors have a high dimension. For databases consisting of a very large number of objects, this may be undesirable, since the storage requirements and the search are at least linear in the dimension of the appearance descriptors.
  • Alternative embodiments may reduce the dimensionality of the data.
  • One well-known technique for doing so is principal component analysis, sometimes referred to as the "Karhunen-Loeve transform". Another is linear discriminant analysis (LDA). Alternative embodiments may use these or other techniques to reduce the dimensionality of the data.
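As an illustration, the principal component analysis option can be sketched as follows; the function names are hypothetical and K is an application-dependent choice.

```python
import numpy as np

def fit_pca(descriptors, k):
    """Fit a K-component PCA model to a set of appearance descriptors.

    descriptors: (N, D) array, one high-dimensional descriptor per row.
    Returns the mean and the top-k principal directions, which can be used
    to project descriptors into a k-dimensional space.
    """
    mean = descriptors.mean(axis=0)
    centered = descriptors - mean
    # Right singular vectors of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(descriptor, mean, components):
    """Reduce a single descriptor to its first k principal components."""
    return components @ (descriptor - mean)
```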
  • the object and class likelihood ratios are approximated by replacing a sum and integral by maximums, as described above. In alternative embodiments, these likelihood ratios may be approximated by considering additional terms. For example, rather than the single maximum, the K elements resulting in the highest probabilities may be used. K may be chosen to mediate between accuracy and computational speed.
  • In the first and second embodiments, the feature likelihood ratio was computed by replacing the denominator with a single value.
  • In alternative embodiments, the K largest likelihood values from objects other than that under consideration may be used.
  • The denominator P(f | ~O) may be precomputed from the object database and stored for each feature and object in question.
  • the first and second embodiments approximate the pose distribution by taking it to be uniform.
  • Alternative embodiments may use models with priors on the distribution of the pose of each object.
  • Object Database Construction [00193] In the first and second embodiments, the database of object models is constructed from views of the object obtained under controlled conditions. In alternative embodiments, the conditions may be less controlled. There may be other objects in the view, or the relative pose of the object in the various views may not be known. In these cases, additional processing may be required to construct the object models. In the case of views with high clutter, it may be necessary to build up the model database piecewise by doing object recognition to locate the object in the view.
  • Discriminative Features in the Database [00194] In the first and second embodiments, all feature descriptors in the database are treated equally. In practice, some feature descriptors are more specific in their discrimination than others. The discriminatory power of a feature descriptor in the database may be computed in a variety of ways.
  • For example, a discriminatory appearance descriptor is one that is dissimilar to the appearance descriptors of all other objects.
  • Alternatively, mutual information may be used to select a set of feature descriptors that are collectively selective.
  • the measure of the discriminatory power of a feature descriptor may be used to impose a cut-off such that all features below a threshold are discarded from the model.
  • the discriminatory power of a feature may be used as a weighting factor.
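One simple way to realize the dissimilarity-based measure is sketched below: score each model descriptor by its distance to the nearest descriptor belonging to a different object, then threshold or weight by that score. The function name and the use of Euclidean distance are assumptions for illustration.

```python
import numpy as np

def discriminatory_power(model_descriptors, model_object_ids):
    """Score each model feature descriptor by how dissimilar it is to the
    descriptors of all other objects in the database.

    model_descriptors: (N, D) array of appearance descriptors.
    model_object_ids:  length-N array naming the object of each descriptor.
    Larger scores indicate more discriminative features; the scores may be
    thresholded to discard weak features or used as weights in recognition.
    """
    model_object_ids = np.asarray(model_object_ids)
    scores = np.zeros(len(model_descriptors))
    for i, (desc, obj) in enumerate(zip(model_descriptors, model_object_ids)):
        others = model_descriptors[model_object_ids != obj]
        # Distance to the closest descriptor of any other object.
        scores[i] = np.min(np.linalg.norm(others - desc, axis=1))
    return scores
```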
  • In the second embodiment, each class model includes a geometric model, an appearance model, a qualitative model, and a co-occurrence table.
  • Alternative embodiments may have different class models. Some embodiments may have no qualitative model. Other embodiments may have fewer or additional components of the qualitative descriptors of the object models and hence a correspondingly different qualitative model.
  • In the second embodiment, a fixed number of clusters K was chosen. In alternative embodiments, the number of clusters K may be varied. In particular, it is desirable to choose clusters that contain features coming from a majority of the objects in a class. To create such a model, it may be desirable to create a model with K clusters, then to remove features that appear in clusters with little support. K can then be reduced and the process repeated until all clusters contain features from a majority of the objects in the class.
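A minimal sketch of that pruning loop is shown below. The clustering routine itself is left abstract (`fit_clusters` is an assumed helper, for example a Gaussian mixture fit returning one label per feature); the majority criterion follows the text above.

```python
def prune_cluster_count(features, object_ids, k_init, fit_clusters):
    """Reduce the number of clusters K until every remaining cluster is
    supported by a majority of the objects in the class.

    features:     list of feature descriptors pooled over all class members.
    object_ids:   parallel list giving the object each feature came from.
    fit_clusters: assumed helper clustering `features` into k groups and
                  returning a cluster label per feature.
    """
    num_objects = len(set(object_ids))
    k = k_init
    while k > 1:
        labels = fit_clusters(features, k)
        keep = []
        for cluster in range(k):
            members = {object_ids[i] for i, c in enumerate(labels) if c == cluster}
            # A cluster is kept only if features from more than half of the
            # class's objects fall in it.
            if len(members) > num_objects / 2:
                keep.extend(i for i, c in enumerate(labels) if c == cluster)
        if len(keep) == len(features):       # every cluster already supported
            return features, k
        if not keep:                          # degenerate case: stop early
            break
        # Drop weakly supported features and retry with a smaller K.
        features = [features[i] for i in keep]
        object_ids = [object_ids[i] for i in keep]
        k -= 1
    return features, k
```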
  • Euclidean distance was used in the nearest neighbor algorithm.
  • the second embodiment uses a set of largely decoupled models.
  • a Gaussian Mixture Model is computed for geometry, for the qualitative descriptor, for the image intensity descriptor, and for the range descriptor, as described above. In alternative embodiments some or all of these may be computed jointly. This may be accomplished by concatenating the appearance descriptor and feature location and clustering this joint vector.
  • Alternatively, a decoupled model can be computed and appearance-geometry pairs with high co-occurrence can be associated with each other.
  • The second embodiment represents the geometry model as a set of distributions of the variation in position of feature descriptors given nominal pose and global scale normalization. Because of the global scale normalization in the class model and in recognition, an object and a scaled version of the object in a scene can be recognized equally well, provided that the scaling is according to the global scale normalization of the class.
  • Alternative embodiments may not model the global scale variation within a class, and in recognition there is no rescaling. Consequently, a scaled version of an object will be penalized for its deviation from the nominal size of the class.
  • Depending on the application, either the semantics of the second embodiment or the semantics of such an alternative embodiment may be appropriate.
  • In alternative embodiments, a wider range of local and global scale and shape models may be used. Instead of a single global scale, different scaling factors may be used along different axes, resulting in a global shape model.
  • For example, affine deformations might be used as a global shape model.
  • the object may be segmented into parts, and a separate shape model constructed for each part.
  • For example, a human figure may be segmented into its rigid limb structures, and a shape model for each structure developed independently.
  • the second embodiment builds scale models using equal weighting of the features. However, if some feature clusters contain more features and/or have smaller variance, alternative embodiments may weight those features more highly when computing the local and global shape models.
  • the second embodiment performs recognition by computing the class likelihood ratio based on probability models computed from the feature descriptors of objects belonging to a class.
  • Alternative embodiments may represent a class by other means. For example, a support vector machine may be used to learn the properties of a class from the feature descriptors of objects belonging to a class. Alternatively, many other machine-learning techniques described in the literature may be used to learn the properties of a class from the feature descriptors of objects belonging to a class and may be used in this invention to recognize class instances.
  • Class Database Construction [00203] The second embodiment computes class models by independently normalizing the size of each object in the class, and then computing geometry clusters for all size-normalized features.
  • object models may be matched to each other, subject to a group of global deformations, and clustering performed when all class members have been registered to a common frame. This may be accomplished by first clustering on feature appearance. The features of each object that are associated with a particular cluster may be taken to be potential correspondences among models.
  • the second embodiment constructs a separate model for each class; in particular, the clusters of one class are not linked to the clusters of another.
  • Alternative embodiments may construct class models that share features. This may speed up database construction, since class models for previously encountered features may be re-used when processing a new class. It may also speed up recognition, since a shared feature is represented once in the database, rather than multiple times.
  • Filtering Matches in Recognition [00205]
  • the attempt to match an observed feature to the model database is made faster by using the qualitative descriptor as a filter and by using multiple binary searches to implement the lookup.
  • Alternative embodiments may do the lookup in a different way.
  • Various data structures might be used in place of the ordered lists described in the first embodiment.
  • Various data structures that can efficiently locate nearest neighbors in a multi-dimensional space may be used.
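One widely used option for multi-dimensional nearest-neighbor lookup is a k-d tree. The sketch below uses SciPy's `cKDTree`; the descriptor file name is hypothetical and the choice of k candidates per query is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

# Build the index once over all model feature descriptors (rows of a matrix).
model_descriptors = np.load("model_descriptors.npy")   # hypothetical file
tree = cKDTree(model_descriptors)

def candidate_matches(scene_descriptor, k=5):
    """Return indices of the k model descriptors nearest a scene feature,
    together with their distances, for use as candidate matches."""
    distances, indices = tree.query(scene_descriptor, k=k)
    return indices, distances
```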
  • Recognition - Obtaining an Initial Alignment [00206] The first embodiment obtains an initial alignment of the model with a portion of the scene by using a single correspondence <f*, g*> as described above.
  • Alternative embodiments may obtain an initial alignment in other ways.
  • One alternative is to replace the single correspondence <f*, g*> with multiple corresponding points <f1, g1>, ..., <fN, gN>, where all the model features gi belong to the same object.
  • The latter approach may provide a better approximation to the correct aligning pose if all the fi are associated with the same object in the scene.
  • When N is at least 3, the alignment may be computed using only the position components, which may be advantageous if the surface normals are noisier than the positions.
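For the position-only case with N >= 3 correspondences, the standard SVD-based least-squares rigid alignment (often attributed to Kabsch or Horn) can be used; the following sketch is offered as one possible realization, not as the embodiment's specific procedure.

```python
import numpy as np

def align_from_correspondences(scene_pts, model_pts):
    """Estimate the rigid pose (R, t) mapping model points onto scene points
    from N >= 3 corresponding 3D positions, using positions only."""
    scene_pts = np.asarray(scene_pts, dtype=float)
    model_pts = np.asarray(model_pts, dtype=float)
    sc, mc = scene_pts.mean(axis=0), model_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - mc).T @ (scene_pts - sc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ mc
    return R, t
```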
  • Another alternative is to replace the table L with a different mechanism for choosing correspondences. Correspondences may be chosen at random or according to some probability distribution. Alternatively, a probability distribution could be constructed from M or L and the RANSAC method may be employed, sampling from possible feature correspondences. Also, groups of correspondences <f1, g1>, ..., <fN, gN> may be chosen so that the fi are in a nearby region of the observed scene, so as to improve the chance that all the fi are associated with the same object in the scene.
  • For example, distance in the scene may be used as a weighting function for choosing the fi.
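In the spirit of the RANSAC alternative, one could repeatedly sample small groups of candidate correspondences, compute a pose from each group, and keep the pose supported by the most matches. The sketch below reuses `align_from_correspondences` from the previous sketch; the iteration count and inlier threshold are illustrative assumptions.

```python
import numpy as np

def ransac_initial_pose(scene, model, candidate_pairs,
                        iters=500, inlier_dist=0.01):
    """Sample groups of candidate correspondences and keep the pose that is
    consistent with the largest number of candidate matches.

    scene, model:    (Ns, 3) and (Nm, 3) arrays of 3D feature positions.
    candidate_pairs: list of (scene_index, model_index) candidate matches.
    """
    rng = np.random.default_rng(0)
    s_all, m_all = (list(t) for t in zip(*candidate_pairs))
    best_pose, best_inliers = None, 0
    for _ in range(iters):
        sample = rng.choice(len(candidate_pairs), size=3, replace=False)
        s_idx, m_idx = zip(*(candidate_pairs[i] for i in sample))
        R, t = align_from_correspondences(scene[list(s_idx)],
                                          model[list(m_idx)])
        # Count candidate matches whose scene position agrees with the pose.
        predicted = model[m_all] @ R.T + t
        residuals = np.linalg.norm(scene[s_all] - predicted, axis=1)
        inliers = int((residuals < inlier_dist).sum())
        if inliers > best_inliers:
            best_pose, best_inliers = (R, t), inliers
    return best_pose, best_inliers
```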
  • Similar considerations apply to class recognition. There are many ways of choosing correspondences to obtain an initial alignment of the class model with a portion of the scene. An example will illustrate the diversity of possible techniques. When choosing the correspondences <f1, g1>, ..., <f4, g4> described in the second embodiment, it is desirable that all the fi are associated with the same object in the scene.
  • Smooth surfaces may be extracted from the range data, and each interest point may then be associated with the surface on which it is found.
  • Typically, each surface so extracted lies on only one object of the scene, so that the collection of interest points on a surface belongs to the same object. This association may be used to choose correspondences so that all the fi are associated with the same object.
  • In the first embodiment, when an initial alignment does not lead to a recognized object, the initial match between f* and g* is disallowed as an initial match.
  • In alternative embodiments, the initial match may be disallowed only temporarily and other matches considered. If there are disallowed matches and an object is recognized subsequent to the match being disallowed, the match is reallowed and the recognition process repeated.
  • This alternative embodiment may improve detection of objects that are partially occluded.
  • In other alternative embodiments, the probability P(F | O, χ) can take into account recognized objects that may occlude O. This may increase the likelihood ratio for the object O when occluding objects are recognized.
  • The first and second embodiments compute probabilities and approximations to probabilities; they base the decision as to whether an object or class instance is present in an observed scene on an approximation to the likelihood ratio.
  • In alternative embodiments, the computation may be performed without considering explicit probabilities. For example, rather than compute the probability of an observed scene feature f given a model object feature or model class feature g, an alternative embodiment may simply compute a match score between f and g. Various match score functions may be used. Similar considerations apply to matches between groups of scene features F and model or class features G.
  • In such embodiments, the decision as to whether an object or class instance is present in an observed scene may be based on the value of a match score compared to empirically obtained criteria, and these criteria may vary from object to object and from class to class.
  • Hierarchical Recognition [00211] The first embodiment recognizes specific objects; the second embodiment recognizes classes of objects. In alternative embodiments, these may be combined to enhance recognition performance. That is, an object in the scene may first be classified by class, and subsequent recognition may consider only objects within that class. In other embodiments, there may be a hierarchy of classes, and recognition may proceed by starting with the most general class structure and progressing to the most specific.
  • Implementation of Procedural Steps [00212] The procedural steps of the several embodiments have been described above.
  • The steps may be implemented in a variety of programming languages, such as C++, C, Java, Ada, Fortran, or any other general-purpose programming language. These implementations may be compiled into the machine language of a particular computer or they may be interpreted. They may also be implemented in the assembly language or the machine language of a particular computer. The method may be implemented on a computer executing program instructions stored on a computer-readable medium. [00213] The procedural steps may also be implemented in specialized programmable processors. Examples of such specialized hardware include digital signal processors (DSPs), graphics processors (GPUs), media processors, and streaming processors. [00214] The procedural steps may also be implemented in electronic hardware designed for this task. In particular, integrated circuits may be used.
  • Application to Face Recognition The present invention may be applied to face recognition.
  • Prior techniques for face recognition have used either appearance models or 3D models, or have combined their results only after separate recognition operations.
  • Using the pose-invariant features described herein, which combine range and intensity information, face recognition may be performed advantageously.
  • Other Applications [00218] The invention is not limited to the applications listed above.
  • The present invention can also be applied in many other fields such as inspection, assembly, and logistics. It will be recognized that this list is intended as illustrative rather than limiting and the invention can be utilized for varied purposes.
  • The above description is intended in all respects to be illustrative rather than restrictive. It will be recognized that the terms “comprising,” “including,” and “having,” as used herein, are specifically intended to be read as open-ended terms of art.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

A system and method for performing object and class recognition that allows for wide changes of viewpoint and distance of objects is disclosed. The invention provides for choosing pose-invariant interest points of a three-dimensional (3D) image, and for computing pose-invariant feature descriptors of the image. The system and method also allows for the construction of three-dimensional (3D) object and class models from the pose-invariant interest points and feature descriptors of previously obtained scenes. Interest points and feature descriptors of a newly acquired scene may be compared to the object and/or class models to identify the presence of an object or member of the class in the new scene.

Description

SYSTEM AND METHOD FOR 3D OBJECT RECOGNITION USING RANGE AND INTENSITY
CROSS-REFERENCE TO RELATED APPLICATIONS [001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 60/582,461, filed June 23, 2004, entitled "A system for 3D Object Recognition Using Range and Appearance," which is incorporated herein by reference in its entirety.
BACKGROUND OF THE INVENTION
Field of the Invention [002] The present invention relates generally to the field of computer vision and, in particular, to recognizing objects and instances of visual classes.
Description of the Prior Art [003] Generally speaking, the object recognition problem is to determine which, if any, of a set of known objects is present in an image of a scene observed by a video camera system. The first step in object recognition is to build a database of known objects. Information used to build the database may come from controlled observation of known objects, or it may come from an aggregation of objects observed in scenes without formal supervision. The second step in object recognition is to a match a new observation of a previously viewed object with its representation in the database. [004] The difficulties with object recognition are manifold, but generally relate to the fact that objects may appear very differently when viewed from a different perspective, in
PA2777US - 1 - a different context, or under different lighting. More specifically, three categories of problems can be identified: (1) difficulties related to changes in object orientation and position relative to the observing camera (collectively referred to as "pose"); (2) difficulties related to change in object appearance due to lighting ("photometry"); and (3) difficulties related to the fact that other objects may intercede and obscure portions of known objects ("occlusion"). [005] Class recognition is concerned with recognizing instances of a class, to determine which, if any, of a set of known object classes is present in a scene. A general object class may be defined in many ways. For example, if it is defined by function then the general class of chairs contains both rocking chairs and club chairs. When a general class contains objects that are visually dissimilar, it is convenient to divide it into sub-classes so that the objects in each are visually similar. Such a subclass is called a "visual object class." General class recognition is then done by visual class recognition of the sub-class, followed by semantic association to find the general class containing the sub-class. In the case of chairs, an instance of a rocking chair might be recognized based on its visual characteristics, and then database lookup might find the higher-level class of chair. A key part of this activity is visual class recognition. [006] The first step in visual class recognition is to build a database of known visual classes. As with objects, information used to build the database may come from controlled observation of designated objects or it may come from an aggregation, over time, of objects observed in scenes without formal supervision. The second step in visual class recognition is to match new observations with their visual classes as represented in the database. It is convenient to adopt the shorthand "object class" in place of the longer
PA2777US - 2 - "visual object class." Subsequent discussion will use "object class" with this specific meaning. [007] Class recognition has the problems of object recognition, plus an additional category: difficulties related to within-class or intra-class variation. The instances of a class may vary in certain aspects of their shape or their visual appearance. A class recognizer must be able to deal with this additional variability. [008] Hithertofore, there have been no entirely satisfactory solution to these problems. Substantial research has been devoted to object and class recognizers, but there are none that can recognize a very wide variety of objects or classes from a wide variety of viewpoints and distances. Prior Work in Object Recognition [009] It is convenient to discuss the work in object recognition first. This work can be divided into two basic approaches: geometry-based approaches and appearance-based approaches. Broadly speaking, geometry-based approaches rely on matching the geometric structure of an object. Appearance-based approaches rely on using the intensity values of one or more spectral bands in the camera image; this may be grey- scale, color, or other image values. [0010] Geometry-based approaches recognize objects by recording aspects of three- dimensional geometry of the object in question. Grimson, Object Recognition by Computer: The Role of Geometric Constraints, MIT Press 1990, describes one such system. Another system of this type is described in Johnson and Hebert, "Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes", IEEE Transactions on Pattern Analysis and machine Intelligence, Vol. 21, No5. pp 433-448. Another such
PA2777US - 3 - system is described in Frome et al, "Recognizing Objects in Range Data Using Regional Point Descriptors", Proceedings of the European Conference on Computer Vision, May 2004, pp 224-237. These systems rely on the fact that certain aspects of object geometry do not change with changes in object pose. Examples of these aspects include the distance between vertices of the object, the angles between faces of an object, or the distribution of surface points about some distinguished point. Geometry-based approaches are insensitive to pose by their choice of representation and they are insensitive to photometry because they do not use intensity information. [0011] The main limitation of these systems is due to the fact that they do not utilize intensity information, i.e., they do not represent the difference between objects that have similar shape, but differing appearance in the intensity image. For example, many objects in a grocery store have similar size and shape (e.g., cans of soup), and only differ in the details of their outward appearance. Furthermore, many common objects that have simple geometric form, such as cylinders, rectangular prisms or spheres, do not provide sufficiently unique or, in some cases, well-defined geometric features to work from. [0012] One group of appearance-based approaches uses the 2D intensity image of the entire object to be recognized or a large portion thereof. There are many variations on the approach. Some of the more important variations are described in the following papers: Turk and Pentland, 'Eigenfaces for Recognition'. Journal of Cognitive Neuroscience, 1991, 3 (1), pp 71-86; Murase and Nayar, "Visual Learning and Recognition of 3-D Objects from Appearance", International Journal of Computer Vision, 1995, 14, pp 5-24; and Belhumeur, et al, "Eigenfaces vs. Fisherfaces: Recognition
PA2777US - 4 - Using Class Specific Linear Projection", IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(7), pp 711-720. [0013] This group of approaches has several difficulties. Images of an object can change greatly based on the pose of the object and the lighting of the scene, so many images are, in principle, necessary. A more fundamental limitation is that the approach assumes that the object to be recognized has already been isolated ("segmented") from the video image by other means, but segmentation is often difficult, if not impossible. Finally, a further limitation arises from the fact that if a significant portion of the object becomes occluded, the recorded images will no longer match. [0014] Another group of approaches uses local, rather than global, intensity image features. These methods take advantage of the fact that small areas of the object surface are less prone to occlusion and are less sensitive to illumination changes. There are many variations on the method. In general terms, the method consists of the following steps: detecting significant local regions, constructing descriptors for these local regions, and using these local regions in matching. [0015] Most of these methods build a database of object models from 2D images and recognize acquired scenes as 2D images. There are many papers using this approach. Representative papers include the following: Schmid and Mohr "Local Grayvalue Invariants for Image Retrieval", /EEE Transactions on Pattern Recognition and Machine Intelligence, 19, 5 (1997) pp 530-534; Mikolajczyk and Schmid, "An affine invariant interest point detector", European Conference on Compute Vision 2002 (ECCV), pp. 128-142; Lowe, "Object recognition from local scale-invariant features", International Conference on Computer Vision, 1999 (ICCV), pp. 1150-1157; and Lowe "Distinctive
PA2777US - 5 - Image Features from Scale-Invariant Keypoints, accepted for publication in the International Journal of Computer Vision, 2004. Patents in this area include Lowe, U.S. Patent No. 6,711,293. [0016] A variant of this technique builds a 3D database of object models from 2D images and recognizes acquired scenes as 2D images. This approach is described in Rothganger et al, "3D Object Modeling and Recognition Using Local Affine-Invariant Patches and Multi-View Spatial Constraints", Conference on Computer Vision and Pattern Recognition, (CVPR 2003), pp 272-277, and Rothganger, et al, "3D Object Modeling and Recognition Using Local Affine-Invariant Image Descriptors and Multi-View Spatial Constraints", International Journal of Computer Vision, 2005. [0017] While local features are less sensitive to changes in illumination and occlusion, they are still sensitive to changes in the geometric relationship between the camera and the viewed surface. That is, a small patch of a surface when viewed head-on looks very different from when the same patch is viewed obliquely. Likewise, a surface feature viewed at a small distance looks different when viewed from a large distance. Thus, the principle difficulty in feature-based object recognition is to find a representation of local features that is insensitive to changes in distance and viewing direction so that objects may be accurately detected from many points of view. Currently available methods do not have a practical means for creating such feature representations. Several of the above methods provide limited allowance for viewpoint change; however, the ambiguity inherent in a 2D image means that in general it is not possible to achieve viewpoint invariance.
PA2777US - 6 - [0018] A third approach to object recognition combines 3D and 2D images in the context of face recognition. A survey of this work is given in Bowyer et al, "A Survey of approaches to Three-Dimensional Face Recognition", International Conference on Pattern Recognition, (ICPR), 2004, pp 358-361. This group of techniques is generally referred to as "multi-modal." In the work surveyed, the multi-modal approach uses variations of a common technique, which is that a 3D geometry recognition result and a 2D intensity recognition result are each produced without reference to the other modality, and then the recognition results are combined by some voting mechanism. Hence, the information about the 3D location of intensity data is not available for use in recognition. In particular, the 2D intensity image used in 2D recognition is not invariant to change of pose. Prior Work in Class Recognition [0019] Prior work in class recognition has been along lines similar to object recognition and suffers from related difficulties. [0020] One line of research represents a class as an unordered set of parts. Each part is represented by a model for the local appearance of that part, generalized over all instances of the class. The spatial relationship of the parts is ignored. One paper taking this approach is Dorko and Schmid, "Selection of Scale-Invariant Parts for Object Class Recognition", ICCV 2003, pp. 634-640. A later paper by the same authors, expanding on this approach, is "Object Class Recognition Using Discriminative Local Features" submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence. In this work, training data is acquired from 2D images. The appearance of each part in a class is represented by a Gaussian mixture model obtained from the intensity appearance of the
PA2777US - 7 - part in the various training images. There are several difficulties with this general approach. The most important limitation is that since the geometric relationship of the parts is not represented, considerable important information is lost. An object with its parts jumbled into random locations will be recognized just as well as the object itself. [0021] Another line of research represents a class as a constellation of parts with 2D structure. Each part is represented by a model for the local intensity appearance of that part, generalized over all instances of the class, while the geometric relationship of the parts is represented by a model in which spatial location is generalized over all instances of the class. Two papers applying this approach are Burl et al, "A probabilistic approach to object recognition using local photometry and global geometry", Proc. European Conference on Computer Vision (ECCV) 1998, pp 628-641, and Fergus et al, "Object Class Recognition by Unsupervised Scale-Invariant Learning", Computer Vision and Pattern Recognition, 2003, pp 264-271. Another paper along these lines is Helmer and Lowe, "Object Class Recognition with Many Local Features", IEEE Computer Vision and Pattern Recognition Workshops, 2004 (CVPRW 04), pp. 187 ff. [0022] The appearance of the parts and their geometric relationship is the result of generalizing from a set of 2D images of class instances. A generalized class instance is represented by a set of Gaussian functions for the appearance of parts and for their relationship in a 2D generalized image. There are two difficulties with this approach. First, the local appearance of parts is not pose invariant. Second, the relationship of the parts is acquired and modeled only as the parts occur in 2D images; the underlying 3D spatial relationship is not observed, computed, nor modeled. Consequently, the range of viewpoints is limited.
PA2777US - 8 - [0023] Hence, there is a need for a system and method able to perform object and class recognition over wide changes in distance and viewing direction, and one that is able to utilize the advantages and abilities of both the 2D and 3D methods of the prior art.. PA2777US - 9 - SUMMARY [0024] The present invention provides a system and method for performing object and class recognition that allows for wide changes of viewpoint and distance of objects. This is accomplished by combining various aspects of the 2D and 3D methods of the prior art in a novel fashion. [0025] The present invention provides a system and method for choosing pose-invariant interest points of a three-dimensional (3D) image, and for computing pose-invariant feature descriptors of the image. The system and method also allows for the construction of three-dimensional (3D) object and class models from the pose-invariant interest points and feature descriptors of previously obtained scenes. Interest points and feature descriptors of a newly acquired scene may be compared to the object and/or class models to identify the presence of an object or member of the class in the new scene. [0026] For example, in one embodiment the present invention discloses a method for recognizing objects in an observed scene, comprising the steps of: acquiring a three- dimensional (3D) image of the scene; choosing pose-invariant interest points in the image; computing pose-invariant feature descriptors of the image at the interest points, each feature descriptor comprising a function of the local intensity component of the 3D image as it would appear if it were viewed in a standard pose with respect to a camera; constructing a database comprising 3D object models, each object model comprising a set of pose-invariant feature descriptors of one or more images of an object; and comparing the pose-invariant feature descriptors of the scene image to pose-invariant feature descriptors of the object models. Embodiments of the system and the other methods, and possible alternatives and variations, are also disclosed.
PA2777US - 10 - [0027] The present invention also provides a computer-readable medium comprising program instructions for performing the steps of the various methods. PA2777US - 11 - BRIEF DESCRIPTION OF DRAWINGS In the attached drawings: [0028] FIG. l is a symbolic diagram showing the principal elements of a system for acquiring a 3D description of a scene according to an embodiment of the invention; [0029] FIG. 2 is a symbolic diagram showing the principal steps of constructing a pose- invariant feature descriptor according to an embodiment of this invention; [0030] FIG. 3 is a symbolic diagram showing the principal elements of a system for database construction according to an embodiment of the invention; [0031] FIG. 4 is a symbolic diagram showing the principal components of a system for recognition according to an embodiment of the invention; [0032] FIG. 5 is a symbolic diagram showing the primary steps of recognition according to an embodiment of the method of the invention; and [0033] FIG. 6 illustrates the effects of frontal transformation according to an embodiment of the invention.
PA2777US - 12 - DETAILED DESCRIPTION [0034] The present invention performs object and class recognition that is robust with respect to changes in viewpoint and distance by using images containing both three- dimensional (3D) and intensity appearance information. This is accomplished by obtaining both about range and intensity images of a scene, and combining the information contained in those images in a novel fashion to describe the scene so that it may be used for recognition of objects in the scene and identification of those objects as belonging to a class. Unless otherwise stated, "recognition" shall include both object recognition and class recognition. [0035] FIG. 1 is a symbolic diagram showing the principal physical components of a system for acquiring a 3D description of a scene configured in accordance with an embodiment of the invention. A set of two or more cameras 101 and a projector of patterned light 102 are used to acquire images of an object 103. A computer 104 is used to compute the 3D position of points in the image using stereo correspondence. A preferred embodiment of the stereo system is disclosed in U.S. Patent Application Serial No. 10/703,831, filed 11/7/03, which is incorporated herein by reference. [0036] The 3D description is referred to as a "range image". This range image is placed into correspondence with the intensity image to produce a "registered range and intensity image", sometimes referred to as the "registered image" and sometimes as a "3D image". In this registered image, each image location has one or more intensity values, and a corresponding 3D coordinate giving its location in space relative to the observing stereo ranging system. The set of intensity values are referred to as the "intensity component"
PA2777US - 13 - of the 3D image. The set of 3D coordinates are referred to as the "range component" of the 3D image. [0037] To explain the operation of the invention it is useful to consider how changes in object pose affect the appearance of local features. There are six possible changes to the pose of an object: Two of these are changes parallel to the camera-imaging plane; One is a rotation about the optical axis of the camera; One is a change in the distance between the camera and the object; Two are changes in the slant and tilt of the surface relative to observing camera. [0038] Changes in the position of the object parallel to the camera imaging plane only cause changes in the position of a feature in an image and therefore do not affect its appearance if a correction is made for the location of the feature in the image plane. Rotation of the object about the optical axis of the camera leads to rotation of the feature in the image. There are many methods for locating and representing features that are not affected by rotation, so this motion is also easily accounted for. [0039] The present invention alleviates the difficulties presented by the remaining three changes — in distance, slant, and tilt. It does so by combining the image intensity of the observed object with simultaneously computed range information to compute pose- invariant feature representations. In particular, by knowing the distance to the feature point, it is possible to remove the effect of scale change. From the range information, the local surface normal can be computed and, using this, it is possible to remove the effects of slant and tilt. As a result, it is possible to compute local features that are insensitive to all possible changes in the pose of the object relative to the observing camera. Since the
PA2777US - 14 - features are local, they can be made insensitive to photometric effects. Since there are many local features, their aggregate is insensitive to occlusion so long as several are visible. [00401 FIG. 2 is a symbolic diagram showing the principal steps of a method of constructing a pose-invariant feature descriptor according to an embodiment of this invention. A registered range and intensity image is given as input at step 201. The image is locally transformed at step 202 to a standard pose with respect to the camera, producing a set of transformed images. This transformation is possible because the image contains both range and intensity information. Interest points on the transformed image are chosen at step 203. At each interest point, a feature descriptor is computed in step 204. The feature descriptor includes a function of the local image intensity about the interest point. Additionally, the feature descriptor may also include a function of the local surface geometry about the interest point. The result is a set of pose-invariant feature descriptors 205. This method is explained in detail below, as are various embodiments and elaborations of these steps. Alternatively, it is possible to combine steps; for example, one may incorporate the local transformation into interest point detection, or into the computation of feature descriptors, or into both. This is entirely equivalent to a transformation step followed by interest point detection or feature descriptor computation. [0041] In general terms, recognition using these pose-invariant features has two parts: database construction and recognition per se. FIG. 3 is a symbolic diagram showing the principal components of database construction according to an embodiment of the invention. An imaging system 301 acquires registered images of objects 302 on a
PA2777US - 15 - horizontal planar surface 306 at a known height. A computer 303 builds object models or class models and stores them in a database 304. [0042] FIG. 4 is a symbolic diagram showing the principal components of a recognition system according to an embodiment of this invention. An imaging system 401 acquires registered images of a scene 402 and a computer 403 uses the database 404 to recognize objects or instances of object classes in the scene. The database 404 of FIG. 4 is the database 304 shown as being constructed in FIG. 3. [0043] FIG. 5 is a symbolic diagram showing the primary steps of recognition according to an embodiment of the invention. At step 501, a database is constructed containing 3D models, each model comprising a set of descriptors. In the case of object recognition, the models are object models and the descriptors are pose-invariant feature descriptors; in the case of class recognition, the models are class models and the descriptors are class descriptors. A registered range and intensity image is acquired at step 502. The image is locally transformed in step 503 to a standard pose with respect to the camera, producing a set of transformed images. Interest points on the transformed images are chosen at step 504. Pose-invariant feature descriptors are computed at the interest points in step 505. Pose-invariant feature descriptors of the observed scene are compared to descriptors of the object models at step 506. In step 507, a set of objects identified in the scene is identified. [0044] A system or method utilizing the present invention is able to detect and represent features in a pose-invariant manner; this ability is conferred to both flat and curved objects. An additional property is the use of both range and intensity information to detect and represent said features.
PA2777US - 16 - Background [0045] In order to understand subsequent descriptions, it is useful to review a few basic definitions and facts about digital range and intensity images. First, at every location in an image, it is possible to compute approximations to spatial derivatives of the image intensity. This is commonly performed by computing the convolution of the image with a convolution kernel that is a discrete sampling of the derivative of a Gaussian function centered at the point in question. The derivatives can be computed along both the columns and rows of the images (the "x" and "y" directions), in which case the combined result is known as the image gradient at that point. [0046] These approximations can be computed with Gaussian functions ("Gaussians") that have a different spread, controlled by using the variance parameter of the Gaussian function. The spread of a Gaussian function is referred to as the "scale" of the operator, and roughly corresponds to choosing a level of detail at which the afore-mentioned image information is computed. [0047] Given a neighborhood of pixels, it is possible to first compute the image gradient for each pixel location, and then to compute a 2 by 2 matrix consisting of the sum of the outer product of each gradient vector with itself, divided by the number of pixels in the region. This is a symmetric positive semidefinite matrix, which is referred to as the "gradient covariance matrix." Since it is 2 by 2 and symmetric, it has two real non- negative eigenvalues with associated eigenvectors. The eigenvector associated with the largest eigenvalue is referred to as the "dominant gradient direction" for that neighborhood. The ratio of the smallest eigenvalue to the largest eigenvalue is referred to
PA2777US - 17 - as the "eigenvalue ratio." The eigenvalue ratio ranges between 0 and 1, and is an indicator of how much one gradient direction dominates the others in the region. [0048] The present invention also uses a range image that is registered to the intensity image. As noted above, the fact that the range image is registered to the intensity image means that each location in the intensity image has a corresponding 3D location. It is important to realize that these 3D locations are relative to the camera viewing location, so a change in viewing location will cause both the intensity image and the range image of an object to change. However, given two range images, the points that are visible in both views can be related by a single change of coordinates consisting of a translation vector and a rotation matrix. In the case that the translation and rotation between views is known, the points in the two images can be merged and/or compared with each other. The process of computing the translation and rotation between views, thus placing points in those two views in a common coordinate system, is referred to as "aligning" the views. [0049] All of the preceding concepts can be found in standard undergraduate textbooks on digital signal processing or computer vision. Locally Warping Images [0050] The present invention makes use of range information to aid in the location and description of regions of an image that are indicative of an object or class of objects. Such regions are referred to as "features." The algorithm that locates features in an image is referred to as an "interest operator." An interest operator is said to be "pose- invariant" if the detection of features is insensitive to a large range' of changes in object pose.
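As an illustration of the gradient covariance matrix, dominant gradient direction, and eigenvalue ratio defined above, the following sketch computes all three for a neighborhood of pixels. Simple finite differences stand in for the Gaussian derivative kernels described in the text, and the function name is hypothetical.

```python
import numpy as np

def gradient_covariance(patch):
    """Compute the 2x2 gradient covariance matrix of a pixel neighborhood,
    its dominant gradient direction, and its eigenvalue ratio."""
    gy, gx = np.gradient(patch.astype(float))
    grads = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # Average outer product of the gradient vectors over the neighborhood.
    C = grads.T @ grads / len(grads)
    eigvals, eigvecs = np.linalg.eigh(C)      # ascending; C is symmetric PSD
    dominant_direction = eigvecs[:, -1]        # eigenvector of largest eigenvalue
    eigenvalue_ratio = eigvals[0] / max(eigvals[1], 1e-12)
    return C, dominant_direction, eigenvalue_ratio
```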
PA2777US - 18 - [0051] Once detected, a feature is represented in a manner that facilitates matching against features detected in other range and intensity images. The representation of a feature is referred to as a "feature descriptor." A feature descriptor is said to be "pose- invariant" if the descriptor is insensitive to a large range of changes in object pose. [0052] The present invention achieves this result in part by using information in the range image to produce new images of surfaces as viewed from a standard pose with respect to the camera. In the first and second embodiments, the standard pose is chosen so that the camera axis is aligned with the surface normal at each feature and the surface appears as it would when imaged at a fixed nominal distance. Such an alignment is said to be "frontal normal". [0053] To describe this process, it is useful to consider a point at location T on an observed surface. If the surface is smooth at this point, there is an associated normal vector n, and two values tx and ty with associated directions ex and ey so that the form of the surface can be locally described as z = (tx x2 + ty y2)/2 where the z coordinate is in the direction of n, and x and y lie along ex and ey, respectively. A portion of a surface modeled in this form is referred to as a "surface patch." The values of tx and ty do not depend on the position or orientation of the observed surface. [0054] For a given image location, the values of tx and ty with associated directions ex and ey can be computed or approximated in a number of ways from range images. In one embodiment, smooth connected surfaces are extracted from the range data by first choosing a set of locations, known as seed locations, and subsequently fitting analytic
PA2777US - 19 - surfaces to the range image in neighborhoods about these seed locations. For seed locations where the surface fits well, the size of the neighborhood is increased, whereas neighborhoods where the surface fits poorly are reduced or removed entirely. This process is iterated until all areas of the range image are described by some analytic surface patch. Methods for computing quadric surfaces from range data are well- established in the computer vision literature and can be found in a variety of references, e.g., Petitjean, "A survey of methods for recovering quadrics in triangle meshes", ACM Computing Surveys, Vol. 34, No. 2, June 2002, pp. 211-262. Methods for iterative segmentation of range images are well established and can be found in a variety of references, e.g., A. Leonardis et al., "Segmentation of range images as the search for geometric parametric models", International Journal of Computer Vision, 1995, 14, pp 253-277. [0055] The values ex, ey, and n together form a rotation matrix, R, that transforms points from patch coordinates x, y, and z to the coordinate system of the range image. The center of the patch, T, specifies the spatial position. The pair X=(T, R) thus defines the pose of the surface patch relative to the observing system. [0056] It is now possible to produce a new intensity image of the area of the surface as if it were viewed along the surface normal at a nominal distance d. To do so, consider a set of sampling locations q. = (x,, y., (tx Xj2 + ty yi 2)/2)τ, for i = 1 , 2 ... N preferably arranged in a grid. Compute p; = R q, + T. The values p, are now locations on the object surface in the coordinate system of original range and intensity images.
PA2777US - 20 - [0057] The image locations corresponding to the points p; can now be computed using standard models of perspective projection, yielding image locations U1, i=l, 2...N. The value of the intensity or range image at this image location can now be sampled, preferably using bilinear interpolation of neighboring values. These samples now constitute the intensity image and geometry of the surface for sample locations (x,, y,), i=l, 2 ...N, corresponding to an orthographic camera looking directly along the surface normal direction. [0058] By construction, the area of the surface represented in a patch is invariant to changes in object pose, and thus the appearance of features on the object surface are likewise invariant up to the sample spacing of the camera system. The sample spacing of the locations (X1, yt) may be chosen to approximate the view of a camera with pixel spacing s and focal length f at distance d by choosing spacing s* = s (d/f). In the first and second embodiments, s=.0045mm/pixel, d = 1000 mm, and f = 12.5mm. Thus s*= .36mm/pixel. [0059] FIG. 6 shows the result of frontal warping. 601 is a surface shown tilted away from the camera axis by a significant angle, while 602 is the corresponding surface transformed to be frontal normal. Detecting Pose-Invariant Interest Points [0060] A combined range and intensity image containing several objects may be segmented into a collection of smaller areas that may be modeled as quadric patches, each of which is transformed to appear in a canonical frontal pose. Additionally, the size of each patch may be restricted to ensure a limited range of surface normal directions within the patch.
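The frontal resampling described in paragraphs [0056]-[0058] can be sketched as follows. This is a minimal illustration, not the embodiment's exact procedure: the argument names are assumptions, a pinhole intrinsic matrix stands in for the stereo system's projection model, and nearest-neighbor sampling replaces the bilinear interpolation described in the text.

```python
import numpy as np

def frontal_warp(intensity, K, R, T, tx, ty, half_size, spacing):
    """Resample a quadric surface patch so it appears viewed along its
    surface normal at a nominal distance.

    intensity:  original intensity image.
    K:          3x3 camera intrinsic matrix used for perspective projection.
    R, T:       rotation and translation of the patch (its pose X = (T, R)).
    tx, ty:     principal curvatures of the locally fitted quadric patch.
    half_size, spacing: extent and sample spacing (e.g. s*) of the output grid.
    """
    coords = np.arange(-half_size, half_size + 1e-9, spacing)
    xs, ys = np.meshgrid(coords, coords)
    zs = (tx * xs**2 + ty * ys**2) / 2.0             # local quadric surface
    q = np.stack([xs.ravel(), ys.ravel(), zs.ravel()])
    p = R @ q + np.asarray(T, dtype=float).reshape(3, 1)   # camera-frame points
    uvw = K @ p                                       # perspective projection
    u = np.clip((uvw[0] / uvw[2]).round().astype(int), 0, intensity.shape[1] - 1)
    v = np.clip((uvw[1] / uvw[2]).round().astype(int), 0, intensity.shape[0] - 1)
    # Sample the original image at the projected locations to form the
    # frontally viewed patch.
    return intensity[v, u].reshape(xs.shape)
```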
PA2777US - 21 - [0061] More specifically, patches are chosen such that no surface normal at any sample point in the patch makes an angle larger than θmax with n. This implies that the range of x and y values within the local coordinate system of the patch fall within an elliptical region defined by a value λ such that: tx 2 x2 + ty 2 y2 ≤ sec(θmax)2 - l = λ2 Thus, a patch will have the desired range of surface normals if |x| < xmax = λ/tx and |y| < ymax = λ /ty. An image patch with this property will be referred to as a "restricted viewing angle patch." The values Xn^x and ymax are used to determine the number of sampling locations needed to completely sample a restricted viewing angle patch. In the x direction, the number will be 2*xmax/s* and in y it will be 2*ymax/s*. [0062] In the first and second embodiments described below, the value of θmaχ is chosen to be 20 degrees, although other embodiments may use other values of θmax. [0063] Surface patches that do not satisfy the restricted viewing angle property are subdivided into smaller patches until they are restricted viewing angle patches, or a minimal patch size is reached. When dividing a patch, the new patches are chosen to overlap at their boundaries to ensure that no image locations (and hence interest points) fall directly on, or directly adjacent to, a patch boundary in all patches. Patches are divided by choosing the coordinate direction (x or y) over which the range of normal directions is the largest, and creating two patches equally divided in this coordinate direction. [0064] The restricted viewing angle patches are warped as described above, where the warping is performed on the intensity image. Interest points are located on the warped patches by executing the following steps:
PA2777US - 22 - 1. Compute the eigenvalues of the gradient image covariance matrix at every pixel location and for several scales of the aforementioned gradient operator. Let minE and maxE denote the minimum and maximum eigenvalues so computed, and let r denote their eigenvalue ratio. 2. Compute a list Ll of potential interest points by finding all locations where minE is maximal in the image at some scale. 3. Remove from Ll all locations where the ratio r is less than a specified threshold. In the first and second embodiments, the threshold is 0.2, although other embodiments may use other values. 4. For each element of Ll, compute the tuple <P, L, S, E, X > where P is the patch, L is the 2D location of the interest point on the patch, S is the scale, E is the eigenvalue ratio, and X is the 3D pose of the interest point. The list of such tuples over all patches is a set of interest points in the intensity image. These are locations in the image where the intensity appearance has distinctive structure. [00651 The same process is applied to the range image: The range image is warped to be frontal normal. In place of the intensity at a given (x, y) location, the range value z in local patch coordinates is used to compute the gradient covariance matrix at each pixel. The other steps are similar. The result is a list of interest points based on the range image. These are locations in the image where the surface geometry has distinctive structure. [0066] This process is repeated for other types of interest point detectors operating on intensity and range images. Several interest point detectors are described in K. Mikolajczyk et al., "A Comparison of Affine Region Detectors", to appear in
PA2777US - 23 - International Journal of Computer Vision. For each interest point, there is a label K indicating the type of interest point detector used to locate it. [0067] The techniques described above ensure that the interest point detection process locates very nearly the same set of interest points at the same locations when the object is viewed over a large range of surface orientations and positions. Representing Pose-Invariant Features [0068] Next, the local appearance at each interest point is computed. Let <P, L, S, E, X> be an interest point. As the surface normal of the interest point may deviate from that of the patch upon which it is detected, a rotation matrix RL is recomputed specifically for the interest point location L. [0069] When computing this rotation matrix, the ratio of surface curvatures, min(tx, ty)/max(tx, ty) is compared to E. If E is larger than the surface curvature ratio, the rotation matrix RL is computed from ex, ey, and n as described previously. Otherwise the rotation matrix R is computed from the eigenvectors of E and the surface normal n as follows. A zero is appended to the end of both of the eigenvectors of E. These vectors are then multiplied by the rotation matrix R originally computed when the patch was frontally warped. This produces two orthogonal vectors ix and iy that represent the dominant intensity gradient direction in the coordinate system of the original range image. The final rotation matrix RL is then created from ix, iy and n. In either case, X is now defined as X = (T, RL). [0070] A fixed size area about L in the restricted viewing angle patch P is now warped using X, producing a new local area P', so that P' now appears to be viewed frontally centered. The corresponding range information associated with the area about L in patch
P is similarly warped, producing a canonical local range image D'. In the first and second embodiments, a patch size of 1 cm by 1 cm is used (creating an image patch of size 28 pixels by 28 pixels) although other embodiments may use other patch sizes. [0071] P' is normalized by subtracting its mean intensity, and dividing by the square root of the sum of the squares of the resulting intensity values. Thus, changes in brightness and contrast do not affect the appearance of P'. A feature descriptor is constructed that includes a geometric descriptor X = (T, RL), an appearance descriptor A = (P', D'), and a qualitative descriptor Q = (K, S, tx, ty, E). The geometric descriptor specifies the location of a feature; the appearance descriptor specifies the local appearance; and the qualitative descriptor is a summary of the salient aspects of the local appearance. [0072] Frontal warping ensures that the locations of the features and their appearance have been corrected for distance, slant, and tilt. Hence, the features are pose invariant and are referred to as "pose-invariant features". Additionally, their construction makes them invariant to changes in brightness and contrast. Recognition Using Pose-Invariant Features - Background [0073] An object model O is a collection of pose-invariant feature descriptors expressed in a common geometric coordinate system. Let F be the collection of pose-invariant feature descriptors observed in the scene. Define the "object likelihood ratio" as L(F, O) = P(F | O) / P(F | ~O) where P(F | O) is the probability of the feature descriptors F given that the object is present in the scene and P(F | ~O) is the probability of the feature descriptors F given that the object is not present in the scene. The object O is considered to be present in the scene if L(F, O) is greater than a threshold τ. The threshold τ is empirically determined
for each object as follows. Several independent images of the object in normally occurring scenes are acquired. For several values of τ, the number of times the object is incorrectly recognized as present when it is not (false positives) and the number of times the object is incorrectly stated as not present when it is (false negatives) is tabulated. The value of τ is taken as the value at which the number of false positives equals the number of false negatives. [0074] In order to evaluate the numerator of this expression, it is useful to introduce a mapping hypothesis h to describe a match between observed features and model features, and a relative pose χ between the model object coordinate system and the observed feature coordinate system. The equation then becomes: L(F, O) = (∑h ∫χ P(F | O, h, χ) P(h | O, χ) P(χ | O)) / P(F | ~O)
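For illustration only, the equal-error-rate selection of τ described above can be sketched as follows (a minimal sketch in Python; the evaluated likelihood ratios, the ground-truth labels, and the grid of candidate thresholds are assumed inputs, not part of the specification):

import numpy as np

def choose_threshold(likelihood_ratios, labels, candidate_taus):
    """Pick tau where false positives and false negatives are closest to equal.

    likelihood_ratios: L(F, O) evaluated on several independent images
    labels: True where the object is actually present in that image
    candidate_taus: grid of threshold values to try
    """
    ratios = np.asarray(likelihood_ratios, dtype=float)
    present = np.asarray(labels, dtype=bool)
    best_tau, best_gap = candidate_taus[0], float("inf")
    for tau in candidate_taus:
        declared = ratios > tau
        false_pos = int(np.sum(declared & ~present))   # declared present but absent
        false_neg = int(np.sum(~declared & present))   # declared absent but present
        gap = abs(false_pos - false_neg)
        if gap < best_gap:
            best_tau, best_gap = tau, gap
    return best_tau

For example, choose_threshold(scores, truth, np.linspace(0.5, 20.0, 100)) returns the candidate value whose false-positive and false-negative counts are closest to equal.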
[0075] As the goal is to exceed the threshold τ, the system will attempt to maximize L over all candidate model objects O. However, the number of hypotheses h over which to evaluate this expression is enormous. In order to improve the computational aspects of the method, the first and second embodiments rely on the fact that, in most cases, the correct match h should be unique, and this match should completely determine the pose χ. Under these assumptions, an approximation to the above equation is given by: L(F, O) ≈ maxχ maxh P(F | O, h, χ) P(h | O, χ) P(χ | O) / P(F | ~O) If the result of this expression exceeds τ, then the object O is deemed present. The value of the pose χ that maximizes this expression specifies the position and orientation of the object in the scene. [0076] Elements of the object likelihood ratio can be further refined. Recall that each feature is composed of an appearance descriptor, a qualitative descriptor, and a geometric descriptor. Let FA denote the appearance descriptors of a set of observed features. Let OA denote the appearance descriptors of a model object O. Likewise, let Fx and Ox denote the corresponding observed and model geometric descriptors, and let FQ and OQ denote the corresponding observed and model qualitative descriptors. Given a mapping h between a set of observed features and a set of model features, FA(k) is the appearance descriptor of the kth feature in the set and OA(h(k)) is the appearance descriptor in the corresponding feature of the model. Similarly, Fx(k) is the geometric descriptor of the kth feature of the set and Ox(h(k), χ) is the geometric descriptor of the corresponding feature of the model when the model is in the pose χ. [0077] Feature geometry descriptors are conditionally independent given h and χ. Also, each feature's appearance descriptor is approximately independent of other features. Hence, P(F | O, h, χ) / P(F | ~O) = ∏k LA(F, O, h, k) Lx(F, O, h, χ, k)
where LA(F, O, h, k) = P(FA(k) | OA(h(k))) / P(FA(k) | ~O) and Lx(F, O, h, χ, k) = P(Fx(k) | Ox(h(k), χ)) / P(Fx(k) | ~O)
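A minimal sketch of how this per-feature factorization can be evaluated in log form (Python; the per-feature density functions p_app, p_app_bg, p_geo, and p_geo_bg are hypothetical stand-ins for the probabilities defined above, not functions provided by the specification):

import math

def log_feature_likelihood_ratio(scene_features, model_features, h, chi,
                                 p_app, p_app_bg, p_geo, p_geo_bg):
    """Sum over matched features of log LA + log Lx under the match hypothesis h,
    where h maps a scene feature index k to a model feature index h[k]."""
    total = 0.0
    for k, f in enumerate(scene_features):
        m = model_features[h[k]]
        log_la = math.log(p_app(f, m)) - math.log(p_app_bg(f))
        log_lx = math.log(p_geo(f, m, chi)) - math.log(p_geo_bg(f))
        total += log_la + log_lx
    return total

Working in logarithms turns the product over features into a sum, which is the efficiency noted later in the text.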
[0078] LA is subsequently referred to as the "appearance likelihood ratio" and Lx as the "geometry likelihood ratio." The numerators of these expressions are referred to as the "appearance likelihood function" and the "geometry likelihood function," respectively. [0079] The denominator of LA can be approximated by observing that the set of detected features in the object database provides an empirical model for the set of all features that might be detected in images. A feature is highly distinctive if it differs from all other features on all other objects. For such features, LA is large. Conversely, a feature is not distinctive if it occurs generically on several objects. For such features, LA is close to 1. As a result, an effective approximation to P(FA(k) | ~O) is: P(FA(k) | ~O) ≈ maxO'≠O maxj∈O' P(FA(k) | O'A(j)) [0080] The denominator of Lx, P(Fx(k) | ~O), represents the probability of a feature being detected at a given image location when the object O is not present. This value is approximated as the ratio of the average number of features detected in an image to the number of places at which interest points can be detected. When interest point detection is localized to image pixels, the number of places is simply the number of pixels. [0081] L(F, O) contains two additional terms, P(h | O, χ) and P(χ | O). The latter is the probability of an object appearing in a specific pose. In the first and second embodiments, this is taken to be a uniform distribution. [0082] P(h | O, χ) is the probability of the hypothesis h given that the object O is in a given pose χ. It can be viewed as a "discount factor" for missing matches. That is, for a given pose χ of object O, there is a set of features that are potentially visible. If every expected (based on visibility) feature on the object were observed, P(h | O, χ) would be maximal; fewer matches should result in a lower value. After performing the visibility computation, the first embodiment expects some number N of features to be visible. P(h | O, χ) is then approximated using a binomial distribution with parameters N and detection probability p. The latter is determined empirically based on the properties of the interest operator used to detect features.
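As an illustration, the binomial approximation of P(h | O, χ) can be sketched as follows (Python; the visibility computation is assumed to have already produced the number of visible features, and the detection probability is an empirically supplied constant):

from math import comb

def match_discount_factor(num_matched, num_visible, p_detect):
    """Binomial P(h | O, chi): probability of observing num_matched of the
    num_visible potentially visible model features, each detected independently
    with probability p_detect."""
    n, k = num_visible, num_matched
    return comb(n, k) * (p_detect ** k) * ((1.0 - p_detect) ** (n - k))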
[0083] The first and second embodiments make use of the fact that the likelihoods introduced above may be evaluated more efficiently by taking their natural logarithms. [0084] The likelihood functions described above may take many forms. The first and second embodiments assume additive noise in the measurements and thus the probability value P(f | m) for an observed feature value f and matched model feature m is P(f-m). If both f and m are normally distributed with covariances Λf and Λm, the logarithm of this probability is -1/2 (f-m)^T (Λf + Λm)^-1 (f-m), plus terms that do not depend on f or m. In the first and second embodiments, Λf is empirically determined for several different feature distances and slant and tilt angles. Features observed at a larger distance and at higher angles have correspondingly larger values in Λf than those observed at a smaller distance and frontally. The value of Λm is determined as the object model is acquired. [0085] Subsequently disclosed aspects of the invention apply and/or make further refinements to the object likelihood ratio, the appearance likelihood ratio, the qualitative likelihood ratio, the geometry likelihood ratio, and the methods of probability calculation described above. [0086] Two possible embodiments of this invention are now described. A first embodiment deals with object recognition. A second embodiment deals with class recognition. There are many possible variations on each of these and some of these variations are described in the section on Alternative Embodiments.
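A minimal numeric sketch of this additive-noise log-probability term (Python with NumPy; the constant normalization terms that do not depend on f or m are omitted, as in the text):

import numpy as np

def log_match_probability(f, m, cov_f, cov_m):
    """log P(f | m) up to an additive constant, assuming f - m is Gaussian with
    covariance cov_f + cov_m."""
    d = np.asarray(f, dtype=float) - np.asarray(m, dtype=float)
    cov = np.asarray(cov_f, dtype=float) + np.asarray(cov_m, dtype=float)
    return -0.5 * float(d @ np.linalg.solve(cov, d))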
First Embodiment [0087] The first embodiment is concerned with recognizing objects. This first embodiment is described in two parts: (1) database construction and (2) recognition. Database Construction PA2777US - 29 - [0088] FIG. 3 is a symbolic diagram showing the principal components of database construction. For each object to be recognized, several views of the object are obtained under controlled conditions. The scene contains a single foreground object 302 on a horizontal planar surface 306 at a known height. The background is a simple collection of planar surfaces of known pose with uniform color and texture. An imaging system 301 acquires registered range and intensity images. [0089] For each view of the object, registered range and intensity images are acquired, frontally warped patches are computed, interest points are located, and a feature descriptor is computed for each interest point. In this way, each view of an object has associated with it a set of features of the form <X, Q, A> where X is the 3D pose of the feature, Q denotes the qualitative descriptor, and A is the appearance descriptor. The views are taken under controlled conditions, so that each view also has a pose expressed relative to a fixed base coordinate system associated with it. [0090] The process of placing points in two or more views into a common coordinate system is referred to as "aligning" the views. During database construction, views are aligned as follows. Since the pose of each view is known, an initial transformation aligning the observed pose-invariant features in the two images is also known. Once aligned, a match hypothesis h is easily generated by matching each pose-invariant feature to its nearest neighbor, provided that neighbor is sufficiently close. Thus, initial estimates for both h and the pose χ needed to compute the object likelihood ratio L(F,O) are easily computed. [0091] Due to physical process errors, there may be some error in the pose so that the alignment is not exact, merely very close. This may also lead to errors or ambiguities in
PA2777US - 30 - h. In order to deal with these errors, only pose-invariant features with a large appearance and geometry likelihood ratio are first considered in h. A final alignment step is performed by computing the closed form solution to the least-squares problem of absolute orientation using these pose-invariant features. This is used to refine the 3D location of each feature and the process is repeated until convergence. [0092] An object model is thus built up by starting the model as one view and processing others with reference to it. In general, a model has one or more segments. For each new view of the object, there are four possible results of alignment: [0093] (1) The object likelihood ratio is large and there are no unmatched pose-invariant features in the new view. In this case, the view adds no substantial new information. This occurs when the viewpoint is subsumed by viewpoints already accounted for by the model. In this case, the information in corresponding pose- invariant features descriptors is averaged to reduce noise. [0094] (2) The view aligns with a single segment and contains new information. This occurs when the viewpoint is partly novel and partly shared with views already accounted for in that segment. In this case, the new features are added to the segment description. Matching pose-invariant feature descriptors are averaged to reduce noise. [0095] (3) The view aligns with two or more segments. This occurs when the viewpoint is partly novel and partly shared with viewpoints already accounted for in the database entry for that object. In this case, the segments are geometrically aligned and merged into one unified representation. Matching pose-invariant feature descriptors are averaged to reduce noise.
PA2777US - 31 - [0096] (4) The view does not match. This occurs when the viewpoint is entirely novel and shares nothing with viewpoints of the database entry for that object. In this case, a new segment description is created and initialized with the observed features. [0097] In the typical case, sufficient views of an object are obtained that the several segments are aligned and merged, resulting in a single, integrated model of the object. [0098] When database construction is complete, information stored in the database consists of a set of object models, where each object model has associated with it a set of features, each of the form <X, Q, A> where X is the 3D pose of the feature expressed in an object-centered geometric reference system, Q is the list of qualitative descriptors, and A is the appearance descriptor. Each quantity also has an associated covariance matrix that is estimated from the deviation of the original measurements from the averaged descriptor value. Recognition [0099] FIG. 4 is a symbolic diagram showing the principal components of a recognition system. Unlike database creation, scenes are acquired under uncontrolled conditions. A scene may contain none, one, or more than one known object. If an object is present, it may be present once or more than once. An object may be partially occluded and may be in contact with other objects. The goal of recognition is to locate known objects in the scene. [00100] The first step of recognition is to find smooth connected surfaces as described previously. The next step is to process each surface to identify interest points and extract a set of scene features as described above. Each feature has the form F = <X, Q, A>
where X is the 3D pose of the feature, Q is the qualitative descriptor, and A is the appearance descriptor. [00101] Object recognition is accomplished by matching scene features with model features, and evaluating the resulting match using the object likelihood ratio. The first step in this process is to locate plausible matches in the model features for each scene feature. For each scene feature, the qualitative descriptor is used to look up only those model features with qualitative descriptors closely matching the candidate scene feature. The lookup is done as follows. An ordered list is constructed for each qualitative feature component. Suppose there are N qualitative feature components, so there are N ordered lists. The elements of each list are the corresponding elements for all feature descriptors in the model database. Given a feature descriptor from the scene, a binary search is used to locate those values within a range of each qualitative feature component; from these, the matching model features are identified. N sets of model feature identifiers are formed, one for each of the N qualitative feature components. The N sets are then merged to produce a set of candidate pairs, {<f, g>}, where f is a feature from the scene and g is a feature in the model database. [00102] For each pair <f, g>, the appearance likelihood is computed and stored in a table M, in the position (f, g). In this table, the scene features form the rows, and the candidate matching model features form the columns. Thus, M(f, g) denotes the appearance likelihood value for matching scene feature f to a model object feature g. [00103] An approximation to the appearance likelihood ratio is computed as: L(f, g) = M(f, g) / max k M(f, k)
where k comes from a different object than g. A table, L, is constructed holding the appearance likelihood ratio for each pair <f, g> identified above. [00104] An initial alignment of the model with a scene feature is obtained. To do this, the pair <f*, g*> with the maximal value in table L is located. Let Og* be the object model associated with the feature g*. Using the pose associated with f*, Xf*, and the pose associated with g*, Xg*, an aligning transformation χ is computed. The transformation χ places the model into a position and orientation that is consistent with the scene feature; hence, χ is taken as the initial pose of the model. [00105] From the pose χ, the set of potentially visible model features of object Og* is computed. These potentially visible model features are now considered to see if they can be matched against the scene. The method is as follows: If a visible model feature k appears in a row j of table M, the geometry likelihood ratio for matching j and k is computed using the previously described approximation method. The appearance likelihood ratio is taken from the table L. The product of the appearance and geometry likelihood ratios of matching j and k is then computed and compared to an empirically determined threshold. If this threshold is exceeded, the feature pair <j, k> is considered a match. [00106] If new matches are found, the aligning pose is recomputed including the new feature matches and the process above repeated until no new matches are found. The aligning pose is calculated as follows. Each feature match produces an estimate of the aligning rotation Ra,i and two three-dimensional feature locations Tg,i and Tf,i for the model and observed feature respectively. The method seeks to find the rotation R* and translation T* such that Tf,i = R* * Tg,i + T*. Let T'f,i and T'g,i be Tf,i and Tg,i after
subtraction of the mean feature locations of the observed and model features, respectively. Form the matrix M as M = ∑i (Ra,i^T + T'g,i * T'f,i^T). The matrix M is now decomposed using SVD as described in Horn's method to produce the rotation R*. Given R* the optimal translation is computed using least squares. These values together form the aligning pose χ. [00107] Finally, the object likelihood ratio is computed using the final value of the pose χ and matched features h. If the object likelihood ratio exceeds τ, the object O is declared present in the image. All scene features (rows of the tables M and L) that are matched are permanently removed from future consideration. If the object likelihood ratio does not exceed this threshold, the initial match between f* and g* is disallowed as an initial match. The process then repeats using the next-best feature match from the table L. [00108] This process continues until all matches between observed features and model features with an appearance likelihood ratio above a match threshold have been considered.
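A minimal sketch of this SVD-based alignment step (Python with NumPy). It follows the general Kabsch/Horn construction rather than transcribing the embodiment exactly; the optional per-match rotation estimates, and the way they are folded into M, are assumptions made for the sketch:

import numpy as np

def align_pose(model_points, scene_points, rotation_estimates=None):
    """Estimate R, T such that scene ~= R @ model + T in the least-squares sense."""
    X = np.asarray(model_points, dtype=float)   # n x 3 model feature locations
    Y = np.asarray(scene_points, dtype=float)   # n x 3 observed feature locations
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y                 # subtract mean feature locations
    M = Xc.T @ Yc                               # sum of outer products x_i y_i^T
    if rotation_estimates is not None:          # optionally bias toward per-match rotations
        for Ra in rotation_estimates:
            M += np.asarray(Ra, dtype=float).T
    U, _, Vt = np.linalg.svd(M)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = mu_y - R @ mu_x                         # least-squares translation given R
    return R, T

In the recognition loop described above, this alignment would be recomputed each time new feature matches are added, until no further matches are found.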
Second Embodiment [00109] The second embodiment modifies the operation of the first embodiment to perform class-based object recognition. There are other embodiments of this invention that perform class-based recognition and several of these are discussed in the alternative embodiments. [00110] By convention, a class is a set of objects that are grouped together under a single label. For example, several distinct chairs belong to the class of chairs, or many distinct coffee mugs comprise the class of coffee mugs. Class-based recognition offers many advantages over distinct object recognition. For example, a newly encountered coffee PA2777US - 35 - mug can be recognized as such even though it has not been seen previously. Likewise, properties of the coffee mug class (e.g. the presence and use of the handle) can be immediately transferred to every new instance of coffee mug. [00111] The second embodiment is described in two parts: database construction and object recognition. Database Construction [00112] The second embodiment builds on the database of object descriptors constructed as described in the first embodiment. The second embodiment processes a set of model object descriptors to produce a class descriptor comprising: 1) An appearance model consisting of a statistical description of the appearance elements of the pose-invariant feature descriptors of objects belonging to the class; 2) A qualitative model summarizing appearance aspects of the features; 3) A geometry model consisting of a statistical description of geometry elements of the pose-invariant features in a common object reference system, together with statistical information indicating the variability of feature location; and 4) A model of the co-occurrence of appearance features and geometry features. These are each dealt with separately and in turn. Constructing a Class Model for Appearance [00113] The second embodiment builds semi -parametric statistical models for the appearance of the pose-invariant features of objects belonging to the class. This process is performed independently on the intensity and range components of the appearance element of a pose-invariant feature.
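Purely as an illustration of how the four-part class descriptor listed above might be organized, the following sketch shows one possible in-memory layout (Python; the concrete field types are assumptions chosen for the sketch, not dictated by the specification):

from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class GaussianCluster:
    mean: np.ndarray      # cluster mean (mu)
    cov: np.ndarray       # cluster covariance (Lambda)
    weight: float         # mixture weight (w)

@dataclass
class ClassDescriptor:
    # 1) appearance model: one Gaussian mixture per feature type
    appearance: Dict[str, List[GaussianCluster]] = field(default_factory=dict)
    # 2) qualitative model: per-cluster interval bounds on the qualitative descriptor
    qualitative: Dict[int, Tuple[np.ndarray, np.ndarray]] = field(default_factory=dict)
    # 3) geometry model: Gaussian mixture over scale-normalized feature
    #    location/orientation, plus a Gaussian on the class-relative scale
    geometry: Dict[str, List[GaussianCluster]] = field(default_factory=dict)
    scale_mean: float = 1.0
    scale_std: float = 0.0
    # 4) appearance/geometry co-occurrence probabilities, size u x v
    cooccurrence: np.ndarray = field(default_factory=lambda: np.zeros((0, 0)))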
[00114] The statistical model used by the second embodiment is a Gaussian Mixture Model. Each of the Gaussian distributions is referred to as a "cluster". In such a model, the number of clusters K needs to be chosen. There are various possible methods for making this choice. The second embodiment uses a simple one as described below. Alternative embodiments may choose K according to other techniques. [00115] Assume that there are n specific objects that are to be grouped into a class. Within these n models, consider all features of a given type (the component K of the qualitative feature descriptor). Let Nk denote the number of features in the kth object. Let Nmax be the max of Nk for k = 1, ..., n. The second embodiment chooses K to be
Nmax. [00116] An appearance model with K components is computed to capture the commonly appearing intensity and range properties of the class. It is computed using established methods for statistical data modeling as described in Lu, Hager, and Younes, "A Three-tiered Approach to Articulated Object Action Modeling and Recognition", Neural Information Processing and Systems, Vancouver, B.C., Canada, Dec. 2004. The method operates as follows. [00117] A set of K cluster centers is chosen. This is done in a greedy, i.e. no look-ahead, fashion by randomly choosing an initial feature as a cluster center, and then iteratively choosing additional points that are as far from already chosen points as possible. Once the cluster centers are chosen, the k-means algorithm is applied to adjust the centers. This procedure is repeated several times and the result with the tightest set of clusters in the nearest neighbor sense is taken. That is, for each feature vector fi, the closest (in the
sense of Euclidean distance) cluster center cj is chosen. Let di = ||fi - cj||. The total penalty for a clustering is the sum of all values di. [00118] If the number of clusters exceeds the dimension of the feature space, a Gaussian mixture model (GMM) is computed using expectation maximization (EM) using the initial clusters as a starting point. Methods for computing GMMs using EM are described in several standard textbooks on machine learning. [00119] If the number K of clusters is far smaller than the dimensionality of the feature vectors, the modeling step is performed using a combination of linear discriminant analysis (LDA) and modeling as a Gaussian mixture. Given the initial clustering, the within-class and between-class variances are computed. This is processed using linear discriminant analysis to produce a projection matrix Φ. The feature descriptors are projected into a new feature space by multiplying by the matrix Φ. [00120] Given the resulting GMM, the likelihood of any data item i belonging to cluster j can be computed. These weights replace the membership function in the linear discriminant analysis algorithm; a new projection matrix Φ is computed, and the steps above repeated. This iteration is continued to convergence. The result is a final projection matrix Φ and a set of parameters (Gaussian mean, variance and weight) Θj = <μj, Λj, wj> for each cluster j = 1, 2, ..., K. [00121] This modeling process is repeated for every type of feature that has been detected in the class. The resulting set of model parameters, GMMA(κ), summarizes all appearance aspects of features of type κ for this class. Constructing a Class Model for the Qualitative Descriptor
[00122] For every appearance feature A, it is now possible to compute the cluster k such that P(A | Φ, Θk) is maximal. Let Fk denote the set of all features that are associated with cluster k in this manner. Each of these features has a corresponding qualitative feature descriptor Q. Let Ψk denote all qualitative descriptors for feature descriptors in Fk. [00123] For every component of the qualitative descriptor, it is now possible to compute the minimum value that descriptor component takes on in Ψk as well as the maximum value. Thus, the full range of descriptor values can be represented as a vector of intervals Ik bounded by two extremal qualitative descriptors Ψ-k and Ψ+k. [00124] Any feature that matches well with cluster k is likely to take values in this range. Thus, Ik is stored with each cluster as an index. Constructing a Class Model for Geometry [00125] Finally, a geometric model is computed. Recall that the database in the first embodiment produces a set of pose-invariant features for each model, together with a geometric registration of those features to a common reference frame. The second embodiment preferably makes use of the fact that the model for each member of a class is created starting from a consistent canonical pose. For example, every chair would be facing forward in a canonical model pose, or every coffee mug would have the handle to the side in a canonical model pose. [00126] The first step in developing a class-based geometric model is to normalize for differences in size and scale of the objects in the class. This is performed by the following steps:
1) For each object O of the class C, compute the centroid of the set of 3D feature locations of O. For model features F1, F2, ..., Fn of the form Fi = <Xi, Qi, Ai>, and Xi = <Ti, Ri>, the centroid is
μo = (1/n) ∑i Ti
2) For each object O of the class C, compute the object scale as σo = sqrt((1/n) ∑i ||Ti - μo||^2).
3) Average the scale values for all objects in the class yielding sc.
4) For each object O of the class, compute the class-relative scale value so = σo/sc.
5) Compute the mean and standard deviation of so.
[00127] The modeling process is carried out on the geometry component making use of the scale normalization computed above. Consider the set of object features in all the object models that are to be formed into a class C. For each such feature with three-dimensional location T = (x, y, z), normal vector n and centroid μo = (μxo, μyo, μzo), a new set of values T' = ((x-μxo)/so, (y-μyo)/so, (z-μzo)/so) is computed. Also, the value o = n^T * T'/||T'|| is computed to represent the local orientation of the feature. A semi-parametric model for these features is then computed as described above. The resulting geometric model has two components: a Gaussian Mixture Model GMMG(κ) that models the variation in the location and orientation of pose-invariant feature descriptors across the class given a nominal pose and scale normalization, and a distribution PS(so | C) on the global scale variation within a class C. In this embodiment, the latter is taken to be a Gaussian distribution with mean and variance as computed in step 5 above.
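A minimal sketch of these scale-normalization steps (Python with NumPy; each object model is assumed to be given simply as an n x 3 array of feature locations):

import numpy as np

def normalize_class_scale(object_feature_locations):
    """Given one (n x 3) array of feature locations per object in the class,
    return per-object centroids, class-relative scales s_o, and the mean and
    standard deviation of s_o across the class."""
    centroids, sigmas = [], []
    for T in object_feature_locations:
        T = np.asarray(T, dtype=float)
        mu = T.mean(axis=0)                                       # step 1: centroid
        sigma = np.sqrt(np.mean(np.sum((T - mu) ** 2, axis=1)))   # step 2: object scale
        centroids.append(mu)
        sigmas.append(sigma)
    s_c = float(np.mean(sigmas))                                  # step 3: class average
    s_o = [sigma / s_c for sigma in sigmas]                       # step 4: relative scales
    return centroids, s_o, float(np.mean(s_o)), float(np.std(s_o))  # step 5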
[00128] After performing this computation for all feature types, the result is an empirical distribution on the appearance, qualitative characteristics and geometric characteristics of all types of features detected on the objects that are to be members of the class. Computing the Co-Occurrence of Appearance and Geometry Features [00129] Finally, for all N features detected in the class, the joint statistics on appearance and geometry are computed as follows. Suppose there are u appearance clusters and v geometry clusters. The appearance/geometry co-occurrence table, of size u by v, is created as follows. First, the table is initialized with all its entries set to zero. [00130] For each feature, the likelihood of the appearance component is computed separately for all clusters in the Gaussian mixture model. Let i denote the index of an appearance cluster with likelihood ai. Similarly, let j denote the index of a geometry cluster (again making use of the scale normalization described above) with likelihood gj. The entry (i, j) of the table is incremented by ai * gj. This process is repeated for all N pose-invariant features, and the result is normalized by the total of all values in the table to yield a co-occurrence probability Pco. Recognition [00131] Given a set of object class models, recognition proceeds as described in the first embodiment with the following modifications. [00132] Let F be the collection of pose-invariant feature descriptors observed in the scene. Define the "class likelihood ratio" as Lc(F, C) = P(F | C) / P(F | ~C) where P(F | C) is the probability of the feature descriptors F given that an instance of the class is present in the scene and P(F | ~C) is the probability of the feature descriptors F
given that the object class is not present in the scene. The class C is considered to be present in the scene if Lc(F, C) is greater than a threshold τ. The threshold τ is empirically determined for each class as follows. Several independent images of the class in normally occurring scenes are acquired. For several values of τ, the number of times the class is incorrectly recognized as present when it is not (false positives) and the number of times the class is incorrectly stated as not present when it is (false negatives) is tabulated. The value of τ is taken as the value at which the number of false positives equals the number of false negatives. [00133] Consider a pose-invariant feature descriptor fk = <X, Q, A> with X = <T, R>. Let a denote an aligning transformation consisting of a pose χ augmented with the dimensionless scale factor so. The calculation of the likelihood function between fk and a model class C with appearance model CA and geometric model CG given an alignment a is P(fk | C, a) = ∑i,j P(A | CAi) P(X | CGj, a) P(i, j | C, a) [00134] P(A | CAi) represents the probability that the appearance component is sampled from cluster i of the GMM modeling appearance. The error in observing f is generally far smaller than the variation within the class, so the second embodiment takes the observed scene feature value as having zero variance, which is a reasonable approximation. As a result, the probability value comes directly from the associated Gaussian mixture component for the cluster CAi. [00135] P(X | CGj, a) represents the probability that the feature pose is taken from cluster j of the GMM modeling geometry. It is computed by aligning the observed feature to the model by first transforming the observed features using the pose component χ followed
by scaling using the value so. The resulting scaled translation values correspond to T' above. The observed value of the local orientation after alignment o is also computed. As before, the second embodiment takes the observed feature value as having zero variance. As a result, the probability value comes directly from the associated Gaussian mixture component for the cluster CGj. [00136] The final probability value P(i, j | C, a) can be computed from the appearance/geometry co-occurrence table computed during the database construction and the probability that the object would appear in the image given the class aligned with transform a, as detailed below. [00137] The cases of interest are those in which an observed scene feature has a well-defined correspondence with an appearance and geometry cluster. For classes, the correspondence hypothesis vector h relates an observed scene feature to a pair of an appearance cluster and a geometry cluster, so that h(k) is the pair [ha(k), hg(k)], where ha(k) is a class appearance cluster and hg(k) is a class geometry cluster. [00138] With this notation, the class likelihood function may be written as Lc(F, C) = (∑h ∫a P(F | C, h, a) P(h | C, a) P(a | C)) / P(F | ~C)
[00139] As before, an approximation is given by Lc(F, C) ≈ maxa maxh P(F | C, h, a) P(h | C, a) P(a | C) / P(F | ~C) The hypothesis vector h is now an explicit correspondence between an observed feature and a pair consisting of a geometry cluster and an appearance cluster. If the result of this expression exceeds τ, then the object C is deemed present. The value of the aligning transformation a that maximizes this expression specifies the position, orientation, and overall scale of the class instance in the scene. [00140] P(h | C, a) is the probability of the hypothesis h given that the class C is in a given alignment a. It consists of two components: P(h | C, a) = Pco(ha | C, hg) * Papp(hg | C, a) [00141] The first term is computed from the geometry co-occurrence table as Pco(ha | C, hg) = ∏k Pco(ha(k) | C, hg) with Pco(ha(k) | C, hg) = Pco(ha(k), hg(k) | C) / ∑i Pco(i, hg(k) | C). [00142] Papp is an appearance model computed using a binomial distribution based on the number of correspondences in h and the number of geometric clusters that should be detectable in the scene under the alignment a. A geometric cluster is considered to be detectable as follows. Let T represent the mean location of geometric cluster c when the object class is geometrically aligned with the observing camera system (using a). Let μc denote the location of the origin of the class coordinate system when the object class is geometrically aligned with the observing camera system. Let θ denote the angle the vector T-μc makes with the optical axis of the camera system. Let o denote the center of the values representing the orientation of the geometric cluster and define α = acos(o). The total angle the geometric cluster makes with the camera optical axis then falls in the range θ - α to θ + α. Let θmax represent the maximum detection angle for a feature. Then geometric cluster c is considered to be detectable if [θ - α, θ + α] ⊂ [-θmax, θmax]. [00143] Applying these refinements yields Lc(F, C) ≈ maxa maxh P(F | C, h, a) Pco(ha | C, hg) Papp(hg | C, a) P(a | C) / P(F | ~C) [00144] Finally, let χ be the pose component of a and let so be the scale value. Then P(a | C) = P(χ | C) PS(so | C) As before, P(χ | C) is taken to be constant, so this expression simplifies to P(a | C) = PS(so | C)
Thus, object matching takes into account global scale, local shape, and local appearance characteristics of the object class. [00145] The class-based feature likelihood ratio is now P(F | C, h, a) / P(F | ~C) = ∏k LA(F, C, h, k) Lx(F, C, h, a, k)
where LA(F, C, h, k) = P(FA(k) | CA(ha(k))) / P(FA(k) | ~C) and Lx(F, C, h, a, k) = P(Fx(k) | Cx(hg(k), a)) Pco(ha(k) | C, hg) / P(Fx(k) | ~C) The former is the class-based appearance likelihood ratio. The latter is the class-based geometry likelihood ratio. The numerators are the class-based appearance likelihood function and geometry likelihood function, respectively. The denominator of the appearance likelihood ratio is approximated as described below. The denominator of the geometry likelihood ratio is taken as a constant value as in the first embodiment. [00146] Class recognition is performed as follows. The first phase is to find smooth connected surfaces, identify interest points and extract a set of scene features, as previously described. The second phase is to match scene features with class models and evaluate the resulting match using the class likelihood ratio. The second phase is accomplished in the following steps. [00147] First, for each observed scene feature, the qualitative feature descriptors are used to look up only those database appearance clusters with qualitative characteristics closely matching the candidate observed feature. Specifically, if a feature descriptor has qualitative descriptor Q, then all appearance clusters k with Q ∈ Ik are returned from the lookup. Let {<f, c>} be the set of feature pairs returned from the lookup on qualitative feature descriptors, where f is a feature observed in the scene and c is a potentially matching model appearance cluster. [00148] For each pair <f, c>, the appearance likelihood is computed and stored in a table M, in position (f, c). In this table, the observed features form the rows, and the candidate model appearance clusters form the columns. Thus M(f, c) denotes the appearance likelihood value for matching observed feature f to a model appearance cluster c. [00149] An approximation to the appearance likelihood ratio is computed as L(f, g) ≈ M(f, g) / max k M(f, k) where k comes from a different class than g. A table, L, is constructed holding the appearance likelihood ratio for each pair <f, g> identified above. [00150] Next, four or more feature/cluster matches are located that have maximal values of L and belong to the same class model C. For each such matching appearance cluster g, a model geometry cluster k is chosen for which Pco(g | C, k) is large. Using the matches, an alignment, a, is computed between the scene and the class model using the feature locations Tf and corresponding cluster centers μc. This alignment is computed by the following steps for n feature/cluster matches:
1) The mean value of the feature locations Tf is subtracted from each feature location.
2) The mean value of the cluster centers μc is subtracted from each cluster center.
3) Let yi represent the location of feature i after mean subtraction. Let xi denote the corresponding cluster center after mean subtraction. Compute the dimensionless scale s
s = sqrt(∑i ||yi||^2 / ∑i ||xi||^2)
4) The rotation is computed using Horn's method. Define M = ∑i xi * yi^T and compute the singular value decomposition U*D*V^T = M. Define R = V*U^T.
5) Solve for the aligning translation Ta as Ta = Tf - s*R*μc for a corresponding feature f and cluster c. This is done for all correspondences and the result averaged. Let T be the average.
6) Construct the aligning pose χ from R and T, which, together with the dimensionless scale s, defines the aligning transformation a.
[00151] For every additional scene feature in the table M and every cluster of the class C, the geometry likelihood ratio is computed using this aligning transformation. The feature likelihood ratio is computed as the product of the appearance likelihood ratio and the geometry likelihood ratio. Let k be the index of a scene feature; let i be the index of an appearance cluster, and j be the index of a geometry cluster such that the feature likelihood ratio exceeds a threshold. Then h(k) = [i, j] is added to the vector h, thereby associating scene feature k with the appearance/geometry pair [i, j]. [00152] If new matches are found, the aligning transformation is recomputed including the new geometry feature/cluster matches and the process above repeated until no new matches are found. [00153] The process above is repeated for several choices of geometry clusters associated with the original choice of four matching appearance clusters. The result with the largest feature likelihood ratio is retained. [00154] Finally, the class likelihood ratio is computed. If the class likelihood ratio exceeds τ, the object class C is declared present in the image. All observed scene features that were matched in this process are permanently removed from the tables M and L.
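A minimal sketch of this similarity alignment between scene features and class geometry clusters (Python with NumPy; the four or more correspondences are assumed to be supplied as two parallel n x 3 arrays, and the scale formula follows the mean-subtracted construction in steps 1-3):

import numpy as np

def align_class_model(cluster_centers, feature_locations):
    """Estimate (s, R, T) such that feature ~= s * R @ cluster_center + T."""
    X = np.asarray(cluster_centers, dtype=float)    # n x 3 class geometry cluster centers
    Y = np.asarray(feature_locations, dtype=float)  # n x 3 observed scene feature locations
    Xc = X - X.mean(axis=0)                         # steps 1-2: mean subtraction
    Yc = Y - Y.mean(axis=0)
    s = np.sqrt(np.sum(Yc ** 2) / np.sum(Xc ** 2))  # step 3: dimensionless scale
    M = Xc.T @ Yc                                   # step 4: sum of x_i y_i^T
    U, _, Vt = np.linalg.svd(M)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # keep a proper rotation
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = np.mean(Y - s * (R @ X.T).T, axis=0)        # step 5: average translation estimates
    return s, R, T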
[00155] If the class likelihood ratio does not exceed this threshold, a new initial match is chosen by varying at least one of the chosen features. The process then repeats using the new match. This process continues until all matches between observed features and model clusters with an appearance likelihood ratio above a match threshold have been considered.
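As a side illustration, the detectability test for geometric clusters used in forming Papp above reduces to a very small check (Python with NumPy; the camera optical axis is assumed to be the +z direction of the aligned camera coordinate system):

import numpy as np

def cluster_detectable(T, mu_c, o_center, theta_max):
    """Detectability test for a geometric cluster: the range of angles the cluster
    makes with the camera optical axis, [theta - alpha, theta + alpha], must lie
    inside [-theta_max, theta_max]."""
    v = np.asarray(T, dtype=float) - np.asarray(mu_c, dtype=float)
    cos_theta = v[2] / np.linalg.norm(v)               # angle with the +z optical axis
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    alpha = np.arccos(np.clip(float(o_center), -1.0, 1.0))
    return (theta - alpha) >= -theta_max and (theta + alpha) <= theta_max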
Alternative Embodiments and Implementations [00156] The invention has been described above with reference to certain embodiments and implementations. Various alternative embodiments and implementations are set forth below. It will be recognized that the following discussion is intended as illustrative rather than limiting.
Acquiring Range and Intensity Data [00157] In the first and second embodiments, range and co-located image intensity information is acquired by a stereo system, as described above. In alternative embodiments, range and co-located image intensity information may be acquired in a variety of ways. [00158] In some alternative embodiments, a stereo system may be used, but of a different implementation. Active lighting may or may not be used. If used, the active lighting may project a 2-dimensional pattern, or a light stripe, or other structured lighting. For the purposes of this invention, it suffices that the stereo system acquires a range image with acceptable density and accuracy.
[00159] In other alternative embodiments, the multiple images used for the stereo computation may be obtained by moving one or more cameras. This has the practical advantage that it increases the effective baseline to the distance of camera motion. [00160] In still other alternative embodiments, range and image intensity may be acquired by different sensors and registered to provide co-located range and intensity. For example, range might be acquired by a laser range finder and image intensity by a camera. [00161] The images may be in any part of the electro-magnetic spectrum or may be obtained by combinations of other imaging modalities such as infra-red imaging or ultraviolet imaging, ultra-sound, radar, or lidar. Locally Transforming Images [00162] In the first and second embodiments, images are locally transformed so they appear as if they were viewed along the surface normal at a fixed distance. In alternative embodiments, other standard orientations or distances could be used. Multiple standard orientations or distances could be used, or the standard orientation and distance may be adapted to the imaging situation or the sampling limitations of the sensing device. [00163] In the first and second embodiments, images are transformed using a second order approximation, as described above. In alternative embodiments, local transformation may be performed in other ways. For example, a first-order approximation could be used, so that the local region is represented as a flat surface. Alternatively, a higher order approximation could be used. [00164] In still other alternatives, the local transformation may be incorporated into interest point detection, or into the computation of feature descriptors. For example, in
PA2777US - 49 - the first and second embodiments, the image is locally transformed, and then interest points are found by computing the eigenvalues of the gradient image covariance matrix. An alternative embodiment may omit an explicit transformation step and instead compute the eigenvalues of the gradient image covariance matrix as if the image were transformed. One way to do so is to integrate transformation with the computation of the gradient by using the chain rule applied to the composition of the image function and the transformation function. Such techniques, in which the transformation step is incorporated into interest point detection or into feature descriptor computation, are equivalent to a transformation step followed by interest point detection or feature descriptor computation. Hence, when transformation is described, it will be understood that this may be accomplished by a separate step or may be incorporated into other procedures. Determining Interest Points [00165] In the first and second embodiments, interest points are found by computing the eigenvalues of the gradient image covariance matrix, as described above. In alternative embodiments, interest points may be found by various alternative techniques. Several interest point detectors are described in Mikolajczyk et al, "A Comparison of Affine Region Detectors", to appear in International Journal of Computer Vision. There are other interest point detectors as well. For such a technique to be suitable, it suffices that points found by a technique be invariant or nearly invariant to substantial changes in rotation about the optical axis and illumination. [00166] In the first and second embodiments, a single technique was described to find interest points. In alternative embodiments, multiple techniques may be applied
simultaneously. For example, an alternative embodiment may use both a Harris-style corner detector and a Harris-Laplace interest point detector.
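As an illustration of the eigenvalue test that such detectors share with the first and second embodiments, a minimal sketch of gradient-covariance interest point detection (Python with NumPy and SciPy; the smoothing scale and the selection criterion, here a simple percentile cut, are arbitrary example choices rather than the embodiment's exact rule):

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def interest_point_candidates(image, sigma=2.0, ratio_threshold=0.2):
    """Flag pixels where the smaller eigenvalue of the gradient covariance
    matrix is large and the eigenvalue ratio exceeds ratio_threshold."""
    img = np.asarray(image, dtype=float)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    # gradient covariance (structure tensor) entries, smoothed over a window
    a = gaussian_filter(gx * gx, sigma)
    b = gaussian_filter(gx * gy, sigma)
    c = gaussian_filter(gy * gy, sigma)
    # closed-form eigenvalues of the 2 x 2 symmetric matrix [[a, b], [b, c]]
    tr = a + c
    det = a * c - b * b
    disc = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
    max_eig = tr / 2.0 + disc
    min_eig = tr / 2.0 - disc
    ratio = np.divide(min_eig, max_eig, out=np.zeros_like(min_eig), where=max_eig > 0)
    mask = (min_eig > np.percentile(min_eig, 99)) & (ratio > ratio_threshold)
    return min_eig, max_eig, mask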
[00167] In the first and second embodiments, interest points were computed solely from intensity or from range. In alternative embodiments a combination of both may be used. For example, intensity features located along occluding contours may be detected. [00168] In other alternative embodiments, specialized feature detectors may be employed. For example, feature detectors may be specifically designed to detect written text. Likewise, feature detectors for specialized geometries may be employed, for example a detector for handles. [00169] Alternative embodiments may also employ specialized feature detectors that locate edges. These edges may be located in the intensity component of the 3D image, the range component of the 3D image, or where the intensity and range components are both consistent with an edge. Locating Interest Points and Transforming the Intensity Image [00170] In the first and second embodiments, the intensity image is transformed before computing interest point locations. This carries a certain computational cost. Alternative embodiments may initially locate interest points in the original image and subsequently transform the neighborhood of the image patch to refine the interest point location and compute the feature descriptor. This speeds up the computation, but may result in less repeatability in interest point detection. [00171] In other alternative embodiments, several interest detectors implicitly constructed to locate features at a specific slant or tilt angle may be constructed. For example,
PA2777US - 51 - derivatives may be computed at different scales in the x and y directions to account for the slant or tilt of the surface rather than explicitly transforming the surface. Surfaces may be classified into several classes of slant and tilt, and the detector appropriate for that class applied to the image in that region. [00172] In other alternative embodiments, the first phase of interest point detection in the untransformed image may be used as an initial filter. In this case, the neighborhood of the image patch is transformed and the transformed neighborhood is retested for an interest point, possibly with a more discriminative interest point detector. Only those interest points that pass the retest step are accepted. In this way, it may be possible to enhance the selectivity or stability of interest points. Refining the Location of an Interest Point [00173] In the first and second embodiments, the location of an interest point is computed to the nearest pixel. In alternative embodiments, the location of an interest point may be refined to sub-pixel accuracy. In the general case, interest points are associated with image locations. Typically, this will improve matching because it establishes a localization that is less sensitive to sampling effects and change of viewpoint. Choosing Interest Points to Reduce the Effects of Clutter [00174] In the first and second embodiments, interest points may be chosen anywhere on an object. In particular, interest points may be chosen on the edge of an object. When this occurs, the appearance about the interest point in an observed scene may not be stable, because different backgrounds may cause the local appearance to change. In alternative embodiments, such unstable interest points may be eliminated in many situations, as follows. From the range data, it is possible to compute range
PA2777US - 52 - discontinuities, which generally correspond to object discontinuities. Any interest point that lies on a large range discontinuity is eliminated. An alternative embodiment employing this refinement may have interest points that are more stable in cluttered backgrounds. Determining Local Orientation at an Interest Point [00175] In the first and second embodiments, the local orientation at an interest point is found as described above. In alternative embodiments, the local orientation may be computed by alternative techniques. For example, a histogram may be computed of the values of the gradient orientation and peaks of the histogram used for local orientations. Standard Viewing Direction [00176] In the first and second embodiments, the local image in the neighborhood of an interest point is transformed so it appears as if it were viewed along the surface normal. In alternative embodiments, the local neighborhood may be transformed so it appears as if it were viewed along some other standard viewing direction. Feature Descriptors [00177] In the first and second embodiments, each feature descriptor includes a geometric descriptor, an appearance descriptor, and a qualitative descriptor. Alternative embodiments may have feature descriptors with fewer or more elements. [00178] Some alternatives may have no qualitative descriptor; such alternatives omit the initial filtering step during recognition and all the features in the model database are considered as candidate matches. Other alternatives may omit some of the elements in the qualitative features described in the first and second embodiments. Still other alternatives may include additional elements in the qualitative descriptor. Various PA2777US - 53 - functions of the appearance descriptor may be advantageously used. For example, the first K components of a principal component analysis may be included. Similarly, a histogram of appearance values in may be included. [00179] Some alternatives may have no geometric descriptor. In such cases, recognition is based on appearance. [00180] Other alternatives may expand the model to include inter-feature relationships. For example, each feature may have associated with it the K distances to the nearest K features or the K angles between the feature normal and the vector to the nearest K features. These relationships are pose-invariant; other pose-invariant relationships between two or more features may be also included in the object model. Such inter- feature relationships may be used in recognition, particularly in the filtering step. Appearance Descriptors [00181] In the first and second embodiments, the appearance descriptor is the local intensity image and the local range image, each transformed so it appears to be viewed frontally centered. In alternative embodiments, appearance descriptors may be various functions of the local intensity image and local range image. Various functions may be chosen for various purposes such as speed of computation, compactness of storage and the like. [00182] One group of functions is distribution-based appearance descriptors, which use a histogram or equivalent technique to represent appearance as a distribution of values. Another group of functions is spatial-frequency descriptors, which use frequency components. Another group of functions is differential feature descriptors, which use a set of derivatives. Some specific appearance descriptors include steerable filters,
differential invariants, complex filters, moment invariants, and SIFT features. Several suitable descriptors are compared in Mikolajczyk and Schmid, "A Performance Evaluation of Local Descriptors", to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence. Depending on circumstances and application, any of these may be useful in alternative embodiments. [00183] Additionally, appearance descriptors may be explicitly constructed to have special properties desirable for a particular application. For example, appearance descriptors may be constructed to be invariant to rotation about the camera axis. One way of doing this is to use radial histograms. In this case, an appearance descriptor may consist of histograms for each circular ring about an interest point. Specifically, let R be such a ring. Compute two histograms of the values of points in the ring, one for the magnitude of the gradients and one for the angle between the local radial direction and the gradient direction. If each histogram has NB buckets and there are NR rings, then the appearance descriptor has length 2*NB*NR. [00184] There is a very wide diversity of functions that may be used to compute appearance descriptors. Appearance Descriptors Based on Color Information [00185] In the first and second embodiments, visual appearance is represented using intensity, i.e. gray scale values. Alternative embodiments may use sensors that acquire multiple color bands and use these color bands to represent the visual appearance when computing interest points and/or appearance descriptors. This would be effective in distinguishing objects whose appearance differs only in color. Appearance Descriptors Based on Geometry [00186] There are additional appearance descriptors based on local geometry information that have the desired invariance properties. One class of such geometry-based appearance descriptors is represented by SPIN images, as described in the paper by Johnson and Hebert, "Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 5, May 1999, pp. 433-449. [00187] There are also additional appearance descriptors based on non-local geometry information. An alternative embodiment using these may fit analytic surface patches to the range data, growing each patch to be as large as possible consistent with an acceptably good fit to the data. It would classify each surface patch as to quadric type, e.g. plane, elliptic cylinder, elliptic cone, elliptic paraboloid, ellipsoid, etc. Each interest point on a surface would have an appearance descriptor constructed from the surface on which it is found. The descriptor would consist of two levels, lexicographically ordered: the quadric type would serve as the first-level descriptor, while the parameters of the surface quadric would serve as the second-level descriptor. Reducing the Dimensionality of the Appearance Descriptors [00188] In the first embodiment, and in several of the alternative embodiments described above, the appearance descriptors have a high dimension. For databases consisting of a very large number of objects, this may be undesirable, since the storage requirements and the search are at least linear in the dimension of the appearance descriptors. Alternative embodiments may reduce the dimensionality of the data. One technique for so doing is principal component analysis, sometimes referred to as the "Karhunen-Loeve
PA2777US - 56 - transformation'. This and other methods for dimensionality reduction are described in standard texts on pattern classification and machine learning. [00189] In the second embodiment, linear discriminant analysis (LDA) is used to project the appearance descriptors down to a smaller dimension. Alternative embodiments may use other techniques to reduce the dimensionality of the data. Computing the Object and Class Likelihood Ratios [00190] In the first and second embodiments, the object and class likelihood ratios are approximated by replacing a sum and integral by maximums, as described above. In alternative embodiments, these likelihood ratios may be approximated by considering additional terms. For example, rather than the single maximum, the K elements resulting in the highest probabilities may be used. K may be chosen to mediate between accuracy and computational speed. [00191] The feature likelihood ratio was computed by replacing the denominator with a single value. In alternative embodiments, the K largest likelihood values from an object other than that under consideration may be used. In other alternative embodiments, an approximation to P(f | -O) may be precomputed from the object database and stored for each feature and object in question. [00192] The first and second embodiments approximate the pose distribution by taking it to be uniform. Alternative embodiments may use models with priors on the distribution of the pose of each object. Object Database Construction [00193] In the first and second embodiments, the database of object models is constructed from views of the object obtained under controlled conditions. In alternative
[00192] The first and second embodiments approximate the pose distribution by taking it to be uniform. Alternative embodiments may use models with priors on the distribution of the pose of each object.

Object Database Construction

[00193] In the first and second embodiments, the database of object models is constructed from views of the object obtained under controlled conditions. In alternative embodiments, the conditions may be less controlled. There may be other objects in the view, or the relative pose of the object in the various views may not be known. In these cases, additional processing may be required to construct the object models. In the case of views with high clutter, it may be necessary to build up the model database piecewise by doing object recognition to locate the object in the view.

Using Discriminative Features in the Database

[00194] In the first and second embodiments, all feature descriptors in the database are treated equally. In practice, some feature descriptors are more specific in their discrimination than others. The discriminatory power of a feature descriptor in the database may be computed in a variety of ways. For example, it may be computed by comparing each appearance descriptor in the database with every other appearance descriptor; a discriminatory appearance descriptor is one that is dissimilar to the appearance descriptors of all other objects. Alternatively, mutual information may be used to select a set of feature descriptors that are collectively selective. The measure of the discriminatory power of a feature descriptor may be used to impose a cut-off such that all features below a threshold are discarded from the model. Alternatively, in some embodiments, the discriminatory power of a feature may be used as a weighting factor.
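One way the first of these measures might be realized is sketched below; treating the distance to the nearest descriptor of a different object as the measure, and using the Euclidean metric with a median cut-off, are assumptions made only for the example.

import numpy as np

def discriminatory_power(descriptors, object_ids):
    # For each database descriptor, return its distance to the nearest descriptor
    # belonging to a different object; larger values indicate more discriminative features.
    descriptors = np.asarray(descriptors, dtype=float)
    object_ids = np.asarray(object_ids)
    diff = descriptors[:, None, :] - descriptors[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    dist[object_ids[:, None] == object_ids[None, :]] = np.inf   # ignore same-object pairs
    return dist.min(axis=1)

# Stand-in database: 50 descriptors of dimension 16 drawn from 5 objects.
db_descriptors = np.random.rand(50, 16)
db_object_ids = np.random.randint(0, 5, 50)
power = discriminatory_power(db_descriptors, db_object_ids)
keep = power > np.median(power)     # e.g. discard the less discriminative half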
Models for Classes

[00195] In the second embodiment, the database consists of a set of class models. Each class model includes a geometric model, an appearance model, a qualitative model, and a co-occurrence table. Alternative embodiments may have different class models. Some embodiments may have no qualitative model. Other embodiments may have fewer or additional components of the qualitative descriptors of the object models and hence have fewer or additional components in the qualitative class models. Still other embodiments may include inter-feature relationships in the object models and hence have corresponding elements in the class models.

[00196] In the second embodiment, a fixed number of clusters K was chosen. In alternative embodiments, the number of clusters K may be varied. In particular, it is desirable to choose clusters that contain features coming from a majority of the objects in a class. To create such a model, it may be desirable to create a model with K clusters, then to remove features that appear in clusters with little support. K can then be reduced and the process repeated until all clusters contain features from a majority of the objects in the class.

[00197] In the second embodiment, Euclidean distance was used in the nearest neighbor algorithm. In alternative embodiments, a robust metric such as the L1 norm or an alpha-trimmed mean may be used.

[00198] The second embodiment uses a set of largely decoupled models. In particular, a Gaussian Mixture Model is computed for geometry, for the qualitative descriptor, for the image intensity descriptor, and for the range descriptor, as described above. In alternative embodiments, some or all of these may be computed jointly. This may be accomplished by concatenating the appearance descriptor and feature location and clustering this joint vector. Alternatively, a decoupled model can be computed and appearance-geometry pairs with high co-occurrence can be associated to each other.
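A sketch of the iterative reduction of K described in paragraph [00196] follows, using a Gaussian mixture over the class's feature descriptors; the majority-support test, the starting value of K, and the use of scikit-learn are assumptions of the sketch, not requirements of the embodiment.

import numpy as np
from sklearn.mixture import GaussianMixture

def prune_clusters(features, object_ids, k_init, n_objects):
    # Fit K clusters, drop features lying in clusters supported by fewer than a
    # majority of the class's objects, reduce K, and repeat until every remaining
    # cluster has majority support (or no clusters remain).
    feats, ids, k = np.asarray(features), np.asarray(object_ids), k_init
    while k >= 1:
        gmm = GaussianMixture(n_components=k, covariance_type='diag',
                              random_state=0).fit(feats)
        labels = gmm.predict(feats)
        support = np.array([len(set(ids[labels == c])) for c in range(k)])
        weak = support <= n_objects // 2            # clusters lacking majority support
        if not weak.any():
            return gmm, feats, ids
        keep = ~np.isin(labels, np.flatnonzero(weak))
        feats, ids = feats[keep], ids[keep]
        k -= int(weak.sum())
    return None, feats, ids

# Stand-in data: descriptors pooled from the 6 objects of one class.
features = np.random.rand(300, 8)
object_ids = np.random.randint(0, 6, 300)
model, kept_feats, kept_ids = prune_clusters(features, object_ids, k_init=10, n_objects=6)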
[00199] The second embodiment represents the geometry model as a set of distributions of the variation in position of feature descriptors given nominal pose and global scale normalization. Because of the global scale normalization in the class model and in recognition, an object and a scaled version of the object in a scene can be recognized equally well, provided that the scaling is according to the global scale normalization of the class. Alternative embodiments may not model the global scale variation within a class, and in recognition there is no rescaling. Consequently, a scaled version of an object will be penalized for its deviation from the nominal size of the class. Depending on the application, either the semantics of the second embodiment or the semantics of an alternative embodiment may be appropriate.

[00200] In other embodiments, a wider range of local and global scale and shape models may be used. Instead of a single global scale, different scaling factors may be used along different axes, resulting in a global shape model. For example, affine deformations might be used as a global shape model. Also, the object may be segmented into parts, and a separate shape model constructed for each part. For example, a human figure may be segmented into the rigid limb structures, and a shape model for each structure developed independently.

[00201] The second embodiment builds scale models using equal weighting of the features. However, if some feature clusters contain more features and/or have smaller variance, alternative embodiments may weight those features more highly when computing the local and global shape models.

[00202] The second embodiment performs recognition by computing the class likelihood ratio based on probability models computed from the feature descriptors of objects belonging to a class. Alternative embodiments may represent a class by other means. For example, a support vector machine may be used to learn the properties of a class from the feature descriptors of objects belonging to a class.
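A minimal sketch of this support-vector-machine alternative is given below; the descriptor dimension, the training labels, the RBF kernel, and the use of scikit-learn are stand-ins chosen for illustration.

import numpy as np
from sklearn.svm import SVC

# Stand-in descriptors from objects belonging to the class (label 1) and from
# objects outside it (label 0); in practice these come from the object database.
in_class = np.random.rand(200, 32)
out_class = np.random.rand(200, 32)
X = np.vstack([in_class, out_class])
y = np.concatenate([np.ones(len(in_class)), np.zeros(len(out_class))])

# An RBF-kernel SVM learns a decision boundary around the class's descriptors.
clf = SVC(kernel='rbf', gamma='scale', probability=True).fit(X, y)

# Score a scene descriptor by the estimated probability that it belongs to the class.
scene_descriptor = np.random.rand(1, 32)
print(clf.predict_proba(scene_descriptor)[0, 1])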
Alternatively, many other machine-learning techniques described in the literature may be used to learn the properties of a class from the feature descriptors of objects belonging to a class and may be used in this invention to recognize class instances.

Class Database Construction

[00203] The second embodiment computes class models by independently normalizing the size of each object in the class, and then computing geometry clusters for all size-normalized features. In alternative embodiments, object models may be matched to each other, subject to a group of global deformations, and clustering performed when all class members have been registered to a common frame. This may be accomplished by first clustering on feature appearance. The features of each object that are associated with a particular cluster may be taken to be potential correspondences among models. For any pair of objects, these correspondences may be sampled using a procedure such as RANSAC to produce an aligning transformation that provides maximal agreement among the features of the models.
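A compact sketch of such a RANSAC alignment between two point sets appears below; restricting the deformation group to rigid motions, sampling three correspondences at a time, and the inlier tolerance are assumptions of the sketch rather than choices made by the embodiment.

import numpy as np

def rigid_fit(P, Q):
    # Least-squares rotation R and translation t mapping points P onto Q (Kabsch method).
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def ransac_align(pts_a, pts_b, corrs, iters=500, inlier_tol=0.01):
    # Sample 3 candidate correspondences, fit a rigid transform, and keep the
    # transform that brings the largest number of correspondences into agreement.
    corrs = np.asarray(corrs)
    rng = np.random.default_rng(0)
    best = (None, None, -1)
    for _ in range(iters):
        sample = rng.choice(len(corrs), size=3, replace=False)
        ia, ib = corrs[sample, 0], corrs[sample, 1]
        R, t = rigid_fit(pts_a[ia], pts_b[ib])
        moved = pts_a[corrs[:, 0]] @ R.T + t
        inliers = np.linalg.norm(moved - pts_b[corrs[:, 1]], axis=1) < inlier_tol
        if inliers.sum() > best[2]:
            best = (R, t, int(inliers.sum()))
    return best

# Example: two copies of a model related by a known rotation and translation.
A = np.random.rand(40, 3)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
B = A @ Rz.T + np.array([0.1, -0.2, 0.05])
corrs = np.stack([np.arange(40), np.arange(40)], axis=1)   # candidate feature pairs
R, t, n_inliers = ransac_align(A, B, corrs)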
Sharing Features Among Classes

[00204] The second embodiment constructs a separate model for each class; in particular, the clusters of one class are not linked to the clusters of another. Alternative embodiments may construct class models that share features. This may speed up database construction, since class models for previously encountered features may be re-used when processing a new class. It may also speed up recognition, since a shared feature is represented once in the database, rather than multiple times.

Filtering Matches in Recognition

[00205] In the first embodiment, the attempt to match an observed feature to the model database is made faster by using the qualitative descriptor as a filter and by using multiple binary searches to implement the lookup. Alternative embodiments may do the lookup in a different way. Various data structures might be used in place of the ordered lists described in the first embodiment. Various data structures that can efficiently locate nearest neighbors in a multi-dimensional space may be used.
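One such structure is a k-d tree; the sketch below, with stand-in descriptor dimensions and counts, shows how a nearest-neighbor lookup of scene descriptors against the database might be organized, without implying that this is the structure used in the first embodiment.

import numpy as np
from scipy.spatial import cKDTree

# Stand-in database of model appearance descriptors, indexed once offline.
db_descriptors = np.random.rand(10000, 32)
tree = cKDTree(db_descriptors)

# At recognition time, retrieve the nearest database descriptors for each observed
# scene descriptor (here, the 5 nearest for each of 50 scene features).
scene_descriptors = np.random.rand(50, 32)
dists, idx = tree.query(scene_descriptors, k=5)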
Recognition - Obtaining an Initial Alignment

[00206] The first embodiment obtains an initial alignment of the model with a portion of the scene by using a single correspondence <f*, g*> as described above. Alternative embodiments may obtain an initial alignment in other ways.

[00207] One alternative is to replace the single correspondence <f*, g*> with multiple corresponding points <f1, g1>, ..., <fN, gN> where all the model features gi belong to the same object. The latter approach may provide a better approximation to the correct aligning pose if all the fi are associated with the same object in the scene. In particular, if N is at least 3, then the alignment may be computed using only the position components, which may be advantageous if the surface normals are more noisy than the position.

[00208] Another alternative is to replace the table L with a different mechanism for choosing correspondences. Correspondences may be chosen at random or according to some probability distribution. Alternatively, a probability distribution could be constructed from M or L and the RANSAC method may be employed, sampling from possible feature correspondences. Also, groups of correspondences <f1, g1>, ..., <fN, gN> may be chosen so that the fi are in a nearby region of the observed scene, so as to improve the chance that all the fi are associated with the same object in the scene. Alternatively, distance in the scene may be used as a weighting function for choosing the fi. There are many variations on these ideas.

[00209] Similar considerations apply to class recognition. There are many ways of choosing correspondences to obtain an initial alignment of the class model with a portion of the scene. An example will illustrate the diversity of possible techniques. When choosing the correspondences <f1, g1>, ..., <f4, g4> described in the second embodiment, it is desirable that all the fi are associated with the same object in the scene. One means for ensuring this is to extract smooth connected surfaces from the range data, as described as one possible embodiment in the section "Locally Transforming Images". Each interest point may then be associated with the surface on which it is found. In typical situations, each surface so extracted lies on only one object of the scene, so that the collection of interest points on a surface belong to the same object. This association may be used to choose correspondences so that all the fi are associated with the same object.
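For illustration, the surface-based selection just described might be organized as sketched below; the surface labels, the precomputed per-feature matches, and the group size of four are stand-ins for quantities the embodiments obtain elsewhere.

import numpy as np
from collections import defaultdict

def group_by_surface(surface_labels):
    # Map each extracted surface to the indices of the interest points lying on it.
    groups = defaultdict(list)
    for i, label in enumerate(surface_labels):
        groups[label].append(i)
    return list(groups.values())

def same_surface_correspondences(surface_labels, matches, n, seed=0):
    # Choose n scene features lying on a single extracted surface, paired with
    # their matched model (or class-cluster) features.
    rng = np.random.default_rng(seed)
    groups = [g for g in group_by_surface(surface_labels) if len(g) >= n]
    chosen = rng.choice(groups[rng.integers(len(groups))], size=n, replace=False)
    return [(i, matches[i]) for i in chosen]

# Stand-in data: 100 scene interest points on 7 surfaces, each with a match index.
labels = np.random.randint(0, 7, 100)
matches = np.random.randint(0, 500, 100)
corrs = same_surface_correspondences(labels, matches, n=4)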
Recognition When the Object Likelihood Ratio Does Not Exceed the Threshold

In the first embodiment, if the object likelihood ratio does not exceed the threshold, the match between f* and g* is disallowed as an initial match. In alternative embodiments, the initial match may be disallowed only temporarily and other matches considered. If there are disallowed matches and an object is recognized subsequent to the match being disallowed, the match is reallowed and the recognition process repeated. This alternative embodiment may improve detection of objects that are partially occluded. In particular, the computation of P(h | O, χ) can take into account recognized objects that may occlude O. This may increase the likelihood ratio for the object O when occluding objects are recognized.

Decision Criteria

[00210] The first and second embodiments compute probabilities and approximations to probabilities; they base the decision as to whether an object or class instance is present in an observed scene on an approximation to the likelihood ratio. In alternative embodiments, the computation may be performed without considering explicit probabilities. For example, rather than compute the probability of an observed scene feature f given a model object feature or model class feature g, an alternative embodiment may simply compute a match score between f and g. Various match score functions may be used. Similar considerations apply to matches between groups of scene features F and model or class features G. The decision as to whether an object or class instance is present in an observed scene may be based on the value of a match score compared to empirically obtained criteria, and these criteria may vary from object to object and from class to class.
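As a minimal illustration of such a match score (the Gaussian-kernel form and the bandwidth below are arbitrary choices for the example, not functions prescribed by the embodiments):

import numpy as np

def match_score(f, g, sigma=0.5):
    # Score in (0, 1] that is high when scene feature f resembles model feature g.
    f, g = np.asarray(f, dtype=float), np.asarray(g, dtype=float)
    return float(np.exp(-np.sum((f - g) ** 2) / (2.0 * sigma ** 2)))

# A candidate object or class instance would be accepted when the combined score of
# its matched features exceeds an empirically chosen, per-object threshold.
f = np.random.rand(32)
g = f + 0.05 * np.random.rand(32)
print(match_score(f, g))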
Hierarchical Recognition

[00211] The first embodiment recognizes specific objects; the second embodiment recognizes classes of objects. In alternative embodiments, these may be combined to enhance recognition performance. That is, an object in the scene may first be classified by class, and subsequent recognition may consider only objects within that class. In other embodiments, there may be a hierarchy of classes, and recognition may proceed by starting with the most general class structure and progressing to the most specific.

Implementation of Procedural Steps

[00212] The procedural steps of the several embodiments have been described above. These steps may be implemented in a variety of programming languages, such as C++, C, Java, Ada, Fortran, or any other general-purpose programming language. These implementations may be compiled into the machine language of a particular computer or they may be interpreted. They may also be implemented in the assembly language or the machine language of a particular computer. The method may be implemented on a computer, and the executable program instructions may be stored on a computer-readable medium.

[00213] The procedural steps may also be implemented in specialized programmable processors. Examples of such specialized hardware include digital signal processors (DSPs), graphics processors (GPUs), media processors, and streaming processors.

[00214] The procedural steps may also be implemented in electronic hardware designed for this task. In particular, integrated circuits may be used. Examples of integrated circuit technologies that may be used include Field Programmable Gate Arrays (FPGAs), gate arrays, standard cells, and full custom ICs.

[00215] Implementation using any of the methods described in this invention disclosure may carry out some of the procedural steps in parallel rather than serially.

Application to Robotics

[00216] Among other applications, this invention may be applied to robotic manipulation. Objects may be recognized as described in this invention. Once an object has been recognized, properties relevant to robotic manipulation can be looked up in a database. These properties include its surface(s), its weight, and the coefficient of friction of its surface(s).
Application to Face Recognition

[00217] Among other applications, this invention may be applied to face recognition. Prior techniques for face recognition have used either appearance models or 3D models, or have combined their results only after separate recognition operations. By acquiring registered range and intensity images, by constructing models based on pose-invariant features, and by using them for recognition as described above, face recognition may be performed advantageously.

Other Applications

[00218] The invention is not limited to the applications listed above. The present invention can also be applied in many other fields such as inspection, assembly, and logistics. It will be recognized that this list is intended as illustrative rather than limiting and the invention can be utilized for varied purposes.

Conclusion, Ramifications, and Scope

[00219] In summary, the invention disclosed herein provides a system and method for performing 3D object recognition using range and appearance data.

[00220] In the foregoing specification, the present invention is described with reference to specific embodiments thereof, but those skilled in the art will recognize that the present invention is not limited thereto. Various features and aspects of the above-described present invention may be used individually or jointly. Further, the present invention can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than
restrictive. It will be recognized that the terms "comprising," "including," and "having," as used herein, are specifically intended to be read as open-ended terms of art.

Claims

CLAIMS

What is claimed is:

1. A method of choosing pose-invariant interest points on a three-dimensional (3D) image, comprising the steps of transforming the intensity image at a plurality of image locations so that the local region about each image location appears approximately as it would appear if it were viewed in a standard pose with respect to a camera; and applying one or more interest point operators to the transformed image.
2. The method of claim 1 wherein the step of transforming the image is performed by using the range data to compute the standard pose with respect to the camera.
3. The method of claim 2 wherein the standard pose is such that the image appears as if it were viewed with the camera axis along the surface normal.
4. The method of claim 1 wherein the step of transforming the image further comprises the steps of: computing a second-order approximation to the local surface geometry from the range data of the 3D image; and warping the image according to the second-order approximation.
5. The method of claim 2, wherein the step of transforming the image further comprises the steps of:
using the range data to compute the surface normal at each image location; and using the surface normal and the range data to compute the standard pose with respect to the camera.
6. A method of computing pose-invariant feature descriptors of a three-dimensional (3D) image, comprising the steps of choosing one or more interest points on the intensity image; transforming the intensity image so that the local region about each interest point appears approximately as it would appear if it were viewed in a standard pose with respect to a camera; and computing a feature descriptor comprising a function of the intensity image in the local region about each interest point in the transformed image.
7. The method of claim 6 wherein the step of transforming the image is performed by using the range data to compute the standard pose with respect to the camera.
8. The method of claim 7, wherein the step of transforming the image further comprises the steps of: using the range data to compute the surface normal at each interest point; and using the surface normal and the range data to compute the standard pose with respect to the camera.
9. The method of claim 7 wherein the standard pose is such that the image appears as if it were viewed with the camera axis along the surface normal.
10. The method of claim 6 wherein the step of transforming the image further comprises the steps of: computing a second-order approximation to the local surface geometry from the range data of the 3D image; and warping the image according to the second-order approximation.
11. The method of claim 6 wherein the feature descriptor further comprises a function of the local range image as it would appear if it were viewed in a standard pose with respect to the camera.
12. The method of claim 6 wherein the feature descriptor further comprises a function of the 3D pose of the interest point.
13. The method of claim 6 wherein the feature descriptor further comprises a function of the 3D pose of one or more other interest points of the image.
14. The method of claim 6 wherein the step of computing a feature descriptor further comprises computing a dimensionality reduction in the function of the local region.
15. A method for recognizing objects in an observed scene, comprising the steps of acquiring a three-dimensional (3D) image of the scene; choosing pose-invariant interest points by applying one or more interest point operators to the intensity component of the image as it would appear if it were viewed in a standard pose with respect to a camera; computing pose-invariant feature descriptors of the intensity image at the interest points; constructing a database comprising 3D object models, each object model comprising a set of pose-invariant feature descriptors of one or more images of an object; and comparing the pose-invariant feature descriptors of the scene image to pose-invariant feature descriptors of the object models.
16. A method for recognizing objects in an observed scene, comprising the steps of acquiring a three-dimensional (3D) image of the scene; choosing pose-invariant interest points in the image; computing pose-invariant feature descriptors of the image at the interest points, each feature descriptor comprising a function of the local intensity component of the 3D image as it would appear if it were viewed in a standard pose with respect to a camera; constructing a database comprising 3D object models, each object model comprising a set of pose-invariant feature descriptors of one or more images of an object; and
comparing the pose-invariant feature descriptors of the scene image to pose-invariant feature descriptors of the object models.
17. The method of claim 15 wherein the step of comparing the pose-invariant feature descriptors is performed by evaluating the probability that feature descriptors of the scene are the result of observing feature descriptors of the object models.
18. The method of claim 17 wherein the step of evaluating the probability that feature descriptors of the scene are the result of observing feature descriptors of the object models further comprises the steps of: computing a correspondence of feature descriptors in the scene with feature descriptors of an object model and an alignment under that correspondence; and evaluating an approximation to the likelihood ratio under the correspondence and alignment.
19. The method of claim 18 wherein the step of computing a correspondence and alignment further comprises the steps of computing a correspondence of a small number of feature descriptors; computing an alignment based on the small number of feature descriptors; and iteratively performing the sub-steps of: identifying potentially visible model features using the alignment;
retaining those visible model features that match feature descriptors in the scene; updating the correspondence to include the retained model features; and updating the current alignment based on the retained model features.
20. A method for computing three-dimensional (3D) class models, comprising the steps of acquiring 3D images of objects with class labels; choosing pose-invariant interest points in the images by applying one or more interest point operators to the intensity component of the images as they would appear if viewed in a standard pose with respect to a camera; computing pose-invariant object feature descriptors at the interest points; and computing functions of the pose-invariant object feature descriptors and the class labels.
21. A method for computing three-dimensional (3D) class models, comprising the steps of acquiring 3D images of objects with class labels; choosing pose-invariant interest points in the images;
computing pose-invariant feature descriptors at the interest points, each feature descriptor comprising a function of the local intensity component of the 3D image as it would appear if it were viewed in a standard pose with respect to a camera; and computing functions of the pose-invariant feature descriptors and the class labels.
22. The method of claim 20 wherein the step of computing functions further comprises computing Gaussian Mixture Models over the feature descriptors, each Gaussian Mixture Model comprising one or more clusters.
23. The method of claim 22 wherein the step of computing functions further comprises computing Gaussian Mixture Models of the global size variation within the class.
24. The method of claim 20 wherein the step of computing functions further comprises computing one or more support vector machines.
25. A method for recognizing instances of classes in an observed scene, comprising the steps of: acquiring a three-dimensional (3D) image of a scene;
choosing pose-invariant interest points in the image by applying one or more interest point operators to the intensity component of the image as it would appear if it were viewed in a standard pose with respect to a camera; computing pose-invariant feature descriptors at the interest points; constructing a database comprising 3D class models; and comparing pose-invariant feature descriptors of the scene image to the 3D class models.
26. The method of claim 25 wherein the 3D class models comprise Gaussian Mixture Models, each Gaussian Mixture Model comprising one or more clusters.
27. The method of claim 25 wherein the step of comparing the pose-invariant feature descriptors to the 3D class models further comprises evaluating the probability that feature descriptors of the scene are the result of observing clusters of a class model.
28. The method of claim 27 wherein the step of evaluating the probability that feature descriptors of the scene are the result of observing clusters of a class model further comprises the steps of: computing a correspondence of feature descriptors in the scene with clusters of a class model and an alignment under that correspondence; and evaluating an approximation to the likelihood ratio under the correspondence and alignment.
29. A method for recognizing instances of classes in an observed scene, comprising the steps of: acquiring a three-dimensional (3D) image of a scene; choosing pose-invariant interest points in the image; computing pose-invariant feature descriptors at the interest points, each feature descriptor comprising a function of the local intensity component of the 3D image as it would appear if it were viewed in a standard pose with respect to a camera; constructing a database comprising 3D class models; and comparing pose-invariant feature descriptors of the scene image to the 3D class models.
PCT/US2005/022294 2004-06-23 2005-06-22 System and method for 3d object recognition using range and intensity WO2006002320A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP05763226A EP1766552A2 (en) 2004-06-23 2005-06-22 System and method for 3d object recognition using range and intensity

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US58246104P 2004-06-23 2004-06-23
US60/582,461 2004-06-23

Publications (2)

Publication Number Publication Date
WO2006002320A2 true WO2006002320A2 (en) 2006-01-05
WO2006002320A3 WO2006002320A3 (en) 2006-06-22

Family

ID=35782345

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/022294 WO2006002320A2 (en) 2004-06-23 2005-06-22 System and method for 3d object recognition using range and intensity

Country Status (3)

Country Link
US (1) US20050286767A1 (en)
EP (1) EP1766552A2 (en)
WO (1) WO2006002320A2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1897033A2 (en) * 2005-06-16 2008-03-12 Strider Labs, Inc. System and method for recognition in 2d images using 3d class models
WO2009069071A1 (en) * 2007-11-28 2009-06-04 Nxp B.V. Method and system for three-dimensional object recognition
CN104077603A (en) * 2014-07-14 2014-10-01 金陵科技学院 Outdoor scene monocular vision space recognition method in terrestrial gravity field environment
EP3005238A4 (en) * 2013-06-04 2017-06-21 Elbit Systems Land and C4I Ltd. Method and system for coordinating between image sensors

Families Citing this family (106)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7379599B1 (en) * 2003-07-30 2008-05-27 Matrox Electronic Systems Ltd Model based object recognition method using a texture engine
US7623685B2 (en) * 2004-08-20 2009-11-24 The Regents Of The University Of Colorado Biometric signatures and identification through the use of projective invariants
BRPI0514755B1 (en) * 2004-08-30 2017-10-17 Commonwealth Scientific And Industrial Research Organisation METHOD FOR AUTOMATED 3D IMAGE FORMATION
US7684643B2 (en) * 2004-10-26 2010-03-23 Siemens Medical Solutions Usa, Inc. Mutual information regularized Bayesian framework for multiple image restoration
US20060217925A1 (en) * 2005-03-23 2006-09-28 Taron Maxime G Methods for entity identification
US7317416B2 (en) * 2005-12-22 2008-01-08 Leonard Flom Skeletal topography imaging radar for unique individual identification
US20070162505A1 (en) * 2006-01-10 2007-07-12 International Business Machines Corporation Method for using psychological states to index databases
EP2023288A4 (en) * 2006-05-10 2010-11-10 Nikon Corp Object recognition device, object recognition program, and image search service providing method
KR100781239B1 (en) * 2006-06-06 2007-11-30 재단법인서울대학교산학협력재단 Method for tracking bacteria swimming near the solid surface
US8073196B2 (en) * 2006-10-16 2011-12-06 University Of Southern California Detection and tracking of moving objects from a moving platform in presence of strong parallax
US8150101B2 (en) 2006-11-13 2012-04-03 Cybernet Systems Corporation Orientation invariant object identification using model-based image processing
US20100095236A1 (en) * 2007-03-15 2010-04-15 Ralph Andrew Silberstein Methods and apparatus for automated aesthetic transitioning between scene graphs
JP5096776B2 (en) * 2007-04-04 2012-12-12 キヤノン株式会社 Image processing apparatus and image search method
US8086551B2 (en) * 2007-04-16 2011-12-27 Blue Oak Mountain Technologies, Inc. Electronic system with simulated sense perception and method of providing simulated sense perception
US7970226B2 (en) * 2007-04-23 2011-06-28 Microsoft Corporation Local image descriptors
US8126275B2 (en) * 2007-04-24 2012-02-28 Microsoft Corporation Interest point detection
US8180808B2 (en) * 2007-06-08 2012-05-15 Ketera Technologies, Inc. Spend data clustering engine with outlier detection
DE102007048320A1 (en) * 2007-10-09 2008-05-15 Daimler Ag Method for adapting object model to three dimensional scatter plot, involves preparing probability chart from one of images, and determining detention probability of object in images from chart
US8023742B2 (en) * 2007-10-09 2011-09-20 Microsoft Corporation Local image descriptors using linear discriminant embedding
DE602007003849D1 (en) * 2007-10-11 2010-01-28 Mvtec Software Gmbh System and method for 3D object recognition
US8532344B2 (en) * 2008-01-09 2013-09-10 International Business Machines Corporation Methods and apparatus for generation of cancelable face template
US8538096B2 (en) * 2008-01-09 2013-09-17 International Business Machines Corporation Methods and apparatus for generation of cancelable fingerprint template
US8520979B2 (en) * 2008-08-19 2013-08-27 Digimarc Corporation Methods and systems for content processing
US8391640B1 (en) * 2008-08-29 2013-03-05 Adobe Systems Incorporated Method and apparatus for aligning and unwarping distorted images
US8340453B1 (en) 2008-08-29 2012-12-25 Adobe Systems Incorporated Metadata-driven method and apparatus for constraining solution space in image processing techniques
US8842190B2 (en) 2008-08-29 2014-09-23 Adobe Systems Incorporated Method and apparatus for determining sensor format factors from image metadata
US8368773B1 (en) 2008-08-29 2013-02-05 Adobe Systems Incorporated Metadata-driven method and apparatus for automatically aligning distorted images
US8724007B2 (en) 2008-08-29 2014-05-13 Adobe Systems Incorporated Metadata-driven method and apparatus for multi-image processing
US20120075296A1 (en) * 2008-10-08 2012-03-29 Strider Labs, Inc. System and Method for Constructing a 3D Scene Model From an Image
US10650608B2 (en) * 2008-10-08 2020-05-12 Strider Labs, Inc. System and method for constructing a 3D scene model from an image
US8229928B2 (en) * 2009-02-27 2012-07-24 Empire Technology Development Llc 3D object descriptors
JP5310130B2 (en) * 2009-03-11 2013-10-09 オムロン株式会社 Display method of recognition result by three-dimensional visual sensor and three-dimensional visual sensor
JP2010210585A (en) * 2009-03-12 2010-09-24 Omron Corp Model display method in three-dimensional visual sensor, and three-dimensional visual sensor
JP5245937B2 (en) * 2009-03-12 2013-07-24 オムロン株式会社 Method for deriving parameters of three-dimensional measurement processing and three-dimensional visual sensor
JP5316118B2 (en) * 2009-03-12 2013-10-16 オムロン株式会社 3D visual sensor
JP5245938B2 (en) 2009-03-12 2013-07-24 オムロン株式会社 3D recognition result display method and 3D visual sensor
JP5714232B2 (en) * 2009-03-12 2015-05-07 オムロン株式会社 Calibration apparatus and method for confirming accuracy of parameters for three-dimensional measurement
JP5282614B2 (en) * 2009-03-13 2013-09-04 オムロン株式会社 Model data registration method and visual sensor for visual recognition processing
JP5229575B2 (en) * 2009-05-08 2013-07-03 ソニー株式会社 Image processing apparatus and method, and program
US8630456B2 (en) * 2009-05-12 2014-01-14 Toyota Jidosha Kabushiki Kaisha Object recognition method, object recognition apparatus, and autonomous mobile robot
US20100331041A1 (en) * 2009-06-26 2010-12-30 Fuji Xerox Co., Ltd. System and method for language-independent manipulations of digital copies of documents through a camera phone
JP2011034177A (en) * 2009-07-30 2011-02-17 Sony Corp Information processor, information processing method, and program
US8687898B2 (en) * 2010-02-01 2014-04-01 Toyota Motor Engineering & Manufacturing North America System and method for object recognition based on three-dimensional adaptive feature detectors
JP5618569B2 (en) * 2010-02-25 2014-11-05 キヤノン株式会社 Position and orientation estimation apparatus and method
EP2385483B1 (en) 2010-05-07 2012-11-21 MVTec Software GmbH Recognition and pose determination of 3D objects in 3D scenes using geometric point pair descriptors and the generalized Hough Transform
EP2386998B1 (en) * 2010-05-14 2018-07-11 Honda Research Institute Europe GmbH A Two-Stage Correlation Method for Correspondence Search
US9396545B2 (en) 2010-06-10 2016-07-19 Autodesk, Inc. Segmentation of ground-based laser scanning points from urban environment
US8605093B2 (en) * 2010-06-10 2013-12-10 Autodesk, Inc. Pipe reconstruction from unorganized point cloud data
US9122955B2 (en) * 2010-06-28 2015-09-01 Ramot At Tel-Aviv University Ltd. Method and system of classifying medical images
EP2617012B1 (en) 2010-09-16 2015-06-17 Mor Research Applications Ltd. Method and system for analyzing images
US9026536B2 (en) * 2010-10-17 2015-05-05 Canon Kabushiki Kaisha Systems and methods for cluster comparison
JP5158223B2 (en) * 2011-04-06 2013-03-06 カシオ計算機株式会社 3D modeling apparatus, 3D modeling method, and program
US8799201B2 (en) 2011-07-25 2014-08-05 Toyota Motor Engineering & Manufacturing North America, Inc. Method and system for tracking objects
TWI489859B (en) * 2011-11-01 2015-06-21 Inst Information Industry Image warping method and computer program product thereof
WO2013069023A2 (en) * 2011-11-13 2013-05-16 Extreme Reality Ltd. Methods systems apparatuses circuits and associated computer executable code for video based subject characterization, categorization, identification and/or presence response
US9070083B2 (en) * 2011-12-13 2015-06-30 Iucf-Hyu Industry-University Cooperation Foundation Hanyang University Method for learning task skill and robot using thereof
US9002098B1 (en) * 2012-01-25 2015-04-07 Hrl Laboratories, Llc Robotic visual perception system
US9141880B2 (en) * 2012-10-05 2015-09-22 Eagle View Technologies, Inc. Systems and methods for relating images to each other by determining transforms without using image acquisition metadata
US9237340B2 (en) * 2012-10-10 2016-01-12 Texas Instruments Incorporated Camera pose estimation
CN110909825B (en) 2012-10-11 2024-05-28 开文公司 Detecting objects in visual data using probabilistic models
EP2720171B1 (en) * 2012-10-12 2015-04-08 MVTec Software GmbH Recognition and pose determination of 3D objects in multimodal scenes
EP2911116A4 (en) * 2012-10-18 2016-09-07 Konica Minolta Inc Image-processing device, image-processing method, and image-processing program
JP5668042B2 (en) * 2012-10-31 2015-02-12 東芝テック株式会社 Product reading device, product sales data processing device, and product reading program
JP5707375B2 (en) * 2012-11-05 2015-04-30 東芝テック株式会社 Product recognition apparatus and product recognition program
BR112015012073A2 (en) 2012-11-29 2017-07-11 Koninklijke Philips Nv laser device to project a structured light pattern over a scene, and use a device
US9224064B2 (en) * 2013-02-15 2015-12-29 Samsung Electronics Co., Ltd. Electronic device, electronic device operating method, and computer readable recording medium recording the method
US9314219B2 (en) * 2013-02-27 2016-04-19 Paul J Keall Method to estimate real-time rotation and translation of a target with a single x-ray imager
US9259840B1 (en) * 2013-03-13 2016-02-16 Hrl Laboratories, Llc Device and method to localize and control a tool tip with a robot arm
JP5760032B2 (en) * 2013-04-25 2015-08-05 東芝テック株式会社 Recognition dictionary creation device and recognition dictionary creation program
US9355123B2 (en) 2013-07-19 2016-05-31 Nant Holdings Ip, Llc Fast recognition algorithm processing, systems and methods
US10007336B2 (en) * 2013-09-10 2018-06-26 The Board Of Regents Of The University Of Texas System Apparatus, system, and method for mobile, low-cost headset for 3D point of gaze estimation
WO2015123647A1 (en) 2014-02-14 2015-08-20 Nant Holdings Ip, Llc Object ingestion through canonical shapes, systems and methods
JP6331517B2 (en) * 2014-03-13 2018-05-30 オムロン株式会社 Image processing apparatus, system, image processing method, and image processing program
US9361694B2 (en) * 2014-07-02 2016-06-07 Ittiam Systems (P) Ltd. System and method for determining rotation invariant feature descriptors for points of interest in digital images
CN105224582B (en) * 2014-07-03 2018-11-09 联想(北京)有限公司 Information processing method and equipment
US9794542B2 (en) * 2014-07-03 2017-10-17 Microsoft Technology Licensing, Llc. Secure wearable computer interface
DE102014116520B4 (en) * 2014-11-12 2024-05-02 Pepperl+Fuchs Se Method and device for object recognition
CN104657986B (en) * 2015-02-02 2017-09-29 华中科技大学 A kind of quasi- dense matching extended method merged based on subspace with consistency constraint
US10937168B2 (en) 2015-11-02 2021-03-02 Cognex Corporation System and method for finding and classifying lines in an image with a vision system
US10152780B2 (en) 2015-11-02 2018-12-11 Cognex Corporation System and method for finding lines in an image with a vision system
US9868212B1 (en) * 2016-02-18 2018-01-16 X Development Llc Methods and apparatus for determining the pose of an object based on point cloud data
JP6858067B2 (en) * 2016-06-17 2021-04-14 株式会社デンソーテン Radar device and control method of radar device
US9875398B1 (en) 2016-06-30 2018-01-23 The United States Of America As Represented By The Secretary Of The Army System and method for face recognition with two-dimensional sensing modality
US10380767B2 (en) 2016-08-01 2019-08-13 Cognex Corporation System and method for automatic selection of 3D alignment algorithms in a vision system
US10311593B2 (en) 2016-11-16 2019-06-04 International Business Machines Corporation Object instance identification using three-dimensional spatial configuration
US11205103B2 (en) 2016-12-09 2021-12-21 The Research Foundation for the State University Semisupervised autoencoder for sentiment analysis
US10783346B2 (en) * 2017-12-11 2020-09-22 Invensense, Inc. Enhancing quality of a fingerprint image
US10706505B2 (en) * 2018-01-24 2020-07-07 GM Global Technology Operations LLC Method and system for generating a range image using sparse depth data
US10957072B2 (en) 2018-02-21 2021-03-23 Cognex Corporation System and method for simultaneous consideration of edges and normals in image features by a vision system
DE102018206662A1 (en) * 2018-04-30 2019-10-31 Siemens Aktiengesellschaft Method for recognizing a component, computer program and computer-readable storage medium
US11747444B2 (en) * 2018-08-14 2023-09-05 Intel Corporation LiDAR-based object detection and classification
EP3899874A4 (en) * 2018-12-20 2022-09-07 Packsize, LLC Systems and methods for object dimensioning based on partial visual information
US11830274B2 (en) * 2019-01-11 2023-11-28 Infrared Integrated Systems Limited Detection and identification systems for humans or objects
US11565411B2 (en) * 2019-05-29 2023-01-31 Lg Electronics Inc. Intelligent robot cleaner for setting travel route based on video learning and managing method thereof
US11361505B2 (en) * 2019-06-06 2022-06-14 Qualcomm Technologies, Inc. Model retrieval for objects in images using field descriptors
WO2021083475A1 (en) * 2019-10-28 2021-05-06 Telefonaktiebolaget Lm Ericsson (Publ) Method for generating a three dimensional, 3d, model
EP3842911B1 (en) * 2019-12-26 2023-04-05 Dassault Systèmes A 3d interface with an improved object selection
CN112947091B (en) * 2021-03-26 2022-06-10 福州大学 PID control-based method for optimizing heat production of magnetic nanoparticles in biological tissues
CN113740868B (en) * 2021-09-06 2024-01-30 中国联合网络通信集团有限公司 Vegetation distance measuring method and device and vegetation trimming device
CN113870351B (en) * 2021-09-28 2024-09-27 武汉大学 Indoor large scene pedestrian fingerprint positioning method based on monocular vision
GB202114947D0 (en) * 2021-10-19 2021-12-01 Oxbotica Ltd Method and apparatus
GB202114950D0 (en) * 2021-10-19 2021-12-01 Oxbotica Ltd Method and apparatus
GB202114945D0 (en) * 2021-10-19 2021-12-01 Oxbotica Ltd Method and apparatus
GB202114943D0 (en) * 2021-10-19 2021-12-01 Oxbotica Ltd Method and apparatus
US11741753B2 (en) 2021-11-23 2023-08-29 International Business Machines Corporation Augmentation for visual action data
CN115775278B (en) * 2023-02-13 2023-05-05 合肥安迅精密技术有限公司 Element identification positioning method and system containing local feature constraint and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3636513A (en) * 1969-10-17 1972-01-18 Westinghouse Electric Corp Preprocessing method and apparatus for pattern recognition
US6611630B1 (en) * 1996-07-10 2003-08-26 Washington University Method and apparatus for automatic shape characterization
US20030169906A1 (en) * 2002-02-26 2003-09-11 Gokturk Salih Burak Method and apparatus for recognizing objects

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2040273C (en) * 1990-04-13 1995-07-18 Kazu Horiuchi Image displaying system
EP0686932A3 (en) * 1994-03-17 1997-06-25 Texas Instruments Inc A computer vision system to detect 3-D rectangular objects
JPH0877356A (en) * 1994-09-09 1996-03-22 Fujitsu Ltd Method and device for processing three-dimensional multi-view image
US6445814B2 (en) * 1996-07-01 2002-09-03 Canon Kabushiki Kaisha Three-dimensional information processing apparatus and method
US6047078A (en) * 1997-10-03 2000-04-04 Digital Equipment Corporation Method for extracting a three-dimensional model using appearance-based constrained structure from motion
US6256409B1 (en) * 1998-10-19 2001-07-03 Sony Corporation Method for determining a correlation between images using multi-element image descriptors
US6711293B1 (en) * 1999-03-08 2004-03-23 The University Of British Columbia Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image
US6532301B1 (en) * 1999-06-18 2003-03-11 Microsoft Corporation Object recognition with occurrence histograms
US6865289B1 (en) * 2000-02-07 2005-03-08 Canon Kabushiki Kaisha Detection and removal of image occlusion errors
US6678414B1 (en) * 2000-02-17 2004-01-13 Xerox Corporation Loose-gray-scale template matching
JP4443722B2 (en) * 2000-04-25 2010-03-31 富士通株式会社 Image recognition apparatus and method
EP1202214A3 (en) * 2000-10-31 2005-02-23 Matsushita Electric Industrial Co., Ltd. Method and apparatus for object recognition
US7016532B2 (en) * 2000-11-06 2006-03-21 Evryx Technologies Image capture and identification system and process
US6879717B2 (en) * 2001-02-13 2005-04-12 International Business Machines Corporation Automatic coloring of pixels exposed during manipulation of image regions
US6845178B1 (en) * 2001-06-27 2005-01-18 Electro Scientific Industries, Inc. Automatic separation of subject pixels using segmentation based on multiple planes of measurement data
US7010158B2 (en) * 2001-11-13 2006-03-07 Eastman Kodak Company Method and apparatus for three-dimensional scene modeling and reconstruction
US6831641B2 (en) * 2002-06-17 2004-12-14 Mitsubishi Electric Research Labs, Inc. Modeling and rendering of surface reflectance fields of 3D objects
US7034822B2 (en) * 2002-06-19 2006-04-25 Swiss Federal Institute Of Technology Zurich System and method for producing 3D video images
US7103212B2 (en) * 2002-11-22 2006-09-05 Strider Labs, Inc. Acquisition of three-dimensional images by an active stereo technique using locally unique patterns
US7289662B2 (en) * 2002-12-07 2007-10-30 Hrl Laboratories, Llc Method and apparatus for apparatus for generating three-dimensional models from uncalibrated views
EP1599830A1 (en) * 2003-03-06 2005-11-30 Animetrics, Inc. Generation of image databases for multifeatured objects
JP3842233B2 (en) * 2003-03-25 2006-11-08 ファナック株式会社 Image processing apparatus and robot system
US7343039B2 (en) * 2003-06-13 2008-03-11 Microsoft Corporation System and process for generating representations of objects using a directional histogram model and matrix descriptor
KR100682889B1 (en) * 2003-08-29 2007-02-15 삼성전자주식회사 Method and Apparatus for image-based photorealistic 3D face modeling
JP3892838B2 (en) * 2003-10-16 2007-03-14 ファナック株式会社 3D measuring device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3636513A (en) * 1969-10-17 1972-01-18 Westinghouse Electric Corp Preprocessing method and apparatus for pattern recognition
US6611630B1 (en) * 1996-07-10 2003-08-26 Washington University Method and apparatus for automatic shape characterization
US20030169906A1 (en) * 2002-02-26 2003-09-11 Gokturk Salih Burak Method and apparatus for recognizing objects

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GROSS ET AL: 'Growing Gaussian Mixture Models for Pose Invariant Face Recognition' 2000, PROCEEDINGS. 15TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION vol. 1, 03 September 2000 - 07 September 2000, pages 1088 - 1091, XP010533738 *
POPE ET AL: 'Probabilistic Models of Appearance for 3-D Object Recognition' INTERNATIONAL JOURNAL OF COMPUTER VISION vol. 40, no. 2, 2000, pages 149 - 167, XP003001711 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1897033A2 (en) * 2005-06-16 2008-03-12 Strider Labs, Inc. System and method for recognition in 2d images using 3d class models
EP1897033A4 (en) * 2005-06-16 2015-06-24 Strider Labs Inc System and method for recognition in 2d images using 3d class models
WO2009069071A1 (en) * 2007-11-28 2009-06-04 Nxp B.V. Method and system for three-dimensional object recognition
EP3005238A4 (en) * 2013-06-04 2017-06-21 Elbit Systems Land and C4I Ltd. Method and system for coordinating between image sensors
CN104077603A (en) * 2014-07-14 2014-10-01 金陵科技学院 Outdoor scene monocular vision space recognition method in terrestrial gravity field environment
CN104077603B (en) * 2014-07-14 2017-04-19 南京原觉信息科技有限公司 Outdoor scene monocular vision space recognition method in terrestrial gravity field environment

Also Published As

Publication number Publication date
EP1766552A2 (en) 2007-03-28
US20050286767A1 (en) 2005-12-29
WO2006002320A3 (en) 2006-06-22

Similar Documents

Publication Publication Date Title
US20050286767A1 (en) System and method for 3D object recognition using range and intensity
Soltanpour et al. A survey of local feature methods for 3D face recognition
Hodaň et al. Detection and fine 3D pose estimation of texture-less objects in RGB-D images
Song et al. A literature survey on robust and efficient eye localization in real-life scenarios
US7929775B2 (en) System and method for recognition in 2D images using 3D class models
Su et al. Learning a dense multi-view representation for detection, viewpoint classification and synthesis of object categories
Tu et al. Shape matching and recognition–using generative models and informative features
Tuzel et al. Pedestrian detection via classification on riemannian manifolds
US7706603B2 (en) Fast object detection for augmented reality systems
Lowe Distinctive image features from scale-invariant keypoints
Wu et al. Detection and segmentation of multiple, partially occluded objects by grouping, merging, assigning part detection responses
Sangineto Pose and expression independent facial landmark localization using dense-SURF and the Hausdorff distance
Wang et al. Dense sift and gabor descriptors-based face representation with applications to gender recognition
Everingham et al. Automated person identification in video
Ambardekar et al. Vehicle classification framework: a comparative study
Zhang et al. Robust 3D face recognition based on resolution invariant features
Zhou et al. An efficient 3-D ear recognition system employing local and holistic features
Baltieri et al. Mapping appearance descriptors on 3d body models for people re-identification
Shan et al. Shapeme histogram projection and matching for partial object recognition
Andrade-Cetto et al. Object recognition
Perdoch et al. Stable affine frames on isophotes
Aragon-Camarasa et al. Unsupervised clustering in Hough space for recognition of multiple instances of the same object in a cluttered scene
Bressan et al. Using an ICA representation of local color histograms for object recognition
Garcia-Fidalgo et al. Methods for Appearance-based Loop Closure Detection
Wuhrer et al. Posture invariant surface description and feature extraction

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 2005763226

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 2005763226

Country of ref document: EP