US20150294189A1 - Method of providing image feature descriptors - Google Patents

Method of providing image feature descriptors

Info

Publication number
US20150294189A1
Authority
US
United States
Prior art keywords
descriptors
descriptor
feature
camera
image
Prior art date
Legal status
Abandoned
Application number
US14/417,046
Other languages
English (en)
Inventor
Selim BenHimane
Daniel Kurz
Thomas Olszamowski
Current Assignee
Apple Inc
Original Assignee
Metaio GmbH
Priority date
Filing date
Publication date
Application filed by Metaio GmbH filed Critical Metaio GmbH
Assigned to METAIO GMBH reassignment METAIO GMBH ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BENHIMANE, SELIM, KURZ, DANIEL, OLSZAMOWSKI, Thomas
Publication of US20150294189A1
Assigned to APPLE INC. reassignment APPLE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: METAIO GMBH
Priority to US15/444,404 (US10192145B2)
Priority to US16/259,367 (US10402684B2)
Priority to US16/531,678 (US10528847B2)

Classifications

    • G06K9/6255
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features
    • G06F18/2113Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/28Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • G06K9/623
    • G06K9/6262
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/771Feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/772Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries

Definitions

  • the invention is related to a method of providing a set of feature descriptors configured to be used in matching at least one feature of an object in an image of a camera, and a corresponding computer program product for performing the method.
  • Such method may be used among other applications, for example, in a method of determining the position and orientation of a camera with respect to an object.
  • a common approach to determine the position and orientation of a camera with respect to an object with a known geometry and visual appearance uses 2D-3D correspondences gained by means of local feature descriptors, such as SIFT described in D. G. Lowe. Distinctive image features from scale-invariant keypoints. Int. Journal on Computer Vision, 60(2):91-110, 2004.
  • one or more views of the object are used as reference images. Given these images, local features are detected and then described resulting in a set of reference feature descriptors with known 3D positions.
  • the same procedure is performed to gain current feature descriptors with 2D image coordinates.
  • a similarity measure, such as the reciprocal of the Euclidean distance of the descriptors, can be used to determine the similarity of two features.
  • Matching the current feature descriptors with the set of reference descriptors results in 2D-3D correspondences between the current camera image and the reference object.
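As a rough illustration of the matching step described above, the following sketch performs brute-force nearest-neighbour matching between two sets of descriptor vectors (NumPy arrays with one descriptor per row), using the reciprocal of the Euclidean distance as the similarity measure; the function name, the ratio test and all thresholds are assumptions of this sketch, not part of the patent.

```python
import numpy as np

def match_descriptors(current, reference, min_similarity=0.0, ratio=0.8):
    """Brute-force nearest-neighbour matching of descriptor vectors.

    `current` and `reference` are (n, d) arrays with at least two reference
    descriptors.  Similarity is the reciprocal of the Euclidean distance;
    a Lowe-style ratio test rejects ambiguous matches.  Returns a list of
    (current_index, reference_index) pairs.
    """
    matches = []
    for i, d in enumerate(current):
        dists = np.linalg.norm(reference - d, axis=1)   # distance to every reference descriptor
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        similarity = 1.0 / (best + 1e-12)               # reciprocal Euclidean distance
        if similarity >= min_similarity and best < ratio * second:
            matches.append((i, int(order[0])))
    return matches
```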
  • the camera pose with respect to the object is then determined based on these correspondences and can be used in Augmented Reality applications to overlay virtual 3D content registered with the real object. Note, that analogously the position and orientation of the object can be determined with respect to the camera coordinate system.
  • both feature detectors and feature description methods need to be invariant to changes in the viewpoint up to a certain extent. Affine-invariant feature detectors, as described in K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. V. Gool. A comparison of affine region detectors. Int. Journal Computer Vision, 65:43-72, 2005, that estimate an affine transformation to normalize the neighborhood of a feature exist, but they are currently too expensive for real-time applications on mobile devices. Instead, usually only a uniform scale factor and an in-plane rotation are estimated, resulting in true invariance to these two transformations only. The feature description methods then use the determined scale and orientation of a feature to normalize the support region before computing the descriptor. Invariance to out-of-plane rotations, however, is usually fairly limited and is the responsibility of the description method itself.
  • the 3D normal vector of a feature can be determined to create a viewpoint-invariant patch, as described in C. Wu, B. Clipp, X. Li, J.-M. Frahm, and M. Pollefeys. 3d model matching with viewpoint-invariant patches (VIP).
  • rendering techniques can be employed to create a multitude of synthetic views, i.e. images, of a feature.
  • synthetic views are used to create different descriptors for different viewpoints and/or rotations to support larger variations, as described in S. Taylor, E. Rosten, and T. Drummond. Robust feature matching in 2.3 ms.
  • Feature classifiers also aim to identify for a given image feature the corresponding reference feature in a database (or second image). This can be formulated as a classification problem, where every reference feature is a class, and the classifier determines the class with the highest probability for a given current feature.
  • An offline training phase is required, where the classifier is trained with different possible appearances of a feature, usually gained by randomly warped patches. Randomized Trees, as described in V. Lepetit and P. Fua. Keypoint recognition using randomized trees. IEEE Trans. Pattern Anal. Mach. Intell., 28(9):1465-1479, 2006, use these to estimate the probabilities over all classes for every leaf node, while the inner nodes contain binary decisions based on image intensity comparisons. After training, a current feature is classified by adding up the probabilities of the reached leaf nodes and finding the class with the highest probability.
  • classifiers can be provided with warped patches that additionally contain synthetic noise, blur or similar in the training phase.
  • classifiers in general provide a good invariance to the transformations that were synthesized during training.
  • the probabilities that need to be stored for feature classifiers require a lot of memory, which makes them unfeasible for a large amount of features in particular on memory-limited mobile devices.
  • a method of providing a set of feature descriptors configured to be used in matching at least one feature of an object in an image of a camera, comprising the steps of: a) providing at least two images of a first object or of multiple instances of a first object, wherein the multiple instances provide different appearances or different versions of an object, b) extracting in at least two of the images at least one feature from the respective image, c) providing at least one descriptor for an extracted feature, and storing the descriptors for a plurality of extracted features in a first set of descriptors, d) matching a plurality of the descriptors of the first set of descriptors against a plurality of the descriptors of the first set of descriptors, e) computing a score parameter for a plurality of the descriptors based on the result of the matching process, f) selecting among the descriptors at least one descriptor based on its score parameter in comparison with score parameters of other descriptors, g) adding the at least one selected descriptor to a second set of descriptors, h) updating the score parameter of at least one of the descriptors of the first set of descriptors based on the selection process and the result of the matching process, and i) repeating steps f) to h), wherein the second set of descriptors is configured to be used in matching at least one feature of an object in an image of a camera.
  • view of an object means an image of an object which can either be captured using a real camera or synthetically created using an appropriate synthetic view creation method, as explained in more detail later.
  • Our method in general creates a first set of descriptors and then adds descriptors from the first set of descriptors to a second set of descriptors. It is known to the expert that this can be implemented in many different ways and does not necessarily mean that a descriptor is physically copied from a certain position in memory in the first set to a different location in memory in the second set of descriptors. Instead, the second set can for example be implemented by marking descriptors in the first set to be part of the second set, e.g. by modifying a designated parameter of the descriptor. Another possible implementation would be to store memory addresses, pointers, references, or indices of the descriptors belonging to the second set of descriptors without modifying the descriptor in memory at all.
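A minimal sketch of the index-based variant mentioned above, in which the second set of descriptors only stores indices into the first set rather than copies of the descriptor data (array shapes and names are illustrative assumptions):

```python
import numpy as np

# First set of descriptors: one row per descriptor (hypothetical 64-D vectors).
first_set = np.random.rand(1000, 64).astype(np.float32)

# The second set need not copy any descriptor data; it can simply be a list
# of indices (or pointers/references) into the first set.
second_set_indices = []

def add_to_second_set(idx):
    if idx not in second_set_indices:
        second_set_indices.append(idx)

add_to_second_set(17)                       # "select" descriptor 17
selected = first_set[second_set_indices]    # materialise the second set only when needed
```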
  • a method to automatically determine a set of feature descriptors that describes an object such that it can be matched and/or localized under a variety of conditions is proposed. These conditions may include changes in viewpoint, illumination, and camera parameters such as focal length, focus, exposure time, signal-to-noise-ratio, etc.
  • in a set of, e.g. synthetically, generated views of the object, preferably under different conditions, local image features are detected, described and aggregated in a database.
  • the proposed method evaluates matches between these database features to eventually find a reduced, preferably minimal set of most representative descriptors from the database.
  • the matching and/or localization success rate can be significantly increased without adding computational load to the runtime method.
  • steps h) and i) are repeatedly processed until the number of descriptors in the second set of descriptors has reached a particular value or the number of descriptors in the second set of descriptors stops varying.
  • step g) may be preceded by modifying the at least one selected descriptor based on the selection process.
  • the modification of the selected descriptor comprises updating the descriptor as a combination of the selected descriptor and other descriptors in the first set of descriptors.
  • the usage of the result of the matching process in the update step h) is restricted to the result of the matching process of the at least one selected descriptor, or the result of the matching process of the descriptors that match with the at least one selected descriptor.
  • a method of providing at least two sets of feature descriptors configured to be used in matching at least one feature of an object in an image of a camera comprising the steps of: a) providing at least two images of a first object or of multiple instances of a first object, wherein the multiple instances provide different appearances or different versions of an object, wherein each of the images is generated by a respective camera having a known orientation with respect to gravity when generating the respective image, b) extracting in at least two of the images at least one feature from the respective image, c) providing at least one descriptor for an extracted feature, and storing the descriptors for a plurality of extracted features in multiple sets of descriptors with at least a first set of descriptors and a second set of descriptors, wherein the first set of descriptors contains descriptors of features which were extracted from images corresponding to a first orientation zone with respect to gravity of the respective camera, and the second set of descriptors contains descriptors of features which were extracted from images corresponding to a second orientation zone with respect to gravity of the respective camera.
  • the presented approach aims at benefiting from multiple, e.g. synthetic, views of an object without increasing the memory consumption.
  • the method (which may be implemented as so-called offline method which does not need to run when running the application) therefore first creates a larger database of descriptors from a variety of views, i.e. images of the object, and then determines a preferably most representative subset of those descriptors which enables matching and/or localization of the object under a variety of conditions.
  • steps h) and i) are repeatedly processed until the number of descriptors in the third and/or fourth set of descriptors has reached a particular value or the number of descriptors in the third and/or fourth set of descriptors stops varying.
  • step g) is preceded by modifying the at least one selected descriptor based on the selection process.
  • the modification of the selected descriptor comprises updating the descriptor as a combination of the selected descriptor and other descriptors in the first or second set of descriptors.
  • steps h) and i) are processed iteratively multiple times until the number of descriptors stored in the second, third and/or fourth set of descriptors has reached a particular value.
  • step d) includes determining for each of the descriptors which were matched whether they were correctly or incorrectly matched, and step e) includes computing the score parameter dependent on whether the descriptors were correctly or incorrectly matched.
  • the score parameter is indicative of the number of matches the respective descriptor has been correctly matched with any other of the descriptors. Then, in step f) at least one descriptor with a score parameter indicative of the highest number of matches within the first set of descriptors is selected, and step h) reduces the score parameter of the at least one selected descriptor and the score parameter of the descriptors that match with the at least one selected descriptor.
  • a method of matching at least one feature of an object in an image of a camera comprising providing at least one image with an object captured by a camera, extracting current features from the at least one image and providing a set of current feature descriptors with at least one current feature descriptor provided for an extracted feature, providing a second set of descriptors according to the method as described above, and comparing the set of current feature descriptors with the second set of descriptors for matching at least one feature of the object in the at least one image.
  • a method of matching at least one feature of an object in an image of a camera comprising providing at least one image with an object captured by a camera, extracting current features from the at least one image and providing a set of current feature descriptors with at least one current feature descriptor provided for an extracted feature, providing a third and a fourth set of descriptors according to the method as described above, and comparing the set of current feature descriptors with the third and/or fourth set of descriptors for matching at least one feature of the object in the at least one image.
  • the method may further include determining a position and orientation of the camera which captures the at least one image with respect to the object based on correspondences of feature descriptors determined in the matching process.
  • the method may be part of a tracking method for tracking a position and orientation of the camera with respect to an object of a real environment.
  • the method of providing a set of feature descriptors is applied in connection with an augmented reality application and, accordingly, is a method of providing a set of feature descriptors configured to be used in localizing an object in an image of a camera in an augmented reality application.
  • the method of matching at least one feature of an object in an image of a camera is applied in an augmented reality application and, accordingly, is a method of localizing an object in an image of a camera in an augmented reality application.
  • step a) of the above method includes providing the different images of the first object under different conditions which includes changes from one of the images to another one of the images in at least one of the following: viewpoint, illumination, camera parameters such as focal length, focus, exposure time, signal-to-noise-ratio.
  • step a) may include providing the multiple images of the first object by using a synthetic view creation algorithm creating the multiple images by respective virtual cameras as respective synthetic views.
  • one or more of the multiple images may be generated by a real camera.
  • the synthetic view creation algorithm includes a spatial transformation which projects a 3D model onto the image plane of a respective synthetic view, and a rendering method is applied which is capable of simulating properties of a real camera, particularly such as defocus, motion blur, noise, exposure time, brightness, contrast, and of also simulating different environments, particularly such as by using virtual light sources, shadows, reflections, lens flares, blooming, environment mapping.
  • step c) includes storing the descriptor for an extracted feature together with an index of the image from which the feature has been extracted.
  • the above described methods are performed on a computer system which may have any desired configuration.
  • the methods using such reduced set of descriptors are capable of being applied on mobile devices, such as mobile phones, which have only limited memory capacities.
  • a computer program product adapted to be loaded into the internal memory of a digital computer system, and comprising software code sections by means of which the steps of a method as described above are performed when said product is running on said computer system.
  • FIG. 1 shows a feature description method according to an embodiment
  • FIG. 2 shows a feature description method according to an embodiment of the invention, in particular with respect to multiple views of a planar object
  • FIG. 3 shows a feature description method according to an embodiment of the invention, in particular with respect to multiple views of a general 3D object
  • FIG. 4 shows different exemplary embodiments of a synthetic view creation method
  • FIG. 5 shows a descriptor subset identification method according to an embodiment of the invention
  • FIG. 6 shows an aspect of a feature description method according to an embodiment, particularly a so-called globally gravity-aware method
  • FIG. 7 shows a feature description method according to an embodiment, particularly in connection with a globally gravity-aware method as shown in FIG. 6 ,
  • FIG. 8 shows an aspect of a feature description method according to an embodiment, particularly in connection with a globally gravity-aware method as shown in FIGS. 6 and 7 ,
  • FIG. 9 shows an aspect of a feature description method according to an embodiment, particularly in connection with a so-called locally gravity-aware method
  • FIG. 10 shows a feature description method according to an embodiment, particularly in connection with a locally gravity-aware method as shown in FIG. 9 .
  • many applications in the field of computer vision require localizing one or more features of an object in an image of a camera, e.g. for object recognition or for determining a position and orientation of the camera.
  • Such applications usually include finding corresponding points or other features in two or more images of the same scene or object under varying viewpoints, possibly with changes in illumination and capturing hardware used.
  • the features can be points, or a set of points (lines, segments, regions in the image or simply a group of pixels, a patch, or any set of pixels in an image).
  • Example applications include narrow and wide-baseline stereo matching, camera pose estimation, image retrieval, object recognition, and visual search.
  • Augmented Reality Systems permit the superposition of computer-generated virtual information with visual impressions of a real environment.
  • the visual impressions of the real world, for example captured by a camera in one or more images, are mixed with virtual information, e.g. by means of a display device which displays the respective image augmented with the virtual information to a user.
  • Spatial registration of virtual information and the real world requires the computation of the camera pose (position and orientation) that is usually based on feature correspondences.
  • one or more views of an object are used as reference images. Given these views, which are images of the object, local features may be detected and then described. Such views may be generated in an offline step by a virtual camera (generating so-called synthetic views, as set out in more detail below) or by a real camera. According to an aspect of the invention, there are provided at least two views of a first object or of multiple instances of a first object.
  • the first object may be a 1 dollar bill.
  • This 1 dollar bill may be viewed by a camera from different perspectives and respective views captured by a virtual or real camera may be generated. Accordingly, in this way multiple views of the 1 dollar bill are provided.
  • the 1 dollar bill may be captured under various different conditions, such as different light conditions or other different environmental conditions, and/or may be warped in a certain way by a warping function, thus resulting in images with different appearances of the 1 dollar bill.
  • different appearances of the object may be viewed from different perspectives.
  • different versions of the 1 dollar bill may be captured in different images. For example, multiple 1 dollar bills with different wrinkles, stains, drawings, etc. may be captured in the different images. These images accordingly depict different versions of an object, in the present case of a 1 dollar bill. Again, such different versions may also be viewed from different perspectives.
  • with the first object or multiple instances of the first object, at least part of it, such as its 3D dimensions, being known to the system, local features in another image showing the first object or a second object which corresponds somehow to the first object may be detected and then described.
  • features are detected and then described resulting in a set of reference feature descriptors with known 3D positions resulting from the known 3D properties of the reference object.
  • a similarity measure, such as the reciprocal of the Euclidean distance of the descriptors, can be used to determine the similarity of two features. Matching the current feature descriptors with the set of reference feature descriptors results in 2D-3D correspondences between the current camera image and the reference object (in the above example, the first object such as the 1 dollar bill).
  • the camera pose with respect to the real object in the live camera image is then determined based on these correspondences and can be used in Augmented Reality applications to overlay virtual 3D content registered with the real object.
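One common way to compute the camera pose from such 2D-3D correspondences is a RANSAC-based PnP solver; the sketch below uses OpenCV for this purpose with placeholder correspondences and assumed intrinsics (the patent itself does not prescribe a particular pose estimation algorithm).

```python
import numpy as np
import cv2

# 2D-3D correspondences from descriptor matching (placeholder data).
object_points = np.random.rand(20, 3).astype(np.float32)         # 3D points on the reference object
image_points = (np.random.rand(20, 2) * 640).astype(np.float32)  # matched 2D locations in the camera image

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                                   # assumed pinhole intrinsics
dist_coeffs = np.zeros(5)

# Robust pose estimation; RANSAC discards outlier correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # camera orientation with respect to the object
    # [R | tvec] can now be used to overlay virtual 3D content registered with the real object.
```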
  • the position and orientation of the object can be determined with respect to the camera coordinate system.
  • FIGS. 1 to 5 In the following, embodiments and aspects of the invention will be described in more detail with reference first to FIGS. 1 to 5 .
  • FIG. 1 shows a feature description method according to an embodiment. Particularly, it shows a high-level flowchart diagram of a feature description method, as already referred to above.
  • a digital image I 1 acts as an input to a description method DM which outputs a set of feature descriptors D 1 for the image I 1 .
  • the image I 1 may be a view generated by a synthetic camera, i.e. a synthetic view depicting a virtual object, or may be a view captured by a real camera which depicts a real object.
  • the description method DM extracts in the image or view I 1 at least one feature from the image or view, provides a descriptor for an extracted feature, and stores the descriptors for a plurality of extracted features in the set of descriptors D 1 .
  • the aim is to create a descriptor for each extracted feature that enables the comparison and therefore matching of features.
  • requirements for a good descriptor are distinctiveness, i.e. different feature points result in different descriptors, invariance to changes in viewing direction, rotation and scale, changes in illumination, and/or image noise.
  • FIG. 2 shows a feature description method according to an embodiment of the invention, in particular with respect to multiple views of a planar object. Particularly, FIG. 2 depicts an embodiment of the method according to the invention in a high-level flowchart diagram for a planar object. Details thereof will be more evident when viewed in connection with the flow diagram of FIG. 5 .
  • a method to automatically determine a set of feature descriptors for a given object such that it can be matched and/or localized in a camera image under a variety of conditions. These conditions may include changes in viewpoint, illumination, and camera parameters such as focal length, focus, exposure time, signal-to-noise-ratio, etc.
  • the method aims at finding a relatively small set of descriptors, as the computational time needed for descriptor matching increases with the number of reference descriptors.
  • the method may use a model allowing for the creation of synthetic views, e.g. a textured triangle mesh or a point cloud with associated intensity information.
  • for a planar object, a fronto-parallel image of the object is fully sufficient, and synthetic views, i.e. images as captured by virtual cameras, can be created using image warping.
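For the planar case, such a warp can be expressed as a homography induced by the pose of a virtual camera relative to the object plane. The sketch below is one possible implementation under simplifying assumptions (plane z = 0, a fixed metric scale per reference pixel, known intrinsics K and pose R, t of the virtual camera).

```python
import numpy as np
import cv2

def synthetic_view(frontal_image, K, R, t, metric_per_pixel=0.001):
    """Warp a fronto-parallel image of a planar object into a synthetic view.

    The object plane is assumed to be z = 0 in object coordinates, and each
    reference pixel is mapped to metric plane coordinates via metric_per_pixel.
    (R, t) is the pose of the virtual camera, K its intrinsic matrix.
    """
    h, w = frontal_image.shape[:2]
    # Reference pixel (u, v, 1) -> point on the plane (u*s, v*s, 1).
    S = np.diag([metric_per_pixel, metric_per_pixel, 1.0])
    # Projection of plane points: x ~ K [r1 r2 t] (X, Y, 1)^T, hence the homography:
    H = K @ np.column_stack((R[:, 0], R[:, 1], t)) @ S
    return cv2.warpPerspective(frontal_image, H, (w, h))
```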
  • the method starts with providing at least two images of a first object or of multiple instances of a first object, wherein the multiple instances may provide different appearances or different versions of the first object, as described in more detail above.
  • a model of a first object O 2 is provided, which in this case is represented in a digital view or image I 2 (the terms view and image are used interchangeably herein).
  • a multitude of synthetic views V 21 , V 22 , V 23 , V 24 of the first object O 2 is created.
  • in at least two of the views V 21 -V 24 , at least one feature from the respective view is extracted by a description method providing a descriptor for an extracted feature.
  • the descriptors for a plurality of extracted features are stored in a first set of descriptors D 2 .
  • each view or image is fed into a description method DM resulting in a plurality of subsets of feature descriptors which are aggregated in the first set of descriptors D 2 .
  • each descriptor d 1 -dn is represented by a descriptor vector having multiple parameters which describe the respective extracted feature.
  • the method then proceeds with matching a plurality of the descriptors d 1 -dn of the first set of descriptors D 2 against a plurality of the descriptors d 1 -dn of the first set of descriptors D 2 in a matching process performed in a descriptor subset identification method M 2 .
  • each correct match of descriptors d is marked with a “1” in the matrix as shown.
  • the score parameter may be any kind of parameter which is indicative of the number of matches the respective descriptor has been correctly matched with any other of the descriptors. Other possibilities of defining a score parameter instead of number of matches may be the smallest distance to a descriptor over all descriptors or the average similarity over all matched descriptors.
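A small sketch of one way to realise this bookkeeping: a binary match matrix whose row sums give the score parameter as the number of correct matches per descriptor (the thresholds and the correctness criterion via 3D positions are illustrative assumptions).

```python
import numpy as np

def match_matrix(descriptors, positions_3d, max_desc_dist=0.6, pos_tol=0.01):
    """Binary matrix m[i, j] = 1 if descriptor i is correctly matched to descriptor j.

    A match is counted as correct when the descriptors are close in descriptor
    space AND the 3D positions of the underlying features agree within pos_tol.
    """
    n = len(descriptors)
    m = np.zeros((n, n), dtype=np.uint8)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if (np.linalg.norm(descriptors[i] - descriptors[j]) < max_desc_dist
                    and np.linalg.norm(positions_3d[i] - positions_3d[j]) < pos_tol):
                m[i, j] = 1
    return m

# Score parameter: number of correct matches per descriptor (row sum).
# scores = match_matrix(D, P).sum(axis=1)
```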
  • in a next step, among the descriptors at least one descriptor is selected based on its score parameter in comparison with score parameters of other descriptors.
  • the selected descriptor is then stored in a second set of descriptors D 2 ′.
  • descriptor d 17 which has been identified as the descriptor with the highest score parameter s is selected and stored in the second set of descriptors D 2 ′.
  • the highest score parameter is indicative of a high significance of the descriptor d 17 . Accordingly, in other embodiments where the score parameter is determined differently, a descriptor with a score parameter should be selected which is indicative of a higher significance of the respective descriptor compared to other descriptors.
  • the score parameter s of the selected descriptor (i.e. of descriptor d 17 in the present example) is modified in the first set of descriptors D 2 .
  • the score parameter s for descriptor d 17 may be decreased to 3, 2, 1 or 0 (thus, reducing its significance for a following selection step).
  • the selected descriptor (such as d 17 ) may be designated in the first set of descriptors D 2 such that the selected descriptor is disregarded for selection in a following selection step.
  • the selected descriptor (such as d 17 ) may be marked irrelevant or marked to be removed from the database so that it is disregarded for selection in a following selection step.
  • the steps of selecting a descriptor and modifying the score parameter or designating the selected descriptor, as described above, are processed repeatedly multiple times, thereby storing in the second set of descriptors D 2 ′ a number of selected descriptors d which is lower than the number of descriptors d stored in the first set of descriptors D 2 . Accordingly, the proposed method determines a set of descriptors D 2 ′ out of D 2 which provides the most matches between different descriptors d in D 2 , i.e. the most significant descriptors of D 2 , and therefore is expected to be representative for describing the object O 2 under varying viewpoints and conditions.
  • such second set of descriptors D 2 ′ may be used in matching and/or localizing at least one feature of the object O 2 or of a second object, preferably similar to object O 2 , in another image of a camera.
  • FIG. 3 shows a feature description method according to a similar embodiment, but in particular with respect to multiple views of a general 3D object.
  • FIG. 3 illustrates the same method as shown in FIG. 2 , but for a general 3D object O 3 instead of a planar object.
  • the synthetic views V 31 , V 32 , V 33 , V 34 are in this case created by rendering the digital 3D model O 3 under a variety of conditions.
  • the descriptors from all views are collected in a first set of descriptors D 3 , matched in the descriptor subset identification method M 3 which iteratively determines the best descriptors and collects them in a second set of descriptors D 3 ′.
  • FIG. 4 shows different exemplary embodiments of a synthetic view creation method.
  • FIG. 4 illustrates some examples for the method to create synthetic views of an object based on a model of the object.
  • the figure uses planar objects, but all examples apply analogously also for general 3D objects.
  • the synthetic views are created for an object O 41 by means of spatial transformations only resulting in the views V 41 , V 42 , V 43 , V 44 showing the object O 41 from different perspectives.
  • a digital image of the object O 42 only undergoes non-spatial transformations resulting in the synthetic views V 45 , V 46 , V 47 , V 48 .
  • both spatial and non-spatial transformations are used to create the synthetic views V 49 , V 410 , V 411 , V 412 for the object O 43 , again resulting in different appearances of the object O 43 , but in addition with different perspectives.
  • any combination of the three cases can be used, i.e. some synthetic views use spatial transformations only, others use non-spatial transformations only, and some use a combination of both.
  • FIG. 5 shows a more detailed flow diagram of a method according to an embodiment of the invention, the principles of which have been described above in connection with FIG. 2 .
  • FIG. 5 shows an iterative descriptor subset identification algorithm that determines a final set of descriptors D′ given an initial set of descriptors D.
  • the set of descriptors D corresponds to the set of descriptors D 2
  • the set of descriptors D′ corresponds to the set of descriptors D 2 ′ as described with reference to FIG. 2 .
  • the method starts with providing multiple views of a first object or of multiple instances of a first object, wherein the multiple instances provide different appearances or different versions of an object, extracting in the views at least one feature from the respective view, providing a respective descriptor for an extracted feature, and storing the descriptors for a plurality of extracted features in the first set of descriptors D. These steps are not shown in FIG. 5 .
  • the descriptors of D are matched against each subset of descriptors in D resulting from one synthetic view.
  • a plurality of the descriptors of the first set of descriptors D is matched against a plurality of the descriptors of the first set of descriptors D.
  • all of the descriptors of the first set of descriptors D are matched against all of the descriptors of the first set of descriptors D.
  • the method selects the best descriptor d from the correct matches M in step S 52 , in the present embodiment the descriptor d with the highest score parameter s; this descriptor d is then added to the second set of descriptors D′ in step S 53 .
  • the descriptor d with the highest score parameter s is designated “imax” (having the highest number of matches).
  • step S 54 determines if D′ contains fewer descriptors than the desired amount f of descriptors.
  • if yes, step S 55 updates the score parameter s of matches involving the previously selected descriptor d in M and then proceeds with the selection of the next best descriptor from M in step S 52 . Otherwise, i.e. if the desired amount f of descriptors in D′ is reached, D′ is output in step S 56 as the final feature descriptor set.
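A compact sketch of this iterative loop (steps S 52 to S 55), operating on a precomputed binary match matrix as above; the greedy policy and the way invalidated matches lower the scores follow the description, while the data structures and names are assumptions of this sketch.

```python
import numpy as np

def identify_subset(match_mat, f):
    """Greedy descriptor subset identification.

    match_mat[i, j] == 1 marks a correct match between descriptors i and j;
    the score of a descriptor is its current number of matches.  In every
    iteration the descriptor with the highest score is added to the final
    set D' and the matches involving it are removed, which lowers the scores
    of the descriptors it matched with.
    """
    m = match_mat.copy().astype(np.int32)
    selected = []                            # indices forming the second set D'
    while len(selected) < f:
        scores = m.sum(axis=1)
        if selected:
            scores[selected] = -1            # already selected descriptors are disregarded
        imax = int(np.argmax(scores))
        if scores[imax] <= 0:                # no informative descriptors left
            break
        selected.append(imax)
        m[imax, :] = 0                       # invalidate matches of the selected descriptor ...
        m[:, imax] = 0                       # ... and matches pointing to it
    return selected
```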
  • This outputted second set of descriptors D′ is configured to be used in matching and/or localizing at least one feature of the first object or of a second object in an image of a camera, for example in a live camera image of an augmented reality application.
  • a corresponding method of matching at least one feature of an object in an image of a camera comprises providing at least one image (for example, a live camera image of an augmented reality application) with an object captured by a camera, extracting current features from the at least one image and providing a set of current feature descriptors with at least one current feature descriptor provided for an extracted feature.
  • the set of current feature descriptors is then matched with the second set of descriptors D′ for matching and/or localizing at least one feature of the object in the at least one image, e.g. live camera image.
  • the proposed method of providing a set of feature descriptors comprises a synthetic view creation algorithm which is composed of two parts.
  • First a spatial transformation projects the 3D model of an object to be rendered onto the image plane of a synthetic view.
  • This transformation can be any kind of transformation including rigid body transformations, parallel projection, perspective projection, non-linear transformations and any combination of those. It is meant to simulate properties of a virtual camera such as its position, orientation, focal length, resolution, skew and radial distortions (e.g. barrel distortion, pincushion distortion).
  • a rendering method is applied to simulate properties of a real camera such as defocus, motion blur, noise, exposure time, brightness, contrast, and also simulating different environments using virtual light sources, shadows, reflections, lens flares, blooming, environment mapping, etc., resulting in a respective synthetic view, which is a digital image.
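A few of the listed camera properties can be approximated with simple image-space operations, as in the sketch below (defocus blur, additive sensor noise and a brightness/contrast change); effects such as lens flares, shadows or environment mapping would require a full rendering pipeline and are not attempted here. Parameter values are arbitrary assumptions.

```python
import numpy as np
import cv2

def simulate_camera(view, blur_sigma=1.5, noise_std=4.0, gain=1.1, bias=-10.0):
    """Apply simple non-spatial transformations to a synthetic view (uint8 image)."""
    img = view.astype(np.float32)
    img = cv2.GaussianBlur(img, (0, 0), blur_sigma)        # defocus / mild blur
    img = gain * img + bias                                 # contrast (gain) and brightness (bias)
    img += np.random.normal(0.0, noise_std, img.shape)      # additive sensor noise
    return np.clip(img, 0, 255).astype(np.uint8)
```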
  • a set of synthetic views of the object is created (irrespective of whether it is planar or not).
  • image features are detected and described using a feature description method (DM) and all descriptors are aggregated together with the indices of the view they originate from in a database set of descriptors with view indices.
  • the 3D position of the feature on the model that it corresponds to is determined and saved with the descriptor.
  • this descriptor database set enables a very good localization of the object in another view, e.g. in a live camera image, under conditions similar to those that were used to create the synthetic views.
  • the method according to the invention is looking for a subset of these descriptors that provides a sufficient amount of descriptor matches among the synthetic views. The assumption is that this subset will also allow for matching and/or localization of the object in a camera image under a variety of conditions, but has only a reduced number of descriptors.
  • the method first matches every descriptor in the initial set of descriptors against all subsets of descriptors from every synthetic view. Note that the matching procedure does not necessarily find a match for every descriptor as it may for instance require a minimal similarity between two descriptors or the most similar descriptor needs to be significantly closer than the second closest descriptor. After having matched all descriptors in the database, all wrong matches are discarded, e.g. where the 3D position of the corresponding features on the model differs by more than a threshold. For all remaining (correct) matches, the feature positions can be optionally updated as the average over all matched features, which results in a more precise position.
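The optional position refinement mentioned at the end of the paragraph above could look as follows; it simply averages each feature's 3D position with the positions of the features it was correctly matched with (the array layout is an assumption of this sketch).

```python
import numpy as np

def refine_positions(positions_3d, match_mat):
    """Replace each feature's 3D position by the average over itself and all
    features it was correctly matched with (rows of an (n, 3) array)."""
    refined = positions_3d.copy()
    for i in range(len(positions_3d)):
        partners = np.flatnonzero(match_mat[i])
        group = np.vstack((positions_3d[i:i + 1], positions_3d[partners]))
        refined[i] = group.mean(axis=0)
    return refined
```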
  • the iterative descriptor subset identification method then first determines the descriptor with the highest score parameter within the database descriptor set, as described above. Thereby the score parameter corresponds to how “good” a descriptor is. This can be defined in different ways, e.g. as the number of matches for a descriptor or as the sum over the similarities with all other descriptors.
  • the best descriptor (d), with the highest score parameter, is then added to the final set of descriptors (D′).
  • the process of adding the best descriptor to the final set of descriptors can be preceded by modifying this descriptor based on the selection process.
  • the descriptor can be modified such that it corresponds to the weighted average over itself and all descriptors it matches with.
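A possible realisation of this weighted-average update (the weighting scheme is an assumption; the description only requires the selected descriptor to be combined with the descriptors it matches with):

```python
import numpy as np

def update_selected_descriptor(descriptors, idx, match_mat, self_weight=0.5):
    """Return the selected descriptor blended with the mean of its matching partners."""
    partners = np.flatnonzero(match_mat[idx])
    if len(partners) == 0:
        return descriptors[idx]
    partner_mean = descriptors[partners].mean(axis=0)
    return self_weight * descriptors[idx] + (1.0 - self_weight) * partner_mean
```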
  • if adding the descriptor to a second set and updating the score parameters is repeatedly processed, the additional update of the selected descriptor as described above is performed in every iteration.
  • the method afterwards updates the score parameters not only of the selected (best) descriptor d, but also of other descriptors that the descriptor d matches with, that match with descriptor d and/or that match with descriptors that descriptor d matches with according to the selection process.
  • This is shown in FIGS. 2 and 3 as an example for descriptor d 17 :
  • the row Rm with matches of descriptor d 17 as well as the columns Cm with matches of descriptors that descriptor d 17 matches with are modified in step S 55 (as described with reference to FIG. 5 ).
  • this update of the score parameter according to any preceding selection process and to the result of the matching process is implemented accordingly. If the score parameter for example corresponds to the smallest distance to a descriptor over all descriptors or the average similarity over all matched descriptors, then the update would modify the score parameter such that the modified value is indicative of the selected descriptor(s), and possibly the descriptors it matches with, being more distant from the rest of the descriptors in the set.
  • the score parameters of these descriptors are modified before starting the next iteration or recursion loop. This reduces their significance for following selection steps. For example, the score parameters are chosen such that they are indicative of the number of matches within the first set of descriptors. Accordingly, the score parameter of the selected descriptor is modified so that the modified score parameter is indicative of a reduced number of matches. In the present embodiment, the score parameter is increased with increasing number of matches and is decreased when modified.
  • D′ can be used in the same way as regular feature descriptors (e.g. of set D) would be used, e.g. for matching, camera localization, object localization, or structure from motion.
  • FIGS. 6 to 10 Another aspect of the invention is described with reference to FIGS. 6 to 10 .
  • Basic principles of this aspect correspond to aspects as described with reference to FIGS. 1 to 5 , so that any specifics referring thereto will not be explained in much detail again.
  • FIG. 6 shows an aspect of a feature description method according to an embodiment of this aspect, particularly a so-called globally gravity-aware method, in which it is proposed to create multiple representative feature descriptor sets for different camera orientation zones with respect to gravity, as explained in more detail below.
  • FIG. 6 shows for a planar object O 61 multiple virtual cameras, such as virtual cameras C 61 , C 62 , C 63 located on a hemisphere centered around the object O 61 .
  • the cameras C 61 , C 62 , C 63 are located in a way that they capture the object O 61 from different views, resulting in the respective views V 61 , V 62 , V 63 . That is, camera C 61 captures the object O 61 and generates view V 61 , and so on.
  • the aperture angle of the camera is depicted by a respective pyramid.
  • FIG. 6 illustrates a possible layout of camera centers of virtual cameras (shown by a respective circle) for creating the synthetic views V 64 , V 65 , V 66 , etc.
  • these views are sorted into so-called view bins according to the orientation of the respective camera with respect to gravity, for example according to the angle between the respective virtual camera's principal axis and the gravity vector g.
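The sorting into view bins can be sketched as follows: the gravity vector is expressed in the camera coordinate system, its angle to the principal axis (taken as +z) is computed, and the angle is quantised into one of several zones. The number and width of the bins are assumptions of this sketch.

```python
import numpy as np

def view_bin(gravity_cam, n_bins=6):
    """Map a view to an orientation zone ("view bin") from the gravity vector
    expressed in the camera coordinate system; the principal axis is +z."""
    principal_axis = np.array([0.0, 0.0, 1.0])
    g = gravity_cam / np.linalg.norm(gravity_cam)
    angle = np.degrees(np.arccos(np.clip(np.dot(g, principal_axis), -1.0, 1.0)))
    return min(int(angle / (180.0 / n_bins)), n_bins - 1)
```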
  • the terms "view bin" and "orientation zone" have the same meaning and are therefore used interchangeably hereafter.
  • the different view bins VB 61 , VB 62 , VB 63 , VB 64 , etc. are illustrated using filled and outlined circles.
  • the view bin VB 61 comprises the views V 64 , V 65 , V 66 , V 67 , V 68 and V 69 which are views captured by cameras which were oriented in a common orientation zone with respect to gravity.
  • the so-called gravity-aware method aims at creating a set of feature descriptors that describes an object best under a certain range of viewpoints.
  • this range would most likely cover viewpoints from all directions for a general 3D object and only those showing the front-face for a planar object.
  • it would comprise those viewpoints of an object that the application should be able to deal with.
  • the globally gravity-aware method then only uses the reference descriptor set of the multiple reference descriptor sets that corresponds to the current measured camera orientation angle of the currently used real camera.
  • the same overall amount of reference descriptors to match against can contain many more descriptors representing the object in an orientation similar to that of the real camera.
  • different synthetic views of a first object are created. These views may then be sorted into bins based on the orientation of the respective virtual camera with respect to gravity, for example based on the angle between the principal axis of the virtual camera that corresponds to the view and the known gravity vector transformed into the camera coordinate system.
  • the method creates feature descriptors for all synthetic views. The stage matching the descriptors in the database against each other is then carried out for every view bin individually. All descriptors belonging to the views in a particular bin are either matched against themselves only or against all descriptors from all view bins.
  • the iterative or recursive descriptor subset identification is then carried out for every view bin individually, i.e. the descriptor with the highest score parameter may be determined within a particular bin and is added to the final set of descriptors for this bin, containing the feature descriptors from views with a similar camera orientation with respect to gravity (i.e. with a camera orientation belonging to the same orientation zone). Finally, there is provided a set of representative feature descriptors for every view bin.
  • for a real camera image, e.g. in a method of matching at least one feature of an object in an image of a camera, the proposed gravity-aware method first measures or loads the gravity vector in the camera coordinate system.
  • the gravity vector is provided from a gravity sensor (e.g. accelerometer) associated with the camera which captures the image. This may then be used to compute an orientation angle between the gravity vector and the principal axis of the real camera.
  • the method finally determines the view bin where the average over all orientation angles of the synthetic cameras is closest to the orientation angle of the current real camera and only uses the reference descriptors of the set resulting from that view bin.
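At runtime, the selection of the reference descriptor set could then be as simple as the following sketch (representing each bin by its mean orientation angle is an assumption of this sketch):

```python
import numpy as np

def select_reference_set(camera_angle, bin_mean_angles, reference_sets):
    """Return the reduced descriptor set whose view bin has the mean orientation
    angle (degrees) closest to the current camera's orientation angle."""
    diffs = np.abs(np.asarray(bin_mean_angles) - camera_angle)
    return reference_sets[int(np.argmin(diffs))]
```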
  • the set of reference features to be used might change in every frame (image) based on the current camera orientation, i.e. if the camera orientation changes from one frame to the next frame.
  • FIG. 7 shows a feature description method according to an embodiment, particularly in connection with a globally gravity-aware method as shown in FIG. 6 .
  • a high-level flowchart diagram explains an embodiment of the above described globally gravity-aware method in more detail.
  • FIG. 7 describes a method of providing a set of feature descriptors configured to be used in matching at least one feature of an object in an image of a camera.
  • the method starts with providing multiple views of a first object O 7 or of multiple instances of a first object O 7 , wherein the multiple instances provide different appearances or different versions of an object.
  • Each of the views V 70 -V 79 is generated by a respective camera (such as C 61 -C 63 shown in FIG. 6 ) having a known orientation with respect to gravity (e.g., indicated by a gravity vector g) when generating the respective view.
  • an appropriate model of an object O 7 is used to create synthetic views V 70 -V 79 under different conditions.
  • the views V 70 -V 79 may be sorted to view bins based on their orientation with respect to gravity.
  • the view bin VB 71 comprises the views V 70 , V 71 , V 72
  • the view bin VB 72 contains the views V 73 , V 74 , V 75 , V 76 , and the views V 77 , V 78 and V 79 fall into the bin VB 73 .
  • this method then proceeds as in the proposed method shown in FIG. 2 .
  • At least one feature is extracted from the respective view, and a descriptor for an extracted feature is provided.
  • the descriptors for a plurality of extracted features are stored in multiple sets of descriptors D 71 -D 73 with at least a first set of descriptors (such as D 71 ) and a second set of descriptors (such as D 72 ).
  • the first set of descriptors D 71 contains descriptors of features which were extracted from views V 70 -V 72 corresponding to a first orientation zone with respect to gravity of the respective camera
  • the second set of descriptors D 72 contains descriptors of features which were extracted from views V 73 -V 76 corresponding to a second orientation zone with respect to gravity of the respective camera.
  • This step may also include storing the descriptors in three or more sets of descriptors corresponding to three or more orientation zones with respect to gravity of the respective camera, as shown in FIG. 7 for three orientation zones.
  • a plurality of the descriptors d of the first set of descriptors D 71 is matched against a plurality of the descriptors d of the first set of descriptors D 71
  • a plurality of the descriptors d of the second set of descriptors D 72 is matched against a plurality of the descriptors d of the second set of descriptors D 72 .
  • This matching may be performed in respective descriptor subset identification methods M 71 -M 73 , comparable to descriptor subset identification method M 2 described with reference to FIG. 2 .
  • the descriptors of set D 71 from the view bin VB 71 are fed into the descriptor subset identification method M 71 which results in a final set of descriptors D′ 71 for this view bin.
  • the descriptor set D′ 72 is created for view bin VB 72 and descriptor set D′ 73 is based on the descriptors from view bin VB 73 .
  • this step may include matching a plurality of the descriptors of the first set of descriptors D 71 against a plurality of the descriptors of the first set of descriptors D 71 or of the first set of descriptors D 71 and the second set of descriptors D 72 , and matching a plurality of the descriptors of the second set of descriptors D 72 against a plurality of the descriptors of the second set of descriptors D 72 or of the first set of descriptors D 71 and the second set of descriptors D 72 .
  • this may be applied analogously for set of descriptors D 73 , i.e., for example, the descriptors of the first set of descriptors D 71 may be matched against descriptors of D 71 only, or against descriptors of a plurality or all of D 71 to D 73 .
  • a score parameter is assigned to a plurality of the descriptors as a result of the matching process, similar as in the method of FIG. 2 .
  • the selected descriptor is stored in a third set of descriptors D′ 71 .
  • the selected another descriptor is stored in a fourth set of descriptors D′ 72 . If more than two orientation zones are used, this process is analogously performed for descriptor set D 73 resulting in a reduced set of descriptors D′ 73 , and so on.
  • the score parameter of a selected descriptor in the first and/or second set of descriptors D 71 , D 72 is modified, or alternatively a selected descriptor in the first and/or second set of descriptors D 71 , D 72 is designated such that the selected descriptor is disregarded for selection in a following selection step as described above.
  • the steps of selecting and modifying are processed repeatedly multiple times, thereby storing in the third and fourth set of descriptors D′ 71 , D′ 72 each a number of selected descriptors which is lower than the number of descriptors stored in the first set and second set of descriptors D 71 , D 72 , respectively.
  • this step includes storing in three or more sets of descriptors each a number of selected descriptors which is lower than the number of descriptors stored in the respective initial sets of descriptors.
  • the third and fourth set of descriptors D′ 71 , D′ 72 and any further set of descriptors, such as D′ 73 , are configured to be used in matching at least one feature of the first object or of a second object in an image of a camera, for example in a live camera image of an augmented reality application.
  • the method may include calculating an orientation angle between the principal axis and a provided gravity vector of the camera that corresponds to the respective view in order to determine an orientation of the respective camera with respect to gravity. For the calculated orientation angle it is determined whether it corresponds to the first or second orientation zone.
  • the first orientation zone may comprise orientation angles from 60° to 90° and the second orientation zone angles from 30° to 60°.
  • the descriptor of the extracted feature of the respective view (such as V 70 -V 72 ) is stored in the first set of descriptors (such as D 71 ), and if it corresponds to the second orientation zone the descriptor of the extracted feature of the respective view (such as V 73 -V 76 ) is stored in the second set of descriptors (such as D 72 ).
  • the method further includes determining for each descriptor a gravity vector g of the camera which provides the respective view.
  • a method of matching at least one feature of an object in an image of a camera comprising providing at least one image with an object captured by a camera, extracting current features from the at least one image and providing a set of current feature descriptors with at least one current feature descriptor provided for an extracted feature, providing the third and the fourth set of descriptors (such as D′ 71 and D′ 72 of FIG. 7 ), and comparing the set of current feature descriptors with the third and/or fourth set of descriptors for matching at least one feature of the object in the at least one image.
  • FIG. 8 shows an aspect of such a feature description method according to an embodiment, particularly in connection with a globally gravity-aware method as shown in FIGS. 6 and 7 .
  • the method measures or loads the gravity vector g in the camera coordinate system. This vector g is then used to compute an orientation angle between the gravity vector g and the principal axis pa 8 of the camera C 8 .
  • the method comprises providing a gravity vector g of the camera C 8 which captures the at least one image, determining an orientation of the camera C 8 with respect to gravity and associating the determined orientation of the camera C 8 with the first orientation zone or with the second orientation zone.
  • the set of current feature descriptors is then matched with the third set of descriptors (such as D′ 71 in FIG. 7) if the determined orientation of the camera C 8 is associated with the first orientation zone (in the example of FIG. 7, corresponding to view bin VB 71), and the set of current feature descriptors is matched with the fourth set of descriptors (such as D′ 72 in FIG. 7) if the determined orientation of the camera is associated with the second orientation zone (in the example of FIG. 7, corresponding to view bin VB 72); a minimal sketch of this gravity-based set selection follows this list.
  • the gravity vector g is provided from a gravity sensor associated with the camera C 8 .
  • the method determines the view bin where the average over all gravity angles of the synthetic cameras is closest to the orientation angle α c.
  • this bin is VB 85 .
  • the features in the current camera image of the real camera C 8 are then only matched against the descriptors of the reduced descriptor set (corresponding to D′ 71 -D′ 73 of FIG. 7) resulting from the views in the bin VB 85, which consists of the views V 81 , V 82 , V 83 , V 84 , etc., illustrated as black circles.
  • FIGS. 9 and 10 show an aspect of a feature description method according to another embodiment of the invention, particularly a so-called locally gravity-aware method. Similarly to the previous aspect of the globally gravity-aware method (FIGS. 6-8), it is proposed to create multiple representative feature descriptor sets for different orientation zones with respect to gravity.
  • the intrinsic parameters of the camera, i.e. the focal length and the principal point, are either known or can be estimated.
  • given these parameters, for every pixel a 3D ray in the camera coordinate system can be computed that originates from the camera's origin and points towards the 3D point imaged in this pixel.
  • FIG. 9 shows a camera C 10 in an arrangement similar to that of camera C 8 according to FIG. 8.
  • if the intrinsic parameters of a camera C 10 are known or can be estimated, a so-called locally gravity-aware method is proposed, which computes an orientation angle with respect to gravity for multiple feature descriptors individually, as illustrated in FIG. 9.
  • the points P 0 , P 1 , P 2 , P 3 , P 4 , which are located on the object O 10 having a known and static orientation with respect to gravity, are imaged by the camera C 10 as features F 0 , F 1 , F 2 , F 3 , F 4 on the image plane.
  • orientation angles α 0 , α 1 , α 2 , α 3 , α 4 of the individual descriptors for the corresponding features F 0 , F 1 , F 2 , F 3 , F 4 can be computed. They correspond to the angle between the ray from the camera center of C 10 to the respective feature point on the image plane and the gravity vector g in the camera coordinate system (a minimal sketch of this per-feature angle computation follows this list).
  • the orientation angle of a descriptor may be defined as the angle between the gravity vector and the ray pointing from the camera center towards the feature that is described by the descriptor.
  • the proposed locally gravity-aware method first creates multiple views of the object under different conditions, detects and describes features from every view, and collects them in a database set of descriptors. For every descriptor, the corresponding gravity vector or gravity orientation angle is stored with the descriptor. The orientation angle is then used to sort the descriptors into at least two bins, where descriptors with similar orientation angles fall into the same bin. Every such orientation-angle subset is then processed in the same way as the descriptors of a view set in the previous approach described with reference to FIGS. 6-8. The offline algorithm then continues in the same manner as the globally gravity-aware method described in the previous aspect.
  • FIG. 10 shows a feature description method according to an embodiment in connection with a locally gravity-aware method as shown in FIG. 9 .
  • a high-level flowchart diagram explains an embodiment of the above described locally gravity-aware method in more detail.
  • the method starts with providing multiple views of an object O 9 or of multiple instances of the object O 9 .
  • Each of the views V 90 -V 99 is generated by a respective camera (such as C 10 shown in FIG. 9 ) having a known orientation with respect to gravity (e.g., indicated by a gravity vector g) when generating the respective view.
  • a descriptor for an extracted feature is provided.
  • the descriptors for a plurality of extracted features are first stored in a common database D 9 .
  • the descriptors for a plurality of extracted features are then stored in multiple sets of descriptors D 91 -D 93 .
  • an orientation angle (such as α 0 -α 4 ) is calculated for each descriptor, and it is determined whether the calculated orientation angle corresponds to a first or second orientation zone (if the method implements two orientation zones).
  • the first orientation zone may comprise orientation angles from 60° to 90° and the second orientation zone angles from 30° to 60°. If the calculated orientation angle corresponds to the first orientation zone, the respective descriptor is stored in the first set of descriptors (such as D 91 ), and if it corresponds to the second orientation zone, the respective descriptor is stored in the second set of descriptors (such as D 92 ).
  • a plurality of the descriptors d of a first set of descriptors D 91 is matched against a plurality of the descriptors d of the first set of descriptors D 91
  • a plurality of the descriptors d of a second set of descriptors D 92 is matched against a plurality of the descriptors d of the second set of descriptors D 92 .
  • This matching may be performed in respective descriptor subset identification methods M 91 -M 93 , comparable to descriptor subset identification method M 2 described with reference to FIG. 2 .
  • the descriptors of set D 91 are fed into the descriptor subset identification method M 91 which results in a reduced final set of descriptors D′ 91 .
  • the descriptor sets D′ 92 and D′ 93 are created. This step may also include the variations as described with reference to FIG. 7 .
  • a score parameter is assigned to a plurality of the descriptors as a result of the matching process, similarly to the methods of FIG. 2 and FIG. 7.
  • the selected descriptor is stored in a third set of descriptors D′ 91 .
  • the other selected descriptor is stored in a fourth set of descriptors D′ 92 .
  • this process is analogously performed for descriptor set D 93 resulting in a reduced set of descriptors D′ 93 , and so on.
  • the score parameter of a selected descriptor in the first and/or second set of descriptors D 91 , D 92 is modified, or alternatively a selected descriptor in the first and/or second set of descriptors D 91 , D 92 is designated such that the selected descriptor is disregarded for selection in a following selection step as described above with reference to FIG. 7 .
  • a method of matching at least one feature of an object in an image of a camera comprising providing at least one image with an object captured by a camera, extracting current features from the at least one image and providing a set of current feature descriptors with at least one current feature descriptor provided for an extracted feature, providing a third and a fourth set of descriptors as set out above, and comparing the set of current feature descriptors with the third and/or fourth set of descriptors for matching at least one feature of the object in the at least one image.
  • the method of matching at least one feature of an object in the image of the camera measures or loads the gravity vector g in the camera coordinate system.
  • Features are then extracted from the camera image resulting in a set of current feature descriptors.
  • an orientation angle (such as α 0 -α 4 shown in FIG. 9) is computed for every feature descriptor in the current camera image as the angle between the gravity vector g and a ray pointing from the camera center towards that feature. Every feature descriptor from the current camera image is then only matched against the reference set of descriptors that has the closest orientation angle.
  • an orientation angle is calculated and associated with the first orientation zone or with the second orientation zone.
  • At least one of the current feature descriptors is matched with the third set of descriptors (such as D′ 91 ), if the determined orientation angle of that current feature descriptor is associated with the first orientation zone, and at least one of the current feature descriptors is matched with the fourth set of descriptors (such as D′ 92 ), if the determined orientation angle of that current feature descriptor is associated with the second orientation zone.
  • the geometric transformation which is part of the synthetic view creation method can for instance be a projective or affine homography for planar objects.
  • synthetic views are created by means of image warping using bilinear interpolation or nearest-neighbor interpolation (a minimal warping sketch follows this list).
  • a rigid body transformation and the pinhole camera model can be applied as the basis of the geometric transformation.
  • the centers of the virtual cameras can for instance be located at the vertices of an icosphere centered on the object as shown in FIG. 6.
  • the positions of the virtual camera are chosen accordingly, i.e. on that plane.
  • the model of the object can for instance be a textured triangle mesh which can be rendered using rasterization or a point cloud or volume which can be rendered using ray tracing, ray casting or splatting.
  • global illumination rendering methods such as ray tracing or radiosity can be applied.
  • the features that are detected in the synthetic views of an object can be point features, e.g. detected by means of detectors like SIFT, SURF, Harris, FAST, etc.
  • a feature can also be an edge or any other geometrical primitive or set of pixels that can be described.
  • the matching descriptor within a set of descriptors for a given descriptor can for instance be defined as the nearest neighbor in descriptor space using a distance function such as the sum-of-squared-differences.
  • the nearest neighbor can be determined, for instance, using exhaustive search or can be approximated by approximate nearest neighbor search methods such as KD-trees.
  • the matching method can contain a condition that a match needs to fulfill. This can be, for instance, that the distance of the matching descriptors is below a particular threshold or that the ratio between the distance to the nearest neighbor and the second nearest neighbor is above a certain threshold (a minimal matching sketch with such conditions follows this list).
  • the score parameter of a descriptor that is computed and used in the iterative subset identification method can be defined and computed in different ways. Examples include the number of matches of a descriptor, the smallest distance to a descriptor over all descriptors or the average similarity over all matched descriptors.
  • the orientation angle may be defined in the range [0°, 180°].
  • One possible strategy is to evenly divide this range into N bins, namely [0°, 180°/N], [180°/N, 2*180°/N], ..., [(N−1)*180°/N, 180°] (a small binning helper follows this list).
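
The iterative, score-based subset identification described above (e.g., reducing D 71 to D′ 71 or D 91 to D′ 91) can be sketched as follows. This is a minimal illustration in Python, not the implementation of the described embodiments: the function names, the sum-of-squared-differences matching, the distance threshold, and the use of the number of matches as the score parameter are assumptions chosen for illustration.

    import numpy as np

    def pairwise_matches(descriptors, dist_thresh=0.25):
        # Match every descriptor of a set against every other descriptor of the
        # same set: nearest neighbour in descriptor space, sum-of-squared-differences.
        D = np.asarray(descriptors, dtype=float)            # shape (n, descriptor_dim)
        d2 = ((D[:, None, :] - D[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)                        # a descriptor must not match itself
        nn = d2.argmin(axis=1)                              # index of the nearest neighbour
        ok = d2[np.arange(len(D)), nn] < dist_thresh ** 2   # accept only sufficiently close matches
        return nn, ok

    def select_subset(descriptors, subset_size):
        # Greedy selection: repeatedly pick the descriptor with the highest score
        # (here: how often it was matched), then disregard it in later selection steps.
        nn, ok = pairwise_matches(descriptors)
        scores = np.bincount(nn[ok], minlength=len(descriptors)).astype(float)
        selected = []
        for _ in range(min(subset_size, len(descriptors))):
            best = int(np.argmax(scores))
            selected.append(best)                           # e.g. stored in D'71 / D'91
            scores[best] = -np.inf                          # modify score so it is not selected again
        return selected

Running the same routine independently on each orientation-zone set (D 71, D 72, ... or D 91, D 92, ...) would yield the reduced sets D′ 71, D′ 72, ... mentioned in the list above.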
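
For the globally gravity-aware matching, the orientation angle α c between the gravity vector g (in camera coordinates) and the camera's principal axis, and the selection of the reduced descriptor set for the corresponding orientation zone, might look as follows. The zone boundaries, the assumption that the principal axis is the camera's +z axis, and all names are illustrative and not prescribed by the document.

    import numpy as np

    def orientation_angle(gravity_cam):
        # Angle between the gravity vector (camera coordinates) and the principal
        # axis, assumed here to be the camera's +z axis; returned in degrees.
        g = np.asarray(gravity_cam, dtype=float)
        g = g / np.linalg.norm(g)
        principal_axis = np.array([0.0, 0.0, 1.0])
        return np.degrees(np.arccos(np.clip(g @ principal_axis, -1.0, 1.0)))

    def select_reference_set(gravity_cam, zone_sets):
        # zone_sets maps (lo, hi) angle ranges in degrees to reduced descriptor
        # sets such as D'71, D'72.  Returns the set whose zone contains the angle,
        # falling back to the zone whose centre is closest to the measured angle.
        angle = orientation_angle(gravity_cam)
        for (lo, hi), descriptors in zone_sets.items():
            if lo <= angle <= hi:
                return descriptors
        return min(zone_sets.items(),
                   key=lambda item: abs(sum(item[0]) / 2.0 - angle))[1]

    # Example with the two zones mentioned above (boundaries are assumptions):
    # reduced = select_reference_set(g, {(60.0, 90.0): D71_reduced,
    #                                    (30.0, 60.0): D72_reduced})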
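
For the locally gravity-aware variant, the per-feature orientation angle (α 0 -α 4 in FIG. 9) can be obtained by back-projecting the feature's pixel through the intrinsic parameters and measuring its angle to the gravity vector. Again a minimal sketch under the usual pinhole-camera assumptions; K and the pixel coordinates (u, v) are illustrative inputs.

    import numpy as np

    def feature_orientation_angle(u, v, K, gravity_cam):
        # Back-project pixel (u, v) through the 3x3 intrinsic matrix K (focal
        # length and principal point) to a 3D ray from the camera origin, then
        # return the angle in degrees between that ray and the gravity vector.
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        ray = ray / np.linalg.norm(ray)
        g = np.asarray(gravity_cam, dtype=float)
        g = g / np.linalg.norm(g)
        return np.degrees(np.arccos(np.clip(ray @ g, -1.0, 1.0)))

Each current feature descriptor would then be compared only against the reference set whose orientation zone contains, or is closest to, this angle.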
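
Synthetic view creation for a planar object can be realized with a plane-induced homography and image warping. The sketch below uses OpenCV's warpPerspective with bilinear or nearest-neighbor interpolation; the particular parametrization (rotation R, translation t, plane normal n, distance d) is one standard choice and not necessarily the one used in the described embodiments.

    import numpy as np
    import cv2

    def synthetic_view(image, K, R, t,
                       n=np.array([0.0, 0.0, 1.0]), d=1.0,
                       interpolation=cv2.INTER_LINEAR):     # or cv2.INTER_NEAREST
        # Plane-induced homography H = K (R - t n^T / d) K^-1 mapping the
        # reference (fronto-parallel) view of a planar object to a virtual
        # camera with pose (R, t); n and d describe the plane in the reference frame.
        H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
        h, w = image.shape[:2]
        return cv2.warpPerspective(image, H, (w, h), flags=interpolation)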
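
Nearest-neighbour matching in descriptor space with a distance threshold and a distance-ratio condition, as mentioned above, can be sketched as follows. The search is exhaustive over sum-of-squared-differences distances; the ratio test is written in the common form where the nearest distance must be sufficiently smaller than the second-nearest one, and the threshold values are arbitrary example numbers.

    import numpy as np

    def match_descriptors(query, reference, max_dist=0.25, max_ratio=0.8):
        # Exhaustive nearest-neighbour search; 'reference' needs at least two
        # descriptors for the ratio test.  Distances are squared SSD values.
        Q = np.asarray(query, dtype=float)
        R = np.asarray(reference, dtype=float)
        d2 = ((Q[:, None, :] - R[None, :, :]) ** 2).sum(-1)
        order = np.argsort(d2, axis=1)
        nearest, second = order[:, 0], order[:, 1]
        matches = []
        for i in range(len(Q)):
            d_best = d2[i, nearest[i]]
            d_second = d2[i, second[i]]
            if d_best < max_dist ** 2 and d_best < (max_ratio ** 2) * d_second:
                matches.append((i, int(nearest[i])))        # (query index, reference index)
        return matches

A KD-tree or other approximate nearest neighbour structure could replace the exhaustive search for larger descriptor sets.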
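
The even division of the orientation-angle range [0°, 180°] into N bins can be written directly; the helper below is only a small illustration of that binning strategy.

    def orientation_bins(n_bins):
        # [0, 180/N], [180/N, 2*180/N], ..., [(N-1)*180/N, 180]  (degrees)
        width = 180.0 / n_bins
        return [(i * width, (i + 1) * width) for i in range(n_bins)]

    def bin_index(angle_deg, n_bins):
        # Index of the bin containing a given orientation angle in degrees.
        return min(int(angle_deg / (180.0 / n_bins)), n_bins - 1)

    # orientation_bins(2) -> [(0.0, 90.0), (90.0, 180.0)]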

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
US14/417,046 2012-07-23 2012-07-23 Method of providing image feature descriptors Abandoned US20150294189A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/444,404 US10192145B2 (en) 2012-07-23 2017-02-28 Method of providing image feature descriptors
US16/259,367 US10402684B2 (en) 2012-07-23 2019-01-28 Method of providing image feature descriptors
US16/531,678 US10528847B2 (en) 2012-07-23 2019-08-05 Method of providing image feature descriptors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2012/064422 WO2014015889A1 (fr) 2012-07-23 2012-07-23 Procédé de fourniture de descripteurs de caractéristiques d'image

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
PCT/EP2012/064422 A-371-Of-International WO2014015889A1 (fr) 2012-07-23 2012-07-23 Procédé de fourniture de descripteurs de caractéristiques d'image
EPPCT/EP2012/006442 A-371-Of-International 2012-07-23

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/444,404 Division US10192145B2 (en) 2012-07-23 2017-02-28 Method of providing image feature descriptors

Publications (1)

Publication Number Publication Date
US20150294189A1 true US20150294189A1 (en) 2015-10-15

Family

ID=46545812

Family Applications (4)

Application Number Title Priority Date Filing Date
US14/417,046 Abandoned US20150294189A1 (en) 2012-07-23 2012-07-23 Method of providing image feature descriptors
US15/444,404 Active US10192145B2 (en) 2012-07-23 2017-02-28 Method of providing image feature descriptors
US16/259,367 Active US10402684B2 (en) 2012-07-23 2019-01-28 Method of providing image feature descriptors
US16/531,678 Active US10528847B2 (en) 2012-07-23 2019-08-05 Method of providing image feature descriptors

Family Applications After (3)

Application Number Title Priority Date Filing Date
US15/444,404 Active US10192145B2 (en) 2012-07-23 2017-02-28 Method of providing image feature descriptors
US16/259,367 Active US10402684B2 (en) 2012-07-23 2019-01-28 Method of providing image feature descriptors
US16/531,678 Active US10528847B2 (en) 2012-07-23 2019-08-05 Method of providing image feature descriptors

Country Status (4)

Country Link
US (4) US20150294189A1 (fr)
EP (1) EP2875471B1 (fr)
CN (1) CN104541290A (fr)
WO (1) WO2014015889A1 (fr)

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150269778A1 (en) * 2014-03-20 2015-09-24 Kabushiki Kaisha Toshiba Identification device, identification method, and computer program product
US20170024901A1 (en) * 2013-11-18 2017-01-26 Nant Holdings Ip, Llc Silhouette-based object and texture alignment, systems and methods
US20170069056A1 (en) * 2015-09-04 2017-03-09 Adobe Systems Incorporated Focal Length Warping
US20170200049A1 (en) * 2014-02-28 2017-07-13 Nant Holdings Ip, Llc Object recognition trait analysis systems and methods
US20170243231A1 (en) * 2016-02-19 2017-08-24 Alitheon, Inc. Personal history in track and trace system
WO2017156043A1 (fr) 2016-03-08 2017-09-14 Nant Holdings Ip, Llc Association de caractéristiques d'images destinée à la reconnaissance d'objet à base d'image
US20180012411A1 (en) * 2016-07-11 2018-01-11 Gravity Jack, Inc. Augmented Reality Methods and Devices
US10043073B2 (en) 2011-03-02 2018-08-07 Alitheon, Inc. Document authentication using extracted digital fingerprints
JP2018195270A (ja) * 2017-05-22 2018-12-06 日本電信電話株式会社 局所特徴表現学習装置、及び方法
US20180374237A1 (en) * 2017-06-23 2018-12-27 Canon Kabushiki Kaisha Method, system and apparatus for determining a pose for an object
US10192140B2 (en) 2012-03-02 2019-01-29 Alitheon, Inc. Database for detecting counterfeit items using digital fingerprint records
US10331970B2 (en) 2014-04-24 2019-06-25 Nant Holdings Ip, Llc Robust feature identification for image-based object recognition
US10402682B1 (en) * 2017-04-19 2019-09-03 The United States Of America, As Represented By The Secretary Of The Navy Image-matching navigation using thresholding of local image descriptors
US10504008B1 (en) * 2016-07-18 2019-12-10 Occipital, Inc. System and method for relocalization and scene recognition
US10586385B2 (en) * 2015-03-05 2020-03-10 Commonwealth Scientific And Industrial Research Organisation Structure modelling
US10692000B2 (en) * 2017-03-20 2020-06-23 Sap Se Training machine learning models
US10740767B2 (en) 2016-06-28 2020-08-11 Alitheon, Inc. Centralized databases storing digital fingerprints of objects for collaborative authentication
US10839528B2 (en) 2016-08-19 2020-11-17 Alitheon, Inc. Authentication-based tracking
US10867301B2 (en) 2016-04-18 2020-12-15 Alitheon, Inc. Authentication-triggered processes
US10902540B2 (en) 2016-08-12 2021-01-26 Alitheon, Inc. Event-driven authentication of physical objects
US10915612B2 (en) 2016-07-05 2021-02-09 Alitheon, Inc. Authenticated production
US10930037B2 (en) * 2016-02-25 2021-02-23 Fanuc Corporation Image processing device for displaying object detected from input picture image
US10944960B2 (en) * 2017-02-10 2021-03-09 Panasonic Intellectual Property Corporation Of America Free-viewpoint video generating method and free-viewpoint video generating system
US10963670B2 (en) 2019-02-06 2021-03-30 Alitheon, Inc. Object change detection and measurement using digital fingerprints
US11062118B2 (en) 2017-07-25 2021-07-13 Alitheon, Inc. Model-based digital fingerprinting
US11087013B2 (en) 2018-01-22 2021-08-10 Alitheon, Inc. Secure digital fingerprint key object database
US20210350629A1 (en) * 2012-09-21 2021-11-11 Navvis Gmbh Visual localisation
US11238146B2 (en) 2019-10-17 2022-02-01 Alitheon, Inc. Securing composite objects using digital fingerprints
US11250286B2 (en) 2019-05-02 2022-02-15 Alitheon, Inc. Automated authentication region localization and capture
US11321964B2 (en) 2019-05-10 2022-05-03 Alitheon, Inc. Loop chain digital fingerprint method and system
US11341348B2 (en) 2020-03-23 2022-05-24 Alitheon, Inc. Hand biometrics system and method using digital fingerprints
US20220295040A1 (en) * 2021-03-11 2022-09-15 Quintar, Inc. Augmented reality system with remote presentation including 3d graphics extending beyond frame
US11557046B2 (en) 2020-09-30 2023-01-17 Argyle Inc. Single-moment alignment of imprecise overlapping digital spatial datasets, maximizing local precision
US11568683B2 (en) 2020-03-23 2023-01-31 Alitheon, Inc. Facial biometrics system and method using digital fingerprints
US20230063215A1 (en) * 2020-01-23 2023-03-02 Sony Group Corporation Information processing apparatus, information processing method, and program
TWI798459B (zh) * 2018-10-18 2023-04-11 南韓商三星電子股份有限公司 提取特徵之方法、圖像匹配之方法以及處理圖像之方法
US11636645B1 (en) * 2021-11-11 2023-04-25 Microsoft Technology Licensing, Llc Rapid target acquisition using gravity and north vectors
WO2023073398A1 (fr) * 2021-10-26 2023-05-04 Siemens Industry Software Ltd. Procédé et système permettant de déterminer un emplacement d'une caméra virtuelle dans une simulation industrielle
EP4184446A1 (fr) * 2021-11-23 2023-05-24 Virnect Inc. Procédé et système d'amélioration de performance de détection de cible par apprentissage dynamique
US11663849B1 (en) 2020-04-23 2023-05-30 Alitheon, Inc. Transform pyramiding for fingerprint matching system and method
US11700123B2 (en) 2020-06-17 2023-07-11 Alitheon, Inc. Asset-backed digital security tokens
US11915503B2 (en) 2020-01-28 2024-02-27 Alitheon, Inc. Depth-based digital fingerprinting
US11948377B2 (en) 2020-04-06 2024-04-02 Alitheon, Inc. Local encoding of intrinsic authentication data
US11983957B2 (en) 2020-05-28 2024-05-14 Alitheon, Inc. Irreversible digital fingerprints for preserving object security
US12028507B2 (en) * 2021-11-04 2024-07-02 Quintar, Inc. Augmented reality system with remote presentation including 3D graphics extending beyond frame

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10694106B2 (en) 2013-06-14 2020-06-23 Qualcomm Incorporated Computer vision application processing
US10083368B2 (en) * 2014-01-28 2018-09-25 Qualcomm Incorporated Incremental learning for dynamic feature database management in an object recognition system
US9710706B2 (en) 2014-09-23 2017-07-18 GM Global Technology Operations LLC Method for classifying a known object in a field of view of a camera
WO2016050290A1 (fr) 2014-10-01 2016-04-07 Metaio Gmbh Procédé et système de détermination d'au moins une propriété liée à au moins une partie d'un environnement réel
GB201511334D0 (en) 2015-06-29 2015-08-12 Nokia Technologies Oy A method, apparatus, computer and system for image analysis
CN105403469B (zh) * 2015-11-13 2018-08-21 北京理工大学 一种基于仿射变换最佳匹配图像的热力参数识别方法
TWI578240B (zh) * 2015-12-01 2017-04-11 財團法人工業技術研究院 特徵描述方法及應用其之特徵描述器
US10218728B2 (en) 2016-06-21 2019-02-26 Ebay Inc. Anomaly detection for web document revision
US11227435B2 (en) 2018-08-13 2022-01-18 Magic Leap, Inc. Cross reality system
US11232635B2 (en) 2018-10-05 2022-01-25 Magic Leap, Inc. Rendering location specific virtual content in any location
CN114600064A (zh) 2019-10-15 2022-06-07 奇跃公司 具有定位服务的交叉现实系统
EP4062381A4 (fr) * 2019-11-18 2023-11-29 Elbit Systems Ltd. Système et procédé pour une réalité mixte
US11562542B2 (en) 2019-12-09 2023-01-24 Magic Leap, Inc. Cross reality system with simplified programming of virtual content
EP4104144A4 (fr) * 2020-02-13 2024-06-05 Magic Leap, Inc. Système de réalité mélangée pour environnements à grande échelle
US11562525B2 (en) 2020-02-13 2023-01-24 Magic Leap, Inc. Cross reality system with map processing using multi-resolution frame descriptors
US11461578B2 (en) * 2021-02-04 2022-10-04 Verizon Patent And Licensing Inc. Methods and systems for generating composite image descriptors
US11475240B2 (en) * 2021-03-19 2022-10-18 Apple Inc. Configurable keypoint descriptor generation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1835460A1 (fr) * 2005-01-07 2007-09-19 Sony Corporation Systeme de traitement d´image, dispositif et methode d´apprentissage; et programme
US20100277572A1 (en) * 2009-04-30 2010-11-04 Canon Kabushiki Kaisha Information processing apparatus and control method thereof
US20120218296A1 (en) * 2011-02-25 2012-08-30 Nokia Corporation Method and apparatus for feature-based presentation of content
US20150040074A1 (en) * 2011-08-18 2015-02-05 Layar B.V. Methods and systems for enabling creation of augmented reality content

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080112593A1 (en) * 2006-11-03 2008-05-15 Ratner Edward R Automated method and apparatus for robust image object recognition and/or classification using multiple temporal views
US8126260B2 (en) * 2007-05-29 2012-02-28 Cognex Corporation System and method for locating a three-dimensional object using machine vision
WO2009070712A2 (fr) * 2007-11-27 2009-06-04 Jadi, Inc. Procédé et système de localisation d'une cible et de navigation vers celle-ci
CA2748037C (fr) * 2009-02-17 2016-09-20 Omek Interactive, Ltd. Procede et systeme de reconnaissance de geste
EP2339537B1 (fr) * 2009-12-23 2016-02-24 Metaio GmbH Procédé pour la détermination de caractéristiques de référence pour une utilisation dans un procédé de suivi optique d'initialisation d'objets et procédé de suivi d'initialisation d'objets
CN101984463A (zh) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 全景图合成方法及装置
KR101912748B1 (ko) * 2012-02-28 2018-10-30 한국전자통신연구원 확장성을 고려한 특징 기술자 생성 및 특징 기술자를 이용한 정합 장치 및 방법

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1835460A1 (fr) * 2005-01-07 2007-09-19 Sony Corporation Systeme de traitement d´image, dispositif et methode d´apprentissage; et programme
US20100277572A1 (en) * 2009-04-30 2010-11-04 Canon Kabushiki Kaisha Information processing apparatus and control method thereof
US20120218296A1 (en) * 2011-02-25 2012-08-30 Nokia Corporation Method and apparatus for feature-based presentation of content
US20150040074A1 (en) * 2011-08-18 2015-02-05 Layar B.V. Methods and systems for enabling creation of augmented reality content

Cited By (81)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10872265B2 (en) 2011-03-02 2020-12-22 Alitheon, Inc. Database for detecting counterfeit items using digital fingerprint records
US10043073B2 (en) 2011-03-02 2018-08-07 Alitheon, Inc. Document authentication using extracted digital fingerprints
US10915749B2 (en) 2011-03-02 2021-02-09 Alitheon, Inc. Authentication of a suspect object using extracted native features
US11423641B2 (en) 2011-03-02 2022-08-23 Alitheon, Inc. Database for detecting counterfeit items using digital fingerprint records
US10192140B2 (en) 2012-03-02 2019-01-29 Alitheon, Inc. Database for detecting counterfeit items using digital fingerprint records
US11887247B2 (en) * 2012-09-21 2024-01-30 Navvis Gmbh Visual localization
US20210350629A1 (en) * 2012-09-21 2021-11-11 Navvis Gmbh Visual localisation
US9728012B2 (en) * 2013-11-18 2017-08-08 Nant Holdings Ip, Llc Silhouette-based object and texture alignment, systems and methods
US20170024901A1 (en) * 2013-11-18 2017-01-26 Nant Holdings Ip, Llc Silhouette-based object and texture alignment, systems and methods
US10013612B2 (en) * 2014-02-28 2018-07-03 Nant Holdings Ip, Llc Object recognition trait analysis systems and methods
US20170200049A1 (en) * 2014-02-28 2017-07-13 Nant Holdings Ip, Llc Object recognition trait analysis systems and methods
US20150269778A1 (en) * 2014-03-20 2015-09-24 Kabushiki Kaisha Toshiba Identification device, identification method, and computer program product
US10331970B2 (en) 2014-04-24 2019-06-25 Nant Holdings Ip, Llc Robust feature identification for image-based object recognition
US10719731B2 (en) 2014-04-24 2020-07-21 Nant Holdings Ip, Llc Robust feature identification for image-based object recognition
US10586385B2 (en) * 2015-03-05 2020-03-10 Commonwealth Scientific And Industrial Research Organisation Structure modelling
US9865032B2 (en) * 2015-09-04 2018-01-09 Adobe Systems Incorporated Focal length warping
US20170069056A1 (en) * 2015-09-04 2017-03-09 Adobe Systems Incorporated Focal Length Warping
US20180315058A1 (en) * 2016-02-19 2018-11-01 Alitheon, Inc. Personal history in track and trace system
US10346852B2 (en) 2016-02-19 2019-07-09 Alitheon, Inc. Preserving authentication under item change
US20170243231A1 (en) * 2016-02-19 2017-08-24 Alitheon, Inc. Personal history in track and trace system
US10037537B2 (en) * 2016-02-19 2018-07-31 Alitheon, Inc. Personal history in track and trace system
US11593815B2 (en) 2016-02-19 2023-02-28 Alitheon Inc. Preserving authentication under item change
US10572883B2 (en) 2016-02-19 2020-02-25 Alitheon, Inc. Preserving a level of confidence of authenticity of an object
US11682026B2 (en) 2016-02-19 2023-06-20 Alitheon, Inc. Personal history in track and trace system
US11301872B2 (en) * 2016-02-19 2022-04-12 Alitheon, Inc. Personal history in track and trace system
US11068909B1 (en) 2016-02-19 2021-07-20 Alitheon, Inc. Multi-level authentication
US10861026B2 (en) * 2016-02-19 2020-12-08 Alitheon, Inc. Personal history in track and trace system
US11100517B2 (en) 2016-02-19 2021-08-24 Alitheon, Inc. Preserving authentication under item change
US10930037B2 (en) * 2016-02-25 2021-02-23 Fanuc Corporation Image processing device for displaying object detected from input picture image
WO2017156043A1 (fr) 2016-03-08 2017-09-14 Nant Holdings Ip, Llc Association de caractéristiques d'images destinée à la reconnaissance d'objet à base d'image
US20170263019A1 (en) * 2016-03-08 2017-09-14 Nant Holdings Ip, Llc Image feature combination for image-based object recognition
US10861129B2 (en) * 2016-03-08 2020-12-08 Nant Holdings Ip, Llc Image feature combination for image-based object recognition
US11551329B2 (en) 2016-03-08 2023-01-10 Nant Holdings Ip, Llc Image feature combination for image-based object recognition
US11842458B2 (en) 2016-03-08 2023-12-12 Nant Holdings Ip, Llc Image feature combination for image-based object recognition
EP3427165A4 (fr) * 2016-03-08 2019-11-06 Nant Holdings IP, LLC Association de caractéristiques d'images destinée à la reconnaissance d'objet à base d'image
US10867301B2 (en) 2016-04-18 2020-12-15 Alitheon, Inc. Authentication-triggered processes
US11830003B2 (en) 2016-04-18 2023-11-28 Alitheon, Inc. Authentication-triggered processes
US10740767B2 (en) 2016-06-28 2020-08-11 Alitheon, Inc. Centralized databases storing digital fingerprints of objects for collaborative authentication
US11379856B2 (en) 2016-06-28 2022-07-05 Alitheon, Inc. Centralized databases storing digital fingerprints of objects for collaborative authentication
US10915612B2 (en) 2016-07-05 2021-02-09 Alitheon, Inc. Authenticated production
US11636191B2 (en) 2016-07-05 2023-04-25 Alitheon, Inc. Authenticated production
US20180012411A1 (en) * 2016-07-11 2018-01-11 Gravity Jack, Inc. Augmented Reality Methods and Devices
US10504008B1 (en) * 2016-07-18 2019-12-10 Occipital, Inc. System and method for relocalization and scene recognition
US10803365B2 (en) 2016-07-18 2020-10-13 Occipital, Inc. System and method for relocalization and scene recognition
US10902540B2 (en) 2016-08-12 2021-01-26 Alitheon, Inc. Event-driven authentication of physical objects
US10839528B2 (en) 2016-08-19 2020-11-17 Alitheon, Inc. Authentication-based tracking
US11741205B2 (en) 2016-08-19 2023-08-29 Alitheon, Inc. Authentication-based tracking
US10944960B2 (en) * 2017-02-10 2021-03-09 Panasonic Intellectual Property Corporation Of America Free-viewpoint video generating method and free-viewpoint video generating system
US10692000B2 (en) * 2017-03-20 2020-06-23 Sap Se Training machine learning models
US10402682B1 (en) * 2017-04-19 2019-09-03 The United States Of America, As Represented By The Secretary Of The Navy Image-matching navigation using thresholding of local image descriptors
JP2018195270A (ja) * 2017-05-22 2018-12-06 日本電信電話株式会社 局所特徴表現学習装置、及び方法
US20180374237A1 (en) * 2017-06-23 2018-12-27 Canon Kabushiki Kaisha Method, system and apparatus for determining a pose for an object
US11062118B2 (en) 2017-07-25 2021-07-13 Alitheon, Inc. Model-based digital fingerprinting
US11593503B2 (en) 2018-01-22 2023-02-28 Alitheon, Inc. Secure digital fingerprint key object database
US11843709B2 (en) 2018-01-22 2023-12-12 Alitheon, Inc. Secure digital fingerprint key object database
US11087013B2 (en) 2018-01-22 2021-08-10 Alitheon, Inc. Secure digital fingerprint key object database
TWI798459B (zh) * 2018-10-18 2023-04-11 南韓商三星電子股份有限公司 提取特徵之方法、圖像匹配之方法以及處理圖像之方法
US10963670B2 (en) 2019-02-06 2021-03-30 Alitheon, Inc. Object change detection and measurement using digital fingerprints
US11386697B2 (en) 2019-02-06 2022-07-12 Alitheon, Inc. Object change detection and measurement using digital fingerprints
US11488413B2 (en) 2019-02-06 2022-11-01 Alitheon, Inc. Object change detection and measurement using digital fingerprints
US11250286B2 (en) 2019-05-02 2022-02-15 Alitheon, Inc. Automated authentication region localization and capture
US11321964B2 (en) 2019-05-10 2022-05-03 Alitheon, Inc. Loop chain digital fingerprint method and system
US11922753B2 (en) 2019-10-17 2024-03-05 Alitheon, Inc. Securing composite objects using digital fingerprints
US11238146B2 (en) 2019-10-17 2022-02-01 Alitheon, Inc. Securing composite objects using digital fingerprints
US20230063215A1 (en) * 2020-01-23 2023-03-02 Sony Group Corporation Information processing apparatus, information processing method, and program
US11915503B2 (en) 2020-01-28 2024-02-27 Alitheon, Inc. Depth-based digital fingerprinting
US11341348B2 (en) 2020-03-23 2022-05-24 Alitheon, Inc. Hand biometrics system and method using digital fingerprints
US11568683B2 (en) 2020-03-23 2023-01-31 Alitheon, Inc. Facial biometrics system and method using digital fingerprints
US11948377B2 (en) 2020-04-06 2024-04-02 Alitheon, Inc. Local encoding of intrinsic authentication data
US11663849B1 (en) 2020-04-23 2023-05-30 Alitheon, Inc. Transform pyramiding for fingerprint matching system and method
US11983957B2 (en) 2020-05-28 2024-05-14 Alitheon, Inc. Irreversible digital fingerprints for preserving object security
US11700123B2 (en) 2020-06-17 2023-07-11 Alitheon, Inc. Asset-backed digital security tokens
US11557046B2 (en) 2020-09-30 2023-01-17 Argyle Inc. Single-moment alignment of imprecise overlapping digital spatial datasets, maximizing local precision
US20220295040A1 (en) * 2021-03-11 2022-09-15 Quintar, Inc. Augmented reality system with remote presentation including 3d graphics extending beyond frame
WO2023073398A1 (fr) * 2021-10-26 2023-05-04 Siemens Industry Software Ltd. Procédé et système permettant de déterminer un emplacement d'une caméra virtuelle dans une simulation industrielle
US12028507B2 (en) * 2021-11-04 2024-07-02 Quintar, Inc. Augmented reality system with remote presentation including 3D graphics extending beyond frame
US20230260204A1 (en) * 2021-11-11 2023-08-17 Microsoft Technology Licensing, Llc Rapid target acquisition using gravity and north vectors
US20230148231A1 (en) * 2021-11-11 2023-05-11 Microsoft Technology Licensing, Llc Rapid target acquisition using gravity and north vectors
US11941751B2 (en) * 2021-11-11 2024-03-26 Microsoft Technology Licensing, Llc Rapid target acquisition using gravity and north vectors
US11636645B1 (en) * 2021-11-11 2023-04-25 Microsoft Technology Licensing, Llc Rapid target acquisition using gravity and north vectors
EP4184446A1 (fr) * 2021-11-23 2023-05-24 Virnect Inc. Procédé et système d'amélioration de performance de détection de cible par apprentissage dynamique

Also Published As

Publication number Publication date
US20170236033A1 (en) 2017-08-17
US10528847B2 (en) 2020-01-07
US20190362179A1 (en) 2019-11-28
US20190156143A1 (en) 2019-05-23
EP2875471B1 (fr) 2021-10-27
US10192145B2 (en) 2019-01-29
WO2014015889A1 (fr) 2014-01-30
US10402684B2 (en) 2019-09-03
EP2875471A1 (fr) 2015-05-27
CN104541290A (zh) 2015-04-22

Similar Documents

Publication Publication Date Title
US10528847B2 (en) Method of providing image feature descriptors
Kendall et al. Posenet: A convolutional network for real-time 6-dof camera relocalization
US8942418B2 (en) Method of providing a descriptor for at least one feature of an image and method of matching features
Kurz et al. Inertial sensor-aligned visual feature descriptors
Kluger et al. Deep learning for vanishing point detection using an inverse gnomonic projection
Buoncompagni et al. Saliency-based keypoint selection for fast object detection and matching
Uchiyama et al. Toward augmenting everything: Detecting and tracking geometrical features on planar objects
Son et al. A multi-vision sensor-based fast localization system with image matching for challenging outdoor environments
Yang et al. Large-scale and rotation-invariant template matching using adaptive radial ring code histograms
CN105427333A (zh) 视频序列图像的实时配准方法、系统及拍摄终端
Potje et al. Learning geodesic-aware local features from RGB-D images
JP6016242B2 (ja) 視点推定装置及びその分類器学習方法
CN108197631B (zh) 提供图像特征描述符的方法
JP6304815B2 (ja) 画像処理装置ならびにその画像特徴検出方法、プログラムおよび装置
Mentzer et al. Self-calibration of wide baseline stereo camera systems for automotive applications
CN113012298B (zh) 一种基于区域检测的弯曲mark三维注册增强现实方法
JP5975484B2 (ja) 画像処理装置
Alam et al. A comparative analysis of feature extraction algorithms for augmented reality applications
Bermudez et al. Comparison of natural feature descriptors for rigid-object tracking for real-time augmented reality
Sallam Fatouh et al. Image-based localization for augmented reality application: A review
Molyneaux et al. Vision-based detection of mobile smart objects
Resch et al. Local image feature matching improvements for omnidirectional camera systems
Wu Image Registration Algorithm Based on SIFT and BFO.
Zhou et al. On user-defined region matching for augmented reality
Keršner Parkovací asistent s využitím web kamer

Legal Events

Date Code Title Description
AS Assignment

Owner name: METAIO GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BENHIMANE, SELIM;KURZ, DANIEL;OLSZAMOWSKI, THOMAS;SIGNING DATES FROM 20150415 TO 20150425;REEL/FRAME:036012/0873

AS Assignment

Owner name: APPLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:METAIO GMBH;REEL/FRAME:040821/0462

Effective date: 20161118

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION