US20100208057A1 - Methods and systems for determining the pose of a camera with respect to at least one object of a real environment - Google Patents

Methods and systems for determining the pose of a camera with respect to at least one object of a real environment

Info

Publication number
US20100208057A1
Authority
US
United States
Prior art keywords
camera
image
reference model
real environment
distance
Prior art date
Legal status
Granted
Application number
US12/371,490
Other versions
US8970690B2
Inventor
Peter Meier
Selim Benhimane
Stefan Misslinger
Ben Blachnitzky
Current Assignee
Apple Inc
Original Assignee
Metaio GmbH
Priority date
Filing date
Publication date
Application filed by Metaio GmbH filed Critical Metaio GmbH
Priority to US12/371,490 (granted as US8970690B2)
Assigned to METAIO GMBH. Assignors: BENHIMANE, SELIM; BLACHNITZKY, BEN; MEIER, PETER; MISSLINGER, STEFAN
Priority to EP10713797.8A (EP2396767B1)
Priority to PCT/EP2010/000882 (WO2010091875A2)
Priority to CN201510393737.9A (CN105701790B)
Priority to CN201080016314.0A (CN102395997B)
Publication of US20100208057A1
Priority to US14/633,386 (US9934612B2)
Publication of US8970690B2
Application granted
Assigned to APPLE INC. Assignors: METAIO GMBH
Status: Active
Adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224 Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/04 Architectural design, interior design

Definitions

  • the invention is directed to methods and systems for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring, e.g. for geospatial databases, or augmented reality application, wherein at least one or two images are generated by the camera capturing a real object of a real environment.
  • the image or images may be augmented with virtual objects according to the authoring or augmented reality technology.
  • in such applications, a camera coupled to a processing unit such as a microprocessor takes a picture of a real environment, wherein the real environment is displayed on a display screen and virtual objects of any kind may be displayed in addition, so that the real environment displayed on the display screen is augmented with virtual objects.
  • in order to augment the image with virtual objects, the microprocessor needs to determine the position and orientation (the so-called pose) of the camera with respect to at least one object of the real environment, so that the captured image can be correctly augmented with any virtual objects.
  • correctly augmenting the captured image with any virtual objects means that the virtual objects are displayed so that they fit into the scene of the image in a perspectively and dimensionally correct fashion.
  • a known method for determining the pose of the camera uses a virtual reference model of a corresponding part of the real environment captured by the camera, wherein the virtual reference model is projected into the image, using an initially known approximation of the pose, and superimposed with the corresponding part of the real environment.
  • a tracking algorithm of the image processing uses the virtual reference model to determine the pose of the camera with respect to the real environment, for example by feature detection and comparison between the reference model and the corresponding part of the real environment.
  • Another known method for determining the pose of the camera uses a marker that is placed in the real environment and captured by the camera when taking the image.
  • a tracking algorithm of the image processing uses the marker to determine the pose of the camera with respect to the real environment, particularly by analysing the marker in the image using known image processing methods.
  • a disadvantage of the above-mentioned methods is that a virtual reference model has to be conceived and stored first, which is very time- and resource-consuming and almost impossible if the AR technology is to be used spontaneously in any real environment.
  • with respect to using a marker, the user has to place the marker in the real environment in an initial step before taking the image, which is also time-consuming and troublesome.
  • for these reasons, these methods can hardly be used in connection with consumer products, such as mobile phones having an integrated camera and display, or other mobile devices.
  • from the prior art, so-called structure from motion and simultaneous localization and tracking (SLAM) methods are also known.
  • all these methods serve for determining the position and orientation (pose) of a camera in relation to the real world or part of the real world. If there is no pre-information available, in some cases it is not possible to determine the absolute pose of the camera in relation to the real world or part of the real world, but only the changes of the camera pose from a particular point in time.
  • in the above-mentioned applications, SLAM methods may be used to get an orientation from planar points, but a disadvantage is that one cannot be sure whether the ground plane or some other plane has been identified. Further, with such methods one may only get an initial scale by translating the camera, e.g. along a distance of 10 cm, and communicating the covered distance to the system.
  • SLAM methods need at least two images (so-called frames) taken at different camera poses, and a calibrated camera.
  • another application, known as "mydeco" and available on the Internet, allows an image showing a real environment to be augmented with virtual objects.
  • however, this system requires the user to set the rotation of the ground plane, which is quite cumbersome.
  • in U.S. Pat. No. 7,002,551 there is disclosed a method and system for providing an optical see-through Augmented Reality modified-scale display. It includes a sensor suite that includes a compass, an inertial measuring unit, and a video camera for precise measurement of a user's current orientation and angular rotation rate.
  • a sensor fusion module may be included to produce a unified estimate of the user's angular rotation rate and current orientation to be provided to an orientation and rate estimate module.
  • the orientation and rate estimate module operates in a static or dynamic (prediction) mode.
  • a render module receives an orientation; and the render module uses the orientation, a position from a position measuring system, and data from a database to render graphic images of an object in their correct orientations and positions in an optical display.
  • the position measuring system is effective for position estimation for producing the computer generated image of the object to combine with the real scene, and is connected with the render module.
  • An example of the position measuring system is a differential GPS. Since the user is viewing targets that are a significant distance away (as through binoculars), the registration error caused by position errors in the position measuring system is minimized.
  • Embodiments of the invention include the following aspects.
  • a method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application comprising the following steps: Generating at least one first image by the camera capturing a first object of a real environment, generating first orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the first image for finding and determining features which are indicative of an orientation of the camera, providing a means for allocating a distance of the camera to the first object of the real environment displayed in the first image, the means generating distance data which are indicative of the allocated distance of the camera to the first object, and determining the pose of the camera with respect to a coordinate system related to the first object of the real environment using the distance data and the first orientation data.
  • a method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application comprising the following steps: Generating at least one image by the camera capturing an object of a real environment, displaying the image on an image displaying means, generating orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the image for finding features which are indicative of an orientation of the camera, providing a virtual reference model which is displayed superimposed with the real environment in the image and generating distance data from the reference model, the distance data being indicative of an allocated distance of the camera to the object, and determining the pose of the camera with respect to a coordinate system related to the object of the real environment using the distance data and the orientation data.
  • a method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application comprising the following steps: Generating at least one image by the camera capturing an object of a real environment, generating orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the image for finding and determining features which are indicative of an orientation of the camera, providing a measurement device associated with the camera for measuring at least one parameter indicative of the distance between the camera and the object, and determining the pose of the camera with respect to a coordinate system related to the object of the real environment on the basis of the at least one parameter and the orientation data.
  • a method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application comprising: generating at least one first image by the camera capturing a real object of a real environment, generating first orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the first image for finding and determining features which are indicative of an orientation of the camera or by user-interaction, providing a means for allocating a distance of the camera to the real object of the real environment displayed in the first image, the means generating distance data which are indicative of the allocated distance of the camera to the real object, and determining the pose of the camera with respect to a coordinate system related to the real object of the real environment using the distance data and the first orientation data.
  • the method proceeds with generating a second image by a camera capturing the real object of the real environment, extracting at least one respective feature from the first image and the second image and matching the respective features to provide at least one relation indicative of a correspondence between the first image and the second image, providing the pose of the camera with respect to the real object in the first image, and determining the pose of the camera with respect to a coordinate system related to the real object in the second image using the pose of the camera with respect to the real object in the first image and the at least one relation, and extracting at least one respective feature from the first image and the second image for determining a ground plane in both the first and the second images and moving the placement coordinate system to be positioned on the ground plane.
  • a system for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality system may comprise the following components and features: at least one camera for generating at least one image capturing at least one object of a real environment, an image displaying device for displaying the image, means for generating orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the image for finding and determining features which are indicative of an orientation of the camera, particularly at least one orientation sensor associated with the camera for generating orientation data of the camera, and a processing device coupled with the camera and with the image displaying device.
  • the processing device is arranged to perform the following steps in interaction with the camera and the image displaying device: providing a virtual reference model which is displayed superimposed with the real environment in the image and generating distance data from the reference model, the distance data being indicative of an allocated distance of the camera to the object, receiving user's instructions via a user interface for manipulation of the reference model by the user placing the reference model at a particular position within the at least one image, and determining the pose of the camera with respect to a coordinate system related to the at least one object of the real environment using the distance data and the orientation data.
  • Another system for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality system may comprise the following components and features: At least one camera for generating at least one image capturing at least one object of a real environment, means for generating orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the image for finding and determining features which are indicative of an orientation of the camera, particularly at least one orientation sensor associated with the camera for generating orientation data of the camera, a measurement device associated with the camera for measuring a distance between the camera and the at least one object, and a processing device coupled with the camera, wherein the processing device is arranged for generating distance data from the measured distance, and for determining the pose of the camera with respect to a coordinate system related to the at least one object of the real environment using the distance data and the orientation data.
  • the system is included in a mobile device, wherein the mobile device may be a mobile telephone.
  • the invention may use the fact that a number of mobile phones today offer various required components for Augmented Reality (AR), such as high resolution cameras and displays, accelerometers, orientation sensor, GPS, wireless connectivity by WLAN and/or radio links. Further aspects, embodiments and advantageous features of the invention are evident from the following disclosure of embodiments.
  • FIG. 1 shows a flowchart illustration of a method according to an embodiment of the invention using a reference model.
  • FIG. 2 shows schematic illustrations of an embodiment using a reference model in a process according to FIG. 1 as viewed from the user.
  • FIG. 3 shows a schematic illustration of an embodiment of a system and exemplary scenery according to the invention.
  • FIG. 4 shows another schematic illustration of an embodiment of a system and exemplary scenery according to the invention.
  • FIG. 5 shows a schematic illustration of the scenery of FIG. 3 augmented with a virtual object.
  • FIG. 6 shows a flowchart illustration of an embodiment of a method according to the invention for calculating a placement coordinate system in an image taken by a camera.
  • FIG. 7 shows a diagram in terms of calculating an orientation from lines.
  • FIG. 8 shows a flowchart illustration of another embodiment of a method according to the invention for calculating a ground coordinate system when having two or more images taken by a camera.
  • FIGS. 9-12 show exemplary images when performing a process as shown in FIG. 8 .
  • In FIGS. 3, 4 and 5 there is shown a schematic illustration of an embodiment of a system and an exemplary scenery according to the invention.
  • FIG. 3 shows a system 1 in which a user (not shown) holds a mobile device 10 which incorporates or is coupled with a camera 11 for generating at least one image 30 of the real world, for example containing the real objects 31, 32 as shown.
  • the real objects 31, 32 may be a table and a cabinet which are placed in a room having a ground plane 35, and the camera 11 takes an image of the real environment to be displayed on a display screen 20.
  • the ground plane itself could also be considered to be a real object.
  • the real environment is provided with a coordinate system 33 , such as shown in FIG. 3 .
  • the camera 11 is coupled with an image displaying means 20 , such as a touchscreen that is incorporated in the mobile device 10 .
  • any other image displaying means may be used which is suitable for displaying an image to a user, such as a head mounted display or any other type of mobile or stationary display device.
  • a processing device 13, which may be for example a microprocessor, is connected with or incorporated in the mobile device 10.
  • the mobile device 10 also incorporates or is coupled with an orientation sensor 12 .
  • the mobile device may be a mobile telephone having an integrated orientation sensor 12 , camera 11 , touchscreen 20 , and processing device 13 .
  • the components may also be distributed and/or used in different applications. Further, they may be coupled with each other in wired or wireless fashion.
  • the system 1 is used for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality system.
  • an image of the real environment may be augmented with a virtual object, such as shown in FIG. 5 , by displaying the virtual object 40 superimposed with the real environment in the image 30 in accordance with the pose of the camera.
  • the pose (including position and orientation data) of the camera may be, for example, the pose with respect to the real object 31 .
  • the pose may be determined with respect to the coordinate system 33 as shown in FIG. 3 , which in turn is associated with the respective object 31 .
  • the processing device 13 is arranged to perform the following steps in interaction with the camera 11 and the image displaying device 20 :
  • the camera 11 takes a first image of the real environment, for example as shown in FIG. 3 .
  • At least one orientation sensor 12 is associated with the camera 11 for generating orientation data of the camera in step 2.0.
  • the orientation data may alternatively or additionally be generated from an algorithm which analyses the first image for finding and determining features which are indicative of an orientation of the camera.
  • such algorithms are known in the art, for example "Orientation from lines" as disclosed in ZuWhan Kim, "Geometry of Vanishing Points and its Application to External Calibration and Realtime Pose Estimation" (Jul. 1, 2006), Institute of Transportation Studies, Research Reports, Paper UCB-ITS-RR-2006-5.
  • FIGS. 2A-2D show a scene that is different from the previously described scene of FIGS. 3-5 .
  • the images as shown in FIGS. 2A, 2B correspond to a first image taken by a camera, such as the first image 30 as previously described.
  • FIGS. 2C, 2D display a respective second image 60 taken by the same or a different camera as described in more detail below.
  • the same reference numerals for a first image (30) and a second image (60) are used in connection with the different scenes.
  • a means for allocating a distance of the camera 11 to the real object 31 displayed in the image 30 generates distance data that is indicative of the allocated distance of the camera 11 to the real object 31 .
  • providing the means for allocating the distance of the camera to the real object includes providing a virtual reference model that is displayed superimposed with the real environment in the first image 30 .
  • the initial distance of the reference object can be either a fixed value (e.g. 2.5 meters) or a distance provided by a distance sensor.
  • a reference model 50 is placed in the image 30 in accordance with an orientation provided by the orientation sensor 12 .
  • the dimensions of the reference model 50 (which may be any kind of virtual model), such as height, width or depth thereof, are known to the system.
  • One implementation may include: translating the reference model parallel to the oriented plane, applying a rotation to the reference model about an axis parallel to the plane's normal, or moving the reference model along the line defined by the camera center and the center of mass of the already placed object (which allows increasing or decreasing the size of the reference object in the image), as sketched below.
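  • A minimal sketch of these three manipulation primitives (illustrative only, not taken from the patent; function and parameter names are assumptions, and positions, normals and rotations are expressed in the placement/world coordinate system):

```python
import numpy as np

def translate_on_plane(position, du, dv, plane_u, plane_v):
    """Translate the reference model parallel to the oriented plane,
    spanned by the two in-plane basis vectors plane_u and plane_v."""
    return position + du * plane_u + dv * plane_v

def rotate_about_plane_normal(R_model, angle_rad, plane_normal):
    """Rotate the reference model about an axis parallel to the plane's
    normal (Rodrigues' rotation formula)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    R_axis = np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)
    return R_axis @ R_model

def move_along_viewing_ray(position, camera_center, factor):
    """Move the reference model along the line defined by the camera center
    and the model's center of mass; factor > 1 moves it farther away, which
    makes it appear smaller in the image."""
    return camera_center + factor * (position - camera_center)
```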
  • the system receives the user's instructions for manipulation of the reference model 50, the manipulation of the reference model including at least one of moving the reference model 50 to a particular position within the first image 30 on a plane, with the plane defined at least in part by the orientation data, and changing a dimension or variable of the reference model 50, such as moving the reference model 50 and/or changing the height of the reference model 50 (in FIG. 1 designated as moving and scaling the reference model).
  • the distance data between camera 11 and real object 31 is determined using the virtual reference model 50, particularly using its known parameters, such as its height, as actually displayed in the image 30.
  • the system assumes that the user has correctly placed the reference model 50 within the image, so that the proportions of the reference model 50 correspond to the proportions of the real objects 31 , 32 in the image 30 .
  • the distance between the camera 11 and the real object 31 can be derived taking into account the intrinsic camera parameters.
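  • As an illustration of this derivation, the following sketch assumes a simple pinhole camera model and an approximately upright, fronto-parallel reference model; the function and parameter names are illustrative and not taken from the patent:

```python
import numpy as np

def distance_from_reference_model(real_height_m, pixel_height, focal_length_px):
    """Estimate the camera-to-object distance with a pinhole camera model.

    real_height_m   -- known height of the virtual reference model (metres)
    pixel_height    -- height of the reference model as displayed in the image (pixels)
    focal_length_px -- vertical focal length from the intrinsic camera parameters (pixels)
    """
    # pinhole projection: pixel_height = focal_length_px * real_height_m / distance
    return focal_length_px * real_height_m / pixel_height

# example: a 1.0 m high reference model displayed 200 px high with f_y = 800 px
# corresponds to an allocated distance of 4.0 m
print(distance_from_reference_model(1.0, 200.0, 800.0))
```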
  • the pose of the camera 11 with respect to the real object 31 is then determined using the distance data from step 4.0 and the orientation data from step 2.0.
  • the reference model 50 according to its final position and orientation defines the position and orientation of the coordinate system 33 .
  • at least one virtual object 40 as shown in FIG. 2B or FIG. 5 is superimposed with the real environment in the first image 30 in accordance with the determined pose of the camera.
  • the reference model 50 may be blanked out, as shown in FIG. 2B .
  • the manipulation of the reference model 50 by the user may include at least one of the following steps: touching the reference model by means of a touchscreen with two fingers and moving the fingers away from each other to increase, or closer to each other to decrease, the size of the reference model (this might not be necessary when a distance measurement from a sensor is available); touching the reference model by means of a touchscreen with two fingers and rotating the fingers relative to one another in order to rotate the reference model around an axis perpendicular to the ground plane; or touching the reference model with at least one finger and moving the finger in order to move the reference model across a plane.
  • providing the means for allocating the distance of the camera 11 to the real object 31 includes providing a measurement device, such as a distance sensor, associated with the camera 11 for measuring at least one parameter indicative of the distance between the camera 11 and the real object 31 , wherein the distance data are generated on the basis of the at least one parameter.
  • the measurement device includes one of the following: a focussing unit of the camera providing a distance, a distance sensor, at least one time-of-flight camera, and/or a stereo camera or cameras.
  • a measurement device 14 may be associated with the camera 11 for measuring a distance between the camera and the object 31 .
  • the processing device 13 coupled with the camera 11 is arranged for generating distance data from the measured distance, and for determining an initial pose of the camera with respect to the object 31 using the distance data and the orientation data.
  • the initial pose can be refined in terms of the position on the ground plane and the rotation around the axis parallel to the plane's normal.
  • FIG. 4 shows an example of how to determine the pose of the camera 11 knowing the distance to the focused object (which is in FIG. 4 different from table 31 ) and two angles describing the normal to the ground plane provided by the orientation sensor.
  • the first matrix column is set as the gravity vector (which might be provided by the orientation sensor).
  • the second matrix column is set as an arbitrary vector parallel to the plane perpendicular to the gravity vector.
  • the third matrix column can be obtained using the cross product of the two other columns. All columns should be normalized.
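  • A minimal sketch of this matrix construction, assuming the gravity vector is provided by the orientation sensor in camera coordinates (illustrative names):

```python
import numpy as np

def rotation_from_gravity(gravity):
    """Build a rotation matrix as described above: first column along gravity,
    second column an arbitrary vector in the plane perpendicular to gravity,
    third column their cross product; all columns normalized."""
    g = np.asarray(gravity, dtype=float)
    c1 = g / np.linalg.norm(g)
    # pick any vector not parallel to gravity, then project it onto the
    # plane perpendicular to gravity
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, c1)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    c2 = helper - np.dot(helper, c1) * c1
    c2 /= np.linalg.norm(c2)
    c3 = np.cross(c1, c2)
    return np.column_stack((c1, c2, c3))

R = rotation_from_gravity([0.1, -9.7, 0.3])
print(np.round(R.T @ R, 6))  # identity: the columns form an orthonormal basis
```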
  • the method may further include providing a parameter indicative of a rotation of the camera around an axis which is perpendicular to the earth ground plane, e.g., provided by a rotation sensor or a compass (in FIG. 6 designated as z-rotation).
  • providing the means for allocating the distance of the camera 11 to the real object 31 may include providing at least one parameter indicative of a distance between two features of the real environment which are visible in the image 30, such as a distance between the stand of the table 31 and one edge of the cabinet 32, and using the intrinsic camera parameters. Another helpful example is providing the distance of two features located on the ground plane.
  • the parameter may be provided interactively by the user after an appropriate image processing algorithm shows detected features, with the user providing a distance from his knowledge of the real environment.
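  • One possible way to turn such a user-provided distance into distance data is sketched below for the case of two features located on the ground plane; it assumes the camera orientation relative to gravity and the intrinsic camera parameters are known, and all names are illustrative:

```python
import numpy as np

def camera_height_from_ground_features(p1, p2, real_distance, K, R_wc):
    """Estimate the camera's height above the ground plane from two image points
    p1, p2 (pixel coordinates) known to lie on the ground plane, given the
    real-world distance between them (real_distance, in metres).

    K    -- 3x3 intrinsic parameter matrix of the camera
    R_wc -- rotation taking camera coordinates into a gravity-aligned world
            frame with the z axis pointing up (e.g. from the orientation sensor)
    """
    K_inv = np.linalg.inv(K)

    def ground_point(p, height):
        # back-project the pixel into a world-frame ray (camera center at origin)
        ray = R_wc @ (K_inv @ np.array([p[0], p[1], 1.0]))
        # intersect the ray with the horizontal plane z = -height
        s = -height / ray[2]
        return s * ray

    # with a unit camera height the reconstructed separation scales linearly
    # with the true height, so the ratio of separations gives the true height
    d_unit = np.linalg.norm(ground_point(p1, 1.0) - ground_point(p2, 1.0))
    return real_distance / d_unit
```

  • The camera-to-feature distances, and hence the distance data, then follow from intersecting the same rays with the ground plane at the recovered height.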
  • the distance data and the orientation data are used to determine at least one placement coordinate system in the image 30 , such as coordinate system 33 , wherein the at least one virtual object 40 is superimposed with the real environment relative to the placement coordinate system 33 .
  • the system may not precisely determine the ground plane for allocating a corresponding coordinate system thereon without, e.g., a distance sensor directed toward the ground.
  • the placement coordinate system 33 is approximately the ground coordinate system.
  • the ground plane and, thus, the ground coordinate system may be determined by means of a second image of the same scenery taken from a different pose of the camera, as explained in more detail below.
  • the virtual model 40 may also serve as the reference model 50 as discussed above with respect to FIG. 2A .
  • the user may get an impression of how a real sofa corresponding to the virtual model 40 would look if placed in the room of the real world. Therefore, the user may use his or her mobile phone for taking an image and for augmenting the image with virtual objects of any kind, wherein the skilled person is aware of a wide variety of applications.
  • Another application could be, for example, placing objects in pictures of the real world, having a GPS position of the camera and an absolute orientation, and determining the pose of a virtual model 40 using this invention, allowing a global database of objects positioned on the earth, like GOOGLE® Earth, to be fed.
  • in FIG. 6, the process explained above is shown in more detail.
  • a distance of the reference model 50 is initially assumed. With moving the reference model 50 on a plane (with the plane being parallel to the ground plane) and/or with changing a dimension/parameter of the reference object according to the subjective impression of the user, the necessary translation data may be determined.
  • An algorithm following the “Orientation from lines” approach for determining the orientation of the camera instead of using an orientation sensor is explained in more detail with reference to FIG. 7 :
  • vanishing points are the images of the intersections of parallel lines.
  • let vx, vy, vz be the vanishing points corresponding to three mutually orthogonal directions of the scene.
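  • A minimal sketch of the underlying computation, assuming a calibrated camera and three detected vanishing points of mutually orthogonal scene directions (sign ambiguities of the recovered axes are ignored; names are illustrative):

```python
import numpy as np

def rotation_from_vanishing_points(vx, vy, vz, K):
    """Recover the camera orientation from the vanishing points of three
    mutually orthogonal scene directions (homogeneous pixel coordinates).
    Each back-projected vanishing point gives the direction of the
    corresponding world axis in camera coordinates."""
    K_inv = np.linalg.inv(K)
    dirs = []
    for v in (vx, vy, vz):
        d = K_inv @ np.asarray(v, dtype=float)
        dirs.append(d / np.linalg.norm(d))
    R = np.column_stack(dirs)
    # enforce a proper rotation (orthonormal, det = +1) via SVD,
    # since detected vanishing points are noisy
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R[:, 2] *= -1
    return R
```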
  • the method includes the step of generating a second image 60 by a camera (which may be the same camera which took the first image, or a different camera for which the intrinsic parameters are also known) capturing the real object (e.g. object 31 of image 30) of the real environment from a different pose.
  • in step A2, at least one respective feature from the first image 30 and the second image 60 is extracted, wherein the respective features are matched to provide at least one relation indicative of a correspondence between the first image and the second image (the relation is designated in FIG. 8, step A2, as the "fundamental matrix", which is defined up to a scale; for more details see chapter 11 of Multiple View Geometry in Computer Vision, Second Edition, Richard Hartley and Andrew Zisserman, Cambridge University Press, March 2004). The scale, designated as "alpha", is determined in step B5.
  • in step A1, second orientation data derived from the new pose when taking the second image might be used to reduce the number of needed features or to check the result of the fundamental matrix.
  • the placement coordinate system 33 of the first image may be transitioned to the second image.
  • the calculation of the fundamental matrix can of course be supported by the use of the orientation sensors, reducing the amount of necessary feature matches.
  • the essential matrix E can be obtained from the fundamental matrix F as E = K2^T F K1, and it can be factored as E = [t]x R (with [t]x the skew-symmetric cross-product matrix of t), where K1 is the intrinsic parameter matrix of the camera which acquired the first image, K2 is the intrinsic parameter matrix of the camera which acquired the second image, t is the translation between the two camera views, and R is the rotation.
  • the essential matrix can also be computed directly from point correspondences, and it can then be decomposed to get the translation t up to a scale and the rotation R (for more information see B. Horn: Recovering baseline and orientation from essential matrix, Journal of the Optical Society of America, January 1990). Now, if a plane is visible in both images, it is possible to compute the homography that transforms every point on this plane in image 1 to its corresponding point in image 2.
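  • A hedged sketch of how the rotation R and the translation direction t0 (up to the unknown scale "alpha") can be obtained from matched features, here using OpenCV; the point arrays and intrinsic matrices are placeholders and the cheirality disambiguation is only indicated:

```python
import numpy as np
import cv2

def relative_motion(pts1, pts2, K1, K2):
    """Estimate the rotation and the translation direction (up to scale)
    between the two camera views from matched pixel coordinates
    (pts1, pts2: Nx2 float arrays)."""
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    E = K2.T @ F @ K1                      # fundamental-to-essential relation
    R1, R2, t = cv2.decomposeEssentialMat(E)
    # Of the four candidate (R, t) combinations, the physically valid one is
    # the one that places the triangulated points in front of both cameras
    # (cheirality check, omitted here for brevity).
    t0 = t / np.linalg.norm(t)             # unit translation direction
    return (R1, R2), t0, inlier_mask
```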
  • the homography can be written (in normalized camera coordinates) as H = R + (t n^T)/d, where n is the normal vector, expressed in the first camera view, to the plane and d is the distance between the camera center of the first view and the plane; in pixel coordinates the intrinsic matrices K2 and K1^(-1) are additionally applied on either side. If we know that the two images contain the plane and that the plane has many feature points, the detection of the points lying on the plane is very easy, for example as the biggest set of points verifying the same homography.
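  • A short sketch of this plane detection by homography consensus, i.e. finding the largest set of correspondences that agree with a single homography; the RANSAC threshold and the array names are assumptions:

```python
import cv2

def dominant_plane_points(pts1, pts2, reproj_threshold=3.0):
    """Find the largest set of correspondences consistent with a single
    homography; these points are taken as lying on the dominant plane,
    e.g. the ground plane (pts1, pts2: Nx2 float arrays of matched pixels)."""
    H, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, reproj_threshold)
    on_plane = inlier_mask.ravel().astype(bool)
    return H, on_plane

# The homography H can afterwards be decomposed into candidate R, t/d and plane
# normals n, e.g. with cv2.decomposeHomographyMat(H, K) for a calibrated camera.
```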
  • Another possibility to detect a plane is to reconstruct the 3D-points out of correspondences and find the plane that includes a high number of 3D-points (and optionally being nearly parallel to the plane from the orientation in claim 1 , e.g. to reduce the search-space and improve robustness).
  • t0 is a vector collinear with the true translation, with a norm equal to 1; the unknowns are alpha and beta, and the solution is very easy to find (more equations than unknowns).
  • in step C1, the pose of the camera with respect to the real object in the first image is provided from the previous process with respect to the first image, and the pose of the camera with respect to the real object in the second image is determined using the pose of the camera with respect to the real object in the first image and the at least one relation, i.e. the fundamental matrix and "alpha" for determining the translation parameters tx, ty, tz of the pose, which consists of the parameters tx, ty, tz, rx, ry, rz (with "t" standing for translation and "r" standing for rotation parameters in the three different dimensions) defining the position and orientation of the camera.
  • the distance data and the first orientation data recorded with respect to the first image were used to determine at least one placement coordinate system (such as coordinate system 33 ) in the first image and a position and orientation thereof in the first image (as discussed above), wherein the fundamental matrix and “alpha” are used for allocating the placement coordinate system in the second image with a position and orientation corresponding to the respective position and orientation in the first image.
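  • A minimal sketch of this pose transfer, assuming a world-to-camera convention (x_cam = R X + t) for the pose in the first image and the scaled relative motion (R_rel, alpha t0) recovered as described above; all names are illustrative:

```python
import numpy as np

def pose_in_second_image(R1, t1, R_rel, t0, alpha):
    """Chain the pose of the first camera with respect to the placement
    coordinate system (R1, t1) with the relative motion between the two
    views (rotation R_rel, unit translation direction t0, scale alpha)
    to obtain the pose (R2, t2) of the second camera.

    Convention: x_cam = R @ X_world + t (world-to-camera)."""
    R2 = R_rel @ R1
    t2 = R_rel @ t1 + alpha * t0
    return R2, t2
```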
  • FIG. 2C shows the reference model 50 of FIG. 2A (positioned in accordance with the placement coordinate system 33) in the second image 60 (positioned in accordance with the "transferred" placement coordinate system 33), with a position and orientation corresponding to the respective position and orientation in the first image 30 (i.e. with the back of the reference model turned to the wall as in the first image 30).
  • the step of providing the at least one relation may further include one or more of the following steps:
  • providing at least one parameter indicative of a movement of the camera between taking the first image and the second image may include providing a first location parameter of the camera when taking the first image and a second location parameter of the camera when taking the second image, the location parameters being generated by a positioning system or detector, such as used with GPS. At least one of the first and second location parameters may be generated by at least one of a satellite locating system, wireless network positioning mechanisms, mobile phone cell location mechanisms and an elevation measuring device, such as an altimeter. Note that the measurement of one translation dimension (from tx, ty, tz) or of the norm of t is sufficient to solve for alpha.
  • providing at least one parameter indicative of a distance between two features of the real environment which are visible in both the first and second images (step B2).
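  • As an illustration of step B2, the sketch below recovers the scale "alpha" from a known real-world distance between two features visible in both images, by triangulating them with a unit-length baseline (OpenCV-based; all names are illustrative assumptions):

```python
import numpy as np
import cv2

def scale_from_feature_distance(p1_img1, p1_img2, p2_img1, p2_img2,
                                K1, K2, R_rel, t0, real_distance):
    """Recover the unknown scale "alpha" of the translation from the known
    real-world distance between two features visible in both images.

    Triangulation is done with a unit-length baseline; the reconstructed
    distance then relates linearly to the real one."""
    P1 = K1 @ np.hstack((np.eye(3), np.zeros((3, 1))))      # first camera
    P2 = K2 @ np.hstack((R_rel, t0.reshape(3, 1)))          # second camera, |t| = 1
    pts_img1 = np.array([p1_img1, p2_img1], dtype=float).T   # 2xN pixel coordinates
    pts_img2 = np.array([p1_img2, p2_img2], dtype=float).T
    X_h = cv2.triangulatePoints(P1, P2, pts_img1, pts_img2)  # 4xN homogeneous points
    X = X_h[:3] / X_h[3]                                     # 3xN Euclidean points
    reconstructed = np.linalg.norm(X[:, 0] - X[:, 1])
    return real_distance / reconstructed
```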
  • using 2D-3D correspondences created from the scale given by the reference model 50 (and optionally assuming the reference model to be on one plane) (step B3) may be applied for providing input to step B2.
  • a database of features, including 3D positions or feature distances, near the position of one of the cameras could be queried for feature correspondences between features in one of the images and features in the database.
  • this database could be created using sources of images taken at known poses, like GOOGLE® Streetview, matching features from overlapping images at two positions and using the mechanisms described above.
  • providing at least one parameter indicative of a distance between at least one feature of the real environment, which is matched in both images, and one camera (step B4), for example a feature extracted close to the center of projection of the camera or close to where the distance measurement unit is aiming.
  • in step C1, after having placed the placement coordinate system in the second image with a position and orientation corresponding to the respective position and orientation in the first image, the pose with respect to the placement coordinate system in both images is determined. Further, the 3D position of all matched feature correspondences in both images is determined.
  • in step C2, either using the 3D positions of features or using the homography constraint described above, the main plane, e.g. the ground plane, can be determined.
  • the placement coordinate system in the first image is positioned to be on the ground plane, e.g. by moving it along the plane's normal.
  • the placement coordinate system 33b is moved and positioned on the ground plane such that the projection of the virtual object in accordance with the (original) placement coordinate system 33a in the first image (object 71a) substantially equals or comes near (i.e. substantially corresponds to) the projection of the virtual object in accordance with the placement coordinate system 33b moved and positioned on the ground plane (object 71b).
  • the process may continue with optionally scaling the virtual object 71b (i.e. changing a dimension such as its height) so that it corresponds to the original dimension of the virtual object (object 71a) which was placed by the user originally in the first image 30.
  • the virtual object 71b is now superimposed with the real environment in accordance with the moved placement coordinate system on the ground plane in a second image 60, wherein FIG. 12 shows that the placement coordinate system as assumed in the first image 30 may be displaced from the actual ground plane.
  • the superimposing of virtual objects 71a, 71b may be performed in the background, i.e. is not displayed on the display screen, but is only superimposed in the algorithm for determining the final placement coordinate system positioned correctly on the ground plane.
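  • A geometric sketch of one way this repositioning and rescaling can be realized: the origin of the placement coordinate system is slid along the viewing ray through the camera center until it meets the detected ground plane, which leaves its projection in the image unchanged, and the virtual object can be rescaled by the same factor so that its projected size stays approximately what the user placed (all names are illustrative assumptions):

```python
import numpy as np

def move_origin_to_ground_plane(origin, camera_center, plane_point, plane_normal):
    """Intersect the ray from the camera center through the current placement
    origin with the ground plane. Returns the new origin and the factor by
    which the camera-to-origin distance changes; rescaling the virtual object
    by this factor keeps its projected size approximately unchanged."""
    ray = origin - camera_center
    n = plane_normal / np.linalg.norm(plane_normal)
    s = np.dot(plane_point - camera_center, n) / np.dot(ray, n)
    new_origin = camera_center + s * ray
    return new_origin, s
```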
  • This ground plane may be used in any further image for superimposing any virtual object with the real environment, irrespective of the respective perspective of the further image.

Abstract

Method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring/augmented reality application that includes generating a first image by the camera capturing a real object of a real environment, generating first orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the first image for finding and determining features which are indicative of an orientation of the camera, allocating a distance of the camera to the real object, generating distance data indicative of the allocated distance, and determining the pose of the camera with respect to a coordinate system related to the real object of the real environment using the distance data and the first orientation data. The method may be performed with reduced processing requirements and/or at a higher processing speed in mobile devices such as mobile phones having a display, camera and orientation sensor.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The invention is directed to methods and systems for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring, e.g. for geospatial databases, or augmented reality application, wherein at least one or two images are generated by the camera capturing a real object of a real environment. According to the determined pose of the camera, the image or images may be augmented with virtual objects according to the authoring or augmented reality technology.
  • 2. Description of the Related Art
  • Applications are known which augment an image or images generated by at least one camera with virtual objects using the so-called Augmented Reality (AR) technology. In such an application, a camera coupled to a processing unit such as a microprocessor takes a picture of a real environment, wherein the real environment is displayed on a display screen and virtual objects of any kind may be displayed in addition, so that the real environment displayed on the display screen is augmented with virtual objects. In such an application, in order to augment the image with virtual objects, the microprocessor needs to determine the position and orientation (the so-called pose) of the camera with respect to at least one object of the real environment, so that it can correctly augment the captured image with any virtual objects. In this context, correctly augmenting the captured image with any virtual objects means that the virtual objects are displayed so that they fit into the scene of the image in a perspectively and dimensionally correct fashion.
  • A known method for determining the pose of the camera uses a virtual reference model of a corresponding part of the real environment captured by the camera, wherein the virtual reference model is projected into the image, using an initially known approximation of the pose, and superimposed with the corresponding part of the real environment. A tracking algorithm of the image processing then uses the virtual reference model to determine the pose of the camera with respect to the real environment, for example by feature detection and comparison between the reference model and the corresponding part of the real environment. Another known method for determining the pose of the camera uses a marker that is placed in the real environment and captured by the camera when taking the image. A tracking algorithm of the image processing then uses the marker to determine the pose of the camera with respect to the real environment, particularly by analysing of the marker in the image using known image processing methods.
  • A disadvantage of the above-mentioned methods is that a virtual reference model has to be conceived and stored first, which is very time- and resource-consuming and almost impossible if the AR technology is to be used spontaneously in any real environment. With respect to using a marker, the user has to place the marker in the real environment in an initial step before taking the image, which is also time-consuming and troublesome. Particularly for these reasons, these methods can hardly be used in connection with consumer products, such as mobile phones having an integrated camera and display, or other mobile devices.
  • Moreover, from the prior art there are known so-called structure from motion and simultaneous localization and tracking (SLAM) methods. All these methods serve for determining the position and orientation (pose) of a camera in relation to the real world or part of the real world. If there is no pre-information available, in some cases it is not possible to determine the absolute pose of the camera in relation to the real world or part of the real world, but only the changes of the camera pose from a particular point in time. In the above-mentioned applications, SLAM methods may be used to get an orientation from planar points, but a disadvantage is that one cannot be sure whether the ground plane or some other plane has been identified. Further, with such methods one may only get an initial scale by translating the camera, e.g. along a distance of 10 cm, and communicating the covered distance to the system. Moreover, SLAM methods need at least two images (so-called frames) taken at different camera poses, and a calibrated camera.
  • Another known technology is disclosed in "Initialisation for Visual Tracking in Urban Environments", Gerhard Reitmayr, Tom W. Drummond, Engineering Department, University of Cambridge, Cambridge, UK (Reitmayr, G. and Drummond, T. W. (2007) Initialisation for visual tracking in urban environments. In: 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2007), 13-16 Nov. 2007, Nara, Japan). The model-based tracking system is integrated with a sensor pack measuring 3D rotation rates, 3D acceleration and 3D magnetic field strength to be more robust against fast motions and to have an absolute orientation reference through gravity and the magnetic field sensor. Sensor fusion is implemented with a standard extended Kalman filter using a constant velocity model for the camera pose dynamics. Different inputs such as a camera pose from the tracking system or measurements from the sensor pack are incorporated using individual measurement functions in a SCAAT-style approach (Greg Welch and Gary Bishop. Scaat: incremental tracking with incomplete information. In Proc. SIGGRAPH '97, pages 333-344, New York, N.Y., USA, 1997. ACM Press/Addison-Wesley Publishing Co.).
  • Another technique is disclosed in “Robust Model-based Tracking for Outdoor Augmented Reality”, Gerhard Reitmayr, Tom W. Drummond (Gerhard Reitmayr and Tom Drummond, Going Out: Robust Model-based Tracking for Outdoor Augmented Reality Proc. IEEE ISMAR'06, 2006, Santa Barbara, Calif., USA.) The tracking system relies on a 3D model of the scene to be tracked. In former systems the 3D model describes salient edges and occluding faces. Using a prior estimate of camera pose, this 3D model is projected into the camera's view for every frame, computing the visible parts of edges.
  • Another application, known as "mydeco" and available on the Internet, allows an image showing a real environment to be augmented with virtual objects. However, this system requires the user to set the rotation of the ground plane, which is quite cumbersome.
  • In "A Lightweight Approach for Augmented Reality on Camera Phones using 2D Images to Simulate 3D", Petri Honkamaa, Jani Jaeppinen, Charles Woodward, ACM International Conference Proceeding Series, Vol. 284, Proceedings of the 6th international conference on Mobile and ubiquitous multimedia, Oulu, Finland, pages 155-159, 2007, ISBN 978-1-59593-916-6, it is described that using manual interaction for the initialization purpose, particularly by means of a reference model and the user's manipulation thereof, is an appropriate way, as the tracking initialization is an easy task for the user, whereas automating it would require pre-knowledge of the environment, a lot of processing power and/or additional sensors. Furthermore, this kind of interactive solution is independent of the environment; it can be applied "anytime, anywhere".
  • In U.S. Pat. No. 7,002,551 there is disclosed a method and system for providing an optical see-through Augmented Reality modified-scale display. It includes a sensor suite that includes a compass, an inertial measuring unit, and a video camera for precise measurement of a user's current orientation and angular rotation rate. A sensor fusion module may be included to produce a unified estimate of the user's angular rotation rate and current orientation to be provided to an orientation and rate estimate module. The orientation and rate estimate module operates in a static or dynamic (prediction) mode. A render module receives an orientation; and the render module uses the orientation, a position from a position measuring system, and data from a database to render graphic images of an object in their correct orientations and positions in an optical display. The position measuring system is effective for position estimation for producing the computer generated image of the object to combine with the real scene, and is connected with the render module. An example of the position measuring system is a differential GPS. Since the user is viewing targets that are a significant distance away (as through binoculars), the registration error caused by position errors in the position measuring system is minimized.
  • Therefore, it would be beneficial to provide a method and a system for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application which may be performed with reduced processing requirements and/or at a higher processing speed and, more particularly, to provide methods of authoring 3D objects without knowing much about the environment in advance and, where necessary, being able to integrate user-interaction to serve the pose estimation.
  • BRIEF SUMMARY OF THE INVENTION
  • Embodiments of the invention include the following aspects.
  • In a first aspect there is provided a method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application, comprising the following steps: Generating at least one first image by the camera capturing a first object of a real environment, generating first orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the first image for finding and determining features which are indicative of an orientation of the camera, providing a means for allocating a distance of the camera to the first object of the real environment displayed in the first image, the means generating distance data which are indicative of the allocated distance of the camera to the first object, and determining the pose of the camera with respect to a coordinate system related to the first object of the real environment using the distance data and the first orientation data.
  • In a second aspect, there is provided a method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application, comprising the following steps: Generating at least one image by the camera capturing an object of a real environment, displaying the image on an image displaying means, generating orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the image for finding features which are indicative of an orientation of the camera, providing a virtual reference model which is displayed superimposed with the real environment in the image and generating distance data from the reference model, the distance data being indicative of an allocated distance of the camera to the object, and determining the pose of the camera with respect to a coordinate system related to the object of the real environment using the distance data and the orientation data.
  • According to another aspect, there is provided a method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application, comprising the following steps: Generating at least one image by the camera capturing an object of a real environment, generating orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the image for finding and determining features which are indicative of an orientation of the camera, providing a measurement device associated with the camera for measuring at least one parameter indicative of the distance between the camera and the object, and determining the pose of the camera with respect to a coordinate system related to the object of the real environment on the basis of the at least one parameter and the orientation data.
  • According to another aspect, there is provided a method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application, comprising: generating at least one first image by the camera capturing a real object of a real environment, generating first orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the first image for finding and determining features which are indicative of an orientation of the camera or by user-interaction, providing a means for allocating a distance of the camera to the real object of the real environment displayed in the first image, the means generating distance data which are indicative of the allocated distance of the camera to the real object, and determining the pose of the camera with respect to a coordinate system related to the real object of the real environment using the distance data and the first orientation data. The method proceeds with generating a second image by a camera capturing the real object of the real environment, extracting at least one respective feature from the first image and the second image and matching the respective features to provide at least one relation indicative of a correspondence between the first image and the second image, providing the pose of the camera with respect to the real object in the first image, and determining the pose of the camera with respect to a coordinate system related to the real object in the second image using the pose of the camera with respect to the real object in the first image and the at least one relation, and extracting at least one respective feature from the first image and the second image for determining a ground plane in both the first and the second images and moving the placement coordinate system to be positioned on the ground plane.
  • A system for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality system, may comprise the following components and features: at least one camera for generating at least one image capturing at least one object of a real environment, an image displaying device for displaying the image, means for generating orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the image for finding and determining features which are indicative of an orientation of the camera, particularly at least one orientation sensor associated with the camera for generating orientation data of the camera, and a processing device coupled with the camera and with the image displaying device. The processing device is arranged to perform the following steps in interaction with the camera and the image displaying device: providing a virtual reference model which is displayed superimposed with the real environment in the image and generating distance data from the reference model, the distance data being indicative of an allocated distance of the camera to the object, receiving user's instructions via a user interface for manipulation of the reference model by the user placing the reference model at a particular position within the at least one image, and determining the pose of the camera with respect to a coordinate system related to the at least one object of the real environment using the distance data and the orientation data.
  • Another system for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality system may comprise the following components and features: at least one camera for generating at least one image capturing at least one object of a real environment, means for generating orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the image for finding and determining features which are indicative of an orientation of the camera, particularly at least one orientation sensor associated with the camera for generating orientation data of the camera, a measurement device associated with the camera for measuring a distance between the camera and the at least one object, and a processing device coupled with the camera, wherein the processing device is arranged for generating distance data from the measured distance, and for determining the pose of the camera with respect to a coordinate system related to the at least one object of the real environment using the distance data and the orientation data.
  • For example, the system is included in a mobile device, wherein the mobile device may be a mobile telephone.
  • The invention may use the fact that a number of mobile phones today offer various components required for Augmented Reality (AR), such as high resolution cameras and displays, accelerometers, orientation sensors, GPS, and wireless connectivity by WLAN and/or radio links. Further aspects, embodiments and advantageous features of the invention are evident from the following disclosure of embodiments.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention will now be described in more detail in conjunction with the accompanying drawings that illustrate various embodiments of the invention.
  • FIG. 1 shows a flowchart illustration of a method according to an embodiment of the invention using a reference model.
  • FIG. 2 shows schematic illustrations of an embodiment using a reference model in a process according to FIG. 1 as viewed from the user.
  • FIG. 3 shows a schematic illustration of an embodiment of a system and exemplary scenery according to the invention.
  • FIG. 4 shows another schematic illustration of an embodiment of a system and exemplary scenery according to the invention.
  • FIG. 5 shows a schematic illustration of the scenery of FIG. 3 augmented with a virtual object.
  • FIG. 6 shows a flowchart illustration of an embodiment of a method according to the invention for calculating a placement coordinate system in an image taken by a camera.
  • FIG. 7 shows a diagram illustrating the calculation of an orientation from lines.
  • FIG. 8 shows a flowchart illustration of another embodiment of a method according to the invention for calculating a ground coordinate system when having two or more images taken by a camera.
  • FIGS. 9-12 show exemplary images when performing a process as shown in FIG. 8.
  • DETAILED DESCRIPTION OF THE INVENTION
  • In FIGS. 3, 4 and 5 there is shown a schematic illustration of an embodiment of a system and an exemplary scenery according to the invention. Particularly, FIG. 3 shows a system 1 in which a user (not shown) holds a mobile device 10 which incorporates or is coupled with a camera 11 for generating at least one image 30 of the real world, for example containing the real objects 31, 32 as shown. According to a particular example, the real objects 31, 32 may be a table and a cabinet which are placed in a room having a ground plane 35, and the camera 11 takes an image of the real environment to be displayed on a display screen 20. The ground plane itself could also be considered to be a real object. After determining the pose, the real environment is provided with a coordinate system 33, such as shown in FIG. 3. Further, the camera 11 is coupled with an image displaying means 20, such as a touchscreen that is incorporated in the mobile device 10. However, any other image displaying means may be used which is suitable for displaying an image to a user, such as a head mounted display or any other type of mobile or stationary display device. Furthermore, a processing device 13, which may be for example a microprocessor, is connected with or incorporated in the mobile device 10. In the present example, the mobile device 10 also incorporates or is coupled with an orientation sensor 12. In a particular application, the mobile device may be a mobile telephone having an integrated orientation sensor 12, camera 11, touchscreen 20, and processing device 13. However, for the purposes of the invention, the components may also be distributed and/or used in different applications. Further, they may be coupled with each other in wired or wireless fashion.
  • The system 1 is used for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality system. On the displaying means 20, an image of the real environment may be augmented with a virtual object, such as shown in FIG. 5, by displaying the virtual object 40 superimposed with the real environment in the image 30 in accordance with the pose of the camera. The pose (including position and orientation data) of the camera may be, for example, the pose with respect to the real object 31. To this end, the pose may be determined with respect to the coordinate system 33 as shown in FIG. 3, which in turn is associated with the respective object 31.
  • In the following, an embodiment of a process according to the invention shall be described in connection with the flow chart as shown in FIG. 1 and in FIG. 6. Particularly, the processing device 13 is arranged to perform the following steps in interaction with the camera 11 and the image displaying device 20:
  • In step 1.0, the camera 11 takes a first image of the real environment, for example as shown in FIG. 3. At least one orientation sensor 12 is associated with the camera 11 for generating orientation data of the camera in step 2.0. It should be noted, however, that an orientation sensor is not strictly necessary. Rather, the orientation data may alternatively or additionally be generated from an algorithm which analyses the first image for finding and determining features which are indicative of an orientation of the camera. The skilled person will appreciate that such algorithms are known in the art, such as "Orientation from lines" as disclosed in ZuWhan Kim, "Geometry of Vanishing Points and its Application to External Calibration and Realtime Pose Estimation" (Jul. 1, 2006), Institute of Transportation Studies, Research Reports, Paper UCB-ITS-RR-2006-5.
  • FIGS. 2A-2D show a scene that is different from the previously described scene of FIGS. 3-5. In FIG. 2, the images shown in FIGS. 2A, 2B correspond to a first image taken by a camera, such as the first image 30 as previously described. FIGS. 2C, 2D display a respective second image 60 taken by the same or a different camera, as described in more detail below. In order to reflect the correspondence of the images with a respective first and second image, the same reference numerals for a first image (30) and a second image (60) are used in connection with the different scenes.
  • Generally, according to the invention, a means for allocating a distance of the camera 11 to the real object 31 displayed in the image 30 generates distance data that is indicative of the allocated distance of the camera 11 to the real object 31. According to step 3.0 in FIG. 1, providing the means for allocating the distance of the camera to the real object includes providing a virtual reference model that is displayed superimposed with the real environment in the first image 30. The initial distance of the reference object can be either a fixed value (e.g. 2.5 meters) or a distance provided by a distance sensor. For example, as shown in FIG. 2A, a reference model 50 is placed in the image 30 in accordance with an orientation provided by the orientation sensor 12. The dimensions of the reference model 50 (which may be any kind of virtual model), such as its height, width or depth, are known to the system.
  • One implementation may include: translating the reference model parallel to the oriented plane, applying a rotation to the reference model around an axis parallel to the plane's normal, or moving the reference model along the line defined by the camera center and the center of mass of the already placed object (which allows increasing or decreasing the size of the reference object in the image).
  • In connection with step 4.0, the system receives user instructions for manipulation of the reference model 50 by the user, the manipulation of the reference model including at least one of moving the reference model 50 to a particular position within the first image 30 on a plane, with the plane defined at least in part by the orientation data, and changing a dimension or variable of the reference model 50, such as moving the reference model 50 and/or changing the height of the reference model 50 (designated in FIG. 1 as moving and scaling the reference model).
  • In a next step, the distance data between camera 11 and real object 31 is determined using the virtual reference model 50, particularly using its known parameters, such as its height, as actually displayed in the image 30. In this regard, the system assumes that the user has correctly placed the reference model 50 within the image, so that the proportions of the reference model 50 correspond to the proportions of the real objects 31, 32 in the image 30. From the dimensions of the reference model 50 the distance between the camera 11 and the real object 31 can be derived, taking into account the intrinsic camera parameters. The pose of the camera 11 with respect to the real object 31 (or with respect to the coordinate system 33 which is associated with the object 31) is then determined using the distance data from step 4.0 and the orientation data from step 2.0. The reference model 50 according to its final position and orientation defines the position and orientation of the coordinate system 33. According to step 5.0, at least one virtual object 40 as shown in FIG. 2B or FIG. 5 is superimposed with the real environment in the first image 30 in accordance with the determined pose of the camera. The reference model 50 may be blanked out, as shown in FIG. 2B.
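  • As a rough illustration of this step, the sketch below estimates the camera-to-model distance from the apparent size of the reference model under a simple pinhole projection; the function name, the example numbers and the intrinsic matrix are illustrative assumptions and not part of the described system.

```python
import numpy as np

def distance_from_reference_model(model_height_m, projected_height_px, K):
    """Estimate the camera-to-object distance from the apparent size of the
    reference model, assuming a simple pinhole projection."""
    f_y = K[1, 1]  # focal length in pixels along the vertical image axis
    # Pinhole relation: projected_height_px ~= f_y * model_height_m / distance
    return f_y * model_height_m / projected_height_px

# Illustrative numbers: a 0.8 m tall reference model drawn 160 px tall
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
print(distance_from_reference_model(0.8, 160.0, K))  # -> 4.0 (meters)
```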
  • The manipulation of the reference model 50 by the user may include at least one of the following steps: touching the reference model by means of a touchscreen, using two fingers and moving the fingers away from each other to increase the size and moving the fingers closer to each other to decrease the size of the reference model (this might not be necessary when a distance measurement from a sensor is available), touching the reference model by means of a touchscreen with two fingers and rotating the fingers relative to one another in order to rotate the reference model around an axis perpendicular to the ground plane, and touching the reference model with at least one finger and moving the finger in order to move the reference model across a plane.
  • Another interaction possibility, which may be activated instead of the interaction method above when the orientation sensor indicates that the camera's view axis is oriented nearly parallel to the ground or looking upwards, is one in which manipulation of the reference model by the user may include at least one of the following steps (see the gesture-mapping sketch after this list):
      • touching the reference model by means of a touchscreen, using two fingers and moving the fingers away from each other to move the model closer to the viewer, the movement taking place on the assumed ground plane and parallel to the intersection of the assumed ground plane with the plane containing the camera center, x in the camera coordinate system being zero.
      • touching the reference model by means of a touchscreen, using two fingers and moving the fingers closer to each other to move the model farther from the viewer, the movement taking place on the assumed ground plane and parallel to the intersection of the assumed ground plane with the plane containing the camera center, x in the camera coordinate system being zero.
      • touching the reference model by means of a touchscreen with two fingers and rotating the fingers to one another in order to rotate the reference model around an axis perpendicular to the assumed ground plane.
      • touching the reference model with at least one finger and moving the finger up and down in order to move the reference model parallel to the normal of the assumed ground plane.
      • touching the reference model with at least one finger and moving the finger left and right in order to move the reference model on the assumed ground plane and parallel to the intersection of the assumed ground plane with the plane containing the camera center, y in the camera coordinate system being zero.
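  • A minimal sketch of how such two-finger gestures could be mapped to manipulations of the reference model is given below; the function and parameter names are hypothetical, and the mapping (pinch to scale or depth, twist to rotation about the plane normal) is only one possible realization of the interactions listed above.

```python
import numpy as np

def apply_two_finger_gesture(model_scale, model_yaw_rad,
                             d_prev, d_curr, angle_prev, angle_curr):
    """Map a two-finger touch gesture to reference-model manipulation.

    d_prev/d_curr         -- finger distance before/after the move
    angle_prev/angle_curr -- angle of the line joining the fingers before/after
    Pinching apart scales the model up (or, in the alternative mode, moves it
    closer); rotating the fingers spins the model around the plane normal.
    """
    model_scale *= d_curr / d_prev
    model_yaw_rad += angle_curr - angle_prev
    return model_scale, model_yaw_rad
```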
  • According to another embodiment, providing the means for allocating the distance of the camera 11 to the real object 31 includes providing a measurement device, such as a distance sensor, associated with the camera 11 for measuring at least one parameter indicative of the distance between the camera 11 and the real object 31, wherein the distance data are generated on the basis of the at least one parameter. For example, the measurement device includes one of the following: a focussing unit of the camera providing a distance, a distance sensor, at least one time-of-flight camera, and/or a stereo camera or cameras.
  • According to the embodiment of FIG. 3, a measurement device 14 may be associated with the camera 11 for measuring a distance between the camera and the object 31. The processing device 13 coupled with the camera 11 is arranged for generating distance data from the measured distance, and for determining an initial pose of the camera with respect to the object 31 using the distance data and the orientation data. The initial pose can be refined in terms of the position on the ground plane and the rotation around the axis parallel to the plane's normal. In this regard, FIG. 4 shows an example of how to determine the pose of the camera 11 knowing the distance to the focused object (which is in FIG. 4 different from table 31) and two angles describing the normal to the ground plane provided by the orientation sensor. One possibility to create the rotation matrix describing the plane's rotation relative to the camera is to set the first matrix column as the gravity vector (which might be provided by the orientation sensor). The second matrix column is set as an arbitrary vector parallel to the plane, i.e. perpendicular to the gravity vector. The third matrix column can be obtained using the cross product of the two other columns. All columns should be normalized.
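  • A minimal sketch of this column-wise rotation construction and of the resulting initial pose is given below, assuming the gravity vector is expressed in camera coordinates and the measured distance is applied along the camera's viewing (z) axis; the function names and the choice of helper vector are illustrative assumptions.

```python
import numpy as np

def plane_rotation_from_gravity(gravity):
    """Rotation matrix describing the ground plane's orientation relative to
    the camera: first column the gravity direction, second column a vector in
    the plane, third column their cross product, all normalized."""
    g = gravity / np.linalg.norm(gravity)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, g)) > 0.9:       # avoid a helper nearly parallel to g
        helper = np.array([0.0, 1.0, 0.0])
    a = np.cross(g, helper)                # lies in the plane, perpendicular to g
    a /= np.linalg.norm(a)
    b = np.cross(g, a)                     # cross product of the other two columns
    b /= np.linalg.norm(b)
    return np.column_stack((g, a, b))

def initial_pose(gravity, distance):
    """Initial pose: plane rotation plus a translation of 'distance' along the
    camera's viewing direction (assumed here to be the z axis)."""
    T = np.eye(4)
    T[:3, :3] = plane_rotation_from_gravity(gravity)
    T[:3, 3] = np.array([0.0, 0.0, distance])
    return T
```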
  • In an embodiment of the invention, the method may further include providing a parameter indicative of a rotation of the camera around an axis which is perpendicular to the earth ground plane, e.g., provided by a rotation sensor or a compass (designated in FIG. 6 as z-rotation). This makes the second matrix column, as described above, no longer arbitrary. Models which are related to the earth's coordinate system, e.g. a sign showing north, could then be initially oriented correctly.
  • In a further approach, providing the means for allocating the distance of the camera 11 to the real object 31 may include providing at least one parameter indicative of a distance between two features of the real environment which are visible in the image 30, such as a distance between the stand of the table 31 and one edge of the cabinet 32, and using the intrinsic camera parameters. Another helpful example is providing the distance of two features located on the ground plane. The parameter may be provided interactively by the user: an appropriate image processing algorithm shows detected features, and the user provides a distance based on his or her knowledge of the real environment.
  • Any of the above approaches for providing the means for allocating the distance of the camera 11 to the real object 31 may also be combined in any suitable manner.
  • As shown in FIG. 3, the distance data and the orientation data are used to determine at least one placement coordinate system in the image 30, such as coordinate system 33, wherein the at least one virtual object 40 is superimposed with the real environment relative to the placement coordinate system 33. When having only one image, the system may not precisely determine the ground plane for allocating a corresponding coordinate system thereon, without, e.g. a distance sensor directed toward the ground. In this regard, it is assumed that the placement coordinate system 33 is approximately the ground coordinate system. The ground plane and, thus, the ground coordinate system may be determined by means of a second image of the same scenery taken from a different pose of the camera, as explained in more detail below.
  • With respect to FIG. 5, showing a virtual model 40 superimposed with the real environment, it should be noted that the virtual model 40 may also serve as the reference model 50 as discussed above with respect to FIG. 2A.
  • With superimposing the virtual model 40 in the room as shown in the image 30, the user may get an impression of how a real sofa corresponding to the virtual model 40 would look if placed in the room of the real world. Therefore, the user may use his or her mobile phone for taking an image and for augmenting the image with virtual objects of any kind, wherein the skilled person is aware of a wide variety of applications. Another application, for example, could be placing objects in pictures of the real world, having a GPS position of the camera and an absolute orientation, and determining the pose of a virtual model 40 using this invention, which allows feeding a global database of objects positioned on the earth, like GOOGLE® Earth.
  • In FIG. 6, the process explained above is shown in more detail. In the initializing step, a distance of the reference model 50 is initially assumed. With moving the reference model 50 on a plane (with the plane being parallel to the ground plane) and/or with changing a dimension/parameter of the reference object according to the subjective impression of the user, the necessary translation data may be determined. An algorithm following the "Orientation from lines" approach for determining the orientation of the camera instead of using an orientation sensor is explained in more detail with reference to FIG. 7:
  • Vanishing points are the images of the intersection of parallel lines. Let vx, vy, vz be the vanishing points. vx is the image of "the point at infinity Ix = (1,0,0,0)" (the x axis), vy is the image of "the point at infinity Iy = (0,1,0,0)" (the y axis), and vz is the image of "the point at infinity Iz = (0,0,1,0)" (the z axis). Further, let the homogeneous coordinates of vx be vx = [u1, v1, w1] = K*[R t]*Ix. It is possible to get the first column of the matrix R using inv(K)*vx and then normalizing it to 1. Further, let the homogeneous coordinates of vy be vy = [u2, v2, w2] = K*[R t]*Iy. It is possible to get the second column of the matrix R using inv(K)*vy and then normalizing it to 1. Finally, let the homogeneous coordinates of vz be vz = [u3, v3, w3] = K*[R t]*Iz. It is possible to get the third column of the matrix R using inv(K)*vz and then normalizing it to 1. If only two points among vx, vy and vz are available, it is still possible to compute the third using the cross product, e.g. vz = vx × vy. See Z. Kim, "Geometry of vanishing points and its application to external calibration and realtime pose estimation," Inst. Transp. Stud., Res. Rep. UCB-ITS-RR-2006-5, Univ. Calif., Berkeley, Calif., 2006 for more details.
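  • The column-wise recovery of R just described can be sketched as follows. This is a bare-bones illustration that ignores the sign ambiguity of vanishing points (each recovered column is only defined up to sign and may need to be disambiguated); the function name is an assumption.

```python
import numpy as np

def rotation_from_vanishing_points(K, vx=None, vy=None, vz=None):
    """Recover the camera rotation from the vanishing points of the scene's
    principal directions (homogeneous pixel coordinates). Any one vanishing
    point may be omitted and is then completed via the cross product."""
    K_inv = np.linalg.inv(K)
    cols = []
    for v in (vx, vy, vz):
        if v is None:
            cols.append(None)
        else:
            r = K_inv @ np.asarray(v, dtype=float)
            cols.append(r / np.linalg.norm(r))   # normalize the column to 1
    missing = [i for i, c in enumerate(cols) if c is None]
    if len(missing) == 1:
        i = missing[0]
        j, k = (i + 1) % 3, (i + 2) % 3
        cols[i] = np.cross(cols[j], cols[k])     # e.g. r_z = r_x x r_y
    return np.column_stack(cols)
```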
  • In accordance with FIG. 8, a method will be explained in more detail for calculating a ground coordinate system on the basis of at least one second image or more images of the same scenery but taken from a different pose.
  • The method includes the step of generating a second image 60 by a camera (which may be the same camera which took the first image, or a different camera whose intrinsic parameters are also known) capturing the real object (e.g. object 31 of image 30) of the real environment from a different pose.
  • In a further step, at least one respective feature is extracted from each of the first image 30 and the second image 60, wherein the respective features are matched to provide at least one relation indicative of a correspondence between the first image and the second image. The relation is designated in FIG. 8 in step A2 as "fundamental matrix", which is defined up to a scale (for more details see chapter 11 of Multiple View Geometry in Computer Vision, Second Edition, Richard Hartley and Andrew Zisserman, Cambridge University Press, March 2004); the scale, designated as "alpha", is determined in step B5. According to step A1, second orientation data derived from the new pose when taking the second image might be used to reduce the amount of needed features or to check the result of the fundamental matrix. As a result, the placement coordinate system 33 of the first image may be transitioned to the second image.
  • The calculation of the fundamental matrix can of course be supported by the use of the orientation sensors, reducing the amount of necessary feature matches.
  • In the following, the calculation of the missing “alpha” is explained in more detail:
  • From point correspondences between the first image (image1) and the second image (image2), one can build the Fundamental matrix:

  • F = K2^(-T) [t]_x R K1^(-1)
  • where K1 is the intrinsic parameter matrix of the camera which acquired the first image, and K2 is the intrinsic parameter matrix of the camera which acquired the second image; t is the translation between the two camera views, [t]_x is its skew-symmetric cross-product matrix, and R is the rotation. Let p1 and p2 be two corresponding points in image1 and image2; they verify:

  • p2^T F p1 = 0
  • So F is defined up to a scale. Having K1, K2 and F it is possible to get the essential matrix

  • E = [t]_x R
  • up to a scale. The essential matrix can also be computed directly from point correspondences. From E it is then possible to get the translation t up to a scale and the rotation R (for more information see B. Horn: Recovering Baseline and Orientation from Essential Matrix, Journal of the Optical Society of America, January 1990). Now if a plane is visible in both images it is possible to compute the homography that transforms every point on this plane in image1 to its corresponding point in image2. The homography can be written:

  • G = K2 (R + (t/d) n^T) K1^(-1)
  • where n is the normal vector, expressed in the first camera view, to the plane and d is the distance between the camera center of the first view and the plane. If we know that the two images contain the plane and that the plane has many feature points, the detection of the points lying on the plane is very easy, for example: the biggest set of points that verify the same homography. Another possibility to detect a plane is to reconstruct the 3D points out of correspondences and find the plane that includes a high number of 3D points (and optionally being nearly parallel to the plane from the orientation in claim 1, e.g. to reduce the search space and improve robustness). From the homography G, we can get R (again), we can get t/d and we can get n (using the algorithm of "Motion and Structure From Motion in a Piecewise Planar Environment" by O. D. Faugeras and F. Lustman, International Journal of Pattern Recognition and Artificial Intelligence, Vol. 2, No. 3 (1988), pp. 485-508).
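  • As a rough sketch of how these quantities might be obtained in practice (here with OpenCV, which is not mandated by the description), the following assumes both images were taken with the same intrinsic matrix K; function and variable names are illustrative.

```python
import cv2
import numpy as np

def relative_pose_up_to_scale(pts1, pts2, K):
    """Estimate F, the essential matrix E and the relative motion (R, t with
    unit-norm t, i.e. up to the unknown scale 'alpha') from matched points
    pts1/pts2 (Nx2 float arrays), assuming one common intrinsic matrix K."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    E = K.T @ F @ K                                   # E = K^T F K
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)    # t is returned with unit norm
    return F, E, R, t

def plane_from_homography(pts1_on_plane, pts2_on_plane, K):
    """Decompose the homography induced by a world plane into candidate
    rotations, scaled translations t/d and plane normals n."""
    H, _ = cv2.findHomography(pts1_on_plane, pts2_on_plane, cv2.RANSAC)
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return rotations, translations, normals
```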
  • From here we can see the problem to be solved: having the scale of t will give us d, and having d will give us the scale of t ("alpha"). The translation between the two views (odometry in cars, GPS, . . . ) will give us the scale of t. The distance of one feature p1 gives us the depth z of this feature, and we know that:

  • z p1 = K1 [x y z]^T
  • When p1 is on the plane, we have

  • n^T [x y z]^T = d, which means d = z n^T K1^(-1) p1.
  • When p1 is not on the plane, we need to solve

  • K2 (z R K1^(-1) p1 + alpha t0) = beta p2
  • where t0 is a vector collinear to the true translation with a norm equal to 1; the unknowns are alpha and beta, and the solution is very easy to find (more equations than unknowns).
  • The distance between two features X = [x y z] and X′ = [x′ y′ z′] allows us to get the scale of t:
  • In fact ∥X − X′∥ = ∥z K1^(-1) p1 − z′ K1^(-1) p1′∥ is given. Using the equation

  • K2 (z R K1^(-1) p1 + alpha t0) = beta p2
  • we can express z and z′ such that z = A alpha and z′ = A′ alpha, where A depends only on K1, K2, R, t0 (up to the scale), p1 and p2, while A′ depends only on K1, K2, R, t0 (up to the scale), p1′ and p2′ (these parameters are either assumed given or computed above). It follows that

  • alpha = ∥X − X′∥ / ∥A K1^(-1) p1 − A′ K1^(-1) p1′∥
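  • A minimal numpy sketch of this scale recovery is given below, assuming K1, K2, R and the unit-norm translation direction t0 have already been obtained as described above, and that p1, p1′, p2, p2′ are homogeneous pixel coordinates; all names are illustrative.

```python
import numpy as np

def depth_over_alpha(p1, p2, K1, K2, R, t0):
    """Solve K2 (a * R * inv(K1) * p1 + t0) = b * p2 for a = z/alpha and b in
    the least-squares sense (three equations, two unknowns), for one
    correspondence p1 <-> p2 given in homogeneous pixel coordinates."""
    A = np.column_stack((K2 @ R @ np.linalg.inv(K1) @ p1, -p2))
    rhs = -(K2 @ t0)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return sol[0]

def scale_from_feature_distance(p1, p1p, p2, p2p, K1, K2, R, t0, known_distance):
    """Recover the scale 'alpha' from the known 3D distance between two
    features seen in both images (p1, p1p in image1; p2, p2p in image2)."""
    A  = depth_over_alpha(p1,  p2,  K1, K2, R, t0)
    Ap = depth_over_alpha(p1p, p2p, K1, K2, R, t0)
    # X / alpha = A * inv(K1) p1   and   X' / alpha = A' * inv(K1) p1'
    diff = A * (np.linalg.inv(K1) @ p1) - Ap * (np.linalg.inv(K1) @ p1p)
    return known_distance / np.linalg.norm(diff)
```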
  • In step C1, the pose of the camera with respect to the real object in the first image is provided from the previous process with respect to the first image, and the pose of the camera with respect to the real object in the second image is determined using the pose of the camera with respect to the real object in the first image and the at least one relation, i.e. the fundamental matrix and "alpha", for determining the translation parameters tx, ty, tz of the pose, which consists of the parameters tx, ty, tz, rx, ry, rz (with "t" standing for translation and "r" standing for rotation parameters in the three different dimensions) defining the position and orientation of the camera.
  • Particularly, the distance data and the first orientation data recorded with respect to the first image were used to determine at least one placement coordinate system (such as coordinate system 33) in the first image and a position and orientation thereof in the first image (as discussed above), wherein the fundamental matrix and "alpha" are used for allocating the placement coordinate system in the second image with a position and orientation corresponding to the respective position and orientation in the first image. This is shown in FIG. 2C, in which the reference model 50 of FIG. 2A (positioned in accordance with the placement coordinate system 33) is shown in the second image 60 (positioned in accordance with the "transferred" placement coordinate system 33) with a position and orientation corresponding to the respective position and orientation in the first image 30 (i.e. with the back of the reference model turned to the wall, as in the first image 30).
  • According to an embodiment, the step of providing the at least one relation (fundamental matrix and alpha) may further include one or more of the following steps:
  • Providing at least one parameter indicative of a movement of the camera between taking the first image and the second image (step B1). For example, providing at least one parameter in step B1 may include providing a first location parameter of the camera when taking the first image and a second location parameter of the camera when taking the second image, the location parameters generated by a positioning system or detector, such as used with GPS. At least one of the first and second location parameters may be generated by at least one of a satellite locating system, wireless network positioning mechanisms, mobile phone cell location mechanisms and an elevation measuring device, such as an altimeter. Note that the measurement of one translation dimension (from tx, ty, tz) or of the norm of t is sufficient to solve for alpha.
  • Providing at least one parameter indicative of a distance between two features of the real environment which are visible in both the first and second images (step B2). Optionally, using 2D-3D-correspondences created from scale given by reference model 50 (and optionally assuming the reference model being on one plane) (step B3) may be applied for providing input to step B2.
  • Providing at least one parameter indicative of a distance between two features of the real environment which are visible in one of the images (step B2) by assuming they are on the plane and assuming the reference model 50 is placed on the plane, the dimensions of the reference model providing a scale for the distance between the two features.
  • Also, a database of features, including 3D positions or feature distances, near the position of one of the cameras could be queried for feature correspondences between features in one of the images and features in the database. This database could be created using sources of images taken at known poses, like GOOGLE® Streetview, by matching features from overlapping images at two positions and using the mechanisms described above.
  • Providing at least one parameter indicative of a distance between at least one feature, which is matched in both images, of the real environment and one camera (step B4). For example, this may be a feature extracted close to the center of projection of the camera or close to where the distance measurement unit is aiming.
  • Further in step C1, after having placed the placement coordinate system in the second image with a position and orientation corresponding to the respective position and orientation in the first image, the pose with respect to the placement coordinate system in both images is determined. Further, the 3D position of all matched feature correspondences in both images is determined. In step C2, either using the 3D positions of features or using the homography constraint described above, the main plane, e.g. the ground plane, can be determined.
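  • One simple way to realize step C2 is a RANSAC-style plane fit over the triangulated feature points, optionally constrained by the gravity direction as mentioned above; the sketch below, with illustrative parameter values, is only one possible realization.

```python
import numpy as np

def fit_ground_plane(points_3d, gravity=None, n_iters=200, inlier_thresh=0.02):
    """RANSAC-style fit of a dominant plane n . X = d to triangulated 3D
    feature points; hypotheses whose normal deviates too much from the gravity
    direction can optionally be rejected to favour the ground plane."""
    pts = np.asarray(points_3d, dtype=float)
    rng = np.random.default_rng(0)
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        if gravity is not None:
            g = gravity / np.linalg.norm(gravity)
            if abs(np.dot(n, g)) < 0.9:
                continue                  # plane is not roughly horizontal
        d = np.dot(n, sample[0])
        inliers = np.sum(np.abs(pts @ n - d) < inlier_thresh)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (n, d)
    return best_plane                     # (normal, offset), or None
```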
  • Proceeding with step C3, the placement coordinate system in the first image is positioned to be on the ground plane, e.g. by moving it along the plane's normal.
  • FIGS. 9-12 show a scenery which is different from the previously described scenes. The images shown in FIGS. 9-11 correspond to a first image taken by a camera and are thus designated with reference numeral 30, as with the previously described scenes. On the other hand, FIG. 12 displays a second image taken by the same or a different camera, corresponding to the second image 60 as previously described.
  • Turning to step C4, as shown in FIGS. 9-12, the method proceeds with superimposing at least one virtual object (such as object 71 a in FIG. 9, which may be a reference model or any virtual object to be superimposed with the real world) with the real environment in accordance with the placement coordinate system 33 a as determined in the first image. Thereafter, the virtual object is superimposed with the real environment in accordance with the placement coordinate system 33 a now positioned on the previously determined ground plane in the first image (now displayed as object 71 b in FIG. 9 after moving the coordinate system 33 a to the determined ground plane, the newly positioned coordinate system designated as 33 b). The process continues with moving the placement coordinate system 33 b along the ground plane (in other words, adjusting x, y). As shown in FIG. 10, the placement coordinate system 33 b is moved and positioned on the ground plane such that the projection of the virtual object in accordance with the (original) placement coordinate system 33 a in the first image (object 71 a) substantially equals or comes near (i.e. substantially corresponds to) the projection of the virtual object in accordance with the placement coordinate system 33 b moved and positioned on the ground plane (object 71 b). As shown in FIG. 11, the process may continue with optionally scaling the virtual object 71 b (i.e. changing a dimension such as its height) so that it corresponds to the original dimension of the virtual object (object 71 a) which was placed by the user originally in the first image 30.
  • Another possibility to achieve this is to shoot a ray from the camera center through a point of the object (e.g. located in the lower part). The next step is intersecting the ray with the plane. Finally, the virtual object is rendered such that the point is superimposed with the intersection point.
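  • The ray-plane intersection used by this alternative can be sketched as follows, with the plane given as n · X = d in camera coordinates; the function and parameter names are illustrative.

```python
import numpy as np

def intersect_ray_with_plane(ray_origin, ray_dir, plane_normal, plane_d):
    """Intersect a viewing ray (from the camera center through a point of the
    virtual object) with the plane n . X = d; returns the intersection point
    or None if the ray misses the plane."""
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None                 # ray is parallel to the plane
    s = (plane_d - np.dot(plane_normal, ray_origin)) / denom
    if s < 0:
        return None                 # intersection lies behind the camera
    return ray_origin + s * ray_dir
```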
  • The process of correcting the position of objects might be done for all placed virtual objects individually, not necessarily changing the placement coordinate system, but rather the relationship (tx, ty, tz, rx, ry, rz) of the virtual model to the placement coordinate system.
  • As shown in FIG. 12, the virtual object 71 b is now superimposed with the real environment in accordance with the moved placement coordinate system on the ground plane in a second image 60, wherein FIG. 12 shows that the placement coordinate system as assumed in the first image 30 may be displaced from the actual ground plane.
  • It should be noted that the superimposing of virtual objects 71 a, 71 b may be performed in the background, i.e. it is not displayed on the display screen, but is only used within the algorithm for determining the final placement coordinate system positioned correctly on the ground plane. This ground plane may be used in any further image for superimposing any virtual object with the real environment, irrespective of the respective perspective of the further image.
  • While the invention has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (30)

1. A method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application, comprising:
generating a first image comprising at least one image by a camera capturing a real object of a real environment;
generating first orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the first image to find and determine features which are indicative of an orientation of the camera;
allocating a distance of the camera to the real object of the real environment displayed in the first image, and generating distance data that is indicative of the distance; and,
determining a pose of the camera with respect to a coordinate system related to the real object of the real environment using the distance data and the first orientation data.
2. The method of claim 1, wherein the allocating the distance of the camera to the real object includes providing a virtual reference model which is displayed superimposed with the real environment in the first image and generating the distance data from the virtual reference model.
3. The method of claim 2, wherein the method further includes receiving user instructions for manipulating the virtual reference model by a user, the manipulating of the virtual reference model including at least one of moving the virtual reference model at a particular position within the first image on a plane, with the plane defined at least in part by the first orientation data, and changing a dimension or variable of the virtual reference model.
4. The method of claim 3, wherein the manipulating of the virtual reference model by the user may include at least one of:
touching the virtual reference model using a touchscreen, using two fingers and moving the two fingers away from each other to increase a size and moving the two fingers closer to each other to decrease the size of the virtual reference model;
touching the virtual reference model using a touchscreen with the two fingers and rotating the two fingers about one another in order to rotate the virtual reference model around an axis perpendicular to a ground plane;
touching the virtual reference model with at least one finger and moving the at least one finger in order to move the virtual reference model across the plane.
5. The method of claim 3, wherein manipulation of the virtual reference model by the user may include at least one of:
touching the virtual reference model using a touchscreen, using two fingers and moving the two fingers away from each other to move the virtual reference model closer to a viewer, wherein the move takes place on an assumed ground plane and parallel to an intersection of the assumed ground plane with a plane containing a camera center, x in a camera coordinate system being zero;
touching the virtual reference model using a touchscreen, using the two fingers and moving the two fingers closer to each other to move the virtual reference model farther from the viewer, wherein the move takes place on the assumed ground plane and parallel to the intersection of the assumed ground plane with the plane containing the camera center, x in the camera coordinate system being zero;
touching the virtual reference model using the touchscreen with the two fingers and rotating the two fingers about one another in order to rotate the virtual reference model around an axis perpendicular to the assumed ground plane;
touching the virtual reference model with at least one finger and moving the at least one finger up and down in order to move the virtual reference model parallel to a normal of the assumed ground plane;
touching the virtual reference model with the at least one finger and moving the at least one finger left and right in order to move the virtual reference model on the assumed ground plane and parallel to the intersection of the assumed ground plane with the plane containing the camera center, y in the camera coordinate system being zero.
6. The method of claim 1, wherein allocating the distance of the camera to the real object includes providing a measurement device associated with the camera that measures at least one parameter that is indicative of the distance between the camera and the real object, wherein the distance data is generated based on the at least one parameter.
7. The method of claim 6, wherein the providing the measurement device includes providing at least one of: a focussing unit of the camera, an independent distance sensor, at least one time of flight camera, a stereo camera or cameras.
8. The method of claim 1, further comprising providing a parameter indicative of a rotation of the camera around an axis that is perpendicular to an Earth ground plane.
9. The method of claim 1, wherein the allocating the distance of the camera to the real object includes providing at least one parameter indicative of a distance between two features of the real environment which are visible in the first image.
10. The method of claim 1, further including displaying at least one virtual object superimposed with the real environment in the first image in accordance with the pose of the camera with respect to the coordinate system related to the real object of the real environment.
11. The method of claim 10, determining at least one placement coordinate system in the first image using the distance data and the first orientation data, wherein the at least one virtual object is superimposed with the real environment relative to the at least one placement coordinate system.
12. The method of claim 1, further comprising:
generating a second image by the camera capturing the real object of the real environment;
extracting at least one respective feature from the first image and the second image and matching the at least one respective feature therein providing at least one relation that is indicative of a correspondence between the first image and the second image; and,
providing the pose of the camera with respect to the coordinate system related to the real object in the first image, and determining the pose of the camera with respect to a second coordinate system related to the real object in the second image using the pose of the camera with respect to the coordinate system related to the real object in the first image and the at least one relation.
13. The method of claim 12, further comprising generating second orientation data associated with the second image from the at least one orientation sensor.
14. The method of claim 12, further comprising:
determining at least one placement coordinate system in the first image and a position and orientation thereof in the first image using the distance data and the first orientation data; and,
allocating a second at least one placement coordinate system in the second image with a second position and orientation corresponding to the respective position and orientation in the first image using the at least one relation.
15. The method of claim 12, wherein the providing the at least one relation further includes at least one of the following:
providing at least one parameter indicative of a movement of the camera between capturing the first image and the second image;
providing a second at least one parameter indicative of a distance between two features of the real environment which are visible in both the first image and the second image;
providing a third at least one parameter indicative of a distance between at least one feature of the real environment and the camera.
16. The method of claim 15, wherein providing the at least one parameter indicative of the movement of the camera between taking the first image and the second image includes providing a first location parameter of the camera when taking the first image and a second location parameter of the camera when taking the second image.
17. The method of claim 16, generating at least one of the first and second location parameters by at least one of a satellite locating system, and an elevation measuring device.
18. The method of claim 1, further comprising providing a virtual reference model which is displayed superimposed with the real environment in the first image, placing the virtual reference model on an assumed ground plane and identifying in the first image at least two features of the real environment on the assumed ground plane near the virtual reference model, wherein a distance between the at least two features is determined using a known dimension of the virtual reference model.
19. The method of claim 14, further comprising:
extracting at least one second respective feature from the first image and the second image for determining a ground plane in both the first image and the second image and moving a placement coordinate system to be positioned on the ground plane.
20. The method of claim 19, further comprising:
superimposing at least one virtual object with the real environment in accordance with the placement coordinate system in the first image;
superimposing the at least one virtual object with the real environment in accordance with the placement coordinate system positioned on the ground plane in the first image;
moving the placement coordinate system along the ground plane to be positioned in that a projection of the at least one virtual object in accordance with the placement coordinate system in the first image substantially equals or comes near the projection of the at least one virtual object in accordance with the placement coordinate system positioned on the ground plane.
21. The method of claim 20, further comprising superimposing the at least one virtual object with the real environment in accordance with the placement coordinate system on the ground plane.
22. A method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application, comprising:
generating at least one image by a camera capturing an object of a real environment;
displaying the at least one image on an image display;
generating orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the at least one image for finding and determining features which are indicative of an orientation of the camera;
providing a virtual reference model which is displayed superimposed with the real environment in the at least one image and generating distance data from the virtual reference model, wherein the distance data is indicative of an allocated distance of the camera to the object; and,
determining a pose of the camera with respect to a coordinate system related to the object of the real environment using the distance data and the orientation data.
23. A method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application, comprising:
generating at least one image by a camera capturing an object of a real environment;
generating orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the at least one image for finding and determining features which are indicative of an orientation of the camera;
providing a measurement device associated with the camera for measuring at least one parameter that is indicative of distance between the camera and the object;
determining a pose of the camera with respect to a coordinate system related to the object of the real environment based on the at least one parameter and the orientation data.
24. A system for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality system, comprising:
at least one camera configured to generate at least one image that captures at least one object of a real environment;
an image displaying device configured to display the at least one image;
at least one orientation sensor associated with the at least one camera, configured to generate orientation data of the at least one camera;
a processing device coupled with the at least one camera and with the image displaying device, the processing device configured to
provide a virtual reference model which is displayed superimposed with the real environment in the at least one image and generate distance data from the virtual reference model, the distance data being indicative of an allocated distance of the at least one camera to the at least one object;
receive user instructions via a user interface for manipulation of the virtual reference model by user placement of the virtual reference model at a particular position within the at least one image;
determine a pose of the at least one camera with respect to a coordinate system related to the at least one object of the real environment through use of the distance data and the orientation data.
25. The system of claim 24, wherein the system is included in a mobile device.
26. The system of claim 25, wherein the mobile device is a mobile telephone.
27. A system for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality system, comprising:
at least one camera configured to generate at least one image capturing at least one object of a real environment;
at least one orientation sensor associated with the at least one camera, configured to generate orientation data of the at least one camera;
a measurement device associated with the at least one camera, configured to measure a distance between the at least one camera and the at least one object; and,
a processing device coupled with the at least one camera, wherein the processing device is configured to
generate distance data from the distance, and
determine a pose of the at least one camera with respect to a coordinate system related to the at least one object of the real environment through use of the distance data and the orientation data.
28. The system of claim 27, wherein the system is included in a mobile device.
29. The system of claim 28, wherein the mobile device is a mobile telephone.
30. A method for determining the pose of a camera with respect to at least one object of a real environment for use in an authoring or augmented reality application, comprising:
generating a first image comprising at least one image by a camera that captures a real object of a real environment;
generating first orientation data from at least one orientation sensor associated with the camera or from an algorithm which analyses the first image to find and determine features which are indicative of an orientation of the camera or by user-interaction;
allocating a distance of the camera to the real object of the real environment displayed in the first image, and generating distance data that is indicative of an allocated distance of the camera to the real object;
determining a pose of the camera with respect to a coordinate system related to the real object of the real environment in the first image using the distance data and the first orientation data;
generating a second image by the camera that captures the real object of the real environment;
extracting at least one respective feature from the first image and the second image and matching the at least one respective feature to provide at least one relation that is indicative of a correspondence between the first image and the second image;
providing the pose of the camera with respect to the coordinate system related to the real object in the first image, and determining the pose of the camera with respect to a second coordinate system related to the real object in the second image using the pose of the camera with respect to the coordinate system related to the real object in the first image and the at least one relation; and,
extracting at least one second respective feature from the first image and the second image for determining a ground plane in both the first image and the second image and moving a placement coordinate system to be positioned on the ground plane.
US12/371,490 2009-02-13 2009-02-13 Methods and systems for determining the pose of a camera with respect to at least one object of a real environment Active 2032-03-03 US8970690B2 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US12/371,490 US8970690B2 (en) 2009-02-13 2009-02-13 Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
CN201080016314.0A CN102395997B (en) 2009-02-13 2010-02-12 For determining the method and system of video camera relative to the attitude of at least one object of true environment
PCT/EP2010/000882 WO2010091875A2 (en) 2009-02-13 2010-02-12 Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
CN201510393737.9A CN105701790B (en) 2009-02-13 2010-02-12 For determining method and system of the video camera relative to the posture of at least one object of true environment
EP10713797.8A EP2396767B1 (en) 2009-02-13 2010-02-12 Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
US14/633,386 US9934612B2 (en) 2009-02-13 2015-02-27 Methods and systems for determining the pose of a camera with respect to at least one object of a real environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/371,490 US8970690B2 (en) 2009-02-13 2009-02-13 Methods and systems for determining the pose of a camera with respect to at least one object of a real environment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/633,386 Division US9934612B2 (en) 2009-02-13 2015-02-27 Methods and systems for determining the pose of a camera with respect to at least one object of a real environment

Publications (2)

Publication Number Publication Date
US20100208057A1 true US20100208057A1 (en) 2010-08-19
US8970690B2 US8970690B2 (en) 2015-03-03

Family

ID=42235284

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/371,490 Active 2032-03-03 US8970690B2 (en) 2009-02-13 2009-02-13 Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
US14/633,386 Active 2030-04-27 US9934612B2 (en) 2009-02-13 2015-02-27 Methods and systems for determining the pose of a camera with respect to at least one object of a real environment

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/633,386 Active 2030-04-27 US9934612B2 (en) 2009-02-13 2015-02-27 Methods and systems for determining the pose of a camera with respect to at least one object of a real environment

Country Status (4)

Country Link
US (2) US8970690B2 (en)
EP (1) EP2396767B1 (en)
CN (2) CN102395997B (en)
WO (1) WO2010091875A2 (en)

Cited By (103)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100220173A1 (en) * 2009-02-20 2010-09-02 Google Inc. Estimation of Panoramic Camera Orientation Relative to a Vehicle Coordinate Frame
US20110201362A1 (en) * 2010-02-12 2011-08-18 Samsung Electronics Co., Ltd. Augmented Media Message
US20110216206A1 (en) * 2009-12-31 2011-09-08 Sony Computer Entertainment Europe Limited Media viewing
US20110216179A1 (en) * 2010-02-24 2011-09-08 Orang Dialameh Augmented Reality Panorama Supporting Visually Impaired Individuals
JP2012168798A (en) * 2011-02-15 2012-09-06 Sony Corp Information processing device, authoring method, and program
US20120229508A1 (en) * 2011-03-10 2012-09-13 Microsoft Corporation Theme-based augmentation of photorepresentative view
US20120269388A1 (en) * 2011-04-20 2012-10-25 Qualcomm Incorporated Online reference patch generation and pose estimation for augmented reality
US20120314936A1 (en) * 2010-03-17 2012-12-13 Sony Corporation Information processing device, information processing method, and program
US20130069986A1 (en) * 2010-06-01 2013-03-21 Saab Ab Methods and arrangements for augmented reality
US20130141565A1 (en) * 2011-12-01 2013-06-06 Curtis Ling Method and System for Location Determination and Navigation using Structural Visual Information
US20130169628A1 (en) * 2012-01-03 2013-07-04 Harman Becker Automotive Systems Gmbh Geographical map landscape texture generation on the basis of hand-held camera images
EP2615580A1 (en) * 2012-01-13 2013-07-17 Softkinetic Software Automatic scene calibration
US20130286014A1 (en) * 2011-06-22 2013-10-31 Gemvision Corporation, LLC Custom Jewelry Configurator
CN103390287A (en) * 2012-05-11 2013-11-13 索尼电脑娱乐欧洲有限公司 Apparatus and method for augmented reality
US20140050353A1 (en) * 2012-08-15 2014-02-20 International Business Machines Corporation Extracting feature quantities from an image to perform location estimation
CN103673990A (en) * 2012-09-13 2014-03-26 北京同步科技有限公司 Device and method for obtaining camera posture data
WO2014070483A1 (en) * 2012-11-02 2014-05-08 Qualcomm Incorporated Fast initialization for monocular visual slam
US20140125700A1 (en) * 2012-11-02 2014-05-08 Qualcomm Incorporated Using a plurality of sensors for mapping and localization
US20140123507A1 (en) * 2012-11-02 2014-05-08 Qualcomm Incorporated Reference coordinate system determination
EP2736247A1 (en) * 2012-11-26 2014-05-28 Brainstorm Multimedia, S.L. A method for obtaining a virtual object within a virtual studio from a real object
CN103907139A (en) * 2011-11-11 2014-07-02 索尼公司 Information processing device, information processing method, and program
US20140198227A1 (en) * 2013-01-17 2014-07-17 Qualcomm Incorporated Orientation determination based on vanishing point computation
US20140200060A1 (en) * 2012-05-08 2014-07-17 Mediatek Inc. Interaction display system and method thereof
US20140267869A1 (en) * 2013-03-15 2014-09-18 Olympus Imaging Corp. Display apparatus
US20140267397A1 (en) * 2013-03-14 2014-09-18 Qualcomm Incorporated In situ creation of planar natural feature targets
CN104346036A (en) * 2013-07-29 2015-02-11 京瓷办公信息系统株式会社 Display operating device and image forming apparatus with same
US8970690B2 (en) * 2009-02-13 2015-03-03 Metaio Gmbh Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
CN104462730A (en) * 2014-12-31 2015-03-25 广东电网有限责任公司电力科学研究院 Online simulation system and method for power plant
US20150092048A1 (en) * 2013-09-27 2015-04-02 Qualcomm Incorporated Off-Target Tracking Using Feature Aiding in the Context of Inertial Navigation
US9013550B2 (en) 2010-09-09 2015-04-21 Qualcomm Incorporated Online reference generation and tracking for multi-user augmented reality
US9013505B1 (en) * 2007-11-27 2015-04-21 Sprint Communications Company L.P. Mobile system representing virtual objects on live camera image
JP2015080199A (en) * 2013-09-11 2015-04-23 ソニー株式会社 Image processor and method
CN104717413A (en) * 2013-12-12 2015-06-17 北京三星通信技术研究有限公司 Shooting assistance method and equipment
US20150189178A1 (en) * 2013-12-30 2015-07-02 Google Technology Holdings LLC Method and Apparatus for Activating a Hardware Feature of an Electronic Device
US9147122B2 (en) 2012-05-31 2015-09-29 Qualcomm Incorporated Pose estimation based on peripheral information
US9159133B2 (en) 2012-11-05 2015-10-13 Qualcomm Incorporated Adaptive scale and/or gravity estimation
WO2015181827A1 (en) * 2014-05-28 2015-12-03 Elbit Systems Land & C4I Ltd. Method and system for image georegistration
US9213419B1 (en) * 2012-11-13 2015-12-15 Amazon Technologies, Inc. Orientation inclusive interface navigation
US9224205B2 (en) 2012-06-14 2015-12-29 Qualcomm Incorporated Accelerated geometric shape detection and accurate pose tracking
US20150377612A1 (en) * 2010-07-16 2015-12-31 Canon Kabushiki Kaisha Position/orientation measurement apparatus, measurement processing method thereof, and non-transitory computer-readable storage medium
WO2016012041A1 (en) * 2014-07-23 2016-01-28 Metaio Gmbh Method and system for presenting at least part of an image of a real object in a view of a real environment, and method and system for selecting a subset of a plurality of images
US20160026242A1 (en) 2014-07-25 2016-01-28 Aaron Burns Gaze-based object placement within a virtual reality environment
US20160098095A1 (en) * 2004-01-30 2016-04-07 Electronic Scripting Products, Inc. Deriving Input from Six Degrees of Freedom Interfaces
US9398210B2 (en) 2011-02-24 2016-07-19 Digimarc Corporation Methods and systems for dealing with perspective distortion in connection with smartphone cameras
CN105793882A (en) * 2013-12-12 2016-07-20 富士通株式会社 Equipment inspection work assistance program, equipment inspection work assistance method, and equipment inspection work assistance device
US9412034B1 (en) * 2015-01-29 2016-08-09 Qualcomm Incorporated Occlusion handling for computer vision
CN105844381A (en) * 2015-01-30 2016-08-10 株式会社日立制作所 Business influenced part extraction method and business influenced part extraction device based on business variation
US9418284B1 (en) * 2014-04-09 2016-08-16 Vortex Intellectual Property Holding LLC Method, system and computer program for locating mobile devices based on imaging
US9443353B2 (en) 2011-12-01 2016-09-13 Qualcomm Incorporated Methods and systems for capturing and moving 3D models and true-scale metadata of real world objects
US20160275079A1 (en) * 2015-03-17 2016-09-22 Siemens Aktiengesellschaft Part Identification using a Photograph and Engineering Data
US20160343166A1 (en) * 2013-12-24 2016-11-24 Teamlab Inc. Image-capturing system for combining subject and three-dimensional virtual space in real time
US20160350906A1 (en) * 2013-12-19 2016-12-01 Metaio Gmbh Method of tracking a mobile device and method of generating a geometrical model of a real environment using a camera of a mobile device
US9529426B2 (en) 2012-02-08 2016-12-27 Microsoft Technology Licensing, Llc Head pose tracking using a depth camera
US9582707B2 (en) 2011-05-17 2017-02-28 Qualcomm Incorporated Head pose estimation using RGBD camera
US9595109B1 (en) * 2014-01-30 2017-03-14 Inertial Labs, Inc. Digital camera with orientation sensor for optical tracking of objects
US9602718B2 (en) 2012-01-06 2017-03-21 Blackberry Limited System and method for providing orientation of a camera
US20170084051A1 (en) * 2009-12-24 2017-03-23 Sony Interactive Entertainment America Llc Tracking position of device inside-out for virtual reality interactivity
US9645397B2 (en) 2014-07-25 2017-05-09 Microsoft Technology Licensing, Llc Use of surface reconstruction data to identify real world floor
US9683849B2 (en) * 2015-04-01 2017-06-20 Trimble Inc. Vehicle navigation system with adaptive gyroscope bias compensation
US20170237892A1 (en) * 2016-02-16 2017-08-17 Fujitsu Limited Non-transitory computer-readable storage medium, control method, and computer
US9767606B2 (en) * 2016-01-12 2017-09-19 Lenovo (Singapore) Pte. Ltd. Automatic modification of augmented reality objects
US9835448B2 (en) 2013-11-29 2017-12-05 Hewlett-Packard Development Company, L.P. Hologram for alignment
US9858720B2 (en) 2014-07-25 2018-01-02 Microsoft Technology Licensing, Llc Three-dimensional mixed-reality viewport
US9865089B2 (en) 2014-07-25 2018-01-09 Microsoft Technology Licensing, Llc Virtual reality environment with real world objects
US20180012411A1 (en) * 2016-07-11 2018-01-11 Gravity Jack, Inc. Augmented Reality Methods and Devices
US9904055B2 (en) 2014-07-25 2018-02-27 Microsoft Technology Licensing, Llc Smart placement of virtual objects to stay in the field of view of a head mounted display
US9911190B1 (en) * 2014-04-09 2018-03-06 Vortex Intellectual Property Holding LLC Method and computer program for generating a database for use in locating mobile devices based on imaging
EP2508233A3 (en) * 2011-04-08 2018-04-04 Nintendo Co., Ltd. Information processing program, information processing apparatus, information processing system, and information processing method
US9939911B2 (en) 2004-01-30 2018-04-10 Electronic Scripting Products, Inc. Computer interface for remotely controlled objects and wearable articles with absolute pose detection component
US9965471B2 (en) 2012-02-23 2018-05-08 Charles D. Huston System and method for capturing and sharing a location based experience
US9971853B2 (en) 2014-05-13 2018-05-15 Atheer, Inc. Method for replacing 3D objects in 2D environment
US10001376B1 (en) * 2015-02-19 2018-06-19 Rockwell Collins, Inc. Aircraft position monitoring system and method
US10026226B1 (en) * 2014-06-10 2018-07-17 Ripple Inc Rendering an augmented reality object
US10037628B2 (en) * 2010-02-02 2018-07-31 Sony Corporation Image processing device, image processing method, and program
US10089681B2 (en) 2015-12-04 2018-10-02 Nimbus Visulization, Inc. Augmented reality commercial platform and method
US10146300B2 (en) 2017-01-25 2018-12-04 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Emitting a visual indicator from the position of an object in a simulated reality emulation
US10157189B1 (en) 2014-04-09 2018-12-18 Vortex Intellectual Property Holding LLC Method and computer program for providing location data to mobile devices
US10242292B2 (en) * 2017-06-13 2019-03-26 Digital Surgery Limited Surgical simulation for training detection and classification neural networks
WO2019096543A1 (en) * 2017-11-16 2019-05-23 Robert Bosch Gmbh Calibration device and method for calibrating a reference target
US20190164305A1 (en) * 2016-06-13 2019-05-30 Goertek Technology Co., Ltd. Indoor distance measurement method
US10311638B2 (en) 2014-07-25 2019-06-04 Microsoft Technology Licensing, Llc Anti-trip when immersed in a virtual reality environment
US10339714B2 (en) * 2017-05-09 2019-07-02 A9.Com, Inc. Markerless image analysis for augmented reality
US20190260939A1 (en) * 2018-02-22 2019-08-22 Adobe Systems Incorporated Enhanced automatic perspective and horizon correction
US10451875B2 (en) 2014-07-25 2019-10-22 Microsoft Technology Licensing, Llc Smart transparency for virtual objects
CN110827411A (en) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Self-adaptive environment augmented reality model display method, device, equipment and storage medium
US10600235B2 (en) 2012-02-23 2020-03-24 Charles D. Huston System and method for capturing and sharing a location based experience
JP2020509505A (en) * 2017-03-06 2020-03-26 Line株式会社 Method, apparatus and computer program for providing augmented reality
US20200126317A1 (en) * 2018-10-17 2020-04-23 Siemens Schweiz Ag Method for determining at least one region in at least one input model for at least one element to be placed
US10735665B2 (en) * 2018-10-30 2020-08-04 Dell Products, Lp Method and system for head mounted display infrared emitter brightness optimization based on image saturation
US10735902B1 (en) 2014-04-09 2020-08-04 Accuware, Inc. Method and computer program for taking action based on determined movement path of mobile devices
US10740613B1 (en) 2017-04-20 2020-08-11 Digimarc Corporation Hybrid feature point/watermark-based augmented reality
WO2020198963A1 (en) * 2019-03-29 2020-10-08 深圳市大疆创新科技有限公司 Data processing method and apparatus related to photographing device, and image processing device
US10843068B2 (en) * 2017-01-18 2020-11-24 Xvisio Technology Corp. 6DoF inside-out tracking game controller
US10930038B2 (en) 2014-06-10 2021-02-23 Lab Of Misfits Ar, Inc. Dynamic location based digital element
US10932103B1 (en) * 2014-03-21 2021-02-23 Amazon Technologies, Inc. Determining position of a user relative to a tote
US10937239B2 (en) 2012-02-23 2021-03-02 Charles D. Huston System and method for creating an environment and for sharing an event
CN112771536A (en) * 2018-09-25 2021-05-07 电子湾有限公司 Augmented reality digital content search and sizing techniques
US11138760B2 (en) * 2019-11-06 2021-10-05 Varjo Technologies Oy Display systems and methods for correcting drifts in camera poses
US11348277B2 (en) * 2020-08-12 2022-05-31 Hong Kong Applied Science and Technology Research Institute Company Limited Apparatus and method for estimating camera orientation relative to ground surface
US11410394B2 (en) 2020-11-04 2022-08-09 West Texas Technology Partners, Inc. Method for interactive catalog for 3D objects within the 2D environment
CN115176285A (en) * 2020-02-26 2022-10-11 奇跃公司 Cross reality system with buffering for positioning accuracy
US20220383600A1 (en) * 2014-05-13 2022-12-01 West Texas Technology Partners, Llc Method for interactive catalog for 3d objects within the 2d environment
US11577159B2 (en) 2016-05-26 2023-02-14 Electronic Scripting Products Inc. Realistic virtual/augmented/mixed reality viewing and interactions

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100306825A1 (en) 2009-05-27 2010-12-02 Lucid Ventures, Inc. System and method for facilitating user interaction with a simulated object associated with a physical location
US9400941B2 (en) * 2011-08-31 2016-07-26 Metaio Gmbh Method of matching image features with reference features
US20130278633A1 (en) * 2012-04-20 2013-10-24 Samsung Electronics Co., Ltd. Method and system for generating augmented reality scene
EP2699006A1 (en) * 2012-08-16 2014-02-19 ESSILOR INTERNATIONAL (Compagnie Générale d'Optique) Pictures positioning on display elements
JP6255706B2 (en) * 2013-04-22 2018-01-10 富士通株式会社 Display control apparatus, display control method, display control program, and information providing system
US9595125B2 (en) * 2013-08-30 2017-03-14 Qualcomm Incorporated Expanding a digital representation of a physical plane
US9286718B2 (en) * 2013-09-27 2016-03-15 Ortery Technologies, Inc. Method using 3D geometry data for virtual reality image presentation and control in 3D space
TWI628613B (en) * 2014-12-09 2018-07-01 財團法人工業技術研究院 Augmented reality method and system
TWI524758B (en) 2014-12-09 2016-03-01 財團法人工業技術研究院 Electronic apparatus and method for incremental pose estimation and photographing thereof
CN108139876B (en) * 2015-03-04 2022-02-25 杭州凌感科技有限公司 System and method for immersive and interactive multimedia generation
ES2834553T3 (en) * 2015-06-12 2021-06-17 Accenture Global Services Ltd An augmented reality measurement and / or manufacturing method and system
US9865091B2 (en) * 2015-09-02 2018-01-09 Microsoft Technology Licensing, Llc Localizing devices in augmented reality environment
US9881378B2 (en) 2016-02-12 2018-01-30 Vortex Intellectual Property Holding LLC Position determining techniques using image analysis of marks with encoded or associated position data
US10824878B2 (en) 2016-03-08 2020-11-03 Accuware, Inc. Method and arrangement for receiving data about site traffic derived from imaging processing
DE102016204140B3 (en) * 2016-03-14 2017-04-06 pmdtechnologies ag Apparatus and method for calibrating a time-of-flight camera
WO2018027206A1 (en) 2016-08-04 2018-02-08 Reification Inc. Methods for simultaneous localization and mapping (slam) and related apparatus and systems
CN109154499A (en) 2016-08-18 2019-01-04 深圳市大疆创新科技有限公司 System and method for enhancing stereoscopic display
CN106780757B (en) * 2016-12-02 2020-05-12 西北大学 Method for enhancing reality
CN106774870A (en) * 2016-12-09 2017-05-31 武汉秀宝软件有限公司 Augmented reality interaction method and system
US10139934B2 (en) 2016-12-22 2018-11-27 Microsoft Technology Licensing, Llc Magnetic tracker dual mode
CN107168514B (en) * 2017-03-27 2020-02-21 联想(北京)有限公司 Image processing method and electronic equipment
US10692287B2 (en) 2017-04-17 2020-06-23 Microsoft Technology Licensing, Llc Multi-step placement of virtual objects
US10282860B2 (en) * 2017-05-22 2019-05-07 Honda Motor Co., Ltd. Monocular localization in urban environments using road markings
EP3416027B1 (en) * 2017-06-12 2020-04-08 Hexagon Technology Center GmbH Augmented-reality device and system with seamless bridging
US10462370B2 (en) 2017-10-03 2019-10-29 Google Llc Video stabilization
CN109840947B (en) * 2017-11-28 2023-05-09 广州腾讯科技有限公司 Implementation method, device, equipment and storage medium of augmented reality scene
CN110349472B (en) * 2018-04-02 2021-08-06 北京五一视界数字孪生科技股份有限公司 Method for docking a virtual steering wheel with a real steering wheel in a virtual driving application
US10171738B1 (en) 2018-05-04 2019-01-01 Google Llc Stabilizing video to reduce camera and face movement
CN110827376A (en) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Augmented reality multi-plane model animation interaction method, device, equipment and storage medium
CN110825279A (en) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Method, apparatus and computer readable storage medium for inter-plane seamless handover
JP7160183B2 (en) * 2019-03-28 2022-10-25 日本電気株式会社 Information processing device, display system, display method, and program
CN110069135A (en) * 2019-04-28 2019-07-30 联想(北京)有限公司 Human-computer interaction device and data processing method of the human-computer interaction device
US10955245B2 (en) 2019-04-30 2021-03-23 Samsung Electronics Co., Ltd. System and method for low latency, high performance pose fusion
BR112022009768A2 (en) 2019-11-26 2022-08-16 Hoffmann La Roche METHOD FOR PERFORMING AN ANALYTICAL MEASUREMENT, COMPUTER PROGRAM, COMPUTER-READABLE STORAGE MEDIA, MOBILE DEVICE AND KIT
CN111260793B (en) * 2020-01-10 2020-11-24 中国电子科技集团公司第三十八研究所 Remote virtual-real high-precision matching positioning method for augmented and mixed reality
US11288877B2 (en) 2020-01-10 2022-03-29 38th Research Institute, China Electronics Technology Group Corp. Method for matching a virtual scene of a remote scene with a real scene for augmented reality and mixed reality
DE102020101398A1 (en) * 2020-01-22 2021-07-22 Audi Aktiengesellschaft Process for generating reproducible perspectives from photographs of an object as well as mobile device with integrated camera
US20220036087A1 (en) 2020-07-29 2022-02-03 Optima Sports Systems S.L. Computing system and a computer-implemented method for sensing events from geospatial data
EP3945464A1 (en) 2020-07-29 2022-02-02 Optima Sports Systems S.L. A computing system and a computer-implemented method for sensing events from geospatial data
US11190689B1 (en) 2020-07-29 2021-11-30 Google Llc Multi-camera video stabilization
US20220051430A1 (en) * 2020-08-12 2022-02-17 Hong Kong Applied Science and Technology Research Institute Company Limited Apparatus and method for estimating camera orientation relative to ground surface
CN112102406A (en) * 2020-09-09 2020-12-18 东软睿驰汽车技术(沈阳)有限公司 Monocular vision scale correction method and device and delivery vehicle
EP4352451A1 (en) * 2021-05-20 2024-04-17 Eigen Innovations Inc. Texture mapping to polygonal models for industrial inspections
WO2024059953A1 (en) * 2022-09-23 2024-03-28 Eigen Innovations Inc. Inspection camera deployment solution

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3363861B2 (en) * 2000-01-13 2003-01-08 キヤノン株式会社 Mixed reality presentation device, mixed reality presentation method, and storage medium
US6765569B2 (en) 2001-03-07 2004-07-20 University Of Southern California Augmented-reality tool employing scene-feature autocalibration during camera motion
CN100398083C (en) * 2002-08-30 2008-07-02 延自强 Virtual reality acupuncture point location method and system
CN1918532A (en) * 2003-12-09 2007-02-21 雷阿卡特瑞克斯系统公司 Interactive video window display system
US7295220B2 (en) * 2004-05-28 2007-11-13 National University Of Singapore Interactive system and method
JP4708752B2 (en) * 2004-09-28 2011-06-22 キヤノン株式会社 Information processing method and apparatus
WO2007011306A2 (en) 2005-07-20 2007-01-25 Bracco Imaging S.P.A. A method of and apparatus for mapping a virtual model of an object to the object
WO2009036782A1 (en) * 2007-09-18 2009-03-26 Vrmedia S.R.L. Information processing apparatus and method for remote technical assistance
US8970690B2 (en) * 2009-02-13 2015-03-03 Metaio Gmbh Methods and systems for determining the pose of a camera with respect to at least one object of a real environment

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020075201A1 (en) * 2000-10-05 2002-06-20 Frank Sauer Augmented reality visualization device
US20020154812A1 (en) * 2001-03-12 2002-10-24 Eastman Kodak Company Three dimensional spatial panorama formation with a range imaging system
US20050018058A1 (en) * 2001-04-16 2005-01-27 Aliaga Daniel G. Method and system for reconstructing 3D interactive walkthroughs of real-world environments
US6785589B2 (en) * 2001-11-30 2004-08-31 Mckesson Automation, Inc. Dispensing cabinet with unit dose dispensing drawer
US20060090135A1 (en) * 2002-06-20 2006-04-27 Takahito Fukuda Job guiding system
US20100066559A1 (en) * 2002-07-27 2010-03-18 Archaio, Llc System and method for simultaneously viewing, coordinating, manipulating and interpreting three-dimensional and two-dimensional digital images of structures for providing true scale measurements and permitting rapid emergency information distribution
US7002551B2 (en) * 2002-09-25 2006-02-21 Hrl Laboratories, Llc Optical see-through augmented reality modified-scale display
US20040105573A1 (en) * 2002-10-15 2004-06-03 Ulrich Neumann Augmented virtual environments
US20110292167A1 (en) * 2002-10-16 2011-12-01 Barbaro Technologies Interactive virtual thematic environment
US20060146142A1 (en) * 2002-12-27 2006-07-06 Hiroshi Arisawa Multi-view-point video capturing system
US20040189675A1 (en) * 2002-12-30 2004-09-30 John Pretlove Augmented reality system and method
US20050157931A1 (en) * 2004-01-15 2005-07-21 Delashmit Walter H.Jr. Method and apparatus for developing synthetic three-dimensional models from imagery
US20070159527A1 (en) * 2006-01-09 2007-07-12 Samsung Electronics Co., Ltd. Method and apparatus for providing panoramic view with geometric correction
US20080075358A1 (en) * 2006-09-27 2008-03-27 Electronics And Telecommunications Research Institute Apparatus for extracting camera motion, system and method for supporting augmented reality in ocean scene using the same
US20100033404A1 (en) * 2007-03-08 2010-02-11 Mehdi Hamadou Method and device for generating tracking configurations for augmented reality applications
US20080239076A1 (en) * 2007-03-26 2008-10-02 Trw Automotive U.S. Llc Forward looking sensor system
US20080316203A1 (en) * 2007-05-25 2008-12-25 Canon Kabushiki Kaisha Information processing method and apparatus for specifying point in three-dimensional space
US20090183930A1 (en) * 2008-01-21 2009-07-23 Elantech Devices Corporation Touch pad operable with multi-objects and method of operating same
US20090190798A1 (en) * 2008-01-25 2009-07-30 Sungkyunkwan University Foundation For Corporate Collaboration System and method for real-time object recognition and pose estimation using in-situ monitoring
US20100309311A1 (en) * 2008-02-12 2010-12-09 Trimble Ab Localization of a surveying instrument in relation to a ground mark
US20090208054A1 (en) * 2008-02-20 2009-08-20 Robert Lee Angell Measuring a cohort's velocity, acceleration and direction using digital video
US20100045701A1 (en) * 2008-08-22 2010-02-25 Cybernet Systems Corporation Automatic mapping of augmented reality fiducials
US20100194863A1 (en) * 2009-02-02 2010-08-05 Ydreams - Informatica, S.A. Systems and methods for simulating three-dimensional virtual interactions from two-dimensional camera images

Cited By (195)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9939911B2 (en) 2004-01-30 2018-04-10 Electronic Scripting Products, Inc. Computer interface for remotely controlled objects and wearable articles with absolute pose detection component
US10191559B2 (en) 2004-01-30 2019-01-29 Electronic Scripting Products, Inc. Computer interface for manipulated objects with an absolute pose detection component
US20160098095A1 (en) * 2004-01-30 2016-04-07 Electronic Scripting Products, Inc. Deriving Input from Six Degrees of Freedom Interfaces
US9013505B1 (en) * 2007-11-27 2015-04-21 Sprint Communications Company L.P. Mobile system representing virtual objects on live camera image
US8970690B2 (en) * 2009-02-13 2015-03-03 Metaio Gmbh Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
US9934612B2 (en) 2009-02-13 2018-04-03 Apple Inc. Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
US8698875B2 (en) * 2009-02-20 2014-04-15 Google Inc. Estimation of panoramic camera orientation relative to a vehicle coordinate frame
US20100220173A1 (en) * 2009-02-20 2010-09-02 Google Inc. Estimation of Panoramic Camera Orientation Relative to a Vehicle Coordinate Frame
US9270891B2 (en) 2009-02-20 2016-02-23 Google Inc. Estimation of panoramic camera orientation relative to a vehicle coordinate frame
US20170084051A1 (en) * 2009-12-24 2017-03-23 Sony Interactive Entertainment America Llc Tracking position of device inside-out for virtual reality interactivity
US10535153B2 (en) * 2009-12-24 2020-01-14 Sony Interactive Entertainment America Llc Tracking position of device inside-out for virtual reality interactivity
US8432476B2 (en) * 2009-12-31 2013-04-30 Sony Computer Entertainment Europe Limited Media viewing
US20110216206A1 (en) * 2009-12-31 2011-09-08 Sony Computer Entertainment Europe Limited Media viewing
US10515488B2 (en) 2010-02-02 2019-12-24 Sony Corporation Image processing device, image processing method, and program
US11189105B2 (en) 2010-02-02 2021-11-30 Sony Corporation Image processing device, image processing method, and program
US10223837B2 (en) 2010-02-02 2019-03-05 Sony Corporation Image processing device, image processing method, and program
US10810803B2 (en) 2010-02-02 2020-10-20 Sony Corporation Image processing device, image processing method, and program
US11651574B2 (en) 2010-02-02 2023-05-16 Sony Corporation Image processing device, image processing method, and program
US10037628B2 (en) * 2010-02-02 2018-07-31 Sony Corporation Image processing device, image processing method, and program
US20110201362A1 (en) * 2010-02-12 2011-08-18 Samsung Electronics Co., Ltd. Augmented Media Message
US8797353B2 (en) * 2010-02-12 2014-08-05 Samsung Electronics Co., Ltd. Augmented media message
US8605141B2 (en) 2010-02-24 2013-12-10 Nant Holdings Ip, Llc Augmented reality panorama supporting visually impaired individuals
US11348480B2 (en) 2010-02-24 2022-05-31 Nant Holdings Ip, Llc Augmented reality panorama systems and methods
US20110216179A1 (en) * 2010-02-24 2011-09-08 Orang Dialameh Augmented Reality Panorama Supporting Visually Impaired Individuals
US10535279B2 (en) 2010-02-24 2020-01-14 Nant Holdings Ip, Llc Augmented reality panorama supporting visually impaired individuals
US9526658B2 (en) 2010-02-24 2016-12-27 Nant Holdings Ip, Llc Augmented reality panorama supporting visually impaired individuals
US11244469B2 (en) * 2010-03-05 2022-02-08 Sony Interactive Entertainment LLC Tracking position of device inside-out for augmented reality interactivity
US20120314936A1 (en) * 2010-03-17 2012-12-13 Sony Corporation Information processing device, information processing method, and program
US9076256B2 (en) * 2010-03-17 2015-07-07 Sony Corporation Information processing device, information processing method, and program
US8917289B2 (en) * 2010-06-01 2014-12-23 Saab Ab Methods and arrangements for augmented reality
US20130069986A1 (en) * 2010-06-01 2013-03-21 Saab Ab Methods and arrangements for augmented reality
US20150377612A1 (en) * 2010-07-16 2015-12-31 Canon Kabushiki Kaisha Position/orientation measurement apparatus, measurement processing method thereof, and non-transitory computer-readable storage medium
US9927222B2 (en) * 2010-07-16 2018-03-27 Canon Kabushiki Kaisha Position/orientation measurement apparatus, measurement processing method thereof, and non-transitory computer-readable storage medium
US9558557B2 (en) * 2010-09-09 2017-01-31 Qualcomm Incorporated Online reference generation and tracking for multi-user augmented reality
US9013550B2 (en) 2010-09-09 2015-04-21 Qualcomm Incorporated Online reference generation and tracking for multi-user augmented reality
US20150193935A1 (en) * 2010-09-09 2015-07-09 Qualcomm Incorporated Online reference generation and tracking for multi-user augmented reality
JP2012168798A (en) * 2011-02-15 2012-09-06 Sony Corp Information processing device, authoring method, and program
US9996982B2 (en) 2011-02-15 2018-06-12 Sony Corporation Information processing device, authoring method, and program
US9398210B2 (en) 2011-02-24 2016-07-19 Digimarc Corporation Methods and systems for dealing with perspective distortion in connection with smartphone cameras
US10972680B2 (en) * 2011-03-10 2021-04-06 Microsoft Technology Licensing, Llc Theme-based augmentation of photorepresentative view
US20120229508A1 (en) * 2011-03-10 2012-09-13 Microsoft Corporation Theme-based augmentation of photorepresentative view
KR101961964B1 (en) 2011-03-10 2019-03-25 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Theme-based augmentation of photorepresentative view
KR20140007427A (en) * 2011-03-10 2014-01-17 마이크로소프트 코포레이션 Theme-based augmentation of photorepresentative view
EP2508233A3 (en) * 2011-04-08 2018-04-04 Nintendo Co., Ltd. Information processing program, information processing apparatus, information processing system, and information processing method
US20120269388A1 (en) * 2011-04-20 2012-10-25 Qualcomm Incorporated Online reference patch generation and pose estimation for augmented reality
CN103492899A (en) * 2011-04-20 2014-01-01 高通股份有限公司 Online reference patch generation and pose estimation for augmented reality
US8638986B2 (en) * 2011-04-20 2014-01-28 Qualcomm Incorporated Online reference patch generation and pose estimation for augmented reality
US9582707B2 (en) 2011-05-17 2017-02-28 Qualcomm Incorporated Head pose estimation using RGBD camera
US20130286014A1 (en) * 2011-06-22 2013-10-31 Gemvision Corporation, LLC Custom Jewelry Configurator
US20140240552A1 (en) * 2011-11-11 2014-08-28 Sony Corporation Information processing device, information processing method, and program
US9497389B2 (en) * 2011-11-11 2016-11-15 Sony Corporation Information processing device, information processing method, and program for reduction of noise effects on a reference point
CN103907139A (en) * 2011-11-11 2014-07-02 索尼公司 Information processing device, information processing method, and program
US9395188B2 (en) * 2011-12-01 2016-07-19 Maxlinear, Inc. Method and system for location determination and navigation using structural visual information
US9443353B2 (en) 2011-12-01 2016-09-13 Qualcomm Incorporated Methods and systems for capturing and moving 3D models and true-scale metadata of real world objects
US20130141565A1 (en) * 2011-12-01 2013-06-06 Curtis Ling Method and System for Location Determination and Navigation using Structural Visual Information
US20130169628A1 (en) * 2012-01-03 2013-07-04 Harman Becker Automotive Systems Gmbh Geographical map landscape texture generation on the basis of hand-held camera images
US9602718B2 (en) 2012-01-06 2017-03-21 Blackberry Limited System and method for providing orientation of a camera
EP2615580A1 (en) * 2012-01-13 2013-07-17 Softkinetic Software Automatic scene calibration
CN103718213A (en) * 2012-01-13 2014-04-09 索弗特凯耐提克软件公司 Automatic scene calibration
WO2013104800A1 (en) 2012-01-13 2013-07-18 Softkinetic Software Automatic scene calibration
US9529426B2 (en) 2012-02-08 2016-12-27 Microsoft Technology Licensing, Llc Head pose tracking using a depth camera
US10936537B2 (en) 2012-02-23 2021-03-02 Charles D. Huston Depth sensing camera glasses with gesture interface
US10937239B2 (en) 2012-02-23 2021-03-02 Charles D. Huston System and method for creating an environment and for sharing an event
US9977782B2 (en) 2012-02-23 2018-05-22 Charles D. Huston System, method, and device including a depth camera for creating a location based experience
US11449460B2 (en) 2012-02-23 2022-09-20 Charles D. Huston System and method for capturing and sharing a location based experience
US9965471B2 (en) 2012-02-23 2018-05-08 Charles D. Huston System and method for capturing and sharing a location based experience
US10600235B2 (en) 2012-02-23 2020-03-24 Charles D. Huston System and method for capturing and sharing a location based experience
US11783535B2 (en) 2012-02-23 2023-10-10 Charles D. Huston System and method for capturing and sharing a location based experience
US20140200060A1 (en) * 2012-05-08 2014-07-17 Mediatek Inc. Interaction display system and method thereof
CN103390287A (en) * 2012-05-11 2013-11-13 索尼电脑娱乐欧洲有限公司 Apparatus and method for augmented reality
US9724609B2 (en) * 2012-05-11 2017-08-08 Sony Computer Entertainment Europe Limited Apparatus and method for augmented reality
US20130303285A1 (en) * 2012-05-11 2013-11-14 Sony Computer Entertainment Europe Limited Apparatus and method for augmented reality
US9147122B2 (en) 2012-05-31 2015-09-29 Qualcomm Incorporated Pose estimation based on peripheral information
US9224205B2 (en) 2012-06-14 2015-12-29 Qualcomm Incorporated Accelerated geometric shape detection and accurate pose tracking
US9087268B2 (en) * 2012-08-15 2015-07-21 International Business Machines Corporation Extracting feature quantities from an image to perform location estimation
US20140050353A1 (en) * 2012-08-15 2014-02-20 International Business Machines Corporation Extracting feature quantities from an image to perform location estimation
US9020196B2 (en) 2012-08-15 2015-04-28 International Business Machines Corporation Extracting feature quantities from an image to perform location estimation
CN103673990A (en) * 2012-09-13 2014-03-26 北京同步科技有限公司 Device and method for obtaining camera posture data
US20140125700A1 (en) * 2012-11-02 2014-05-08 Qualcomm Incorporated Using a plurality of sensors for mapping and localization
US10309762B2 (en) * 2012-11-02 2019-06-04 Qualcomm Incorporated Reference coordinate system determination
US9953618B2 (en) * 2012-11-02 2018-04-24 Qualcomm Incorporated Using a plurality of sensors for mapping and localization
US9576183B2 (en) 2012-11-02 2017-02-21 Qualcomm Incorporated Fast initialization for monocular visual SLAM
US20160300340A1 (en) * 2012-11-02 2016-10-13 Qualcomm Incorporated Reference coordinate system determination
WO2014070483A1 (en) * 2012-11-02 2014-05-08 Qualcomm Incorporated Fast initialization for monocular visual slam
US20140123507A1 (en) * 2012-11-02 2014-05-08 Qualcomm Incorporated Reference coordinate system determination
JP2016507793A (en) * 2012-11-02 2016-03-10 クアルコム,インコーポレイテッド Determining the reference coordinate system
WO2014070312A3 (en) * 2012-11-02 2014-06-26 Qualcomm Incorporated Reference coordinate system determination
US9159133B2 (en) 2012-11-05 2015-10-13 Qualcomm Incorporated Adaptive scale and/or gravity estimation
US9213419B1 (en) * 2012-11-13 2015-12-15 Amazon Technologies, Inc. Orientation inclusive interface navigation
EP2736247A1 (en) * 2012-11-26 2014-05-28 Brainstorm Multimedia, S.L. A method for obtaining a virtual object within a virtual studio from a real object
WO2014079585A1 (en) * 2012-11-26 2014-05-30 Brainstorm Multimedia, SL A method for obtaining and inserting in real time a virtual object within a virtual scene from a physical object
US20140198227A1 (en) * 2013-01-17 2014-07-17 Qualcomm Incorporated Orientation determination based on vanishing point computation
US9113077B2 (en) * 2013-01-17 2015-08-18 Qualcomm Incorporated Orientation determination based on vanishing point computation
US11481982B2 (en) 2013-03-14 2022-10-25 Qualcomm Incorporated In situ creation of planar natural feature targets
US10733798B2 (en) * 2013-03-14 2020-08-04 Qualcomm Incorporated In situ creation of planar natural feature targets
US20140267397A1 (en) * 2013-03-14 2014-09-18 Qualcomm Incorporated In situ creation of planar natural feature targets
US9888182B2 (en) * 2013-03-15 2018-02-06 Olympus Corporation Display apparatus
US20140267869A1 (en) * 2013-03-15 2014-09-18 Olympus Imaging Corp. Display apparatus
CN104346036A (en) * 2013-07-29 2015-02-11 京瓷办公信息系统株式会社 Display operating device and image forming apparatus with same
EP3349175A1 (en) * 2013-09-11 2018-07-18 Sony Corporation Image processing device and method
JP2015080199A (en) * 2013-09-11 2015-04-23 ソニー株式会社 Image processor and method
US10587864B2 (en) * 2013-09-11 2020-03-10 Sony Corporation Image processing device and method
US20160381348A1 (en) * 2013-09-11 2016-12-29 Sony Corporation Image processing device and method
US20150092048A1 (en) * 2013-09-27 2015-04-02 Qualcomm Incorporated Off-Target Tracking Using Feature Aiding in the Context of Inertial Navigation
US9835448B2 (en) 2013-11-29 2017-12-05 Hewlett-Packard Development Company, L.P. Hologram for alignment
CN104717413A (en) * 2013-12-12 2015-06-17 北京三星通信技术研究有限公司 Shooting assistance method and equipment
CN105793882A (en) * 2013-12-12 2016-07-20 富士通株式会社 Equipment inspection work assistance program, equipment inspection work assistance method, and equipment inspection work assistance device
US10582166B2 (en) * 2013-12-19 2020-03-03 Apple Inc. Method of tracking a mobile device and method of generating a geometrical model of a real environment using a camera of a mobile device
US10121247B2 (en) * 2013-12-19 2018-11-06 Apple Inc. Method of tracking a mobile device and method of generating a geometrical model of a real environment using a camera of a mobile device
US20160350906A1 (en) * 2013-12-19 2016-12-01 Metaio Gmbh Method of tracking a mobile device and method of generating a geometrical model of a real environment using a camera of a mobile device
US11336867B2 (en) * 2013-12-19 2022-05-17 Apple Inc. Method of tracking a mobile device and method of generating a geometrical model of a real environment using a camera of a mobile device
US20190075274A1 (en) * 2013-12-19 2019-03-07 Apple Inc. Method of Tracking a Mobile Device and Method of Generating a Geometrical Model of a Real Environment Using a Camera of a Mobile Device
US11968479B2 (en) * 2013-12-19 2024-04-23 Apple Inc. Method of tracking a mobile device and method of generating a geometrical model of a real environment using a camera of a mobile device
US20220279147A1 (en) * 2013-12-19 2022-09-01 Apple Inc. Method of Tracking a Mobile Device and Method of Generating a Geometrical Model of a Real Environment Using a Camera of a Mobile Device
JPWO2015098807A1 (en) * 2013-12-24 2017-03-23 チームラボ株式会社 An imaging system that synthesizes a subject and a three-dimensional virtual space in real time
US20160343166A1 (en) * 2013-12-24 2016-11-24 Teamlab Inc. Image-capturing system for combining subject and three-dimensional virtual space in real time
US10057484B1 (en) 2013-12-30 2018-08-21 Google Technology Holdings LLC Method and apparatus for activating a hardware feature of an electronic device
US9560254B2 (en) * 2013-12-30 2017-01-31 Google Technology Holdings LLC Method and apparatus for activating a hardware feature of an electronic device
US20150189178A1 (en) * 2013-12-30 2015-07-02 Google Technology Holdings LLC Method and Apparatus for Activating a Hardware Feature of an Electronic Device
US9595109B1 (en) * 2014-01-30 2017-03-14 Inertial Labs, Inc. Digital camera with orientation sensor for optical tracking of objects
US10932103B1 (en) * 2014-03-21 2021-02-23 Amazon Technologies, Inc. Determining position of a user relative to a tote
US10157189B1 (en) 2014-04-09 2018-12-18 Vortex Intellectual Property Holding LLC Method and computer program for providing location data to mobile devices
US9911190B1 (en) * 2014-04-09 2018-03-06 Vortex Intellectual Property Holding LLC Method and computer program for generating a database for use in locating mobile devices based on imaging
US10735902B1 (en) 2014-04-09 2020-08-04 Accuware, Inc. Method and computer program for taking action based on determined movement path of mobile devices
US9418284B1 (en) * 2014-04-09 2016-08-16 Vortex Intellectual Property Holding LLC Method, system and computer program for locating mobile devices based on imaging
US20220383600A1 (en) * 2014-05-13 2022-12-01 West Texas Technology Partners, Llc Method for interactive catalog for 3d objects within the 2d environment
US10635757B2 (en) 2014-05-13 2020-04-28 Atheer, Inc. Method for replacing 3D objects in 2D environment
US10860749B2 (en) 2014-05-13 2020-12-08 Atheer, Inc. Method for interactive catalog for 3D objects within the 2D environment
US9971853B2 (en) 2014-05-13 2018-05-15 Atheer, Inc. Method for replacing 3D objects in 2D environment
US9996636B2 (en) 2014-05-13 2018-06-12 Atheer, Inc. Method for forming walls to align 3D objects in 2D environment
US9977844B2 (en) 2014-05-13 2018-05-22 Atheer, Inc. Method for providing a projection to align 3D objects in 2D environment
US11144680B2 (en) 2014-05-13 2021-10-12 Atheer, Inc. Methods for determining environmental parameter data of a real object in an image
US10678960B2 (en) 2014-05-13 2020-06-09 Atheer, Inc. Method for forming walls to align 3D objects in 2D environment
US10002208B2 (en) 2014-05-13 2018-06-19 Atheer, Inc. Method for interactive catalog for 3D objects within the 2D environment
US20180225392A1 (en) * 2014-05-13 2018-08-09 Atheer, Inc. Method for interactive catalog for 3d objects within the 2d environment
US11544418B2 (en) 2014-05-13 2023-01-03 West Texas Technology Partners, Llc Method for replacing 3D objects in 2D environment
WO2015181827A1 (en) * 2014-05-28 2015-12-03 Elbit Systems Land & C4I Ltd. Method and system for image georegistration
US10204454B2 (en) 2014-05-28 2019-02-12 Elbit Systems Land And C4I Ltd. Method and system for image georegistration
AU2015265416B2 (en) * 2014-05-28 2017-03-02 Elbit Systems Land & C4I Ltd. Method and system for image georegistration
US11532140B2 (en) * 2014-06-10 2022-12-20 Ripple, Inc. Of Delaware Audio content of a digital object associated with a geographical location
US10930038B2 (en) 2014-06-10 2021-02-23 Lab Of Misfits Ar, Inc. Dynamic location based digital element
US11069138B2 (en) 2014-06-10 2021-07-20 Ripple, Inc. Of Delaware Audio content of a digital object associated with a geographical location
US11403797B2 (en) 2014-06-10 2022-08-02 Ripple, Inc. Of Delaware Dynamic location based digital element
US10026226B1 (en) * 2014-06-10 2018-07-17 Ripple Inc Rendering an augmented reality object
US10659750B2 (en) 2014-07-23 2020-05-19 Apple Inc. Method and system for presenting at least part of an image of a real object in a view of a real environment, and method and system for selecting a subset of a plurality of images
WO2016012041A1 (en) * 2014-07-23 2016-01-28 Metaio Gmbh Method and system for presenting at least part of an image of a real object in a view of a real environment, and method and system for selecting a subset of a plurality of images
US9858720B2 (en) 2014-07-25 2018-01-02 Microsoft Technology Licensing, Llc Three-dimensional mixed-reality viewport
US20160026242A1 (en) 2014-07-25 2016-01-28 Aaron Burns Gaze-based object placement within a virtual reality environment
US9904055B2 (en) 2014-07-25 2018-02-27 Microsoft Technology Licensing, Llc Smart placement of virtual objects to stay in the field of view of a head mounted display
US10096168B2 (en) 2014-07-25 2018-10-09 Microsoft Technology Licensing, Llc Three-dimensional mixed-reality viewport
US9865089B2 (en) 2014-07-25 2018-01-09 Microsoft Technology Licensing, Llc Virtual reality environment with real world objects
US10649212B2 (en) 2014-07-25 2020-05-12 Microsoft Technology Licensing Llc Ground plane adjustment in a virtual reality environment
US9645397B2 (en) 2014-07-25 2017-05-09 Microsoft Technology Licensing, Llc Use of surface reconstruction data to identify real world floor
US9766460B2 (en) * 2014-07-25 2017-09-19 Microsoft Technology Licensing, Llc Ground plane adjustment in a virtual reality environment
US10311638B2 (en) 2014-07-25 2019-06-04 Microsoft Technology Licensing, Llc Anti-trip when immersed in a virtual reality environment
US10416760B2 (en) 2014-07-25 2019-09-17 Microsoft Technology Licensing, Llc Gaze-based object placement within a virtual reality environment
US10451875B2 (en) 2014-07-25 2019-10-22 Microsoft Technology Licensing, Llc Smart transparency for virtual objects
CN104462730A (en) * 2014-12-31 2015-03-25 广东电网有限责任公司电力科学研究院 Online simulation system and method for power plant
US9412034B1 (en) * 2015-01-29 2016-08-09 Qualcomm Incorporated Occlusion handling for computer vision
CN105844381A (en) * 2015-01-30 2016-08-10 株式会社日立制作所 Business influenced part extraction method and business influenced part extraction device based on business variation
US10001376B1 (en) * 2015-02-19 2018-06-19 Rockwell Collins, Inc. Aircraft position monitoring system and method
US20160275079A1 (en) * 2015-03-17 2016-09-22 Siemens Aktiengesellschaft Part Identification using a Photograph and Engineering Data
US10073848B2 (en) * 2015-03-17 2018-09-11 Siemens Aktiengesellschaft Part identification using a photograph and engineering data
US9683849B2 (en) * 2015-04-01 2017-06-20 Trimble Inc. Vehicle navigation system with adaptive gyroscope bias compensation
US10089681B2 (en) 2015-12-04 2018-10-02 Nimbus Visulization, Inc. Augmented reality commercial platform and method
US9767606B2 (en) * 2016-01-12 2017-09-19 Lenovo (Singapore) Pte. Ltd. Automatic modification of augmented reality objects
US20170237892A1 (en) * 2016-02-16 2017-08-17 Fujitsu Limited Non-transitory computer-readable storage medium, control method, and computer
US9906702B2 (en) * 2016-02-16 2018-02-27 Fujitsu Limited Non-transitory computer-readable storage medium, control method, and computer
US11577159B2 (en) 2016-05-26 2023-02-14 Electronic Scripting Products Inc. Realistic virtual/augmented/mixed reality viewing and interactions
US10769802B2 (en) * 2016-06-13 2020-09-08 Goertek Technology Co., Ltd. Indoor distance measurement method
US20190164305A1 (en) * 2016-06-13 2019-05-30 Goertek Technology Co., Ltd. Indoor distance measurement method
US20180012411A1 (en) * 2016-07-11 2018-01-11 Gravity Jack, Inc. Augmented Reality Methods and Devices
US10843068B2 (en) * 2017-01-18 2020-11-24 Xvisio Technology Corp. 6DoF inside-out tracking game controller
US11504608B2 (en) 2017-01-18 2022-11-22 Xvisio Technology Corp. 6DoF inside-out tracking game controller
US10146300B2 (en) 2017-01-25 2018-12-04 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Emitting a visual indicator from the position of an object in a simulated reality emulation
JP2020509505A (en) * 2017-03-06 2020-03-26 Line株式会社 Method, apparatus and computer program for providing augmented reality
US10740613B1 (en) 2017-04-20 2020-08-11 Digimarc Corporation Hybrid feature point/watermark-based augmented reality
US11393200B2 (en) 2017-04-20 2022-07-19 Digimarc Corporation Hybrid feature point/watermark-based augmented reality
US10733801B2 (en) 2017-05-09 2020-08-04 A9.Com, Inc. Markerless image analysis for augmented reality
US10339714B2 (en) * 2017-05-09 2019-07-02 A9.Com, Inc. Markerless image analysis for augmented reality
US10242292B2 (en) * 2017-06-13 2019-03-26 Digital Surgery Limited Surgical simulation for training detection and classification neural networks
WO2019096543A1 (en) * 2017-11-16 2019-05-23 Robert Bosch Gmbh Calibration device and method for calibrating a reference target
US20190260939A1 (en) * 2018-02-22 2019-08-22 Adobe Systems Incorporated Enhanced automatic perspective and horizon correction
US10652472B2 (en) * 2018-02-22 2020-05-12 Adobe Inc. Enhanced automatic perspective and horizon correction
CN110827411A (en) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Self-adaptive environment augmented reality model display method, device, equipment and storage medium
CN112771536A (en) * 2018-09-25 2021-05-07 电子湾有限公司 Augmented reality digital content search and sizing techniques
US11748964B2 (en) * 2018-10-17 2023-09-05 Siemens Schweiz Ag Method for determining at least one region in at least one input model for at least one element to be placed
US20200126317A1 (en) * 2018-10-17 2020-04-23 Siemens Schweiz Ag Method for determining at least one region in at least one input model for at least one element to be placed
US10735665B2 (en) * 2018-10-30 2020-08-04 Dell Products, Lp Method and system for head mounted display infrared emitter brightness optimization based on image saturation
WO2020198963A1 (en) * 2019-03-29 2020-10-08 深圳市大疆创新科技有限公司 Data processing method and apparatus related to photographing device, and image processing device
US11138760B2 (en) * 2019-11-06 2021-10-05 Varjo Technologies Oy Display systems and methods for correcting drifts in camera poses
US11557099B2 (en) * 2020-02-26 2023-01-17 Magic Leap, Inc. Cross reality system with buffering for localization accuracy
CN115176285A (en) * 2020-02-26 2022-10-11 奇跃公司 Cross reality system with buffering for positioning accuracy
US11348277B2 (en) * 2020-08-12 2022-05-31 Hong Kong Applied Science and Technology Research Institute Company Limited Apparatus and method for estimating camera orientation relative to ground surface
US11410394B2 (en) 2020-11-04 2022-08-09 West Texas Technology Partners, Inc. Method for interactive catalog for 3D objects within the 2D environment

Also Published As

Publication number Publication date
EP2396767A2 (en) 2011-12-21
WO2010091875A2 (en) 2010-08-19
EP2396767B1 (en) 2017-03-22
WO2010091875A3 (en) 2010-12-02
US8970690B2 (en) 2015-03-03
CN102395997A (en) 2012-03-28
CN105701790A (en) 2016-06-22
US20150310666A1 (en) 2015-10-29
CN105701790B (en) 2019-11-01
CN102395997B (en) 2015-08-05
US9934612B2 (en) 2018-04-03

Similar Documents

Publication Publication Date Title
US9934612B2 (en) Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
US11393173B2 (en) Mobile augmented reality system
US9953461B2 (en) Navigation system applying augmented reality
US9501872B2 (en) AR image processing apparatus and method technical field
US9576183B2 (en) Fast initialization for monocular visual SLAM
JP4789745B2 (en) Image processing apparatus and method
US20160210785A1 (en) Augmented reality system and method for positioning and mapping
US20120120199A1 (en) Method for determining the pose of a camera with respect to at least one real object
US20120120113A1 (en) Method and apparatus for visualizing 2D product images integrated in a real-world environment
KR20150013709A (en) A system for mixing or compositing in real-time, computer generated 3d objects and a video feed from a film camera
US20130176337A1 (en) Device and Method For Information Processing
Kurz et al. Handheld augmented reality involving gravity measurements
US10388069B2 (en) Methods and systems for light field augmented reality/virtual reality on mobile devices
Deng et al. Registration of multiple rgbd cameras via local rigid transformations
Kim et al. IMAF: in situ indoor modeling and annotation framework on mobile phones
JPH10188029A (en) Virtual space generating device
Cosco et al. Real Time Tracking using Stereo Vision for Augmented Reality

Legal Events

Date Code Title Description
AS Assignment

Owner name: METAIO GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MEIER, PETER;BENHIMANE, SELIM;MISSLINGER, STEFAN;AND OTHERS;REEL/FRAME:022574/0297

Effective date: 20090317

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: APPLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:METAIO GMBH;REEL/FRAME:040821/0462

Effective date: 20161118

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8