WO2008031369A1 - System and method for determining the position and orientation of a user - Google Patents

System and method for determining the position and orientation of a user

Info

Publication number
WO2008031369A1
WO2008031369A1 (PCT/DE2006/001631)
Authority
WO
WIPO (PCT)
Prior art keywords
user
pose
camera
real environment
tracking
Prior art date
Application number
PCT/DE2006/001631
Other languages
German (de)
English (en)
Inventor
Mehdi Hamadou
Andreas MÜLLER
Original Assignee
Siemens Aktiengesellschaft
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft filed Critical Siemens Aktiengesellschaft
Priority to PCT/DE2006/001631 priority Critical patent/WO2008031369A1/fr
Priority to DE112006004131T priority patent/DE112006004131A5/de
Publication of WO2008031369A1 publication Critical patent/WO2008031369A1/fr

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • The invention relates to a system and a method for determining the position and orientation of a user with respect to the real environment that he is viewing.
  • Augmented reality is a form of human-technology interaction in which information is faded into a person's field of view, e.g. by means of a pair of data glasses, thereby extending the reality perceived by him. This extension of reality is also called augmentation. It happens contextually, i.e. suited to, and derived from, the object being viewed.
  • The object being viewed may be, for example, a component, a tool, a machine, an automation system or an open engine compartment of a car, to name but a few examples. When the field of view of the user is augmented, for example, safety, assembly or dismantling instructions are displayed which assist the user in his activity.
  • Automation devices can be used in the application domains of manufacturing industry, medicine or the consumer sector.
  • Applications ranging from simple operator control and monitoring processes to complex service activities can be supported.
  • In operations, examinations and treatments in the medical environment, such methods and devices help a user to improve the quality of his work.
  • Applications such as the navigation of persons, the provision of information, etc. can also be realized.
  • For the positionally accurate display of such information, tracking methods are used. With these methods, the position and orientation of the user are first determined. The position and orientation of the user in relation to his real environment are also referred to, in general jargon, by the term "pose", which covers both quantities.
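  • As an illustration only (the patent does not prescribe any data structure), such a pose could be represented as a position vector plus an orientation quaternion; a minimal Python sketch with hypothetical names:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    """Position and orientation of the user relative to the real environment."""
    position: np.ndarray     # 3-vector (x, y, z) in world coordinates
    orientation: np.ndarray  # unit quaternion (w, x, y, z)

    def rotation_matrix(self) -> np.ndarray:
        """Expand the quaternion into a 3x3 rotation matrix."""
        w, x, y, z = self.orientation
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
```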
  • For this purpose, the field of view of the user is continuously recorded with a camera.
  • In markerless tracking, matches between certain features in the recorded camera image and a model of the user's real environment are determined, and the pose of the user is derived from them.
  • Such a method is referred to as markerless tracking, since it requires no special markings in the real environment.
  • The augmented reality system recognizes the pose exclusively by means of the image features in the detection range of the camera.
  • In order to be able to perform markerless tracking, a so-called initialization must first be carried out. This provides the user's pose for the first time.
  • The initialization of the augmented reality system is computationally intensive and requires interaction on the part of the user. For initialization, the user must, for example, explicitly align his viewing direction, and thus the camera, with a defined object in such a way that the augmented reality system recognizes the real object on the basis of an augmentation of the object displayed to the user at a defined position in the camera image. He therefore has to bring the augmentation of the object and the real object into coincidence. From this, the system can determine an initial position and orientation in space, a so-called initial pose. However, the user does not have to bring the augmentation into coincidence absolutely exactly.
  • The augmented reality system can, within a certain tolerance range, bring the augmentation into coincidence with the real object on its own.
  • An alternative method of initialization is the use of so-called keyframes, i.e. predefined views of the environment, which the augmented reality system must recognize again, within a tolerance range, on the basis of the current camera image. Depending on the initialization method used, more or less user interaction is required.
  • After the initialization, the user can vary his viewing direction, and thus the detection range of the camera, as desired.
  • The augmented reality system can then display augmentations with exact position as long as the object used for initialization, or its model, lies within the detection range of the camera. Tracking examines the movements in the camera image and derives from them the position and orientation of the user in relation to the real environment, the so-called pose. This happens almost in real time and without any user interaction. Overall, the pose determination is thus divided into an initialization phase and a subsequent tracking phase, in which the pose is redetermined from image to image.
  • However, the workspaces in which an operator wishes to use an augmented reality system can be much larger than the detection range of the camera, in particular if the user keeps a sensible distance from the objects to be augmented.
  • If the detection range of the camera leaves the section used for initialization, no pose determination can take place within the tracking phase described above.
  • The augmented reality system then loses its initialization and its pose relative to the object originally used for initialization. As a result, the tracking process aborts, and for the time being no augmentation can be displayed.
  • The invention is based on the object of enabling the pose determination of a user within a spatially extended real environment as efficiently as possible.
  • This object is achieved by a method for determining the pose of a user with respect to the real environment he is viewing, in which sections of the real environment lying in the field of view of the user are captured with a camera, the method comprising an initialization phase with the following method steps:
  • selecting, from an overall model of the real environment which is subdivided into different partial models, a partial model suitable for determining an initial pose of the user, the selection being carried out as a function of a section of the real environment captured by the camera during the initialization phase, and determining the initial pose of the user by comparing the captured section with the partial model; the method further comprises a tracking phase following the initialization phase, in which the pose of the user is determined continuously from the initial pose by means of a tracking algorithm, the initialization phase being restarted as soon as the accuracy of the pose determination achieved, or achievable, in the tracking phase no longer meets a predetermined quality criterion.
  • The object is further achieved by a system for determining the pose of a user with respect to a real environment, comprising: a camera for capturing sections of the real environment lying in the field of view of the user, a first memory area for an overall model of the real environment decomposed into different submodels, initialization means for selecting, from the overall model, a submodel suitable for determining an initial pose of the user as a function of a section of the real environment captured by the camera during an initialization phase, and for determining the initial pose by comparing the section with the selected partial model,
  • tracking means for determining the pose of the user during a tracking phase following the initialization phase, the tracking means being provided for continuously determining the pose from the initial pose by means of a tracking algorithm, and monitoring means for restarting the initialization phase as soon as the accuracy of the pose determination achieved, or achievable, in the tracking phase no longer meets a predetermined quality criterion.
  • The determination of the pose of the user is thus divided into two phases: the initialization phase and the tracking phase.
  • The tracking phase can only be carried out if an initial pose of the user has been determined at least once before, i.e. if at least one initialization phase has preceded it, during which the position and orientation of the user with respect to the real environment were determined for the first time.
  • It is usually attempted to determine the pose for as long as possible with the aid of an efficient and fast tracking algorithm. If, however, the user changes his viewing direction very strongly after the initial pose has been determined, so that the detection range of the camera differs greatly from that used during the initialization phase, the quality of the pose determination during tracking generally suffers. At the latest when the detection range lies completely outside the section used for the initial pose determination, a pose determination by means of the tracking algorithm is no longer possible.
  • Therefore, a new initialization phase is started as soon as the achieved accuracy no longer fulfills the predetermined quality criterion, or as soon as the achievable accuracy no longer fulfills this criterion.
  • The latter condition has a prophylactic character.
  • A new initialization is carried out, for example, if fulfillment of the quality criterion can no longer be expected owing to the change in the pose that has taken place since the last initialization.
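  • The interplay of the two phases and of the monitoring means can be summarized as a simple control loop. The following Python sketch is illustrative only; all five callables are hypothetical placeholders for the camera, the initialization means, the tracking means and the monitoring means described above:

```python
def pose_loop(grab_frame, select_submodel, initial_pose, track_frame, quality_ok):
    """Two-phase pose determination with quality-triggered restart.

    All five arguments are callables supplied by the surrounding system;
    their names are illustrative, not taken from the patent."""
    while True:
        # Initialization phase: pick a suitable submodel for the current
        # camera view and determine the initial pose from it.
        frame = grab_frame()
        submodel = select_submodel(frame)
        pose = initial_pose(frame, submodel)
        # Tracking phase: update the pose image by image until the achieved
        # (or expected achievable) accuracy fails the quality criterion.
        while True:
            frame = grab_frame()
            pose, accuracy = track_frame(frame, pose)
            if not quality_ok(accuracy, pose, submodel):
                break  # restart the initialization phase
            yield pose
```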
  • One criterion for the accuracy of the pose determination is the tracking error during the tracking phase.
  • Various methods are known for determining the tracking error. For example, the number of model features used for the initial pose that are found again in the current frame is determined continuously. If this number falls below a predetermined threshold value, in particular permanently, it is concluded that the tracking error is too great and thus that the required quality criterion is no longer met.
  • Such a method is also called a robust method.
  • So-called non-robust methods, in contrast, return for each feature a probability with which it was found. The overall accuracy of the tracking results from the total probability over all features.
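  • Both variants reduce to simple quality scores. A minimal sketch, assuming each feature reports whether (robust case) or with what probability (non-robust case) it was re-found; the threshold fraction, the mean-based combination and the persistence window are assumptions, not taken from the patent:

```python
def robust_quality_ok(n_found: int, n_initial: int, min_fraction: float = 0.5) -> bool:
    """Robust method: the quality criterion holds as long as enough of the
    model features used for the initial pose are re-found in the frame."""
    return n_found >= min_fraction * n_initial


def non_robust_accuracy(match_probabilities: list[float]) -> float:
    """Non-robust method: combine per-feature probabilities into an overall
    tracking accuracy (here simply their mean)."""
    if not match_probabilities:
        return 0.0
    return sum(match_probabilities) / len(match_probabilities)


class PersistenceFilter:
    """The text requires the feature count to fall *permanently* below the
    threshold; this helper only reports failure after k consecutive bad frames."""

    def __init__(self, k: int = 10):
        self.k = k
        self.bad_frames = 0

    def update(self, frame_ok: bool) -> bool:
        """Return False once the criterion has failed for k frames in a row."""
        self.bad_frames = 0 if frame_ok else self.bad_frames + 1
        return self.bad_frames < self.k
```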
  • The invention is based on the finding that the effort for the reinitialization can be reduced significantly if the overall model that replicates the real environment viewed by the user is decomposed into individual smaller submodels.
  • The purpose of this decomposition is not to have to use the entire model for the reinitialization, but only a suitable, and much smaller, submodel.
  • This has the advantage that the algorithms used for initialization need only be applied to a much smaller data set. The initialization process can thus be performed much faster. This advantage is particularly noticeable in very large user environments, which must be replicated by correspondingly large environment models.
  • If the quality criterion is no longer met, the tracking phase is interrupted and a new initialization phase is activated.
  • The image section captured by the camera at the end of the tracking phase is then used as the basis for the determination of a new submodel.
  • A comparison of the section in the detection range of the camera with this submodel is then performed again and a corresponding initial pose is redetermined. This happens at least largely without interaction of the user and can run almost unnoticed in the background.
  • The subject matter of the invention thus opens up the possibility, for the first time, of expediently using an augmented reality system even in very large environments.
  • The targeted selection of individual partial models for determining the initial pose, as a function of the current viewing direction of a user and of his position, makes it possible to control the computational effort that the image processing algorithms need to determine the initial pose.
  • In an advantageous embodiment, a submodel is determined which, within the overall model, is adjacent to the submodel used in the preceding initialization phase. If, during the tracking phase, it is determined that the quality of the pose determination no longer corresponds to the predefined quality criterion, or if such a deviation is to be expected owing to the changes in the user's field of view during the tracking phase, one of the submodels adjacent to the previous submodel is selected in order to redetermine the initial pose.
  • The determination of a suitable submodel among the adjacent submodels is simplified in that the submodel is determined by evaluating the last pose determined within the tracking phase and the position of the submodel used in the preceding initialization phase.
  • The submodel used in the previous initialization phase is thus the starting point in the search for a new submodel.
  • During the tracking phase, the change in the field of view of the user, or in the detection range of the camera, has been tracked, so that with this information a new submodel can be found for determining a new initial pose, as sketched below.
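  • If the submodels are assumed to tile the environment in a regular grid around the previously used submodel (as in FIG. 4), this evaluation can be as simple as quantizing the drift of the last tracked pose; a hypothetical sketch:

```python
import numpy as np

def neighbor_submodel_step(last_pose_xy, prev_center_xy, cell_size):
    """Quantize the drift of the last tracked pose, relative to the center of
    the previously used submodel, to one grid step per axis (-1, 0 or +1).

    (0, 0) means the old submodel can be reused; any other result names one
    of the eight neighbors, as in FIG. 4."""
    offset = np.asarray(last_pose_xy, float) - np.asarray(prev_center_xy, float)
    step = np.clip(np.round(offset / cell_size), -1, 1).astype(int)
    return int(step[0]), int(step[1])
```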
  • In a further embodiment, the initialization phase is restarted as soon as the detection range of the camera has moved away from the section captured during the initialization phase by a predetermined amount.
  • A new suitable partial model is then determined, for example by evaluating the change in the detection range of the camera observed during the tracking phase, and a new initial pose is calculated. This happens in particular without user interaction and can therefore take place almost unnoticed by the user of the system.
  • A further advantageous embodiment of the invention is characterized in that the extent of the environment modeled by the submodels depends on the size of the detection range of the camera. This makes sense, since the initial pose determination is performed by comparing the detection range of the camera with one or more elements of a suitable submodel. If the partial model were much larger than the detection range of the camera, the corresponding algorithm would have to examine elements of the partial model that do not appear in the detection range of the camera and are thus not available for the comparison. If, conversely, the partial model were too small, elements of the real environment captured by the camera would be searched for unsuccessfully in the corresponding submodel.
  • An embodiment of the invention is also advantageous in which the decomposition of the overall model changes when the size of the detection range of the camera changes.
  • Depending on the current detection range of the camera, the overall model is then divided into different submodels.
  • The decomposition of the overall model into individual submodels would thus happen "on the fly".
  • In this way, it would be ensured that a submodel of ideal size is always available for comparison with the detection range of the camera.
  • However, a less performant embodiment of the invention is also possible, in which the decomposition of the overall model into individual submodels is already carried out in a preceding engineering phase.
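  • Whichever variant is chosen, the decomposition itself can be as simple as binning the model elements into grid cells whose extent matches the detection range; a sketch under the assumption that each model element carries a 2D anchor position:

```python
from collections import defaultdict

def decompose_overall_model(elements, cell_size):
    """Bin the elements of an overall model into submodels on a regular grid.

    `elements` is an iterable of (x, y, feature) tuples and `cell_size`
    should roughly match the extent of the camera's detection range, so that
    each submodel is neither much larger nor much smaller than one view."""
    submodels = defaultdict(list)
    for x, y, feature in elements:
        cell = (int(x // cell_size), int(y // cell_size))
        submodels[cell].append(feature)
    return dict(submodels)
```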
  • A simple determination of the partial model suitable for the initial pose determination can be achieved, in an advantageous embodiment of the invention, in that the partial model is determined by backprojecting the captured section onto the real environment.
  • A suitable criterion for determining the appropriate submodel is given in a further advantageous embodiment of the invention in that the submodel is selected such that, among the various submodels, it has the largest overlap area with the captured section of the real environment. The larger the overlap area of the captured section with the partial model, the greater the probability that the comparison of the section with the partial model leads to a successful initial pose determination.
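  • Assuming a planar environment and axis-aligned submodel bounds, the backprojection footprint and the largest-overlap criterion might be combined as follows (an illustrative sketch, not the patent's method):

```python
def rect_overlap_area(a, b):
    """Overlap area of two axis-aligned rectangles (xmin, ymin, xmax, ymax)."""
    width = min(a[2], b[2]) - max(a[0], b[0])
    height = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, width) * max(0.0, height)


def select_submodel_by_overlap(camera_footprint, submodel_bounds):
    """Return the id of the submodel whose modeled region overlaps most with
    the backprojected detection range (`camera_footprint`) of the camera.

    `submodel_bounds` maps submodel ids (e.g. 0..8) to bounding rectangles."""
    best_id, best_area = None, 0.0
    for sid, bounds in submodel_bounds.items():
        area = rect_overlap_area(camera_footprint, bounds)
        if area > best_area:
            best_id, best_area = sid, area
    return best_id  # None if no submodel overlaps the footprint at all
```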
  • A suitable procedure for comparing the section with the selected submodel is given in an advantageous embodiment in that, to determine the initial pose, an object of the real environment located in the captured section is recognized with the aid of the selected submodel and an augmentation of the object is brought into coincidence with the object.
  • In particular for the very first initial pose determination, user interaction may be necessary for this, in which the user aligns his field of view with respect to the real environment in such a way that the augmentation of the object coincides with the object itself.
  • Smaller deviations in particular can also be compensated for by the system itself, in that the augmentation and the real object are brought into coincidence mathematically and the information required for determining the initial pose is derived from this process.
  • A particularly advantageous application of the method results in an embodiment in which the determined pose is used for the positionally accurate augmentation of the field of view of the user with information.
  • Such information may be, for example, installation instructions displayed with exact position for an automation technician, assistance for a surgeon shown in his field of view during an operation, or even simple explanations for a visitor to an exhibition, faded in and out to match the elements he is looking at.
  • A particularly user-friendly superimposition of such augmentations is provided by an embodiment of the invention in which the information is faded into the field of view of the user by means of data glasses.
  • In this way, a so-called optical see-through method can be realized, in which the user perceives the objects of the real environment directly through the data glasses and the associated information is displayed at a suitable location in the data glasses.
  • FIG. 1 shows a detection range of a camera of an embodiment of the system for determining the pose of a user
  • FIG. 2 shows augmented information in the field of view of the user
  • FIG. 3 shows an initialization of an embodiment of the system for determining the pose of a user
  • FIG. 4 shows a planar representation of a captured section of the real environment and its modeling
  • FIG. 5 shows a perspective view of the captured section of the real environment and its modeling
  • FIG. 6 shows possibilities of movement within the space modeled by neighboring partial models
  • FIG. 7 shows a first section captured by a camera during a first initialization phase
  • FIG. 8 shows a second section captured by the camera at the beginning of a second initialization phase
  • FIG. 9 shows a flow chart of an embodiment of the method for determining the pose of a user
  • FIG. 10 shows an application example of an embodiment of the method within an augmented reality system.
  • FIG. 1 shows a detection area 30 of a camera 20 of an embodiment of the system for determining the pose of a user.
  • The illustrated camera 20 is part of an augmented reality system with which information is displayed in a positionally accurate and context-dependent manner relative to objects of the real environment viewed by a user in his field of view.
  • The system is designed such that the camera 20 always at least partially covers the field of view of the user.
  • For this purpose, the camera 20 is mounted on the user's head so that it automatically follows his gaze.
  • To determine the position and orientation of the user, the so-called pose, objects of the real environment lying in the detection area 30 of the camera 20 are compared with a three-dimensional model of the real environment. If the objects recognized by the camera 20 are found again in the three-dimensional model, the pose of the user can be determined.
  • The detection area 30 of the camera 20 is typically much smaller than the real environment under consideration. In particular in very large working environments, it may happen that at any one time only a very small portion of the real environment lies in the detection range 30 of the camera 20. An object which is initially located in the detection area 30 of the camera 20 and was used to determine an initial pose can, when the system is used in a very large environment, very quickly end up outside the detection range 30, which would require a reinitialization of the system.
  • FIG. 2 shows augmented information 60 in the field of view of the user, which is captured by a camera 20.
  • The augmented information 60 was previously derived as a function of an object detected in the detection area 30 of the camera 20 and is now superimposed, in an exact position relative to that object, in the field of view of the user, for example via data glasses. If the user now changes his field of view, and thus the detection range 30 of the camera 20, the augmented information 60 at first "sticks" to said object. This is precisely a desirable feature of the augmented reality system.
  • The four dashed rectangles arranged around the augmented information 60 indicate the maximum deflection of the detection area 30 of the camera 20 that is possible without augmentation loss. Provided that the detection range corresponds to the tracking range, the augmented information 60 can no longer be displayed as soon as the detection range 30 moves further away from the starting position shown in FIG. 2 than indicated by these dashed rectangles. Such a tracking loss, and thus augmentation loss, occurs frequently, especially in large environments.
  • FIG. 3 shows an initialization of an embodiment of the system for determining the pose of a user.
  • Shown is a real object 40, which is replicated by a three-dimensional environment model and is used to initialize the system, i.e. to determine the position and orientation of the user.
  • For this purpose, the real object 40 is found again in the model of the real environment.
  • A corresponding augmentation 50 is then projected into the real environment.
  • The user now has to try to bring the augmentation 50 into coincidence with the real object 40. He does this by adjusting his pose, and thus the pose of the camera 20, with respect to the real environment. Once the augmentation 50 and the real object 40 have been brought into coincidence, the pose of the user can be determined uniquely.
  • Subsequently, a continuous pose determination is carried out by a tracking algorithm, with which changes in the position of the real object 40 in the detection area 30 of the camera 20 are tracked and the pose is determined continuously with relatively little computational effort.
  • However, this is only possible as long as the real object 40 is located at least partially in the detection area 30 of the camera 20.
  • Otherwise, the system must be reinitialized using another object in the detection area of the camera 20.
  • The initial initialization of the system described above can also be performed at least partially without user interaction.
  • For instance, the augmentation 50 can also be brought into coincidence with the real object 40 "mathematically" by the system itself, and the initial pose can be determined from this.
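  • The patent does not name an algorithm for this mathematical alignment; in today's terms it amounts to estimating the camera pose from 2D-3D correspondences, for which a standard perspective-n-point solver such as OpenCV's solvePnP could be used. A sketch with made-up correspondences and intrinsics:

```python
import numpy as np
import cv2

# 3D corner points of the real object 40 in the environment model (metres).
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                         dtype=np.float32)
# Their detected 2D locations in the camera image (pixels) -- made-up values.
image_points = np.array([[320, 240], [420, 242], [418, 338], [322, 336]],
                        dtype=np.float32)
# Intrinsics of camera 20; focal length and principal point are assumptions.
camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]],
                         dtype=np.float32)
dist_coeffs = np.zeros(5)  # assume an undistorted image

# Solve for the pose of camera 20 relative to object 40: this rotation and
# translation would serve as the initial pose for the tracking phase.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
```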
  • FIG. 4 shows a planar representation of a captured section of the real environment and its modeling.
  • An overall model is provided, which is broken down into individual submodels 0..8. Illustrated here by way of example are a first submodel 0, whose modeled reality is located in the detection range of a camera 20, and the directly adjacent submodels 1..8.
  • In practice, the overall model of the real environment is subdivided into substantially more submodels, which, for reasons of clarity, are not all shown here or in FIGS. 5, 6, 7 and 8.
  • The size of the submodels 0..8 largely corresponds to the detection range of the camera 20. If, therefore, during an initialization phase, the section of the real environment located in the detection range is to be compared with its image in the model, the three-dimensional model to be processed by the corresponding algorithms is significantly smaller than the overall model of the user's entire real working environment. As shown in FIG. 4, the first partial model 0 lies in the current detection range of the camera 20. When the camera 20 is moved, the area of the real environment modeled by the first partial model 0 may be left, so that one of the adjacent partial models 1..8 must be used for a new determination of the initial pose.
  • FIG. 5 shows a perspective view of the captured section of the real environment and its modeling. Again, it can be seen that movements of the camera 20 always lead to a continuous displacement of the detection area into the neighborhood of the last camera position.
  • FIG. 6 shows various possibilities of movement within the space modeled by the adjacent partial models 0..8.
  • FIG. 7 shows a first section 10 detected by a camera 20 during a first initialization phase.
  • The first section 10 of the real environment, lying in the detection range of the camera 20, is at least partially modeled by a first submodel 0, which is part of an overall model of the real environment.
  • To determine the initial pose, a comparison of the first section 10 with the first partial model 0 is performed.
  • The initial pose is determined according to the method already described for FIG. 3. Once the user's position and orientation relative to the real environment have been fixed for the first time, the pose can be determined continuously, and with relatively little computational effort almost in real time, using a tracking algorithm that tracks the movements of real objects within the images recorded with the camera 20. However, this is possible only as long as the objects used for initialization are at least partially located in the detection range of the camera 20. If the detection range of the camera 20 moves beyond said area of the real environment, the system must be reinitialized.
  • FIG. 8 shows a second section captured by the camera 20 at the beginning of a second initialization phase, which follows the tracking phase described above for FIG. 7.
  • The camera has been moved so far out of the position that it held during the first initialization phase that the tracking error exceeds a previously determined maximum value. Therefore, it is first checked which submodel 0..8 of the real environment is best suited for redetermining the initial pose within a new initialization phase.
  • The now captured second section 11 of the real environment forms its largest overlap area, among all submodels 0..8 of the overall model, with a second submodel 5, which is arranged in direct proximity to the first submodel 0. Therefore, this second submodel 5 is selected for the reinitialization. After the initial pose has been redetermined, further pose changes can again be tracked using the tracking algorithm.
  • The redetermination of the initial pose can be performed almost unnoticed by the user and without his interaction.
  • In a first method step 80, the initialization phase is first performed for the initial pose determination.
  • For this purpose, a suitable submodel is selected from the overall model. By comparing this selected submodel with the section of the real environment captured by the camera in this step, the position and orientation of the user are determined for the first time.
  • During the subsequent tracking phase, in a second method step 81, a tracking error is continuously determined. As long as this tracking error is less than a predetermined maximum value, the tracking phase continues. If, however, the tracking error is greater than the previously determined maximum value, it is checked in a third method step 82 whether the partial model used in the first method step 80 has sufficient coverage with the current detection range of the camera. If this is the case, an automatic reinitialization is carried out with the partial model already used previously. If, however, the coverage quality is not sufficient, a new partial model is determined in a fourth method step 83 and is loaded into a fast memory in a fifth method step 84.
  • An automatic initialization of the system, i.e. a new determination of the initial pose, is then carried out: using the newly determined submodel if an insufficient coverage quality was previously found with the old submodel, or using the submodel already used previously if the coverage quality appeared sufficient in the third method step 82.
  • Afterwards, the pose determination can be continued using the tracking algorithm, as described for the second method step 81 and sketched below.
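  • The flow of FIG. 9 can be condensed into a small state machine. The following sketch is an interpretation; all helper callables and both thresholds are hypothetical placeholders:

```python
def fig9_loop(grab_frame, init_with, track, coverage, pick_new_submodel,
              load_fast, first_submodel, max_error, min_coverage):
    """Condensed flow of FIG. 9: 80 initialize, 81 track, 82 check coverage,
    83 determine a new submodel, 84 load it into fast memory, 85 reinitialize.
    All callables and both thresholds are hypothetical placeholders."""
    submodel = first_submodel
    pose = init_with(grab_frame(), submodel)                  # step 80
    while True:
        frame = grab_frame()
        pose, error = track(frame, pose)                      # step 81
        yield pose
        if error > max_error:                                 # quality failed
            if coverage(frame, submodel) < min_coverage:      # step 82
                submodel = pick_new_submodel(pose)            # step 83
                load_fast(submodel)                           # step 84
            pose = init_with(frame, submodel)                 # step 85
```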
  • FIG. 10 shows an application example of an embodiment of the method within an augmented reality system.
  • The augmented reality system has a camera 20, with which a spatially extensive pipeline system in the real environment of a user is captured.
  • The user is, for example, an installer who is to carry out repair work on the illustrated piping system.
  • For this purpose, information is to be displayed contextually in his field of view.
  • The user wears data glasses, which allow this information to be displayed with exact position.
  • While working on the piping system, the user is to be shown augmentations successively at three separate points 90, 91, 92. Initially, at time t1, he is located at a position at which a camera 20 mounted on his head captures a first section 10 of the pipeline system. Since the detection range of the camera 20 approximately corresponds to the field of view of the user, that field of view can be taken to be approximately equal to the first section 10. After a first initialization of the augmented reality system, the augmentation can be superimposed with exact position at the location 90 provided for this purpose.
  • If the user now moves on, the detection area of the camera 20 leaves the first section 10 and thus the elements of the partial model used for the initialization.
  • As a result, the quality of the pose determined by tracking deteriorates, which leads to a check of the current coverage of the detection range of the camera against the partial model used for the initialization.
  • The check reveals that a submodel adjacent to the previously used submodel should be used for the reinitialization.
  • The corresponding submodel is automatically loaded into memory and used for the reinitialization. From here on, accurate tracking can take place again.
  • The partial model used for the initialization in the first section 10 is accordingly deleted from the memory.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a system and a method for determining the position and orientation of a user with respect to a real environment that he is viewing. To enable the most efficient possible determination of the pose of a user within a spatially extended real environment, the invention provides a method in which sections (10, 11) of the real environment lying in the field of view of the user are captured with a camera (20). According to the invention, the method comprises an initialization phase with the following steps: selection, from an overall model of the real environment decomposed into several partial models (0..8), of a partial model (0..8) suitable for determining an initial pose of the user, the selection being carried out as a function of a section (10, 11) of the real environment captured by the camera (20) during the initialization phase, and determination of the initial pose of the user by comparing the section (10, 11) with the partial model (0..8). According to the invention, the method comprises a tracking phase which follows the initialization phase and during which the pose of the user is determined continuously from the initial pose by means of a tracking algorithm, and the initialization phase is restarted as soon as the accuracy of the pose determination achieved, or achievable, in the tracking phase no longer satisfies a predefined quality criterion.
PCT/DE2006/001631 2006-09-15 2006-09-15 Système et procédé pour déterminer la position et l'orientation d'un utilisateur WO2008031369A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/DE2006/001631 WO2008031369A1 (fr) 2006-09-15 2006-09-15 Système et procédé pour déterminer la position et l'orientation d'un utilisateur
DE112006004131T DE112006004131A5 (de) 2006-09-15 2006-09-15 System und Verfahren zur Bestimmung der Position und der Orientierung eines Anwenders

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/DE2006/001631 WO2008031369A1 (fr) 2006-09-15 2006-09-15 Système et procédé pour déterminer la position et l'orientation d'un utilisateur

Publications (1)

Publication Number Publication Date
WO2008031369A1 (fr)

Family

ID=37451409

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DE2006/001631 WO2008031369A1 (fr) 2006-09-15 2006-09-15 Système et procédé pour déterminer la position et l'orientation d'un utilisateur

Country Status (2)

Country Link
DE (1) DE112006004131A5 (fr)
WO (1) WO2008031369A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2315124A (en) * 1996-07-09 1998-01-21 Gen Electric Real time tracking of camera pose
WO2001067749A2 (fr) * 2000-03-07 2001-09-13 Sarnoff Corporation Procede d'estimation de pose et d'affinage de modele pour une representation video d'une scene tridimensionnelle
DE102004061841A1 (de) * 2003-12-22 2005-07-14 Augmented Solutions Gmbh Markerloses Tracking System für Augmented Reality Anwendungen
GB2411532A (en) * 2004-02-11 2005-08-31 British Broadcasting Corp Camera position determination

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEPETIT V ET AL: "Fully automated and stable registration for augmented reality applications", MIXED AND AUGMENTED REALITY, 2003. PROCEEDINGS. THE SECOND IEEE AND ACM INTERNATIONAL SYMPOSIUM ON 7-10 OCT. 2003, PISCATAWAY, NJ, USA,IEEE, 7 October 2003 (2003-10-07), pages 93 - 102, XP010662800, ISBN: 0-7695-2006-5 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9282238B2 (en) 2010-10-29 2016-03-08 Hewlett-Packard Development Company, L.P. Camera system for determining pose quality and providing feedback to a user
DE102013021137A1 (de) 2013-12-13 2015-06-18 Audi Ag Verfahren zum Betreiben einer Datenschnittstelle eines Kraftwagens und Kraftwagen
DE102013021137B4 (de) 2013-12-13 2022-01-27 Audi Ag Verfahren zum Betreiben einer Datenschnittstelle eines Kraftwagens und Kraftwagen

Also Published As

Publication number Publication date
DE112006004131A5 (de) 2009-08-20

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06805293

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 1120060041312

Country of ref document: DE

REF Corresponds to

Ref document number: 112006004131

Country of ref document: DE

Date of ref document: 20090820

Kind code of ref document: P

122 Ep: pct application non-entry in european phase

Ref document number: 06805293

Country of ref document: EP

Kind code of ref document: A1