DE102017110726A1 - System and method for correcting the representation of an object image of a real object of a real environment virtually displayed in a virtual environment

System and method for correcting the representation of an object image of a real object of a real environment virtually displayed in a virtual environment

Info

Publication number
DE102017110726A1
Authority
DE
Germany
Prior art keywords
real
virtual
environment
user
input unit
Prior art date
Legal status
Pending
Application number
DE102017110726.6A
Other languages
German (de)
Inventor
Gunther Göbel
Current Assignee
Hochschule für Technik und Wirtschaft Dresden
Original Assignee
Hochschule für Technik und Wirtschaft Dresden
Priority date
Filing date
Publication date
Application filed by Hochschule für Technik und Wirtschaft Dresden
Priority to DE102017110726.6A
Publication of DE102017110726A1
Application status: Pending

Classifications

    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G06F 3/0308: Detection arrangements using opto-electronic means comprising a plurality of distinctive and separately oriented light emitters or reflectors associated to the pointing device, e.g. remote cursor controller with distinct and separately oriented LEDs at the tip whose radiations are captured by a photo-detector associated to the screen
    • G06F 3/0346: Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06K 9/00671: Recognising scenes such as could be captured by a camera operated by a pedestrian or robot, including objects at substantially different ranges from the camera, for providing information about objects in the scene to a user, e.g. as in augmented reality applications

Abstract

The invention relates to a system and a method for correcting the representation of an object image of a real object of a real environment that is displayed virtually in a virtual environment. The system comprises a display unit for providing a virtual user environment in which virtual object images of real objects of the user's real environment can be displayed, and at least one user input unit configured for user interaction with a virtual object image, whose spatial position can be detected by means of a reference system in the real environment and displayed in the virtual environment. The user input unit is set up to detect a collision with a real object of the real environment. A detected collision position is compared directly with the position of the virtual object image of the real object, so that in the event of a deviation between the real collision position and the display position of the virtual object image, the representation of the virtual object image can be corrected on the basis of the real collision position.

Description

  • The invention relates to a system and a method for correcting the representation of an object image of a real object of a real environment that is virtually displayed in a virtual environment. The invention is particularly suitable for recalibrating object images in virtual reality.
  • Powerful computers enable the creation of virtual realities in which users can interact with virtually created objects. Systems for creating a virtual reality (VR) usually comprise, in addition to a computer, a display unit located in real space with which a user can view the virtual reality, and an input unit located in real space, for example a hand controller, to enable interaction in the virtual environment. For various applications it is desirable that real objects of a user's real environment can be represented in the virtual environment, so that real and virtual objects merge. An important prerequisite for successful immersion is therefore the realistic reproduction of the position, orientation and size of real objects in the virtual environment. However, display errors may occur in the reproduction of virtual object images of real objects, causing the presentation form or the display position of a virtual object image to deviate from the position of the real object. Remedying such deviations often requires a recalibration of the system, during which the system is unavailable for use. For example, corrective measures are known in which a deliberate collision between a sensor element and a real object is carried out in order to obtain, from the collision position, concrete position information about real objects or their surface contour. However, these are usually dedicated calibration operations that require an interruption of normal use, which disturbs the immersion.
  • Other systems, such as Microsoft's HoloLens, use complex camera-based spatial surveying to determine distances between real objects and the VR glasses. Furthermore, a solution is known in which a motion-capture camera system observes a moving system equipped with markers in order to facilitate position tracking. In addition, inherent information about the size of the tracked objects is taken into account by checking for collisions in the virtual image of the objects and, if necessary, compensating for them by position correction. A precise calibration of the object position is thus possible even when markers shift. The known solutions have the disadvantage that calibrating virtual object images of real objects requires an interruption of virtual-reality use and/or additional sensor components, resulting in higher overall costs.
  • The object of the invention is therefore to propose a system and a method for correcting the representation of object images virtually displayed in a virtual environment without interrupting use.
  • The object is achieved by a system having the features of patent claim 1 and a method having the features of patent claim 7. Advantageous embodiments or developments are specified in the respective dependent claims.
  • The system according to the invention for correcting the representation of an object image of a real object of a real environment that is virtually displayed in a virtual environment has a display unit for providing a virtual user environment in which virtual object images of real objects of a user's real environment can be displayed. For the purposes of the invention, a user is a person who interacts with the provided virtual user environment using the system. The system further comprises at least one user input unit configured for user interaction with a virtual object image, whose spatial position can be detected in the real environment using a reference system and displayed in the virtual environment. The user input unit is further configured to detect a collision with a real object of the real environment, wherein a detected collision position is compared directly with the position of the virtual object image of the real object, so that in the event of a deviation between the real collision position and the display position of the virtual object image of the real object, the representation of the virtual object image can be corrected on the basis of the real collision position.
  • In the system according to the invention, a virtual user environment is provided via the display unit, which may comprise a computer or be coupled to one. In this virtual user environment, real objects are represented as virtual object images corresponding to the position, orientation and size of the respective real objects, in order to enable haptic user interaction in the virtual environment.
  • Advantageously, in the system according to the invention, for every collision of the user input unit with the real environment during operation, i.e. in normal use, the collision position is compared directly with the display position of the virtual object image of the real object, so that in the event of a deviation between the real collision position and the display position of the virtual object image, a correction of the representation of the virtual object image can be performed. A correction may consist of shifting the presentation position of the virtual object image to a representation position corresponding to the collision position in the virtual environment, as sketched below. Furthermore, a correction may consist of adapting the representation of the surface contour of the virtual object image to the contour of the real object.
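  • The following minimal sketch (Python, with hypothetical names; the patent does not specify an implementation) illustrates the simplest form of such a correction: the display position of the virtual object image is shifted by the deviation between the measured real collision position and the current display position.

```python
import numpy as np

def correct_display_position(display_position: np.ndarray,
                             collision_position: np.ndarray,
                             tolerance: float = 0.005) -> np.ndarray:
    """Shift the virtual object image so that its display position
    matches the collision position measured in the real reference
    system. Positions are 3-D vectors in metres; the 5 mm tolerance
    is an assumed noise threshold, not a value from the patent."""
    deviation = collision_position - display_position
    if np.linalg.norm(deviation) <= tolerance:
        return display_position          # within noise, no correction
    return display_position + deviation  # relocate to collision point
```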
  • The at least one user input unit is a computer-connectable device having at least one button, switch or lever for inputting information by a user. The user input unit can be configured such that user inputs can be visualized in the virtual environment. The user input unit is preferably held in the user's hand. For certain applications it may be attached to another body part, for example the user's leg or foot. The spatial position of the user input unit can be detected on the basis of a reference system in real space. The reference system can be formed by sensors arranged in real space, the sensors serving as reference points for a coordinate system in which the position of the user input unit can be determined.
  • A camera-based system can also be used to determine the position of the user input unit. In this case, the position of the user input unit is determined by means of cameras arranged in real space, in which the user input unit is recognized, for example, by clearly visible, actively illuminating or passively reflecting patterns or markers in the camera image. Alternatively, sensors arranged on the user input unit can be used which react to the irradiation of a laser. In this case, a device for generating a rotating laser beam is arranged at a known position in real space, and the position of the user input unit is determined from the transit times of the laser beam as it sweeps over the sensors arranged on the user input unit. A sketch of the underlying sweep-angle computation follows.
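  • As a hedged illustration of the rotating-laser principle (the patent names no concrete device; the fixed rotor period below is an assumption), the angle of a sensor relative to the beam source can be derived from the time between a synchronization pulse and the moment the sweeping beam hits the sensor; two orthogonal sweeps and several sensors with known geometry then yield the spatial position.

```python
import math

ROTOR_PERIOD_S = 1.0 / 60.0  # assumed rotation period of the laser rotor

def sweep_angle_rad(t_sync: float, t_hit: float) -> float:
    """Angle swept by the rotating laser between the sync pulse at
    t_sync and the hit on a sensor at t_hit (both in seconds)."""
    return 2.0 * math.pi * ((t_hit - t_sync) % ROTOR_PERIOD_S) / ROTOR_PERIOD_S
```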
  • According to an advantageous embodiment variant of the system according to the invention, the user input unit has at least one pushbutton that is actuated on contact with a real object, thereby registering a collision. According to a further advantageous embodiment variant, the user input unit may have at least one proximity sensor for detecting collisions with objects of the real environment. The proximity sensor may be designed as a magnetic proximity sensor, in which case an approach or collision of the user input unit with a real object is detectable by means of a magnetic field change. The user input unit may also have a plurality of different proximity sensors, with capacitive, inductive and/or magnetic proximity sensors being usable. Preferably, the proximity sensors are located at different positions on the user input unit, which makes it possible to additionally detect a contour or a profile of the real object during collision detection. The use of proximity sensors has the advantage that collisions can be detected before they occur, allowing the system to initiate corrective measures even before a collision.
  • In a further embodiment variant of the system according to the invention, the user input unit may comprise acceleration sensors, a collision being detected on the basis of a change in the acceleration of the user input unit. In this embodiment, computer-aided software can be used which evaluates an abrupt termination of an executed movement of the user input unit as a collision with a real object. The software is set up to distinguish between a movement stop caused by a user's normal stopping motion and a stop caused by a collision. Collision detection is also possible without additional acceleration sensors, since movement, acceleration and abrupt termination of the movement of the user input unit can be determined from the position determination of the user input unit in the reference system, as illustrated below.
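  • A sketch of this sensorless variant, under the assumption that the reference system delivers timestamped positions (function and array names are illustrative): speed and acceleration are derived numerically from the path-time data alone.

```python
import numpy as np

def motion_profile(positions: np.ndarray, timestamps: np.ndarray):
    """Derive speed and acceleration of the input unit from tracked
    path-time data, with no accelerometer in the device.

    positions:  (N, 3) array of tracked positions in metres
    timestamps: (N,)   array of sample times in seconds
    """
    velocity = np.gradient(positions, timestamps, axis=0)  # m/s per axis
    speed = np.linalg.norm(velocity, axis=1)                # scalar m/s
    acceleration = np.gradient(speed, timestamps)           # m/s^2
    return speed, acceleration
```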
  • Because the continuous collision detection with the user input unit can be based on different sensors, the system can be integrated into existing VR systems: the sensors already present in hand controllers of known systems, such as mechanical pushbuttons, proximity sensors or acceleration sensors, can be used for collision detection with real objects without the need for additional external sensors. Furthermore, since movement, acceleration and abrupt motion stops of the user input unit can be detected on the basis of its position determination, collision detection can be performed even without acceleration sensors in the user input unit.
  • The display unit may be a device worn on the user's head in the form of a pair of glasses. Preferably, the spatial position of the display unit can also be detected by the reference system in the real environment in order to reproduce the viewing position accordingly in the virtual environment. Furthermore, the portable device of the display unit may have at least one proximity sensor to detect collisions of the device with a real object.
  • According to an advantageous development of the system according to the invention, the display unit can have a device for viewing-direction detection, with which the virtual object image that the user fixates with his gaze can be identified. By detecting object fixation in the virtual environment, a collision event can be assigned to the real object of the virtual object image fixated by the user. This assignment facilitates representation correction, in particular when many virtual object images are arranged densely, since without considering the object fixation a collision event could be assigned to the wrong object. Camera-based eye-tracking systems, as known from the prior art, are suitable for viewing-direction detection.
  • Object identification of a virtual object image can also be achieved with a target marker displayed in the user's field of vision, which is positioned by the user's head movement so that the target marker and the virtual object image to be fixated coincide, or so that the target marker at least partially covers the virtual object image. Once this overlap is achieved, the object identification can be triggered by a confirmation of the user, for example by pressing a button or by voice, or automatically after a predetermined period of object fixation with the target marker has elapsed. A sketch of such a gaze-based assignment follows.
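  • In this sketch (all names hypothetical; the angular window is an assumed example value), the fixated object image is the one whose center lies closest to the gaze ray; a collision event detected while the fixation persists would then be assigned to the returned object image for the representation correction.

```python
import numpy as np

def find_fixated_object(gaze_origin, gaze_dir, objects, max_angle_deg=2.0):
    """Return the virtual object image whose center lies closest to
    the gaze ray, within an assumed 2 degree angular window, or None.
    Each object is assumed to expose a 3-D `position` attribute."""
    d = np.asarray(gaze_dir, dtype=float)
    d /= np.linalg.norm(d)
    best, best_angle = None, np.radians(max_angle_deg)
    for obj in objects:
        to_obj = np.asarray(obj.position, dtype=float) - np.asarray(gaze_origin)
        cos_a = np.dot(to_obj, d) / np.linalg.norm(to_obj)
        angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = obj, angle
    return best
```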
  • The invention further comprises a method for correcting the representation of an object image of a real object of a real environment that is virtually displayed in a virtual environment. In the method, the spatial position of at least one body part of a user, for example the hand, and/or of a user input unit is determined on the basis of a reference system in the real environment; a collision of the body part and/or the user input unit with a real object of the real environment is detected; and the detected collision position is compared directly with the display position of the virtual object image of the real object. In the event of a deviation between the real collision position and the display position of the virtual object image of the real object, the representation of the virtual object image in the virtual environment is corrected on the basis of the real collision position.
  • In the method according to the invention, a collision with a real object is detected and the collision position, which is determined from the real-space position of a body part or of a user input unit, is compared directly with the representation position of the virtual object image of the real object in the virtual environment. In the event of a deviation between the real collision position and the display position, the representation of the virtual object image can be corrected on the basis of the real collision position without interrupting the use of the virtual environment. The method according to the invention thus allows continuous representation correction or recalibration of virtual object images during ongoing use, without interrupting or significantly disturbing the immersion.
  • The real-space position of a body part of a user can be determined with an input unit or a sensor provided on or held by the body part, the input unit or sensor being spatially locatable within the reference system of the real environment.
  • The reference system can be formed by sensors arranged in real space, the sensors serving as reference points for a coordinate system in which the position of the user input unit can be determined.
  • A camera-based reference system can also be used to determine the position of the user input unit. In this case, the position of the user input unit is determined by means of cameras arranged in real space, in which the user input unit is detected, for example, by clearly visible, actively illuminating or passively reflecting patterns or markers in the camera image. Alternatively, sensors arranged on the user input unit can be used which react to the irradiation of a laser. In this case, a device for generating a rotating laser beam is arranged at a known position in real space, and the position of the user input unit is determined from the transit times of the laser beam as it sweeps over the sensors arranged on the user input unit. During position determination, position data of the user input unit may be provided in the form of path-time data, from which a speed and an acceleration of the user input unit can be derived.
  • According to an advantageous development of the method according to the invention, the viewing direction of the user can additionally be detected in order to identify, from the user's viewing direction, the virtual object image fixated by the user in the virtual environment; a detected collision event is then assigned to the real object of the fixated virtual object image. This has the advantage that a collision event with a real object can be correctly assigned to the virtual object image of that real object displayed in the virtual environment, avoiding errors in the representation correction.
  • In the method according to the invention, a physical presentation form and/or the representation dimensions of the virtual object image of the real object can be corrected on the basis of a detected collision event with a real object. In addition to shifting the display position of a virtual object image to the collision position with the real object, a contour or a profile of the virtual object image can therefore be corrected.
  • According to an advantageous embodiment variant of the method according to the invention, the collision position is detected when the user input unit is in contact with the real object. According to a further embodiment using a magnetic proximity sensor, the collision position can be detected by the sensor as the user input unit approaches a real object. With a magnetic proximity sensor, a collision is detected on the basis of a magnetic field change and combined with the known position data of the user input unit to obtain the concrete collision position for comparison with the display position of the virtual object image, as sketched below.
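  • A minimal sketch of the magnetic variant, assuming the sensor delivers sampled flux-density values (the threshold is a placeholder to be calibrated per sensor and environment):

```python
def magnetic_collision_index(flux_samples, threshold_tesla=5e-6):
    """Return the index of the first sample at which the magnetic flux
    density changes faster than the threshold between two samples,
    signalling an approach/collision, or None if no event occurs."""
    for i in range(1, len(flux_samples)):
        if abs(flux_samples[i] - flux_samples[i - 1]) > threshold_tesla:
            return i   # combine with the tracked position at this instant
    return None
```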
  • According to a further advantageous embodiment variant of the method according to the invention, an acceleration and/or a speed of the user input unit is detected, the collision position being determined on the basis of a change in the acceleration of the user input unit. An abrupt termination of a movement of the user input unit is evaluated by software that is set up to differentiate between a movement stop caused by a user's normal stopping motion and a stop caused by a collision.
  • For reasons of efficiency, the acceleration-based collision analysis may be limited by the software to objects located in the close vicinity of the user input unit. The close vicinity is the area of the real environment that the user can reach within one arm's length. In the acceleration-based collision analysis for determining the collision position, the course of the acceleration values of the real user input unit is observed: the first derivative of the instantaneous velocities is examined, after suitable filtering such as low-pass smoothing of the noise components, for large negative values, i.e. abrupt motion stops. High negative acceleration values typically occur when the user input unit physically collides with a real object. The point in time of an abrupt movement stop is compared with the determined position data of the user input unit, and the collision position is thus determined.
  • Since it is not necessarily evident on which side or where the user input unit collided with the real object, the acquired acceleration measurements apply to the user input unit as a whole; from the collision analysis of the software, the closest expected local collision position is used for the representation correction of the virtual object image. To identify user-induced movement stops, the time course of the velocity and acceleration values of ordinary movement sequences and of collision events can be considered, since the two differ. To improve the detection reliability of a collision event, it has proved advantageous to compare the second derivative of the velocity values of ordinary movement sequences and of collision events, since a clear difference can be ascertained here. For the identification of a collision event, a value range of the second derivative of the velocity values can therefore be determined that allows an unambiguous assignment. The software is expediently set up to form the second derivative of the detected velocity values of the user input unit. The speed and acceleration, as well as an abrupt termination of movement of the user input unit, can preferably be determined by the software from the tracking of the positions of the user input unit detected in real space; the path-time data provided during position determination can be evaluated for this purpose. A sketch of such a filtered-derivative detection follows.
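  • The following sketch combines the described steps (low-pass smoothing, first derivative for abrupt stops, second derivative to separate collisions from ordinary stopping motions); all thresholds are assumed placeholders that would have to be tuned on recorded motion data.

```python
import numpy as np

def detect_collision_index(speed, timestamps,
                           accel_limit=-40.0,    # m/s^2, assumed
                           jerk_limit=-1500.0):  # m/s^3, assumed
    """Index of a collision in the tracked speed profile, or None.

    A moving-average low pass suppresses tracking noise. A sample is
    treated as a collision when the filtered acceleration (first
    derivative of speed) drops sharply below accel_limit and the jerk
    (second derivative) falls below jerk_limit, since ordinary user
    stopping motions show a much softer second derivative."""
    kernel = np.ones(5) / 5.0
    smoothed = np.convolve(np.asarray(speed, dtype=float), kernel, mode="same")
    accel = np.gradient(smoothed, timestamps)   # 1st derivative of speed
    jerk = np.gradient(accel, timestamps)       # 2nd derivative of speed
    hits = np.where((accel < accel_limit) & (jerk < jerk_limit))[0]
    return int(hits[0]) if hits.size else None
```

  • The tracked position of the user input unit at the returned index then serves as the real collision position for the comparison with the display position.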
  • Alternatively, the speed and the acceleration can be determined by means of acceleration sensors of the user input unit.
  • Because of the various possibilities for determining a collision event, the method according to the invention can be used with existing VR systems, since hand controllers of known design usually already have at least one proximity sensor and/or acceleration sensor that can be used for collision detection with a real object. The method according to the invention can thus be integrated into existing systems without requiring further sensors.
  • According to an advantageous development of the method according to the invention, a spatial tolerance range can be specified in the virtual environment within which a representation correction for a virtual object image fixated by the user is permitted. This has the advantage that a fixated object image is not incorrectly corrected in the event of a collision with a real object that cannot be assigned to that virtual object image. The tolerance range is therefore a delimited area in virtual space within which a representation correction for the virtual object image is permitted, as in the sketch below.
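  • A sketch of the tolerance-range check (the radius is an assumed example value); a correction is applied only when the collision falls inside the range around the fixated object image:

```python
import numpy as np

def correction_permitted(collision_position, fixated_object_position,
                         tolerance_radius=0.15):
    """True if the collision lies within an assumed 15 cm spherical
    tolerance range around the fixated virtual object image, so that
    unrelated collisions do not trigger a false correction."""
    offset = np.asarray(collision_position) - np.asarray(fixated_object_position)
    return float(np.linalg.norm(offset)) <= tolerance_radius
```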
  • According to a further advantageous embodiment variant of the method according to the invention, provision can be made for at least one further collision position of the real object to be determined for the correction of the representation of the virtual object image.
  • Further details, features and advantages of embodiments of the invention will become apparent from the following description of exemplary embodiments.
  • The figures show:
    • Fig. 1: a schematic representation of a first embodiment of the invention
    • Fig. 2: a schematic representation of a second embodiment of the invention
    • Fig. 3: a schematic representation of a third embodiment of the invention
  • The same reference numerals are used in the figures to identify recurring features.
  • Fig. 1 shows a schematic representation of a first embodiment of the system according to the invention for correcting the representation of an object image (2), virtually displayed in a virtual environment, of a real object (1) of a real environment. The system has a display unit (8), which takes the form of glasses and is worn by a user (9) on the head (3). The display unit (8) is set up to provide a virtual user environment in which virtual object images (2) of real objects (1) of the user's (9) real environment can be displayed. For this purpose, the display unit (8) can be coupled to a computer, not shown, or contain a computer. The system further includes at least one user input unit (4) configured for user interaction with a virtual object image (2), whose spatial position can be detected by means of a reference system in the real environment. The user input unit (4), which is held by the user (9) in the hand (10) as a body part, can be displayed as a virtual user input unit (5), illustrated by a dashed line, in the virtual environment provided by the display unit.
  • The virtual object image (2), shown in dashed lines, is a reproduction of the real object (1) displayed in the virtual environment. This makes it possible for the user (9) to interact haptically with the real object (1) in the virtual environment, the user input unit (4) being displayed in the virtual environment as a virtual user input device (5), for example as a virtual tool.
  • The user input unit (4) is further set up to detect a collision with the real object (1) of the real environment. According to a particularly simple embodiment, the user input unit (4) has a pushbutton, not shown, which triggers on contact with the real object (1), whereby a collision is detected. From the position detection of the user input unit (4), a collision position can be determined, marked in Fig. 1 with the reference numeral (6). The detected collision position (6) is compared directly with the position of the virtual object image (2), whereby a deviation between the representation of the virtual object image (2) and the collision position (6) can be determined. If, as in Fig. 1, a deviation between the real collision position (6) and the presentation position of the virtual object image (2) is detected, a correction of the representation of the virtual object image (2) is performed on the basis of the real collision position (6), in which the virtual object image (2) is shifted in the virtual environment so that the collision of the real environment is displayed accordingly in the virtual environment.
  • According to an advantageous embodiment, not shown, a user input unit (4) with proximity sensors can be used for collision detection; these may be inductive, capacitive, magnetic or optical proximity sensors, or a combination thereof. Through the use of proximity sensors, collisions can be detected before they occur, so that the system can initiate measures for correcting the representation of a virtual object image (2) even before a collision.
  • Furthermore, the user input unit (4) may have acceleration sensors, not shown, a collision being detectable on the basis of a change in the acceleration of the user input unit (4). In this embodiment, computer-aided software is used which evaluates an abrupt termination of an executed movement of the user input unit (4) as a collision with the real object (1). The software is set up to distinguish between a movement stop caused by a normal stopping motion of the user (9) and a stop caused by a collision.
  • The reference system in which the spatial positions of the user input unit (4) and the display unit (8) can be determined may be formed by sensors, not shown, arranged in real space, the sensors serving as reference points for a coordinate system. The reference system is also used for the placement of virtual object images in the virtual space.
  • Fig. 2 shows a schematic representation of a second embodiment of the system according to the invention for correcting the representation of an object image (2), virtually displayed in a virtual environment, of a real object (1) of a real environment. The embodiment shown in Fig. 2 substantially corresponds to that shown in Fig. 1, with the difference that the display unit (8) has a sight-line detection device (8.1) with which the gaze (7) of the user (9) can be tracked. With the help of the sight-line detection device (8.1), a virtual object image (2) fixated by the user (9) in the virtual environment is identified, and a collision event of the user input unit (4) with the real object (1) is assigned to the identified virtual object image (2). The assignment of a collision event to a virtual object image (2) fixated by the user (9) facilitates the representation correction, in particular with a dense arrangement of many virtual object images, since without taking the object fixation into account a collision event could be misassigned and the representation correction carried out incorrectly.
  • Fig. 3 shows a schematic representation of a third embodiment of the system according to the invention for correcting the representation of an object image (2), virtually displayed in a virtual environment, of a real object (1) of a real environment. In this embodiment, the real object (1) has a contour (11) that is touched with the user input unit (4) held by the user (9) in the hand (10), whereby a collision is detected. From the position of the user input unit (4), a collision position (6) can be detected, on the basis of which a representation correction of the virtual object image (2) takes place, because the virtual object image (2) is not shown in collision with the virtual user input device (5) in the virtual environment. Proximity sensors of the user input unit (4), not illustrated, allow a scan of the contour (11), so that in addition to correcting the presentation position of the virtual object image (2), a correction of the virtual representation of the contour (11) can also be achieved.
  • LIST OF REFERENCE NUMBERS
    1 real object
    2 virtual object image
    3 head
    4 user input unit
    5 virtual user input device
    6 collision position
    7 line of sight, gaze
    8 display unit
    8.1 sight-line detection device
    9 user
    10 hand, body part
    11 contour

Claims (15)

  1. A system for correcting the representation of an object image (2), virtually displayed in a virtual environment, of a real object (1) of a real environment, comprising a display unit (8) for providing a virtual user environment in which virtual object images (2) of real objects (1) of a real environment of a user (9) can be displayed, and at least one user input unit (4) configured for user interaction with a virtual object image (2), whose spatial position can be detected using a reference system in the real environment and displayed in the virtual environment, the user input unit (4) also being set up to detect a collision with a real object (1) of the real environment, wherein a detected collision position (6) is compared with the position of the virtual object image (2) of the real object (1), so that in the event of a deviation between the real collision position (6) and the display position of the virtual object image (2) of the real object (1), the representation of the virtual object image (2) of the real object (1) can be corrected on the basis of the real collision position (6).
  2. The system according to Claim 1, characterized in that the user input unit (4) comprises at least one proximity sensor for detecting collisions with real objects (1) of the real environment.
  3. The system according to Claim 1 or 2, characterized in that a collision position can be detected on the basis of a change in an acceleration of the user input unit (4).
  4. The system according to one of Claims 1 to 3, characterized in that the spatial position of the display unit (8) can be detected on the basis of the reference system in the real environment.
  5. The system according to one of Claims 1 to 4, characterized in that the display unit (8) comprises a device wearable by a user (9) on the head (3).
  6. The system according to one of Claims 1 to 5, characterized in that the display unit (8) has a device for viewing-direction detection (8.1), with which a virtual object image (2) of the virtual environment fixated by the user (9) can be identified.
  7. A method for correcting the representation of an object image (2), virtually displayed in a virtual environment, of a real object (1) of a real environment, in which the spatial position of at least one body part (10) of a user (9) and/or of a user input unit (4) is detected on the basis of a reference system in the real environment, wherein a collision of the body part (10) and/or of the user input unit (4) with a real object (1) of the real environment is detected and the detected collision position (6) is compared directly with the display position of the virtual object image (2) of the real object (1), wherein in the event of a deviation between the real collision position (6) and the display position of the virtual object image (2) of the real object (1), the representation of the virtual object image (2) of the real object (1) in the virtual environment is corrected on the basis of the real collision position (6).
  8. The method according to Claim 7, characterized in that a viewing direction (7) of the user (9) is additionally detected in order to identify, on the basis of the user's viewing direction (7), a virtual object image (2) fixated by the user (9) in the virtual environment, wherein a detected collision event is assigned to the virtual object image (2) fixated by the user (9).
  9. The method according to Claim 7 or 8, characterized in that a physical representation form and/or representation dimensions of the virtual object image (2) of the real object (1) is/are corrected.
  10. The method according to one of Claims 7 to 9, characterized in that the collision position (6) is detected upon contact of the user input unit (4) with the real object (1).
  11. The method according to one of Claims 7 to 10, characterized in that the collision position (6) is detected by a magnetic sensor as the user input unit (4) approaches the real object (1).
  12. The method according to one of Claims 7 to 11, characterized in that an acceleration and/or a speed of the user input unit (4) is detected, the collision position (6) being determined on the basis of a change in an acceleration of the user input unit (4).
  13. The method according to Claim 12, characterized in that, for determining the collision position (6), the second derivative of a movement speed of the user input unit (4) is formed.
  14. The method according to one of Claims 7 to 13, characterized in that a spatial tolerance range is specified in the virtual environment within which a representation correction for a virtual object image (2) fixated by the user (9) is permitted.
  15. The method according to one of Claims 7 to 14, characterized in that, for the representation correction of the virtual object image (2), at least one further collision position (6) of the real object (1) is determined.
DE102017110726.6A 2017-05-17 2017-05-17 System and method for the representation correction of an object image virtually represented in a virtual environment of a real object of a real environment Pending DE102017110726A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE102017110726.6A DE102017110726A1 (en) 2017-05-17 2017-05-17 System and method for the representation correction of an object image virtually represented in a virtual environment of a real object of a real environment

Publications (1)

Publication Number Publication Date
DE102017110726A1 2018-11-22

Family

ID=64278479

Country Status (1)

Country Link
DE (1) DE102017110726A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030227470A1 (en) * 2002-06-06 2003-12-11 Yakup Genc System and method for measuring the registration accuracy of an augmented reality system
US20160353094A1 (en) * 2015-05-29 2016-12-01 Seeing Machines Limited Calibration of a head mounted eye tracking system
US20170099482A1 (en) * 2015-10-02 2017-04-06 Atheer, Inc. Method and apparatus for individualized three dimensional display calibration


Legal Events

Date Code Title Description
R012 Request for examination validly filed
R016 Response to examination communication