WO2001022149A1 - Method and system for time/motion compensation for head mounted displays - Google Patents


Info

Publication number
WO2001022149A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
hmd
data
position data
Prior art date
Application number
PCT/CA2000/001063
Other languages
French (fr)
Inventor
Eric C. Edwards
Original Assignee
Canadian Space Agency
Priority date
Filing date
Publication date
Application filed by Canadian Space Agency filed Critical Canadian Space Agency
Priority to CA2385548A priority Critical patent/CA2385548C/en
Priority to GB0206051A priority patent/GB2372169B/en
Priority to JP2001525460A priority patent/JP2003510864A/en
Priority to US10/088,747 priority patent/US7312766B1/en
Publication of WO2001022149A1 publication Critical patent/WO2001022149A1/en


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted

Definitions

  • the invention relates to a method and apparatus that provides a wearer of an
  • HMD with a stable frame of visual reference in cases where there may be time delays or unwanted motion within the visual capture/visual display systems.
  • an image shown on the display of a head-mounted display is offset relative to the field of view of the HMD until the camera position is again synchronised with the HMD position. Offsetting of the image results in areas of the display for which no image information is available. These display areas are provided fill data in the form of a solid shading or some feature set for providing visual cues. When the transformed images again overlap the display, the fill is no longer necessary.
  • a method of motion compensation for head mounted displays includes the following steps: providing an image from an image capture device to a head mounted display including a monitor having a field of view; providing camera position data associated with the image; providing head position data; adjusting the image location relative to the field of view of the monitor in accordance with the camera position data and the head position data; and, displaying portions of the image at the adjusted locations, those portions remaining within the field of view.
  • position data typically includes at least one of orientation data and location data.
  • Location data is also referred to as displacement data.
  • portions of the field of view without image data are filled with a predetermined fill. When none of the image data is for display within the field of view, the entire field of view is filled with the predetermined fill.
  • the image is adjusted by the following steps: determining an offset between the head mounted display position and the camera position; and, offsetting the image such that it is offset an amount equal to the offset between the head mounted display position and the camera position.
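The adjustment steps above can be sketched in code. This is a minimal illustration only: the sign conventions, the fixed pixels-per-degree scale, the gray fill value, and all names are assumptions for the sketch, not the patent's specification.

```python
import numpy as np

def adjust_image(image, cam_pose, hmd_pose, px_per_deg=(12.0, 12.0)):
    """Offset a captured frame within the HMD field of view by the
    difference between camera and HMD orientation (pitch, yaw in degrees).
    Display areas with no image data receive a predetermined gray fill."""
    d_pitch = cam_pose[0] - hmd_pose[0]
    d_yaw = cam_pose[1] - hmd_pose[1]
    rows = int(round(d_pitch * px_per_deg[0]))  # vertical shift, N pixels
    cols = int(round(-d_yaw * px_per_deg[1]))   # horizontal shift, M pixels
    out = np.full_like(image, 128)              # predetermined fill
    h, w = image.shape[:2]
    r_len, c_len = h - abs(rows), w - abs(cols)
    if r_len > 0 and c_len > 0:                 # some image remains in view
        src_r, dst_r = max(0, -rows), max(0, rows)
        src_c, dst_c = max(0, -cols), max(0, cols)
        out[dst_r:dst_r + r_len, dst_c:dst_c + c_len] = \
            image[src_r:src_r + r_len, src_c:src_c + c_len]
    return out
```

When the offset exceeds the field of view, `r_len` or `c_len` becomes non-positive and the entire field of view receives the fill, matching the behaviour described above.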
  • such a system is not limited by the accuracy of a predictive process nor by the time delay between image capture and image display. Instead, it is reactive, and uses sensed information on HMD position and camera position to formulate a transformation for the captured image.
  • the present invention has no limit to the time delays for which compensation is possible since the required head position information and camera position information are sensed at different times allowing compensation for any delay between sensing one and then sensing the other.
  • the present invention requires no knowledge of the time delay in the system and functions properly in the presence of non-constant time delays. There is no requirement that the time delay be measured and it is not used in determining the transform of the image.
  • Fig. 1 is a simplified block diagram of a system incorporating an HMD coupled to a first computer and in communication with a remote camera;
  • Fig. 2 is a simplified diagram showing axes of movement of the systems involved;
  • Fig. 3a is a simplified diagram showing a simulated view of an image appearing within the field of view of an HMD;
  • Fig. 3b is a simplified diagram showing a simulated view of a portion of the image of Fig. 3a offset vertically within the field of view of an HMD in response to a downward motion of a user's head;
  • Fig. 3c is a simplified diagram showing a simulated view of a portion of the image of Fig. 3a offset horizontally within the field of view of an HMD in response to a lateral motion of a user's head;
  • Fig. 3d is a simplified diagram showing a simulated view of a portion of the image of Fig. 3a tilted within the field of view of an HMD in response to a tilting motion of a user's head;
  • Fig. 3e is a simplified diagram showing a simulated view of a portion of the image of Fig. 3a tilted and offset both vertically and horizontally within the field of view of an HMD in response to a tilting motion combined with a lateral and a vertical motion of a user's head;
  • Fig. 4a is a simplified diagram showing a simulated view of an image appearing within the field of view of an HMD;
  • Fig. 4b is a simplified diagram showing a simulated view of a portion of the image of Fig. 4a offset horizontally within the field of view of an HMD in response to a lateral motion of a user's head;
  • Fig. 4c is a simplified diagram showing a simulated view of an image appearing within the field of view of an HMD including a portion of the image of Fig. 4a as well as additional image data captured and displayed within the field of view of an HMD when the camera motion is partially caught up with the lateral motion of a user's head;
  • Fig. 4d is a simplified diagram showing a simulated view of an image appearing within the field of view of an HMD including a portion of the image of Fig. 4a as well as additional image data captured and displayed within the field of view of an HMD after the camera motion is fully caught up with the lateral motion of a user's head;
  • Fig. 5 is a simplified block diagram of a system incorporating an HMD coupled to a first computer and in communication across a network with a second computer coupled to a remote camera;
  • Fig. 6 is a simplified flow diagram of a method according to the invention;
  • Fig. 7 is a simplified block diagram of a telepresence system communicating via a satellite communications link;
  • Fig. 8 is an image captured by a remote camera, as captured;
  • Fig. 9 is an image of the same image as that of Fig. 8 transformed within an endless image display space;
  • Fig. 10 is an image of the same image as that of Fig. 9 as displayed on a display having a finite image display space.
  • the present invention is described with reference to telepresence systems operating over long distances such that significant time delays occur between head motion and image display of an image for a current head orientation. It is, however, equally applicable when a camera drive mechanism provides insufficient response rate to allow comfortable viewing of images during normal head motion. It is also applicable in situations where unwanted and unmodeled motion of the camera is possible, such as when the camera is mounted on a moving platform.
  • Referring to Fig. 1, a simplified block diagram of a telepresence system is shown.
  • a head mounted display (HMD) 1 including a three-axes head tracker 3 is worn by an operator 5.
  • the HMD 1 is coupled with a first computer 7 and provides to the first computer 7 HMD values for the HMD position in the form of pitch 21, yaw 22, and roll 23 angles of the HMD 1 as shown in Fig. 2.
  • these HMD values relate to the head position of the operator 5.
  • These values are provided to the first computer 7 at intervals, preferably more than 100 times each second, though other intervals are also acceptable.
  • the HMD values are converted by the first computer 7 into control values for controlling positioning of a camera 11.
  • the control values are transmitted to a mechanism 13 for pointing the camera in order to affect camera orientation.
  • the mechanism controls pitch 24, yaw 25 and roll 26 of the camera 11.
  • the camera position moves in accordance with the movement of the HMD 1.
  • since the mechanism 13 for pointing the camera is physically coupled to the first computer 7, the camera 11 begins to move as soon as HMD motion is detected.
  • the lag between camera motion and HMD motion is determined by communication delays, which are very small, processing delays, which may be minimal, and pointing mechanism performance, which varies. These delays often result in an image provided from the camera 11 remaining static while the HMD 1 is in motion or moving substantially slower than the HMD motion.
  • the camera 11 is constantly acquiring images at a video data capture rate. Each image is transmitted to the first computer for processing, if required, and for provision to the HMD 1.
  • the remote system also provides to the first computer 7 camera position information associated with each image.
  • each frame received by the first computer 7 has associated therewith camera position information.
  • the camera position information is preferably relative to a known orientation. Alternatively, it is transformed by the first computer 7 into position information relative to a known camera orientation and in a coordinate space analogous to that of the HMD 1.
  • the HMD position values are used to determine a current HMD orientation in a coordinate space analogous to that of the camera 11. As such, an offset between camera orientation and HMD orientation is determinable. Since the HMD 1 is being worn by an operator 5 the HMD orientation is directly correlated to the position of the head of the operator 5. Of course, the direct correlation is related to sensed position data and in use is generally an approximate direct correlation due to a refresh rate of the HMD position sensor. The offset between the camera orientation and the HMD orientation is related to a delay between the local system and the remote system. Therefore when a non-zero offset is determined, the first computer offsets the image provided by the camera relative to the field of view of the HMD in order to compensate for the determined offset.
  • Referring to Fig. 3a, the image is shown for a zero offset between camera orientation and HMD orientation. This is the steady state of the feedback system since the HMD 1 is directed in a same direction as the camera 11 and the image displayed on the display within the HMD 1 is the same as the image captured by the camera.
  • when the HMD orientation is angled down from the camera orientation, the image is offset in a vertical direction as shown in Fig. 3b.
  • when the camera orientation is offset horizontally from the HMD orientation, the image is offset horizontally as shown in Fig. 3c.
  • in Figs. 3d and 3e, the image is rotated and offset relative to the field of view because the camera orientation is rotated and offset relative to the HMD orientation.
  • Referring to Figs. 4a, 4b, 4c and 4d, the field of view of the HMD 1 is shown during a left turn of the operator's head.
  • the exact image captured by the camera 11 is shown in the display of the HMD 1 at Fig. 4a.
  • the operator 5 "expects" the image to move to the right since the image is not part of the operator 5 and is within their field of view. This expectation is either conscious or unconscious. Imagining that the image remains static as the HMD moves, it is clear that disorientation would result since individuals take cues from their visual field of view during head movement.
  • the image is offset to a location approximately the same as the orientation difference between the HMD 1 and the camera 11.
  • the lighthouse is shifted out of the field of view by the rotation of the head.
  • an operator expects static objects within the field of view, such as a lighthouse, to shift Δ degrees within the field of view. This is important to maintaining comfort of the operator in their personal vision system (their eyes and their mind).
  • the camera begins to move to match its orientation to that of the HMD.
  • more of the scene within the operator's field of view is now available from the camera 11.
  • the field of view of the camera and that of the HMD overlap more.
  • the field of view of the HMD again shows an entire image captured by the camera as shown in the image of Fig. 4d.
  • Referring to Fig. 5, another embodiment of the invention is shown for use on a network.
  • two computers 7 and 15 communicate via a network or networks 17.
  • the first computer 7 includes the HMD 1 as a peripheral thereof.
  • the second computer 15 includes the camera 11 and mechanism 13 for pointing the camera as peripherals thereof.
  • the processing is performed in either of the first computer 7 or the second computer 15 though the image processing is preferably performed in the first computer 7 in case of network delays that could cause image offset and result in disorientation of an operator 5.
  • when network delays are known to be significant, it is important that image processing is performed on the first computer 7.
  • Referring to Fig. 6, a simplified flow diagram of a method of performing the invention is shown.
  • An image is captured by the camera 11.
  • a sensor captures position data in the form of camera orientation values for pitch, roll and yaw.
  • the position data is preferably captured concurrently with the image. Alternatively, it is captured approximately at a same time but offset by a finite amount either immediately after image capture or immediately before.
  • the position data is then associated with the image data.
  • a simple method of associating the data is by encoding the position data with the image data either as header or trailer information.
  • the image and the position data could also be identified with an associating identifier such as an image frame number.
  • the two data are transmitted in parallel in a synchronous environment.
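As an illustration of header-encoded association, each frame might carry a small fixed header holding its frame number and camera orientation. The 16-byte little-endian layout below is invented for the sketch, not taken from the patent.

```python
import struct

# Hypothetical wire format: frame number plus pitch/yaw/roll as 32-bit
# floats, prepended to the raw image bytes so every frame arrives with
# the camera position sensed at approximately its capture time.
HEADER = struct.Struct("<Ifff")

def pack_frame(frame_no, pitch, yaw, roll, image_bytes):
    """Associate position data with image data as header information."""
    return HEADER.pack(frame_no, pitch, yaw, roll) + image_bytes

def unpack_frame(payload):
    """Recover the frame number, camera pose, and image bytes."""
    frame_no, pitch, yaw, roll = HEADER.unpack_from(payload)
    return frame_no, (pitch, yaw, roll), payload[HEADER.size:]
```

The frame number alternatively serves as the associating identifier when image and position data travel on separate channels.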
  • the image and position data are then transmitted to the first computer 7.
  • once the image and position data are received, they are prepared for processing at the first computer 7.
  • the position data of the HMD 1 is acquired by the first computer 7 and is used to transform the image in accordance with the invention.
  • the transformed image is provided to the display and is displayed thereon to the operator 5. Because the HMD position data is gathered immediately before it is needed, the delay between HMD position data capture and display of the transformed image is very small and results in little or no operator disorientation.
  • position data is provided to the mechanism 13 at intervals and the mechanism moves the camera 11 in accordance with received position data and a current orientation of the camera 11.
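The pointing behaviour just described can be approximated as a slew-rate-limited step toward the commanded orientation each control interval. This is a hypothetical one-axis sketch; the actual dynamics of the mechanism 13 are not specified in the document.

```python
def step_toward(current_deg, target_deg, max_step_deg):
    """Advance one camera axis toward the HMD-commanded orientation,
    limited by the pointing mechanism's slew rate per interval."""
    delta = target_deg - current_deg
    if abs(delta) <= max_step_deg:
        return target_deg          # mechanism catches up this interval
    return current_deg + (max_step_deg if delta > 0 else -max_step_deg)
```

Iterating this until the returned value equals the target models the camera "catching up" with the HMD over several frames, during which the displayed image offset shrinks to zero.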
  • the step of transforming the image comprises the following steps, some of which are performed in advance.
  • a correlation between angular movement and display or image pixels is determined such that an offset of Δ degrees results in displacement of the image by N pixels in a first direction and by M pixels in a second other direction.
  • a transform for rotating the image based on rotations is also determined.
  • the transforms are sufficiently simple to provide fast image processing. That said, a small image processing delay is acceptable because it forms substantially all of the delay in displaying the data.
  • when the image data is received, it is stored in memory for fast processing thereof.
  • the HMD position data is acquired and is compared to the camera position data. The difference is used in performing the transform to correct the image position for any HMD motion unaccounted for by the mechanism 13, as of yet. Also, the method corrects for unintentional movements of the camera 11 when the camera position sensor is independent of the mechanism 13, for example with an inertial position sensor.
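The roll component of the offset calls for a rotation transform. A naive inverse-mapped, nearest-neighbour software version might look as follows; this is a sketch only, standing in for the optimized software or hardware transforms the text contemplates.

```python
import numpy as np

def rotate_about_center(image, roll_deg, fill=128):
    """Rotate the frame by the roll offset between camera and HMD,
    filling display pixels that map outside the source image."""
    h, w = image.shape[:2]
    t = np.deg2rad(roll_deg)
    c, s = np.cos(t), np.sin(t)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse-map each display pixel back into the captured image
    xc, yc = xs - (w - 1) / 2.0, ys - (h - 1) / 2.0
    sx = np.round(c * xc + s * yc + (w - 1) / 2.0).astype(int)
    sy = np.round(-s * xc + c * yc + (h - 1) / 2.0).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.full_like(image, fill)
    out[ys[ok], xs[ok]] = image[sy[ok], sx[ok]]
    return out
```

Inverse mapping guarantees every display pixel is assigned exactly once, either from the source image or from the predetermined fill.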
  • a general purpose processor is used to transform the image.
  • a hardware implementation of the transform is used.
  • a hardware implementation is less easily modified, but has a tremendous impact on performance.
  • using parallel hardware transformation processors, an image can be transformed in a small fraction of the time necessary for performing a software transformation of the image.
  • Referring to Fig. 7, a satellite based telepresence system is shown.
  • delays in the order of seconds between head motion and image display result. Further, the delays are not always predictable.
  • an HMD 101 is shown positioned on the head of an operator 5.
  • the HMD is provided with a head tracker 103 for sensing position data relative to the HMD.
  • the HMD is also coupled with a computer 107 for providing display data to the HMD and for providing the HMD position data to a communications link 108.
  • the communications link 108 uplinks the HMD position data to a satellite 117 from which it is transmitted to a transceiver 116.
  • a computer 115 in communication with the transceiver provides the data to a gimbal for repositioning a camera in accordance therewith.
  • the camera 111 captures images which are provided along a reverse communication path - computer 115, transceiver 116, satellite 117, communications link 108 - to the computer 107.
  • a different return path is used.
  • There the image data is processed for display within the HMD 101. With the images, camera position data sensed by a sensor 114 is also provided. The camera position data is associated with an image or images captured at approximately a time the camera 111 was in that sensed position.
  • the computer 107 uses the camera position data and the image along with data received from the head tracker 103 to transform the image in accordance with the invention. As is evident, the delay between HMD motion and camera motion is measurable in seconds. The delay between camera image capture and receipt of the image at the computer 107 is also measurable in seconds. As such, significant disorientation of the user results absent application of the present invention.
  • an image captured by the camera 111 is shown.
  • the image is displayed as captured when the HMD and the camera orientations are aligned, the camera orientation at a time of image capture and the HMD orientation at a time of display. If the orientations are offset one from another, the image is shifted within the field of view of the operator as shown in Fig. 9. Since there is no image data beyond the camera imaging field of view, the remainder of the display area is shaded with a neutral colour such as gray. Alternatively, the remainder of the display area is shaded to provide reference points to further limit disorientation of the operator 105. Further alternatively, the portion of the field of view for which no image data is available is left blank. Typically blank areas are black in order not to distract an operator. Referring to Fig. 10, when the camera 111 orientation is "caught up" with the HMD orientation, the field of view of the HMD again shows an entire image captured by the camera.
  • the camera captures images of areas larger than can be displayed and only a portion of the image is displayed. This is considered less preferable since it increases the bandwidth requirements and often for no reason as the additional data is not displayed.
  • all types of motion are compensated for, including inaccuracies of the mechanism 13, delays induced by communications, delays induced by the mechanism 13, processing delays, fine operator motions, and so forth.
  • each image is accurately aligned on the display within a time delay error related only to the processing and the delay in reading HMD sensor data.
  • discontinuous scene changes are changed into smooth transitions in accordance with the expected visual result.
  • Optionally, the image data is processed prior to display thereof in order to determine features or locations within the image data to highlight or indicate within the displayed image. For example, contrast may be improved for generally light or dark images. Also, features may be identified and labeled or highlighted. Alternatively, icons or other images are superimposed on the displayed image without processing thereof.
  • control values are determined in the mechanism for pointing the camera instead of by the first computer.
  • the HMD position data is transmitted to the remote system wherein a camera movement related to the HMD movement is determined and initiated.
  • the above described embodiment compensates for orientation - motion about any of three rotational axes.
  • the invention compensates for displacement - linear motion along an axis.
  • the invention compensates for both linear motion and motion about any of the rotational axes. Displacement and orientation are both forms of position and data relating to one or both is referred to here and in the claims, which follow, as position data.
  • the above described embodiment does not correct images for perspective distortion. Doing so is feasible within the concept of time/motion compensation according to the invention, however it is not generally applicable to use with a single camera, since the depth of field of the observed scene varies. It would require capturing of depth data using a range sensor or a three-dimensional vision system.
  • areas within the field of view that do not correspond to displayed image locations are filled with current image data relating to earlier captured images for those locations.
  • any earlier captured images are deemphasized within the field of view in order to prevent the operator from being confused by "stale" image data.
  • each image received from the camera is buffered with its associated position data.
  • the processor determines another image having image data for those locations within the field of view, the locations determined in accordance with the transform performed based on the camera position data associated with the earlier captured image and with the current HMD position data.
  • the image data is then displayed at the determined location(s) in a "transparent" fashion. For example, it may be displayed with a lower contrast appearing almost ghostlike. Alternatively, the colours are faded to provide this more ghostlike appearance. Further alternatively, it is displayed identically to the current image data.
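The "ghostlike" presentation of buffered imagery could be sketched as blending stale pixels toward the fill colour wherever the current transformed frame carries no data. The sketch assumes the fill value marks empty areas and uses an arbitrary fading factor; both are illustrative choices.

```python
import numpy as np

def ghost_fill(current, stale, alpha=0.4, fill=128):
    """Fill areas of the current frame holding only fill data with
    earlier captured imagery at reduced contrast, so "stale" image
    data reads as ghostlike rather than live."""
    out = current.astype(np.float32)
    missing = current == fill                  # no current image data here
    faded = fill + alpha * (stale.astype(np.float32) - fill)
    out[missing] = faded[missing]
    return out.astype(current.dtype)
```

With `alpha=1.0` the stale data is displayed identically to the current image data, the last alternative mentioned above.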

Abstract

A method and system for time/motion compensation for use with head mounted displays is disclosed. According to the method, a remote camera captures an image for display on a head-mounted display (HMD) including a viewing window. The image and camera position data are transmitted to a system including the HMD for display to a wearer of the HMD. The HMD position is determined. An offset between the HMD position and a known position of the HMD is determined, as is an offset between the camera position and a known position of the camera. The image is offset relative to the viewing window based on the difference between the two determined offsets.

Description

Method and System for Time/motion Compensation for Head Mounted Displays
Field of the Invention
The present invention generally relates to telepresence systems and more particularly relates to motion compensation in telepresence systems.
Background of the Invention
The field of remote control has come a long way since the days of watching a model aircraft fly under the control of a handheld controller. Robotics and remote robotic manipulation have created a strong and pressing need for more remote and better remote control systems. Obviously, an ideal form of remote control involves providing an operator with all the sensations of operating the remote robot without the inherent dangers, travel, and so forth. In order to achieve this, a telepresence system is used.
Telepresence systems are sensory feedback systems for allowing sensing and monitoring of remote systems. A typical telepresence sensor is a camera and a head mounted display. The system provides visual feedback from a remote location to an operator. For example, in a telepresence system for an automobile, the front windshield is provided with a camera. The controls of the vehicle are provided with actuators for automatically manipulating same. An operator is provided with a duplicate of the cabin of the car. The windshield is replaced with a display and the controls are linked via communications to the actuators within the vehicle. Turning of the steering wheel in the cabin of the car causes the steering wheel to turn in the vehicle. Similarly, the camera captures images in front of the car and they are displayed on the display in the cabin of the car.
Presently, there is a trend toward providing the visual feedback using a head mounted display (HMD). A head mounted display is a small display or two small displays mounted for being worn on a user's head. Advantageously, an HMD with two displays provides stereo imaging allowing a user to perceive depth of field. Alternatively, such an HMD provides two identical images, one to each display. Unfortunately, the head mounted display only presents a user with information from approximately in front of the user. Thus, when a user turns their head, the image seen and the expected image differ. Therefore, the camera is mounted on a mechanism which moves in accordance with detected HMD movement. Thus, the image before the user is in accordance with the user's head position.
Generally, it is an object of telepresence systems to provide a visual sensation of being in the place of the robot and a control system for controlling the robot as well. Thus, telepresence systems aim to provide feedback that is appropriate to different situations.
Unfortunately, a camera does not move in exact synchronisation with the HMD so the image is not perfectly aligned with the expectations of the user during head motion. This misalignment can result in disorientation and nausea on the part of an operator.
The disclosures in U.S. Patent 5,579,026 issued on Nov. 26, 1996 in the name of Tabata and in U.S. Patent 5,917,460 issued on Jun. 29, 1999 in the name of Kodama focus on image display for use in, for example, virtual reality and games. In these patents is described a head mounted display in which the position of the projected image can be displaced in response to a control unit or in response to the rotational motion of the operator's head. The essence of the head-tracking implementation is that, from the user's perspective, the image can be made to remain substantially stationary in space during head movements, by being manipulated in a manner opposite to the movements. Significantly, the patents do not relate to visual telepresence using slaved cameras. In the slaved camera implementation, the camera should follow the motion of the HMD and, as such, compensation for HMD motion is unnecessary since the image is always of a direction in which the head is directed.
Further because U.S. Patent 5,579,026 relates to displaying a simulated planar image, such as a simulation of a television screen located in virtual space in front of the user, the patent provides for a fixed frame of reference relative to a wearer of the HMD. The images in any direction are simulated thus being formed as needed. Unfortunately, in telepresence systems, often the video data relating to a particular direction of view is unavailable. This complicates the system significantly and as such, the prior art relating to video data display is not truly applicable and, one of skill in the art would not refer to such.
In U.S. Patent 5,917,460 issued on Jun. 29, 1999 in the name of Kodama a system addressing the three-axes displacement (up/down, left/right, frontwards/backwards) of an HMD is provided. The displacement appears to be linear and is accommodated through a mechanical mechanism. The displays are moved in response to detected movement of a head and as such, objects remain somewhat stationary from the visual perspective of the user.
It is not well suited to use in telepresence wherein a camera tracks the motion of the HMD. One of skill in the art, absent hindsight, would not be drawn to maintaining a visual reference when a head is turned, for a telepresence system wherein a camera is rotated in response to head movement. Of course, the different problem results in a different solution.
For example, in telepresence systems, the delay between camera image capture and head motion is often indeterminate. It is not a workable solution to implement the system of the above referenced patents to solve this problem. Because of the unknown delays caused by camera response time and communication delays, the solution is not trivial.
In U.S. Patent 5,933,125 a system is disclosed using prediction of the head movement to pre-compensate for the delay expected in the generation of a virtual image, nominally in a simulated environment. By this means, a time lag in the generation of imagery is compensated for by shifting the scene to provide a stable visual frame of reference. This method is applicable to short delays and small displacements, where head tracking information can be used to predict the next head position with reasonable accuracy. The patent discloses 100 msec as a normal value. Effective prediction of head motion is aided by comprehensive information about head movement, including angular head velocity and angular acceleration. The errors induced by small head movements are small, and such movements typically occur over a short period of time. The disclosed embodiments rely on knowledge of the time delay, which is nominally considered to be constant.
Unfortunately, when the time delays grow large allowing for substantial motion of a head, the errors in the predictive algorithm are unknown and the system is somewhat unworkable.
Furthermore, US 5,933,125 cannot compensate for unanticipated image movement, only that which occurs in correct response to the operator's head movement. Also, it does not relate to visual telepresence systems using remote slave cameras. It would be highly advantageous to provide a system that does not rely on any form of prediction for compensation and which works with variable delays between image capture and image display.
Object of the Invention
In order to overcome these and other shortcomings of the prior art, it is an object of the invention to provide a method of compensating for time delays between head motion and camera motion in telepresence systems.
Summary of the Invention
The invention relates to a method and apparatus that provides a wearer of an
HMD with a stable frame of visual reference in cases where there may be time delays or unwanted motion within the visual capture/visual display systems.
According to the invention, in order to eliminate some of the disorientation caused by time delays in camera motion when a head motion occurs, an image shown on the display of a head-mounted display (HMD) is offset relative to the field of view of the HMD until the camera position is again synchronised with the HMD position. Offsetting the image results in areas of the display for which no image information is available. These display areas are provided with fill data in the form of a solid shading or a feature set for providing visual cues. When the transformed images again overlap the display, the fill is no longer necessary.
In accordance with the invention there is provided a method of motion compensation for head mounted displays. The method includes the following steps: providing an image from an image capture device to a head mounted display including a monitor having a field of view; providing camera position data associated with the image; providing head position data; adjusting the image location relative to the field of view of the monitor in accordance with the camera position data and the head position data; and, displaying portions of the image at the adjusted locations, those portions remaining within the field of view.
Typically, position data includes at least one of orientation data and location data. Location data is also referred to as displacement data. Typically, portions of the field of view without image data are filled with a predetermined fill. When none of the image data is for display within the field of view, the entire field of view is filled with the predetermined fill.
For example, the image is adjusted by the following steps: determining an offset between the head mounted display position and the camera position; and, offsetting the image such that it is offset an amount equal to the offset between the head mounted display position and the camera position.
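These two steps — determining the offset between the HMD position and the camera position, and displacing the image by a proportional amount — can be sketched as follows (a minimal Python illustration; the function names and the proportionality gain are assumptions, not from the patent):

```python
def determine_offset(hmd_pos, cam_pos):
    """Per-axis difference between the HMD position and the camera position."""
    return tuple(h - c for h, c in zip(hmd_pos, cam_pos))

def offset_image(offset, gain=1.0):
    """Image displacement proportional to the HMD/camera offset."""
    return tuple(gain * d for d in offset)

# The HMD has turned to yaw 30 degrees while the camera is still at yaw 10:
off = determine_offset((0.0, 30.0, 0.0), (0.0, 10.0, 0.0))  # → (0.0, 20.0, 0.0)
```

In the simplest case the gain is unity, so the image is displaced by exactly the angular gap the camera has yet to close.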
Advantageously, such a system is not limited by the accuracy of a predictive process nor by the time delay between image capture and image display. Instead, it is reactive, and uses sensed information on HMD position and camera position to formulate a transformation for the captured image. The present invention has no limit to the time delays for which compensation is possible since the required head position information and camera position information are sensed at different times allowing compensation for any delay between sensing one and then sensing the other.
Further advantageously, the present invention requires no knowledge of the time delay in the system and functions properly in the presence of non-constant time delays. There is no requirement that the time delay be measured and it is not used in determining the transform of the image.
Brief Description of the Drawings
The invention will now be described in conjunction with the drawings in which:
Fig. 1 is a simplified block diagram of a system incorporating an HMD coupled to a first computer and in communication with a remote camera;
Fig. 2 is a simplified diagram showing axes of movement of the systems involved;
Fig. 3a is a simplified diagram showing a simulated view of an image appearing within the field of view of an HMD;
Fig. 3b is a simplified diagram showing a simulated view of a portion of the image of Fig. 3a offset vertically within the field of view of an HMD in response to a downward motion of a user's head;
Fig. 3c is a simplified diagram showing a simulated view of a portion of the image of Fig. 3a offset horizontally within the field of view of an HMD in response to a lateral motion of a user's head;
Fig. 3d is a simplified diagram showing a simulated view of a portion of the image of Fig. 3a tilted within the field of view of an HMD in response to a tilting motion of a user's head;
Fig. 3e is a simplified diagram showing a simulated view of a portion of the image of Fig. 3a tilted and offset both vertically and horizontally within the field of view of an HMD in response to a tilting motion combined with a lateral and a horizontal motion of a user's head;
Fig. 4a is a simplified diagram showing a simulated view of an image appearing within the field of view of an HMD;
Fig. 4b is a simplified diagram showing a simulated view of a portion of the image of Fig. 4a offset horizontally within the field of view of an HMD in response to a lateral motion of a user's head;
Fig. 4c is a simplified diagram showing a simulated view of an image appearing within the field of view of an HMD including a portion of the image of Fig. 4a as well as additional image data captured and displayed within the field of view of an HMD when the camera motion is partially caught up with the lateral motion of a user's head;
Fig. 4d is a simplified diagram showing a simulated view of an image appearing within the field of view of an HMD including a portion of the image of Fig. 4a as well as additional image data captured and displayed within the field of view of an HMD after the camera motion is fully caught up with the lateral motion of a user's head;
Fig. 5 is a simplified block diagram of a system incorporating an HMD coupled to a first computer and in communication across a network with a second computer coupled to a remote camera;
Fig. 6 is a simplified flow diagram of a method according to the invention;
Fig. 7 is a simplified block diagram of a telepresence system communicating via a satellite communications link;
Fig. 8 is an image captured by a remote camera as captured;
Fig. 9 is an image of the same image as that of fig. 8 transformed within an endless image display space; and,
Fig. 10 is an image of the same image as that of fig. 9 as displayed on a display having a finite image display space.
Detailed Description of the Invention
The present invention is described with reference to telepresence systems operating over long distances such that significant time delays occur between head motion and image display of an image for a current head orientation. It is, however, equally applicable when a camera drive mechanism provides insufficient response rate to allow comfortable viewing of images during normal head motion. It is also applicable in situations where unwanted and unmodeled motion of the camera is possible, such as when the camera is mounted on a moving platform.
Referring to Fig. 1, a simplified block diagram of a telepresence system is shown. A head mounted display (HMD) 1 including a three-axes head tracker 3 is worn by an operator 5. The HMD 1 is coupled with a first computer 7 and provides to the first computer 7 HMD values for the HMD position in the form of pitch 21, yaw 22, and roll 23 angles of the HMD 1 as shown in Fig. 2. Of course, since the HMD 1 is being worn by an operator 5, these HMD values relate to the head position of the operator 5. These values are provided to the first computer 7 at intervals, preferably more than 100 times each second, though other intervals are also acceptable. The HMD values are converted by the first computer 7 into control values for controlling positioning of a camera 11. The control values are transmitted to a mechanism 13 for pointing the camera in order to affect camera orientation. As is seen in Figure 2, the mechanism controls pitch 24, yaw 25 and roll 26 of the camera 11. In response to the control values, the camera position moves in accordance with the movement of the HMD 1. When the mechanism 13 for pointing the camera is physically coupled to the first computer 7, the camera 11 begins to move when HMD motion is detected. The lag between camera motion and HMD motion is determined by communication delays, which are very small, processing delays, which may be minimal, and pointing mechanism performance, which varies. These delays often result in an image provided from the camera 11 remaining static while the HMD 1 is in motion or moving substantially slower than the HMD motion. Of course, since the operator's mind expects motion within the visual frame, this is disconcerting and often results in nausea and disorientation.
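The conversion of sensed HMD values into control values for the pointing mechanism can be sketched as follows (a minimal Python illustration; the function name and the travel limits are assumptions, not taken from the patent):

```python
def hmd_to_control_values(pitch, yaw, roll, lo=-90.0, hi=90.0):
    """Map sensed HMD angles (degrees) to pointing-mechanism commands,
    clamped to the mechanism's assumed travel limits."""
    clamp = lambda a: max(lo, min(hi, a))
    return clamp(pitch), clamp(yaw), clamp(roll)

# A yaw of 120 degrees exceeds the assumed 90-degree travel limit:
cmd = hmd_to_control_values(10.0, 120.0, -5.0)  # → (10.0, 90.0, -5.0)
```

In practice such a routine would run at the sensor interval (preferably more than 100 times per second, as above), with the mechanism closing the remaining gap at its own response rate.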
This problem is even more notable when communication delay times are significant such as when used for terrestrial control of systems in space. There, the delay is in the order of seconds and, as such, the disorientation of an operator during HMD motion is significant. Significantly, disorientation is a cause of operator fatigue resulting in limited operator use of a system or limited use of a system during a day.
Referring again to Fig. 1, the camera 11 is constantly acquiring images at a video data capture rate. Each image is transmitted to the first computer for processing, if required, and for provision to the HMD 1. According to the invention, the remote system also provides camera position information to the first computer 7 and associated with each image. Thus, each frame received by the first computer 7 has associated therewith camera position information. The camera position information is preferably relative to a known orientation. Alternatively, it is transformed by the first computer 7 into position information relative to a known camera orientation and in a coordinate space analogous to that of the HMD 1.
The HMD position values are used to determine a current HMD orientation in a coordinate space analogous to that of the camera 11. As such, an offset between camera orientation and HMD orientation is determinable. Since the HMD 1 is being worn by an operator 5 the HMD orientation is directly correlated to the position of the head of the operator 5. Of course, the direct correlation is related to sensed position data and in use is generally an approximate direct correlation due to a refresh rate of the HMD position sensor. The offset between the camera orientation and the HMD orientation is related to a delay between the local system and the remote system. Therefore when a non-zero offset is determined, the first computer offsets the image provided by the camera relative to the field of view of the HMD in order to compensate for the determined offset. Referring to Figs. 3a, 3b, 3c, 3d and 3e, some examples of image displays are shown. In Fig. 3a, the image is shown for a zero offset between camera orientation and HMD orientation. This is the steady state of the feedback system since the HMD 1 is directed in a same direction as the camera 11 and the image displayed on the display within the HMD 1 is the same as the image captured by the camera. When the HMD orientation is angled down from the camera position, the image is offset in a vertical direction as shown in Fig. 3b. When the camera orientation is offset horizontally, the image is offset horizontally as shown in Fig. 3c. In Figs. 3d and Fig. 3e, the image is rotated and offset relative to the field of view because the camera orientation is rotated and offset relative to the HMD orientation.
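The offsets illustrated in Figs. 3a through 3e reduce to placing the captured image at a displaced position within the display and filling the uncovered pixels, which can be sketched as follows (illustrative Python; the routine name and the gray fill value 128 are assumptions):

```python
FILL = 128  # neutral gray for display areas with no image data

def place_image(image, shift_x, shift_y, fov_w, fov_h, fill=FILL):
    """Offset the captured image within the HMD field of view,
    filling uncovered display pixels with a predetermined fill."""
    out = [[fill] * fov_w for _ in range(fov_h)]
    for y, row in enumerate(image):
        for x, px in enumerate(row):
            dx, dy = x + shift_x, y + shift_y
            if 0 <= dx < fov_w and 0 <= dy < fov_h:
                out[dy][dx] = px
    return out

# A 2x2 image shifted right by one pixel in a 3x2 field of view:
view = place_image([[1, 2], [3, 4]], shift_x=1, shift_y=0, fov_w=3, fov_h=2)
# view == [[128, 1, 2], [128, 3, 4]] — the uncovered column is filled gray
```

A zero shift reproduces the steady state of Fig. 3a, where the displayed image is identical to the captured image.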
Though the described instantaneous corrections shown in Figs. 4a, 4b, 4c and 4d appear simple, the steady state nature of the system requires an ever changing imaging perspective and display perspective. Thus a comparison is necessary between two dynamic sets of sensed position data.
Referring to Figs. 4a, 4b, 4c and 4d, field of view is shown for the HMD 1 during a left turn of the operator's head. At first (before the head turn) in a steady state, the exact image captured by the camera 11 is shown in the display of the HMD 1 at Fig. 4a. When the head turns, the operator 5 "expects" the image to move to the right since the image is not part of the operator 5 and is within their field of view. This expectation is either conscious or unconscious. Imagining that the image remains static as the HMD moves, it is clear that disorientation would result since individuals take cues from their visual field of view during head movement. In order to provide the operator 5 with the "expected" displacement of objects in the image, the image is offset to a location approximately the same as the orientation difference between the HMD 1 and the camera 11. For example, in the image of Fig. 4b the lighthouse is shifted out of the field of view by the rotation of the head. Turning the HMD α degrees, an operator expects static objects within the field of view, such as a lighthouse, to shift α degrees within the field of view. This is important to maintaining comfort of the operator in their personal vision system (their eyes and their mind). At a same time, the camera begins to move to match its orientation to that of the HMD. Thus, as shown in the image of Fig. 4c more of the scene within the operator's field of view is now available from the camera 11. As the camera orientation "catches up" with the HMD orientation, the field of view of the camera and that of the HMD overlap more. When the camera 11 is "caught up," the field of view of the HMD again shows an entire image captured by the camera as shown in the image of Fig. 4d.
Referring to Fig. 5, another embodiment of the invention is shown for use on a network. Here for example, two computers 7 and 15 communicate via a network or networks 17. The first computer 7 includes the HMD 1 as a peripheral thereof. The second computer 15 includes the camera 11 and mechanism 13 for pointing the camera as peripherals thereof. Here the processing is performed in either of the first computer 7 or the second computer 15 though the image processing is preferably performed in the first computer 7 in case of network delays that could cause image offset and result in disorientation of an operator 5. Of course, when network delays are known to be significant, it is important that image processing is performed on the first computer.
Referring to Fig. 6, a simplified flow diagram of a method of performing the invention is shown. An image is captured by the camera 11. A sensor captures position data in the form of camera orientation values for pitch, roll and yaw. The position data is preferably captured concurrently with the image. Alternatively, it is captured approximately at a same time but offset by a finite amount either immediately after image capture or immediately before. The position data is then associated with the image data. A simple method of associating the data is by encoding the position data with the image data either as header or trailer information. Of course, the image and the position data could also be identified with an associating identifier such as an image frame number. Alternatively, the two data are transmitted in parallel in a synchronous environment.
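The header-encoding option described above might be sketched as follows (Python's `struct` module is used for illustration; the exact field layout — a frame number plus three float32 angles — is an assumption, not specified in the patent):

```python
import struct

def pack_frame(frame_no, pitch, yaw, roll, image_bytes):
    """Prepend camera position data to the image data as a 16-byte header."""
    return struct.pack("<Ifff", frame_no, pitch, yaw, roll) + image_bytes

def unpack_frame(payload):
    """Recover the frame number, position data, and image data."""
    frame_no, pitch, yaw, roll = struct.unpack("<Ifff", payload[:16])
    return frame_no, (pitch, yaw, roll), payload[16:]
```

Encoding the position data into the same payload as the image guarantees the association survives variable network delays; the frame-number alternative would instead carry the identifier in both streams.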
The image and position data are then transmitted to the first computer 7. When the image and position data are received, they are prepared for processing at the first computer 7. Then, the position data of the HMD 1 is acquired by the first computer 7 and is used to transform the image in accordance with the invention. The transformed image is provided to the display and is displayed thereon to the operator 5. Because the HMD position data is gathered immediately before it is needed, the delay between HMD position data capture and display of the transformed image is very small and results in little or no operator disorientation. Concurrently, position data is provided to the mechanism 13 at intervals and the mechanism moves the camera 11 in accordance with received position data and a current orientation of the camera 11.
Typically, the step of transforming the image comprises the following steps, some of which are performed in advance. A correlation between angular movement and display or image pixels is determined such that an offset of α degrees results in displacement of the image by N pixels in a first direction and by M pixels in a second other direction. A transform for rotating the image based on rotations is also determined. Preferably, the transforms are sufficiently simple to provide fast image processing. That said, a small image processing delay is acceptable, since it constitutes substantially all of the delay in displaying the data.
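The correlation between an angular offset of α degrees and a displacement of N pixels can be sketched as follows (illustrative Python; a linear mapping derived from the camera's field of view is assumed):

```python
def pixels_per_degree(display_px, camera_fov_deg):
    """Correlation between angular movement and display pixels,
    assuming a linear mapping across the camera's field of view."""
    return display_px / camera_fov_deg

def angular_to_pixel_offset(alpha_deg, display_px, camera_fov_deg):
    """An offset of alpha degrees displaces the image by N pixels."""
    return round(alpha_deg * pixels_per_degree(display_px, camera_fov_deg))

# 640 pixels spanning a 40-degree field of view: 16 px/deg,
# so a 5-degree offset displaces the image by 80 pixels.
n = angular_to_pixel_offset(5.0, 640, 40.0)  # → 80
```

The same constant would be precomputed separately for the horizontal (N) and vertical (M) directions, since display resolution and field of view generally differ per axis.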
Once the image data is received, it is stored in memory for fast processing thereof. The HMD position data is acquired and is compared to the camera position data. The difference is used in performing the transform to correct the image position for any HMD motion unaccounted for by the mechanism 13, as of yet. Also, the method corrects for unintentional movements of the camera 11 when the camera position sensor is independent of the mechanism 13, for example with an inertial position sensor.
In the above embodiment, a general purpose processor is used to transform the image. In an alternative embodiment, a hardware implementation of the transform is used. A hardware implementation is less easily modified, but has a tremendous impact on performance. Using parallel hardware transformation processors, an image can be transformed in a small fraction of the time necessary for performing a software transformation of the image.
Referring to Fig. 7 a satellite based telepresence system is shown. Here delays in the order of seconds between head motion and image display result. Further, the delays are not always predictable. Here an HMD 101 is shown positioned on the head of an operator 5. The HMD is provided with a head tracker 103 for sensing position data relative to the HMD. The HMD is also coupled with a computer 107 for providing display data to the HMD and for providing the HMD position data to a communications link 108. The communications link 108 uplinks the HMD position data to a satellite 117 from which it is transmitted to a transceiver 116. A computer 115 in communication with the transceiver provides the data to a gimbal for repositioning a camera in accordance therewith. The camera 111 captures images which are provided along a reverse communication path - computer 115, transceiver 116, satellite 117, communications link 108 - to the computer 107. Optionally, a different return path is used. There the image data is processed for display within the HMD 101. With the images, camera position data sensed by a sensor 114 is also provided. The camera position data is associated with an image or images captured at approximately a time the camera 111 was in that sensed position.
The computer 107 uses the camera position data and the image along with data received from the head tracker 103 to transform the image in accordance with the invention. As is evident, the delay between HMD motion and camera motion is measurable in seconds. The delay between camera image capture and receipt of the image at the computer 107 is also measurable in seconds. As such, significant disorientation of the user results absent application of the present invention.
Referring to Fig. 8, an image captured by the camera 111 is shown. The image is displayed as captured when the HMD and the camera orientations are aligned, the camera orientation at a time of image capture and the HMD orientation at a time of display. If the orientations are offset one from another, the image is shifted within the field of view of the operator as shown in Fig. 9. Since there is no image data beyond the camera imaging field of view, the remainder of the display area is shaded with a neutral colour such as gray. Alternatively, the remainder of the display area is shaded to provide reference points to further limit disorientation of the operator 105. Further alternatively, the portion of the field of view for which no image data is available is left blank. Typically blank areas are black in order not to distract an operator. Referring to Fig. 10, when the camera 111 orientation is "caught up" with the HMD orientation, the field of view of the HMD again shows an entire image captured by the camera.
Alternatively, the camera captures images of areas larger than can be displayed and only a portion of the image is displayed. This is considered less preferable since it increases the bandwidth requirements, often for no reason, as the additional data is not displayed. Advantageously, when implemented with independent position indicators for each of the HMD 1 and the camera 11 and independent from the mechanism 13 for moving the camera, all types of motion are compensated for including inaccuracies of the mechanism 13, delays induced by communications, delays induced by the mechanism 13, processing delays, fine operator motions and so forth.
When processing is done local to the HMD or on a computer at a same location with minimal delays therebetween, each image is accurately aligned on the display within a time delay error related only to the processing and the delay in reading HMD sensor data.
Thus, discontinuous scene changes are converted into smooth transitions in accordance with the expected visual result.
It is also within the scope of the invention to process the image data prior to display thereof in order to determine features or locations within the image data to highlight or indicate within the displayed image. For example, contrast may be improved for generally light or dark images. Also, features may be identified and labeled or highlighted. Alternatively, icons or other images are superimposed on the displayed image without processing thereof.
Alternatively, the control values are determined in the mechanism for pointing the camera instead of by the first computer. In such an embodiment, the HMD position data is transmitted to the remote system wherein a camera movement related to the HMD movement is determined and initiated.
The above described embodiment compensates for orientation - motion about any of three rotational axes. Alternatively, the invention compensates for displacement - linear motion along an axis. Further alternatively, the invention compensates for both linear motion and motion about any of the rotational axes. Displacement and orientation are both forms of position and data relating to one or both is referred to here and in the claims, which follow, as position data.
The above described embodiment does not correct images for perspective distortion. Doing so is feasible within the concept of time/motion compensation according to the invention, however it is not generally applicable to use with a single camera, since the depth of field of the observed scene varies. It would require capturing of depth data using a range sensor or a three-dimensional vision system.
Though the above embodiment is described with reference to a physical communication link or a wireless communication link between different components, clearly, either is useful with the invention so long as it is practicable. Also, though the HMD is described as a computer peripheral, it could be provided with an internal processor and act as a stand alone device.
According to another embodiment of the invention, areas within the field of view that do not correspond to displayed image locations are filled with current image data relating to earlier captured images for those locations. Preferably, any earlier captured images are deemphasized within the field of view in order to prevent the operator from being confused by "stale" image data. For example, each image received from the camera is buffered with its associated position data. When some areas within the field of view are not occupied by image data, the processor determines another image having image data for those locations within the field of view, the locations determined in accordance with the transform performed based on the camera position data associated with the earlier captured image and with the current HMD position data. The image data is then displayed at the determined location(s) in a "transparent" fashion. For example, it may be displayed with a lower contrast appearing almost ghostlike. Alternatively, the colours are faded to provide this more ghostlike appearance. Further alternatively, it is displayed identically to the current image data.
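The de-emphasis of "stale" image data might be sketched as a simple blend toward a neutral background (illustrative Python; the 50% blend factor and the function names are assumptions):

```python
def deemphasize(pixel, background=128, alpha=0.5):
    """Blend stale image data toward the background so it reads as ghostlike."""
    return round(alpha * pixel + (1 - alpha) * background)

def fill_with_stale(current, stale, empty=None):
    """Use de-emphasized earlier image data wherever the current image
    has no data for a display location."""
    return [cur if cur is not empty else deemphasize(old)
            for cur, old in zip(current, stale)]

# The middle pixel has no current data, so the buffered value 64 is ghosted:
row = fill_with_stale([10, None, 30], [0, 64, 0])  # → [10, 96, 30]
```

Fading the colours instead of the contrast, or displaying the buffered data unmodified, are the alternatives mentioned above and would simply replace the blend.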
The above description is by way of example and is not intended to limit the claims which follow.

Claims

1. A method of motion compensation for head mounted displays comprising the steps of: providing a captured image from an image capture device to a head mounted display including a display having a field of view; providing camera position data relating to a position of the camera and associated with the captured image; providing HMD position data relating to a position of the head mounted display; transforming the image to vary a displayed location of static objects within the image relative to the field of view in accordance with the camera position data and the HMD position data; and, displaying portions of the image at the displayed locations, those portions remaining within the field of view.
2. A method according to claim 1 wherein the HMD position data is data relating to the present position of the head mounted display and wherein the camera position data is data relating to the position of the camera when the associated image is captured and wherein the step of transforming the image includes the steps of: determining an offset between the HMD position and the camera position; and, offsetting the image such that it is offset an amount proportional to the offset between the HMD position and the camera position.
3. A method according to claim 2 wherein portions of the field of view for which image data is unavailable are filled with a predetermined fill.
4. A method according to claim 3 wherein the predetermined fill has features for maintaining orientation of a wearer of the head mounted display.
5. A method according to claim 1 wherein the captured image is larger than the image necessary to fill the field of view of the head mounted display, and wherein only a portion of the captured image is displayed.
6. A method according to claim 1 wherein the system comprises two head mounted displays and two cameras.
7. A method according to claim 1 wherein the system comprises: an independent position sensor for sensing the camera position and for providing the camera position data.
8. A method according to claim 7 wherein the system comprises: an independent position sensor for sensing the head mounted display position and for providing the HMD position data.
9. A method according to claim 1 wherein the system comprises: an independent position sensor for sensing the head mounted display position and for providing the HMD position data.
10. A method according to claim 1 wherein the system comprises: a mechanism for moving the camera; and means for transmitting the HMD position data to a system in communication with the mechanism for moving the camera, wherein the mechanism for moving the camera moves the camera in response to a change in HMD position data.
11. A method according to claim 1 wherein the HMD position data comprises orientation data.
12. A method according to claim 11 wherein the camera position data comprises orientation data.
13. A method according to claim 12 wherein the camera position data comprises displacement data.
14. A method according to claim 1 wherein the HMD position data comprises displacement data.
15. A method according to claim 14 wherein the camera position data comprises displacement data.
16. A method according to claim 15 wherein the camera position data comprises orientation data.
17. A method according to claim 1 comprising the step of transforming the image to reduce perspective distortion, wherein the camera includes a range sensor.
18. A motion compensation apparatus for head mounted displays comprising: a head mounted display for displaying portions of an image at displayed locations, those portions remaining within the field of view; an image capture device for providing a captured image to a head mounted display including a monitor having a field of view; a sensor for providing camera position data relating to a position of the camera and associated with the captured image; a sensor for providing HMD position data relating to a position of the head mounted display; and a processor for transforming the image to vary a displayed location of static objects within the image relative to the field of view in accordance with the camera position data and the HMD position data.
19. A motion compensation apparatus for head mounted displays according to claim 18 wherein the image capture device comprises a range sensor.
20. A motion compensation apparatus for head mounted displays according to claim 19 wherein the range sensor includes a stereoscopic imaging system.
21. A method according to claim 2 comprising the step of: when portions of the field of view for which image data is unavailable are detected, determining earlier captured image data that when transformed in dependence upon its associated camera position data and the HMD position data is for display within the portions; and, displaying with those portions of the field of view the transformed earlier captured image data.
22. A method according to claim 21 wherein the step of displaying transformed earlier captured image data includes a step of de-emphasising the transformed earlier captured image data.
PCT/CA2000/001063 1999-09-22 2000-09-22 Method and system for time/motion compensation for head mounted displays WO2001022149A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CA2385548A CA2385548C (en) 1999-09-22 2000-09-22 Method and system for time/motion compensation for head mounted displays
GB0206051A GB2372169B (en) 1999-09-22 2000-09-22 Method and system for time/motion compensation for head mounted displays
JP2001525460A JP2003510864A (en) 1999-09-22 2000-09-22 Method and system for time / motion compensation for a head mounted display
US10/088,747 US7312766B1 (en) 2000-09-22 2000-09-22 Method and system for time/motion compensation for head mounted displays

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15513299P 1999-09-22 1999-09-22
US60/155,132 1999-09-22

Publications (1)

Publication Number Publication Date
WO2001022149A1 true WO2001022149A1 (en) 2001-03-29

Family

ID=22554218

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2000/001063 WO2001022149A1 (en) 1999-09-22 2000-09-22 Method and system for time/motion compensation for head mounted displays

Country Status (4)

Country Link
JP (1) JP2003510864A (en)
CA (1) CA2385548C (en)
GB (1) GB2372169B (en)
WO (1) WO2001022149A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007333929A (en) * 2006-06-14 2007-12-27 Nikon Corp Video display system
JP4851412B2 (en) * 2007-09-27 2012-01-11 富士フイルム株式会社 Image display apparatus, image display method, and image display program
KR20170106163A (en) * 2016-03-11 2017-09-20 주식회사 상화 Virtual reality experience apparatus
KR102528318B1 (en) * 2016-12-26 2023-05-03 엘지디스플레이 주식회사 Apparatus and method for measuring latency of head mounted display
JP2018137609A (en) * 2017-02-22 2018-08-30 株式会社東芝 Partial image processing device
CN110709803A (en) * 2017-11-14 2020-01-17 深圳市柔宇科技有限公司 Data processing method and device
JP7121523B2 (en) * 2018-04-10 2022-08-18 キヤノン株式会社 Image display device, image display method
GB2614698A (en) * 2021-11-15 2023-07-19 Mo Sys Engineering Ltd Controlling adaptive backdrops

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JPH06326944A (en) * 1993-05-14 1994-11-25 Olympus Optical Co Ltd Head-mounted type video display device
JPH07199280A (en) * 1993-12-30 1995-08-04 Canon Inc Camera system

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US4347508A (en) * 1978-12-21 1982-08-31 Redifon Simulation Limited Visual display apparatus
EP0537945A1 (en) * 1991-10-12 1993-04-21 British Aerospace Public Limited Company Computer generated images with real view overlay
US5579026A (en) * 1993-05-14 1996-11-26 Olympus Optical Co., Ltd. Image display apparatus of head mounted type
US5610678A (en) * 1993-12-30 1997-03-11 Canon Kabushiki Kaisha Camera including camera body and independent optical viewfinder
US5917460A (en) * 1994-07-06 1999-06-29 Olympus Optical Company, Ltd. Head-mounted type image display system
US5933125A (en) * 1995-11-27 1999-08-03 Cae Electronics, Ltd. Method and apparatus for reducing instability in the display of a virtual environment

Non-Patent Citations (1)

Title
FISHER R W: "Variable acuity remote viewing system (VARVS)", PROCEEDINGS OF THE IEEE 1978 NATIONAL AEROSPACE AND ELECTRONICS CONFERENCE NAECON 78, DAYTON, OH, USA, 16-18 MAY 1978, 1978, New York, NY, USA, IEEE, USA, pages 1172 - 1179, XP002016920 *

Cited By (9)

Publication number Priority date Publication date Assignee Title
US7734101B2 (en) 2000-10-11 2010-06-08 The United States Of America As Represented By The Secretary Of The Army Apparatus and system for testing an image produced by a helmet-mounted display
CN101127834B (en) * 2003-05-20 2011-04-06 松下电器产业株式会社 Image capturing system
WO2014053063A1 (en) * 2012-10-04 2014-04-10 Ati Technologies Ulc Method and apparatus for changing a perspective of a video
US10054796B2 (en) 2013-03-25 2018-08-21 Sony Interactive Entertainment Europe Limited Display
EP4012482A1 (en) * 2013-03-25 2022-06-15 Sony Interactive Entertainment Inc. Display
CN105721856A (en) * 2014-12-05 2016-06-29 北京蚁视科技有限公司 Remote image display method for near-to-eye display
WO2017140309A1 (en) * 2016-02-17 2017-08-24 Krauss-Maffei Wegmann Gmbh & Co. Kg Method for controlling a viewing device arranged on a vehicle in an orientable manner
WO2019204012A1 (en) * 2018-04-20 2019-10-24 Covidien Lp Compensation for observer movement in robotic surgical systems having stereoscopic displays
US11647888B2 (en) 2018-04-20 2023-05-16 Covidien Lp Compensation for observer movement in robotic surgical systems having stereoscopic displays

Also Published As

Publication number Publication date
CA2385548C (en) 2010-09-21
JP2003510864A (en) 2003-03-18
GB2372169B (en) 2004-05-26
GB2372169A (en) 2002-08-14
CA2385548A1 (en) 2001-03-29
GB0206051D0 (en) 2002-04-24

Similar Documents

Publication Publication Date Title
US7312766B1 (en) Method and system for time/motion compensation for head mounted displays
CA2385548C (en) Method and system for time/motion compensation for head mounted displays
EP3379525B1 (en) Image processing device and image generation method
US6633267B2 (en) Head-mounted display controlling method and head-mounted display system
US20160282619A1 (en) Image generation apparatus and image generation method
GB2528699A (en) Image processing
EP1168830A1 (en) Computer aided image capturing system
JP2015231106A (en) Device and system for head-mounted display
WO2015048905A1 (en) System and method for incorporating a physical image stream in a head mounted display
EP3966670B1 (en) Display apparatus and method of correcting image distortion therefor
US11647292B2 (en) Image adjustment system, image adjustment device, and image adjustment
JP2010153983A (en) Projection type video image display apparatus, and method therein
GB2623041A (en) Method for operating a head-mounted display in a motor vehicle during a journey, correspondingly operable head-mounted display and motor vehicle
US20100149319A1 (en) System for projecting three-dimensional images onto a two-dimensional screen and corresponding method
CN105828021A (en) Specialized robot image acquisition control method and system based on augmented reality technology
CN111345037B (en) Virtual reality image providing method and program using the same
GB2575824A (en) Generating display data
US20210192805A1 (en) Video display control apparatus, method, and non-transitory computer readable medium
WO2021117606A1 (en) Image processing device, system, image processing method and image processing program
CN114742977A (en) Video perspective method based on AR technology
JP4696825B2 (en) Blind spot image display device for vehicles
WO2021220407A1 (en) Head-mounted display device and display control method
KR102542641B1 (en) Apparatus and operation method for rehabilitation training using hand tracking
US20240020927A1 (en) Method and system for optimum positioning of cameras for accurate rendering of a virtual scene
Ikeda et al. Immersive telepresence system with a locomotion interface using high-resolution omnidirectional videos

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA GB JP US

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
ENP Entry into the national phase

Ref country code: GB

Ref document number: 200206051

Kind code of ref document: A

Format of ref document f/p: F

ENP Entry into the national phase

Ref country code: JP

Ref document number: 2001 525460

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 10088747

Country of ref document: US

Ref document number: 2385548

Country of ref document: CA