DE112012001022T5 - Alignment control in a head-worn augmented reality device - Google Patents

Alignment control in a head-worn augmented reality device

Info

Publication number
DE112012001022T5
Authority
DE
Germany
Prior art keywords
image
viewer
scene
view
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
DE112012001022T
Other languages
German (de)
Inventor
John N. Border
John D. Haddick
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Osterhout Group Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US 13/037,324 (published as US 2011/0214082 A1)
Priority to US 13/037,335 (published as US 2011/0213664 A1)
Application filed by Osterhout Group Inc filed Critical Osterhout Group Inc
Priority to PCT/US2012/022568 (published as WO 2012/118575 A2)
Publication of DE112012001022T5
Application status: Withdrawn

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B 27/00 Other optical systems; Other optical apparatus
    • G02B 27/28 Other optical systems; Other optical apparatus for polarising
    • G02B 27/281 Other optical systems; Other optical apparatus for polarising used for attenuating light intensity, e.g. comprising rotatable polarising elements
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B 27/00 Other optical systems; Other optical apparatus
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G02B 27/0172 Head mounted characterised by optical features
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B 27/00 Other optical systems; Other optical apparatus
    • G02B 27/10 Beam splitting or combining systems
    • G02B 27/1066 Beam splitting or combining systems for enhancing image performance, like resolution, pixel numbers, dual magnifications or dynamic range, by tiling, slicing or overlapping fields of view
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B 27/00 Other optical systems; Other optical apparatus
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/0118 Head-up displays characterised by optical features comprising devices for improving the contrast of the display / brilliance control visibility
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B 5/00 Optical elements other than lenses
    • G02B 5/30 Polarising elements
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B 6/00 Light guides

Abstract

This patent discloses a method for providing an enhanced image in a head-mounted see-through display. The method includes capturing an image of a scene containing objects and displaying that image to a viewer. The method also includes capturing one or more additional images of the scene in which the viewer indicates a misalignment between the displayed image and a see-through view of the scene. The captured images are then compared to determine an image adjustment that aligns corresponding objects in displayed images with the objects in the see-through view of the scene. Enhanced image information is then displayed according to the determined image adjustment, so that the viewer sees an enhanced image containing the enhanced image information overlaid on, and aligned with, the see-through view.

Description

  • Cross-reference to related applications
  • The present application is a continuation-in-part of U.S. Patent Application No. 13/037,324, filed on February 28, 2011, now U.S. Patent No. ..., and of U.S. Patent Application No. 13/037,335, also filed February 28, 2011, now U.S. Patent No. ..., and claims their priority; the disclosures of both are hereby incorporated by reference in their entirety.
  • Field of the invention
  • The present invention relates to enhanced reality image processing with a head-mounted transparent display.
  • Background
  • Head-mounted see-through displays allow a viewer to view the surrounding environment combined with a superimposed displayed image. The superimposed image may be semi-transparent so that the displayed image and the view of the surrounding environment can be seen simultaneously. A see-through display may operate in transparent, semi-transparent, or opaque modes. In the transparent mode, the view of the environment is not blocked, and a superimposed displayed image can be provided with low contrast. In the semi-transparent mode, the view of the environment is partially blocked, and a superimposed displayed image can be provided with higher contrast. In the opaque mode, the view of the environment is completely blocked, and a superimposed displayed image can be provided with high contrast.
  • Augmented reality imaging provides additional information related to the surrounding environment. Typically, objects in the surrounding environment are identified in images of that environment, and augmented image content pertaining to those objects is provided in an enhanced image. Examples of augmented image content provided in enhanced images include address labels for buildings, names of shops, advertising for products, signage for virtual reality gaming, and messages for specific people. For augmented reality imaging to be effective, the augmented image content must be aligned with the objects of the surrounding environment in the enhanced images.
  • In head-mounted see-through displays, the view of the surrounding environment is not necessarily aligned with the displayed image. Variations in the position of the display area introduced during manufacture, variations in how an observer wears the head-mounted see-through display, and variations in the characteristics of the viewer's eye may all contribute to misalignment of the displayed image relative to the see-through view. As a result, adjustments to the head-mounted see-through display are required to align the displayed image relative to the see-through view, so that the augmented image content is aligned with the objects of the surrounding environment in the enhanced images.
  • U.S. Patent No. 7,369,101 shows a head-mounted see-through display with a light source that projects a marker onto a calibration screen. The displayed image is adjusted in the head-mounted see-through display to align it with the projected marker. Although this technique corrects lateral and longitudinal misalignments, it cannot correct differences in image size, also known as magnification, with respect to the see-through view. Moreover, projecting a marker onto the scene is only practical when the scene is no more than a few meters from the head-mounted see-through display, since the projected marker would not be visible in a more distant scene.
  • In U.S. Patent Application No. 2002/0167536, an alignment indicator is generated in the displayed image, and the viewer aligns the indicator with the see-through view by manually moving the device. That invention is directed to a hand-held see-through display device that can be moved within the observer's field of view, and is not applicable to a head-mounted see-through display in which the display is mounted on the observer's head.
  • In the article "Single point active alignment method (SPAAM) for optical see-through HMD calibration for AR" by M. Tuceryan, N. Navab, Proceedings of the IEEE and ACM International Symposium on Augmented Reality, pp. 149-158, Munich, Germany October 2000 , a method for calibrating a head-mounted transparency display to a surrounding environment is described. The method is for a head-mounted transparent A display apparatus having an inertial location device is provided to determine movement of the viewer's head relative to the surrounding environment. Twelve points are collected, with the viewer moving his head to align virtual markers in the displayed image with a single point in the surrounding environment. For each point, the data is collected by the inertial location device to detect the relative position of the viewer's head. A click with an associated mouse is used to indicate that the viewer has completed the alignment of each point and to record the inertial location data. In the article "Practical solutions for calibration of optical see-through devices" by Y. Gene, M. Tuceryan, N. Navab, Proceedings of International Symposium on Mixed and Augmented Reality (ISMAR'02), 169-175, Darmstadt, Germany, 2002 , a two-step approach to aligning a displayed image in a head-mounted transparent display device based on the SPAAM technique will be described. The two-step approach includes an 11-point offline calibration and a 2-point user-based calibration. All the points in this two-step alignment approach are collected by moving the head-mounted transparent display to align virtual markers in the displayed image with a single point in the real world, and a head locator is used to locate the point for each point determine relative positions of the head-mounted transparent display.
  • In U.S. Patent No. 6,753,828, a 3D marker is generated in a head-mounted stereo see-through display. The 3D marker is visually aligned by the viewer with a particular point in the real world, and calibration data are collected. This process is repeated for multiple positions in the space used for augmented reality, and a model of the augmented reality space is built from the collected calibration data.
  • Summary
  • In one embodiment, a method is provided for aligning a displayed image in a head-mounted see-through display relative to the see-through view perceived by the viewer. The combined image, which contains the displayed image superimposed on the see-through view, provides the viewer with an augmented reality image. The method includes capturing a first image of a scene containing objects with a camera in the head-mounted see-through display device. The captured first image is then displayed to a viewer on the head-mounted see-through display device so that both the displayed image and the see-through view of the scene are visible. One or more additional images of the scene are captured with the camera, in which the viewer indicates a misalignment between the first displayed image and the see-through view of the scene. The captured images are then compared with each other to determine an image adjustment that aligns corresponding objects in displayed images with objects in the see-through view of the scene. Enhanced image information is then provided, the determined image adjustment is applied to it, and the enhanced image information is displayed so that the viewer sees an enhanced image containing the enhanced image information superimposed on, and aligned with, the see-through view.
  • Brief description of the drawings
  • 1 is an illustration of a head-mounted see-through display device;
  • 2 is a representation of a scene and the associated displayed image, as seen from the viewer's perspective with both eyes;
  • 3 is an illustration of a combined view as seen with the viewer's right eye, with a displayed image of the scene superimposed on a see-through view of the scene and with the two images out of alignment;
  • 4 is an illustration of a combined view of a scene in which the viewer uses a finger gesture to indicate the perceived position of an object (the window) in the displayed image that is not aligned with the see-through view;
  • 5 is a representation of a captured image of the viewer's finger gesture indicating the position of the object (the window), as shown in 4;
  • 6 is an illustration of a see-through view as seen by the viewer, including the viewer's finger gesture indicating the position of the object (the window) in the see-through view;
  • 7 is a representation of a captured image of the viewer's finger gesture indicating the position of the object (the window), as shown in 6;
  • 8 is a representation of a combined view as seen with the viewer's right eye, wherein the displayed image of the scene is overlaid on the see-through view of the scene and the two images are aligned at an object (the window);
  • 9 is a representation of a combined view of a scene with the two images aligned at one object (the window), in which the viewer uses a finger gesture to indicate the perceived position of another object (the car tire) in the displayed image that is not aligned with the see-through view;
  • 10 is a representation of a captured image of the viewer's finger gesture indicating the position of the other object (the car tire), as shown in 9;
  • 11 is an illustration of a see-through view as perceived by the viewer, including the viewer's finger gesture indicating the position of the other object (the car tire) in the see-through view;
  • 12 is a representation of a captured image of the viewer's finger gesture indicating the position of the other object (the car tire), as shown in 11;
  • 13 is a representation of a combined view as seen with the viewer's right eye, the two images being aligned at the first object and adjusted in size to align the further object (the car tire);
  • 14A is a representation of a combined view of the augmented reality image as seen with the viewer's right eye, in which a displayed label (an address) is overlaid on an object (the house) in the see-through view and the label is aligned with the object;
  • 14B is a representation of a combined view of the augmented reality image as seen with the viewer's right eye, in which enhanced image information in the form of displayed objects (the tree and the bushes) is overlaid on objects (the car and the house) in the see-through view, the displayed objects being aligned with the objects in the see-through view;
  • 15 is a representation of a scene and its associated displayed image, as seen from the viewer's perspective with both eyes, in which a marker is visible in the image displayed to the left eye, the marker indicating the region for the first alignment between the displayed image and the see-through view;
  • 16 is an illustration of a combined view as seen with a viewer's left eye, with a displayed image of the scene superimposed on a see-through view of the scene and with the two images unaligned; a marker indicates a first area for alignment;
  • 17 is an illustration of a combined view as seen with a viewer's left eye, with a displayed image of the scene overlaid on a see-through view of the scene and with the viewer moving his head to align objects (the roof) in both images in the area of the marker;
  • 18 is an illustration of a combined view as seen with a viewer's left eye, with a displayed image of the scene superimposed on a see-through view of the scene, the two images aligned in one area, and a marker indicating a second area for alignment;
  • 19 is an illustration of a combined view as seen with a viewer's left eye, with a displayed image of the scene superimposed on a see-through view of the scene and with objects (the car tires) in the two images aligned in a second area;
  • 20 is an illustration of a combined view as seen with a viewer's left eye, where the displayed image of the scene is superimposed on the see-through view and the two images are aligned in the two regions of the markers by shifting and resizing;
  • 21 is a flowchart of the alignment process used to determine image adjustments that align displayed images with the see-through view seen by the viewer;
  • 22 is a flowchart for using the determined image adjustments to display enhanced image information aligned with the corresponding objects as seen by the viewer in the see-through view.
  • Detailed description
  • With a see-through display, a viewer can see a displayed image simultaneously with a see-through view of the surrounding environment. The displayed image and the see-through view can be viewed as a combined image, in which one image is superimposed on the other, or the two images can be seen simultaneously in different areas of the see-through display.
  • To provide an effective augmented reality image to an observer, the augmented image information must be aligned relative to objects in the see-through view, so that the viewer visually connects the augmented image information with the correct object in the see-through view. The invention provides a simple and intuitive method for indicating misalignments between the displayed images and the see-through views, as well as a method for determining the direction and extent of the misalignment, so that the misalignment can be corrected by changing the manner in which the displayed image is presented to the viewer.
  • 1 shows an illustration of a head-mounted see-through display device 100. The device comprises a frame 105 with lenses 102 that have display areas 115 and transparent areas 110. The frame 105 is held on the viewer's head by brackets or arms 130. The arms 130 also contain the electronics 125, with a processor for the displays, and peripheral electronics 127, including batteries and wireless connections to other sources of information, such as can be realized with the Internet or local servers via WiFi, Bluetooth, cellular technologies, or other wireless technologies. A camera 120 is provided to capture images of the surrounding environment. The head-mounted see-through display device 100 can have one or more cameras 120, mounted in the center as shown or at various locations within the frame 105 or the arms 130.
  • To align images in a head-mounted see-through display, it is necessary to know at least two different points in the images where related objects are aligned. This allows the images to be shifted into alignment at a first point and then resized to align the second point, as sketched below. However, this assumes that there is no rotational misalignment between the two images and that the images are not warped or distorted. As shown in 1, the see-through display device 100 has a camera 120 to capture images of the surroundings. For digital cameras, distortions in the image are commonly corrected during production, and rotational alignment of the camera 120 in the frame 105 is likewise usually achieved during production.
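A minimal sketch of this two-point calculation, assuming the two pairs of corresponding positions are already expressed in a common pixel coordinate system; all names are illustrative.

```python
import numpy as np

def two_point_adjustment(disp1, disp2, see1, see2):
    """disp1/disp2: positions of two objects in the displayed image;
    see1/see2: positions of the same objects in the see-through view."""
    disp1, disp2, see1, see2 = map(np.asarray, (disp1, disp2, see1, see2))
    shift = see1 - disp1                       # aligns the first object exactly
    # The ratio of the object separations gives the magnification correction.
    scale = np.linalg.norm(see2 - see1) / np.linalg.norm(disp2 - disp1)
    return shift, scale                        # apply the scale about see1
```

Shifting first and then scaling about the first aligned point keeps that point fixed while the second point is brought into registration.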
  • In one embodiment of the invention, the viewer uses a finger gesture to indicate misalignments between a captured image of the environment shown on the head-mounted see-through display and the see-through view of the surrounding environment as seen by the viewer.
  • 2 is a representation of a scene 250 and the associated displayed images 240 and 245, as seen from behind and slightly above the viewer's perspective with both eyes. The displayed images 240 and 245 shown in 2 were taken by the camera 120 of the scene in front of the viewer. The images 240 and 245 can be the same image. In the event that the see-through display device 100 has two cameras 120 (not shown), the images may show the same scene but from different perspectives, as in a stereo image set for three-dimensional viewing.
  • 3 is a representation of a combined view as seen with the viewer's right eye, in which a displayed image 240 of the scene is superimposed on a see-through view 342 of the scene. The displayed image 240 shown in 3 was captured with the camera 120 and then shown on the head-mounted see-through display device 100 as part of a combined image, the displayed image 240 appearing as a semi-transparent image superimposed on the see-through view 342. As can be seen in 3, the displayed image 240 and the see-through view 342 are misaligned, which is also perceived by the viewer. The misalignment between the displayed image 240 and the see-through view 342 may vary with changes in the viewer or with changes in the way the viewer wears the head-mounted see-through display device 100 each time the device is used. As a result, the present invention provides a simple and intuitive method for correcting misalignments.
  • One method for determining misalignments is shown in 3-13 and in the flowchart of 21. In one embodiment of the invention, the camera 120 is used to capture a first image of a scene in front of the viewer. The captured first image is then displayed as a semi-transparent image on the head-mounted see-through display device 100, so that the viewer sees the displayed image overlaid on the see-through view of the same scene in front of the observer, as shown in 3. The viewer then selects a first object in the displayed image that is used for the determination of misalignments. The viewer then uses his fingers to indicate the perceived position of the selected object in the displayed image, as shown in 4; in this example, the viewer has chosen the window as the first selected object.
  • As can be seen in 4, the displayed image is superimposed on the see-through view of the scene, in which the viewer's fingers 425 are included. A second image is then captured with the camera 120, containing the viewer's finger gesture indicating the perceived position of the first object, as shown in 5. Due to the misalignment between the see-through view and the images taken by the camera 120, and due to the different perspectives of the scene (also called parallax) between the camera 120 and the observer's right eye, there is a misalignment in the second image between the viewer's fingers 525 and the selected first object (the window), as shown in 5. The misalignment of the finger relative to the selected first object, as seen in the second image, may differ depending on the relative positions, and associated perspectives of the scene, of the camera 120 and the eye of the viewer.
  • The displayed image is then turned off or removed from the head-mounted see-through display 100, so that the viewer sees only the see-through view. The viewer then points with his finger 625 at the same selected first object (in this example the window) in the see-through view, as shown in 6, and a third image is then captured with the camera 120, showing the scene in front of the viewer and containing the observer's finger 725, as shown in 7. As in the second image, the viewer's finger 725 is not aligned with the selected first object (the window) in the third image, due to the combined effects of the misalignment of the camera 120 relative to the see-through view and the different perspective of the scene delivered by the camera 120 compared with the observer's right eye. The transverse and longitudinal image adjustments (also known as image shifts) needed to align the displayed image relative to the see-through view are then determined by comparing the position of the observer's finger 525 in the second image with the position of the finger 725 in the third image; a sketch of such a comparison follows below. Methods for comparing images in order to align them based on associated objects are described, for example, in U.S. Patent 7,755,667. The determined transverse and longitudinal image adjustments are then applied to other displayed images to align them in the transverse and longitudinal directions with the see-through view.
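One way this comparison could be realized, sketched here with OpenCV template matching: the fingertip neighborhood from the second captured image is located in the third, and the displacement of the best match gives the x and y shift. The fingertip coordinate is assumed to come from a gesture detector that the patent does not specify.

```python
import cv2

def finger_shift(second_img, third_img, tip_xy, half=24):
    """Locate the fingertip patch from the second image in the third image
    and return the x/y pixel shift between the two indications."""
    x, y = tip_xy                                    # fingertip in second image
    patch = second_img[y - half:y + half, x - half:x + half]
    score = cv2.matchTemplate(third_img, patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, (bx, by) = cv2.minMaxLoc(score)         # top-left of best match
    return (bx + half) - x, (by + half) - y          # (dx, dy) in pixels
```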
  • 8 is an illustration of a combined view as seen with the viewer's right eye, where the displayed first image of the scene is superimposed on the see-through view of the scene and the first image is aligned at the first object (the window). In this case, however, as can be seen in 8, objects in the displayed image are not the same size as in the see-through view, so objects other than the selected first object are not yet in alignment. To determine the image adjustments needed to align the rest of the displayed image with the see-through view by resizing the displayed image, a second object (in this example the car tire) is selected by the viewer, and the viewer uses his finger 925 to indicate the position of that object in the displayed image, as shown in 9.
  • Subsequently, a fourth image is captured, containing the scene and the viewer's finger 1025, as shown in 10. The displayed image is then turned off or removed so that the viewer sees only the see-through view of the scene, and the viewer uses his finger 1125 to indicate the perceived position of the second selected object in the see-through view, as shown in 11. A fifth image is then captured, containing the scene and the viewer's finger 1225, as shown in 12. The fourth and fifth images are then compared to determine the respective positions of the observer's fingers 1025 and 1225, and thereby the image adjustment required to align the displayed image with the see-through view at the location of the second selected object (the car tire). The determined image adjustments for the positions of the second selected object are then used, along with the distance in the images between the selected first and second objects, to determine the resizing of the displayed image so that, when combined with the previously determined transverse and longitudinal adjustments, the displayed image is aligned with the see-through view substantially across the entire display area 115, as seen by the viewer. The transverse and longitudinal adjustments are determined as x and y pixel shifts.
  • The resizing is then determined as a relative or percentage change in the distance between the positions of the viewer's fingers 525 and 1125 in the third and fourth images, compared with the distance between the positions of the fingers 525 and 1225 in the third and fifth images. The percentage change is then applied to the displayed image to change its size in terms of the number of pixels. In an alternative method, the resizing of the displayed image is performed before registration at a position in the displayed image. 13 shows a representation of the displayed image superimposed on the see-through view, with the displayed image aligned at the window object and then resized to align the remaining objects, so that in the combined image misalignments between the displayed image and the see-through view are substantially imperceptible.
  • The timing at which the multiple images are captured in the method of the present invention may be controlled automatically or manually. For example, the pictures may be taken every two seconds until all pictures needed to determine the image adjustments have been captured. The two-second interval between shots gives the viewer enough time to evaluate the misalignment and provide an indication of it. Alternatively, the viewer may provide a manual indication to the head-mounted see-through display device 100 when satisfied that the misalignment has been properly indicated. The manual indication can be given, for example, by pressing a button on the head-mounted see-through display 100. Displayed images can give the viewer instructions on what to do and when.
  • It should be understood that the methods of determining image adjustments described herein can reduce misalignment between displayed images and see-through views because the misalignments are substantially due to angular differences in the positions and sizes of objects in the images captured by the camera 120 compared with the positions and sizes of corresponding objects in the see-through view. Because both the camera 120 and the viewer's eye perceive images in angular segments within their respective fields of view, angular adjustments in the displayed image may be implemented as pixel shifts and pixel-count changes, or image size changes, of the displayed image. Thus, the image adjustments may take the form of x and y pixel shifts in the displayed image, along with upsampling or downsampling of the displayed image to increase or decrease the number of x and y pixels; a sketch of applying such adjustments follows below.
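A minimal sketch of applying such an adjustment to a frame before display: the pixel count is changed by resampling, and the result is shifted by whole pixels into a buffer of the original display size. The OpenCV calls are standard; the adjustment values are assumed to come from the comparisons described above.

```python
import cv2
import numpy as np

def apply_adjustment(frame, scale, dx, dy):
    """Resample `frame` by `scale`, then shift it by integer (dx, dy)."""
    h, w = frame.shape[:2]
    resized = cv2.resize(frame, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_LINEAR)
    out = np.zeros_like(frame)                       # black where nothing lands
    rh, rw = resized.shape[:2]
    x0, y0 = max(dx, 0), max(dy, 0)                  # clip to display bounds
    x1, y1 = min(w, rw + dx), min(h, rh + dy)
    if x1 > x0 and y1 > y0:
        out[y0:y1, x0:x1] = resized[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
    return out
```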
  • Although the examples described above concern the case in which misalignments between the displayed image and the see-through view consist of transverse and longitudinal misalignments as well as size differences, more complicated misalignments are possible, such as distortions or rotations. Rotational misalignments may be determined within the method of determining the size adjustments, when comparing the fourth and fifth captured images.
  • Determining the image adjustments required to align displayed images with the see-through view when there is distortion, either in the displayed image or in the see-through view, requires determining more information. In this case, the viewer must select at least one further object at a position different from the positions of the first and second objects, and the operations described above are performed for it as well.
  • The examples given describe methods for determining the image adjustments based on vision with one eye. The determined image adjustments can be applied to the displayed images in both eyes, or the adjustments can be determined and applied independently for each eye.
  • After the image adjustments have been determined, the displayed images may be modified to compensate for misalignments. The displayed images may be still images or video. Additional images of the scene can be captured to identify objects and to determine the positions of the objects in those images. Methods for identifying objects and determining their positions in images are described, for example, in US 7,805,003. Enhanced image information can be displayed relative to the detected positions of the objects, so that the enhanced image information is aligned with the objects in the see-through view by applying the image adjustments to the displayed images. In another embodiment, to save power when displaying enhanced image information, further images of the scene are captured only when movement of the viewer or of the head-mounted see-through display device 100 is detected, since the determined positions of the objects in the images remain unchanged while the viewer and the head-mounted see-through display 100 remain stationary. While the viewer or the head-mounted see-through display device 100 is stationary, the same image adjustments can be used for multiple displays of enhanced image information to keep it aligned with the objects seen by the viewer in the see-through view; a sketch of this caching behavior follows below.
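A sketch of that power-saving logic, with the object detector and the motion signal as stand-ins for components the text describes only at the block level.

```python
class OverlayAligner:
    """Re-detect object positions only when device motion is reported;
    otherwise reuse the cached positions and the same image adjustment."""

    def __init__(self, detector, adjustment):
        self.detector = detector           # frame -> {object_id: (x, y)}
        self.adjustment = adjustment       # (scale, dx, dy) from calibration
        self._cached = None

    def object_positions(self, frame, device_moved):
        if device_moved or self._cached is None:
            self._cached = self.detector(frame)    # capture and analyze anew
        return self._cached                        # reuse while stationary
```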
  • In another method, the viewer indicates the misalignment between a displayed image and the see-through view by moving his head. Illustrations of this method are shown in 15-20. One or more positions at which alignment can be performed are selected in the combined image viewed by the viewer. If more than one position is used for alignment, the positions must lie in different portions of the combined image, for example near opposite corners. To assist the viewer in selecting the positions used for alignment, in one embodiment a marker is provided in the displayed image, as shown in 15, where the marker 1550 is a circle.
  • The displayed image shown in 15 on the head-mounted see-through display 100 is a first image of the scene taken by the camera 120, and the displayed image is shown from behind and slightly above the viewer's perspective so that both the objects in the scene and the displayed image can be seen. 16 is an illustration of the combined view as seen with the viewer's left eye, with the displayed image of the scene superimposed on the scene's see-through view so that the misalignment is visible. A marker 1550 indicates a first area for alignment. 17 shows a combined view as seen with the viewer's left eye, with the viewer moving his head to align objects (the roof) in the displayed image and in the see-through view in the region of the marker 1550.
  • Subsequently, a second image is taken by the camera 120. The first captured image is then compared with the second captured image by the electronics 125, which include a processor, to determine the difference between the two images at the position of the marker 1550. At this point, the displayed image and the see-through view would be fully aligned if their perceived sizes were equal; the determined difference between the first and second captured images is an image adjustment in the form of an x and y pixel shift of the displayed image. If misalignment between the displayed image and the see-through view remains after alignment at the position of the marker 1550, as shown in 17, a second alignment is performed at a second marker 1850, as shown in 18.
  • As can be seen in 18, the two images are aligned at the position of the marker 1550, but the rest of the image shows misalignments due to a difference in size between the displayed image and the see-through view. The viewer then moves his head to align objects in the displayed image with corresponding objects (such as the car tire) in the see-through view in the region of the marker 1850, thereby indicating the additional image adjustment, which is a resizing of the displayed image.
  • 19 shows a representation of the combined image as seen by the viewer after the viewer's head has been moved to align objects in the displayed image with corresponding objects in the see-through view. A third image is then taken by the camera 120. The third image is then compared with the second image by the electronics 125 containing the processor, to determine the image adjustment required to align the displayed image with the see-through view in the region of the second marker 1850. The image adjustment determined to align the displayed image with the see-through view in the area of the first marker 1550 is thus a pixel shift, and the percentage change in the distance between the positions of objects in the regions of the first and second markers, when the displayed image is aligned with the see-through view in the region of the second marker 1850, corresponds to the image adjustment for resizing the displayed image; a sketch of measuring such a shift in a marker region follows below. 20 then shows the fully aligned displayed image: after applying the pixel shift and resizing, it is overlaid on the see-through view and, as seen by the viewer, misalignments are no longer visible.
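One way to measure the frame-to-frame displacement at a marker, sketched with OpenCV phase correlation on grayscale patches around the marker position; the marker coordinates and patch size are assumptions for illustration. Repeating the measurement at the second marker and taking the percentage change in marker separation would yield the resizing described above.

```python
import cv2
import numpy as np

def marker_region_shift(img_a, img_b, center, half=48):
    """Displacement between two captured grayscale frames within the
    region around `center`, via phase correlation (float32 required)."""
    x, y = center
    a = np.float32(img_a[y - half:y + half, x - half:x + half])
    b = np.float32(img_b[y - half:y + half, x - half:x + half])
    (dx, dy), _response = cv2.phaseCorrelate(a, b)
    return dx, dy
```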
  • The alignment method is further described with reference to the flowchart in 21. In step 2110, the viewer looks at a scene, and in step 2120 the camera 120 takes a picture of the scene. The captured image is then displayed in step 2130 on the display areas 115 of the head-mounted see-through display device 100, which operates in a transparent or semi-transparent mode, so that the viewer sees a combined representation of the displayed image overlaid on the see-through view. The viewer then delivers, in step 2140, an indication of misalignment between objects in the displayed image and corresponding objects in the see-through view. The indication of the misalignment can be given by a series of finger gestures or by movements of the observer's head, as previously described. The camera 120 is used in step 2150 to take further pictures of the scene according to the observer's indication of misalignment. Then, in step 2160, the captured further pictures are compared in the electronics 125 to determine the image adjustments needed to align the displayed images with the see-through view as seen by the viewer; the loop is condensed into a sketch below.
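The flowchart condensed into a control-loop sketch; every callable here is a placeholder for the hardware or user-interface elements the flowchart names, not an API from the patent.

```python
def alignment_procedure(camera, display, get_indication, compare):
    first = camera.capture()                        # step 2120
    display.show(first, mode="semitransparent")     # step 2130
    further = []
    while get_indication():                         # step 2140: gesture or
        further.append(camera.capture())            # head move -> step 2150
        if len(further) >= 4:                       # enough views for shift
            break                                   # and size determination
    return compare(first, further)                  # step 2160 -> adjustment
```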
  • In another embodiment, the viewer indicates misalignments between captured images of the scene and the see-through view through a combination of hand gestures and head movements. One or more additional images are captured and compared to determine the image adjustments, as previously described.
  • In a further embodiment, the head-mounted see-through display device 100 contains a GPS device or a magnetometer. The GPS device provides data on the current and previous positions of the head-mounted see-through display 100. The magnetometer provides data on the current and previous directions of the observer's line of sight. The data from the GPS device, the magnetometer, or the combination of both are used to identify objects in the scene, or addresses or locations of objects, in the images taken by the camera 120. With the displayed image aligned with the see-through view, enhanced image information regarding the identified objects can be displayed in the combined view aligned with the respective objects, so that it is perceived as such by the viewer.
  • After alignment, enhanced image information can be aligned with identified objects in the captured images and with identified edges of objects in the captured images. In addition, in head-mounted see-through display devices 100 that include tracking means, such as gyros or accelerometers, tracking information can be used to align enhanced image information and to maintain its position relative to objects in the displayed images.
  • 22 shows a flowchart for the use of a head-mounted see-through display 100 with a GPS device or a magnetometer, with the displayed image aligned with the see-through view as perceived by the viewer. In step 2210, the GPS device or the magnetometer is used to determine the position of the viewer or the direction in which the viewer is looking. The camera 120 then takes a picture of the scene in step 2220. The electronics 125 containing the processor are then used in step 2230 to analyze the captured image, along with the determined position or direction information, to identify objects in the scene. The head-mounted see-through display device 100 then uses, in step 2240, the peripheral electronics 127, which contain a wireless connection, to determine whether enhanced information is available for the identified objects or for the determined location or direction. In step 2250, available enhanced information is displayed in areas of the displayed image that correspond to the object locations when aligned with the see-through view. A sketch of this flow appears below.
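A sketch of this flow; the sensor, lookup, and display interfaces are assumptions, and only the control flow follows the flowchart.

```python
def annotate_scene(camera, gps, magnetometer, identify, lookup, display,
                   adjustment):
    scale, dx, dy = adjustment                      # from the alignment above
    position = gps.position()                       # step 2210
    heading = magnetometer.heading()
    frame = camera.capture()                        # step 2220
    for obj in identify(frame, position, heading):  # step 2230: find objects
        info = lookup(obj)                          # step 2240: wireless query
        if info is not None:                        # step 2250: draw aligned
            x, y = obj.position
            display.draw(info, (x * scale + dx, y * scale + dy))
```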
  • For example, a house in the captured image can be identified by combining its shape in the captured image with the GPS position and direction; the address of the house can then be determined from a map available on the Internet, and the address can be displayed in the displayed image so that it overlays the area of the see-through view that contains the house (see 14A). In another example, an image may be taken of a building. GPS data and magnetometer data can be used to determine the approximate GPS location of the building. Enhanced information, including the name of the building and current activities in the building, can be obtained from a server in the building, transmitted via Bluetooth, when the GPS position and the direction the viewer is looking match. A displayed image with the name of the building and a list of current activities is then shown in the area of the displayed image that corresponds to the aligned position of the building in the see-through view. An enhanced image is thus provided to the viewer as a combined image, with the displayed image overlaying the see-through view.
  • The enhanced images produced by these methods can be used for a variety of applications. In one embodiment, the enhanced image may be part of a user interface, where the enhanced image information is a virtual keyboard that can be operated by finger movements of the viewer. In this example, the virtual keyboard must be aligned with the see-through view of the viewer's fingers so that the fingers can be placed on the desired keys. In another embodiment, the positions of the objects may be determined using GPS data or magnetometer data, and the enhanced image information may be advertisements, names of objects, or addresses of objects. The objects may be buildings, exhibitions, or tourist attractions, with the viewer using the enhanced image to decide where to go or what to do. This information should be aligned with the see-through view of the buildings, exhibitions, or attractions.
  • 14A is an illustration of a combined view of an augmented reality image as seen with the viewer's right eye, with a displayed label 1470 (an address) overlaid on an object (the house) in the see-through view and the displayed label 1470 aligned with the object. In another embodiment, the enhanced image includes directions or procedural information about the objects in the scene, and the directions or procedural information must be aligned with the objects so that the viewer can perform an operation correctly. In another embodiment, the enhanced image may be a modified version of the scene in which objects are added to form a virtual image of the scene. 14B is an illustration of a combined view of an augmented reality image as seen with the viewer's right eye, the enhanced image information taking the form of displayed objects 1475 (a tree and bushes) overlaid on objects (the car and the house) in the see-through view, with the displayed objects 1475 aligned with the objects in the see-through view.
  • LIST OF REFERENCE NUMBERS
  • 100
    Head-mounted see-through display device
    102
    lens
    105
    frame
    110
    transparent lens area
    115
    Display area
    120
    camera
    125
    Electronics with processor
    127
    Peripheral electronics with wireless connection and image memory
    130
    arms
    240
    displayed image (right eye)
    245
    displayed image (left eye)
    250
    scene
    342
    see-through view
    425
    Finger of the viewer
    525
    Finger of the viewer
    625
    Finger of the viewer
    725
    Finger of the viewer
    925
    Finger of the viewer
    1025
    Finger of the viewer
    1125
    Finger of the viewer
    1225
    Finger of the viewer
    1470
    displayed label
    1475
    displayed objects
    1550
    marker
    1850
    marker
    2110
    Step: Viewer looks at scene
    2120
    Step: Camera takes picture of the scene
    2130
    Step: Display the captured image
    2140
    Step: Viewer provides an indication of misalignment
    2150
    Step: Camera takes more pictures according to the indication of the viewer
    2160
    Step: Captured images are compared to determine the required image adjustments
    2170
    Step: Determine position information
    2220
    Step: Camera takes a picture of the scene
    2230
    Step: Analyze the image to identify objects
    2240
    Step: Determine if advanced information is available for objects
    2250
    Step: Display enhanced information for objects in areas of the displayed image that are aligned with corresponding objects in the see-through view
  • This disclosure has been described with particular reference to specific embodiments, but it is to be understood that variations and modifications can be effected within the scope of the invention.
  • REFERENCES CITED IN THE DESCRIPTION
  • This list of the documents cited by the applicant has been generated automatically and is included solely for the reader's better information. The list is not part of the German patent or utility model application. The DPMA assumes no liability for any errors or omissions.
  • Cited patent literature
    • US 7369101 [0006]
    • US 6753828 [0009]
    • US 7755667 [0043]
    • US 7805003 [0052]
  • Cited non-patent literature
    • "Single point active alignment method (SPAAM) for optical see-through HMD calibration for AR" by M. Tuceryan, N. Navab, Proceedings of the IEEE and ACM International Symposium on Augmented Reality, pp. 149-158, Munich, Germany October 2000 [0008]
    • "Practical solutions for calibration of optical see-through devices" by Y. Gene, M. Tuceryan, N. Navab, Proceedings of International Symposium on Mixed and Augmented Reality (ISMAR'02), 169-175, Darmstadt, Germany, 2002 [ 0008]

Claims (17)

  1. A method of providing an enhanced image in a head-mounted see-through display device that includes a camera, comprising: capturing a first image of a scene with the camera, the scene containing objects; displaying the first image to a viewer; capturing one or more additional images of the scene with the camera, in which the viewer indicates a misalignment between the displayed first image and a see-through view of the scene; comparing the captured images to determine an image adjustment to align corresponding objects in the first image with the objects in the see-through view of the scene; providing enhanced image information; applying the determined image adjustment to the enhanced image information; and displaying the enhanced image information so that the viewer sees an enhanced image containing the enhanced image information superimposed on the see-through view.
  2. The method of claim 1, wherein the image adjustment comprises a lateral shift, a longitudinal shift, or a resizing.
  3. The method of claim 1, wherein the viewer indicates the misalignment by a hand gesture captured in the one or more additional images of the scene.
  4. The method of claim 1, wherein the viewer indicates the misalignment by moving his head, the movement being captured in the captured images.
  5. The method of claim 1, wherein the viewer indicates misalignments at two or more different positions in the see-through view of the scene.
  6. The method of claim 1, further comprising: capturing a further image of a scene with the camera; analyzing the further image to identify the locations of objects in the scene; and providing enhanced image information using the determined image adjustment such that the enhanced image information is aligned with objects in the scene.
  7. The method of claim 6, further comprising: identifying the objects.
  8. The method of claim 7, wherein the enhanced image information relates to the objects in the scene.
  9. The method of claim 1, further comprising: capturing a further image of the scene; analyzing the further image and using the determined image adjustment to determine the positions of the objects in the see-through view; providing enhanced image information; applying the determined image adjustment to the enhanced image information; displaying the enhanced image information so that the viewer sees a further enhanced image containing the enhanced image information superimposed on the see-through view; and repeating these steps for further images to provide an enhanced video.
  10. The method of claim 9, wherein the head-mounted see-through display device further comprises a GPS sensor or a magnetometer, and wherein the positions of objects are additionally determined using data provided by the GPS sensor or the magnetometer.
  11. The method of claim 9, wherein the head-mounted see-through display device further comprises a gyroscope or an accelerometer, and wherein the positions of objects are additionally determined using data provided by the gyroscope or the accelerometer.
  12. The method of claim 1, wherein the enhanced image is part of a user interface.
  13. The method of claim 8, wherein the enhanced image includes instructions.
  14. The method of claim 8, wherein the enhanced image includes names or addresses of objects.
  15. The method of claim 1, wherein the viewer indicates the misalignment by a combination of hand gestures and head movements.
  16. The method of claim 9, wherein the capturing of the further images of the scene and the analyzing of the further images are performed when a movement of the viewer is detected.
  17. The method of claim 1, wherein instructions are displayed to the viewer.
DE112012001022T 2010-02-28 2012-01-25 Alignment control in a head-worn augmented reality device Withdrawn DE112012001022T5 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US 13/037,335 2011-02-28
US13/037,324 US20110214082A1 (en) 2010-02-28 2011-02-28 Projection triggering through an external marker in an augmented reality eyepiece
US13/037,335 US20110213664A1 (en) 2010-02-28 2011-02-28 Local advertising content on an interactive head-mounted eyepiece
US 13/037,324 2011-02-28
PCT/US2012/022568 WO2012118575A2 (en) 2011-02-28 2012-01-25 Alignment control in an augmented reality headpiece

Publications (1)

Publication Number Publication Date
DE112012001022T5 true DE112012001022T5 (en) 2013-12-19

Family

ID=46758533

Family Applications (2)

Application Number Title Priority Date Filing Date
DE112012001022T Withdrawn DE112012001022T5 (en) 2010-02-28 2012-01-25 Alignment control in a head-worn augmented reality device
DE201211001032 Withdrawn DE112012001032T5 (en) 2010-02-28 2012-01-25 Lighting control in displays to be worn on the head

Family Applications After (1)

Application Number Title Priority Date Filing Date
DE201211001032 Withdrawn DE112012001032T5 (en) 2010-02-28 2012-01-25 Lighting control in displays to be worn on the head

Country Status (3)

Country Link
CA (2) CA2828413A1 (en)
DE (2) DE112012001022T5 (en)
WO (2) WO2012118573A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017217923A1 (en) 2017-10-09 2019-04-11 Audi Ag Method for operating a display device in a motor vehicle

Families Citing this family (80)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2712059A1 (en) 2008-01-22 2009-07-30 The Arizona Board Of Regents On Behalf Of The University Of Arizona Head-mounted projection display using reflective microdisplays
US9965681B2 (en) 2008-12-16 2018-05-08 Osterhout Group, Inc. Eye imaging in head worn computing
WO2010123934A1 (en) 2009-04-20 2010-10-28 The Arizona Board Of Regents On Behalf Of The University Of Arizona Optical see-through free-form head-mounted display
US9223134B2 (en) 2010-02-28 2015-12-29 Microsoft Technology Licensing, Llc Optical imperfections in a light transmissive illumination system for see-through near-eye display glasses
US9285589B2 (en) 2010-02-28 2016-03-15 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered control of AR eyepiece applications
US9759917B2 (en) 2010-02-28 2017-09-12 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered AR eyepiece interface to external devices
US10180572B2 (en) 2010-02-28 2019-01-15 Microsoft Technology Licensing, Llc AR glasses with event and user action control of external applications
US9366862B2 (en) 2010-02-28 2016-06-14 Microsoft Technology Licensing, Llc System and method for delivering content to a group of see-through near eye display eyepieces
US9341843B2 (en) 2010-02-28 2016-05-17 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a small scale image source
US9097891B2 (en) 2010-02-28 2015-08-04 Microsoft Technology Licensing, Llc See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment
US8467133B2 (en) 2010-02-28 2013-06-18 Osterhout Group, Inc. See-through display with an optical assembly including a wedge-shaped illumination system
US8488246B2 (en) 2010-02-28 2013-07-16 Osterhout Group, Inc. See-through near-eye display glasses including a curved polarizing film in the image source, a partially reflective, partially transmitting optical element and an optically flat film
US9229227B2 (en) 2010-02-28 2016-01-05 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a light transmissive wedge shaped illumination system
US9134534B2 (en) 2010-02-28 2015-09-15 Microsoft Technology Licensing, Llc See-through near-eye display glasses including a modular image source
US9182596B2 (en) 2010-02-28 2015-11-10 Microsoft Technology Licensing, Llc See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light
US8482859B2 (en) 2010-02-28 2013-07-09 Osterhout Group, Inc. See-through near-eye display glasses wherein image light is transmitted to and reflected from an optically flat film
US8472120B2 (en) 2010-02-28 2013-06-25 Osterhout Group, Inc. See-through near-eye display glasses with a small scale image source
US8477425B2 (en) 2010-02-28 2013-07-02 Osterhout Group, Inc. See-through near-eye display glasses including a partially reflective, partially transmitting optical element
US9091851B2 (en) 2010-02-28 2015-07-28 Microsoft Technology Licensing, Llc Light control in head mounted displays
WO2011106797A1 (en) 2010-02-28 2011-09-01 Osterhout Group, Inc. Projection triggering through an external marker in an augmented reality eyepiece
US9097890B2 (en) 2010-02-28 2015-08-04 Microsoft Technology Licensing, Llc Grating in a light transmissive illumination system for see-through near-eye display glasses
US9128281B2 (en) 2010-09-14 2015-09-08 Microsoft Technology Licensing, Llc Eyepiece with uniformly illuminated reflective display
US9129295B2 (en) 2010-02-28 2015-09-08 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a fast response photochromic film system for quick transition from dark to clear
AU2013331179B2 (en) 2012-10-18 2017-08-24 The Arizona Board Of Regents On Behalf Of The University Of Arizona Stereoscopic displays with addressable focus cues
US9448404B2 (en) * 2012-11-13 2016-09-20 Qualcomm Incorporated Modifying virtual object display properties to increase power performance of augmented reality devices
US9619021B2 (en) 2013-01-09 2017-04-11 Lg Electronics Inc. Head mounted display providing eye gaze calibration and control method thereof
US20140191927A1 (en) * 2013-01-09 2014-07-10 Lg Electronics Inc. Head mount display device providing eye gaze calibration and control method thereof
KR20140090552A (en) 2013-01-09 2014-07-17 엘지전자 주식회사 Head Mounted Display and controlling method for eye-gaze calibration
US10254856B2 (en) 2014-01-17 2019-04-09 Osterhout Group, Inc. External user interface for head worn computing
US9939934B2 (en) 2014-01-17 2018-04-10 Osterhout Group, Inc. External user interface for head worn computing
US9298007B2 (en) 2014-01-21 2016-03-29 Osterhout Group, Inc. Eye imaging in head worn computing
US9766463B2 (en) 2014-01-21 2017-09-19 Osterhout Group, Inc. See-through computer display systems
US9715112B2 (en) 2014-01-21 2017-07-25 Osterhout Group, Inc. Suppression of stray light in head worn computing
US9529195B2 (en) 2014-01-21 2016-12-27 Osterhout Group, Inc. See-through computer display systems
US20150205135A1 (en) 2014-01-21 2015-07-23 Osterhout Group, Inc. See-through computer display systems
US9836122B2 (en) 2014-01-21 2017-12-05 Osterhout Group, Inc. Eye glint imaging in see-through computer display systems
US9952664B2 (en) 2014-01-21 2018-04-24 Osterhout Group, Inc. Eye imaging in head worn computing
US9538915B2 (en) 2014-01-21 2017-01-10 Osterhout Group, Inc. Eye imaging in head worn computing
US10191279B2 (en) 2014-03-17 2019-01-29 Osterhout Group, Inc. Eye imaging in head worn computing
US20150241963A1 (en) 2014-02-11 2015-08-27 Osterhout Group, Inc. Eye imaging in head worn computing
US20150277120A1 (en) 2014-01-21 2015-10-01 Osterhout Group, Inc. Optical configurations for head worn computing
US9811159B2 (en) 2014-01-21 2017-11-07 Osterhout Group, Inc. Eye imaging in head worn computing
WO2016133886A1 (en) * 2015-02-17 2016-08-25 Osterhout Group, Inc. See-through computer display systems
US9529199B2 (en) 2014-01-21 2016-12-27 Osterhout Group, Inc. See-through computer display systems
US9684172B2 (en) 2014-12-03 2017-06-20 Osterhout Group, Inc. Head worn computer display systems
US9310610B2 (en) 2014-01-21 2016-04-12 Osterhout Group, Inc. See-through computer display systems
US9594246B2 (en) 2014-01-21 2017-03-14 Osterhout Group, Inc. See-through computer display systems
US20150205111A1 (en) 2014-01-21 2015-07-23 Osterhout Group, Inc. Optical configurations for head worn computing
US9651784B2 (en) 2014-01-21 2017-05-16 Osterhout Group, Inc. See-through computer display systems
US9494800B2 (en) 2014-01-21 2016-11-15 Osterhout Group, Inc. See-through computer display systems
US9753288B2 (en) 2014-01-21 2017-09-05 Osterhout Group, Inc. See-through computer display systems
US9400390B2 (en) 2014-01-24 2016-07-26 Osterhout Group, Inc. Peripheral lighting for head worn computing
US9229233B2 (en) 2014-02-11 2016-01-05 Osterhout Group, Inc. Micro Doppler presentations in head worn computing
US9401540B2 (en) 2014-02-11 2016-07-26 Osterhout Group, Inc. Spatial location presentation in head worn computing
US9299194B2 (en) 2014-02-14 2016-03-29 Osterhout Group, Inc. Secure sharing in head worn computing
CN106662731B (en) 2014-03-05 2019-11-15 The Arizona Board Of Regents On Behalf Of The University Of Arizona Wearable 3D augmented reality display
JP2017510844A (en) * 2014-03-18 2017-04-13 スリーエム イノベイティブ プロパティズ カンパニー Flat image synthesizer for near-eye display
US20150277118A1 (en) 2014-03-28 2015-10-01 Osterhout Group, Inc. Sensor dependent content position in head worn computing
US9672210B2 (en) 2014-04-25 2017-06-06 Osterhout Group, Inc. Language translation with head-worn computing
US9158116B1 (en) 2014-04-25 2015-10-13 Osterhout Group, Inc. Temple and ear horn assembly for headworn computer
US9651787B2 (en) 2014-04-25 2017-05-16 Osterhout Group, Inc. Speaker assembly for headworn computer
US9746686B2 (en) 2014-05-19 2017-08-29 Osterhout Group, Inc. Content position calibration in head worn computing
US9841599B2 (en) 2014-06-05 2017-12-12 Osterhout Group, Inc. Optical configurations for head-worn see-through displays
US9575321B2 (en) 2014-06-09 2017-02-21 Osterhout Group, Inc. Content presentation in head worn computing
US9810906B2 (en) 2014-06-17 2017-11-07 Osterhout Group, Inc. External user interface for head worn computing
US9366867B2 (en) 2014-07-08 2016-06-14 Osterhout Group, Inc. Optical systems for see-through displays
US9829707B2 (en) 2014-08-12 2017-11-28 Osterhout Group, Inc. Measuring content brightness in head worn computing
US9423842B2 (en) 2014-09-18 2016-08-23 Osterhout Group, Inc. Thermal management for head-worn computer
US9366868B2 (en) 2014-09-26 2016-06-14 Osterhout Group, Inc. See-through computer display systems
US9671613B2 (en) 2014-09-26 2017-06-06 Osterhout Group, Inc. See-through computer display systems
US9448409B2 (en) 2014-11-26 2016-09-20 Osterhout Group, Inc. See-through computer display systems
USD743963S1 (en) 2014-12-22 2015-11-24 Osterhout Group, Inc. Air mouse
USD751552S1 (en) 2014-12-31 2016-03-15 Osterhout Group, Inc. Computer glasses
USD753114S1 (en) 2015-01-05 2016-04-05 Osterhout Group, Inc. Air mouse
US20160239985A1 (en) 2015-02-17 2016-08-18 Osterhout Group, Inc. See-through computer display systems
US9910284B1 (en) 2016-09-08 2018-03-06 Osterhout Group, Inc. Optical systems for head-worn computers
US10495895B2 (en) 2017-06-14 2019-12-03 Varjo Technologies Oy Display apparatus and method of displaying using polarizers
US10422995B2 (en) 2017-07-24 2019-09-24 Mentor Acquisition One, Llc See-through computer display systems with stray light management
CN109991744A (en) * 2018-01-02 2019-07-09 BOE Technology Group Co., Ltd. Display device, display method and head-up display
CN110546550A (en) * 2018-02-12 2019-12-06 优奈柯恩(北京)科技有限公司 Augmented reality device and optical system used therein

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5151722A (en) * 1990-11-05 1992-09-29 The Johns Hopkins University Video display on spectacle-like frame
US5625765A (en) * 1993-09-03 1997-04-29 Criticom Corp. Vision systems including devices and methods for combining images for extended magnification schemes
JPH09219832A (en) * 1996-02-13 1997-08-19 Olympus Optical Co Ltd Image display
WO1997034182A1 (en) * 1996-03-11 1997-09-18 Seiko Epson Corporation Head-mounted display
US7898504B2 (en) * 2007-04-06 2011-03-01 Sony Corporation Personal theater display

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6753828B2 (en) 2000-09-25 2004-06-22 Siemens Corporated Research, Inc. System and method for calibrating a stereo optical see-through head-mounted display system for augmented reality
US7369101B2 (en) 2003-06-12 2008-05-06 Siemens Medical Solutions Usa, Inc. Calibrating real and virtual views
US7805003B1 (en) 2003-11-18 2010-09-28 Adobe Systems Incorporated Identifying one or more objects within an image
US7755667B2 (en) 2005-05-17 2010-07-13 Eastman Kodak Company Image sequence stabilization method and camera having dual path image sequence stabilization

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Practical solutions for Calibration of optical see-through devices" von Y. Gene, M. Tuceryan, N. Navab, Proceedings of International Symposium on Mixed and Augmented Reality (ISMAR'02), 169-175, Darmstadt, Deutschland, 2002
"Single point aktive allignment method (SPAAM) for optical see-through HMD calibration for AR" von M. Tuceryan, N. Navab, Proceedings of the IEEE and ACM International Symposium on Augmented Reality, Seiten 149-158, München, Deutschland Oktober 2000

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017217923A1 (en) 2017-10-09 2019-04-11 Audi Ag Method for operating a display device in a motor vehicle
WO2019072481A1 (en) 2017-10-09 2019-04-18 Audi Ag Method for operating a display device in a motor vehicle

Also Published As

Publication number Publication date
DE112012001032T5 (en) 2014-01-30
WO2012118575A2 (en) 2012-09-07
CA2828413A1 (en) 2012-09-07
WO2012118575A3 (en) 2013-03-14
CA2828407A1 (en) 2012-09-07
WO2012118573A1 (en) 2012-09-07

Similar Documents

Publication Publication Date Title
Azuma et al. Recent advances in augmented reality
US6633304B2 (en) Mixed reality presentation apparatus and control method thereof
US7589747B2 (en) Mixed reality space image generation method and mixed reality system
KR101309176B1 (en) Apparatus and method for augmented reality
Azuma A survey of augmented reality
JP4848339B2 (en) Virtual window method, system, and computer program with simulated parallax and field-of-view change
JP3926837B2 (en) Display control method and apparatus, program, and portable device
US8970690B2 (en) Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
US20060262140A1 (en) Method and apparatus to facilitate visual augmentation of perceived reality
US10095030B2 (en) Shape recognition device, shape recognition program, and shape recognition method
US20030107643A1 (en) Method and system for controlling the motion of stereoscopic cameras based on a viewer's eye motion
US20120299962A1 (en) Method and apparatus for collaborative augmented reality displays
Kruijff et al. Perceptual issues in augmented reality revisited
DE102009037835B4 (en) Method for displaying virtual information in a real environment
Schmalstieg et al. Augmented reality: principles and practice
US20110084983A1 (en) Systems and Methods for Interaction With a Virtual Environment
US9372348B2 (en) Apparatus and method for a bioptic real time video system
JP5582548B2 (en) Display method of virtual information in real environment image
JP4927631B2 (en) Display device, control method therefor, program, recording medium, and integrated circuit
EP1404126B1 (en) Video combining apparatus and method
TWI591378B (en) Calibration method of OST HMD of AR system and non-transient computer readable media
WO2012029576A1 (en) Mixed reality display system, image providing server, display apparatus, and display program
CN104205175B (en) Information processor, information processing system and information processing method
US20100287500A1 (en) Method and system for displaying conformal symbology on a see-through display
JP2008521110A (en) Personal device with image capture function for augmented reality resource applications, and method thereof

Legal Events

Date Code Title Description
R082 Change of representative

Representative's name: UEXKUELL & STOLBERG, DE

R081 Change of applicant/patentee

Owner name: MICROSOFT CORPORATION, US

Free format text: FORMER OWNER: OSTERHOUT GROUP, INC., SAN FRANCISCO, US

Effective date: 20140212

Owner name: MICROSOFT CORPORATION, REDMOND, US

Free format text: FORMER OWNER: OSTERHOUT GROUP, INC., SAN FRANCISCO, CALIF., US

Effective date: 20140212

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, REDMOND, US

Free format text: FORMER OWNER: OSTERHOUT GROUP, INC., SAN FRANCISCO, CALIF., US

Effective date: 20140212

R082 Change of representative

Representative's name: UEXKUELL & STOLBERG, DE

Effective date: 20140212

Representative's name: OLSWANG GERMANY LLP, DE

Effective date: 20140212

R082 Change of representative

Representative's name: OLSWANG GERMANY LLP, DE

R082 Change of representative

Representative's name: OLSWANG GERMANY LLP, DE

R082 Change of representative

Representative's name: OLSWANG GERMANY LLP, DE

Effective date: 20141202

Representative's name: OLSWANG GERMANY LLP, DE

Effective date: 20150219

R081 Change of applicant/patentee

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, REDMOND, US

Free format text: FORMER OWNER: MICROSOFT CORPORATION, REDMOND, WASH., US

Effective date: 20150219

R005 Application deemed withdrawn due to failure to request examination