US20210318547A1 - Augmented reality viewer with automated surface selection placement and content orientation placement

Augmented reality viewer with automated surface selection placement and content orientation placement

Info

Publication number
US20210318547A1
US20210318547A1 (Application US 17/357,795)
Authority
US
United States
Prior art keywords
surface area
user
orientation
orientation vector
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/357,795
Inventor
Victor Ng-Thow-Hing
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Magic Leap Inc
Original Assignee
Magic Leap Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Magic Leap Inc filed Critical Magic Leap Inc
Priority to US17/357,795
Assigned to MAGIC LEAP, INC. reassignment MAGIC LEAP, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NG-THOW-HING, VICTOR
Publication of US20210318547A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0176Head mounted characterised by mechanical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/147Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37Details of the operation on graphic patterns
    • G09G5/373Details of the operation on graphic patterns for modifying the size of the graphic pattern
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37Details of the operation on graphic patterns
    • G09G5/377Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/02Improving the quality of display appearance
    • G09G2320/0261Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0492Change of orientation of the displayed image, e.g. upside-down, mirrored
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user

Definitions

  • This invention relates to an augmented reality viewer and to an augmented reality viewing method.
  • An augmented reality viewer is a wearable device that presents the user with two images, one for the left eye and one for the right eye. Objects in the images for each eye are rendered from slightly different viewpoints, which allows the brain to process the objects as three-dimensional objects. When the images constantly change viewpoints as the viewer moves, movement around synthetic three-dimensional content can be simulated.
  • An augmented reality viewer usually includes technology that allows the presentation of digital or virtual image information as an augmentation to visualization of the actual world around the user.
  • the virtual image information is presented in a static location relative to the augmented reality viewer so that, if the user moves their head, and the augmented reality viewer with their head, the user is presented with an image that remains in a stationary position in front of them while real world objects shift in their view. This gives the user the appearance that the virtual image information is not fixed relative to the real world objects, but instead is fixed in the viewer's point of view.
  • technologies exist to keep the virtual image information in a stationary position relative to the real world objects when the user moves their head. In the latter scenario, the user may be given some control over the initial placement of the virtual image information relative to the real world objects.
  • the invention provides an augmented reality viewer including a display that permits a user to see real world objects, a data channel to hold content, a user orientation determination module to determine a first user orientation of a user relative to a first display area and to determine a second user orientation of the user relative to the first display area, a projector connected to the data channel to display the content through the display to the user within confines of the first display area while the user views the real world objects, and a content orientation selection module connected to the surface extraction module and the user orientation determination module to display the content in a first content orientation relative to the first display area so that a near edge of the content is close to the user when the user is in the first user orientation, and display the content in a second content orientation relative to the first display area so that the near edge is rotated closer to the user when the user is in the second user orientation and the content is rotated relative to the first display area from the first content orientation to the second content orientation.
  • the invention further provides an augmented reality viewing method comprising determining, by the processor, a first user orientation of a user relative to a first display area, determining, by the processor, a first content orientation relative to the display when the user is in the first orientation, displaying, by the processor, content in the first content orientation through a display to the user within confines of the first display area while the user views real world objects through the display while in the first user orientation, determining, by the processor, a second user orientation of the user relative to the first display area, determining, by the processor, a second content orientation relative to the display when the user is in the second location and displaying, by the processor, content in the second content orientation through a display to the user within confines of the display area while the user views real world objects through the display from the second location, wherein the content is rotated relative to the first display area from the first content orientation to the second content orientation.
  • the invention also provides an augmented reality viewer including a display that permits a user to see real world objects, a data channel to hold content, a surface area extraction module to determine a first surface area and a second surface area, a user orientation determination module to determine a first orientation of a user relative to the first surface area and the second surface area, a surface area selection module to select a preferred surface area between the first surface area and the second surface area based on a normal to the respective surface area being directed more opposite to the first user orientation of the user and a projector that displays the content through the display to the user within confines of the preferred surface area while the user views the real world objects.
  • the invention further provides an augmented reality viewing method including determining, by a processor, a first surface area and a second surface area, determining, by the processor, a first orientation of a user relative to the first surface area and the second surface area, selecting, by the processor, a preferred surface area between the first surface area and the second surface area based on a normal to the respective surface area being directed more towards the first location of the user and displaying, by the processor, content through a display to the user within confines of the preferred surface area while the user views real world objects through the display from the first location.
  • the invention also provides an augmented reality viewer including an environmental calculation unit to determine a first vector indicative of an orientation of a user, a vector calculator to calculate a second vector, a selection module to calculate a dot product of the first vector and the second vector, a data channel to hold content, a content rendering module to determine placement of the content based on the dot product, a display that permits the user to see real world objects and a projector that displays the content through the display to the user while the user views the real world objects through the display, the content being displayed based on the placement determined by the content rendering module.
  • the invention further provides an augmented reality viewing method including determining, by a processor, a first vector indicative of an orientation of a user, calculating, by the processor, a second vector, calculating, by the processor, a dot product of the first vector and the second vector, determining, by the processor, placement of content based on the dot product and displaying, by the processor, the content through a display to the user while the user views real world objects through the display, the content being displayed based on the placement determined by the content rendering module.
  • FIG. 1A is a block diagram of an augmented reality viewer that is used by a user to see real world objects augmented with content from a computer;
  • FIG. 1B is a perspective view of the augmented reality viewer
  • FIG. 2 is a perspective view illustrating a user wearing the augmented reality viewer in a three-dimensional environment while viewing two-dimensional content;
  • FIG. 3 is a perspective view illustrating a three-dimensional data map that is created with the augmented reality viewer
  • FIG. 4 is a perspective view illustrating the determination of a user orientation vector, the extraction of surface areas and the calculation of surface area orientation vectors
  • FIG. 5 is a view similar to FIG. 4 illustrating placement of a rendering of content on one of the surface areas
  • FIG. 6 is a view similar to FIG. 5 illustrating a change in the user orientation vector
  • FIG. 7 is a view similar to FIG. 6 illustrating placement of a rendering of the content due to the change in the user orientation vector
  • FIG. 8 is a view similar to FIG. 7 illustrating a change in the user orientation vector due to movement of the user
  • FIG. 9 is a view similar to FIG. 8 illustrating rotation of the rendering of the content due to the change in the user orientation vector
  • FIG. 10 is a view similar to FIG. 9 illustrating a change in the user orientation vector due to movement of the user;
  • FIG. 11 is a view similar to FIG. 10 illustrating rotation of the rendering of the content due to the change in the user orientation vector;
  • FIG. 12 is a view similar to FIG. 11 illustrating a change in the user orientation vector due to movement of the user;
  • FIG. 13 is a view similar to FIG. 12 illustrating rotation of the rendering of the content due to the change in the user orientation vector;
  • FIG. 14 is a view similar to FIG. 13 illustrating a change in the user orientation vector due to the user looking up;
  • FIG. 15 is a view similar to FIG. 14 illustrating the placement of a rendering of the content on another surface area due to the change in the user orientation vector;
  • FIG. 16 is a flow chart illustrating the functioning of an algorithm to carry out the method of the preceding figures
  • FIG. 17 is a perspective view illustrating a user wearing the augmented reality viewer in a three-dimensional environment while viewing three-dimensional content;
  • FIG. 18 is a top plan view of FIG. 17 ;
  • FIG. 19 is a view similar to FIG. 18 wherein the user has rotated in a clockwise direction around a display surface
  • FIG. 20 is a view similar to FIG. 19 wherein the content has rotated in a clockwise direction;
  • FIG. 21 is a perspective view illustrating a user while viewing content on a vertical surface
  • FIG. 22 is a view similar to FIG. 21 wherein the user has rotated in a counter-clockwise direction;
  • FIG. 23 is a view similar to FIG. 22 wherein the content has rotated in a counter-clockwise direction;
  • FIG. 24 is a block diagram of a machine in the form of a computer that can find application in the system of the present invention, in accordance with one embodiment of the invention.
  • The terms "surface" and "surface area" are used herein to describe two-dimensional areas that are suitable for use as display areas. Aspects of the invention may find application when other display areas are used, for example a display area that is a three-dimensional surface area or a display area representing a slice within a three-dimensional volume.
  • FIG. 1A of the accompanying drawings illustrates an augmented reality viewer 12 that a user uses to see a direct view of a real world scene, including real world surfaces and real world objects 14 , that is augmented with content 16 of the kind that is stored on, received by, or otherwise generated by a computer or computer network.
  • the augmented reality viewer 12 includes a display 18 , a data channel 20 , a content rendering module 22 , a projector 24 , a depth sensor 28 , a position sensor such as an accelerometer 30 , a camera 32 , an environmental calculation unit 34 , and a content placement and content orientation unit 36 .
  • the data channel 20 may be connected to a storage device that holds the content 16 or may be connected to a service that provides the content 16 in real time.
  • the content 16 may for example be static images such as photographs, images that remain static for a period of time and can be manipulated by a user such as web pages, text documents or other data that is displayed on a computer display, or moving images such as videos or animations.
  • the content 16 may be two-dimensional, three-dimensional, static, dynamic, text, image, video, etc.
  • the content 16 may include games, books, movies, video clips, advertisements, avatars, drawings, applications, web pages, decorations, sports games, replays, 3-D models or any other type of content as will be appreciated by one of skill in the art.
  • the content rendering module 22 is connected to the data channel 20 to receive the content 16 from the data channel 20 .
  • the content rendering module 22 converts the content 16 into a form that is suitable for three-dimensional viewing.
  • the projector 24 is connected to the content rendering module 22 .
  • the projector 24 converts data generated by the content rendering module 22 into light and delivers the light to the display 18 .
  • the light travels from the display 18 to eyes 26 of the user.
  • One way that virtual content can be made to appear to be at a certain depth is by causing light rays to diverge and form a curved wavefront in a way that mimics how light from real physical objects reaches an eye.
  • the eye then focuses the diverging light beams onto the retina by changing shape of the anatomic lens in a process called accommodation. Different divergence angles represent different depths and are created using diffraction gratings on the exit
  • the display 18 is a transparent display.
  • the display 18 allows the user to see the real world objects 14 through the display 18 .
  • the user thus perceives an augmented reality view 40 wherein the real world objects 14 that the user sees in three dimensions are augmented with a three-dimensional image that is provided to the user from the projector 24 via the display 18 .
  • the depth sensor 28 and the camera 32 are mounted in a position to capture the real world objects 14 .
  • the depth sensor 28 typically detects electromagnetic waves in the infrared range and the camera 32 detects electromagnetic waves in the visible light spectrum.
  • more than one camera 32 may be mounted on a frame 13 of the augmented reality viewer 12 in a world-facing position.
  • four cameras 32 are mounted to the frame 13 with two in a forward world-facing position and two in a left and right side or oblique world-facing position.
  • the fields of view of the multiple cameras 32 may overlap.
  • the depth sensor 28 and the cameras 32 are mounted in a static position relative to a frame 13 of the augmented reality viewer 12 . Center points of images that are captured by the depth sensor 28 and the camera 32 are always in the same, forward direction relative to the augmented reality viewer 12 .
  • the accelerometer 30 is mounted in a stationary position to the frame of the augmented reality viewer 12 .
  • the accelerometer 30 detects the direction of gravitation force.
  • the accelerometer 30 can be used to determine the orientation of the augmented reality viewer with respect to the Earth's gravitational field.
  • the combination of the depth sensor 28 , a head pose algorithm that relies on visual simultaneous localization and mapping (“SLAM”) and inertial measurement unit (“IMU”) input, and the accelerometer 30 permits the augmented reality viewer 12 to establish the locations of the real world objects 14 relative to the direction of gravitation force and relative to the augmented reality viewer 12 .
  • the camera 32 captures images of the real world objects 14 and further processing of the images on a continual basis provides data that indicates movement of the augmented reality viewer 12 relative to the real world objects 14 . Because the depth sensor 28 , world cameras 32 , and the accelerometer 30 determine the locations of the real world objects 14 relative to gravitation force on a continual basis, the movement of the augmented reality viewer 12 relative to gravitation force and a mapped real world environment can also be calculated.
  • the environmental calculation unit 34 includes an environment mapping module 44 , a surface extraction module 46 and a viewer orientation determination module 48 .
  • the environment mapping module 44 may receive input from one or more sensors.
  • the one or more sensors may include, for example, the depth sensor 28 , one or more world camera 32 , and the accelerometer 30 to determine the locations of the real world surfaces and objects 14 .
  • the surface extraction module 46 may receive data from the environment mapping module 44 and determine planar surfaces in the environment.
  • the viewer orientation determination module 48 is connected to and receives input from the depth sensor 28 , the cameras 32 , and the accelerometer 30 to determine a user orientation of the user relative to the real world objects 14 and the surfaces that are identified by the surface extraction module 46 .
  • the content placement and content orientation unit 36 includes a surface vector calculator 50 , a surface selection module 52 , a content size determination module 54 , a content vector calculator 56 and a content orientation selection module 58 .
  • the surface vector calculator 50 , the surface selection module 52 and content size determination module 54 may be sequentially connected to one another.
  • the surface selection module 52 is connected to and provides input to the viewer orientation determination module 48 .
  • the content vector calculator 56 is connected to the data channel 20 so as to be able to receive the content 16 .
  • the content orientation selection module 58 is connected to and receives input from the content vector calculator 56 and the viewer orientation determination module 48 .
  • the content size determination module 54 is connected to and provides input to the content orientation selection module 58 .
  • the content rendering module 22 is connected to and receives input from the content size determination module 54 .
  • FIG. 2 illustrates a user 60 who is wearing the augmented reality viewer 12 within a three-dimensional environment.
  • a vector 62 signifies a direction of gravitation force as detected by one or more sensors on the augmented reality viewer 12 .
  • a vector 64 signifies a direction to the right from a perspective of the user 60 .
  • a user orientation vector 66 signifies a user orientation, in the present example a forward direction in the middle of a view of the user 60 .
  • the user orientation vector 66 also points toward the center points of the images captured by the depth sensor 28 and camera 32 in FIG. 1 .
  • FIG. 1B shows a further coordinate system 63 that includes the vector 64 to the right, the user orientation vector 66 and a device upright vector 67 that are orthogonal to one another.
  • the three-dimensional environment includes a table 68 with a horizontal surface 70 , surfaces 72 and 74 , and objects 76 that provide obstructions that may make the surfaces 72 and 74 unsuitable for placement of content.
  • objects 76 that disrupt continuous surfaces 72 and 74 may include picture frames, mirrors, cracks in a wall, rough texture, a different colored area, a hole in the surface, a protrusion of the surface, or any other non-uniformity with respect to the planar surfaces 72 , 74 .
  • the surfaces 78 and 80 may be more suitable for placement of content because of their relatively large size and their proximity to the user 60 .
  • FIG. 3 illustrates the functioning of the depth sensor 28 , accelerometer 30 and environment mapping module 44 in FIG. 1 .
  • the depth sensor 28 captures the depth of all features, including objects and surfaces in the three-dimensional environment.
  • the environment mapping module 44 receives data, directly or indirectly, from one or more sensors on the augmented reality viewer 12 .
  • the depth sensor 28 and the accelerometer 30 may provide input to the environment mapping module 44 for mapping the depth of the three-dimensional environment in three dimensions.
  • FIG. 3 also illustrates the functioning of the camera 32 and the viewer orientation determination module 48 .
  • the camera 32 captures an image of the objects 76 and surfaces 78 .
  • the viewer orientation determination module 48 receives an image from the camera 32 and processes the image to determine that an orientation of the augmented reality viewer 12 that is worn by the user 60 is as represented by the user orientation vector 66 .
  • other methods of mapping a three-dimensional environment may be employed, for example using one or more cameras that are located in a stationary position within a room.
  • the integration of the depth sensor 28 and the environment mapping module 44 within the augmented reality viewer 12 provides for a more mobile application.
  • FIG. 4 illustrates the functioning of the surface extraction module 46 in FIG. 1 .
  • the surface extraction module 46 processes the three-dimensional map that is created in FIG. 3 to determine whether there are any surfaces that are suitable for placement and viewing of content, in the present example two-dimensional content.
  • the surface extraction module 46 determines a horizontal surface area 82 and two vertical surface areas 84 and 86 .
  • the surface areas 82 , 84 and 86 are not real surfaces, but instead electronically represent two-dimensional planar surfaces oriented in a three-dimensional environment.
  • the surface areas 82 , 84 and 86 which are data representations, correspond respectively to the real surfaces 70 , 78 and 80 in FIG. 2 forming part of the real world objects 14 in FIG. 1 .
  • FIG. 4 illustrates a cube 88 and a shadow 90 of the cube 88 . These elements are included to assist the reader in tracking changes in the user orientation vector 66 and movement of the user 60 and the augmented reality viewer 12 in FIG. 2 through the three-dimensional space.
  • FIG. 4 also illustrates the functioning of the surface vector calculator 50 in FIG. 1 .
  • the surface vector calculator 50 calculates a surface area orientation vector for each extracted surface of the mapped three-dimensional environment. For example, the surface vector calculator 50 calculates a surface area orientation vector 92 that is normal to a plane of the surface area 82 . Similarly, the surface vector calculator 50 calculates a surface area orientation vector 94 that is normal to the surface area 84 and a surface area orientation vector 96 that is normal to the surface area 86 .
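  • As a hedged illustration, the normal of an extracted planar surface could be estimated from its mapped points by least-squares plane fitting; the patent does not specify the method, so the SVD approach and function name below are assumptions:

```python
import numpy as np

def surface_normal(points):
    """Estimate the unit normal of a roughly planar patch of mapped
    3-D points (rows of an Nx3 array) by least-squares plane fitting."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value spans
    # the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)
```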
  • the surface selection module 52 then calculates a relationship between each surface and the user.
  • the surface selection module 52 in FIG. 1A calculates a dot product of the user orientation vector 66 and the surface area orientation vector 92 .
  • the dot product of unit vectors a and b is represented by the following equation: a · b = a₁b₁ + a₂b₂ + a₃b₃ = cos θ, where θ is the angle between a and b, so the dot product of unit vectors ranges from −1 (vectors opposed) to 1 (vectors aligned).
  • the user orientation vector 66 and the surface area orientation vector 92 are orthogonal to one another, which means their dot product is zero.
  • the surface selection module 52 also calculates a dot product of the user orientation vector 66 and the surface area orientation vector 94 . Because the user orientation vector 66 and the surface area orientation vector 94 are orthogonal their dot product is zero.
  • the surface selection module 52 also calculates a dot product of the user orientation vector 66 and the surface area orientation vector 96 . Because the user orientation vector 66 and the surface area orientation vector 96 are at 180° relative to one another, their dot product is −1. Because the dot product that includes the surface area orientation vector 96 is the most negative of the three dot products, the surface selection module 52 determines that the surface area 86 is the preferred surface area between the surface areas 82 , 84 and 86 for displaying content. The more negative the dot product is, the more likely it will be that content will be oriented to be directly facing the viewer. Because the surface area 86 is a vertical surface area, the content placement and content orientation unit 36 does not invoke the content orientation selection module 58 in FIG. 1 .
  • the dot product is one of many surface characteristics that can be prioritized by the system or by the needs of the virtual content for choosing the best surface. For example, if the surface that has a dot product of −1.0 is tiny and is far away from the user, it may not be preferable over a surface that has a dot product of −0.8 but is large and near to the user. The system may choose a surface that has good contrast ratio properties when placing content, so it will be easier for the user to see.
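  • The following sketch illustrates this selection rule; the particular weighting of size and distance against the dot product is an assumption for illustration, not the patent's actual scoring:

```python
import numpy as np

def select_surface(user_orientation, surfaces):
    """Choose a display surface for a unit gaze vector (the user
    orientation vector 66). Each surface is a dict with a unit 'normal',
    an 'area' in square meters and a 'distance' in meters (assumed
    fields). The most negative dot product dominates, but large, nearby
    surfaces are favored, as described above."""
    best, best_score = None, float("inf")
    for s in surfaces:
        facing = np.dot(user_orientation, s["normal"])  # -1 = squarely facing the user
        score = facing - 0.1 * s["area"] + 0.05 * s["distance"]  # hypothetical weights
        if score < best_score:
            best, best_score = s, score
    return best
```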
  • the content size determination module 54 determines an appropriate size of content to display on the surface area 86 .
  • the content has an optimal aspect ratio, for example an aspect ratio of 16 on a near edge and 9 on a side edge.
  • the content size determination module 54 uses the ratio of the near edge to the side edge to determine the size and shape of the content, preserving this aspect ratio at all viewing angles so as not to distort content.
  • the content size determination module 54 calculates the optimal height and width of the content with the optimal aspect ratio that will fit with the surface area 86 . In the given example, the distance between left and right edges of the surface area 86 determines the size of the content.
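  • A minimal sketch of this fitting step, assuming the surface is described by its width and height and using the 16:9 ratio from the example above:

```python
def fit_content(surface_width, surface_height, aspect=16 / 9):
    """Return the largest (width, height) with the given aspect ratio
    that fits within the surface, so the content is never distorted."""
    width = surface_width
    height = width / aspect
    if height > surface_height:      # too tall: let the height govern instead
        height = surface_height
        width = height * aspect
    return width, height

# A 2.0 m wide, 0.8 m deep surface yields a roughly 1.42 m x 0.8 m rendering.
print(fit_content(2.0, 0.8))
```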
  • FIG. 5 illustrates the functioning of the content rendering module 22 and the projector 24 in FIG. 1 .
  • the content rendering module 22 provides the content 16 in its calculated orientation to the projector 24 based on the size determination of the content size determination module 54 and the surface selection module 52 .
  • the viewer views the content 16 as a rendering 98 that is placed in three-dimensional space on and coplanar with the surface area 86 .
  • the content 16 is not rendered on the surface areas 82 and 84 . All other surface characteristics being equal, the surface area 86 provides an optimal area for the rendering 98 when compared to the surface areas 82 and 84 , because of the user orientation as represented by the user orientation vector 66 .
  • the rendering 98 remains static on the surface area 86 when the user orientation vector changes by a small degree.
  • when the viewer orientation determination module 48 in FIG. 1A senses that the user orientation vector has changed by more than a predetermined threshold, for example five degrees, the system automatically recalculates all dot products as described above and, if necessary, repositions and resizes the content that is being rendered for display to the user. Alternatively, the system may routinely, e.g. every 15 seconds, recalculate all dot products and place content as described above.
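  • The trigger logic might be sketched as follows; the five-degree threshold and 15-second interval come from the text, while the function and argument names are hypothetical:

```python
import numpy as np

ANGLE_THRESHOLD_DEG = 5.0   # re-evaluate when the gaze moves more than this
PERIODIC_INTERVAL_S = 15.0  # or routinely at this interval

def needs_recalculation(previous_gaze, current_gaze, seconds_since_last):
    """True when the surface dot products should be recomputed,
    given two unit user orientation vectors."""
    cos_angle = np.clip(np.dot(previous_gaze, current_gaze), -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    return angle_deg > ANGLE_THRESHOLD_DEG or seconds_since_last >= PERIODIC_INTERVAL_S
```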
  • the user may select the area 86 for the content to remain even when they change their orientation.
  • the user 60 changes the inclination of their head.
  • the user orientation vector 66 rotates in a downward direction 100 .
  • a new user orientation is represented by a new user orientation vector 102 .
  • the cameras 32 in FIGS. 1A and 1B continually capture images of the real world objects 14 . Additional sensors such as the depth sensor 28 and the accelerometer 30 may also continually capture and provide updated information.
  • the viewer orientation determination module 48 processes the images, along with other data captured by sensors on board the augmented reality viewer 12 , to determine relative movement of the real world objects 14 within a view of the camera 32 .
  • the viewer orientation determination module 48 then processes such movement to determine the change of the user orientation vector from the user orientation vector 66 in FIG. 5 to the user orientation vector 102 in FIG. 6 .
  • the system normally selects the surface with the most optimal dot product, although there may be some tolerance/range allowable for the dot product so that jitter and processing are reduced.
  • the system may move the content when there is another dot-product that is more optimal and if the dot-product that is more optimal is at least 5 percent better than the dot-product of the surface where the content is currently displayed.
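  • This hysteresis rule could be sketched as follows; reading "5 percent better" as "more negative by 5 percent of the current magnitude" is an assumption consistent with the surface selection described above:

```python
def should_move_content(current_dot, candidate_dot, margin=0.05):
    """Move the rendering only when the candidate surface's dot product
    is at least `margin` better (more negative) than the current
    surface's, which suppresses jitter near ties."""
    return candidate_dot < current_dot - margin * abs(current_dot)
```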
  • the surface selection module 52 again calculates three dot products, namely between the user orientation vector 102 and the surface area orientation vector 92 , the user orientation vector 102 and the surface area orientation vector 94 , and the user orientation vector 102 and the surface area orientation vector 96 .
  • the surface selection module 52 determines which one of the three dot products is the most negative. In the present example, the dot product between the user orientation vector 102 and the surface area orientation vector 92 is the most negative.
  • the surface selection module 52 determines that the surface area 82 is the preferred surface because its associated dot product is more negative than for the surface areas 84 and 86 . The system may also consider other factors as described above.
  • the content placement and content orientation unit 36 in FIG. 1A invokes the content vector calculator 56 and the content orientation selection module 58 . Following operation of the content orientation selection module 58 , the content size determination module 54 is again invoked.
  • the functioning of the content vector calculator 56 , content orientation selection module 58 and content size determination module 54 are better illustrated with the assistance of FIG. 7 .
  • FIG. 7 illustrates that the content rendering module 22 and projector 24 create a rendering 104 of the content 16 within and coplanar with the surface area 82 .
  • the rendering on the surface area 86 is no longer displayed to the user 60 .
  • the rendering 104 has a far edge 106 , a near edge 108 , a right edge 110 and a left edge 112 .
  • the content vector calculator 56 in FIG. 1A may calculate a content orientation vector 114 .
  • the content orientation vector 114 extends from the near edge 108 to the far edge 106 and is orthogonal to both the near edge 108 and the far edge 106 .
  • the calculations that are made by the content vector calculator depend on the content that is provided on the data channel. Some content may already have a content orientation vector that extends from the near edge to the far edge of the content, in which case the content vector calculator 56 simply identifies and isolates the content orientation vector within the code of the content. In other instances, a content orientation vector may be associated with the content and the content vector calculator 56 may have to re-orient the content orientation vector to extend from the near edge to the far edge of the content. In still other instances, no content orientation vector is provided and the content vector calculator 56 may generate a content orientation vector based on other data such as image analysis, the placement of tools in the content, etc.
  • the content orientation selection module 58 calculates a dot product between the user orientation vector 102 and the content orientation vector 114 .
  • the dot product is calculated for four scenarios, namely when the content orientation vector 114 is oriented in the direction shown in FIG. 7 , when the content orientation vector 114 is oriented 90° to the right, when the content orientation vector 114 is oriented 180°, and when the content orientation vector 114 is oriented 90° to the left.
  • the content orientation selection module 58 selects the dot product that is the most positive among the four dot products and places the rendering 104 so that the content orientation vector 114 is aligned in the direction with the most positive associated dot product.
  • the near edge 108 is then located closer to the user 60 than the far edge 106 and the right and left edges 112 and 110 are located to the right and to the left from the orientation of the user 60 as depicted by the user orientation vector 102 .
  • the content 16 is thus oriented in a manner that is easily viewable by the user 60 . For example, a photograph of a head and torso of a person is displayed with the head farthest from the user 60 and the torso closest to the user 60 , and a text document is displayed with the first lines farthest from the user 60 and the last lines closest to the user 60 .
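  • A sketch of the four-rotation test described above; rotating the content orientation vector about the surface normal with Rodrigues' formula is an assumption about the mechanics, and the most positive dot product is taken as the winner, consistent with the passages that follow:

```python
import numpy as np

def best_content_rotation(user_orientation, content_vector, surface_normal):
    """Try the content orientation vector at 0, 90, 180 and 270 degrees
    about the surface normal and return the angle whose dot product with
    the user orientation vector is most positive, which places the near
    edge of the content closest to the user."""
    n = surface_normal / np.linalg.norm(surface_normal)
    best_angle, best_dot = 0, -np.inf
    for angle in (0, 90, 180, 270):
        theta = np.radians(angle)
        # Rodrigues' rotation of the content vector about the normal n.
        v = (content_vector * np.cos(theta)
             + np.cross(n, content_vector) * np.sin(theta)
             + n * np.dot(n, content_vector) * (1.0 - np.cos(theta)))
        d = np.dot(user_orientation, v)
        if d > best_dot:
            best_angle, best_dot = angle, d
    return best_angle
```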
  • the content size determination module 54 has determined an appropriate size for the rendering 104 with the right edge 110 and the left edge 112 defining the width of the rendering 104 within the surface area 82 and a distance between the far edge 106 and the near edge 108 being determined by the desired aspect ratio.
  • the user 60 has moved in a direction 116 counterclockwise around the surface area 82 .
  • the user 60 has also rotated their body counterclockwise by 90°.
  • the user 60 has now established a new orientation as represented by a new user orientation vector 118 .
  • the user's head is still inclined downward toward the surface area 82 and the surface areas 84 and 86 are now located behind and to the right of the user 60 , respectively.
  • the surface selection module 52 again calculates a dot product associated with each one of the surface area orientation vectors 92 , 94 and 96 .
  • the dot product of the user orientation vector 118 and the surface area orientation vector 94 has now become positive.
  • the dot product between the user orientation vector 118 and the surface area orientation vector 96 is approximately zero.
  • the dot product between the user orientation vector 118 and the surface area orientation vector 92 is the most negative.
  • the surface selection module 52 in FIG. 1A selects the surface area 82 associated with the surface area orientation vector 92 as the preferred surface for positioning of a rendering of the content 16 .
  • the content orientation selection module 58 in FIG. 1A again calculates four dot products, each one associated with a respective direction of a content orientation vector, namely a dot product between the user orientation vector 118 and the content orientation vector 114 in the direction shown in FIG. 8 , and further dot products respectively between the user orientation vector 118 and content orientation vectors at 90° to the right, 180° and 90° to the left relative to the content orientation vector 114 in FIG. 8 .
  • the content orientation selection module 58 determines that the dot product associated with the content orientation vector 114 that is 90° to the left relative to the direction of the content orientation vector 114 shown in FIG. 7 is the most positive of the four dot products.
  • the content size determination module 54 determines an appropriate size for the rendering if the content orientation vector 114 is rotated 90° to the left.
  • FIG. 9 illustrates how the content rendering module 22 creates the rendering 104 based on the user orientation as represented by the user orientation vector 118 .
  • the rendering 104 is rotated 90° counterclockwise so that the content orientation vector 114 is directed 90° to the left when compared to FIG. 8 .
  • the near edge 108 is now located closest to the user 60 .
  • the content size determination module 54 in FIG. 1A has made the rendering 104 smaller than in FIG. 8 due to the available proportions of the surface area 82 . Renderings could snap between positions, smoothly rotate, fade in/fade out as selected by the content creator or by user preference.
  • the user 60 has moved further around the surface area 82 in a direction 120 and has established a new user orientation as represented by a new user orientation vector 122 .
  • the dot product between the user orientation vector 122 and the surface area orientation vector 96 is now positive.
  • the dot product between the user orientation vector 122 and the surface area orientation vector 94 is approximately zero.
  • the dot product between the user orientation vector 122 and the surface area orientation vector 92 is the most negative.
  • the surface area 82 is thus the preferred surface for displaying content.
  • the dot product between the user orientation vector 122 and the content orientation vector 114 as shown in FIG. 10 is approximately zero. If the content orientation vector 114 is rotated 90° clockwise, 180° and 90° counterclockwise, the respective dot products differ in magnitude with the dot product of the content orientation vector 114 that is 90° to the left being the most positive.
  • the rendering 104 should thus be rotated 90° counterclockwise and be resized based on the proportions of the surface area 82 .
  • FIG. 11 illustrates how the rendering 104 is rotated and resized due to the change in the user orientation vector 122 while remaining on the surface area 82 .
  • the user 60 has moved in a direction 124 around the surface area 82 and has established a new user orientation as represented by a new user orientation vector 126 .
  • a dot product of the user orientation vector 126 and the surface area orientation vector 94 is now negative.
  • a dot product between the user orientation vector 126 and the surface area orientation vector 92 is more negative.
  • the surface area 82 is thus the preferred surface area for creating a rendering of the content 16 .
  • a dot product between the user orientation vector 126 and the content orientation vector 114 as shown in FIG. 12 is approximately zero.
  • a dot product between the user orientation vector 126 and the content orientation vector 114 if it is rotated 90° to the left, is positive.
  • the rendering 104 should thus be rotated counterclockwise while remaining on the surface area 82 .
  • FIG. 13 illustrates the placement, orientation and size of the rendering 104 as modified based on the new user orientation vector 126 .
  • FIG. 14 illustrates a new user orientation vector 132 that is established when the user 60 rotates their head in an upward direction 134 .
  • a dot product between the user orientation vector 132 and the surface area orientation vector 92 is approximately zero.
  • a dot product between the user orientation vector 132 and the surface area orientation vector 96 is also approximately zero.
  • a dot product between the user orientation vector 132 and the surface area orientation vector 94 is, or approaches, −1 and is thus the most negative of the three surface-based dot products.
  • the surface area 84 is now the preferred surface area for placement of a rendering of the content 16 .
  • FIG. 15 illustrates a rendering 136 that is displayed to the user 60 on the surface area 84 .
  • the rendering on the surface area 82 is no longer displayed to the user 60 .
  • the near edge 108 is always at the bottom.
  • FIG. 16 illustrates the algorithm for carrying out the method as described above.
  • the three-dimensional space is mapped as described with reference to FIG. 3 .
  • the surface areas are extracted as described with reference to FIG. 4 .
  • the surface vectors are calculated as described with reference to FIG. 4 .
  • a user orientation vector is determined as described with reference to FIGS. 1 to 4 .
  • a respective dot product is calculated between the user orientation vector and each respective surface area orientation vector, as described with reference to FIG. 4 .
  • a preferred surface area is determined as described with reference to FIG. 4 .
  • the size of the content is determined as described with reference to FIG. 5 and FIG. 7 .
  • the content is displayed as described with reference to FIG. 5 and FIG. 7 .
  • a new user orientation vector may be determined at 156 as described with reference to FIGS. 6, 8, 9, 10 and 12 .
  • the process may then be repeated without again calculating the surface area orientation vectors at 154 A, B and C.
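  • Tying the steps of FIG. 16 together, the loop can be sketched as below; the `viewer` object and its method names are placeholders for the modules of FIG. 1A, not an actual API:

```python
def placement_loop(viewer):
    """Map once, extract surfaces once, then repeatedly re-orient content."""
    world_map = viewer.map_environment()            # FIG. 3
    surfaces = viewer.extract_surfaces(world_map)   # FIG. 4
    normals = {s: viewer.surface_vector(s) for s in surfaces}  # FIG. 4
    while viewer.running:
        gaze = viewer.user_orientation()            # FIGS. 1 to 4
        # The most negative dot product identifies the preferred surface.
        preferred = min(surfaces, key=lambda s: gaze.dot(normals[s]))
        size = viewer.determine_content_size(preferred)   # FIGS. 5 and 7
        viewer.display_content(preferred, size)           # FIGS. 5 and 7
        # On later passes the surface vectors are not recalculated (154A-C).
```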
  • in FIGS. 17 and 18 , an embodiment is shown in perspective view and in top view, respectively, with three-dimensional virtual content 180 rendered on a mapped surface 182 within an environment 184 for viewing by a user 60 .
  • the principles described above are used to position the three-dimensional virtual content 180 so that the user 60 can view the content as easily and naturally as possible.
  • the three-dimensional virtual content 180 placed on the mapped surface 182 will be seen by the user from the side.
  • a dot product relationship at or near ⁇ 1 may be more desirable if the three-dimensional virtual content 180 is meant to be viewed from above, as has been described herein with respect to other embodiments.
  • the ideal dot product relationship may be an attribute set by the creator of the three-dimensional virtual content 180 , may be selected as a preference by the user, or may be otherwise determined by the augmented reality viewing system based on the type of content to be displayed.
  • orientation of the three-dimensional virtual content 180 on the mapped surface 182 is determined with respect to the user.
  • three-dimensional virtual content 180 is provided with a content orientation vector 188 that may be used to align the three-dimensional virtual content 180 to a reference vector of the user device.
  • the three-dimensional virtual content 180 is the head of a character with a near edge of the character being where its mouth is. A far edge of the character will typically not be rendered for viewing by the user 60 because the far edge is on a side of the character that the user cannot see.
  • the content orientation vector 188 is aligned parallel with the near edge of the character.
  • the content orientation vector 188 may be used to align the three-dimensional virtual content 180 with the augmented reality viewer 12 such that the dot product between the content orientation vector 188 and the device right vector 64 is at or near 1, indicating that the two vectors are pointing in substantially the same direction.
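  • A minimal sketch of this alignment check, assuming both vectors are unit length:

```python
import numpy as np

def aligned_with_device(content_orientation, device_right, tolerance=0.05):
    """True when the content orientation vector 188 and the device right
    vector 64 point in substantially the same direction, i.e. their dot
    product is at or near 1. The tolerance value is an assumption."""
    return np.dot(content_orientation, device_right) >= 1.0 - tolerance
```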
  • in FIGS. 19 and 20 , examples of three-dimensional content re-orientation based on a user's movement are shown.
  • the user 60 has moved clockwise around the table by a certain distance and angle with respect to FIG. 18 .
  • the dot product relationship between the content orientation vector 188 and the device right vector 64 is less than 1.
  • this change in position may not require re-orientation of three-dimensional virtual content 180 .
  • a content creator, a user, or software within the augmented reality viewer 12 may indicate that re-orientation of three-dimensional virtual content 180 is necessary only when the dot product between the content orientation vector 188 and a device reference vector is less than a predetermined threshold. Large or small threshold tolerances may be set depending on the type of content being displayed.
  • the orientation module may re-render three-dimensional virtual content 180 such that the content orientation vector 188 aligns with the device right vector 64 to result in a dot product equal to or near 1 for the two vectors, as shown in FIG. 20 .
  • re-orientation of three-dimensional virtual content 180 may also allow for re-sizing of the content; however, content may also remain the same size such that it appears only to re-orient about an axis normal to the mapped surface 182 as the user moves within the environment.
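  • The re-orientation behavior of FIGS. 19 and 20 might be sketched as follows; the threshold value is illustrative, and the content is turned about the surface normal so it stays on the mapped surface:

```python
import numpy as np

def reorientation_angle(content_vector, device_right, surface_normal, threshold=0.8):
    """Return the signed angle (radians) to rotate the content about the
    surface normal, or 0.0 when the content orientation vector is still
    within the threshold of the device right vector."""
    if np.dot(content_vector, device_right) >= threshold:
        return 0.0  # still aligned well enough; no re-rendering needed
    # Signed in-plane angle from the content vector to the device right vector.
    sin_term = np.dot(surface_normal, np.cross(content_vector, device_right))
    cos_term = np.dot(content_vector, device_right)
    return np.arctan2(sin_term, cos_term)
```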
  • in FIGS. 21, 22 and 23 , an example is shown of virtual content 196 re-orientation on a vertical surface 198 .
  • in FIG. 21 , a user 60 is shown viewing virtual content 196 on a surface 198 that is oriented vertically in the environment.
  • the virtual content 196 may have at least one of a content right orientation vector 200 and a content upright orientation vector 202 which may be used to measure alignment with respect to the device right vector 64 and the device upright vector 67 , respectively.
  • the alignment between one of the content orientation vectors ( 200 , 202 ) and the corresponding device orientation vectors ( 64 , 67 ) results in a dot product value of approximately 1. As discussed above, dot product values closer to 1 indicate more similar alignment between the two vectors being compared.
  • in FIG. 22 , the alignment between content orientation vectors ( 200 , 202 ) and corresponding device orientation vectors ( 64 , 67 ) may be near zero, indicating a less optimal alignment between the user 60 and the virtual content 196 than the alignment shown in FIG. 21 .
  • virtual content 196 may be re-rendered at a new orientation, as shown in FIG. 23 , such that the dot product relationships are within the predetermined thresholds.
  • re-rendering the virtual content 196 at a new orientation may re-establish optimal dot product relationships between content orientation vectors ( 200 , 202 ) and corresponding device orientation vectors ( 64 , 67 ).
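  • For the vertical-surface case, both vector pairs can be tested together; a minimal sketch with an assumed shared threshold:

```python
import numpy as np

def vertical_alignment_ok(content_right, content_upright,
                          device_right, device_upright, threshold=0.9):
    """True when the content right vector 200 and content upright vector 202
    are each within the predetermined threshold of their corresponding
    device vectors 64 and 67 (dot products near 1)."""
    return (np.dot(content_right, device_right) >= threshold and
            np.dot(content_upright, device_upright) >= threshold)
```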
  • FIG. 24 shows a diagrammatic representation of a machine in the exemplary form of a computer system 900 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
  • the machine operates as a standalone device or may be connected (e.g., networked) to other machines.
  • the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • the exemplary computer system 900 includes a processor 902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 904 (e.g., read only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), and a static memory 906 (e.g., flash memory, static random access memory (SRAM), etc.), which communicate with each other via a bus 908 .
  • a processor 902 e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both
  • main memory 904 e.g., read only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.
  • static memory 906 e.g., flash memory, static random access memory (SRAM), etc.
  • the computer system 900 may further include a disk drive unit 916 , and a network interface device 920 .
  • the disk drive unit 916 includes a machine-readable medium 922 on which is stored one or more sets of instructions 924 (e.g., software) embodying any one or more of the methodologies or functions described herein.
  • the software may also reside, completely or at least partially, within the main memory 904 and/or within the processor 902 during execution thereof by the computer system 900 , the main memory 904 and the processor 902 also constituting machine-readable media.
  • the software may further be transmitted or received over a network 928 via the network interface device 920 .
  • while the machine-readable medium 922 is shown in an exemplary embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention.
  • the term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.

Abstract

An augmented reality viewer is described. A user orientation determination module determines a user orientation vector. A content vector calculator calculates a content orientation vector relative to a near edge and a far edge of content, determines a dot product of the user orientation vector and the content orientation vector, and positions the content based on a magnitude of the dot product. A surface area vector calculator calculates a surface area orientation vector for each of a plurality of surface areas. A surface selection module determines a dot product of the user orientation vector and each surface area orientation vector and selects a preferred surface based on the relative magnitudes of the dot products.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 16/435,933, filed on Jun. 10, 2019, which claims priority from U.S. Provisional Patent Application No. 62/682,788, filed on Jun. 8, 2018, each of which is incorporated herein by reference in its entirety.
  • BACKGROUND OF THE INVENTION
  • 1). Field of the Invention
  • This invention relates to an augmented reality viewer and to an augmented reality viewing method.
  • 2). Discussion of Related Art
  • Modern computing and display technologies have facilitated development of “augmented reality” viewers. An augmented reality viewer is a wearable device that presents the user with two images, one for the left eye and one for the right eye. Objects in the images for each eye are rendered with slightly different viewpoints that allow the brain to process the objects as three-dimensional objects. When the images constantly change viewpoints as the viewer moves, movement around synthetic three-dimensional content can be simulated.
  • An augmented reality viewer usually includes technology that allows the presentation of digital or virtual image information as an augmentation to visualization of the actual world around the user. In one implementation, the virtual image information is presented in a static location relative to the augmented reality viewer so that, if the user moves their head, and the augmented reality viewer with it, the user is presented with an image that remains stationary in front of them while real world objects shift in their view. This gives the user the appearance that the virtual image information is not fixed relative to the real world objects, but instead is fixed in the viewer's point of view. In other implementations, technologies exist to keep the virtual image information in a stationary position relative to the real world objects when the user moves their head. In the latter scenario, the user may be given some control over the initial placement of the virtual image information relative to the real world objects.
  • SUMMARY OF THE INVENTION
  • The invention provides an augmented reality viewer including a display that permits a user to see real world objects, a data channel to hold content, a user orientation determination module to determine a first user orientation of a user relative to a first display area and to determine a second user orientation of the user relative to the first display area, a projector connected to the data channel to display the content through the display to the user within confines of the first display area while the user views the real world objects and a content orientation selection module connected to the surface extraction module and the user orientation determination module to display the content in a first content orientation relative to the first display area so that a near edge of the content is close to the user when the user is in the first user orientation, and display the content in a second content orientation relative to the first display area so that the near edge is rotated closer to the user when the user is in the second user orientation and the content is rotated relative to the first display area from the first content orientation to the second content orientation.
  • The invention further provides an augmented reality viewing method comprising determining, by a processor, a first user orientation of a user relative to a first display area, determining, by the processor, a first content orientation relative to the display when the user is in the first user orientation, displaying, by the processor, content in the first content orientation through a display to the user within confines of the first display area while the user views real world objects through the display while in the first user orientation, determining, by the processor, a second user orientation of the user relative to the first display area, determining, by the processor, a second content orientation relative to the display when the user is in the second user orientation and displaying, by the processor, content in the second content orientation through the display to the user within confines of the first display area while the user views real world objects through the display while in the second user orientation, wherein the content is rotated relative to the first display area from the first content orientation to the second content orientation.
  • The invention also provides an augmented reality viewer including a display that permits a user to see real world objects, a data channel to hold content, a surface area extraction module to determine a first surface area and a second surface area, a user orientation determination module to determine a first orientation of a user relative to the first surface area and the second surface area, a surface area selection module to select a preferred surface area between the first surface area and the second surface area based on a normal to the respective surface area being directed more opposite to the first orientation of the user and a projector that displays the content through the display to the user within confines of the preferred surface area while the user views the real world objects.
  • The invention further provides an augmented reality viewing method including determining, by a processor, a first surface area and a second surface area, determining, by the processor, a first orientation of a user relative to the first surface area and the second surface area, selecting, by the processor, a preferred surface area between the first surface area and the second surface area based on a normal to the respective surface area being directed more towards the first location of the user and displaying, by the processor, content through a display to the user within confines of the preferred surface area while the user views real world objects through the display from the first location.
  • The invention also provides an augmented reality viewer including an environmental calculation unit to determine a first vector indicative of an orientation of a user, a vector calculator to calculate a second vector, a selection module to calculate a dot product of the first vector and the second vector, a data channel to hold content, a content rendering module to determine placement of the content based on the dot product, a display that permits the user to see real world objects and a projector that displays the content through the display to the user while the user views the real world objects through the display, the content being displayed based on the placement determined by the content rendering module.
  • The invention further provides an augmented reality viewing method including determining, by a processor, a first vector indicative of an orientation of a user, calculating, by the processor, a second vector, calculating, by the processor, a dot product of the first vector and the second vector, determining, by the processor, placement of content based on the dot product and displaying, by the processor, the content through a display to the user while the user views real world objects through the display, the content being displayed based on the placement determined by the content rendering module.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention is further described by way of example with reference to the accompanying drawings, wherein:
  • FIG. 1A is a block diagram of an augmented reality viewer that is used by a user to see real world objects augmented with content from a computer;
  • FIG. 1B is a perspective view of the augmented reality viewer;
  • FIG. 2 is a perspective view illustrating a user wearing the augmented reality viewer in a three-dimensional environment while viewing two-dimensional content;
  • FIG. 3 is a perspective view illustrating a three-dimensional data map that is created with the augmented reality viewer;
  • FIG. 4 is a perspective view illustrating the determination of a user orientation vector, the extraction of surface areas and the calculation of surface area orientation vectors;
  • FIG. 5 is a view similar to FIG. 4 illustrating placement of a rendering of content on one of the surface areas;
  • FIG. 6 is a view similar to FIG. 5 illustrating a change in the user orientation vector;
  • FIG. 7 is a view similar to FIG. 6 illustrating placement of a rendering of the content due to the change in the user orientation vector;
  • FIG. 8 is a view similar to FIG. 7 illustrating a change in the user orientation vector due to movement of the user;
  • FIG. 9 is a view similar to FIG. 8 illustrating rotation of the rendering of the content due to the change in the user orientation vector;
  • FIG. 10 is a view similar to FIG. 9 illustrating a change in the user orientation vector due to movement of the user;
  • FIG. 11 is a view similar to FIG. 10 illustrating rotation of the rendering of the content due to the change in the user orientation vector;
  • FIG. 12 is a view similar to FIG. 11 illustrating a change in the user orientation vector due to movement of the user;
  • FIG. 13 is a view similar to FIG. 12 illustrating rotation of the rendering of the content due to the change in the user orientation vector;
  • FIG. 14 is a view similar to FIG. 13 illustrating a change in the user orientation vector due to the user looking up;
  • FIG. 15 is a view similar to FIG. 14 illustrating the placement of a rendering of the content on another surface area due to the change in the user orientation vector;
  • FIG. 16 is a flow chart illustrating the functioning of an algorithm to carry out the method of the preceding figures;
  • FIG. 17 is a perspective view illustrating a user wearing the augmented reality viewer in a three-dimensional environment while viewing three-dimensional content;
  • FIG. 18 is a top plan view of FIG. 17;
  • FIG. 19 is a view similar to FIG. 18 wherein the user has rotated in a clockwise direction around a display surface;
  • FIG. 20 is a view similar to FIG. 19 wherein the content has rotated in a clockwise direction;
  • FIG. 21 is a perspective view illustrating a user while viewing content on a vertical surface;
  • FIG. 22 is a view similar to FIG. 21 wherein the user has rotated in a counter-clockwise direction;
  • FIG. 23 is a view similar to FIG. 22 wherein the content has rotated in a counter-clockwise direction; and
  • FIG. 24 is a block diagram of a machine in the form of a computer that can find application in the present invention system, in accordance with one embodiment of the invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The terms “surface” and “surface area” are used herein to describe two-dimensional areas that are suitable for use as display areas. Aspects of the invention may find application when other display areas are used, for example a display area that is a three-dimensional surface area or a display area representing a slice within a three-dimensional volume.
  • FIG. 1A of the accompanying drawings illustrates an augmented reality viewer 12 that a user uses to see a direct view of a real world scene, including real world surfaces and real world objects 14, that is augmented with content 16 of the kind that is stored on, received by, or otherwise generated by a computer or computer network.
  • The augmented reality viewer 12 includes a display 18, a data channel 20, a content rendering module 22, a projector 24, a depth sensor 28, a position sensor such as an accelerometer 30, a camera 32, an environmental calculation unit 34, and a content placement and content orientation unit 36.
  • The data channel 20 may be connected to a storage device that holds the content 16 or may be connected to a service that provides the content 16 in real time. The content 16 may for example be static images such as photographs, images that remain static for a period of time and can be manipulated by a user such as web pages, text documents or other data that is displayed on a computer display, or moving images such as videos or animations. The content 16 may be two-dimensional, three-dimensional, static, dynamic, text, image, video, etc. The content 16 may include games, books, movies, video clips, advertisements, avatars, drawings, applications, web pages, decorations, sports games, replays, 3-D models or any other type of content as will be appreciated by one of skill in the art.
  • The content rendering module 22 is connected to the data channel 20 to receive the content 16 from the data channel 20. The content rendering module 22 converts the content 16 into a form that is suitable for three-dimensional viewing. Various techniques exist for viewing two-dimensional planes in three-dimensional space, depending on the orientation of the user, and for viewing three-dimensional volumes in three dimensions.
  • The projector 24 is connected to the content rendering module 22. The projector 24 converts data generated by the content rendering module 22 into light and delivers the light to the display 18. The light travels from the display 18 to the eyes 26 of the user. Various techniques exist for providing the user with a three-dimensional experience. Each eye is provided with a different image and objects in the images are perceived by the user as being constructed in three dimensions. Techniques also exist for the user to focus on the objects at a focal depth that is not necessarily in the plane of the display 18 and is typically at some distance behind the display 18. One way that virtual content can be made to appear to be at a certain depth is by causing light rays to diverge and form a curved wavefront in a way that mimics how light from real physical objects reaches an eye. The eye then focuses the diverging light beams onto the retina by changing the shape of the anatomical lens in a process called accommodation. Different divergence angles represent different depths and are created using diffraction gratings on the exit pupil expander on the waveguides.
  • The display 18 is a transparent display. The display 18 allows the user to see the real world objects 14 through the display 18. The user thus perceives an augmented reality view 40 wherein the real world objects 14 that the user sees in three dimensions are augmented with a three-dimensional image that is provided to the user from the projector 24 via the display 18.
  • The depth sensor 28 and the camera 32 are mounted in a position to capture the real world objects 14. The depth sensor 28 typically detects electromagnetic waves in the infrared range and the camera 32 detects electromagnetic waves in the visible light spectrum. As more clearly shown in FIG. 1B, more than one camera 32 may be mounted on a frame 13 of the augmented reality viewer 12 in a world-facing position. In the particular embodiment, four cameras 32 are mounted to the frame 13 with two in a forward world-facing position and two in a left and right side or oblique world-facing position. The fields of view of the multiple cameras 32 may overlap. The depth sensor 28 and the cameras 32 are mounted in a static position relative to a frame 13 of the augmented reality viewer 12. Center points of images that are captured by the depth sensor 28 and the camera 32 are always in the same, forward direction relative to the augmented reality viewer 12.
  • The accelerometer 30 is mounted in a stationary position to the frame of the augmented reality viewer 12. The accelerometer 30 detects the direction of gravitation force. The accelerometer 30 can be used to determine the orientation of the augmented reality viewer with respect to the Earth's gravitational field. The combination of the depth sensor 28, the accelerometer 30, and a head pose algorithm that relies on visual simultaneous localization and mapping (“SLAM”) and inertial measurement unit (“IMU”) input permits the augmented reality viewer 12 to establish the locations of the real world objects 14 relative to the direction of gravitation force and relative to the augmented reality viewer 12.
  • The camera 32 captures images of the real world objects 14 and further processing of the images on a continual basis provides data that indicates movement of the augmented reality viewer 12 relative to the real world objects 14. Because the depth sensor 28, world cameras 32, and the accelerometer 30 determine the locations of the real world objects 14 relative to gravitation force on a continual basis, the movement of the augmented reality viewer 12 relative to gravitation force and a mapped real world environment can also be calculated.
  • In FIG. 1A, the environmental calculation unit 34 includes an environment mapping module 44, a surface extraction module 46 and a viewer orientation determination module 48. The environment mapping module 44 may receive input from one or more sensors. The one or more sensors may include, for example, the depth sensor 28, one or more world cameras 32, and the accelerometer 30 to determine the locations of the real world surfaces and objects 14. The surface extraction module 46 may receive data from the environment mapping module 44 and determines planar surfaces in the environment. The viewer orientation determination module 48 is connected to and receives input from the depth sensor 28, the cameras 32, and the accelerometer 30 to determine a user orientation of the user relative to the real world objects 14 and the surfaces that are identified by the surface extraction module 46.
  • The content placement and content orientation unit 36 includes a surface vector calculator 50, a surface selection module 52, a content size determination module 54, a content vector calculator 56 and a content orientation selection module 58. The surface vector calculator 50, the surface selection module 52 and the content size determination module 54 may be sequentially connected to one another. The surface selection module 52 is connected to and provides input to the viewer orientation determination module 48. The content vector calculator 56 is connected to the data channel 20 so as to be able to receive the content 16. The content orientation selection module 58 is connected to and receives input from the content vector calculator 56 and the viewer orientation determination module 48. The content size determination module 54 is connected to and provides input to the content orientation selection module 58. The content rendering module 22 is connected to and receives input from the content size determination module 54.
  • FIG. 2 illustrates a user 60 who is wearing the augmented reality viewer 12 within a three-dimensional environment.
  • A vector 62 signifies a direction of gravitation force as detected by one or more sensors on the augmented reality viewer 12. A vector 64 signifies a direction to the right from a perspective of the user 60. A user orientation vector 66 signifies a user orientation, in the present example a forward direction in the middle of a view of the user 60. The user orientation vector 66 also points in the direction of the center points of the images captured by the depth sensor 28 and camera 32 in FIG. 1. FIG. 1B shows a further coordinate system 63 that includes the vector 64 to the right, the user orientation vector 66 and a device upright vector 67, which are orthogonal to one another.
  • The three-dimensional environment, by way of illustration, includes a table 68 with a horizontal surface 70, surfaces 72 and 74, and objects 76 that provide obstructions that may make the surfaces 72 and 74 unsuitable for placement of content. For example, objects 76 that disrupt continuous surfaces 72 and 74 may include picture frames, mirrors, cracks in a wall, rough texture, a different colored area, a hole in the surface, a protrusion of the surface, or any other non-uniformity with respect to the planar surfaces 72, 74. In contrast, the surfaces 78 and 80 may be more suitable for placement of content because of their relatively large size and their proximity to the user 60. Depending on the type of content being displayed, it may also be advantageous to find a surface having rectangular dimensions, although other shapes such as squares, triangles, circles, ovals, or polygons may also be used.
  • FIG. 3 illustrates the functioning of the depth sensor 28, accelerometer 30 and environment mapping module 44 in FIG. 1. The depth sensor 28 captures the depth of all features, including objects and surfaces in the three-dimensional environment. The environment mapping module 44 receives data, directly or indirectly, from one or more sensors on the augmented reality viewer 12. For example, the depth sensor 28 and the accelerometer 30 may provide input to the environment mapping module 44 for mapping the depth of the three-dimensional environment in three dimensions.
  • FIG. 3 also illustrates the functioning of the camera 32 and the viewer orientation determination module 48. The camera 32 captures an image of the objects 76 and surfaces 78. The viewer orientation determination module 48 receives an image from the camera 32 and processes the image to determine the orientation of the augmented reality viewer 12 worn by the user 60, as represented by the user orientation vector 66.
  • Other methods of mapping a three-dimensional environment may be employed, for example using one or more cameras that are located in a stationary position within a room. However, the integration of the depth sensor 28 and the environment mapping module 44 within the augmented reality viewer 12 provides for a more mobile application.
  • FIG. 4 illustrates the functioning of the surface extraction module 46 in FIG. 1. The surface extraction module 46 processes the three-dimensional map that is created in FIG. 3 to determine whether there are any surfaces that are suitable for placement and viewing of content, in the present example two-dimensional content. The surface extraction module 46 determines a horizontal surface area 82 and two vertical surface areas 84 and 86. The surface areas 82, 84 and 86 are not real surfaces, but instead electronically represent two-dimensional planar surfaces oriented in a three-dimensional environment. The surface areas 82, 84 and 86, which are data representations, correspond respectively to the real surfaces 70, 78 and 80 in FIG. 2 forming part of the real world objects 14 in FIG. 1.
  • FIG. 4 illustrates a cube 88 and a shadow 90 of the cube 88. These elements are included in the figures to assist the reader in tracking changes in the user orientation vector 66 and movement of the user 60 and the augmented reality viewer 12 in FIG. 2 through the three-dimensional space.
  • FIG. 4 also illustrates the functioning of the surface vector calculator 50 in FIG. 1. The surface vector calculator 50 calculates a surface area orientation vector for each extracted surface of the mapped three-dimensional environment. For example, the surface vector calculator 50 calculates a surface area orientation vector 92 that is normal to a plane of the surface area 82. Similarly, the surface vector calculator 50 calculates a surface area orientation vector 94 that is normal to the surface area 84 and a surface area orientation vector 96 that is normal to the surface area 86.
  • Selection of a surface on which to display virtual content is done by a surface selection module 52 that calculates a relationship between the surface and the user. The surface selection module 52 in FIG. 1A calculates a dot product of the user orientation vector 66 and the surface area orientation vector 92. The dot product of unit vectors a and b is represented by the following equation:

  • a · b = |a| |b| cos θ  [1]
  • where |a| = 1, |b| = 1, and θ is the angle between the unit vectors a and b.
  • The user orientation vector 66 and the surface area orientation vector 92 are orthogonal to one another, which means their dot product is zero.
  • The surface selection module 52 also calculates a dot product of the user orientation vector 66 and the surface area orientation vector 94. Because the user orientation vector 66 and the surface area orientation vector 94 are orthogonal their dot product is zero.
  • The surface selection module 52 also calculates a dot product of the user orientation vector 66 and the surface area orientation vector 96. Because the user orientation vector 66 and the surface area orientation vector 96 are at 180° relative to one another, their dot product is −1. Because the dot product that includes the surface area orientation vector 96 is the most negative of the three dot products, the surface selection module 52 determines that the surface area 86 is the preferred surface area among the surface areas 82, 84 and 86 for displaying content. The more negative the dot product, the more likely it is that the content will be oriented to face the viewer directly. Because the surface area 86 is a vertical surface area, the content placement and content orientation unit 36 does not invoke the content orientation selection module 58 in FIG. 1.
  • The dot product is one of many surface characteristics that can be prioritized by the system, or by the needs of the virtual content, when choosing the best surface. For example, if the surface that has a dot product of −1.0 is tiny and far away from the user, it may not be preferable over a surface that has a dot product of −0.8 but is large and near to the user. The system may also favor a surface that has good contrast ratio properties when placing content, so that the content is easier for the user to see. A sketch of the basic selection calculation appears after the sizing discussion below.
  • Next, the content size determination module 54 determines an appropriate size of content to display on the surface area 86. The content has an optimal aspect ratio, for example 16 units along a near edge to 9 units along a side edge. The content size determination module 54 uses the ratio of the near edge to the side edge to determine the size and shape of the content, preserving this aspect ratio at all viewing angles so as not to distort the content. The content size determination module 54 calculates the optimal height and width of the content with the optimal aspect ratio that will fit within the surface area 86. In the given example, the distance between the left and right edges of the surface area 86 determines the size of the content.
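  • The selection just described reduces to comparing dot products against surface normals. The following Python fragment is a minimal sketch of that comparison (the function and field names are hypothetical; the patent discloses no source code):

```python
import numpy as np

def select_preferred_surface(user_orientation, surfaces):
    """Pick the surface whose unit normal points most directly back at
    the user, i.e. whose dot product with the unit user orientation
    vector is the most negative (closest to -1)."""
    user_orientation = user_orientation / np.linalg.norm(user_orientation)
    best_surface, best_dot = None, float("inf")
    for surface in surfaces:
        normal = surface["normal"] / np.linalg.norm(surface["normal"])
        dot = float(np.dot(user_orientation, normal))
        if dot < best_dot:
            best_surface, best_dot = surface, dot
    return best_surface, best_dot

# Configuration analogous to FIG. 4: the user looks along +x; surface 86
# faces the user (normal -x); surfaces 82 and 84 have orthogonal normals.
surfaces = [
    {"id": 82, "normal": np.array([0.0, 0.0, 1.0])},   # horizontal, normal up
    {"id": 84, "normal": np.array([0.0, 1.0, 0.0])},   # vertical, normal sideways
    {"id": 86, "normal": np.array([-1.0, 0.0, 0.0])},  # vertical, facing the user
]
surface, dot = select_preferred_surface(np.array([1.0, 0.0, 0.0]), surfaces)
# surface["id"] == 86, dot == -1.0
```

  • In a fuller implementation the raw dot product would be only one term in the score; surface size, distance and contrast, as noted above, could be folded in as additional weighted terms.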
  • FIG. 5 illustrates the functioning of the content rendering module 22 and the projector 24 in FIG. 1. The content rendering module 22 provides the content 16 in its calculated orientation to the projector 24 based on the size determination of the content size determination module 54 and the surface selection of the surface selection module 52. The viewer views the content 16 as a rendering 98 that is placed in three-dimensional space on, and coplanar with, the surface area 86. The content 16 is not rendered on the surface areas 82 and 84. All other surface characteristics being equal, the surface area 86 provides an optimal area for the rendering 98 when compared to the surface areas 82 and 84 because of the user orientation as represented by the user orientation vector 66. The rendering 98 remains static on the surface area 86 when the user orientation vector changes by a small degree. If the viewer orientation determination module 48 in FIG. 1A senses that the user orientation vector changes by more than a predetermined threshold degree, for example five degrees, the system automatically recalculates all dot products as described above and, if necessary, repositions and resizes the content that is being rendered for display to the user. Alternatively, the system may routinely, e.g., every 15 seconds, recalculate all dot products and place content as described above.
  • Alternatively, the user may select the area 86 for the content to remain even when they change their orientation.
  • In FIG. 6, the user 60 changes the inclination of their head. As a result, the user orientation vector 66 rotates in a downward direction 100. A new user orientation is represented by a new user orientation vector 102. The cameras 32 in FIGS. 1A and 1B continually capture images of the real world objects 14. Additional sensors such as the depth sensor 28 and the accelerometer 30 may also continually capture and provide updated information. The viewer orientation determination module 48 processes the images, along with other data captured by sensors on board the augmented reality viewer 12, to determine relative movement of the real world objects 14 within a view of the camera 32. The viewer orientation determination module 48 then processes such movement to determine the change of the user orientation vector from the user orientation vector 66 in FIG. 5 to the user orientation vector 102 in FIG. 6. The system normally selects the surface with the most optimal dot product, although some tolerance or range may be allowed for the dot product so that jitter and unnecessary processing are reduced. By way of example, the system may move the content only when there is another dot product that is more optimal and that dot product is at least 5 percent better than the dot product of the surface where the content is currently displayed.
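  • The 5 percent tolerance described above is, in effect, a hysteresis band. A minimal sketch of one way to implement it follows (the exact comparison rule is an assumption; the patent does not define it precisely):

```python
def should_move_content(current_dot, candidate_dot, tolerance=0.05):
    """Return True only when the candidate surface's dot product is at
    least `tolerance` (here 5 percent) better than the current one.

    "Better" means more negative (closer to -1), since the preferred
    surface normal points back toward the user; the dead band keeps
    content from flickering between two nearly equivalent surfaces."""
    improvement_needed = tolerance * abs(current_dot)
    return candidate_dot < current_dot - improvement_needed
```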
  • Assuming that the user did not select the surface 86 for the content to remain after they change their orientation, the surface selection module 52 again calculates three dot products, namely between the user orientation vector 102 and the surface area orientation vector 92, the user orientation vector 102 and the surface area orientation vector 94, and the user orientation vector 102 and the surface area orientation vector 96. The surface selection module 52 then determines which one of the three dot products is the most negative. In the present example, the dot product between the user orientation vector 102 and the surface area orientation vector 92 is the most negative. The surface selection module 52 determines that the surface area 82 is the preferred surface because its associated dot product is more negative than those for the surface areas 84 and 86. The system may also consider other factors as described above.
  • The content placement and content orientation unit 36 in FIG. 1A invokes the content vector calculator 56 and the content orientation selection module 58. Following operation of the content orientation selection module 58, the content size determination module 54 is again invoked.
  • The functioning of the content vector calculator 56, content orientation selection module 58 and content size determination module 54 are better illustrated with the assistance of FIG. 7.
  • FIG. 7 illustrates that the content rendering module 22 and projector 24 create a rendering 104 of the content 16 within and coplanar with the surface area 82. The rendering on the surface area 86 is no longer displayed to the user 60.
  • The rendering 104 has a far edge 106, a near edge 108, a right edge 110 and a left edge 112. The content vector calculator 56 in FIG. 1A may calculate a content orientation vector 114. The content orientation vector 114 extends from the near edge 108 to the far edge 106 and is orthogonal to both the near edge 108 and the far edge 106.
  • The calculations that are made by the content vector calculator 56 depend on the content that is provided on the data channel 20. Some content may already have a content orientation vector that extends from the near edge to the far edge of the content, in which case the content vector calculator 56 simply identifies and isolates the content orientation vector within the code of the content. In other instances, a content orientation vector may be associated with the content and the content vector calculator 56 may have to re-orient the content orientation vector to extend from the near edge to the far edge of the content. In still other instances, no content orientation vector is provided and the content vector calculator 56 may generate a content orientation vector based on other data, such as image analysis, the placement of tools in the content, etc.
  • The content orientation selection module 58 calculates a dot product between the user orientation vector 102 and the content orientation vector 114. The dot product is calculated for four scenarios, namely when the content orientation vector 114 is oriented in the direction shown in FIG. 7, when the content orientation vector 114 is oriented 90° to the right, when the content orientation vector 114 is oriented 180°, and when the content orientation vector 114 is oriented 90° to the left. The content orientation selection module 58 then selects the most positive among the four dot products and places the rendering 104 so that the content orientation vector 114 is aligned in the direction with the most positive associated dot product, consistent with the orientation selections described with reference to the later figures. The near edge 108 is then located closer to the user 60 than the far edge 106 and the right and left edges 112 and 110 are located to the right and to the left from the orientation of the user 60 as depicted by the user orientation vector 102. The content 16 is thus oriented in a manner that is easily viewable by the user 60. For example, a photograph of a head and torso of a person is displayed with the head farthest from the user 60 and the torso closest to the user 60, and a text document is displayed with the first lines farthest from the user 60 and the last lines closest to the user 60.
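  • A minimal sketch of this four-way comparison is shown below (names are hypothetical, and the use of Rodrigues' rotation formula to turn the content orientation vector within the surface plane is an implementation assumption, not the patent's disclosure):

```python
import numpy as np

def select_content_rotation(user_orientation, content_vector, surface_normal):
    """Try the content orientation vector at 0, 90, 180 and 270 degrees
    about the surface normal and return the angle whose dot product with
    the user orientation vector is the most positive, so that the near
    edge of the content lands closest to the user."""
    user_orientation = user_orientation / np.linalg.norm(user_orientation)
    n = surface_normal / np.linalg.norm(surface_normal)
    best_angle, best_dot = 0.0, -np.inf
    for angle in (0.0, 90.0, 180.0, 270.0):
        theta = np.radians(angle)
        # Rodrigues' rotation of the content vector about the normal.
        v = (content_vector * np.cos(theta)
             + np.cross(n, content_vector) * np.sin(theta)
             + n * np.dot(n, content_vector) * (1.0 - np.cos(theta)))
        dot = float(np.dot(user_orientation, v))
        if dot > best_dot:
            best_angle, best_dot = angle, dot
    return best_angle
```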
  • The content size determination module 54 has determined an appropriate size for the rendering 104 with the right edge 110 and the left edge 112 defining the width of the rendering 104 within the surface area 82 and a distance between the far edge 106 and the near edge 108 being determined by the desired aspect ratio.
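  • Sizing with a fixed aspect ratio is then a simple fitting problem. A sketch follows (the function name is hypothetical) of fitting the largest 16:9 rectangle into the available surface extent:

```python
def fit_content(surface_width, surface_height, aspect=16.0 / 9.0):
    """Return the largest (width, height) with the given near-edge to
    side-edge aspect ratio that fits the usable surface extent, so the
    content is resized without ever being distorted."""
    if surface_width / surface_height >= aspect:
        # The surface is relatively wide: height is the limiting dimension.
        height = surface_height
        width = height * aspect
    else:
        # The surface is relatively tall: width is the limiting dimension.
        width = surface_width
        height = width / aspect
    return width, height

# Example: a 2.0 m wide, 0.9 m deep region yields roughly a 1.6 m x 0.9 m rendering.
print(fit_content(2.0, 0.9))  # approximately (1.6, 0.9)
```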
  • In FIG. 8, the user 60 has moved in a direction 116 counterclockwise around the surface area 82. The user 60 has also rotated their body counterclockwise by 90°. The user 60 has now established a new orientation as represented by a new user orientation vector 118. The user's head is still inclined downward toward the surface area 82 and the surface areas 84 and 86 are now located behind and to the right of the user 60, respectively.
  • The surface selection module 52 again calculates a dot product associated with each one of the surface area orientation vectors 92, 94 and 96. The dot product of the user orientation vector 118 and the surface area orientation vector 94 has now become positive. The dot product between the user orientation vector 118 and the surface area orientation vector 96 is approximately zero. The dot product between the user orientation vector 118 and the surface area orientation vector 92 is the most negative. The surface selection module 52 in FIG. 1A selects the surface area 82 associated with the surface area orientation vector 92 as the preferred surface for positioning of a rendering of the content 16.
  • The content orientation selection module 58 in FIG. 1A again calculates four dot products, each one associated with a respective direction of a content orientation vector, namely a dot product between the user orientation vector 118 and the content orientation vector 114 in the direction shown in FIG. 8, and further dot products respectively between the user orientation vector 118 and content orientation vectors at 90° to the right, 180° and 90° to the left relative to the content orientation vector 114 in FIG. 8. The content orientation selection module 58 determines that the dot product associated with the content orientation vector 114 that is 90° to the left relative to the direction of the content orientation vector 114 shown in FIG. 7 is the most positive of the four dot products.
  • The content size determination module 54 then determines an appropriate size for the rendering if the content orientation vector 114 is rotated 90° to the left.
  • FIG. 9 illustrates how the content rendering module 22 creates the rendering 104 based on the user orientation as represented by the user orientation vector 118. The rendering 104 is rotated 90° counterclockwise so that the content orientation vector 114 is directed 90° to the left when compared to FIG. 8. The near edge 108 is now located closest to the user 60. The content size determination module 54 in FIG. 1A has made the rendering 104 smaller than in FIG. 8 due to the available proportions of the surface area 82. Renderings could snap between positions, smoothly rotate, fade in/fade out as selected by the content creator or by user preference.
  • In FIG. 10, the user 60 has moved further around the surface area 82 in a direction 120 and has established a new user orientation as represented by a new user orientation vector 122. The dot product between the user orientation vector 122 and the surface area orientation vector 96 is now positive. The dot product between the user orientation vector 122 and the surface area orientation vector 94 is approximately zero. The dot product between the user orientation vector 122 and the surface area orientation vector 92 is the most negative. The surface area 82 is thus the preferred surface for displaying content.
  • The dot product between the user orientation vector 122 and the content orientation vector 114 as shown in FIG. 10 is approximately zero. If the content orientation vector 114 is rotated 90° clockwise, 180° and 90° counterclockwise, the respective dot products differ in magnitude with the dot product of the content orientation vector 114 that is 90° to the left being the most positive. The rendering 104 should thus be rotated 90° counterclockwise and be resized based on the proportions of the surface area 82. FIG. 11 illustrates how the rendering 104 is rotated and resized due to the change in the user orientation vector 122 while remaining on the surface area 82.
  • In FIG. 12, the user 60 has moved in a direction 124 around the surface area 82 and has established a new user orientation as represented by a new user orientation vector 126. A dot product of the user orientation vector 126 and the surface area orientation vector 94 is now negative. However, a dot product between the user orientation vector 126 and the surface area orientation vector 92 is more negative. The surface area 82 is thus the preferred surface area for creating a rendering of the content 16.
  • A dot product between the user orientation vector 126 and the content orientation vector 114 as shown in FIG. 12 is approximately zero. A dot product between the user orientation vector 126 and the content orientation vector 114, if it is rotated 90° to the left, is positive. The rendering 104 should thus be rotated counterclockwise while remaining on the surface area 82. FIG. 13 illustrates the placement, orientation and size of the rendering 104 as modified based on the new user orientation vector 126.
  • FIG. 14 illustrates a new user orientation vector 132 that is established when the user 60 rotates their head in an upward direction 134. A dot product between the user orientation vector 132 and the surface area orientation vector 92 is approximately zero. A dot product between the user orientation vector 132 and the surface area orientation vector 96 is also approximately zero. A dot product between the user orientation vector 132 and the surface area orientation vector 94 is, or approaches, −1 and is thus the most negative of the three surface-based dot products. The surface area 84 is now the preferred surface area for placement of a rendering of the content 16. FIG. 15 illustrates a rendering 136 that is displayed to the user 60 on the surface area 84. The rendering on the surface area 82 is no longer displayed to the user 60. On vertical surface areas such as the surface area 84 and the surface area 86, the near edge 108 is always at the bottom.
  • FIG. 16 illustrates the algorithm for carrying out the method as described above. At 150, the three-dimensional space is mapped as described with reference to FIG. 3. At 152A, B and C, the surface areas are extracted as described with reference to FIG. 4. At 154A, B and C, the surface vectors are calculated as described with reference to FIG. 4. At 156, a user orientation vector is determined as described with reference to FIGS. 1 to 4. At 158A, B and C, a respective dot product is calculated between the user orientation vector and each respective surface area orientation vector, as described with reference to FIG. 4. At 160, a preferred surface area is determined as described with reference to FIG. 4.
  • At 162, a determination is made whether the preferred surface area is vertical. If the preferred surface area is not vertical then, at 164, a direction of a content orientation vector relative to the far, near, right and left edges of the content is determined as described with reference to FIG. 7. Following 164, at 166A, B, C and D, content vectors are calculated at 0°, 90° right, 180° and 90° left as described with reference to FIG. 7. At 168A, B, C and D, a dot product is calculated between the user orientation vector and the content orientation vectors calculated at 166A, B, C and D, respectively. At 170, a content orientation is selected as described with reference to FIG. 7.
  • At 172, the size of the content is determined as described with reference to FIG. 5 and FIG. 7. At 174, the content is displayed as described with reference to FIG. 5 and FIG. 7.
  • Following 174, a new user orientation vector may be determined at 156 as described with reference to FIGS. 6, 8, 9, 10 and 12. The process may then be repeated without again calculating the surface area orientation vectors at 154A, B and C.
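  • Tying steps 150 through 174 together, one pass of the FIG. 16 flow might be organized as below. This is a sketch only: it reuses the helper functions sketched earlier and assumes a hypothetical `viewer` interface and surface fields (`is_vertical`, `width`, `height`, `normal`) that the patent does not name.

```python
def placement_loop(viewer, surfaces, content):
    """Repeatedly place content as the user orientation vector changes.
    Surface orientation vectors (steps 152/154) are computed once per
    mapping and are not recalculated on subsequent passes."""
    while True:
        user_vec = viewer.user_orientation_vector()                # step 156
        surface, _ = select_preferred_surface(user_vec, surfaces)  # 158-160
        if surface["is_vertical"]:                                 # step 162
            angle = 0.0  # near edge is always at the bottom
        else:
            angle = select_content_rotation(                       # 164-170
                user_vec, content.orientation_vector, surface["normal"])
        size = fit_content(surface["width"], surface["height"])    # step 172
        viewer.render(content, surface, angle, size)               # step 174
```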
  • Referring to FIGS. 17 and 18, an embodiment is shown in perspective view and in top view, respectively, with three-dimensional virtual content 180 rendered on a mapped surface 182 within an environment 184 for viewing by a user 60. In such an embodiment, the principles described above are used to position the three-dimensional virtual content 180 so that the user 60 can view the content as easily and naturally as possible.
  • The user orientation vector 66 is the same as a forward vector of the device 12 and is henceforth referred to as the “device forward vector 66”. Determining a surface on which to place three-dimensional virtual content 180 may rely, at least in part, on a dot product relationship between a device forward vector 66 and a surface normal vector 186 of mapped surfaces in the environment 184. For optimal viewing of the three-dimensional virtual content 180, one of many dot product relationships may be considered optimal depending on the content. For example, if the content is meant to be viewed from the side, it may be ideal for the dot product relationship between the device forward vector 66 and the surface normal vector 186 to be close to zero indicating that the user is nearly orthogonal to the mapped surface 182. In such an embodiment, the three-dimensional virtual content 180 placed on the mapped surface 182 will be seen by the user from the side. Alternatively, a dot product relationship at or near −1 may be more desirable if the three-dimensional virtual content 180 is meant to be viewed from above, as has been described herein with respect to other embodiments. The ideal dot product relationship may be an attribute set by the creator of the three-dimensional virtual content 180, may be selected as a preference by the user, or may be otherwise determined by the augmented reality viewing system based on the type of content to be displayed.
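  • Because the optimal dot product depends on the content, surface scoring can be written against a per-content target. A brief sketch (the names and the scoring rule are assumptions, not the patent's disclosure):

```python
import numpy as np

def surface_score(device_forward, surface_normal, target_dot):
    """Score a candidate placement surface by how close the dot product
    of the device forward vector and the surface normal is to the
    content's preferred viewing relationship: a target near 0 favors
    side-on viewing, while a target near -1 favors viewing from above."""
    d = float(np.dot(device_forward, surface_normal))
    return -abs(d - target_dot)  # higher scores are better
```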
  • Once a placement surface is determined, either by the system or by placement by a user, orientation of the three-dimensional virtual content 180 on the mapped surface 182 is determined with respect to the user. In the example shown, three-dimensional virtual content 180 is provided with a content orientation vector 188 that may be used to align the three-dimensional virtual content 180 to a reference vector of the user device. The three-dimensional virtual content 180 is the head of a character with a near edge of the character being where its mouth is. A far edge of the character will typically not be rendered for viewing by the user 60 because the far edge is on a side of the character that the user cannot see. The content orientation vector 188 is aligned parallel with the near edge of the character. The content orientation vector 188 may be used to align the three-dimensional virtual content 180 with the augmented reality viewer 12 such that the dot product between the content orientation vector 188 and the device right vector 64 is at or near 1, indicating that the two vectors are pointing in substantially the same direction.
  • Referring to FIGS. 19 and 20, examples of three-dimensional content re-orientation based on a user's movement are shown. In FIG. 19, the user 60 has moved clockwise around the table by a certain distance and angle with respect to FIG. 18. As a result, the dot product relationship between the content orientation vector 188 and the device right vector 64 is less than 1. In some embodiments, this change in position may not require re-orientation of three-dimensional virtual content 180. For example, a content creator, a user, or software within the augmented reality viewer 12 may indicate that re-orientation of three-dimensional virtual content 180 is necessary only when the dot product between the content orientation vector 188 and a device reference vector is less than a predetermined threshold. Large or small threshold tolerances may be set depending on the type of content being displayed.
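  • The threshold test just described might be sketched as follows (the 0.9 default is an illustrative value, not one the patent specifies):

```python
import numpy as np

def needs_reorientation(content_right, device_right, threshold=0.9):
    """Trigger a re-render only when the content right-orientation vector
    has drifted out of alignment with the device right vector, i.e. when
    their dot product falls below a creator- or user-set threshold."""
    a = content_right / np.linalg.norm(content_right)
    b = device_right / np.linalg.norm(device_right)
    return float(np.dot(a, b)) < threshold
```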
  • If the change in position of the user 60 from the location of FIG. 18 to the location of FIG. 19 triggers a re-orientation of three-dimensional virtual content 180, the orientation module may re-render three-dimensional virtual content 180 such that the content orientation vector 188 aligns with the device right vector 64 to result in a dot product equal to or near 1 for the two vectors, as shown in FIG. 20. As discussed above, re-orientation of three-dimensional virtual content 180 may also allow for re-sizing of the content; however, content may also remain the same size such that it appears only to re-orient about an axis normal to the mapped surface 182 as the user moves within the environment.
  • Referring to FIGS. 21, 22 and 23, an example is shown of virtual content 196 re-orientation on a vertical surface 198. In FIG. 21, a user 60 is shown viewing virtual content 196 on a surface 198 that is oriented vertically in the environment. The virtual content 196 may have at least one of a content right orientation vector 200 and a content upright orientation vector 202, which may be used to measure alignment with respect to the device right vector 64 and the device upright vector 67, respectively. In FIG. 21, the alignment between one of the content orientation vectors (200, 202) and the corresponding device orientation vectors (64, 67) results in a dot product value of approximately 1. As discussed above, dot product values closer to 1 indicate closer alignment between the two vectors being compared.
  • If the user 60 were to change positions, for example by lying down on a couch as shown in FIG. 22, without re-orientation of the virtual content 196, the alignment between content orientation vectors (200, 202) and corresponding device orientation vectors (64, 67) may be near zero, indicating a less optimal alignment between the user 60 and the virtual content 196 than the alignment shown in FIG. 21. If a dot product relationship of zero is less than the required dot product relationship for the virtual content-to-user relative orientation, virtual content 196 may be re-rendered at a new orientation, as shown in FIG. 23, such that the dot product relationships are within the predetermined thresholds. In some embodiments, re-rendering the virtual content 196 at a new orientation may re-establish optimal dot product relationships between content orientation vectors (200, 202) and corresponding device orientation vectors (64, 67).
  • FIG. 24 shows a diagrammatic representation of a machine in the exemplary form of a computer system 900 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • The exemplary computer system 900 includes a processor 902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 904 (e.g., read only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), and a static memory 906 (e.g., flash memory, static random access memory (SRAM), etc.), which communicate with each other via a bus 908.
  • The computer system 900 may further include a disk drive unit 916, and a network interface device 920.
  • The disk drive unit 916 includes a machine-readable medium 922 on which is stored one or more sets of instructions 924 (e.g., software) embodying any one or more of the methodologies or functions described herein. The software may also reside, completely or at least partially, within the main memory 904 and/or within the processor 902 during execution thereof by the computer system 900, the main memory 904 and the processor 902 also constituting machine-readable media.
  • The software may further be transmitted or received over a network 928 via the network interface device 920.
  • While the machine-readable medium 924 is shown in an exemplary embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.
  • While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative and not restrictive of the current invention, and that this invention is not restricted to the specific constructions and arrangements shown and described since modifications may occur to those ordinarily skilled in the art.

Claims (10)

What is claimed:
1. An augmented reality viewer comprising:
a display that permits a user to see real world objects;
a data channel to hold content;
a surface area extraction module to determine a first surface area and a second surface area;
a user orientation determination module to determine a first user orientation vector indicative of a first orientation of a user relative to the first surface area and the second surface area;
a surface area selection module to select a preferred surface area between the first surface area and the second surface area based on a normal to the respective surface area being directed more opposite to the first orientation of the user, including a surface area vector calculator to calculate a first surface area orientation vector indicative of an orientation of the first surface area and a second surface area orientation vector indicative of an orientation of the second surface area, wherein the surface area selection module determines a dot product of the first user orientation vector and the first surface area orientation vector and a dot product of the first user orientation vector and the second surface area orientation vector and selects the preferred surface area based on a relative magnitude of the dot product of the first user orientation vector and the first surface area orientation vector and the dot product of the first user orientation vector and the second surface area orientation vector; and
a projector that displays the content through the display to the user within confines of the preferred surface area while the user views the real world objects.
2. The augmented reality viewer of claim 1, wherein the first surface area orientation vector is normal to the first surface area and the second surface area orientation vector is normal to the second surface area and the preferred surface area is selected based on the dot product that is the most negative in magnitude.
3. The augmented reality viewer of claim 1, wherein the user orientation determination module determines a second user orientation vector indicative of a second orientation of the user and the surface area selection module determines a dot product of the second user orientation vector and the first surface area orientation vector and a dot product of the second user orientation vector and the second surface area orientation vector and selects the preferred surface area based on a relative magnitude of the dot product of the second user orientation vector and the first surface area orientation vector and the dot product of the second user orientation vector and the second surface area orientation vector.
4. The augmented reality viewer of claim 3, wherein the user remains in the same position relative to the first surface area and the second surface area when the first user orientation changes to the second user orientation.
5. The augmented reality viewer of claim 3, wherein the user moves from a first position to a second position relative to the first surface area and the second surface area when the first user orientation changes to the second user orientation.
6. The augmented reality viewer of claim 3, wherein the preferred surface area remains the same when the user orientation vector changes from the first user orientation vector to the second user orientation vector.
7. The augmented reality viewer of claim 3, wherein the preferred surface changes from the first surface to the second surface when the user orientation vector changes from the first user orientation vector to the second user orientation vector.
8. The augmented reality viewer of claim 3, further comprising:
a size determination module that resizes the content in the preferred surface to fit the second surface area.
9. The augmented reality viewer of claim 8, wherein the content has the same aspect ratio in the first surface area and in the second surface area.
10. An augmented reality viewing method comprising:
determining, by a processor, a first surface area and a second surface area;
determining, by the processor, a first user orientation vector indicative of a first orientation of a user relative to the first surface area and the second surface area;
selecting, by the processor, a preferred surface area between the first surface area and the second surface area based on a normal to the respective surface area being directed more towards the first location of the user, including calculating a first surface area orientation vector indicative of an orientation of the first surface area and a second surface area orientation vector indicative of an orientation of the second surface area, determining a dot product of the first user orientation vector and the first surface area orientation vector and a dot product of the first user orientation vector and the second surface area orientation vector, and selecting the preferred surface area based on a relative magnitude of the dot product of the first user orientation vector and the first surface area orientation vector and the dot product of the first user orientation vector and the second surface area orientation vector; and
displaying, by the processor, content through a display to the user within confines of the preferred surface area while the user views real world objects through the display from the first location.
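
Editorial illustration of the claimed geometry: the selection logic recited in claims 1-3 and 10 reduces to comparing dot products between a user orientation vector and each candidate surface normal. When the user's forward vector points at a surface whose normal points back toward the user, the dot product is negative, and the most negative product identifies the surface facing the user most directly (claim 2). The sketch below is a minimal reconstruction of that geometry under these assumptions, not the patented implementation; all identifiers (select_preferred_surface, resize_preserving_aspect) are hypothetical. The second function illustrates the aspect-ratio-preserving resize of claims 8-9.

```python
# Illustrative sketch of the dot-product surface selection (claims 1-3, 10)
# and the aspect-ratio-preserving resize (claims 8-9). Names are hypothetical;
# this is an editorial reconstruction, not the patented implementation.
import numpy as np

def select_preferred_surface(user_orientation, surface_normals):
    """Return the index of the surface whose unit normal is directed most
    toward the user, i.e. whose dot product with the user orientation
    vector is most negative."""
    u = user_orientation / np.linalg.norm(user_orientation)
    dots = [np.dot(u, n / np.linalg.norm(n)) for n in surface_normals]
    return int(np.argmin(dots))  # most negative dot product wins

def resize_preserving_aspect(content_w, content_h, area_w, area_h):
    """Scale content to fit within a surface area while keeping the
    original aspect ratio."""
    scale = min(area_w / content_w, area_h / content_h)
    return content_w * scale, content_h * scale

# Example: a user gazing along -z toward a wall whose normal is +z.
user_forward = np.array([0.0, 0.0, -1.0])   # first user orientation vector
wall_normal = np.array([0.0, 0.0, 1.0])     # faces the user: dot = -1
table_normal = np.array([0.0, 1.0, 0.0])    # faces upward: dot = 0
preferred = select_preferred_surface(user_forward, [wall_normal, table_normal])
print(preferred)                             # -> 0, the wall is selected
print(resize_preserving_aspect(16, 9, 4, 3)) # -> (4.0, 2.25)
```

With this convention, reorienting the user's forward vector from the wall toward the table (for example, to [0, -1, 0]) makes the table's dot product the most negative, so the preferred surface switches, matching the behavior recited in claims 6 and 7, while the resize keeps a 16:9 content block at 16:9 inside the new area, as claim 9 requires.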
US17/357,795 2018-06-08 2021-06-24 Augmented reality viewer with automated surface selection placement and content orientation placement Pending US20210318547A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/357,795 US20210318547A1 (en) 2018-06-08 2021-06-24 Augmented reality viewer with automated surface selection placement and content orientation placement

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201862682788P 2018-06-08 2018-06-08
US16/435,933 US11092812B2 (en) 2018-06-08 2019-06-10 Augmented reality viewer with automated surface selection placement and content orientation placement
US17/357,795 US20210318547A1 (en) 2018-06-08 2021-06-24 Augmented reality viewer with automated surface selection placement and content orientation placement

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/435,933 Continuation US11092812B2 (en) 2018-06-08 2019-06-10 Augmented reality viewer with automated surface selection placement and content orientation placement

Publications (1)

Publication Number Publication Date
US20210318547A1 true US20210318547A1 (en) 2021-10-14

Family

ID=68764935

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/435,933 Active US11092812B2 (en) 2018-06-08 2019-06-10 Augmented reality viewer with automated surface selection placement and content orientation placement
US17/357,795 Pending US20210318547A1 (en) 2018-06-08 2021-06-24 Augmented reality viewer with automated surface selection placement and content orientation placement

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/435,933 Active US11092812B2 (en) 2018-06-08 2019-06-10 Augmented reality viewer with automated surface selection placement and content orientation placement

Country Status (5)

Country Link
US (2) US11092812B2 (en)
EP (1) EP3803545A4 (en)
JP (1) JP7421505B2 (en)
CN (1) CN112513785A (en)
WO (1) WO2019237099A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190385372A1 (en) * 2018-06-15 2019-12-19 Microsoft Technology Licensing, Llc Positioning a virtual reality passthrough region at a known distance
TWI779305B (en) * 2020-06-24 2022-10-01 奧圖碼股份有限公司 Simulation method for setting projector by augmented reality and terminal device thereof
CN112348889A (en) * 2020-10-23 2021-02-09 浙江商汤科技开发有限公司 Visual positioning method and related device and equipment
US11620796B2 (en) * 2021-03-01 2023-04-04 International Business Machines Corporation Expert knowledge transfer using egocentric video
CN113141346B (en) * 2021-03-16 2023-04-28 青岛小鸟看看科技有限公司 VR one-to-multiple system and method based on series flow

Family Cites Families (224)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6541736B1 (en) 2001-12-10 2003-04-01 Usun Technology Co., Ltd. Circuit board/printed circuit board having pre-reserved conductive heating circuits
US4344092A (en) 1980-10-21 1982-08-10 Circon Corporation Miniature video camera means for video system
US4652930A (en) 1984-11-19 1987-03-24 Rca Corporation Television camera structure
US4810080A (en) 1987-09-03 1989-03-07 American Optical Corporation Protective eyewear with removable nosepiece and corrective spectacle
US4997268A (en) 1989-07-24 1991-03-05 Dauvergne Hector A Corrective lens configuration
US5074295A (en) 1989-08-03 1991-12-24 Jamie, Inc. Mouth-held holder
US5007727A (en) 1990-02-26 1991-04-16 Alan Kahaney Combination prescription lens and sunglasses assembly
US5240220A (en) 1990-09-12 1993-08-31 Elbex Video Ltd. TV camera supporting device
WO1993001743A1 (en) 1991-07-22 1993-02-04 Adair Edwin Lloyd Sterile video microscope holder for operating room
US5224198A (en) 1991-09-30 1993-06-29 Motorola, Inc. Waveguide virtual image display
US5497463A (en) 1992-09-25 1996-03-05 Bull Hn Information Systems Inc. Ally mechanism for interconnecting non-distributed computing environment (DCE) and DCE systems to operate in a network system
US5410763A (en) 1993-02-11 1995-05-02 Etablissments Bolle Eyeshield with detachable components
US5682255A (en) 1993-02-26 1997-10-28 Yeda Research & Development Co. Ltd. Holographic optical devices for the transmission of optical signals of a plurality of channels
US6023288A (en) 1993-03-31 2000-02-08 Cairns & Brother Inc. Combination head-protective helmet and thermal imaging apparatus
US5455625A (en) 1993-09-23 1995-10-03 Rosco Inc. Video camera unit, protective enclosure and power circuit for same, particularly for use in vehicles
US5835061A (en) 1995-06-06 1998-11-10 Wayport, Inc. Method and apparatus for geographic-based communications service
US5864365A (en) 1996-01-26 1999-01-26 Kaman Sciences Corporation Environmentally controlled camera housing assembly
US5854872A (en) 1996-10-08 1998-12-29 Clio Technologies, Inc. Divergent angle rotator system and method for collimating light beams
US8005254B2 (en) 1996-11-12 2011-08-23 Digimarc Corporation Background watermark processing
US6012811A (en) 1996-12-13 2000-01-11 Contour Optik, Inc. Eyeglass frames with magnets at bridges for attachment
JP3465528B2 (en) 1997-04-22 2003-11-10 三菱瓦斯化学株式会社 New resin for optical materials
JPH11142783A (en) 1997-11-12 1999-05-28 Olympus Optical Co Ltd Image display device
US6191809B1 (en) 1998-01-15 2001-02-20 Vista Medical Technologies, Inc. Method and apparatus for aligning stereo images
US6076927A (en) 1998-07-10 2000-06-20 Owens; Raymond L. Adjustable focal length eye glasses
JP2000099332A (en) 1998-09-25 2000-04-07 Hitachi Ltd Remote procedure call optimization method and program execution method using the optimization method
US6918667B1 (en) 1998-11-02 2005-07-19 Gary Martin Zelman Auxiliary eyewear attachment apparatus
US6556245B1 (en) 1999-03-08 2003-04-29 Larry Allan Holmberg Game hunting video camera
US6375369B1 (en) 1999-04-22 2002-04-23 Videolarm, Inc. Housing for a surveillance camera
WO2001056007A1 (en) 2000-01-28 2001-08-02 Intersense, Inc. Self-referenced tracking
JP4921634B2 (en) 2000-01-31 2012-04-25 グーグル インコーポレイテッド Display device
JP4646374B2 (en) 2000-09-29 2011-03-09 オリンパス株式会社 Image observation optical system
TW522256B (en) 2000-12-15 2003-03-01 Samsung Electronics Co Ltd Wearable display system
US6807352B2 (en) 2001-02-11 2004-10-19 Georgia Tech Research Corporation Optical waveguides with embedded air-gap cladding layer and methods of fabrication thereof
US6931596B2 (en) * 2001-03-05 2005-08-16 Koninklijke Philips Electronics N.V. Automatic positioning of display depending upon the viewer's location
US20020140848A1 (en) 2001-03-30 2002-10-03 Pelco Controllable sealed chamber for surveillance camera
EP1249717A3 (en) 2001-04-10 2005-05-11 Matsushita Electric Industrial Co., Ltd. Antireflection coating and optical element using the same
JP4682470B2 (en) 2001-07-16 2011-05-11 株式会社デンソー Scan type display device
US6762845B2 (en) 2001-08-23 2004-07-13 Zygo Corporation Multiple-pass interferometry
EP1430351B1 (en) 2001-09-25 2006-11-29 Cambridge Flat Projection Displays Limited Flat-panel projection display
US6833955B2 (en) 2001-10-09 2004-12-21 Planop Planar Optics Ltd. Compact two-plane optical device
US7305020B2 (en) 2002-02-04 2007-12-04 Vizionware, Inc. Method and system of reducing electromagnetic interference emissions
US6849558B2 (en) 2002-05-22 2005-02-01 The Board Of Trustees Of The Leland Stanford Junior University Replication and transfer of microstructures and nanostructures
US6714157B2 (en) 2002-08-02 2004-03-30 The Boeing Company Multiple time-interleaved radar operation using a single radar at different angles
KR100480786B1 (en) 2002-09-02 2005-04-07 삼성전자주식회사 Integrated type optical head with coupler
US7306337B2 (en) 2003-03-06 2007-12-11 Rensselaer Polytechnic Institute Calibration-free gaze tracking under natural head movement
DE10311972A1 (en) 2003-03-18 2004-09-30 Carl Zeiss Head-mounted display (HMD) apparatus for use with eyeglasses, has optical projector that is fastened to rack, and under which eyeglasses are positioned when rack and eyeglasses are attached together
AU2003901272A0 (en) 2003-03-19 2003-04-03 Martin Hogan Pty Ltd Improvements in or relating to eyewear attachments
US7294360B2 (en) 2003-03-31 2007-11-13 Planar Systems, Inc. Conformal coatings for micro-optical elements, and method for making the same
US20060132914A1 (en) 2003-06-10 2006-06-22 Victor Weiss Method and system for displaying an informative image against a background image
JP4699699B2 (en) 2004-01-15 2011-06-15 株式会社東芝 Beam light scanning apparatus and image forming apparatus
CN101174028B (en) 2004-03-29 2015-05-20 索尼株式会社 Optical device and virtual image display device
GB0416038D0 (en) 2004-07-16 2004-08-18 Portland Press Ltd Document display system
EP1769275A1 (en) 2004-07-22 2007-04-04 Pirelli & C. S.p.A. Integrated wavelength selective grating-based filter
US8109635B2 (en) 2004-08-12 2012-02-07 Ophthalmic Imaging Systems Integrated retinal imager and method
US9030532B2 (en) 2004-08-19 2015-05-12 Microsoft Technology Licensing, Llc Stereoscopic image display
US7029114B2 (en) 2004-09-03 2006-04-18 E'lite Optik U.S. L.P. Eyewear assembly with auxiliary frame and lens assembly
JP4858170B2 (en) 2004-09-16 2012-01-18 株式会社ニコン Method for producing MgF2 optical thin film having amorphous silicon oxide binder
US20060126181A1 (en) 2004-12-13 2006-06-15 Nokia Corporation Method and system for beam expansion in a display device
US8619365B2 (en) 2004-12-29 2013-12-31 Corning Incorporated Anti-reflective coating for optical windows and elements
GB0502453D0 (en) 2005-02-05 2005-03-16 Cambridge Flat Projection Flat panel lens
US7573640B2 (en) 2005-04-04 2009-08-11 Mirage Innovations Ltd. Multi-plane optical apparatus
US20060250322A1 (en) 2005-05-09 2006-11-09 Optics 1, Inc. Dynamic vergence and focus control for head-mounted displays
US20090303599A1 (en) 2005-06-03 2009-12-10 Nokia Corporation General diffractive optics method for expanding an exit pupil
JP4776285B2 (en) 2005-07-01 2011-09-21 ソニー株式会社 Illumination optical device and virtual image display device using the same
US20080043334A1 (en) 2006-08-18 2008-02-21 Mirage Innovations Ltd. Diffractive optical relay and method for manufacturing the same
US20070058248A1 (en) 2005-09-14 2007-03-15 Nguyen Minh T Sport view binocular-zoom lens focus system
EP1938141A1 (en) 2005-09-28 2008-07-02 Mirage Innovations Ltd. Stereoscopic binocular system, device and method
US20070081123A1 (en) 2005-10-07 2007-04-12 Lewis Scott W Digital eyewear
US11428937B2 (en) 2005-10-07 2022-08-30 Percept Technologies Enhanced optical and perceptual digital eyewear
US8696113B2 (en) 2005-10-07 2014-04-15 Percept Technologies Inc. Enhanced optical and perceptual digital eyewear
US9658473B2 (en) 2005-10-07 2017-05-23 Percept Technologies Inc Enhanced optical and perceptual digital eyewear
EP1943556B1 (en) 2005-11-03 2009-02-11 Mirage Innovations Ltd. Binocular optical relay device
ATE550685T1 (en) 2005-11-18 2012-04-15 Nanocomp Oy Ltd METHOD FOR PRODUCING A DIFFRACTION GRIDING ELEMENT
JP5226528B2 (en) 2005-11-21 2013-07-03 マイクロビジョン,インク. Display having an image guiding substrate
EP1983884B1 (en) 2006-01-26 2016-10-26 Nokia Technologies Oy Eye tracker device
JP2007219106A (en) 2006-02-16 2007-08-30 Konica Minolta Holdings Inc Optical device for expanding diameter of luminous flux, video display device and head mount display
US7461535B2 (en) 2006-03-01 2008-12-09 Memsic, Inc. Multi-temperature programming for accelerometer
IL174170A (en) 2006-03-08 2015-02-26 Abraham Aharoni Device and method for binocular alignment
BRPI0709548A2 (en) 2006-03-15 2011-07-19 Google Inc automatic display of reframed images
US7692855B2 (en) 2006-06-28 2010-04-06 Essilor International Compagnie Generale D'optique Optical article having a temperature-resistant anti-reflection coating with optimized thickness ratio of low index and high index layers
US7724980B1 (en) 2006-07-24 2010-05-25 Adobe Systems Incorporated System and method for selective sharpening of images
US20080068557A1 (en) 2006-09-20 2008-03-20 Gilbert Menduni Lens holding frame
US20080146942A1 (en) 2006-12-13 2008-06-19 Ep Medsystems, Inc. Catheter Position Tracking Methods Using Fluoroscopy and Rotational Sensors
JP4348441B2 (en) * 2007-01-22 2009-10-21 国立大学法人 大阪教育大学 Position detection apparatus, position detection method, data determination apparatus, data determination method, computer program, and storage medium
US20090017910A1 (en) 2007-06-22 2009-01-15 Broadcom Corporation Position and motion tracking of an object
WO2008148927A1 (en) 2007-06-04 2008-12-11 Nokia Corporation A diffractive beam expander and a virtual display based on a diffractive beam expander
WO2009077802A1 (en) 2007-12-18 2009-06-25 Nokia Corporation Exit pupil expanders with wide field-of-view
DE102008005817A1 (en) 2008-01-24 2009-07-30 Carl Zeiss Ag Optical display device
US8494229B2 (en) 2008-02-14 2013-07-23 Nokia Corporation Device and method for determining gaze direction
JP2009244869A (en) 2008-03-11 2009-10-22 Panasonic Corp Display apparatus, display method, goggle-type head-mounted display, and vehicle
US8197088B2 (en) 2008-06-13 2012-06-12 Barco, Inc. Vertical handling apparatus for a display
JP5181860B2 (en) 2008-06-17 2013-04-10 セイコーエプソン株式会社 Pulse width modulation signal generation apparatus, image display apparatus including the same, and pulse width modulation signal generation method
US10885471B2 (en) 2008-07-18 2021-01-05 Disney Enterprises, Inc. System and method for providing location-based data on a wireless portable device
US7850306B2 (en) 2008-08-28 2010-12-14 Nokia Corporation Visual cognition aware display and visual data transmission architecture
US7885506B2 (en) 2008-09-26 2011-02-08 Nokia Corporation Device and a method for polarized illumination of a micro-display
US9775538B2 (en) 2008-12-03 2017-10-03 Mediguide Ltd. System and method for determining the position of the tip of a medical catheter within the body of a patient
JP5121764B2 (en) 2009-03-24 2013-01-16 株式会社東芝 Solid-state imaging device
US9095436B2 (en) 2009-04-14 2015-08-04 The Invention Science Fund I, Llc Adjustable orthopedic implant and method for treating an orthopedic condition in a subject
JP5316391B2 (en) 2009-08-31 2013-10-16 ソニー株式会社 Image display device and head-mounted display
US11320571B2 (en) 2012-11-16 2022-05-03 Rockwell Collins, Inc. Transparent waveguide display providing upper and lower fields of view with uniform light extraction
US8305502B2 (en) 2009-11-11 2012-11-06 Eastman Kodak Company Phase-compensated thin-film beam combiner
US8605209B2 (en) 2009-11-24 2013-12-10 Gregory Towle Becker Hurricane damage recording camera system
US8909962B2 (en) 2009-12-16 2014-12-09 Qualcomm Incorporated System and method for controlling central processing unit power with guaranteed transient deadlines
US8565554B2 (en) 2010-01-09 2013-10-22 Microsoft Corporation Resizing of digital images
US8467133B2 (en) 2010-02-28 2013-06-18 Osterhout Group, Inc. See-through display with an optical assembly including a wedge-shaped illumination system
US9547910B2 (en) 2010-03-04 2017-01-17 Honeywell International Inc. Method and apparatus for vision aided navigation using image registration
JP5499854B2 (en) 2010-04-08 2014-05-21 ソニー株式会社 Optical position adjustment method for head mounted display
US8118499B2 (en) 2010-05-19 2012-02-21 LIR Systems, Inc. Infrared camera assembly systems and methods
US20110291964A1 (en) 2010-06-01 2011-12-01 Kno, Inc. Apparatus and Method for Gesture Control of a Dual Panel Electronic Device
JP2012015774A (en) 2010-06-30 2012-01-19 Toshiba Corp Stereoscopic image processing device and stereoscopic image imaging method
US8854594B2 (en) 2010-08-31 2014-10-07 Cast Group Of Companies Inc. System and method for tracking
US20120081392A1 (en) * 2010-09-30 2012-04-05 Apple Inc. Electronic device operation adjustment based on face detection
US20120113235A1 (en) 2010-11-08 2012-05-10 Sony Corporation 3d glasses, systems, and methods for optimized viewing of 3d video content
US9304319B2 (en) * 2010-11-18 2016-04-05 Microsoft Technology Licensing, Llc Automatic focus improvement for augmented reality displays
US9213405B2 (en) * 2010-12-16 2015-12-15 Microsoft Technology Licensing, Llc Comprehension and intent-based content for augmented reality displays
US8949637B2 (en) 2011-03-24 2015-02-03 Intel Corporation Obtaining power profile information with low overhead
WO2012135553A1 (en) * 2011-03-29 2012-10-04 Qualcomm Incorporated Selective hand occlusion over virtual projections onto physical surfaces using skeletal tracking
KR101210163B1 (en) 2011-04-05 2012-12-07 엘지이노텍 주식회사 Optical sheet and method of fabricating the same
US8856355B2 (en) 2011-05-09 2014-10-07 Samsung Electronics Co., Ltd. Systems and methods for facilitating communication between mobile devices and display devices
US9245307B2 (en) 2011-06-01 2016-01-26 Empire Technology Development Llc Structured light projection for motion detection in augmented reality
US9087267B2 (en) 2011-06-10 2015-07-21 Image Vision Labs, Inc. Image scene recognition
US10606066B2 (en) 2011-06-21 2020-03-31 Gholam A. Peyman Fluidic light field camera
US20120326948A1 (en) 2011-06-22 2012-12-27 Microsoft Corporation Environmental-light filter for see-through head-mounted display device
EP2723240B1 (en) 2011-06-27 2018-08-08 Koninklijke Philips N.V. Live 3d angiogram using registration of a surgical tool curve to an x-ray image
US9342610B2 (en) * 2011-08-25 2016-05-17 Microsoft Technology Licensing, Llc Portals: registered objects as virtualized, personalized displays
US9025252B2 (en) 2011-08-30 2015-05-05 Microsoft Technology Licensing, Llc Adjustment of a mixed reality display for inter-pupillary distance alignment
US8998414B2 (en) 2011-09-26 2015-04-07 Microsoft Technology Licensing, Llc Integrated eye tracking and display system
US9835765B2 (en) 2011-09-27 2017-12-05 Canon Kabushiki Kaisha Optical element and method for manufacturing the same
US8847988B2 (en) 2011-09-30 2014-09-30 Microsoft Corporation Exercising applications for personal audio/visual system
US9125301B2 (en) 2011-10-18 2015-09-01 Integrated Microwave Corporation Integral heater assembly and method for carrier or host board of electronic package assembly
WO2013101273A1 (en) 2011-12-30 2013-07-04 St. Jude Medical, Atrial Fibrillation Division, Inc. System and method for detection and avoidance of collisions of robotically-controlled medical devices
US8608309B2 (en) 2011-12-30 2013-12-17 A New Vision Llc Eyeglass system
US9704220B1 (en) * 2012-02-29 2017-07-11 Google Inc. Systems, methods, and media for adjusting one or more images displayed to a viewer
CN104471463B (en) 2012-05-03 2018-02-13 诺基亚技术有限公司 Image providing device, method and computer program
US8989535B2 (en) 2012-06-04 2015-03-24 Microsoft Technology Licensing, Llc Multiple waveguide imaging structure
US9671566B2 (en) 2012-06-11 2017-06-06 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
US9113291B2 (en) 2012-06-18 2015-08-18 Qualcomm Incorporated Location detection within identifiable pre-defined geographic areas
US9031283B2 (en) 2012-07-12 2015-05-12 Qualcomm Incorporated Sensor-aided wide-area localization on mobile devices
CN104756078B (en) 2012-08-20 2018-07-13 唐纳德·凯文·卡梅伦 The device and method of processing resource allocation
US9177404B2 (en) 2012-10-31 2015-11-03 Qualcomm Incorporated Systems and methods of merging multiple maps for computer vision based tracking
US9576183B2 (en) 2012-11-02 2017-02-21 Qualcomm Incorporated Fast initialization for monocular visual SLAM
US9584382B2 (en) 2012-11-28 2017-02-28 At&T Intellectual Property I, L.P. Collecting and using quality of experience information
US20140168260A1 (en) 2012-12-13 2014-06-19 Paul M. O'Brien Waveguide spacers within an ned device
US8988574B2 (en) 2012-12-27 2015-03-24 Panasonic Intellectual Property Corporation Of America Information communication method for obtaining information using bright line image
EP2939065A4 (en) 2012-12-31 2016-08-10 Esight Corp Apparatus and method for fitting head mounted vision augmentation systems
US9336629B2 (en) 2013-01-30 2016-05-10 F3 & Associates, Inc. Coordinate geometry augmented reality process
GB201301764D0 (en) 2013-01-31 2013-03-20 Adlens Ltd Actuation of fluid-filled lenses
US9600068B2 (en) 2013-03-13 2017-03-21 Sony Interactive Entertainment Inc. Digital inter-pupillary distance adjustment
US9779517B2 (en) 2013-03-15 2017-10-03 Upskill, Inc. Method and system for representing and interacting with augmented reality content
JP6232763B2 (en) 2013-06-12 2017-11-22 セイコーエプソン株式会社 Head-mounted display device and method for controlling head-mounted display device
US9753303B2 (en) 2013-08-27 2017-09-05 Frameri Inc. Removable eyeglass lens and frame platform
US9256072B2 (en) 2013-10-02 2016-02-09 Philip Scott Lyren Wearable electronic glasses that detect movement of a real object copies movement of a virtual object
US20150123966A1 (en) 2013-10-03 2015-05-07 Compedia - Software And Hardware Development Limited Interactive augmented virtual reality and perceptual computing platform
KR102189115B1 (en) 2013-11-11 2020-12-09 삼성전자주식회사 System on-chip having a symmetric multi-processor, and method of determining a maximum operating clock frequency for the same
US9286725B2 (en) 2013-11-14 2016-03-15 Nintendo Co., Ltd. Visually convincing depiction of object interactions in augmented reality images
JP5973087B2 (en) * 2013-11-19 2016-08-23 日立マクセル株式会社 Projection-type image display device
US10234699B2 (en) 2013-11-26 2019-03-19 Sony Corporation Head-mounted display
KR102651578B1 (en) 2013-11-27 2024-03-25 매직 립, 인코포레이티드 Virtual and augmented reality systems and methods
US20160327798A1 (en) 2014-01-02 2016-11-10 Empire Technology Development Llc Augmented reality (ar) system
US9600925B2 (en) 2014-01-06 2017-03-21 Oculus Vr, Llc Calibration of multiple rigid bodies in a virtual reality system
US9383630B2 (en) 2014-03-05 2016-07-05 Mygo, Llc Camera mouth mount
US9871741B2 (en) 2014-03-10 2018-01-16 Microsoft Technology Licensing, Llc Resource management based on device-specific or user-specific resource usage profiles
US9251598B2 (en) 2014-04-10 2016-02-02 GM Global Technology Operations LLC Vision-based multi-camera factory monitoring with dynamic integrity scoring
US20150301955A1 (en) 2014-04-21 2015-10-22 Qualcomm Incorporated Extending protection domains to co-processors
US9626802B2 (en) 2014-05-01 2017-04-18 Microsoft Technology Licensing, Llc Determining coordinate frames in a dynamic environment
KR102173699B1 (en) 2014-05-09 2020-11-03 아이플루언스, 인크. Systems and methods for discerning eye signals and continuous biometric identification
EP2952850A1 (en) 2014-06-03 2015-12-09 Optotune AG Optical device, particularly for tuning the focal length of a lens of the device by means of optical feedback
US9865089B2 (en) 2014-07-25 2018-01-09 Microsoft Technology Licensing, Llc Virtual reality environment with real world objects
US20160077338A1 (en) 2014-09-16 2016-03-17 Steven John Robbins Compact Projection Light Engine For A Diffractive Waveguide Display
US9494799B2 (en) 2014-09-24 2016-11-15 Microsoft Technology Licensing, Llc Waveguide eye tracking employing switchable diffraction gratings
US10176625B2 (en) 2014-09-25 2019-01-08 Faro Technologies, Inc. Augmented reality camera for use with 3D metrology equipment in forming 3D images from 2D camera images
WO2016054092A1 (en) 2014-09-29 2016-04-07 Magic Leap, Inc. Architectures and methods for outputting different wavelength light out of waveguides
US9612722B2 (en) 2014-10-31 2017-04-04 Microsoft Technology Licensing, Llc Facilitating interaction between users and their environments using sounds
US10371936B2 (en) 2014-11-10 2019-08-06 Leo D. Didomenico Wide angle, broad-band, polarization independent beam steering and concentration of wave energy utilizing electronically controlled soft matter
US10096162B2 (en) 2014-12-22 2018-10-09 Dimensions And Shapes, Llc Headset vision system for portable devices that provides an augmented reality display and/or a virtual reality display
US10018844B2 (en) 2015-02-09 2018-07-10 Microsoft Technology Licensing, Llc Wearable image display system
US10180734B2 (en) 2015-03-05 2019-01-15 Magic Leap, Inc. Systems and methods for augmented reality
WO2016146963A1 (en) 2015-03-16 2016-09-22 Popovich, Milan, Momcilo Waveguide device incorporating a light pipe
WO2016149536A1 (en) 2015-03-17 2016-09-22 Ocutrx Vision Technologies, Llc. Correction of vision defects using a visual display
US10909464B2 (en) 2015-04-29 2021-02-02 Microsoft Technology Licensing, Llc Semantic locations prediction
KR20160139727A (en) 2015-05-28 2016-12-07 엘지전자 주식회사 Glass type terminal and method of controlling the same
GB2539009A (en) 2015-06-03 2016-12-07 Tobii Ab Gaze detection method and apparatus
CN107683497B (en) * 2015-06-15 2022-04-08 索尼公司 Information processing apparatus, information processing method, and program
KR20190087292A (en) 2015-06-15 2019-07-24 시리트 엘엘씨 Method and system for communication using beam forming antenna
US9519084B1 (en) 2015-06-18 2016-12-13 Oculus Vr, Llc Securing a fresnel lens to a refractive optical element
WO2017004695A1 (en) 2015-07-06 2017-01-12 Frank Jones Methods and devices for demountable head mounted displays
US20170100664A1 (en) 2015-10-12 2017-04-13 Osterhout Group, Inc. External user interface for head worn computing
US20170038607A1 (en) 2015-08-04 2017-02-09 Rafael Camara Enhanced-reality electronic device for low-vision pathologies, and implant procedure
US20170061696A1 (en) 2015-08-31 2017-03-02 Samsung Electronics Co., Ltd. Virtual reality display apparatus and display method thereof
US10067346B2 (en) 2015-10-23 2018-09-04 Microsoft Technology Licensing, Llc Holographic display
US9671615B1 (en) 2015-12-01 2017-06-06 Microsoft Technology Licensing, Llc Extended field of view in near-eye display using wide-spectrum imager
US10025060B2 (en) 2015-12-08 2018-07-17 Oculus Vr, Llc Focus adjusting virtual reality headset
EP3190447B1 (en) 2016-01-06 2020-02-05 Ricoh Company, Ltd. Light guide and virtual image display device
US9978180B2 (en) 2016-01-25 2018-05-22 Microsoft Technology Licensing, Llc Frame projection for augmented reality environments
US9891436B2 (en) 2016-02-11 2018-02-13 Microsoft Technology Licensing, Llc Waveguide-based displays with anti-reflective and highly-reflective coating
JP6686504B2 (en) 2016-02-15 2020-04-22 セイコーエプソン株式会社 Head-mounted image display device
US20170256096A1 (en) * 2016-03-07 2017-09-07 Google Inc. Intelligent object sizing and placement in a augmented / virtual reality environment
CN108882892A (en) 2016-03-31 2018-11-23 Zoll医疗公司 The system and method for tracking patient motion
KR20180125600A (en) 2016-04-07 2018-11-23 매직 립, 인코포레이티드 Systems and methods for augmented reality
EP3236211A1 (en) 2016-04-21 2017-10-25 Thomson Licensing Method and apparatus for estimating a pose of a rendering device
US20170312032A1 (en) 2016-04-27 2017-11-02 Arthrology Consulting, Llc Method for augmenting a surgical field with virtual guidance content
US11228770B2 (en) 2016-05-16 2022-01-18 Qualcomm Incorporated Loop sample processing for high dynamic range and wide color gamut video coding
US10215986B2 (en) 2016-05-16 2019-02-26 Microsoft Technology Licensing, Llc Wedges for light transformation
US10078377B2 (en) 2016-06-09 2018-09-18 Microsoft Technology Licensing, Llc Six DOF mixed reality input by fusing inertial handheld controller with hand tracking
PL3494695T3 (en) 2016-08-04 2024-02-19 Dolby Laboratories Licensing Corporation Single depth tracked accommodation-vergence solutions
JP6795683B2 (en) 2016-08-11 2020-12-02 マジック リープ, インコーポレイテッドMagic Leap,Inc. Automatic placement of virtual objects in 3D space
US10690936B2 (en) 2016-08-29 2020-06-23 Mentor Acquisition One, Llc Adjustable nose bridge assembly for headworn computer
US20180067779A1 (en) 2016-09-06 2018-03-08 Smartiply, Inc. AP-Based Intelligent Fog Agent
EP3512452A1 (en) 2016-09-16 2019-07-24 Zimmer, Inc. Augmented reality surgical technique guidance
WO2018058063A1 (en) 2016-09-26 2018-03-29 Magic Leap, Inc. Calibration of magnetic and optical sensors in a virtual reality or augmented reality display system
EP3320829A1 (en) 2016-11-10 2018-05-16 E-Health Technical Solutions, S.L. System for integrally measuring clinical parameters of visual function
US10489975B2 (en) 2017-01-04 2019-11-26 Daqri, Llc Environmental mapping system
US20180255285A1 (en) 2017-03-06 2018-09-06 Universal City Studios Llc Systems and methods for layered virtual features in an amusement park environment
EP3376279B1 (en) 2017-03-13 2022-08-31 Essilor International Optical device for a head-mounted display, and head-mounted device incorporating it for augmented reality
US10241545B1 (en) 2017-06-01 2019-03-26 Facebook Technologies, Llc Dynamic distortion correction for optical compensation
US10402448B2 (en) 2017-06-28 2019-09-03 Google Llc Image retrieval with deep local feature descriptors and attention-based keypoint descriptors
US20190056591A1 (en) 2017-08-18 2019-02-21 Microsoft Technology Licensing, Llc Optical waveguide with multiple antireflective coatings
US9948612B1 (en) 2017-09-27 2018-04-17 Citrix Systems, Inc. Secure single sign on and conditional access for client applications
US10437065B2 (en) 2017-10-03 2019-10-08 Microsoft Technology Licensing, Llc IPD correction and reprojection for accurate mixed reality object placement
WO2019148154A1 (en) 2018-01-29 2019-08-01 Lang Philipp K Augmented reality guidance for orthopedic and other surgical procedures
US10422989B2 (en) 2018-02-06 2019-09-24 Microsoft Technology Licensing, Llc Optical systems including a single actuator and multiple fluid-filled optical lenses for near-eye-display devices
GB201805301D0 (en) 2018-03-29 2018-05-16 Adlens Ltd Improvements In Or Relating To Variable Focusing Power Optical Devices
US10740966B2 (en) * 2018-05-14 2020-08-11 Microsoft Technology Licensing, Llc Fake thickness on a two-dimensional object
US11510027B2 (en) 2018-07-03 2022-11-22 Magic Leap, Inc. Systems and methods for virtual and augmented reality

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011033993A (en) * 2009-08-05 2011-02-17 Sharp Corp Information presenting apparatus and method for presenting information

Also Published As

Publication number Publication date
JP7421505B2 (en) 2024-01-24
WO2019237099A1 (en) 2019-12-12
US11092812B2 (en) 2021-08-17
US20190377192A1 (en) 2019-12-12
CN112513785A (en) 2021-03-16
JP2021527252A (en) 2021-10-11
EP3803545A4 (en) 2022-01-26
EP3803545A1 (en) 2021-04-14

Similar Documents

Publication Publication Date Title
US11092812B2 (en) Augmented reality viewer with automated surface selection placement and content orientation placement
US11378798B2 (en) Surface modeling systems and methods
US11024093B2 (en) Live augmented reality guides
US10096157B2 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
US9824485B2 (en) Presenting a view within a three dimensional scene
JP6340017B2 (en) An imaging system that synthesizes a subject and a three-dimensional virtual space in real time
US20180165879A1 (en) Live augmented reality using tracking
US11869135B2 (en) Creating action shot video from multi-view capture data
US10523912B2 (en) Displaying modified stereo visual content
CN106843790A (en) A kind of information display system and method
US20170052684A1 (en) Display control apparatus, display control method, and program
WO2023049087A1 (en) Portal view for content items

Legal Events

Date Code Title Description
AS Assignment

Owner name: MAGIC LEAP, INC., FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NG-THOW-HING, VICTOR;REEL/FRAME:056823/0256

Effective date: 20190311

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED