WO2016043893A1 - Technologies for adjusting a perspective of a captured image for display - Google Patents

Technologies for adjusting a perspective of a captured image for display

Info

Publication number
WO2016043893A1
Authority
WO
WIPO (PCT)
Prior art keywords
computing device
mobile computing
user
distance
relative
Prior art date
Application number
PCT/US2015/045517
Other languages
French (fr)
Inventor
Dror REIF
Amit MORAN
Nizan Horesh
Original Assignee
Intel Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corporation filed Critical Intel Corporation
Priority to EP15842700.5A priority Critical patent/EP3195595B1/en
Priority to KR1020177003808A priority patent/KR102291461B1/en
Priority to JP2017505822A priority patent/JP2017525052A/en
Priority to CN201580043825.4A priority patent/CN106662930B/en
Publication of WO2016043893A1 publication Critical patent/WO2016043893A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/97Determining parameters from multiple pictures
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0112Head-up displays characterised by optical features comprising device for genereting colour display
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0118Head-up displays characterised by optical features comprising devices for improving the contrast of the display / brillance control visibility
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2200/00Indexing scheme relating to G06F1/04 - G06F1/32
    • G06F2200/16Indexing scheme relating to G06F1/16 - G06F1/18
    • G06F2200/163Indexing scheme relating to constructional details of the computer
    • G06F2200/1637Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Definitions

  • Augmented reality systems fuse the real-world and virtual-world environments by projecting virtual characters and objects into physical locations, thereby allowing for immersive experiences and novel interaction models.
  • virtual characters or objects may be inserted into captured images of real-world environments (e.g., by overlaying a two- or three-dimensional rendering of a virtual character on a captured image or video stream of the real-world environment).
  • a physical object recognized in the captured image may be replaced by a virtual object associated with that physical object.
  • For example, vehicles in the captured image may be recognized and replaced with animated or cartoon-like vehicles.
  • Augmented reality systems have been implemented in both stationary and mobile computing devices.
  • For example, a camera of a mobile computing device (e.g., a smart phone camera positioned opposite the display) may capture images of the real-world environment of the device.
  • the augmented reality system then makes augmented reality modifications to the captured images and displays the augmented images in the display of the mobile computing device (e.g., in real time).
  • the user is able to see a virtual world corresponding with his or her actual real-world environment.
  • In such implementations, however, the immersive experience suffers due to an obstructed visual flow.
  • In particular, real-world objects visible to the user (e.g., those at the periphery of the mobile computing device) are duplicated in the augmented reality renderings.
  • FIG. 1 is a simplified block diagram of at least one embodiment of a mobile computing device for adjusting a perspective of a captured image for display;
  • FIG. 2 is a simplified block diagram of at least one embodiment of an environment established by the mobile computing device of FIG. 1;
  • FIG. 3 is a simplified flow diagram of at least one embodiment of a method for adjusting a perspective of a captured image for display by the mobile computing device of FIG. 1;
  • FIG. 4 is a simplified flow diagram of at least one embodiment of a method for generating a back projection of a real-world environment of the mobile computing device of FIG. 1;
  • FIG. 5 is a simplified illustration of a user holding the mobile computing device of FIG. 1 during execution of the method of FIG. 4;
  • FIG. 6 is a simplified flow diagram of at least one other embodiment of a method for generating a back projection of the real-world environment of the mobile computing device of FIG. 1;
  • FIGS. 7-8 are simplified illustrations of the user holding the mobile computing device of FIG. 1 showing various angular relationships;
  • FIG. 9 is a simplified illustration of a real-world environment of the mobile computing device of FIG. 1;
  • FIG. 10 is a simplified illustration of the user holding the mobile computing device of FIG. 1 and a corresponding captured image displayed on the mobile computing device without an adjusted perspective;
  • FIG. 11 is a simplified illustration of the user holding the mobile computing device of FIG. 1 and a corresponding captured image displayed on the mobile computing device having an adjusted perspective by virtue of the method of FIG. 3.
  • references in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (B and C); or (A, B, and C).
  • items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (B and C); or (A, B, and C).
  • the disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof.
  • the disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors.
  • a machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
  • a mobile computing device 100 for adjusting a perspective of a captured image for display is shown.
  • the mobile computing device 100 is configured to capture an image of a user of the mobile computing device 100 and an image of a real-world environment of the mobile computing device 100.
  • the mobile computing device 100 further analyzes the captured image of the user to determine a position of the user's eye(s) relative to the mobile computing device 100.
  • the mobile computing device 100 may determine the distance of the user to the mobile computing device 100 and identify/detect the position of the user's eye(s) in the captured image.
  • the mobile computing device 100 determines a distance of one or more objects (e.g., a principal object and/or other objects in the captured scene) in the captured real-world environment relative to the mobile computing device 100. For example, as described below, the mobile computing device 100 may analyze the captured image of the real-world environment, utilize depth or distance sensor data, or otherwise determine the relative distance of the object depending on the particular embodiment. The mobile computing device 100 determines a back projection of the real-world environment to a display 120 of the mobile computing device 100 based on the distance of the real-world object relative to the mobile computing device 100, the position of the user's eye(s) relative to the mobile computing device 100, and one or more device parameters.
  • the back projection may be embodied as a back projection image, a set of data (e.g., pixel values) usable to generate a back projection image, and/or other data indicative of the corresponding back projection image.
  • the device parameters may include, for example, a focal length of a camera of the mobile computing device 100, a size of the display 120 or of the mobile computing device 100 itself, a location of components of the mobile computing device 100 relative to one another or a reference point, and/or other relevant information associated with the mobile computing device 100.
  • the mobile computing device 100 displays an image based on the determined back projection and, in doing so, may apply virtual objects, characters, and/or scenery or otherwise modify the image for augmented reality.
  • the techniques described herein result in an image back-projected to the display 120 such that the image visible on the display 120 maps directly, or nearly directly, to the real-world environment, such that the user feels as though she is looking at the real-world environment through a window. That is, in the illustrative embodiment, the displayed image includes the same content as that which is occluded by the mobile computing device 100, viewed from the same perspective as the user.
  • the mobile computing device 100 may be embodied as any type of computing device capable of performing the functions described herein.
  • the mobile computing device 100 may be embodied as a smartphone, cellular phone, wearable computing device, personal digital assistant, mobile Internet device, tablet computer, netbook, notebook, ultrabook, laptop computer, and/or any other mobile computing/communication device.
  • the illustrative mobile computing device 100 includes a processor 110, an input/output ("I/O") subsystem 112, a memory 114, a data storage 116, a camera system 118, a display 120, one or more sensors 122, and communication circuitry 124.
  • the mobile computing device 100 may include other or additional components, such as those commonly found in a typical computing device (e.g., various input/output devices and/or other components), in other embodiments. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, the memory 114, or portions thereof, may be incorporated in the processor 110 in some embodiments.
  • the processor 110 may be embodied as any type of processor capable of performing the functions described herein.
  • the processor 110 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit.
  • the memory 114 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 114 may store various data and software used during operation of the mobile computing device 100 such as operating systems, applications, programs, libraries, and drivers.
  • the memory 114 is communicatively coupled to the processor 110 via the I/O subsystem 112, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 110, the memory 114, and other components of the mobile computing device 100.
  • the I/O subsystem 112 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations.
  • the I/O subsystem 112 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 110, the memory 114, and other components of the mobile computing device 100, on a single integrated circuit chip.
  • the data storage 116 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices.
  • the data storage 116 may store device parameters 130 of the mobile computing device 100. It should be appreciated that the particular device parameters 130 may vary depending on the particular embodiment.
  • the device parameters 130 may include, for example, information or data associated with a size/shape of the mobile computing device 100, the display 120, and/or another component of the mobile computing device 100, intrinsic parameters or other data regarding one or more cameras of the mobile computing device 100 (e.g., focal length, principal point, zoom information, etc.), a location of components of the mobile computing device 100 relative to a reference point (e.g., a coordinate system identifying relative locations of the components of the mobile computing device 100), and/or other information associated with the mobile computing device 100. Additionally, in some embodiments, the data storage 116 and/or the memory 114 may store various other data useful during the operation of the mobile computing device 100.
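As a rough sketch (not part of the patent disclosure), the device parameters 130 described above might be collected into a simple structure such as the following; every field name, unit, and default value here is hypothetical.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class DeviceParameters:
    """Hypothetical container for the device parameters 130 (illustrative only)."""
    display_size_m: Tuple[float, float] = (0.11, 0.062)           # display width, height in meters
    display_resolution: Tuple[int, int] = (1920, 1080)            # display pixels (width, height)
    env_cam_focal_px: float = 1500.0                              # environment-facing camera focal length in pixels
    env_cam_principal_point: Tuple[float, float] = (960.0, 540.0)
    user_cam_focal_px: float = 1400.0                             # user-facing camera focal length in pixels
    # Component positions relative to a chosen reference point (meters, x/y/z),
    # e.g., treating the environment-facing camera as the origin.
    user_cam_offset_m: Tuple[float, float, float] = (0.0, 0.07, 0.0)
    display_center_offset_m: Tuple[float, float, float] = (0.0, 0.0, 0.0)
```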
  • the camera system 118 includes a plurality of cameras configured to capture images or video (i.e., collections of images or frames) and capable of performing the functions described herein. It should be appreciated that each of the cameras of the camera system 118 may be embodied as any peripheral or integrated device suitable for capturing images, such as a still camera, a video camera, or other device capable of capturing video and/or images. In the illustrative embodiment, the camera system 118 includes a user-facing camera 126 and an environment-facing camera 128.
  • each of the user-facing camera 126, the environment-facing camera 128, and/or other cameras of the camera system 118 may be embodied as a two-dimensional (2D) camera (e.g., an RGB camera) or a three-dimensional (3D) camera.
  • 3D cameras include, for example, depth cameras, bifocal cameras, and/or cameras otherwise capable of generating a depth image, channel, or stream.
  • one or more cameras may include an infrared (IR) projector and an IR sensor such that the IR sensor estimates depth values of objects in the scene by analyzing the IR light pattern projected on the scene by the IR projector.
  • one or more of the cameras of the camera system 118 include at least two lenses and corresponding sensors configured to capture images from at least two different viewpoints of a scene (e.g., a stereo camera).
  • the user-facing camera 126 is configured to capture images of the user of the mobile computing device 100.
  • the user-facing camera 126 captures images of the user's face, which may be analyzed to determine the location of the user's eye(s) relative to the mobile computing device 100 (e.g., relative to the user-facing camera 126 or another reference point of the mobile computing device 100).
  • the environment-facing camera 128 captures images of the real-world environment of the mobile computing device 100.
  • the user-facing camera 126 and the environment-facing camera 128 are positioned on opposite sides of the mobile computing device 100 and therefore have fields of view in opposite directions.
  • the user-facing camera 126 is on the same side of the mobile computing device 100 as the display 120 such that the user-facing camera 126 can capture images of the user as she views the display 120.
  • the display 120 of the mobile computing device 100 may be embodied as any type of display on which information may be displayed to a user of the mobile computing device 100. Further, the display 120 may be embodied as, or otherwise use any suitable display technology including, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, a plasma display, a touchscreen display, and/or other display technology. Although only one display 120 is shown in the illustrative embodiment of FIG. 1, in other embodiments, the mobile computing device 100 may include multiple displays 120.
  • the mobile computing device 100 may include one or more sensors 122 configured to collect data useful in performing the functions described herein.
  • the sensors 122 may include a depth sensor that may be used to determine the distance of objects from the mobile computing device 100.
  • the sensors 122 may include an accelerometer, gyroscope, and/or magnetometer to determine the relative orientation of the mobile computing device 100.
  • the sensors 122 may be embodied as, or otherwise include, for example, proximity sensors, optical sensors, light sensors, audio sensors, temperature sensors, motion sensors, piezoelectric sensors, and/or other types of sensors.
  • the mobile computing device 100 may also include components and/or devices configured to facilitate the use of the sensor(s) 122.
  • the communication circuitry 124 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the mobile computing device 100 and other remote devices over a network (not shown). For example, in some embodiments, the mobile computing device 100 may offload one or more of the functions described herein (e.g., determination of the back projection) to a remote computing device.
  • the communication circuitry 124 may be configured to use any one or more communication technologies (e.g., wireless or wired communications) and associated protocols (e.g., Ethernet, Bluetooth ® , Wi-Fi ® , WiMAX, etc.) to effect such communication.
  • the mobile computing device 100 establishes an environment 200 for adjusting a perspective of a captured image for display on the display 120 of the mobile computing device 100.
  • the mobile computing device 100 captures an image of the user with the user-facing camera 126 and an image of the real-world environment of the mobile computing device 100 with the environment-facing camera 128.
  • the mobile computing device 100 determines a position of the user's eye(s) relative to the mobile computing device 100 based on the image captured by the user-facing camera 126 and a distance of an object(s) in the real-world environment relative to the mobile computing device 100 based on the image captured by the environment-facing camera 128.
  • the mobile computing device 100 then generates a back projection of the real-world object(s) to the display 120 and displays a corresponding image on the display 120 (e.g., including augmented reality modifications) based on the generated back projection.
  • the illustrative environment 200 of the mobile computing device 100 includes an image capturing module 202, an eye tracking module 204, an object distance determination module 206, an image projection module 208, and a display module 210.
  • Each of the modules of the environment 200 may be embodied as hardware, software, firmware, or a combination thereof.
  • each of the modules of the environment 200 is embodied as a circuit (e.g., an image capturing circuit, an eye tracking circuit, an object distance determination circuit, an image projection circuit, and a display circuit).
  • one or more of the illustrative modules may form a portion of another module.
  • the image projection module 208 may form a portion of the display module 210.
  • the image capturing module 202 controls the camera system 118 (e.g., the user- facing camera 126 and the environment-facing camera 128) to capture images within the field of view of the respective camera 126, 128.
  • the user-facing camera 126 is configured to capture images of the user's face (e.g., for eye detection/tracking).
  • the mobile computing device 100 may detect and/or track one or both of the user's eyes and, therefore, in the illustrative embodiment, the images captured by the user-facing camera 126 for analysis by the mobile computing device 100 include at least one of the user's eyes.
  • the environment-facing camera 128 is configured to capture images of the real-world environment of the mobile computing device 100. It should be appreciated that the captured scene may include any number of principal objects (e.g., distinct or otherwise important objects) although, for simplicity, such captured images are oftentimes described herein as having a single principal object.
  • the eye tracking module 204 determines the location/position of the user's eye relative to the mobile computing device 100 (e.g., relative to the user-facing camera 126 or another reference point). In doing so, the eye tracking module 204 detects the existence of one or more person's eyes in an image captured by the user-facing camera 126 and determines the location of the eye in the captured image (i.e., the portion of the image associated with the eye) that is to be tracked. To do so, the eye tracking module 204 may use any suitable techniques, algorithms, and/or image filters (e.g., edge detection and image segmentation).
  • the eye tracking module 204 determines the location of the user's face in the captured image and utilizes the location of the user's face to, for example, reduce the region of the captured image that is analyzed to locate the user's eye(s). Additionally, in some embodiments, the eye tracking module 204 analyzes the user's eyes to determine various characteristics/features of the user's eyes (e.g., glint location, iris location, pupil location, iris-pupil contrast, eye size/shape, and/or other characteristics) to determine the gaze direction of the user.
  • the user's gaze direction may be used, for example, to determine whether the user is looking at the display 120, to identify objects in the scene captured by the environment-facing camera 128 toward which the user's gaze is directed (e.g., principal objects), to determine a relative location or position (e.g., in three-dimensional space) of the user's eye(s), and/or for other purposes. Additionally, in some embodiments, the eye tracking module 204 may further determine the orientation of the user's head or otherwise determine the user's head pose.
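As noted above, the eye tracking module 204 may first locate the user's face and then search for the eye(s) only within that region. A minimal illustration of that narrowing step, using OpenCV's stock Haar cascades purely as a stand-in for whatever detector an implementation actually employs:

```python
import cv2

# Haar cascades shipped with OpenCV; used here only as an example detector.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")


def detect_eyes(gray_frame):
    """Locate the face first, then search for eyes only inside the face region,
    reducing the portion of the captured image that must be analyzed."""
    eye_centers = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray_frame, 1.3, 5):
        face_roi = gray_frame[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi):
            # Convert eye centers back to full-image coordinates.
            eye_centers.append((x + ex + ew // 2, y + ey + eh // 2))
    return eye_centers
```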
  • the eye tracking module 204 determines the distance of the user's eye relative to the mobile computing device 100 (e.g., relative to the user-facing camera 126 or another reference point). It should be appreciated that the eye tracking module 204 may utilize any suitable algorithms and/or techniques for doing so.
  • the user-facing camera 126 may be embodied as a depth camera or other 3D camera capable of generating data (e.g., a depth stream or depth image) corresponding with the distance of objects in the captured scene.
  • the eye tracking module 204 may use face detection in conjunction with a known approximate size of a person's face to estimate the distance of the user's face from the mobile computing device 100.
  • the eye tracking module 204 may analyze the region of the captured image corresponding with the user's eye to find reflections of light off the user's cornea (i.e., the glints) and/or pupil. Based on those reflections, the eye tracking module 204 may determine the location or position (e.g., in three-dimensional space) of the user's eye relative to the mobile computing device 100.
  • the eye tracking module 204 may utilize data generated by the sensors 122 (e.g., depth/distance information) in conjunction with the location of the user's eye in the captured image to determine the location of the user's eye relative to the mobile computing device 100.
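For instance, when the user-facing camera 126 provides depth data, the detected eye pixel can be back-projected into a three-dimensional position with a simple pinhole model, roughly as sketched below; the function and parameter names are illustrative rather than taken from the patent.

```python
import numpy as np


def eye_position_from_depth(eye_px, eye_py, depth_m, fx, fy, cx, cy):
    """Back-project a detected eye pixel into 3D camera coordinates.

    (eye_px, eye_py) is the eye location in the user-facing camera image,
    depth_m its distance from the camera (e.g., from a depth stream), and
    fx, fy, cx, cy are the camera's focal lengths and principal point in pixels.
    """
    x = (eye_px - cx) * depth_m / fx
    y = (eye_py - cy) * depth_m / fy
    return np.array([x, y, depth_m])
```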
  • the object distance determination module 206 determines the distance of one or more objects in the real-world environment captured by the environment-facing camera 128 relative to the mobile computing device 100 (e.g., relative to the environment-facing camera 128 or another reference point of the mobile computing device 100).
  • the real-world environment within the field of view of and therefore captured by the environment- facing camera 128 may include any number of objects. Accordingly, depending on the particular embodiment, the object distance determination module 206 may determine the distance of each of the objects from the mobile computing device 100 or the distance of a subset of the objects from the mobile computing device 100 (e.g., a single object).
  • the object distance determination module 206 identifies a principal object in the captured image for which to determine the distance.
  • a principal object may be, for example, an object toward which the user's gaze is directed or a main object in the scene.
  • the object distance determination module 206 assumes that each of the objects in the scene is approximately the same distance from the mobile computing device 100 for simplicity. Further, in some embodiments, the object distance determination module 206 assumes or otherwise sets the distance of the object(s) to a predefined distance from the mobile computing device 100.
  • the predefined distance may be a value significantly greater than the focal length of the environment-facing camera 128, a value approximating infinity (e.g., the largest number in the available number space), or another predefined distance value.
  • a number representing infinity may be referred to herein simply as "infinity."
  • the object distance determination module 206 may determine the distance of an object in the real-world environment relative to the mobile computing device 100 using any suitable techniques and/or algorithms.
  • the object distance determination module 206 may use one or more of the techniques and algorithms described above in reference to determining the distance of the user relative to the mobile computing device 100 (i.e., by the eye tracking module 204).
  • the environment-facing camera 128 may be embodied as a depth camera or other 3D camera, which generates depth data for determining the distance of objects in the captured image.
  • the object distance determination module 206 may reference stored data regarding the size of certain objects to estimate the distance of the objects to the mobile computing device 100 in some embodiments.
  • the object distance determination module 206 may utilize data generated by the sensors 122 (e.g., depth/distance information) to determine the distance and/or location of the objects relative to the mobile computing device 100.
  • the object distance determination module 206 may assign a distance of a particular object to a predefined value. For example, the object distance determination module 206 may assume the object is infinitely far in response to determining that the object's distance exceeds a predefined threshold. That is, in some embodiments, objects that are at least a threshold distance (e.g., four meters) from the mobile computing device 100 may be treated as though they are, for example, infinitely far from the mobile computing device 100.
  • the distance of the object(s) relative to the mobile computing device 100 may be used to determine the location of the object(s) relative to the mobile computing device 100 and to generate a back projection of the real-world environment (e.g., based on the device parameters 130).
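A minimal sketch of this distance handling, assuming the four-meter threshold mentioned above and an arbitrary large stand-in value for "infinity":

```python
FAR_THRESHOLD_M = 4.0       # beyond this, treat the object as effectively infinitely far
INFINITY_DISTANCE_M = 1e6   # hypothetical stand-in for "infinity"


def effective_object_distance(measured_distance_m=None):
    """Return the object distance to use for the back projection.

    If no measurement is available, fall back to the predefined value; if the
    measurement exceeds the threshold, clamp it to that value.
    """
    if measured_distance_m is None or measured_distance_m > FAR_THRESHOLD_M:
        return INFINITY_DISTANCE_M
    return measured_distance_m
```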
  • the image projection module 208 generates a back projection of the real-world environment captured by the environment-facing camera 128 to the display 120.
  • the image projection module 208 generates the back projection based on the distance of the object in the real-world environment relative to the mobile computing device 100 (e.g., infinity, a predefined distance, or the determined distance), the position/location of the user's eye relative to the mobile computing device 100, and/or the device parameters 130 of the mobile computing device 100 (e.g., intrinsic parameters of the cameras 126, 128, the size of the mobile computing device 100 or the display 120, etc.).
  • the image projection module 208 may utilize any suitable techniques and/or algorithms for generating the back projection image for display on the display 120 of the mobile computing device 100. As described below, FIGS. 4-8 show illustrative embodiments for doing so.
  • the display module 210 renders images on the display 120 for the user of the mobile computing device 100 to view.
  • the display module 210 may render an image based on the back projection generated by the image projection module 208 on the display 120.
  • the back projections may not be "projected" onto the display 120 in the traditional sense; rather, corresponding images may be generated for rendering on the display 120.
  • the display module 210 may modify the back projection image to include virtual objects, characters, and/or environments for augmented reality and render the modified image on the display 120.
  • the communication module 212 handles the communication between the mobile computing device 100 and remote devices through the corresponding network.
  • the mobile computing device 100 may communicate with a remote computing device to offload one or more of the functions of the mobile computing device 100 described herein to a remote computing device (e.g., for determination of the back projection image or modification of the images for augmented reality).
  • relevant data associated with such analyses may be transmitted by the remote computing device and received by the communication module 212 of the mobile computing device 100.
  • the mobile computing device 100 may execute a method 300 for adjusting a perspective of a captured image for display by the mobile computing device 100.
  • the illustrative method 300 begins with blocks 302 and 310.
  • the mobile computing device 100 captures an image of the user's face with the user-facing camera 126.
  • the user-facing camera 126 may capture images continuously (e.g., as a video stream) for analysis or in response to a user input (e.g., a button press).
  • the mobile computing device 100 identifies the user's eye(s) in the captured image.
  • the mobile computing device 100 may utilize any suitable techniques and/or algorithms for doing so (e.g., edge detection and/or image segmentation). Further, depending on the particular embodiment, the mobile computing device 100 may determine and utilize the location of one or both of the user's eyes.
  • the mobile computing device 100 determines the position of the user's eye(s) relative to the user-facing camera 126 or another reference point of the mobile computing device 100. In doing so, the mobile computing device 100 determines the distance of the user, or more particularly, of the user's eye(s) relative to the user-facing camera 126. As discussed above, the mobile computing device 100 may make such a determination based on, for example, a depth image or other depth information generated by the user-facing camera 126 (i.e., if the user-facing camera 126 is a depth camera or other 3D camera), user gaze information, distance information generated by the sensors 122, device parameters 130, and/or other relevant data.
  • the distance of the user relative to the user-facing camera 126 may be used in conjunction with the location of the user's eye(s) in the captured image to determine the position of the user's eye(s) relative to the user-facing camera 126 or other reference point of the mobile computing device 100.
  • the device parameters 130 may include information regarding the locations of the components of the mobile computing device 100 relative to one another, thereby establishing a coordinate system having a reference point as the origin.
  • the reference point selected to be the origin may vary depending on the particular embodiment and may be, for example, the location of the user-facing camera 126, the location of the environment-facing camera 128, the center of the display 120, or another suitable location.
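One way such a coordinate system might be used is sketched below: a point measured in one component's frame (for example, the eye position from the user-facing camera 126) is re-expressed relative to the chosen reference origin using an offset and rotation taken from the device parameters 130. The identity-rotation default and the example offset are assumptions made here for illustration; a real device would typically also account for the opposite orientations of the two cameras.

```python
import numpy as np


def to_device_frame(point_in_component_m, component_offset_m, component_rotation=None):
    """Express a point measured in a component's frame (e.g., the user-facing
    camera) in the shared device coordinate system whose origin is the chosen
    reference point. An identity rotation assumes the component's axes are
    aligned with the device axes."""
    rotation = np.eye(3) if component_rotation is None else np.asarray(component_rotation)
    return rotation @ np.asarray(point_in_component_m) + np.asarray(component_offset_m)


# Example: an eye detected 0.35 m in front of the user-facing camera, with that
# camera mounted 0.07 m above the reference origin (hypothetical numbers).
eye_in_device_frame = to_device_frame([0.0, 0.0, 0.35], [0.0, 0.07, 0.0])
```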
  • the mobile computing device 100 captures an image of the real-world environment of the mobile computing device 100 with the environment-facing camera 128. Similar to the user-facing camera 126, depending on the particular embodiment, the environment-facing camera 128 may capture images continuously (e.g., as a video stream) for analysis or in response to a user input such as a button press. For example, in some embodiments, the user may provide some input to commence execution of the method 300 in which the mobile computing device 100 executes each of blocks 302 and 310. As indicated above, in the illustrative embodiment, the environment-facing camera 128 is positioned opposite the user-facing camera 126 such that the environment-facing camera 128 has a field of view similar to that of the user (i.e., in the same general direction).
  • the mobile computing device 100 determines the distance of one or more objects in the corresponding real-world environment relative to the environment-facing camera 128 or another reference point of the mobile computing device 100. As discussed above, the mobile computing device 100 may make such a determination based on, for example, depth information generated by the environment-facing camera 128 (i.e., if the environment-facing camera 128 is a depth camera or other 3D camera), distance information generated by the sensors 122, device parameters 130, and/or other relevant data. Further, the object(s) for which the relative distance is determined may vary depending on the particular embodiment.
  • the mobile computing device 100 may determine the relative distance of each object or each principal object in the captured image, whereas in other embodiments, the mobile computing device 100 may determine only the relative distance of the main object in the captured image (e.g., the object to which the user's gaze is directed or otherwise determined to be the primary object). Further, as indicated above, the mobile computing device 100 may set the distance of the object(s) to a predefined distance in block 314.
  • the mobile computing device 100 generates a back projection of the real-world environment to the display 120 based on the distance of the real-world object(s) relative to the mobile computing device 100 (e.g., the determined or predefined distance), the position of the user's eye relative to the mobile computing device 100, and/or one or more device parameters 130 (e.g., intrinsic parameters of the cameras 126, 128, the size of the mobile computing device 100 or the display 120, etc.).
  • the mobile computing device 100 may generate a back projection image using any suitable algorithms and/or techniques for doing so. For example, in some embodiments, the mobile computing device 100 may generate a back projection by executing a method 400 as shown in FIG. 4.
  • the mobile computing device 100 may generate the back projection by executing a method 600 as shown in FIG. 6.
  • the methods of FIGS. 4 and 6 are provided as illustrative embodiments and do not limit the concepts described herein.
  • the mobile computing device 100 displays an image on the display 120 based on the generated back projection in block 318.
  • the mobile computing device 100 may modify the back projection or corresponding image for augmented reality purposes as discussed above.
  • the mobile computing device 100 may incorporate virtual characters, objects, and/or other virtual features into the constructed/generated back projection image for rendering on the display 120.
  • the mobile computing device 100 may not modify the back projection for augmented reality or other purposes such that the viewer truly feels as though the display 120 is a window through which she can see the real- world environment occluded by the mobile computing device 100.
  • the illustrative method 400 begins with block 402 in which the mobile computing device 100 determines whether to generate a back projection. If so, in block 404, the mobile computing device 100 determines a ray 502 from the user's eye 504 through the next display pixel 506 of the display 120 to the real-world object(s) 508 as shown in FIG. 5. It should be appreciated that which display pixel 506 constitutes the "next" display pixel 506 may vary depending on the particular embodiment. In the illustrative embodiment, the mobile computing device 100 selects a display pixel 506 for which a ray 502 has not yet been determined during the execution of the method 400 as the "next" display pixel 506.
  • the mobile computing device 100 may determine a ray 502 through another sub-region of the display 120 (i.e., a sub-region other than the display pixels at, for example, a different level of granularity).
  • as discussed above, the device parameters 130 of the mobile computing device 100 may include data regarding relative locations of the various components of the mobile computing device 100 and establish, for example, a three-dimensional coordinate system having some reference point as the origin.
  • the environment-facing camera 128 may be the origin. It should be appreciated that every pixel/point on the display 120 is located at some point relative to the environment-facing camera 128. Accordingly, in some embodiments, the mobile computing device 100 determines the corresponding three- dimensional coordinates of the user's eye 504 and the object(s) 508 based on the analyses described above.
  • the mobile computing device 100 determines a corresponding ray 502 from the user's eye 504 through each of the display pixels 506 to the object(s) 508.
  • the mobile computing device 100 identifies the image pixel of the image of the real-world environment captured by the environment-facing camera 128 (see block 310 of FIG. 3) corresponding with the position/location 510 of the real-world object(s) to which the corresponding ray 502 is directed. For example, based on the device parameters 130 such as the intrinsic parameters (e.g., focal length) of the environment-facing camera 128 and the real-world coordinates or relative location of the object(s) 508, the mobile computing device 100 may determine how the image captured by the environment-facing camera 128 is projected from the real-world environment to the captured image coordinates. In such embodiments, the mobile computing device 100 may thereby identify the image pixel associated with the real-world coordinates (i.e., the location 510) to which the ray 502 is directed.
  • the mobile computing device 100 determines whether any display pixels 506 are remaining. If so, the method 400 returns to block 404 in which the mobile computing device 100 determines a ray 502 from the user's eye 504 through the next display pixel 506 to the real-world object(s) 508.
  • the mobile computing device 100 determines a ray 502 from the user's eye 504 through a corresponding display pixel 506 to the object(s) 508 in the real-world environment for each display pixel 506 of the display 120 (or other sub-region of the display 120) and identifies an image pixel of the image of the real-world environment captured by the environment-facing camera 128 corresponding with a location of the object(s) in the real-world environment to which the corresponding ray 502 is directed for each determined ray 502.
  • the mobile computing device 100 constructs an image (e.g., a back projection image) from the identified image pixels for display on the mobile computing device 100.
  • the mobile computing device 100 generates an image having the identified image pixels in the appropriate image coordinates of the generated image. In other words, the mobile computing device 100 may back project the visual content from a location to which each corresponding ray 502 is directed to the corresponding point on the display 120 through which the ray 502 is directed.
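A rough sketch of this per-pixel construction is shown below. It assumes, beyond what the patent states, a pinhole environment-facing camera sitting at the device origin and looking along +z, a display lying in the z = 0 plane and centered on that origin, the user's eye expressed in the same coordinate system at negative z, and a scene approximated by a single fronto-parallel plane at the determined object distance.

```python
import numpy as np


def back_project_to_display(env_img, eye_pos, display_w_m, display_h_m,
                            res_w, res_h, object_depth_m, fx, fy, cx, cy):
    """Construct a back projection image pixel by pixel (illustrative sketch)."""
    out = np.zeros((res_h, res_w, env_img.shape[2]), dtype=env_img.dtype)
    for r in range(res_h):
        for c in range(res_w):
            # 3D position of this display pixel on the z = 0 display plane.
            px = (c / (res_w - 1) - 0.5) * display_w_m
            py = (r / (res_h - 1) - 0.5) * display_h_m
            pixel_pos = np.array([px, py, 0.0])
            # Ray from the eye through the display pixel, intersected with the
            # object plane z = object_depth_m.
            direction = pixel_pos - eye_pos
            if direction[2] <= 0:
                continue  # ray does not travel toward the scene
            t = (object_depth_m - eye_pos[2]) / direction[2]
            hit = eye_pos + t * direction
            # Project the hit point into the environment-facing camera image to
            # identify the corresponding image pixel.
            u = int(round(fx * hit[0] / hit[2] + cx))
            v = int(round(fy * hit[1] / hit[2] + cy))
            if 0 <= v < env_img.shape[0] and 0 <= u < env_img.shape[1]:
                out[r, c] = env_img[v, u]
    return out
```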
  • the mobile computing device 100 may execute a method 600 for generating a back projection of the real-world environment of the mobile computing device 100 as indicated above.
  • the illustrative method 600 begins with block 602 in which the mobile computing device 100 determines whether to generate a back projection. If so, in block 604, the mobile computing device 100 determines the angular size 702 of the mobile computing device 100 from the user's perspective based on the distance 704 of the user 706 relative to the user-facing camera 126 (or other reference point of the mobile computing device 100) and device parameters 130 as shown with regard to FIGS. 7-8.
  • the device parameters 130 may include, for example, the size, shape, and other characteristics of the mobile computing device 100 and/or components of the mobile computing device 100.
  • the angular size of an object is indicative of the viewing angle required to encompass the object from a reference point (e.g., a viewer or camera) that is a given distance from the object.
  • for example, the angular size may be determined according to θ = 2·arctan(d / (2D)), where θ is the angular size of the object, d is an actual size of the corresponding object, and D is a distance between the corresponding object and the perspective point (i.e., the point from which the angular size is determined).
  • the angular size of an object may be otherwise determined. It should further be appreciated that, while the angular size may at times be discussed herein with respect to two dimensions, the techniques described herein may be applied to three dimensions as well (e.g., accounting for both a horizontal angular size and a vertical angular size, determining the angular size across a diagonal of the object, projecting the three-dimensional size to two dimensions, employing a three-dimensional equivalent of the angular size formula provided above, etc.).
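A small numeric check of the relationship above (the helper function and the example numbers are illustrative):

```python
import math


def angular_size(d, D):
    """Angular size, in radians, of an object of actual size d viewed from
    distance D: theta = 2 * arctan(d / (2 * D))."""
    return 2.0 * math.atan(d / (2.0 * D))


# Example: a 0.15 m wide device held 0.40 m from the eye subtends about 21 degrees.
print(math.degrees(angular_size(0.15, 0.40)))  # ~21.2
```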
  • the mobile computing device 100 determines the distance 708 of the real-world object(s) 710 relative to the user 706. In the illustrative embodiment, the mobile computing device 100 makes such a determination based on the distance 704 of the user 706 to the user-facing camera 126 (see block 308 of FIG. 3) and the distance 712 of the real-world object(s) 710 to the environment-facing camera 128 (see block 312 of FIG. 3) or other reference point of the mobile computing device 100.
  • the mobile computing device 100 may, in some embodiments, assume the user 706, the mobile computing device 100, and the object(s) 710 are relatively collinear and add the two previously calculated distances to determine the distance 708 between the user 706 and the real-world object(s) 710 (e.g., if the objects are far from the user). In other embodiments, the mobile computing device 100 may employ a more sophisticated algorithm for determining the distance 708 between the user 706 and the real-world object(s) 710.
  • the mobile computing device 100 may make such a determination based on the relative locations of the mobile computing device 100, the user 706 (or, more particularly, the user's eye(s)), and the object(s) 710 to one another or to a particular reference point (e.g., a defined origin) of the mobile computing device 100, and the known distances 704, 712 between the user 706 and the mobile computing device 100 and between the mobile computing device 100 and the object(s) 710 (e.g., based on the properties of a triangle).
  • the mobile computing device 100 determines the region 714 of the real-world object(s) 710 that is occluded by the mobile computing device 100 from the user's perspective. In the illustrative embodiment, the mobile computing device 100 makes such a determination based on the angular size 702 of the mobile computing device 100 from the user's perspective and the distance 708 of the real-world object 710 relative to the user 706.
  • the mobile computing device 100 determines a corrected zoom magnitude of the environment-facing camera 128 based on the region 714 of the real-world object(s) occluded from the user's perspective and the distance 712 of the real-world object(s) to the environment-facing camera 128. In other words, the mobile computing device 100 determines a zoom magnitude of the environment-facing camera 128 needed to capture an image with the environment-facing camera 128 corresponding with the region 714 of the object(s) 710 occluded by the mobile computing device 100 from the user's perspective.
  • the device parameters 130 may include intrinsic parameters (e.g., the focal length, image projection parameters, etc.) of the camera 128.
  • such device parameters 130 may be used, in some embodiments, to identify the zoom magnitude corresponding with capturing a particular region of an environment that is a certain distance from the camera 128.
  • the zoom magnitude is determined such that the environment-facing camera 128 captures an image having only image pixels corresponding with visual content (e.g., features of the object(s) 710) of the object(s) 710 from the region 714 of the object(s) occluded by the mobile computing device 100 from the user's perspective.
  • the mobile computing device 100 determines the angular size 716 of a region 718 of the real-world object(s) 710 from the perspective of the environment-facing camera 128 corresponding with the region 714 of the real-world object(s) 710 occluded from the user's perspective.
  • the mobile computing device 100 may make such a determination based on, for example, the device parameters 130 and/or corresponding geometry. That is, in some embodiments, the mobile computing device 100 may determine the angular size 716 based on the size of the region 714, the distance 712, and the angular size formula provided above.
  • in some embodiments, the region 718 and the region 714 are the same region, whereas in other embodiments, those regions may differ to some extent.
  • the corrected zoom magnitude may diverge from the precise zoom required to generate the region 718 (e.g., based on technological, hardware, and/or spatial limitations).
  • the mobile computing device 100 generates an image with the corrected zoom magnitude for display on the mobile computing device 100.
  • the mobile computing device 100 may capture a new image with the environment-facing camera 128 from the same perspective but having a different zoom magnitude.
  • the mobile computing device 100 may, for example, modify the original image captured by the environment-facing camera 128 to generate an image with the desired zoom magnitude and other characteristics.
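Putting the pieces of this method together in one dimension, a rough sketch of the zoom correction might look as follows. Treating the user, device, and object as collinear and defining the zoom magnitude as a ratio of fields of view are simplifications made here for illustration, not requirements stated in the patent.

```python
import math


def corrected_zoom_magnitude(device_width_m, user_to_device_m,
                             device_to_object_m, camera_fov_rad):
    """Estimate the zoom needed so the camera captures only the occluded region."""
    # Angular size of the device as seen by the user.
    theta_device = 2.0 * math.atan(device_width_m / (2.0 * user_to_device_m))
    # Width of the region of the object occluded by the device (at the object).
    user_to_object_m = user_to_device_m + device_to_object_m
    occluded_width_m = 2.0 * user_to_object_m * math.tan(theta_device / 2.0)
    # Field of view the environment-facing camera needs to cover just that region.
    theta_needed = 2.0 * math.atan(occluded_width_m / (2.0 * device_to_object_m))
    return camera_fov_rad / theta_needed


# Example: a 0.15 m wide device 0.40 m from the user, an object 2.0 m beyond the
# device, and a camera with a 70-degree field of view give a zoom factor of ~2.8.
print(corrected_zoom_magnitude(0.15, 0.40, 2.0, math.radians(70.0)))
```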
  • Referring now to FIGS. 9-11, simplified illustrations of a real-world environment 900 (see, for example, FIG. 9) of the mobile computing device 100 and of a user holding the mobile computing device 100 (see FIGS. 10-11) are shown.
  • the real-world environment 900 may be captured by the environment-facing camera 128 and rendered on the display 120.
  • the captured images may be modified to incorporate, for example, virtual characters, objects, or other features into the captured image for display on the mobile computing device 100.
  • the image 902 displayed on the display 120 includes real-world objects 904 that are also visible in the real-world environment 900 around the periphery of the mobile computing device 100 (see, for example, FIG. 10).
  • certain real-world objects 904 that are visible to the user are duplicated in the displayed image 902, thereby obstructing the visual flow.
  • the image 906 displayed on the display 120 includes the same visual content as what is occluded by the mobile computing device 100 viewed from the same perspective as the user. Because visual continuity between the displayed image 906 and the background real-world environment 900 is maintained, the user feels as though she is looking at the real-world environment 900 through a window.
  • An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
  • Example 1 includes a mobile computing device for adjusting a perspective of a captured image for display, the mobile computing device comprising a display; a camera system comprising a first camera and a second camera, the camera system to capture (i) a first image of a user of the mobile computing device with the first camera and (ii) a second image of a real-world environment of the mobile computing device with the second camera; an eye tracking module to determine a position of an eye of the user relative to the mobile computing device based on the first captured image; an object distance determination module to determine a distance of an object in the real-world environment relative to the mobile computing device based on the second captured image; and an image projection module to generate a back projection of the real-world environment captured by the second camera to the display based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.
  • Example 2 includes the subject matter of Example 1, and wherein to generate the back projection comprises to determine, for each display pixel of the display, a ray from the user's eye through a corresponding display pixel to the object in the real-world environment; identify, for each determined ray, an image pixel of the second captured image of the real-world environment corresponding with a position of the object in the real-world environment to which the corresponding ray is directed; and construct a back projection image based on the identified image pixels for display on the display of the mobile computing device.
  • Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to generate the back projection comprises to determine an angular size of the mobile computing device from a perspective of the user; determine a distance of the object in the real-world environment relative to the user; determine a region of the object occluded by the mobile computing device from the user's perspective; determine a corrected zoom magnitude of the second camera based on the determined region of the object occluded by the mobile computing device and the distance of the object relative to the mobile computing device; and generate a back projection image based on the corrected zoom magnitude for display on the display of the mobile computing device.
  • Example 4 includes the subject matter of any of Examples 1-3, and wherein to determine the corrected zoom magnitude comprises to determine an angular size of a region of the object from a perspective of the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
  • Example 5 includes the subject matter of any of Examples 1-4, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
  • Example 6 includes the subject matter of any of Examples 1-5, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera having only image pixels corresponding with features of the object from the region of the object occluded by the mobile computing device from the user's perspective.
  • Example 7 includes the subject matter of any of Examples 1-6, and wherein to determine the angular size of the mobile computing device from the user's perspective comprises to determine an angular size of the mobile computing device from the user's perspective based on a distance of the user's eye relative to the mobile computing device and a size of the mobile computing device; determine the distance of the object relative to the user comprises to determine the distance of the object relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; and determine the region of the object occluded by the mobile computing device from the user's perspective comprises to determine the angular size of the region of the object occluded by the mobile computing device based on the angular size of the mobile computing device from the user's perspective and the distance of the object relative to the user.
  • Example 8 includes the subject matter of any of Examples 1-7, and wherein each angular size, δ, is determined according to δ = 2 arctan(d/2D), wherein d is a size of the corresponding object; and D is a distance between the corresponding object and a point, the point being a perspective from which the angular size is determined.
  • Example 9 includes the subject matter of any of Examples 1-8, and wherein to capture the first image of the user comprises to capture an image of a face of the user; and determine the position of the user's eye relative to the mobile computing device comprises to identify a location of the user's eye in the image of the user's face.
  • Example 10 includes the subject matter of any of Examples 1-9, and wherein to determine the position of the user's eye relative to the mobile computing device comprises to determine a distance of the user's eye to the mobile computing device.
  • Example 11 includes the subject matter of any of Examples 1-10, and wherein to determine the position of the user's eye relative to the mobile computing device comprises to determine a position of the user's eye relative to the first camera; and determine the distance of the object in the real-world environment relative to the mobile computing device comprises to determine a distance of the object relative to the second camera.
  • Example 12 includes the subject matter of any of Examples 1-11, and wherein the first camera has a field of view in a direction opposite a field of view of the second camera about the display.
  • Example 13 includes the subject matter of any of Examples 1-12, and wherein to determine the distance of the object in the real-world environment relative to the mobile computing device comprises to set a distance of the object relative to the mobile computing device to a predefined distance.
  • Example 14 includes the subject matter of any of Examples 1-13, and wherein the predefined distance is greater than a focal length of the second camera.
  • Example 15 includes the subject matter of any of Examples 1-14, and further including a display module to display an image on the display based on the generated back projection of the real-world environment captured by the second camera.
  • Example 16 includes the subject matter of any of Examples 1-15, and wherein to display the image based on the generated back projection comprises to display an image corresponding with the back projection modified to include augmented reality features.
  • Example 17 includes the subject matter of any of Examples 1-16, and wherein the at least one device parameter comprises at least one of (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of components of the mobile computing device relative to a reference point.
  • Example 18 includes a method for adjusting a perspective of a captured image for display on a mobile computing device, the method comprising capturing, by a first camera of the mobile computing device, a first image of a user of the mobile computing device; determining, by the mobile computing device, a position of an eye of the user relative to the mobile computing device based on the first captured image; capturing, by a second camera of the mobile computing device different from the first camera, a second image of a real-world environment of the mobile computing device; determining, by the mobile computing device, a distance of an object in the real-world environment relative to the mobile computing device based on the second captured image; and generating, by the mobile computing device, a back projection of the real-world environment captured by the second camera to a display of the mobile computing device based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.
  • Example 19 includes the subject matter of Example 18, and wherein generating the back projection comprises determining, for each display pixel of the display, a ray from the user's eye through a corresponding display pixel to the object in the real-world environment; identifying, for each determined ray, an image pixel of the second captured image of the real-world environment corresponding with a position of the object in the real-world environment to which the corresponding ray is directed; and constructing a back projection image based on the identified image pixels for display on the display of the mobile computing device.
  • Example 20 includes the subject matter of any of Examples 18 and 19, and wherein generating the back projection comprises determining an angular size of the mobile computing device from a perspective of the user; determining a distance of the object in the real-world environment relative to the user; determining a region of the object occluded by the mobile computing device from the user's perspective; determining a corrected zoom magnitude of the second camera based on the determined region of the object occluded by the mobile computing device and the distance of the object relative to the mobile computing device; and generating a back projection image based on the corrected zoom magnitude for display on the display of the mobile computing device.
  • Example 21 includes the subject matter of any of Examples 18-20, and wherein determining the corrected zoom magnitude comprises determining an angular size of a region of the object from a perspective of the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
  • Example 22 includes the subject matter of any of Examples 18-21, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
  • Example 23 includes the subject matter of any of Examples 18-22, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera having only image pixels corresponding with features of the object from the region of the object occluded by the mobile computing device from the user's perspective.
  • Example 24 includes the subject matter of any of Examples 18-23, and wherein determining the angular size of the mobile computing device from the user's perspective comprises determining an angular size of the mobile computing device from the user's perspective based on a distance of the user's eye relative to the mobile computing device and a size of the mobile computing device; determining the distance of the object relative to the user comprises determining the distance of the object relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; and determining the region of the object occluded by the mobile computing device from the user's perspective comprises determining the region of the object occluded by the mobile computing device based on the angular size of the mobile computing device from the user's perspective and the distance of the object relative to the user.
  • Example 25 includes the subject matter of any of Examples 18-24, and wherein each angular size, δ, is determined according to δ = 2 arctan(d/2D), wherein d is a size of the corresponding object; and D is a distance between the corresponding object and a point, the point being a perspective from which the angular size is determined.
  • Example 26 includes the subject matter of any of Examples 18-25, and wherein capturing the first image of the user comprises capturing an image of a face of the user; and determining the position of the user's eye relative to the mobile computing device comprises identifying a location of the user's eye in the image of the user's face.
  • Example 27 includes the subject matter of any of Examples 18-26, and wherein determining the position of the user's eye relative to the mobile computing device comprises determining a distance of the user's eye to the mobile computing device.
  • Example 28 includes the subject matter of any of Examples 18-27, and wherein determining the position of the user's eye relative to the mobile computing device comprises determining a position of the user's eye relative to the first camera; and determining the distance of the object in the real-world environment relative to the mobile computing device comprises determining a distance of the object relative to the second camera.
  • Example 29 includes the subject matter of any of Examples 18-28, and wherein the first camera has a field of view in a direction opposite a field of view of the second camera about the display.
  • Example 30 includes the subject matter of any of Examples 18-29, and wherein determining the distance of the object in the real-world environment relative to the mobile computing device comprises setting a distance of the object relative to the mobile computing device to a predefined distance.
  • Example 31 includes the subject matter of any of Examples 18-30, and wherein the predefined distance is greater than a focal length of the second camera.
  • Example 32 includes the subject matter of any of Examples 18-31, and further including displaying, by the mobile computing device, an image on the display based on the generated back projection of the real-world environment captured by the second camera.
  • Example 33 includes the subject matter of any of Examples 18-32, and wherein displaying the image based on the generated back projection comprises displaying an image corresponding with the back projection modified to include augmented reality features.
  • Example 34 includes the subject matter of any of Examples 18-33, and wherein the at least one device parameter comprises at least one of (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of components of the mobile computing device relative to a reference point.
  • Example 35 includes a mobile computing device comprising a processor; and a memory having stored therein a plurality of instructions that when executed by the processor cause the mobile computing device to perform the method of any of Examples 18-34.
  • Example 36 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, result in a mobile computing device performing the method of any of Examples 18-34.
  • Example 37 includes a mobile computing device for adjusting a perspective of a captured image for display, the mobile computing device comprising means for capturing, by a first camera of the mobile computing device, a first image of a user of the mobile computing device; means for determining a position of an eye of the user relative to the mobile computing device based on the first captured image; means for capturing, by a second camera of the mobile computing device different from the first camera, a second image of a real-world environment of the mobile computing device; means for determining a distance of an object in the real-world environment relative to the mobile computing device based on the second captured image; and means for generating a back projection of the real-world environment captured by the second camera to a display of the mobile computing device based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.
  • Example 38 includes the subject matter of Example 37, and wherein the means for generating the back projection comprises means for determining, for each display pixel of the display, a ray from the user's eye through a corresponding display pixel to the object in the real-world environment; means for identifying, for each determined ray, an image pixel of the second captured image of the real-world environment corresponding with a position of the object in the real-world environment to which the corresponding ray is directed; and means for constructing a back projection image based on the identified image pixels for display on the display of the mobile computing device.
  • Example 39 includes the subject matter of any of Examples 37 and 38, and wherein the means for generating the back projection comprises means for determining an angular size of the mobile computing device from a perspective of the user; means for determining a distance of the object in the real-world environment relative to the user; means for determining a region of the object occluded by the mobile computing device from the user's perspective; means for determining a corrected zoom magnitude of the second camera based on the determined region of the object occluded by the mobile computing device and the distance of the object relative to the mobile computing device; and means for generating a back projection image based on the corrected zoom magnitude for display on the display of the mobile computing device.
  • Example 40 includes the subject matter of any of Examples 37-39, and wherein the means for determining the corrected zoom magnitude comprises means for determining an angular size of a region of the object from a perspective of the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
  • Example 41 includes the subject matter of any of Examples 37-40, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
  • Example 42 includes the subject matter of any of Examples 37-41, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera having only image pixels corresponding with features of the object from the region of the object occluded by the mobile computing device from the user's perspective.
  • Example 43 includes the subject matter of any of Examples 37-42, and wherein the means for determining the angular size of the mobile computing device from the user's perspective comprises means for determining an angular size of the mobile computing device from the user's perspective based on a distance of the user's eye relative to the mobile computing device and a size of the mobile computing device; means for determining the distance of the object relative to the user comprises means for determining the distance of the object relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; and means for determining the region of the object occluded by the mobile computing device from the user's perspective comprises means for determining the region of the object occluded by the mobile computing device based on the angular size of the mobile computing device from the user's perspective and the distance of the object relative to the user.
  • Example 44 includes the subject matter of any of Examples 37-43, and wherein each angular size, δ, is determined according to δ = 2 arctan(d/2D), wherein d is a size of the corresponding object; and D is a distance between the corresponding object and a point, the point being a perspective from which the angular size is determined.
  • Example 45 includes the subject matter of any of Examples 37-44, and wherein the means for capturing the first image of the user comprises means for capturing an image of a face of the user; and means for determining the position of the user's eye relative to the mobile computing device comprises means for identifying a location of the user's eye in the image of the user's face.
  • Example 46 includes the subject matter of any of Examples 37-45, and wherein the means for determining the position of the user's eye relative to the mobile computing device comprises means for determining a distance of the user's eye to the mobile computing device.
  • Example 47 includes the subject matter of any of Examples 37-46, and wherein the means for determining the position of the user's eye relative to the mobile computing device comprises means for determining a position of the user's eye relative to the first camera; and means for determining the distance of the object in the real-world environment relative to the mobile computing device comprises means for determining a distance of the object relative to the second camera.
  • Example 48 includes the subject matter of any of Examples 37-47, and wherein the first camera has a field of view in a direction opposite a field of view of the second camera about the display.
  • Example 49 includes the subject matter of any of Examples 37-48, and wherein the means for determining the distance of the object in the real-world environment relative to the mobile computing device comprises means for setting a distance of the object relative to the mobile computing device to a predefined distance.
  • Example 50 includes the subject matter of any of Examples 37-49, and wherein the predefined distance is greater than a focal length of the second camera.
  • Example 51 includes the subject matter of any of Examples 37-50, and further including means for displaying an image on the display based on the generated back projection of the real-world environment captured by the second camera.
  • Example 52 includes the subject matter of any of Examples 37-51, and wherein the means for displaying the image based on the generated back projection comprises means for displaying an image corresponding with the back projection modified to include augmented reality features.
  • Example 53 includes the subject matter of any of Examples 37-52, and wherein the at least one device parameter comprises at least one of (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of components of the mobile computing device relative to a reference point.

Abstract

Technologies for adjusting a perspective of a captured image for display on a mobile computing device include capturing a first image of a user by a first camera and a second image of a real-world environment by a second camera. The mobile computing device determines a position of an eye of the user relative to the mobile computing device based on the first captured image and a distance of an object in the real-world environment from the mobile computing device based on the second captured image. The mobile computing device generates a back projection of the real-world environment captured by the second camera to the display based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.

Description

TECHNOLOGIES FOR ADJUSTING A PERSPECTIVE OF
A CAPTURED IMAGE FOR DISPLAY
CROSS-REFERENCE TO RELATED U.S. PATENT APPLICATION
[0001] The present application claims priority to U.S. Utility Patent Application Serial No. 14/488,516, entitled "TECHNOLOGIES FOR ADJUSTING A PERSPECTIVE OF A CAPTURED IMAGE FOR DISPLAY," which was filed on September 17, 2014.
BACKGROUND
[0002] Augmented reality systems fuse the real-world and virtual-world environments by projecting virtual characters and objects into physical locations, thereby allowing for immersive experiences and novel interaction models. In particular, in some augmented reality systems, virtual characters or objects may be inserted into captured images of real-world environments (e.g., by overlaying a two- or three-dimensional rendering of a virtual character on a captured image or video stream of the real-world environment). In some systems, a physical object recognized in the captured image may be replaced by a virtual object associated with that physical object. For example, vehicles recognized in the captured image may be replaced with animated or cartoon-like vehicles.
[0003] Augmented reality systems have been implemented in both stationary and mobile computing devices. In some mobile augmented reality systems, a camera of a mobile computing device (e.g., a smart phone camera positioned opposite the display) captures images of the real-world environment. The augmented reality system then makes augmented reality modifications to the captured images and displays the augmented images in the display of the mobile computing device (e.g., in real time). In such a way, the user is able to see a virtual world corresponding with his or her actual real-world environment. However, because the user and the camera of the mobile computing device have different perspectives of the real-world environment, the immersive experience suffers due to an obstructed visual flow. For example, from the user's perspective, real-world objects (e.g., those at the periphery of the mobile computing device) are duplicated in the augmented reality renderings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
[0005] FIG. 1 is a simplified block diagram of at least one embodiment of a mobile computing device for adjusting a perspective of a captured image for display;
[0006] FIG. 2 is a simplified block diagram of at least one embodiment of an environment established by the mobile computing device of FIG. 1;
[0007] FIG. 3 is a simplified flow diagram of at least one embodiment of a method for adjusting a perspective of a captured image for display by the mobile computing device of FIG. 1;
[0008] FIG. 4 is a simplified flow diagram of at least one embodiment of a method for generating a back projection of a real-world environment of the mobile computing device of FIG. 1;
[0009] FIG. 5 is a simplified illustration of a user holding the mobile computing device of FIG. 1 during execution of the method of FIG. 4;
[0010] FIG. 6 is a simplified flow diagram of at least one other embodiment of a method for generating a back projection of the real-world environment of the mobile computing device of FIG. 1;
[0011] FIGS. 7-8 are simplified illustrations of the user holding the mobile computing device of FIG. 1 showing various angular relationships;
[0012] FIG. 9 is a simplified illustration of a real-world environment of the mobile computing device of FIG. 1;
[0013] FIG. 10 is a simplified illustration of the user holding the mobile computing device of FIG. 1 and a corresponding captured image displayed on the mobile computing device without an adjusted perspective; and
[0014] FIG. 11 is a simplified illustration of the user holding the mobile computing device of FIG. 1 and a corresponding captured image displayed on the mobile computing device having an adjusted perspective by virtue of the method of FIG. 3.
DETAILED DESCRIPTION OF THE DRAWINGS
[0015] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
[0016] References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one A, B, and C" can mean (A); (B); (C); (A and B); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (B and C); or (A, B, and C).
[0017] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
[0018] In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
[0019] Referring now to FIG. 1, a mobile computing device 100 for adjusting a perspective of a captured image for display is shown. In use, as described in more detail below, the mobile computing device 100 is configured to capture an image of a user of the mobile computing device 100 and an image of a real-world environment of the mobile computing device 100. The mobile computing device 100 further analyzes the captured image of the user to determine a position of the user's eye(s) relative to the mobile computing device 100. As discussed below, in doing so, the mobile computing device 100 may determine the distance of the user to the mobile computing device 100 and identify/detect the position of the user's eye(s) in the captured image. Additionally, the mobile computing device 100 determines a distance of one or more objects (e.g., a principal object and/or other objects in the captured scene) in the captured real-world environment relative to the mobile computing device 100. For example, as described below, the mobile computing device 100 may analyze the captured image of the real-world environment, utilize depth or distance sensor data, or otherwise determine the relative distance of the object depending on the particular embodiment. The mobile computing device 100 determines a back projection of the real-world environment to a display 120 of the mobile computing device 100 based on the distance of the real-world object relative to the mobile computing device 100, the position of the user's eye(s) relative to the mobile computing device 100, and one or more device parameters. As discussed below, the back projection may be embodied as a back projection image, a set of data (e.g., pixel values) usable to generate a back projection image, and/or other data indicative of the corresponding back projection image. As discussed below, the device parameters may include, for example, a focal length of a camera of the mobile computing device 100, a size of the display 120 or of the mobile computing device 100 itself, a location of components of the mobile computing device 100 relative to one another or a reference point, and/or other relevant information associated with the mobile computing device 100. The mobile computing device 100 displays an image based on the determined back projection and, in doing so, may apply virtual objects, characters, and/or scenery or otherwise modify the image for augmented reality. It should be appreciated that the techniques described herein result in an image back-projected to the display 120 such that the image visible on the display 120 maps directly, or near directly, to the real world such that the user feels as though she is looking at the real-world environment through a window. That is, in the illustrative embodiment, the displayed image includes the same content as that which is occluded by the mobile computing device 100 viewed from the same perspective as the user.
[0020] The mobile computing device 100 may be embodied as any type of computing device capable of performing the functions described herein. For example, the mobile computing device 100 may be embodied as a smartphone, cellular phone, wearable computing device, personal digital assistant, mobile Internet device, tablet computer, netbook, notebook, ultrabook, laptop computer, and/or any other mobile computing/communication device. As shown in FIG. 1, the illustrative mobile computing device 100 includes a processor 110, an input/output ("I/O") subsystem 112, a memory 114, a data storage 116, a camera system 118, a display 120, one or more sensors 122, and a communication circuitry 124. Of course, the mobile computing device 100 may include other or additional components, such as those commonly found in a typical computing device (e.g., various input/output devices and/or other components), in other embodiments. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, the memory 114, or portions thereof, may be incorporated in the processor 110 in some embodiments.
[0021] The processor 110 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 110 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. Similarly, the memory 114 may be embodied as any type of volatile or non- volatile memory or data storage capable of performing the functions described herein. In operation, the memory 114 may store various data and software used during operation of the mobile computing device 100 such as operating systems, applications, programs, libraries, and drivers. The memory 114 is communicatively coupled to the processor 110 via the I/O subsystem 112, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 110, the memory 114, and other components of the mobile computing device 100. For example, the I/O subsystem 112 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 112 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 110, the memory 114, and other components of the mobile computing device 100, on a single integrated circuit chip.
[0022] The data storage 116 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. In the illustrative embodiment, the data storage 116 may store device parameters 130 of the mobile computing device 100. It should be appreciated that the particular device parameters 130 may vary depending on the particular embodiment. The device parameters 130 may include, for example, information or data associated with a size/shape of the mobile computing device 100, the display 120, and/or another component of the mobile computing device 100, intrinsic parameters or other data regarding one or more cameras of the mobile computing device 100 (e.g., focal length, principal point, zoom information, etc.), a location of components of the mobile computing device 100 relative to a reference point (e.g., a coordinate system identifying relative locations of the components of the mobile computing device 100), and/or other information associated with the mobile computing device 100. Additionally, in some embodiments, the data storage 116 and/or the memory 114 may store various other data useful during the operation of the mobile computing device 100.
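For illustration only, the device parameters 130 can be pictured as a simple record such as the sketch below; the field names and units are assumptions of this illustration and are not defined by the present disclosure.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class DeviceParameters:
        """Illustrative stand-in for the device parameters 130 (all field names assumed)."""
        display_size_m: Tuple[float, float]            # physical width/height of the display 120
        display_resolution: Tuple[int, int]            # display 120 resolution in pixels
        env_cam_focal_px: float                        # focal length of the environment-facing camera 128
        env_cam_principal_px: Tuple[float, float]      # principal point of the camera 128
        user_cam_focal_px: float                       # focal length of the user-facing camera 126
        user_cam_offset_m: Tuple[float, float, float]  # camera 126 position relative to the reference point
        env_cam_offset_m: Tuple[float, float, float]   # camera 128 position relative to the reference point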
[0023] The camera system 118 includes a plurality of cameras configured to capture images or video (i.e., collections of images or frames) and capable of performing the functions described herein. It should be appreciated that each of the cameras of the camera system 118 may be embodied as any peripheral or integrated device suitable for capturing images, such as a still camera, a video camera, or other device capable of capturing video and/or images. In the illustrative embodiment, the camera system 118 includes a user-facing camera 126 and an environment-facing camera 128. As indicated below, each of the user-facing camera 126, the environment-facing camera 128, and/or other cameras of the camera system 118 may be embodied as a two-dimensional (2D) camera (e.g., an RGB camera) or a three-dimensional (3D) camera. Such 3D cameras include, for example, depth cameras, bifocal cameras, and/or cameras otherwise capable of generating a depth image, channel, or stream. For example, one or more cameras may include an infrared (IR) projector and an IR sensor such that the IR sensor estimates depth values of objects in the scene by analyzing the IR light pattern projected on the scene by the IR projector. In another embodiment, one or more of the cameras of the camera system 118 include at least two lenses and corresponding sensors configured to capture images from at least two different viewpoints of a scene (e.g., a stereo camera).
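Where one of the cameras of the camera system 118 is a stereo camera, depth may be recovered from the disparity between the two rectified views using the standard relation Z = f·B/d. The sketch below is illustrative only, and its parameter names are assumptions.

    def depth_from_disparity_m(disparity_px: float,
                               focal_length_px: float,
                               baseline_m: float) -> float:
        """Depth of a scene point from a rectified stereo pair (standard relation Z = f*B/d).

        disparity_px: horizontal pixel offset of the same point between the two views.
        A zero (or negative) disparity is treated as an effectively infinite distance.
        """
        if disparity_px <= 0:
            return float("inf")
        return focal_length_px * baseline_m / disparity_px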
[0024] As described in greater detail below, the user-facing camera 126 is configured to capture images of the user of the mobile computing device 100. In particular, the user-facing camera 126 captures images of the user's face, which may be analyzed to determine the location of the user's eye(s) relative to the mobile computing device 100 (e.g., relative to the user-facing camera 126 or another reference point of the mobile computing device 100). The environment-facing camera 128 captures images of the real-world environment of the mobile computing device 100. In the illustrative embodiment, the user-facing camera 126 and the environment-facing camera 128 are positioned on opposite sides of the mobile computing device 100 and therefore have fields of view in opposite directions. In particular, the user-facing camera 126 is on the same side of the mobile computing device 100 as the display 120 such that the user-facing camera 126 can capture images of the user as she views the display 120.
[0025] The display 120 of the mobile computing device 100 may be embodied as any type of display on which information may be displayed to a user of the mobile computing device 100. Further, the display 120 may be embodied as, or otherwise use any suitable display technology including, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, a plasma display, a touchscreen display, and/or other display technology. Although only one display 120 is shown in the illustrative embodiment of FIG. 1, in other embodiments, the mobile computing device 100 may include multiple displays 120.
[0026] As shown in FIG. 1, the mobile computing device 100 may include one or more sensors 122 configured to collect data useful in performing the functions described herein. For example, the sensors 122 may include a depth sensor that may be used to determine the distance of objects from the mobile computing device 100. Additionally, in some embodiments, the sensors 122 may include an accelerometer, gyroscope, and/or magnetometer to determine the relative orientation of the mobile computing device 100. In various embodiments, the sensors 122 may be embodied as, or otherwise include, for example, proximity sensors, optical sensors, light sensors, audio sensors, temperature sensors, motion sensors, piezoelectric sensors, and/or other types of sensors. Of course, the mobile computing device 100 may also include components and/or devices configured to facilitate the use of the sensor(s) 122.
[0027] The communication circuitry 124 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the mobile computing device 100 and other remote devices over a network (not shown). For example, in some embodiments, the mobile computing device 100 may offload one or more of the functions described herein (e.g., determination of the back projection) to a remote computing device. The communication circuitry 124 may be configured to use any one or more communication technologies (e.g., wireless or wired communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
[0028] Referring now to FIG. 2, in use, the mobile computing device 100 establishes an environment 200 for adjusting a perspective of a captured image for display on the display 120 of the mobile computing device 100. As discussed below, the mobile computing device 100 captures an image of the user with the user-facing camera 126 and an image of the real-world environment of the mobile computing device 100 with the environment-facing camera 128. Further, the mobile computing device determines a position of the user's eye(s) relative to the mobile computing device 100 based on the image captured by the user-facing camera 126 and a distance of an object(s) in the real-world environment relative to the mobile computing device 100 based on the image captured by the environment-facing camera 128. The mobile computing device 100 then generates a back projection of the real-world object(s) to the display 120 and displays a corresponding image on the display 120 (e.g., including augmented reality modifications) based on the generated back projection.
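Purely for illustration, the overall flow just described can be sketched as a single frame-processing routine in which each callable stands in for one of the modules discussed below; the names are placeholders of this sketch, not elements of the disclosure.

    def adjust_perspective_frame(capture_user, capture_env, locate_eye,
                                 estimate_object_distance, back_project,
                                 apply_augmented_reality, device_params):
        """One capture/analyze/back-project/display iteration (illustrative sketch only)."""
        user_image = capture_user()                                    # user-facing camera 126
        env_image = capture_env()                                      # environment-facing camera 128
        eye_position = locate_eye(user_image, device_params)           # eye tracking module 204
        object_distance = estimate_object_distance(env_image,
                                                   device_params)      # object distance module 206
        frame = back_project(env_image, eye_position, object_distance,
                             device_params)                            # image projection module 208
        return apply_augmented_reality(frame)                          # display module 210 (optional AR)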
[0029] The illustrative environment 200 of the mobile computing device 100 includes an image capturing module 202, an eye tracking module 204, an object distance determination module 206, an image projection module 208, a display module 210, and a communication module 212. Each of the modules of the environment 200 may be embodied as hardware, software, firmware, or a combination thereof. For example, in an embodiment, each of the modules of the environment 200 is embodied as a circuit (e.g., an image capturing circuit, an eye tracking circuit, an object distance determination circuit, an image projection circuit, a display circuit, and a communication circuit). Additionally, in some embodiments, one or more of the illustrative modules may form a portion of another module. For example, in some embodiments, the image projection module 208 may form a portion of the display module 210.
[0030] The image capturing module 202 controls the camera system 118 (e.g., the user-facing camera 126 and the environment-facing camera 128) to capture images within the field of view of the respective camera 126, 128. For example, as described herein, the user-facing camera 126 is configured to capture images of the user's face (e.g., for eye detection/tracking). It should be appreciated that the mobile computing device 100 may detect and/or track one or both of the user's eyes and, therefore, in the illustrative embodiment, the images captured by the user-facing camera 126 for analysis by the mobile computing device 100 include at least one of the user's eyes. Although eye tracking and analysis is, at times, discussed herein in reference to a single eye of the user for simplicity and clarity of the description, the techniques described herein equally apply to detecting/tracking both of the user's eyes. Additionally, as described herein, the environment-facing camera 128 is configured to capture images of the real-world environment of the mobile computing device 100. It should be appreciated that the captured scene may include any number of principal objects (e.g., distinct or otherwise important objects) although, for simplicity, such captured images are oftentimes described herein as having a single principal object.
[0031] The eye tracking module 204 determines the location/position of the user's eye relative to the mobile computing device 100 (e.g., relative to the user-facing camera 126 or another reference point). In doing so, the eye tracking module 204 detects the existence of one or more person's eyes in an image captured by the user-facing camera 126 and determines the location of the eye in the captured image (i.e., the portion of the image associated with the eye) that is to be tracked. To do so, the eye tracking module 204 may use any suitable techniques, algorithms, and/or image filters (e.g., edge detection and image segmentation). In some embodiments, the eye tracking module 204 determines the location of the user's face in the captured image and utilizes the location of the user's face to, for example, reduce the region of the captured image that is analyzed to locate the user's eye(s). Additionally, in some embodiments, the eye tracking module 204 analyzes the user's eyes to determine various characteristics/features of the user's eyes (e.g., glint location, iris location, pupil location, iris-pupil contrast, eye size/shape, and/or other characteristics) to determine the gaze direction of the user. The user's gaze direction may be used, for example, to determine whether the user is looking at the display 120, to identify objects in the scene captured by the environment-facing camera 128 toward which the user's gaze is directed (e.g., principal objects), to determine a relative location or position (e.g., in three-dimensional space) of the user's eye(s), and/or for other purposes. Additionally, in some embodiments, the eye tracking module 204 may further determine the orientation of the user's head or otherwise determine the user's head pose.
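One possible realization of the face-then-eye detection strategy is sketched below; the disclosure does not require any particular detector, and the off-the-shelf OpenCV Haar cascades used here are simply a convenient example.

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def detect_eye_center(user_image_bgr):
        """Return the (u, v) pixel center of one detected eye, or None if no eye is found.

        The face is located first so that the eye search is restricted to the face region,
        mirroring the region-reduction approach described for the eye tracking module 204.
        """
        gray = cv2.cvtColor(user_image_bgr, cv2.COLOR_BGR2GRAY)
        for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.1, 5):
            face_roi = gray[fy:fy + fh, fx:fx + fw]
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi, 1.1, 5):
                return (fx + ex + ew // 2, fy + ey + eh // 2)
        return None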
[0032] As described below, in determining the position of the user's eye relative to the mobile computing device 100, the eye tracking module 204 determines the distance of the user's eye relative to the mobile computing device 100 (e.g., relative to the user-facing camera 126 or another reference point). It should be appreciated that the eye tracking module 204 may utilize any suitable algorithms and/or techniques for doing so. For example, in some embodiments, the user-facing camera 126 may be embodied as a depth camera or other 3D camera capable of generating data (e.g., a depth stream or depth image) corresponding with the distance of objects in the captured scene. In another embodiment, the eye tracking module 204 may use face detection in conjunction with a known approximate size of a person's face to estimate the distance of the user's face from the mobile computing device 100. In yet another embodiment, the eye tracking module 204 may analyze the region of the captured image corresponding with the user's eye to find reflections of light off the user's cornea (i.e., the glints) and/or pupil. Based on those reflections, the eye tracking module 204 may determine the location or position (e.g., in three-dimensional space) of the user's eye relative to the mobile computing device 100. Further, in some embodiments, the eye tracking module 204 may utilize data generated by the sensors 122 (e.g., depth/distance information) in conjunction with the location of the user's eye in the captured image to determine the location of the user's eye relative to the mobile computing device 100.
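The known-face-size heuristic mentioned above reduces to the pinhole relation distance = focal length × real size / apparent size. The sketch below, including its 0.16 m nominal face width, is an assumption for illustration only.

    def estimate_eye_distance_m(face_width_px: float,
                                focal_length_px: float,
                                nominal_face_width_m: float = 0.16) -> float:
        """Approximate user-to-camera distance from the apparent face width in the image.

        Pinhole model: apparent_size_px = focal_px * real_size_m / distance_m, hence
        distance_m = focal_px * real_size_m / apparent_size_px.
        """
        return focal_length_px * nominal_face_width_m / face_width_px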
[0033] The object distance determination module 206 determines the distance of one or more objects in the real-world environment captured by the environment-facing camera 128 relative to the mobile computing device 100 (e.g., relative to the environment-facing camera 128 or another reference point of the mobile computing device 100). As indicated above, the real-world environment within the field of view of and therefore captured by the environment-facing camera 128 may include any number of objects. Accordingly, depending on the particular embodiment, the object distance determination module 206 may determine the distance of each of the objects from the mobile computing device 100 or the distance of a subset of the objects from the mobile computing device 100 (e.g., a single object). For example, in some embodiments, the object distance determination module 206 identifies a principal object in the captured image for which to determine the distance. Such a principal object may be, for example, an object toward which the user's gaze is directed or a main object in the scene. In some embodiments, the object distance determination module 206 assumes that each of the objects in the scene is approximately the same distance from the mobile computing device 100 for simplicity. Further, in some embodiments, the object distance determination module 206 assumes or otherwise sets the distance of the object(s) to a predefined distance from the mobile computing device 100. For example, the predefined distance may be a value significantly greater than the focal length of the environment-facing camera 128, a value approximating infinity (e.g., the largest number in the available number space), or another predefined distance value. For ease of discussion, a number representing infinity may be referred to herein simply as "infinity."
[0034] It should be appreciated that the object distance determination module 206 may determine the distance of an object in the real-world environment relative to the mobile computing device 100 using any suitable techniques and/or algorithms. For example, in some embodiments, the object distance determination module 206 may use one or more of the techniques and algorithms described above in reference to determining the distance of the user relative to the mobile computing device 100 (i.e., by the eye tracking module 204). In particular, the environment-facing camera 128 may be embodied as a depth camera or other 3D camera, which generates depth data for determining the distance of objects in the captured image. Additionally or alternatively, the object distance determination module 206 may reference stored data regarding the size of certain objects to estimate the distance of the objects to the mobile computing device 100 in some embodiments. In yet another embodiment, the object distance determination module 206 may utilize data generated by the sensors 122 (e.g., depth/distance information) to determine the distance and/or location of the objects relative to the mobile computing device 100. Of course, in some embodiments, the object distance determination module 206 may assign a distance of a particular object to a predefined value. For example, the object distance determination module 206 may assume the object is infinitely far in response to determining that the object's distance exceeds a predefined threshold. That is, in some embodiments, objects that are at least a threshold distance (e.g., four meters) from the mobile computing device 100 may be treated as though they are, for example, infinitely far from the mobile computing device 100. In such embodiments, the resulting calculation differences may be negligible (e.g., calculations based on a distance of ten meters and twenty meters may yield approximately the same result). As described below, the distance of the object(s) relative to the mobile computing device 100 (e.g., relative to the camera 128) may be used to determine the location of the object(s) relative to the mobile computing device 100 and to generate a back projection of the real-world environment (e.g., based on the device parameters 130).
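Under these simplifications (a depth-capable environment-facing camera 128 and a far-distance cutoff such as the four meters mentioned above), the object-distance step might be sketched as follows; the region-of-interest handling and the use of a median are assumptions of the illustration.

    import numpy as np

    FAR_THRESHOLD_M = 4.0   # beyond this, treat the object as effectively infinitely far

    def object_distance_m(depth_image_m, principal_roi=None):
        """Estimate the distance (meters) of the principal object from a depth image.

        depth_image_m: HxW array of per-pixel depths from the environment-facing camera 128.
        principal_roi: optional (x, y, w, h) region containing the principal object; if
                       omitted, all objects are assumed roughly equidistant and the whole
                       frame is used.
        """
        if principal_roi is not None:
            x, y, w, h = principal_roi
            depth_image_m = depth_image_m[y:y + h, x:x + w]
        valid = depth_image_m[np.isfinite(depth_image_m) & (depth_image_m > 0)]
        if valid.size == 0:
            return float("inf")
        distance = float(np.median(valid))          # median is robust to missing returns
        return float("inf") if distance >= FAR_THRESHOLD_M else distance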
[0035] The image projection module 208 generates a back projection of the real-world environment captured by the environment-facing camera 128 to the display 120. In the illustrative embodiment, the image projection module 208 generates the back projection based on the distance of the object in the real-world environment relative to the mobile computing device 100 (e.g., infinity, a predefined distance, or the determined distance), the position/location of the user's eye relative to the mobile computing device 100, and/or the device parameters 130 of the mobile computing device 100 (e.g., intrinsic parameters of the cameras 126, 128, the size of the mobile computing device 100 or the display 120, etc.). As indicated above, by back projecting the real-world environment to the display 120 (i.e., toward the user's eye), the visual content occluded by the mobile computing device 100 is shown on the display 120 such that the user feels as though she is looking through a window. In other words, visual continuity is maintained, as objects around the periphery are not duplicated in the displayed image. It should be appreciated that the image projection module 208 may utilize any suitable techniques and/or algorithms for generating the back projection image for display on the display 120 of the mobile computing device 100. As described below, FIGS. 4-8 show illustrative embodiments for doing so.
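A minimal per-pixel sketch of such a back projection, in the spirit of the ray-casting approach of FIG. 4, is given below. The coordinate frame (origin at the display center, x to the user's right, y up, z toward the user, with the environment-facing camera 128 treated as a pinhole at the origin looking along -z), the params keys, the planar-object simplification, and the nearest-neighbor sampling are all assumptions of this illustration rather than requirements of the disclosure; a finite object distance is assumed (an effectively infinite distance can be approximated with a large value).

    import numpy as np

    def back_project(env_image, eye_pos_m, object_distance_m, params):
        """Construct a back-projection image for the display 120 (illustrative sketch).

        env_image:         HxW(x3) frame from the environment-facing camera 128
        eye_pos_m:         (x, y, z) position of the user's eye, with z > 0 (in front of the display)
        object_distance_m: distance of the (assumed planar) scene along -z
        params:            dict with display size/resolution and camera 128 intrinsics
        """
        disp_w_px, disp_h_px = params["display_resolution"]    # e.g. (1920, 1080)
        disp_w_m, disp_h_m = params["display_size_m"]           # physical size of the display 120
        f_px = params["env_cam_focal_px"]
        cam_cx, cam_cy = params["env_cam_principal_px"]
        img_h, img_w = env_image.shape[:2]

        ex, ey, ez = eye_pos_m
        out = np.zeros((disp_h_px, disp_w_px) + env_image.shape[2:], dtype=env_image.dtype)

        for j in range(disp_h_px):                  # display rows, top to bottom
            for i in range(disp_w_px):              # display columns, left to right
                # Physical position of this display pixel in the z = 0 plane.
                px = (i + 0.5 - disp_w_px / 2.0) * (disp_w_m / disp_w_px)
                py = (disp_h_px / 2.0 - (j + 0.5)) * (disp_h_m / disp_h_px)

                # Extend the ray from the eye through the pixel to the object plane z = -D.
                t = (ez + object_distance_m) / ez   # E + t * (P - E) reaches z = -D
                wx = ex + t * (px - ex)
                wy = ey + t * (py - ey)

                # Project that world point into the camera 128 (image v grows downward).
                u = int(round(cam_cx + f_px * wx / object_distance_m))
                v = int(round(cam_cy - f_px * wy / object_distance_m))
                if 0 <= u < img_w and 0 <= v < img_h:
                    out[j, i] = env_image[v, u]      # nearest-neighbor sampling
        return out

In practice the per-pixel loop would be vectorized, and the known offsets between the display 120 and the camera 128 from the device parameters 130 would be applied before projecting.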
[0036] The display module 210 renders images on the display 120 for the user of the mobile computing device 100 to view. For example, the display module 210 may render an image based on the back projection generated by the image projection module 208 on the display 120. Of course, in some embodiments, the back projections may not be "projected" onto the display 120 in the traditional sense; rather, corresponding images may be generated for rendering on the display 120. Further, as discussed above, in some embodiments, the display module 210 may modify the back projection image to include virtual objects, characters, and/or environments for augmented reality and render the modified image on the display 120.
[0037] The communication module 212 handles the communication between the mobile computing device 100 and remote devices through the corresponding network. For example, in some embodiments, the mobile computing device 100 may communicate with a remote computing device to offload one or more of the functions of the mobile computing device 100 described herein to a remote computing device (e.g., for determination of the back projection image or modification of the images for augmented reality). Of course, relevant data associated with such analyses may be transmitted by the remote computing device and received by the communication module 212 of the mobile computing device 100.
[0038] Referring now to FIG. 3, in use, the mobile computing device 100 may execute a method 300 for adjusting a perspective of a captured image for display by the mobile computing device 100. The illustrative method 300 begins with blocks 302 and 310. In block 302, the mobile computing device 100 captures an image of the user's face with the user-facing camera 126. Depending on the particular embodiment, the user-facing camera 126 may capture images continuously (e.g., as a video stream) for analysis or in response to a user input (e.g., a button press). In block 304, the mobile computing device 100 identifies the user's eye(s) in the captured image. As discussed above, the mobile computing device 100 may utilize any suitable techniques and/or algorithms for doing so (e.g., edge detection and/or image segmentation). Further, depending on the particular embodiment, the mobile computing device 100 may determine and utilize the location of one or both of the user's eyes.
[0039] In block 306, the mobile computing device 100 determines the position of the user's eye(s) relative to the user-facing camera 126 or another reference point of the mobile computing device 100. In doing so, the mobile computing device 100 determines the distance of the user, or more particularly, of the user's eye(s) relative to the user-facing camera 126. As discussed above, the mobile computing device 100 may make such a determination based on, for example, a depth image or other depth information generated by the user-facing camera 126 (i.e., if the user-facing camera 126 is a depth camera or other 3D camera), user gaze information, distance information generated by the sensors 122, device parameters 130, and/or other relevant data. The distance of the user relative to the user-facing camera 126 may be used in conjunction with the location of the user's eye(s) in the captured image to determine the position of the user's eye(s) relative to the user-facing camera 126 or other reference point of the mobile computing device 100. It should be appreciated that the device parameters 130 may include information regarding the locations of the components of the mobile computing device 100 relative to one another, thereby establishing a coordinate system having a reference point as the origin. The reference point selected to be the origin may vary depending on the particular embodiment and may be, for example, the location of the user-facing camera 126, the location of the environment-facing camera 128, the center of the display 120, or another suitable location.
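Combining the detected eye location with the distance estimate is again a pinhole back-projection; the sketch below uses assumed intrinsic-parameter names for the user-facing camera 126 and returns coordinates in that camera's frame, which can then be translated to the chosen reference point using the device parameters 130.

    def eye_position_m(eye_px, eye_distance_m, focal_px, principal_px):
        """Back-project the detected eye pixel to a 3D position relative to camera 126.

        eye_px:         (u, v) pixel location of the eye in the user-facing image
        eye_distance_m: estimated eye-to-camera distance along the optical axis
        Returns (x, y, z) in meters in the camera frame (x right, y down, z toward the user).
        """
        u, v = eye_px
        cx, cy = principal_px
        x = (u - cx) * eye_distance_m / focal_px
        y = (v - cy) * eye_distance_m / focal_px
        return (x, y, eye_distance_m)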
[0040] As shown, in the illustrative embodiment of FIG. 3, blocks 302-308 and blocks 310-314 occur in parallel; however, in other embodiments, those blocks may be executed serially. In block 310, the mobile computing device 100 captures an image of the real-world environment of the mobile computing device 100 with the environment-facing camera 128. Similar to the user-facing camera 126, depending on the particular embodiment, the environment-facing camera 128 may capture images continuously (e.g., as a video stream) for analysis or in response to a user input such as a button press. For example, in some embodiments, the user may provide an input to commence execution of the method 300, in which case the mobile computing device 100 executes each of blocks 302 and 310. As indicated above, in the illustrative embodiment, the environment-facing camera 128 is positioned opposite the user-facing camera 126 such that the environment-facing camera 128 has a field of view similar to that of the user (i.e., in the same general direction).
[0041] In block 312, the mobile computing device 100 determines the distance of one or more objects in the corresponding real-world environment relative to the environment-facing camera 128 or another reference point of the mobile computing device 100. As discussed above, the mobile computing device 100 may make such a determination based on, for example, depth information generated by the environment-facing camera 128 (i.e., if the environment-facing camera 128 is a depth camera or other 3D camera), distance information generated by the sensors 122, device parameters 130, and/or other relevant data. Further, the object(s) for which the relative distance is determined may vary depending on the particular embodiment. For example, as discussed above, in some embodiments, the mobile computing device 100 may determine the relative distance of each object or each principal object in the captured image, whereas in other embodiments, the mobile computing device 100 may determine only the relative distance of the main object in the captured image (e.g., the object to which the user's gaze is directed or otherwise determined to be the primary object). Further, as indicated above, the mobile computing device 100 may set the distance of the object(s) to a predefined distance in block 314.
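A minimal sketch of blocks 312-314, under the assumption that the environment-facing camera is a depth camera: read the depth at the pixel of the object of interest when depth data exists, and otherwise fall back to a predefined distance. The 3.0 m default is an arbitrary placeholder, not a value from the patent.

```python
def object_distance_m(depth_image=None, object_pixel=None, predefined_distance_m=3.0):
    """Distance of the object of interest from the environment-facing camera.

    depth_image  -- per-pixel depth in meters, or None if unavailable
    object_pixel -- (u, v) pixel of the main object (e.g., from gaze analysis)
    """
    if depth_image is not None and object_pixel is not None:
        u, v = object_pixel
        return float(depth_image[int(v), int(u)])  # measured distance (block 312)
    return predefined_distance_m                   # predefined distance (block 314)
```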
[0042] In block 316, the mobile computing device 100 generates a back projection of the real-world environment to the display 120 based on the distance of the real-world object(s) relative to the mobile computing device 100 (e.g., the determined or predefined distance), the position of the user's eye relative to the mobile computing device 100, and/or one or more device parameters 130 (e.g., intrinsic parameters of the cameras 126, 128, the size of the mobile computing device 100 or the display 120, etc.). As indicated above, the mobile computing device 100 may generate a back projection image using any suitable algorithms and/or techniques for doing so. For example, in some embodiments, the mobile computing device 100 may generate a back projection by executing a method 400 as shown in FIG. 4 and, in other embodiments, the mobile computing device 100 may generate the back projection by executing a method 600 as shown in FIG. 6. Of course, it should be appreciated that the embodiments of FIGS. 4 and 6 are provided as illustrative embodiments and do not limit the concepts described herein.
[0043] After the back projection has been determined, the mobile computing device 100 displays an image on the display 120 based on the generated back projection in block 318. In doing so, in block 320, the mobile computing device 100 may modify the back projection or corresponding image for augmented reality purposes as discussed above. For example, the mobile computing device 100 may incorporate virtual characters, objects, and/or other virtual features into the constructed/generated back projection image for rendering on the display 120. Of course, in some embodiments, the mobile computing device 100 may not modify the back projection for augmented reality or other purposes such that the viewer truly feels as though the display 120 is a window through which she can see the real-world environment occluded by the mobile computing device 100.
[0044] Referring now to FIG. 4, the illustrative method 400 begins with block 402 in which the mobile computing device 100 determines whether to generate a back projection. If so, in block 404, the mobile computing device 100 determines a ray 502 from the user's eye 504 through the next display pixel 506 of the display 120 to the real-world object(s) 508 as shown in FIG. 5. It should be appreciated that which display pixel 506 constitutes the "next" display pixel 506 may vary depending on the particular embodiment. In the illustrative embodiment, the mobile computing device 100 selects a display pixel 506 for which a ray 502 has not yet been determined during the execution of the method 400 as the "next" display pixel 506. It should further be appreciated that, in other embodiments, the mobile computing device 100 may determine a ray 502 through another sub-region of the display 120 (i.e., a sub-region other than the display pixels at, for example, a different level of granularity).
[0045] As discussed above, the device parameters 130 of the mobile computing device 100 may include data regarding relative locations of the various components of the mobile computing device 100 and establish, for example, a three-dimensional coordinate system having some reference point as the origin. For example, in some embodiments, the environment-facing camera 128 may be the origin. It should be appreciated that every pixel/point on the display 120 is located at some point relative to the environment-facing camera 128. Accordingly, in some embodiments, the mobile computing device 100 determines the corresponding three-dimensional coordinates of the user's eye 504 and the object(s) 508 based on the analyses described above. It should be appreciated that, armed with the coordinates or relative locations of the user's eye 504, the display pixels 506, and the object(s) 508, the mobile computing device 100, in the illustrative embodiment, determines a corresponding ray 502 from the user's eye 504 through each of the display pixels 506 to the object(s) 508.
[0046] In block 406, the mobile computing device 100 identifies the image pixel of the image of the real-world environment captured by the environment-facing camera 128 (see block 310 of FIG. 3) corresponding with the position/location 510 of the real-world object(s) to which the corresponding ray 502 is directed. For example, based on the device parameters 130 such as the intrinsic parameters (e.g., focal length) of the environment-facing camera 128 and the real-world coordinates or relative location of the object(s) 508, the mobile computing device 100 may determine how the image captured by the environment-facing camera 128 is projected from the real-world environment to the captured image coordinates. In such embodiments, the mobile computing device 100 may thereby identify the image pixel associated with the real-world coordinates (i.e., the location 510) to which the ray 502 is directed.
[0047] In block 408, the mobile computing device 100 determines whether any display pixels 506 are remaining. If so, the method 400 returns to block 404 in which the mobile computing device 100 determines a ray 502 from the user's eye 504 through the next display pixel 506 to the real-world object(s) 508. In other words, the mobile computing device 100 determines a ray 502 from the user's eye 504 through a corresponding display pixel 506 to the object(s) 508 in the real-world environment for each display pixel 506 of the display 120 (or other sub-region of the display 120) and identifies an image pixel of the image of the real-world environment captured by the environment-facing camera 128 corresponding with a location of the object(s) in the real-world environment to which the corresponding ray 502 is directed for each determined ray 502. In block 410, the mobile computing device 100 constructs an image (e.g., a back projection image) from the identified image pixels for display on the mobile computing device 100. In the illustrative embodiment, the mobile computing device 100 generates an image having the identified image pixels in the appropriate image coordinates of the generated image. In other words, the mobile computing device 100 may back project the visual content from a location to which each corresponding ray 502 is directed to the corresponding point on the display 120 through which the ray 502 is directed.
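The per-pixel loop of method 400 might be sketched as follows. The sketch assumes the environment-facing camera is the coordinate origin looking toward the scene, the object(s) are approximated by a plane at the determined distance, and the display geometry is supplied as a top-left corner plus per-pixel step vectors derived from the device parameters 130; the plane approximation and all names are assumptions for illustration, not details given in the patent.

```python
import numpy as np

def construct_back_projection(env_image, eye_pos, display_top_left, du, dv,
                              display_res, object_distance_m, K):
    """Back-projection image per FIG. 4 (blocks 404-410).

    env_image         -- image captured by the environment-facing camera
    eye_pos           -- 3D position of the user's eye (camera coordinates)
    display_top_left  -- 3D position of the display's top-left pixel
    du, dv            -- 3D step vectors per display pixel (right, down)
    display_res       -- (width, height) of the display in pixels
    object_distance_m -- distance of the real-world object plane from the device
    K                 -- 3x3 intrinsic matrix of the environment-facing camera
    """
    out_w, out_h = display_res
    result = np.zeros((out_h, out_w, env_image.shape[2]), dtype=env_image.dtype)
    for py in range(out_h):
        for px in range(out_w):
            pixel_pos = display_top_left + px * du + py * dv  # display pixel (block 404)
            ray = pixel_pos - eye_pos                         # ray from eye through pixel
            t = (object_distance_m - eye_pos[2]) / ray[2]     # intersect the object plane
            hit = eye_pos + t * ray                           # location on the object
            u, v, w = K @ hit                                 # project into the camera
            u, v = int(round(u / w)), int(round(v / w))       # image pixel (block 406)
            if 0 <= v < env_image.shape[0] and 0 <= u < env_image.shape[1]:
                result[py, px] = env_image[v, u]              # copy pixel (block 410)
    return result
```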
[0048] Referring now to FIG. 6, in use, the mobile computing device 100 may execute a method 600 for generating a back projection of the real-world environment of the mobile computing device 100 as indicated above. The illustrative method 600 begins with block 602 in which the mobile computing device 100 determines whether to generate a back projection. If so, in block 604, the mobile computing device 100 determines the angular size 702 of the mobile computing device 100 from the user's perspective based on the distance 704 of the user 706 relative to the user-facing camera 126 (or other reference point of the mobile computing device 100) and device parameters 130 as shown with regard to FIGS. 7-8. As indicated above, the device parameters 130 may include, for example, the size, shape, and other characteristics of the mobile computing device 100 and/or components of the mobile computing device 100. It should be appreciated that the angular size of an object is indicative of the viewing angle required to encompass the object from a reference point (e.g., a viewer or camera) that is a given distance from the object. In the illustrative embodiment, the angular size of an object (e.g., the mobile computing device 100) from a perspective point (e.g., the user's eye or the environment-facing camera 128) is determined according to δ = 2 arctan(d / (2D)), where δ is the angular size of the object, d is an actual size of the corresponding object, and D is a distance between the corresponding object and the perspective point (i.e., the point from which the angular size is determined). In other embodiments, however, the angular size of an object may be otherwise determined. It should further be appreciated that, while the angular size may at times be discussed herein with respect to two dimensions, the techniques described herein may be applied to three dimensions as well (e.g., accounting for both a horizontal angular size and a vertical angular size, determining the angular size across a diagonal of the object, projecting the three-dimensional size to two dimensions, employing a three-dimensional equivalent of the angular size formula provided above, etc.).
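For reference, the angular size formula above translates directly into code; a hypothetical helper might look like this (the function name is illustrative).

```python
import math

def angular_size(d, D):
    """Angular size (radians) of an object of actual size d at distance D:
    delta = 2 * arctan(d / (2 * D))."""
    return 2.0 * math.atan(d / (2.0 * D))

# Example: a 0.15 m wide device held 0.4 m from the eye subtends
# about 0.37 rad (roughly 21 degrees).
```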
[0049] In block 606, the mobile computing device 100 determines the distance 708 of the real-world object(s) 710 relative to the user 706. In the illustrative embodiment, the mobile computing device 100 makes such a determination based on the distance 704 of the user 706 to the user-facing camera 126 (see block 308 of FIG. 3) and the distance 712 of the real-world object(s) 710 to the environment-facing camera 128 (see block 312 of FIG. 3) or other reference point of the mobile computing device 100. In doing so, the mobile computing device 100 may, in some embodiments, assume the user 706, the mobile computing device 100, and the object(s) 710 are approximately collinear and add the two previously calculated distances to determine the distance 708 between the user 706 and the real-world object(s) 710 (e.g., if the objects are far from the user). In other embodiments, the mobile computing device 100 may employ a more sophisticated algorithm for determining the distance 708 between the user 706 and the real-world object(s) 710. For example, the mobile computing device 100 may make such a determination based on the relative locations of the mobile computing device 100, the user 706 (or, more particularly, the user's eye(s)), and the object(s) 710 to one another or to a particular reference point (e.g., a defined origin), and based on the known distances 704, 712 between the user 706 and the mobile computing device 100 and between the mobile computing device 100 and the object(s) 710 (e.g., based on the properties of a triangle).
[0050] In block 608, the mobile computing device 100 determines the region 714 of the real-world object(s) 710 that is occluded by the mobile computing device 100 from the user's perspective. In the illustrative embodiment, the mobile computing device 100 makes such a determination based on the angular size 702 of the mobile computing device 100 from the user's perspective and the distance 708 of the real-world object 710 relative to the user 706.
[0051] In block 610, the mobile computing device 100 determines a corrected zoom magnitude of the environment-facing camera 128 based on the region 714 of the real-world object(s) occluded from the user's perspective and the distance 712 of the real-world object(s) to the environment-facing camera 128. In other words, the mobile computing device 100 determines a zoom magnitude of the environment-facing camera 128 needed to capture an image with the environment-facing camera 128 corresponding with the region 714 of the object(s) 710 occluded by the mobile computing device 100 from the user's perspective. As discussed above, the device parameters 130 may include intrinsic parameters (e.g., the focal length, image projection parameters, etc.) of the camera 128. It should be appreciated that such device parameters 130 may be used, in some embodiments, to identify the zoom magnitude corresponding with capturing a particular region of an environment that is a certain distance from the camera 128. In some embodiments, the zoom magnitude is determined such that the environment-facing camera 128 captures an image having only image pixels corresponding with visual content (e.g., features of the object(s) 710) of the object(s) 710 from the region 714 of the object(s) occluded by the mobile computing device 100 from the user's perspective.
[0052] In block 612 of the illustrative embodiment, to determine the corrected zoom magnitude, the mobile computing device 100 determines the angular size 716 of a region 718 of the real-world object(s) 710 from the perspective of the environment-facing camera 128 corresponding with the region 714 of the real-world object(s) 710 occluded from the user's perspective. The mobile computing device 100 may make such a determination based on, for example, the device parameters 130 and/or corresponding geometry. That is, in some embodiments, the mobile computing device 100 may determine the angular size 716 based on the size of the region 714, the distance 712, and the angular size formula provided above. It should be appreciated that, in some embodiments, the region 718 and the region 714 are the same region, whereas in other embodiments, those regions may differ to some extent. Similarly, the corrected zoom magnitude may diverge from the precise zoom required to generate the region 718 (e.g., based on technological, hardware, and/or spatial limitations). In block 614, the mobile computing device 100 generates an image with the corrected zoom magnitude for display on the mobile computing device 100. For example, in some embodiments, the mobile computing device 100 may capture a new image with the environment-facing camera 128 from the same perspective but having a different zoom magnitude. In other embodiments, the mobile computing device 100 may, for example, modify the original image captured by the environment-facing camera 128 to generate an image with the desired zoom magnitude and other characteristics.
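Putting blocks 604-612 together, a minimal one-dimensional sketch under the collinearity assumption of block 606 might proceed as follows; the final expression of the zoom as a ratio of field-of-view tangents is one plausible interpretation, not a formula given in the patent, and all parameter names are illustrative.

```python
import math

def corrected_zoom_ratio(device_width_m, user_to_device_m, device_to_object_m,
                         camera_horizontal_fov_rad):
    """Zoom factor so the environment-facing camera covers only the occluded region."""
    # Block 604: angular size of the device from the user's perspective.
    device_angular = 2.0 * math.atan(device_width_m / (2.0 * user_to_device_m))

    # Block 606: distance of the object relative to the user (collinear case).
    user_to_object_m = user_to_device_m + device_to_object_m

    # Block 608: width of the object region occluded by the device from the user's view.
    occluded_width_m = 2.0 * user_to_object_m * math.tan(device_angular / 2.0)

    # Block 612: angular size of that region from the camera's perspective.
    occluded_angular = 2.0 * math.atan(occluded_width_m / (2.0 * device_to_object_m))

    # Block 610: ratio of the camera's current field of view to the field of
    # view that just covers the occluded region (values > 1 mean zoom in).
    return math.tan(camera_horizontal_fov_rad / 2.0) / math.tan(occluded_angular / 2.0)
```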
[0053] Referring now to FIGS. 9-11, simplified illustrations of a real-world environment 900 (see, for example, FIG. 9) of the mobile computing device 100 and of a user holding the mobile computing device 100 (see FIGS. 10-11) are shown. As discussed above, the real-world environment 900 may be captured by the environment-facing camera 128 and rendered on the display 120. Further, in circumstances in which augmented reality systems are utilized, the captured images may be modified to incorporate, for example, virtual characters, objects, or other features into the captured image for display on the mobile computing device 100. In embodiments in which the method 300 of FIG. 3 is not utilized (i.e., if the captured image or modified version for augmented reality is displayed on the display 120 of the mobile computing device 100), the image 902 displayed on the display 120 includes real-world objects 904 that are also visible in the real-world environment 900 around the periphery of the mobile computing device 100 (see, for example, FIG. 10). In other words, certain real-world objects 904 that are visible to the user are duplicated in the displayed image 902 thereby obstructing the visual flow. However, in embodiments in which the method 300 of FIG. 3 is utilized, the image 906 displayed on the display 120 includes the same visual content as what is occluded by the mobile computing device 100 viewed from the same perspective as the user. Because visual continuity between the displayed image 906 and the background real-world environment 900 is maintained, the user feels as though she is looking at the real-world environment 900 through a window.
EXAMPLES
[0054] Illustrative examples of the technologies disclosed herein are provided below.
An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
[0055] Example 1 includes a mobile computing device for adjusting a perspective of a captured image for display, the mobile computing device comprising a display; a camera system comprising a first camera and a second camera, the camera system to capture (i) a first image of a user of the mobile computing device with the first camera and (ii) a second image of a real-world environment of the mobile computing device with the second camera; an eye tracking module to determine a position of an eye of the user relative to the mobile computing device based on the first captured image; an object distance determination module to determine a distance of an object in the real-world environment relative to the mobile computing device based on the second captured image; and an image projection module to generate a back projection of the real-world environment captured by the second camera to the display based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.
[0056] Example 2 includes the subject matter of Example 1, and wherein to generate the back projection comprises to determine, for each display pixel of the display, a ray from the user's eye through a corresponding display pixel to the object in the real-world environment; identify, for each determined ray, an image pixel of the second captured image of the real-world environment corresponding with a position of the object in the real-world environment to which the corresponding ray is directed; and construct a back projection image based on the identified image pixels for display on the display of the mobile computing device.
[0057] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to generate the back projection comprises to determine an angular size of the mobile computing device from a perspective of the user; determine a distance of the object in the real-world environment relative to the user; determine a region of the object occluded by the mobile computing device from the user's perspective; determine a corrected zoom magnitude of the second camera based on the determined region of the object occluded by the mobile computing device and the distance of the object relative to the mobile computing device; and generate a back projection image based on the corrected zoom magnitude for display on the display of the mobile computing device.
[0058] Example 4 includes the subject matter of any of Examples 1-3, and wherein to determine the corrected zoom magnitude comprises to determine an angular size of a region of the object from a perspective of the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
[0059] Example 5 includes the subject matter of any of Examples 1-4, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
[0060] Example 6 includes the subject matter of any of Examples 1-5, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera having only image pixels corresponding with features of the object from the region of the object occluded by the mobile computing device from the user's perspective.
[0061] Example 7 includes the subject matter of any of Examples 1-6, and wherein to determine the angular size of the mobile computing device from the user's perspective comprises to determine an angular size of the mobile computing device from the user's perspective based on a distance of the user's eye relative to the mobile computing device and a size of the mobile computing device; determine the distance of the object relative to the user comprises to determine the distance of the object relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; and determine the region of the object occluded by the mobile computing device from the user's perspective comprises to determine the angular size of the region of the object occluded by the mobile computing device based on the angular size of the mobile computing device from the user's perspective and the distance of the object relative to the user.
[0062] Example 8 includes the subject matter of any of Examples 1-7, and wherein angular size, δ, is determined according to δ = 2 arctan(d / (2D)), wherein d is an actual size of a corresponding object and D is a distance between the corresponding object and a point, the point being a perspective from which the angular size is determined.
[0063] Example 9 includes the subject matter of any of Examples 1-8, and wherein to capture the first image of the user comprises to capture an image of a face of the user; and determine the position of the user's eye relative to the mobile computing device comprises to identify a location of the user's eye in the image of the user's face.
[0064] Example 10 includes the subject matter of any of Examples 1-9, and wherein to determine the position of the user's eye relative to the mobile computing device comprises to determine a distance of the user's eye to the mobile computing device.
[0065] Example 11 includes the subject matter of any of Examples 1-10, and wherein to determine the position of the user's eye relative to the mobile computing device comprises to determine a position of the user's eye relative to the first camera; and determine the distance of the object in the real-world environment relative to the mobile computing device comprises to determine a distance of the object relative to the second camera.
[0066] Example 12 includes the subject matter of any of Examples 1-11, and wherein the first camera has a field of view in a direction opposite a field of view of the second camera about the display.
[0067] Example 13 includes the subject matter of any of Examples 1-12, and wherein to determine the distance of the object in the real-world environment relative to the mobile computing device comprises to set a distance of the object relative to the mobile computing device to a predefined distance.
[0068] Example 14 includes the subject matter of any of Examples 1-13, and wherein the predefined distance is greater than a focal length of the second camera.
[0069] Example 15 includes the subject matter of any of Examples 1-14, and further including a display module to display an image on the display based on the generated back projection of the real-world environment captured by the second camera.
[0070] Example 16 includes the subject matter of any of Examples 1-15, and wherein to display the image based on the generated back projection comprises to display an image corresponding with the back projection modified to include augmented reality features.
[0071] Example 17 includes the subject matter of any of Examples 1-16, and wherein the at least one device parameter comprises at least one of (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of components of the mobile computing device relative to a reference point.
[0072] Example 18 includes a method for adjusting a perspective of a captured image for display on a mobile computing device, the method comprising capturing, by a first camera of the mobile computing device, a first image of a user of the mobile computing device; determining, by the mobile computing device, a position of an eye of the user relative to the mobile computing device based on the first captured image; capturing, by a second camera of the mobile computing device different from the first camera, a second image of a real-world environment of the mobile computing device; determining, by the mobile computing device, a distance of an object in the real-world environment relative to the mobile computing device based on the second captured image; and generating, by the mobile computing device, a back projection of the real-world environment captured by the second camera to a display of the mobile computing device based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.
[0073] Example 19 includes the subject matter of Example 18, and wherein generating the back projection comprises determining, for each display pixel of the display, a ray from the user's eye through a corresponding display pixel to the object in the real-world environment; identifying, for each determined ray, an image pixel of the second captured image of the real-world environment corresponding with a position of the object in the real-world environment to which the corresponding ray is directed; and constructing a back projection image based on the identified image pixels for display on the display of the mobile computing device.
[0074] Example 20 includes the subject matter of any of Examples 18 and 19, and wherein generating the back projection comprises determining an angular size of the mobile computing device from a perspective of the user; determining a distance of the object in the real-world environment relative to the user; determining a region of the object occluded by the mobile computing device from the user's perspective; determining a corrected zoom magnitude of the second camera based on the determined region of the object occluded by the mobile computing device and the distance of the object relative to the mobile computing device; and generating a back projection image based on the corrected zoom magnitude for display on the display of the mobile computing device.
[0075] Example 21 includes the subject matter of any of Examples 18-20, and wherein determining the corrected zoom magnitude comprises determining an angular size of a region of the object from a perspective of the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
[0076] Example 22 includes the subject matter of any of Examples 18-21, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
[0077] Example 23 includes the subject matter of any of Examples 18-22, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera having only image pixels corresponding with features of the object from the region of the object occluded by the mobile computing device from the user's perspective.
[0078] Example 24 includes the subject matter of any of Examples 18-23, and wherein determining the angular size of the mobile computing device from the user's perspective comprises determining an angular size of the mobile computing device from the user's perspective based on a distance of the user's eye relative to the mobile computing device and a size of the mobile computing device; determining the distance of the object relative to the user comprises determining the distance of the object relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; and determining the region of the object occluded by the mobile computing device from the user's perspective comprises determining the region of the object occluded by the mobile computing device based on the angular size of the mobile computing device from the user's perspective and the distance of the object relative to the user.
[0079] Example 25 includes the subject matter of any of Examples 18-24, and wherein angular size, δ, is determined according to δ = 2 arctan(d / (2D)), wherein d is an actual size of a corresponding object and D is a distance between the corresponding object and a point, the point being a perspective from which the angular size is determined.
[0080] Example 26 includes the subject matter of any of Examples 18-25, and wherein capturing the first image of the user comprises capturing an image of a face of the user; and determining the position of the user's eye relative to the mobile computing device comprises identifying a location of the user's eye in the image of the user's face.
[0081] Example 27 includes the subject matter of any of Examples 18-26, and wherein determining the position of the user's eye relative to the mobile computing device comprises determining a distance of the user's eye to the mobile computing device.
[0082] Example 28 includes the subject matter of any of Examples 18-27, and wherein determining the position of the user's eye relative to the mobile computing device comprises determining a position of the user's eye relative to the first camera; and determining the distance of the object in the real-world environment relative to the mobile computing device comprises determining a distance of the object relative to the second camera.
[0083] Example 29 includes the subject matter of any of Examples 18-28, and wherein the first camera has a field of view in a direction opposite a field of view of the second camera about the display.
[0084] Example 30 includes the subject matter of any of Examples 18-29, and wherein determining the distance of the object in the real-world environment relative to the mobile computing device comprises setting a distance of the object relative to the mobile computing device to a predefined distance.
[0085] Example 31 includes the subject matter of any of Examples 18-30, and wherein the predefined distance is greater than a focal length of the second camera.
[0086] Example 32 includes the subject matter of any of Examples 18-31, and further including displaying, by the mobile computing device, an image on the display based on the generated back projection of the real-world environment captured by the second camera.
[0087] Example 33 includes the subject matter of any of Examples 18-32, and wherein displaying the image based on the generated back projection comprises displaying an image corresponding with the back projection modified to include augmented reality features.
[0088] Example 34 includes the subject matter of any of Examples 18-33, and wherein the at least one device parameter comprises at least one of (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of components of the mobile computing device relative to a reference point.
[0089] Example 35 includes a mobile computing device comprising a processor; and a memory having stored therein a plurality of instructions that when executed by the processor cause the mobile computing device to perform the method of any of Examples 18-34.
[0090] Example 36 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, result in a mobile computing device performing the method of any of Examples 18-34.
[0091] Example 37 includes a mobile computing device for adjusting a perspective of a captured image for display, the mobile computing device comprising means for capturing, by a first camera of the mobile computing device, a first image of a user of the mobile computing device; means for determining a position of an eye of the user relative to the mobile computing device based on the first captured image; means for capturing, by a second camera of the mobile computing device different from the first camera, a second image of a real-world environment of the mobile computing device; means for determining a distance of an object in the real-world environment relative to the mobile computing device based on the second captured image; and means for generating a back projection of the real-world environment captured by the second camera to a display of the mobile computing device based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.
[0092] Example 38 includes the subject matter of Example 37, and wherein the means for generating the back projection comprises means for determining, for each display pixel of the display, a ray from the user's eye through a corresponding display pixel to the object in the real-world environment; means for identifying, for each determined ray, an image pixel of the second captured image of the real-world environment corresponding with a position of the object in the real-world environment to which the corresponding ray is directed; and means for constructing a back projection image based on the identified image pixels for display on the display of the mobile computing device.
[0093] Example 39 includes the subject matter of any of Examples 37 and 38, and wherein the means for generating the back projection comprises means for determining an angular size of the mobile computing device from a perspective of the user; means for determining a distance of the object in the real-world environment relative to the user; means for determining a region of the object occluded by the mobile computing device from the user's perspective; means for determining a corrected zoom magnitude of the second camera based on the determined region of the object occluded by the mobile computing device and the distance of the object relative to the mobile computing device; and means for generating a back projection image based on the corrected zoom magnitude for display on the display of the mobile computing device.
[0094] Example 40 includes the subject matter of any of Examples 37-39, and wherein the means for determining the corrected zoom magnitude comprises means for determining an angular size of a region of the object from a perspective of the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
[0095] Example 41 includes the subject matter of any of Examples 37-40, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
[0096] Example 42 includes the subject matter of any of Examples 37-41, and wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera having only image pixels corresponding with features of the object from the region of the object occluded by the mobile computing device from the user's perspective.
[0097] Example 43 includes the subject matter of any of Examples 37-42, and wherein the means for determining the angular size of the mobile computing device from the user's perspective comprises means for determining an angular size of the mobile computing device from the user's perspective based on a distance of the user's eye relative to the mobile computing device and a size of the mobile computing device; means for determining the distance of the object relative to the user comprises means for determining the distance of the object relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; and means for determining the region of the object occluded by the mobile computing device from the user's perspective comprises means for determining the region of the object occluded by the mobile computing device based on the angular size of the mobile computing device from the user's perspective and the distance of the object relative to the user.
[0098] Example 44 includes the subject matter of any of Examples 37-43, and wherein angular size, δ, is determined according to δ = 2 arctan(d / (2D)), wherein d is an actual size of a corresponding object and D is a distance between the corresponding object and a point, the point being a perspective from which the angular size is determined.
[0099] Example 45 includes the subject matter of any of Examples 37-44, and wherein the means for capturing the first image of the user comprises means for capturing an image of a face of the user; and means for determining the position of the user's eye relative to the mobile computing device comprises means for identifying a location of the user's eye in the image of the user's face.
[00100] Example 46 includes the subject matter of any of Examples 37-45, and wherein the means for determining the position of the user's eye relative to the mobile computing device comprises means for determining a distance of the user's eye to the mobile computing device.
[00101] Example 47 includes the subject matter of any of Examples 37-46, and wherein the means for determining the position of the user's eye relative to the mobile computing device comprises means for determining a position of the user's eye relative to the first camera; and means for determining the distance of the object in the real-world environment relative to the mobile computing device comprises means for determining a distance of the object relative to the second camera.
[00102] Example 48 includes the subject matter of any of Examples 37-47, and wherein the first camera has a field of view in a direction opposite a field of view of the second camera about the display.
[00103] Example 49 includes the subject matter of any of Examples 37-48, and wherein the means for determining the distance of the object in the real-world environment relative to the mobile computing device comprises means for setting a distance of the object relative to the mobile computing device to a predefined distance.
[00104] Example 50 includes the subject matter of any of Examples 37-49, and wherein the predefined distance is greater than a focal length of the second camera.
[00105] Example 51 includes the subject matter of any of Examples 37-50, and further including means for displaying an image on the display based on the generated back projection of the real-world environment captured by the second camera.
[00106] Example 52 includes the subject matter of any of Examples 37-51, and wherein the means for displaying the image based on the generated back projection comprises means for displaying an image corresponding with the back projection modified to include augmented reality features.
[00107] Example 53 includes the subject matter of any of Examples 37-52, and wherein the at least one device parameter comprises at least one of (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of components of the mobile computing device relative to a reference point.

Claims

WHAT IS CLAIMED IS:
1. A mobile computing device for adjusting a perspective of a captured image for display, the mobile computing device comprising:
a display;
a camera system comprising a first camera and a second camera, the camera system to capture (i) a first image of a user of the mobile computing device with the first camera and (ii) a second image of a real-world environment of the mobile computing device with the second camera;
an eye tracking module to determine a position of an eye of the user relative to the mobile computing device based on the first captured image;
an object distance determination module to determine a distance of an object in the real-world environment relative to the mobile computing device based on the second captured image; and
an image projection module to generate a back projection of the real-world environment captured by the second camera to the display based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.
2. The mobile computing device of claim 1, wherein to generate the back projection comprises to:
determine, for each display pixel of the display, a ray from the user's eye through a corresponding display pixel to the object in the real-world environment;
identify, for each determined ray, an image pixel of the second captured image of the real-world environment corresponding with a position of the object in the real-world environment to which the corresponding ray is directed; and
construct a back projection image based on the identified image pixels for display on the display of the mobile computing device.
3. The mobile computing device of claim 1, wherein to generate the back projection comprises to:
determine an angular size of the mobile computing device from a perspective of the user;
determine a distance of the object in the real-world environment relative to the user;
determine a region of the object occluded by the mobile computing device from the user's perspective;
determine a corrected zoom magnitude of the second camera based on the determined region of the object occluded by the mobile computing device and the distance of the object relative to the mobile computing device; and
generate a back projection image based on the corrected zoom magnitude for display on the display of the mobile computing device.
4. The mobile computing device of claim 3, wherein to determine the corrected zoom magnitude comprises to determine an angular size of a region of the object from a perspective of the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
5. The mobile computing device of claim 3, wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
6. The mobile computing device of claim 5, wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera having only image pixels corresponding with features of the object from the region of the object occluded by the mobile computing device from the user's perspective.
7. The mobile computing device of claim 3, wherein to:
determine the angular size of the mobile computing device from the user's perspective comprises to determine an angular size of the mobile computing device from the user's perspective based on a distance of the user's eye relative to the mobile computing device and a size of the mobile computing device;
determine the distance of the object relative to the user comprises to determine the distance of the object relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; and
determine the region of the object occluded by the mobile computing device from the user's perspective comprises to determine the angular size of the region of the object occluded by the mobile computing device based on the angular size of the mobile computing device from the user's perspective and the distance of the object relative to the user.
8. The mobile computing device of claim 3, wherein angular size, δ, is determined according to δ = 2 arctan(d / (2D)), wherein d is an actual size of a corresponding object and D is a distance between the corresponding object and a point, the point being a perspective from which the angular size is determined.
9. The mobile computing device of any of claims 1-8, wherein to:
determine the position of the user's eye relative to the mobile computing device comprises to determine a position of the user's eye relative to the first camera; and
determine the distance of the object in the real-world environment relative to the mobile computing device comprises to determine a distance of the object relative to the second camera.
10. The mobile computing device of any of claims 1-8, wherein the first camera has a field of view in a direction opposite a field of view of the second camera about the display.
11. The mobile computing device of claim 1, wherein to determine the distance of the object in the real-world environment relative to the mobile computing device comprises to set a distance of the object relative to the mobile computing device to a predefined distance.
12. The mobile computing device of claim 1, further comprising a display module to display an image on the display based on the generated back projection of the real-world environment captured by the second camera.
13. The mobile computing device of claim 12, wherein to display the image based on the generated back projection comprises to display an image corresponding with the back projection modified to include augmented reality features.
14. The mobile computing device of any of claims 1-8, wherein the at least one device parameter comprises at least one of (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of components of the mobile computing device relative to a reference point.
15. A method for adjusting a perspective of a captured image for display on a mobile computing device, the method comprising:
capturing, by a first camera of the mobile computing device, a first image of a user of the mobile computing device;
determining, by the mobile computing device, a position of an eye of the user relative to the mobile computing device based on the first captured image;
capturing, by a second camera of the mobile computing device different from the first camera, a second image of a real-world environment of the mobile computing device;
determining, by the mobile computing device, a distance of an object in the real-world environment relative to the mobile computing device based on the second captured image; and
generating, by the mobile computing device, a back projection of the real-world environment captured by the second camera to a display of the mobile computing device based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.
16. The method of claim 15, wherein generating the back projection comprises:
determining, for each display pixel of the display, a ray from the user's eye through a corresponding display pixel to the object in the real-world environment;
identifying, for each determined ray, an image pixel of the second captured image of the real-world environment corresponding with a position of the object in the real-world environment to which the corresponding ray is directed; and
constructing a back projection image based on the identified image pixels for display on the display of the mobile computing device.
17. The method of claim 15, wherein generating the back projection comprises:
determining an angular size of the mobile computing device from a perspective of the user based on a distance of the user's eye relative to the mobile computing device and a size of the mobile computing device;
determining a distance of the object in the real-world environment relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device;
determining a region of the object occluded by the mobile computing device from the user's perspective based on the angular size of the mobile computing device from the user's perspective and the distance of the object relative to the user;
determining a corrected zoom magnitude of the second camera based on the determined region of the object occluded by the mobile computing device and the distance of the object relative to the mobile computing device; and
generating a back projection image based on the corrected zoom magnitude for display on the display of the mobile computing device.
18. The method of claim 17, wherein the corrected zoom magnitude is a zoom magnitude required to capture an image with the second camera corresponding with the region of the object occluded by the mobile computing device from the user's perspective.
19. The method of claim 15, wherein:
capturing the first image of the user comprises capturing an image of a face of the user; and
determining the position of the user's eye relative to the mobile computing device comprises identifying a location of the user's eye in the image of the user's face.
20. The method of claim 15, wherein determining the position of the user's eye relative to the mobile computing device comprises determining a distance of the user's eye to the mobile computing device.
21. The method of claim 15, wherein:
determining the position of the user's eye relative to the mobile computing device comprises determining a position of the user's eye relative to the first camera; and
determining the distance of the object in the real-world environment relative to the mobile computing device comprises determining a distance of the object relative to the second camera.
22. The method of claim 15, wherein determining the distance of the object in the real-world environment relative to the mobile computing device comprises setting a distance of the object relative to the mobile computing device to a predefined distance.
23. The method of claim 15, further comprising displaying, by the mobile computing device, an image on the display based on the generated back projection of the real-world environment captured by the second camera.
24. The method of claim 15, wherein the at least one device parameter comprises at least one of (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of components of the mobile computing device relative to a reference point.
25. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, result in a mobile computing device performing the method of any of claims 15-24.
PCT/US2015/045517 2014-09-17 2015-08-17 Technologies for adjusting a perspective of a captured image for display WO2016043893A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP15842700.5A EP3195595B1 (en) 2014-09-17 2015-08-17 Technologies for adjusting a perspective of a captured image for display
KR1020177003808A KR102291461B1 (en) 2014-09-17 2015-08-17 Technologies for adjusting a perspective of a captured image for display
JP2017505822A JP2017525052A (en) 2014-09-17 2015-08-17 Technology that adjusts the field of view of captured images for display
CN201580043825.4A CN106662930B (en) 2014-09-17 2015-08-17 Techniques for adjusting a perspective of a captured image for display

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/488,516 US9934573B2 (en) 2014-09-17 2014-09-17 Technologies for adjusting a perspective of a captured image for display
US14/488,516 2014-09-17

Publications (1)

Publication Number Publication Date
WO2016043893A1 true WO2016043893A1 (en) 2016-03-24

Family

ID=55455242

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/045517 WO2016043893A1 (en) 2014-09-17 2015-08-17 Technologies for adjusting a perspective of a captured image for display

Country Status (6)

Country Link
US (1) US9934573B2 (en)
EP (1) EP3195595B1 (en)
JP (1) JP2017525052A (en)
KR (1) KR102291461B1 (en)
CN (1) CN106662930B (en)
WO (1) WO2016043893A1 (en)

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101511442B1 (en) * 2013-10-28 2015-04-13 서울과학기술대학교 산학협력단 LED-ID/RF communication smart device using camera and the method of LBS using the same
CN104102349B (en) * 2014-07-18 2018-04-27 北京智谷睿拓技术服务有限公司 Content share method and device
CN105527825B (en) * 2014-09-28 2018-02-27 联想(北京)有限公司 Electronic equipment and display methods
US9727977B2 (en) * 2014-12-29 2017-08-08 Daqri, Llc Sample based color extraction for augmented reality
US9554121B2 (en) * 2015-01-30 2017-01-24 Electronics And Telecommunications Research Institute 3D scanning apparatus and method using lighting based on smart phone
WO2016126863A1 (en) * 2015-02-04 2016-08-11 Invensense, Inc Estimating heading misalignment between a device and a person using optical sensor
JP2016163166A (en) * 2015-03-02 2016-09-05 株式会社リコー Communication terminal, interview system, display method, and program
US10134188B2 (en) * 2015-12-21 2018-11-20 Intel Corporation Body-centric mobile point-of-view augmented and virtual reality
IL245339A (en) * 2016-04-21 2017-10-31 Rani Ben Yishai Method and system for registration verification
DE112016007015T5 (en) * 2016-07-29 2019-03-21 Mitsubishi Electric Corporation DISPLAY DEVICE, DISPLAY CONTROL DEVICE AND DISPLAY CONTROL METHOD
US10963044B2 (en) * 2016-09-30 2021-03-30 Intel Corporation Apparatus, system and method for dynamic modification of a graphical user interface
US20180160093A1 (en) 2016-12-05 2018-06-07 Sung-Yang Wu Portable device and operation method thereof
US11240487B2 (en) 2016-12-05 2022-02-01 Sung-Yang Wu Method of stereo image display and related device
US10469819B2 (en) * 2017-08-17 2019-11-05 Shenzhen China Star Optoelectronics Semiconductor Display Technology Co., Ltd Augmented reality display method based on a transparent display device and augmented reality display device
CN109427089B (en) * 2017-08-25 2023-04-28 微软技术许可有限责任公司 Mixed reality object presentation based on ambient lighting conditions
CA3075775A1 (en) 2017-09-21 2019-03-28 Becton, Dickinson And Company High dynamic range assays in hazardous contaminant testing
US11385146B2 (en) 2017-09-21 2022-07-12 Becton, Dickinson And Company Sampling systems and techniques to collect hazardous contaminants with high pickup and shedding efficiencies
CN111108214B (en) 2017-09-21 2023-10-13 贝克顿·迪金森公司 Dangerous pollutant collecting kit and rapid test
US11002642B2 (en) 2017-09-21 2021-05-11 Becton, Dickinson And Company Demarcation template for hazardous contaminant testing
AU2018337036B2 (en) 2017-09-21 2023-07-06 Becton, Dickinson And Company Augmented reality devices for hazardous contaminant testing
CN111108380B (en) 2017-09-21 2022-11-01 贝克顿·迪金森公司 Hazardous contaminant collection kit and rapid test
WO2019060266A1 (en) 2017-09-21 2019-03-28 Becton, Dickinson And Company Reactive demarcation template for hazardous contaminant testing
US10547790B2 (en) * 2018-06-14 2020-01-28 Google Llc Camera area locking
CN108924414B (en) * 2018-06-27 2019-12-10 维沃移动通信有限公司 Shooting method and terminal equipment
CN109040617B (en) * 2018-08-16 2021-01-15 五八有限公司 Image presentation method, device, equipment and computer readable storage medium
TWI719343B 2018-08-28 2021-02-21 Industrial Technology Research Institute Method and display system for information display
KR102164879B1 * 2018-11-20 2020-10-13 Korea Advanced Institute of Science and Technology Method and apparatus for providing user focus match type augmented reality
CN113170077A 2018-11-30 2021-07-23 Maxell, Ltd. Display device
WO2020152585A1 (en) 2019-01-21 2020-07-30 Insightness Ag Transparent smartphone
EP3918300A4 (en) 2019-01-28 2022-11-16 Becton, Dickinson and Company Hazardous contaminant collection device with integrated swab and test device
US11494953B2 (en) * 2019-07-01 2022-11-08 Microsoft Technology Licensing, Llc Adaptive user interface palette for augmented reality
EP4290878A3 (en) * 2019-07-23 2024-03-06 R-Go Robotics Ltd Techniques for co-optimization of motion and sensory control
US11106929B2 (en) * 2019-08-29 2021-08-31 Sony Interactive Entertainment Inc. Foveated optimization of TV streaming and rendering content assisted by personal devices
WO2021040107A1 (en) * 2019-08-30 LG Electronics Inc. AR device and method for controlling same
KR102359601B1 * 2019-11-29 2022-02-08 Korea Advanced Institute of Science and Technology Image processing method using transparent plate and apparatus for performing the same
US11538199B2 (en) * 2020-02-07 2022-12-27 Lenovo (Singapore) Pte. Ltd. Displaying a window in an augmented reality view
CN111552076B 2020-05-13 2022-05-06 GoerTek Technology Co., Ltd. Image display method, AR glasses and storage medium
GB2603947A (en) * 2021-02-22 2022-08-24 Nokia Technologies Oy Method and apparatus for depth estimation relative to a mobile terminal
CN114666557A (en) * 2022-03-31 2022-06-24 苏州三星电子电脑有限公司 Mobile computing device and image display method

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0415764A (en) * 1990-05-01 1992-01-21 Canon Inc Image processor
US6069608A (en) * 1996-12-03 2000-05-30 Sony Corporation Display device having perception image for improving depth perception of a virtual image
US6798429B2 (en) * 2001-03-29 2004-09-28 Intel Corporation Intuitive mobile device interface to virtual spaces
JP2006148347A (en) * 2004-11-17 2006-06-08 Fuji Photo Film Co Ltd Image display system
JP5017382B2 (en) * 2010-01-21 2012-09-05 株式会社コナミデジタルエンタテインメント Image display device, image processing method, and program
US8964298B2 (en) * 2010-02-28 2015-02-24 Microsoft Corporation Video display modification based on sensor input for a see-through near-to-eye display
JP5656246B2 (en) * 2010-09-30 2015-01-21 Necカシオモバイルコミュニケーションズ株式会社 Mobile terminal, camera magnification adjustment method and program
US8953022B2 (en) * 2011-01-10 2015-02-10 Aria Glassworks, Inc. System and method for sharing virtual and augmented reality scenes between users and viewers
JP4968389B2 (en) * 2011-01-14 2012-07-04 カシオ計算機株式会社 Imaging apparatus and program
US8754831B2 (en) * 2011-08-02 2014-06-17 Microsoft Corporation Changing between display device viewing modes
US20130083007A1 (en) * 2011-09-30 2013-04-04 Kevin A. Geisner Changing experience using personal a/v system
US8965741B2 (en) * 2012-04-24 2015-02-24 Microsoft Corporation Context aware surface scanning and reconstruction
JP5805013B2 (en) * 2012-06-13 2015-11-04 株式会社Nttドコモ Captured image display device, captured image display method, and program
US20140152558A1 (en) * 2012-11-30 2014-06-05 Tom Salter Direct hologram manipulation using imu
US9142019B2 (en) * 2013-02-28 2015-09-22 Google Technology Holdings LLC System for 2D/3D spatial feature processing
US20140267411A1 (en) * 2013-03-15 2014-09-18 Elwha Llc Indicating observation or visibility patterns in augmented reality systems
US9286727B2 (en) * 2013-03-25 2016-03-15 Qualcomm Incorporated System and method for presenting true product dimensions within an augmented real-world setting
US9390561B2 (en) * 2013-04-12 2016-07-12 Microsoft Technology Licensing, Llc Personal holographic billboard
US9256987B2 (en) * 2013-06-24 2016-02-09 Microsoft Technology Licensing, Llc Tracking head movement when wearing mobile device
US20150123966A1 (en) * 2013-10-03 2015-05-07 Compedia - Software And Hardware Development Limited Interactive augmented virtual reality and perceptual computing platform

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10112831A (en) * 1996-10-07 1998-04-28 Minolta Co Ltd Display method and display device for real space image and virtual space image
JP2003244726A (en) * 2002-02-20 2003-08-29 Canon Inc Image composite processing apparatus
US20110140994A1 (en) * 2009-12-15 2011-06-16 Noma Tatsuyoshi Information Presenting Apparatus, Method, and Computer Program Product
KR20120017293A (en) * 2010-08-18 2012-02-28 주식회사 팬택 Apparatus and method for providing augmented reality
US20120169846A1 (en) * 2010-12-30 2012-07-05 Altek Corporation Method for capturing three dimensional image

Also Published As

Publication number Publication date
US20160078680A1 (en) 2016-03-17
EP3195595A1 (en) 2017-07-26
KR102291461B1 (en) 2021-08-20
CN106662930A (en) 2017-05-10
KR20170031733A (en) 2017-03-21
US9934573B2 (en) 2018-04-03
EP3195595A4 (en) 2018-04-25
CN106662930B (en) 2020-03-13
JP2017525052A (en) 2017-08-31
EP3195595B1 (en) 2021-04-07

Similar Documents

Publication Publication Date Title
US9934573B2 (en) Technologies for adjusting a perspective of a captured image for display
CN106415445B (en) Techniques for viewer attention area estimation
US11747893B2 (en) Visual communications methods, systems and software
CN109074681B (en) Information processing apparatus, information processing method, and program
EP3047361B1 (en) A method and device for displaying a graphical user interface
US9785234B2 (en) Analysis of ambient light for gaze tracking
TWI610571B (en) Display method, system and computer-readable recording medium thereof
KR20160094190A (en) Apparatus and method for tracking an eye-gaze
US10957063B2 (en) Dynamically modifying virtual and augmented reality content to reduce depth conflict between user interface elements and video content
KR102450236B1 (en) Electronic apparatus, method for controlling thereof and the computer readable recording medium
US11076100B2 (en) Displaying images on a smartglasses device based on image data received from external camera
WO2015149611A1 (en) Image presentation control methods and image presentation control apparatuses
CN110895433B (en) Method and apparatus for user interaction in augmented reality
US11205309B2 (en) Augmented reality system and anchor display method thereof
US20240031551A1 (en) Image capturing apparatus for capturing a plurality of eyeball images, image capturing method for image capturing apparatus, and storage medium
US11726320B2 (en) Information processing apparatus, information processing method, and program
CN111857461B (en) Image display method and device, electronic equipment and readable storage medium
US10409464B2 (en) Providing a context related view with a wearable apparatus

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application
Ref document number: 15842700
Country of ref document: EP
Kind code of ref document: A1

ENP Entry into the national phase
Ref document number: 2017505822
Country of ref document: JP
Kind code of ref document: A

REEP Request for entry into the european phase
Ref document number: 2015842700
Country of ref document: EP

WWE WIPO information: entry into national phase
Ref document number: 2015842700
Country of ref document: EP

ENP Entry into the national phase
Ref document number: 20177003808
Country of ref document: KR
Kind code of ref document: A

NENP Non-entry into the national phase
Ref country code: DE