JP2014526157A - Classification of the total field of view of the head mounted display - Google Patents

Classification of the total field of view of the head mounted display

Info

Publication number
JP2014526157A
Authority
JP
Japan
Prior art keywords
hmd
wearer
hmd wearer
regions
tfov
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2014517099A
Other languages
Japanese (ja)
Inventor
Liu, James Chia-Ming
Latta, Stephen Gilchrist
Andrews, Anton
Vaught, Benjamin Isaac
Novak, Christopher Michael
Small, Sheridan Lee
Original Assignee
Microsoft Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US 13/167,113 (published as US20120327116A1)
Application filed by Microsoft Corporation
Priority to PCT/US2012/043178 (published as WO2012177657A2)
Publication of JP2014526157A
Application status: Pending

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory, with means for controlling the display position
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B 27/00 - Other optical systems; Other optical apparatus
    • G02B 27/0093 - Other optical systems; Other optical apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B 27/00 - Other optical systems; Other optical apparatus
    • G02B 27/01 - Head-up displays
    • G02B 27/017 - Head mounted
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B 27/00 - Other optical systems; Other optical apparatus
    • G02B 27/01 - Head-up displays
    • G02B 27/017 - Head mounted
    • G02B 27/0172 - Head mounted characterised by optical features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/37 - Details of the operation on graphic patterns
    • G09G 5/377 - Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B 27/00 - Other optical systems; Other optical apparatus
    • G02B 27/01 - Head-up displays
    • G02B 27/0179 - Display position adjusting means not related to the information to be displayed
    • G02B 2027/0187 - Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Abstract

  A virtual image is positioned for display in a head mounted display (HMD) to provide an augmented reality view to the HMD wearer. Sensor data may be collected from on-board sensors provided on the HMD. In addition, other data may be collected from external sources. Based on the collected sensor data and other data, the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment may be determined. After resolving the position of the HMD wearer's head, the HMD wearer's total field of view (TFOV) may be classified into regions. Virtual images may then be placed within the classified TFOV regions in order to position the virtual images relative to the HMD wearer's body and surrounding environment.

Description

  One embodiment of the present invention relates to classification of the total field of view of a head mounted display.

  [0001] See-through head-mounted displays (HMDs) provide the ability to augment what a wearer sees with virtual objects. That is, the HMD augments the HMD wearer's real-world field of view with virtual images to provide an augmented reality view. However, these virtual images can easily become distracting and intrusive, pulling attention away from the real world and occluding the HMD wearer's normal (unaugmented) field of view. This can present challenges, particularly for "always-on" wearable display applications, and especially in mobile, outdoor, and active scenarios.


  [0002] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

  [0003] Embodiments of the invention relate to placing a virtual image within a head mounted display (HMD) to provide an augmented reality view to an HMD wearer. Sensor data can be collected from on-board sensors provided in the HMD. In addition, other data can be collected from external sources. Based on the collected sensor data and other data, the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment may be determined. After resolving the position of the HMD wearer's head, the HMD wearer's full field of view (TFOV) can be classified into regions. The virtual image can then be placed within the classified TFOV region to place the virtual image relative to the HMD wearer's body and surrounding environment.

  [0004] The present invention is described in detail below with reference to the accompanying drawings.

[0005] FIG. 1 is a block diagram of an exemplary computing environment suitable for use in implementing embodiments of the present invention.
[0006] FIG. 2 is a block diagram of an exemplary system for classifying regions of a TFOV and placing virtual objects within the regions, in accordance with an embodiment of the present invention.
[0007] FIG. 3A is a side view of the TFOV of an HMD wearer classified into primary, secondary, and tertiary regions, in accordance with an embodiment of the present invention. FIG. 3B is a front view of the TFOV of an HMD wearer classified into primary, secondary, and tertiary regions, in accordance with an embodiment of the present invention.
[0008] FIG. 4 illustrates fixed regions of a TFOV for an HMD wearer, in accordance with an embodiment of the present invention.
[0009] FIG. 5 illustrates dynamic regions of a TFOV for an HMD wearer, in accordance with an embodiment of the present invention.
[0010] FIG. 6 illustrates a field of view of an HMD wearer with virtual objects placed in primary, secondary, and tertiary regions, in accordance with an embodiment of the present invention.
[0011] FIG. 7 is a flow diagram illustrating a method for classifying regions of the TFOV of an HMD wearer, in accordance with an embodiment of the present invention.
[0012] FIG. 8 is a flow diagram illustrating a method for displaying virtual objects using an HMD, in accordance with an embodiment of the present invention.

  [0013] The subject matter of the present invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms "step" and/or "block" may be used herein to connote different elements of the methods employed, the terms should not be interpreted as implying any particular order among or between the various steps herein disclosed, unless and except when the order of individual steps is explicitly described.

  [0014] Embodiments of the present invention are directed to the automatic and continuous placement and replacement of virtual images within the display of an HMD, relative to the total field of view (TFOV) of the HMD wearer, in order to provide an augmented reality view to the HMD wearer. As used herein, the "TFOV" encompasses the full range of rotation and translation of a human head. This is in contrast to the human field of view (FOV), which encompasses the extent of what a human sees at a given moment.

  [0015] According to embodiments of the present invention, sensor data is collected from any number of on-board sensors provided on the HMD. In addition, other data can be collected from sources external to the HMD. The sensor data and other data can be used to track and interpret the HMD wearer's physical head relative to the HMD wearer's physical body and the physical environment surrounding the HMD wearer. By tracking the physical head of the HMD wearer relative to the wearer's body and surrounding environment, the TFOV of the HMD wearer can be classified into various regions. These regions may be defined relative to the HMD wearer and the surrounding environment. In some embodiments, the regions can include a primary region and one or more non-primary regions. Virtual images may be placed in the TFOV according to the classified regions. Accordingly, embodiments of the present invention provide a system that is aware of the HMD wearer's head relative to the HMD wearer's body and surrounding environment, and that can use such knowledge to place virtual images relative to the HMD wearer's body and surrounding environment. In some embodiments, virtual objects may generally be placed away from the primary region, in non-intrusive areas, so that the virtual images are presented in a non-hazardous manner that reduces visual occlusion of the primary region. Virtual objects may also be placed away from the primary region to provide context (e.g., to classify information as less relevant or less important).
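
  The flow just described amounts to a per-frame loop: collect data, resolve the head pose, classify the TFOV, and place images. The following Python sketch illustrates that loop end to end with stand-in logic; every name, angle, and rule here is an illustrative assumption, not the patent's implementation.

```python
def read_sensor_data():
    # Stand-in for collecting on-board (IMU, GPS, depth, camera, eye-tracking)
    # and external sensor data; real values would come from device APIs.
    return {"pitch_deg": -40.0, "yaw_deg": 5.0}

def estimate_head_pose(data):
    # Stand-in for head tracking relative to the body and environment; here
    # the "pose" is just the raw head angles.
    return data["pitch_deg"], data["yaw_deg"]

def classify_tfov(pose):
    # Stand-in rule: looking far down exposes a body-anchored region, while
    # a central band remains the keep-clear primary region.
    pitch_deg, _yaw_deg = pose
    return "foot" if pitch_deg < -30.0 else "primary"

def place_virtual_images(images, region):
    # Keep the primary region unoccluded; attach other content to the region.
    return [] if region == "primary" else [(image, region) for image in images]

region = classify_tfov(estimate_head_pose(read_sensor_data()))
print(place_virtual_images(["navigation_panel"], region))
# -> [('navigation_panel', 'foot')]
```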

  [0016] Accordingly, in one aspect, embodiments of the invention are directed to one or more computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform a method. The method includes receiving sensor data from one or more HMD on-board sensors. The method also includes using the sensor data to determine the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment. The method further includes classifying two or more regions within the TFOV of the HMD wearer based on one or more predetermined rules and on the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment. The method still further includes placing a virtual image to be displayed by the HMD based on the classification of the two or more regions within the TFOV of the HMD wearer.

  [0017] In another embodiment, an aspect of the invention is directed to an HMD. The HMD includes one or more on-board sensors. The HMD also includes one or more processors configured to: determine the position and rotation of the HMD wearer's head relative to the HMD wearer's body based on sensor data from the one or more on-board sensors; classify two or more regions of the TFOV of the HMD wearer based on the position and rotation of the HMD wearer's head relative to the HMD wearer's body; and place one or more virtual objects within the two or more regions. The HMD further includes one or more display components configured to display at least one of the one or more virtual objects to provide an augmented view to the HMD wearer.

  [0018] A further embodiment is directed to a method for classifying regions of the TFOV of an HMD wearer. The method includes receiving sensor data from one or more sensors on-board the HMD, receiving other data from one or more sources external to the HMD, and, based on the sensor data and other data, continuously classifying regions of the HMD wearer's TFOV relative to the HMD wearer's body and surrounding environment.

  [0019] Having briefly described an overview of embodiments of the present invention, an exemplary operating environment in which embodiments of the present invention may be implemented is described below in order to provide a general context for various aspects of the present invention. Referring initially to FIG. 1 in particular, an exemplary operating environment for implementing embodiments of the present invention is shown and designated generally as computing device 100. The computing device 100 is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing device 100 be interpreted as having any dependency or requirement relating to any one component or any combination of the components illustrated.

  [0020] The invention may be described in the general context of computer code or machine-usable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules, including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. The invention may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, more specialized computing devices, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.

  [0021] With reference to FIG. 1, the computing device 100 includes a bus 110 that directly or indirectly couples the following devices: a memory 112, one or more processors 114, one or more presentation components 116, input/output (I/O) ports 118, input/output components 120, and an illustrative power supply 122. The bus 110 represents what may be one or more buses (such as an address bus, a data bus, or a combination thereof). Although the various blocks of FIG. 1 are shown with lines for the sake of clarity, in reality, delineating the various components is not so clear and, metaphorically, the lines would more accurately be grey and fuzzy. For example, a presentation component such as a display device can be considered an I/O component. Also, processors have memory. The inventors recognize that such is the nature of the art, and reiterate that the diagram of FIG. 1 is merely illustrative of an exemplary computing device that can be used in connection with one or more embodiments of the present invention. No distinction is made between such categories as "workstation," "server," "laptop," and "handheld device," as all are contemplated within the scope of FIG. 1 and reference to a "computing device."

  [0022] The computing device 100 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computing device 100 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the computing device 100. Communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.

  [0023] The memory 112 includes computer storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical disk drives, and the like. The computing device 100 includes one or more processors that read data from various entities, such as the memory 112 or the I/O components 120. The presentation component(s) 116 present data indications to a user or other device. Exemplary presentation components include display devices, speakers, printing components, vibration components, and the like.

  [0024] The I/O ports 118 allow the computing device 100 to be logically coupled to other devices including the I/O components 120, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, and the like.

  [0025] Referring now to FIG. 2, a block diagram is provided illustrating a system 200 for classifying regions of the TFOV of an HMD wearer and placing virtual objects within the classified regions in accordance with an embodiment of the present invention. It should be understood that this and other arrangements described herein are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) can be used in addition to or instead of those shown, and some elements may be omitted altogether. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components, or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by one or more entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory.

  [0026] As shown in FIG. 2, the system 200 includes an HMD 202. Although not shown, the HMD 202 can include components such as those described above with reference to the general computing device 100 of FIG. 1, including a memory, a processor, computer-readable media, input/output components, and a power supply. The HMD 202 can be provided in any of a number of different form factors, including, for example, glasses, goggles, or a helmet.

  [0027] The HMD 202 can generally provide the wearer with an augmented view of the real world by augmenting the wearer's real-world view with computer-generated virtual images. To provide the augmented view, the HMD 202 can include a display component 204 that displays computer-generated virtual images while still allowing the wearer to view the real world. In some embodiments, this can include heads-up display (HUD) technology, which can employ, for example, any type of projection or microdisplay technology to provide the virtual images. Other technologies may also be employed, such as retinal displays, in which images are projected directly onto the wearer's retina while the wearer views the real world.

  [0028] The HMD 202 may also include a wireless communication component 206 that provides the HMD 202 with wireless communication functionality, enabling the HMD 202 to communicate with a companion device (e.g., a smartphone), a server device, or another network component. For example, some embodiments of the invention may be implemented by having a cloud-based service assist the HMD 202 in classifying regions of the HMD wearer's TFOV and placing virtual objects within those regions. In some embodiments, the HMD 202 may be configured to communicate directly with a server device, while in other embodiments the HMD 202 may communicate with the server device via a companion device (e.g., a smartphone or other device) local to the HMD wearer.

  [0029] The HMD 202 may also include a number of on-board sensors 208 that provide position and other sensor data. Any of a variety of different types of on-board sensors 208 may be included in the HMD 202 in accordance with various embodiments of the present invention. Generally, any sensor that allows the system to determine the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment, the wearer's eye position, or other useful state information may be employed. By way of example only and not limitation, the on-board sensors 208 can include GPS sensors, inertial measurement unit (IMU) sensors, depth sensors, cameras, eye-tracking sensors, microphones, biometric sensors, and other types of sensors.

  [0030] An IMU on the HMD 202 can measure inertial acceleration and can incorporate the functionality of accelerometers, gyroscopes, magnetometers, and other devices. In one embodiment, the IMU provides inertial acceleration measurements with six degrees of freedom. Data from the IMU can provide relative motion and rotation information that can be used with other data to estimate an absolute position in space. The additional data can include information from a GPS sensor, which can provide macro-level location information. In some embodiments, the HMD 202 can employ assisted GPS (A-GPS).
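
  As a rough illustration of how fine-grained relative motion from the IMU can be anchored by macro-level GPS fixes, the sketch below uses a simple complementary filter; the function, the blend weight, and the assumption of gravity-compensated world-frame acceleration are all illustrative rather than taken from the patent.

```python
import numpy as np

def fuse_position(prev_pos, prev_vel, accel_world, dt, gps_pos, alpha=0.98):
    """Dead-reckon from IMU acceleration, then blend toward the GPS fix so
    that integration drift stays bounded at the macro level.

    accel_world: gravity-compensated acceleration in the world frame (m/s^2).
    alpha: weight on the IMU estimate; (1 - alpha) pulls toward GPS.
    """
    vel = prev_vel + accel_world * dt            # integrate acceleration -> velocity
    pos = prev_pos + vel * dt                    # integrate velocity -> position
    pos = alpha * pos + (1.0 - alpha) * gps_pos  # anchor drift to the GPS fix
    return pos, vel

pos, vel = np.zeros(3), np.zeros(3)
pos, vel = fuse_position(pos, vel, np.array([0.1, 0.0, 0.0]), 0.01,
                         np.array([0.5, 0.0, 0.0]))
```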

  [0031] Depth sensors and cameras may be employed on the HMD 202 to collect data regarding the environment surrounding the HMD wearer. A depth sensor generally includes a sensor that determines distance to objects. In some implementations, a depth sensor can include an infrared (IR) sensor that captures IR light emitted from a transmission source and reflected off objects. Distance data can be determined using time-of-flight, triangulation, or other known principles. Cameras may be IR or visible-spectrum, black-and-white, or red-green-blue (RGB) cameras. In some embodiments, the parallax between images from two different cameras can be used to measure depth, much as two human eyes provide depth perception.
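
  Both depth principles mentioned above reduce to short formulas, shown in the sketch below; the focal length, baseline, and disparity in the example call are invented for illustration.

```python
LIGHT_SPEED_M_S = 299_792_458.0

def depth_from_time_of_flight(round_trip_seconds):
    # The IR pulse travels to the object and back, so halve the path length.
    return LIGHT_SPEED_M_S * round_trip_seconds / 2.0

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    # Stereo triangulation: Z = f * B / d, the same geometry that gives a
    # pair of human eyes depth perception.
    if disparity_px <= 0:
        raise ValueError("no disparity: object unmatched or at infinity")
    return focal_length_px * baseline_m / disparity_px

print(depth_from_disparity(focal_length_px=700.0, baseline_m=0.06, disparity_px=21.0))
# -> 2.0 (meters)
```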

  [0032] The HMD 202 may also include one or more eye-tracking sensors, which generally track the movement of the eye's pupil or other portions of the eye, or of the area around the eye, to determine the direction of the user's gaze. This can be accomplished, for example, using an IR or RGB camera aimed at the HMD wearer's eyes. A microphone may be provided to collect audio information, including the HMD wearer's voice and ambient sounds from the surrounding environment. Biometric sensors can be used to collect biometric information about the HMD wearer.

  [0033] Sensor data from on-board sensor 208 may be provided to a virtual imaging system 210 for processing. In some embodiments of the present invention, the virtual imaging system 210 can be provided in the HMD 202. In other embodiments, the virtual imaging system 210 may be provided by a device external to the HMD 202, such as a server device or other network component. In yet other embodiments, virtual imaging system 210 may be distributed across multiple devices (eg, HMD 202 and external devices). Any and all such variations are considered to be within the scope of embodiments of the present invention.

  [0034] The virtual imaging system 210 includes a head position component 212 that is operable to receive sensor data from the on-board sensors 208 and determine the position and rotation of the HMD wearer's head relative to the HMD wearer's body and the environment surrounding the HMD wearer. By way of example only and not limitation, the head position component 212 can employ techniques such as simultaneous localization and mapping (SLAM), using, for example, camera data and/or depth sensor data, to provide a real-time position of the HMD wearer's head relative to the mapped surrounding environment. IMU data can also provide relative rotation and position information, even when a camera or depth sensor is unavailable. Parts of the HMD wearer's body (e.g., hands, arms, torso, legs, feet, etc.) may be identified using the sensor data. For example, camera data and/or depth sensor data can be captured while the HMD wearer is looking at himself or herself. In this way, the position and rotation of the HMD wearer's head relative to the HMD wearer's body can be estimated to a useful degree of accuracy. This may include information such as whether the HMD wearer is standing, sitting, or facing straight ahead relative to the torso, to name a few examples.

  [0035] The head position component 212 may also receive data from sensors external to the HMD 202 and from other sources 220. By way of example only and not limitation, the other sources 220 may include external cameras, external depth sensors (e.g., a KINECT® sensor), other HMDs, mobile devices, and historical sensor data stored remotely from the HMD. In general, any external source of information that allows the system to determine the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment may be employed.

  [0036] Data from various on-board sensors 208 and external sources 220 can generally provide redundancy and refinement. However, it should be understood that not all sensors are required. Any combination of the sensors shown herein, as well as other sensors and sources of information, can be used within the scope of embodiments of the present invention.

  [0037] After the head position component 212 has resolved the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment, the TFOV classification component 214 classifies regions of the HMD wearer's TFOV. In accordance with embodiments of the present invention, the TFOV classification component 214 classifies the regions of the HMD wearer's TFOV according to various rules. The rules may be user-defined or system-defined. Additionally, the rules may be immutable, may be dynamic based on various inputs regarding the HMD wearer's current situation and environment, or may be alterable by the HMD wearer based on what the HMD wearer is currently doing, in order to provide the best experience for the HMD wearer.

  [0038] In some embodiments of the present invention, the regions may include a primary region and any number of additional non-primary regions. The primary region may correspond to a primary area within the HMD wearer's FOV. Generally, this corresponds to an area that should not be occluded by virtual objects. The non-primary regions correspond to areas that may be occluded by virtual objects.

  [0039] As an example, FIGS. 3A and 3B show a side view and a front view, respectively, of a TFOV 302 for an HMD wearer 304. As shown in FIGS. 3A and 3B, the TFOV 302 is classified into a primary region 306, a secondary region 308, and a tertiary region 310. The classifications shown in FIGS. 3A and 3B are provided by way of example only, and a TFOV may generally be classified into any number of regions. The regions may be symmetric or asymmetric. Additionally, the regions may be divided vertically, horizontally, or in any combination thereof. In some embodiments, the regions may be classified relative to the body or body parts of the HMD wearer. For example, the regions may include an upper-left body region, an upper-right body region, a lower-left body region, and a lower-right body region.
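
  One simple way to realize a banded classification like that of FIGS. 3A and 3B is to bucket each direction in the TFOV by its angular offset from the straight-ahead axis, as in the sketch below; the threshold angles are assumptions made for illustration and are not specified by the patent.

```python
import math

def classify_offset(pitch_deg, yaw_deg,
                    primary_half_angle=15.0, secondary_half_angle=40.0):
    # Label a direction by its angular distance from straight ahead.
    offset = math.hypot(pitch_deg, yaw_deg)
    if offset <= primary_half_angle:
        return "primary"
    if offset <= secondary_half_angle:
        return "secondary"
    return "tertiary"

print(classify_offset(10, 5), classify_offset(-35, 0), classify_offset(80, 20))
# -> primary secondary tertiary
```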

  [0040] The classification of TFOV regions may be fixed or may change dynamically. By way of illustration, examples of fixed and dynamic regions are provided in FIGS. 4 and 5. First, FIG. 4 shows an example of fixed regions. As shown in FIG. 4, the regions remain fixed as the HMD wearer moves his or her head to look up and down. In contrast, FIG. 5 shows an example of dynamic regions. As shown in FIG. 5, as the HMD wearer moves his or her head up and down, the regions move with the movement of the HMD wearer's head.

  [0041] The TFOV regions may change by automatically adjusting the size of the regions (i.e., expanding or shrinking), shifting the regions within the TFOV, or completely reclassifying the TFOV with new regions. The regions can change automatically based on rule-triggering events, environmental conditions, body placement and movement, and additional sensor information. For example, while the HMD wearer is sitting, the primary region may be relatively small, because the HMD wearer is stationary and it is safer to occlude more of the HMD wearer's FOV. However, when the HMD wearer stands up and begins to walk, the primary region may be expanded so that less of the HMD wearer's FOV is occluded by virtual images. If the HMD wearer begins to run or drive a car, the primary region may be expanded even further. As an example of reclassifying regions, suppose the HMD wearer's eyes are looking toward a secondary region of the TFOV and the wearer's hands are moving into that secondary region. This may indicate that the HMD wearer is performing some task within that space. In response to these inputs, the TFOV may be reclassified so that that region becomes the primary region. Other environmental conditions can also affect the classification of regions. For example, biometric inputs may indicate that the HMD wearer is nervous or scared. In response, the primary region can be expanded.
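
  A minimal sketch of such automatic resizing, keyed to the wearer's activity and biometric state; the activities, half-angles, and stress adjustment below are illustrative assumptions.

```python
# Illustrative half-angles (degrees) for the keep-clear primary region. The
# stiller the wearer, the more of the FOV it is safe to occlude, so the
# primary region can shrink.
PRIMARY_HALF_ANGLE = {"sitting": 10.0, "walking": 25.0, "running": 40.0, "driving": 55.0}

def primary_half_angle(activity, wearer_is_stressed=False):
    angle = PRIMARY_HALF_ANGLE.get(activity, 25.0)
    if wearer_is_stressed:   # e.g. biometrics indicate nervousness or fear
        angle += 10.0
    return angle

print(primary_half_angle("sitting"))                            # -> 10.0
print(primary_half_angle("driving", wearer_is_stressed=True))   # -> 65.0
```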

  [0042] In accordance with some embodiments of the present invention, a combination of fixed regions and dynamically changing regions may be employed over time by using rules that control whether fixed or dynamic regions are used at any given time. In particular, some circumstances may dictate the use of fixed regions while other circumstances dictate the use of dynamic regions. For example, the regions may initially be fixed for a given period of time. When a particular input is received, the rules may then dictate that the regions change dynamically. As a specific example, while an HMD wearer is sitting, the regions may remain fixed. However, once the HMD wearer stands up, rules may be triggered to dynamically change the regions as the wearer moves about the space. As such, the region classifications may be contextually adaptable.

  [0043] After the regions of the TFOV have been classified, the virtual object placement component 216 can place virtual objects in the various regions according to the classifications. Generally, virtual objects may be placed in the regions according to defined rules. The rules may be user-defined or system-defined. Additionally, the rules may be immutable, may be dynamic based on various inputs regarding the HMD wearer's current situation and environment, or may be alterable by the HMD wearer based on what the HMD wearer is currently doing, in order to provide the best experience for the HMD wearer. In some embodiments, this may include placing most virtual objects in non-primary regions of the TFOV to prevent occluding the primary region. In some instances, only notifications, less intrusive virtual objects, or more important virtual objects may be placed in the primary region.
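
  The placement rules described here can be sketched as a simple routing function; the object attributes and the importance cutoff are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VirtualObject:
    name: str
    is_notification: bool = False
    importance: int = 0        # higher means more important

def place(objects, importance_cutoff=5):
    # Route most objects to non-primary regions; admit only notifications
    # and sufficiently important objects into the primary region.
    placements = {}
    for obj in objects:
        if obj.is_notification or obj.importance >= importance_cutoff:
            placements[obj.name] = "primary"
        else:
            placements[obj.name] = "secondary"  # or tertiary, per further rules
    return placements

print(place([VirtualObject("new-mail", is_notification=True),
             VirtualObject("video-panel")]))
# -> {'new-mail': 'primary', 'video-panel': 'secondary'}
```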

  [0044] By way of example, FIG. 6 illustrates a field of view provided to an HMD wearer in accordance with an embodiment of the present invention. As shown in FIG. 6, the HMD wearer is currently looking at another person 602. The HMD wearer's FOV covers the primary region 604, secondary region 606, and tertiary region 608 of the TFOV. It should be noted that the dashed lines and region labels in FIG. 6 are provided for illustrative purposes and are not displayed to the HMD wearer. Instead, only the virtual objects 610 are displayed. As shown in FIG. 6, only a smaller notification virtual object 610 is displayed in the primary region 604, while larger, more intrusive virtual objects 610 are displayed in the secondary region 606 and the tertiary region 608. As a result, the primary region 604 remains generally unoccluded by virtual objects.

  [0045] In various embodiments of the present invention, permission sets may be employed to manage the placement of content within the TFOV. These can include universal rules across applications or situations that dictate that specific content is, or is not, placed in particular regions. TFOV permissions may also change based on state (e.g., the rules for placing virtual objects while the user is driving may differ from the rules for placing virtual objects while the user is in the living room). There may also be permission sets that change based on the classification of the application being executed by the system. The interplay among these universal, state-based, and application-based permissions drives the content, and the placement of the content, displayed within the TFOV.
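
  One way to model the interplay of universal, state-based, and application-based permissions is as intersecting layers, as in this sketch; the layer structure and the "video" example are assumptions made for illustration.

```python
def allowed_regions(content_kind, universal, state, application):
    # Start from the universal rule for this kind of content (defaulting to
    # all regions), then intersect with the state-based and application-based
    # layers; a layer without an entry leaves the decision to the others.
    allowed = set(universal.get(content_kind, {"primary", "secondary", "tertiary"}))
    for layer in (state, application):
        if content_kind in layer:
            allowed &= set(layer[content_kind])
    return allowed

universal = {"video": {"secondary", "tertiary"}}
driving_state = {"video": {"tertiary"}}   # stricter while the user is driving
print(allowed_regions("video", universal, driving_state, {}))  # -> {'tertiary'}
```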

  [0046] In embodiments in which regions are classified relative to the HMD wearer's body, virtual objects may be placed within the regions in order to locate those virtual objects relative to the HMD wearer's body. For example, a region around the HMD wearer's feet may be classified as a foot region, and a particular virtual object may be placed within that region in order to locate the virtual object near the HMD wearer's feet. As another example, an area around the HMD wearer's torso may be classified as a torso region, and a virtual desktop containing information the HMD wearer may wish to access may be placed within that region. In this way, the HMD wearer can view the information by looking down at the virtual desktop around his or her torso.

  [0047] In some instances, virtual objects may include static objects that are not interactive. For example, a virtual object may simply display information or graphics for viewing by the HMD wearer. In other instances, a virtual object may be a user interface (UI) object with which the HMD wearer can interact using gestures, voice commands, and/or other forms of input.

  [0048] Because the TFOV extends beyond the HMD wearer's FOV, only a portion of the virtual objects in the TFOV can be seen by the HMD wearer at any given time. Accordingly, the virtual image display component 218 may be operable to determine the HMD wearer's current FOV within the TFOV and to display virtual objects at appropriate locations within the HMD wearer's FOV. For example, if a virtual object is placed in a region of the TFOV near the HMD wearer's feet, the virtual object is not displayed while the HMD wearer is looking up. However, if the HMD wearer looks down, the virtual image display component 218 determines that the HMD wearer's FOV is over the foot region and causes the virtual object to be displayed at the appropriate location. It should be noted that, in some embodiments, the extent of the HMD wearer's FOV may simply be considered to correspond to the extent of the display area of the display component 204 of the HMD 202.
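
  A sketch of the per-frame visibility test the virtual image display component 218 might perform; the region names and placements are illustrative.

```python
def visible_objects(placements, regions_in_fov):
    # Draw only the objects whose region the wearer's current FOV covers;
    # objects in other TFOV regions (e.g. at the feet while the wearer is
    # looking up) are simply skipped this frame.
    return [name for name, region in placements.items() if region in regions_in_fov]

placements = {"virtual_desktop": "torso", "alert": "primary"}
print(visible_objects(placements, {"primary"}))           # -> ['alert']
print(visible_objects(placements, {"primary", "torso"}))  # wearer looks down
# -> ['virtual_desktop', 'alert']
```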

  [0049] In some embodiments, the HMD's FOV may extend beyond the HMD wearer's FOV. For example, depth sensors, cameras, and other on-board HMD sensors, as well as external sensors, may be able to capture data outside the HMD wearer's FOV. As such, the HMD's FOV can be defined by the extent covered by the various sensors and can extend well beyond the HMD wearer's FOV. Some embodiments can take advantage of the HMD's FOV by allowing the HMD wearer to interact with a virtual object that is not currently displayed to the HMD wearer because it is outside the HMD wearer's FOV, as long as the virtual object is located within the HMD's FOV. For example, suppose an interactive virtual object is placed in a region adjacent to the HMD wearer's right hip. Even if the HMD wearer is looking elsewhere and the virtual object is not being displayed, the HMD wearer can still interact with the virtual object, as long as the region adjacent to the HMD wearer's right hip in which the virtual object is placed lies within the HMD's FOV, so that the wearer's interaction with the virtual object can be captured.
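
  A sketch of how such interaction might be gated on sensor coverage rather than on display visibility; all names here are hypothetical.

```python
def handle_gesture(gesture_region, placements, sensed_regions, displayed):
    # Permit interaction with an object that is not currently displayed, as
    # long as the sensors still observe the region in which it is placed.
    for name, region in placements.items():
        if region == gesture_region and region in sensed_regions:
            if name not in displayed:
                print(f"interacting with undisplayed object: {name}")
            return name
    return None

placements = {"music-controls": "right-hip"}
print(handle_gesture("right-hip", placements, {"right-hip"}, displayed=set()))
# -> interacting with undisplayed object: music-controls, then 'music-controls'
```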

  [0050] Referring now to FIG. 7, a flow diagram is provided illustrating a method 700 for classifying regions of the TFOV of an HMD wearer in accordance with an embodiment of the present invention. The method 700 may be performed, for example, by an HMD, by a device external to the HMD, or by a combination thereof. As shown at block 702, data is received from on-board sensors provided on the HMD. As discussed above, the on-board sensors can include, without limitation, GPS sensors, inertial measurement unit (IMU) sensors, depth sensors, cameras, eye-tracking sensors, microphones, biometric sensors, and other sensors. Additionally, data may be received from sources external to the HMD, as shown at block 704. By way of example only and not limitation, this may include external cameras, external depth sensors, other HMDs, mobile devices, and historical sensor data stored remotely from the HMD.

  [0051] As shown at block 706, the position and rotation of the HMD wearer's head are determined relative to the HMD wearer's body and surrounding environment based on the sensor data from the on-board sensors and the other data from external sources. Using that information, the HMD wearer's TFOV is classified into two or more regions, as shown at block 708. In accordance with embodiments of the present invention, the TFOV is classified according to immutable or dynamic rules that may be user-defined or system-defined. As shown at block 710, virtual objects are placed in the regions of the TFOV. Any number of rules may be defined for placing the various virtual objects in the regions. These rules may likewise be user-defined or system-defined and may be immutable or dynamic. In some embodiments, the system can take into account the application being accessed when determining where to place the various virtual objects. As represented by the return to blocks 702 and 704, the process of classifying regions of the TFOV may be repeated. In some embodiments, one or more rules may trigger reclassification of the TFOV regions, while in other embodiments the TFOV may be continuously reclassified.

  [0052] Turning to FIG. 8, a flow diagram is provided illustrating a method 800 for displaying virtual objects using an HMD in accordance with an embodiment of the present invention. As shown at block 802, the FOV of the HMD wearer is initially determined. In some embodiments, the extent of the HMD wearer's FOV may simply correspond to the extent of the display area provided by the display component of the HMD. As shown at block 804, the HMD wearer's FOV is compared to the regions of the TFOV, which may be determined, for example, using the method 700 described above with reference to FIG. 7. The comparison identifies the location of the HMD wearer's FOV relative to the TFOV regions.

  [0053] The virtual objects to display are determined at block 806 based on the location of the HMD wearer's FOV relative to the TFOV regions and the placement of virtual objects within the TFOV regions. In some embodiments, permission sets may be employed at this point to govern which virtual objects are used, for example, based on the user's current state or the particular application(s) involved. The identified virtual objects are then displayed at the appropriate locations using the display component of the HMD, as shown at block 808.
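
  Blocks 802 through 808 can be read as one compact filtering pass, sketched below with illustrative placements and a state-based permission entry.

```python
def display_frame(fov_regions, placements, permitted):
    # Keep an object if its region lies in the wearer's FOV (blocks 802-804)
    # and the permission sets allow it there (block 806); the result is what
    # the display component renders (block 808). An object with no permission
    # entry is allowed in its assigned region by default.
    return [name for name, region in placements.items()
            if region in fov_regions and region in permitted.get(name, {region})]

placements = {"alert": "primary", "video": "tertiary"}
permitted = {"video": {"tertiary"}}   # e.g. a state-based permission
print(display_frame({"primary", "tertiary"}, placements, permitted))
# -> ['alert', 'video']
```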

  [0054] As can be understood, embodiments of the present invention provide for classifying regions of the TFOV of an HMD wearer based on sensor inputs and rules, and for placing virtual objects within the classified regions. The present invention has been described in relation to particular embodiments, which are intended in all respects to be illustrative rather than restrictive. Alternative embodiments will become apparent to those of ordinary skill in the art to which the present invention pertains without departing from its scope.

  [0055] From the foregoing, it will be seen that this invention is well adapted to attain all the ends and objects set forth above, together with other advantages that are obvious and inherent to the system and method. It will be understood that certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations. This is contemplated by and is within the scope of the claims.

Claims (10)

  1. One or more computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform a method, the method comprising:
    receiving sensor data from one or more head mounted display (HMD) on-board sensors;
    using the sensor data to determine the position and rotation of an HMD wearer's head relative to the HMD wearer's body and an environment surrounding the HMD wearer;
    classifying two or more regions within a total field of view (TFOV) of the HMD wearer based on one or more predetermined rules and the position and rotation of the HMD wearer's head relative to the HMD wearer's body and the environment surrounding the HMD wearer; and
    placing a virtual image to be displayed by the HMD based on classifying the two or more regions within the TFOV of the HMD wearer.
  2. The one or more computer storage media of claim 1, wherein the method further comprises:
    receiving other data from one or more sources external to the HMD; and
    using the other data in conjunction with the sensor data to determine the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment.
  3.   The one or more computer storage media of claim 1, wherein the two or more regions change dynamically within the TFOV of the HMD wearer.
  4. The one or more computer storage media of claim 1, wherein the two or more regions include a primary region and one or more non-primary regions, and wherein placing the virtual image comprises placing the virtual image so as to reduce occlusion of the primary region.
  5. The one or more computer storage media of claim 1, wherein classifying the two or more regions within the TFOV of the HMD wearer comprises classifying at least one of the regions relative to the HMD wearer's body, and wherein placing the virtual image to be displayed by the HMD comprises placing at least one virtual image relative to the HMD wearer's body based on the at least one of the regions being classified relative to the HMD wearer's body.
  6. The one or more computer storage media of claim 1, wherein the method further comprises:
    determining a field of view (FOV) of the HMD wearer;
    comparing the FOV to the two or more regions of the TFOV of the HMD wearer;
    determining a subset of virtual objects to display based on the position of the FOV relative to the two or more regions of the TFOV of the HMD wearer; and
    displaying the subset of virtual objects via a display component of the HMD.
  7. The one or more computer storage media of claim 1, wherein the method further comprises: identifying a user interaction with an interactive virtual image located within a first region of the two or more regions, the first region being outside the HMD wearer's current field of view; and performing an action in response to the user interaction with the interactive virtual image.
  8. A head mounted display (HMD) comprising:
    one or more on-board sensors;
    one or more processors configured to determine the position and rotation of an HMD wearer's head relative to the HMD wearer's body based on sensor data from the one or more on-board sensors, classify two or more regions of a total field of view (TFOV) of the HMD wearer based on the position and rotation of the HMD wearer's head relative to the HMD wearer's body, and place one or more virtual objects within the two or more regions; and
    one or more display components configured to display at least one of the one or more virtual objects to provide an augmented view to the HMD wearer.
  9. The HMD of claim 8, further comprising a wireless communication component configured to receive other data from one or more sources external to the HMD, wherein the one or more processors are further configured to determine the position and rotation of the HMD wearer's head relative to the HMD wearer's body based on the other data from the one or more external sources.
  10. A method for classifying regions of a total field of view (TFOV) of a head mounted display (HMD) wearer, the method comprising:
    receiving sensor data from one or more sensors on-board the HMD;
    receiving other data from one or more sources external to the HMD; and
    based on the sensor data and other data, continuously classifying regions of the HMD wearer's TFOV relative to the HMD wearer's body and surrounding environment.
JP2014517099A 2011-06-23 2012-06-19 Classification of the total field of view of the head mounted display Pending JP2014526157A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US13/167,113 US20120327116A1 (en) 2011-06-23 2011-06-23 Total field of view classification for head-mounted display
US13/167,113 2011-06-23
PCT/US2012/043178 WO2012177657A2 (en) 2011-06-23 2012-06-19 Total field of view classification for head-mounted display

Publications (1)

Publication Number Publication Date
JP2014526157A (en) 2014-10-02

Family

ID=47361426

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2014517099A Pending JP2014526157A (en) 2011-06-23 2012-06-19 Classification of the total field of view of the head mounted display

Country Status (7)

Country Link
US (3) US20120327116A1 (en)
EP (1) EP2724191A4 (en)
JP (1) JP2014526157A (en)
KR (1) KR20140034252A (en)
CN (1) CN103635849A (en)
TW (1) TW201303640A (en)
WO (1) WO2012177657A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9977241B2 (en) 2015-03-17 2018-05-22 Seiko Epson Corporation Head-mounted display device, control method for head-mounted display device, and computer program
JP2018129054A (en) * 2013-06-07 2018-08-16 ソニー インタラクティブ エンタテインメント アメリカ リミテッド ライアビリテイ カンパニー Systems and methods for reducing hops associated with head-mounted system

Families Citing this family (118)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9965681B2 (en) 2008-12-16 2018-05-08 Osterhout Group, Inc. Eye imaging in head worn computing
US9727132B2 (en) * 2011-07-01 2017-08-08 Microsoft Technology Licensing, Llc Multi-visor: managing applications in augmented reality environments
US9606992B2 (en) * 2011-09-30 2017-03-28 Microsoft Technology Licensing, Llc Personal audio/visual apparatus providing resource management
JP5993127B2 (en) * 2011-10-25 2016-09-14 オリンパス株式会社 Head-mounted display device, information terminal, program, information storage medium, image processing system, head-mounted display device control method, and information terminal control method
US8963805B2 (en) * 2012-01-27 2015-02-24 Microsoft Corporation Executable virtual objects associated with real objects
US8970495B1 (en) * 2012-03-09 2015-03-03 Google Inc. Image stabilization for color-sequential displays
CN104520905A (en) * 2012-07-27 2015-04-15 日本电气方案创新株式会社 Three-dimensional environment sharing system, and three-dimensional environment sharing method
KR101989893B1 (en) * 2012-10-29 2019-09-30 엘지전자 주식회사 A Head Mounted Display and A Method of Outputting Audio Signal Using the Same
US9619911B2 (en) 2012-11-13 2017-04-11 Qualcomm Incorporated Modifying virtual object display properties
JP6036209B2 (en) * 2012-11-19 2016-11-30 セイコーエプソン株式会社 Virtual image display device
WO2014116826A1 (en) * 2013-01-24 2014-07-31 The Trustees Of Columbia University In The City Of New York Mobile, neurally-assisted personal assistant
WO2014164901A1 (en) 2013-03-11 2014-10-09 Magic Leap, Inc. System and method for augmented and virtual reality
CN107656618A (en) 2013-03-15 2018-02-02 奇跃公司 Display system and method
US10195058B2 (en) 2013-05-13 2019-02-05 The Johns Hopkins University Hybrid augmented reality multimodal operation neural integration environment
US9367960B2 (en) 2013-05-22 2016-06-14 Microsoft Technology Licensing, Llc Body-locked placement of augmented reality objects
EP3007048A4 (en) * 2013-05-29 2017-01-25 Mitsubishi Electric Corporation Information display device
US9063330B2 (en) 2013-05-30 2015-06-23 Oculus Vr, Llc Perception based predictive tracking for head mounted displays
US10137361B2 (en) 2013-06-07 2018-11-27 Sony Interactive Entertainment America Llc Systems and methods for using reduced hops to generate an augmented virtual reality scene within a head mounted system
JP6313432B2 (en) * 2013-06-09 2018-04-18 株式会社ソニー・インタラクティブエンタテインメント Head mounted display
US9256987B2 (en) 2013-06-24 2016-02-09 Microsoft Technology Licensing, Llc Tracking head movement when wearing mobile device
CN103353663B (en) 2013-06-28 2016-08-10 北京智谷睿拓技术服务有限公司 Adjusting the image forming apparatus and method
CN103353667B (en) 2013-06-28 2015-10-21 北京智谷睿拓技术服务有限公司 Adjusting the image forming apparatus and method
JP2015015563A (en) * 2013-07-04 2015-01-22 セイコーエプソン株式会社 Image display device
CN103431840B (en) 2013-07-31 2016-01-20 北京智谷睿拓技术服务有限公司 Ocular optical parameter detection system and method
CN103431980A (en) 2013-08-22 2013-12-11 北京智谷睿拓技术服务有限公司 Eyesight protection imaging system and method
KR20150025116A (en) * 2013-08-28 2015-03-10 엘지전자 주식회사 Apparatus and Method for Portable Device transmitting marker information for videotelephony of Head Mounted Display
CN103605208B (en) 2013-08-30 2016-09-28 北京智谷睿拓技术服务有限公司 SUMMARY projection system and method
US9448689B2 (en) * 2013-08-30 2016-09-20 Paypal, Inc. Wearable user device enhanced display system
CN103500331B (en) 2013-08-30 2017-11-10 北京智谷睿拓技术服务有限公司 Based reminding method and device
CN103558909B (en) * 2013-10-10 2017-03-29 北京智谷睿拓技术服务有限公司 Interactive display projection method and a projection interactive display system
US9679144B2 (en) * 2013-11-15 2017-06-13 Microsoft Technology Licensing, Llc Protecting privacy in web-based immersive augmented reality
US20150153826A1 (en) * 2013-12-01 2015-06-04 Apx Labs, Llc Systems and methods for providing a virtual menu
EP3078019A4 (en) * 2013-12-03 2017-06-14 Nokia Technologies Oy Display of information on a head mounted display
WO2015099215A1 (en) * 2013-12-24 2015-07-02 엘지전자 주식회사 Head-mounted display apparatus and method for operating same
US10254856B2 (en) 2014-01-17 2019-04-09 Osterhout Group, Inc. External user interface for head worn computing
US9939934B2 (en) 2014-01-17 2018-04-10 Osterhout Group, Inc. External user interface for head worn computing
US10001645B2 (en) 2014-01-17 2018-06-19 Sony Interactive Entertainment America Llc Using a second screen as a private tracking heads-up display
US20150205135A1 (en) 2014-01-21 2015-07-23 Osterhout Group, Inc. See-through computer display systems
US9298007B2 (en) 2014-01-21 2016-03-29 Osterhout Group, Inc. Eye imaging in head worn computing
US9494800B2 (en) 2014-01-21 2016-11-15 Osterhout Group, Inc. See-through computer display systems
US9952664B2 (en) 2014-01-21 2018-04-24 Osterhout Group, Inc. Eye imaging in head worn computing
US20150241963A1 (en) 2014-02-11 2015-08-27 Osterhout Group, Inc. Eye imaging in head worn computing
US9684172B2 (en) 2014-12-03 2017-06-20 Osterhout Group, Inc. Head worn computer display systems
US9310610B2 (en) 2014-01-21 2016-04-12 Osterhout Group, Inc. See-through computer display systems
US9651784B2 (en) 2014-01-21 2017-05-16 Osterhout Group, Inc. See-through computer display systems
US9766463B2 (en) 2014-01-21 2017-09-19 Osterhout Group, Inc. See-through computer display systems
US9532715B2 (en) 2014-01-21 2017-01-03 Osterhout Group, Inc. Eye imaging in head worn computing
US9715112B2 (en) 2014-01-21 2017-07-25 Osterhout Group, Inc. Suppression of stray light in head worn computing
US9529199B2 (en) 2014-01-21 2016-12-27 Osterhout Group, Inc. See-through computer display systems
US9740280B2 (en) 2014-01-21 2017-08-22 Osterhout Group, Inc. Eye imaging in head worn computing
US9846308B2 (en) 2014-01-24 2017-12-19 Osterhout Group, Inc. Haptic systems for head-worn computers
US9836122B2 (en) 2014-01-21 2017-12-05 Osterhout Group, Inc. Eye glint imaging in see-through computer display systems
US9594246B2 (en) 2014-01-21 2017-03-14 Osterhout Group, Inc. See-through computer display systems
US9753288B2 (en) 2014-01-21 2017-09-05 Osterhout Group, Inc. See-through computer display systems
US20150205111A1 (en) 2014-01-21 2015-07-23 Osterhout Group, Inc. Optical configurations for head worn computing
US10191279B2 (en) 2014-03-17 2019-01-29 Osterhout Group, Inc. Eye imaging in head worn computing
US9529195B2 (en) 2014-01-21 2016-12-27 Osterhout Group, Inc. See-through computer display systems
US20150277120A1 (en) 2014-01-21 2015-10-01 Osterhout Group, Inc. Optical configurations for head worn computing
US9400390B2 (en) 2014-01-24 2016-07-26 Osterhout Group, Inc. Peripheral lighting for head worn computing
TWI486631B (en) 2014-01-24 2015-06-01 Quanta Comp Inc Head mounted display and control method thereof
US20150228119A1 (en) 2014-02-11 2015-08-13 Osterhout Group, Inc. Spatial location presentation in head worn computing
US9401540B2 (en) 2014-02-11 2016-07-26 Osterhout Group, Inc. Spatial location presentation in head worn computing
US9229233B2 (en) 2014-02-11 2016-01-05 Osterhout Group, Inc. Micro Doppler presentations in head worn computing
US9852545B2 (en) 2014-02-11 2017-12-26 Osterhout Group, Inc. Spatial location presentation in head worn computing
US9767609B2 (en) * 2014-02-12 2017-09-19 Microsoft Technology Licensing, Llc Motion modeling in visual tracking
US9299194B2 (en) 2014-02-14 2016-03-29 Osterhout Group, Inc. Secure sharing in head worn computing
US20150277118A1 (en) 2014-03-28 2015-10-01 Osterhout Group, Inc. Sensor dependent content position in head worn computing
JP6376807B2 (en) * 2014-04-02 2018-08-22 Canon Inc Display device, display control method, and program
US9158116B1 (en) 2014-04-25 2015-10-13 Osterhout Group, Inc. Temple and ear horn assembly for headworn computer
US9651787B2 (en) 2014-04-25 2017-05-16 Osterhout Group, Inc. Speaker assembly for headworn computer
US9672210B2 (en) 2014-04-25 2017-06-06 Osterhout Group, Inc. Language translation with head-worn computing
US9746686B2 (en) 2014-05-19 2017-08-29 Osterhout Group, Inc. Content position calibration in head worn computing
US9841599B2 (en) 2014-06-05 2017-12-12 Osterhout Group, Inc. Optical configurations for head-worn see-through displays
US9575321B2 (en) 2014-06-09 2017-02-21 Osterhout Group, Inc. Content presentation in head worn computing
US9810906B2 (en) 2014-06-17 2017-11-07 Osterhout Group, Inc. External user interface for head worn computing
US9366867B2 (en) 2014-07-08 2016-06-14 Osterhout Group, Inc. Optical systems for see-through displays
US9904055B2 (en) 2014-07-25 2018-02-27 Microsoft Technology Licensing, Llc Smart placement of virtual objects to stay in the field of view of a head mounted display
US9865089B2 (en) 2014-07-25 2018-01-09 Microsoft Technology Licensing, Llc Virtual reality environment with real world objects
US10451875B2 (en) 2014-07-25 2019-10-22 Microsoft Technology Licensing, Llc Smart transparency for virtual objects
US9858720B2 (en) 2014-07-25 2018-01-02 Microsoft Technology Licensing, Llc Three-dimensional mixed-reality viewport
US10311638B2 (en) 2014-07-25 2019-06-04 Microsoft Technology Licensing, Llc Anti-trip when immersed in a virtual reality environment
US10416760B2 (en) 2014-07-25 2019-09-17 Microsoft Technology Licensing, Llc Gaze-based object placement within a virtual reality environment
US9766460B2 (en) 2014-07-25 2017-09-19 Microsoft Technology Licensing, Llc Ground plane adjustment in a virtual reality environment
US9959591B2 (en) * 2014-07-31 2018-05-01 Seiko Epson Corporation Display apparatus, method for controlling display apparatus, and program
US9829707B2 (en) 2014-08-12 2017-11-28 Osterhout Group, Inc. Measuring content brightness in head worn computing
US9423842B2 (en) 2014-09-18 2016-08-23 Osterhout Group, Inc. Thermal management for head-worn computer
US9671613B2 (en) 2014-09-26 2017-06-06 Osterhout Group, Inc. See-through computer display systems
US9366868B2 (en) 2014-09-26 2016-06-14 Osterhout Group, Inc. See-through computer display systems
US9448409B2 (en) 2014-11-26 2016-09-20 Osterhout Group, Inc. See-through computer display systems
USD743963S1 (en) 2014-12-22 2015-11-24 Osterhout Group, Inc. Air mouse
USD751552S1 (en) 2014-12-31 2016-03-15 Osterhout Group, Inc. Computer glasses
USD753114S1 (en) 2015-01-05 2016-04-05 Osterhout Group, Inc. Air mouse
KR20160084991A (en) 2015-01-07 2016-07-15 Samsung Electronics Co., Ltd. Master device, slave device and control method thereof
US20160239985A1 (en) 2015-02-17 2016-08-18 Osterhout Group, Inc. See-through computer display systems
US9911232B2 (en) 2015-02-27 2018-03-06 Microsoft Technology Licensing, Llc Molding and anchoring physically constrained virtual environments to real-world environments
US20160342782A1 (en) * 2015-05-18 2016-11-24 Daqri, Llc Biometric authentication in a head mounted device
US9836117B2 (en) 2015-05-28 2017-12-05 Microsoft Technology Licensing, Llc Autonomous drones for tactile feedback in immersive virtual reality
US9898864B2 (en) 2015-05-28 2018-02-20 Microsoft Technology Licensing, Llc Shared tactile interaction and user safety in shared space multi-person immersive virtual reality
US10139966B2 (en) 2015-07-22 2018-11-27 Osterhout Group, Inc. External user interface for head worn computing
KR20170043911A (en) 2015-10-14 2017-04-24 Samsung Electronics Co., Ltd. Electronic apparatus and control method thereof
US9864194B2 (en) 2015-11-24 2018-01-09 Honeywell International Inc. Systems and methods for displaying FOV boundaries on HUDs
US10147235B2 (en) 2015-12-10 2018-12-04 Microsoft Technology Licensing, Llc AR display with adjustable stereo overlap zone
US10134188B2 (en) * 2015-12-21 2018-11-20 Intel Corporation Body-centric mobile point-of-view augmented and virtual reality
US20190073041A1 (en) * 2016-02-29 2019-03-07 Huawei Technologies Co., Ltd. Gesture Control Method for Wearable System and Wearable System
US20170252643A1 (en) * 2016-03-07 2017-09-07 Htc Corporation Accessory management of virtual reality system
CN105912123A (en) * 2016-04-15 2016-08-31 Beijing Xiaoniao Kankan Technology Co., Ltd. Interface layout method and device in a three-dimensional immersive environment
TWI628634B * 2016-05-25 2018-07-01 National Central University Interactive teaching systems and methods thereof
US9826299B1 (en) 2016-08-22 2017-11-21 Osterhout Group, Inc. Speaker systems for head-worn computer systems
US9910284B1 (en) 2016-09-08 2018-03-06 Osterhout Group, Inc. Optical systems for head-worn computers
US9880441B1 (en) 2016-09-08 2018-01-30 Osterhout Group, Inc. Electrochromic systems for head-worn computer systems
USD840395S1 (en) 2016-10-17 2019-02-12 Osterhout Group, Inc. Head-worn computer
US20180126241A1 (en) * 2016-11-10 2018-05-10 National Taiwan University Augmented learning system for tai-chi chuan with head-mounted display
EP3330839A1 (en) * 2016-12-05 2018-06-06 THOMSON Licensing Method and device for adapting an immersive content to the field of view of a user
USD864959S1 (en) 2017-01-04 2019-10-29 Mentor Acquisition One, Llc Computer glasses
US10159900B2 (en) * 2017-03-17 2018-12-25 Roblox Corporation Avatar teleport controller
CN106997242A (en) * 2017-03-28 2017-08-01 Lenovo (Beijing) Co., Ltd. Interface management method and head-mounted display apparatus
US10422995B2 (en) 2017-07-24 2019-09-24 Mentor Acquisition One, Llc See-through computer display systems with stray light management
TWI658291B * 2018-01-25 2019-05-01 Acer Inc. Head-mounted display and operation method thereof

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3701053B2 (en) * 1995-07-13 2005-09-28 Konica Minolta Holdings, Inc. Video display device
JPH1184307A (en) * 1997-09-01 1999-03-26 MR System Kenkyusho K.K. Head-mounted optical device
AUPQ896000A0 (en) * 2000-07-24 2000-08-17 Seeing Machines Pty Ltd Facial image processing system
DE10131720B4 (en) * 2001-06-30 2017-02-23 Robert Bosch GmbH Head-up display system and method
US7561966B2 (en) 2003-12-17 2009-07-14 Denso Corporation Vehicle information display system
JPWO2005122128A1 (en) * 2004-06-10 2008-04-10 Matsushita Electric Industrial Co., Ltd. Wearable information presentation device
US7369317B2 (en) 2005-03-07 2008-05-06 Himax Technologies, Inc. Head-mounted display utilizing an LCOS panel with a color filter attached thereon
HU0500357D0 (en) * 2005-04-04 2005-05-30 Innoracio Fejlesztoe Es Kutata Dynamic display and method for enhancing the effective resolution of displays
IL196078A (en) * 2007-12-20 2014-09-30 Raytheon Co Imaging system
US9600067B2 (en) 2008-10-27 2017-03-21 SRI International System and method for generating a mixed reality environment
US20100182340A1 (en) * 2009-01-19 2010-07-22 Bachelder Edward N Systems and methods for combining virtual and real-time physical environments

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995005620A1 (en) * 1993-08-12 1995-02-23 Seiko Epson Corporation Head mount type image display device and information processor equipped with the device
US20020044152A1 (en) * 2000-10-16 2002-04-18 Abbott Kenneth H. Dynamic integration of computer generated and real world images
JP2004234253A (en) * 2003-01-29 2004-08-19 Canon Inc Method for presenting mixed reality
JP2008504597A (en) * 2004-06-21 2008-02-14 Totalforsvarets Forskningsinstitut Apparatus and method for displaying a peripheral image
JP2008217119A (en) * 2007-02-28 2008-09-18 Canon Inc System, image processing apparatus, and image processing method
JP2009037487A (en) * 2007-08-02 2009-02-19 Canon Inc System, head-mounted display device, and control method thereof

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018129054A (en) * 2013-06-07 2018-08-16 Sony Interactive Entertainment America LLC Systems and methods for reducing hops associated with a head-mounted system
US9977241B2 (en) 2015-03-17 2018-05-22 Seiko Epson Corporation Head-mounted display device, control method for head-mounted display device, and computer program
US10175484B2 (en) 2015-03-17 2019-01-08 Seiko Epson Corporation Head-mounted display device, control method for head-mounted display device, and computer program

Also Published As

Publication number Publication date
US20150235632A1 (en) 2015-08-20
EP2724191A2 (en) 2014-04-30
WO2012177657A3 (en) 2013-05-02
WO2012177657A2 (en) 2012-12-27
US20130093789A1 (en) 2013-04-18
US9245501B2 (en) 2016-01-26
US20120327116A1 (en) 2012-12-27
TW201303640A (en) 2013-01-16
KR20140034252A (en) 2014-03-19
EP2724191A4 (en) 2015-03-25
CN103635849A (en) 2014-03-12
US9041623B2 (en) 2015-05-26

Similar Documents

Publication Title
US9122321B2 (en) Collaboration environment using see through displays
US9323325B2 (en) Enhancing an object of interest in a see-through, mixed reality display device
US10133342B2 (en) Human-body-gesture-based region and volume selection for HMD
EP3097460B1 (en) Gaze swipe selection
CN102999160B (en) User controlled real object disappearance in a mixed reality display
US20130044128A1 (en) Context adaptive user interface for augmented reality display
US9035955B2 (en) Synchronizing virtual actor's performances to a speaker's voice
JP2014504413A (en) Augmented reality display content based on comprehension and intent
EP2946264B1 (en) Virtual interaction with image projection
US20140139551A1 (en) Augmented reality help
US20130021374A1 (en) Manipulating And Displaying An Image On A Wearable Computing System
US9690099B2 (en) Optimized focal area for augmented reality displays
TWI597623B (en) Wearable behavior-based vision system
EP3014338B1 (en) Tracking head movement when wearing mobile device
US9384737B2 (en) Method and device for adjusting sound levels of sources based on sound source priority
US20140160157A1 (en) People-triggered holographic reminders
US20130342572A1 (en) Control of displayed content in virtual environments
US20130326364A1 (en) Position relative hologram interactions
US9024842B1 (en) Hand gestures to signify what is important
US20130328925A1 (en) Object focus in a mixed reality environment
US9342610B2 (en) Portals: registered objects as virtualized, personalized displays
KR20140059213A (en) Head mounted display with iris scan profiling
US20130044129A1 (en) Location based skins for mixed reality displays
KR20140144510A (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
US9767720B2 (en) Object-centric mixed reality space

Legal Events

Date Code Title Description

A711 Notification of change in applicant
Free format text: JAPANESE INTERMEDIATE CODE: A711
Effective date: 20150514

A621 Written request for application examination
Free format text: JAPANESE INTERMEDIATE CODE: A621
Effective date: 20150608

A977 Report on retrieval
Free format text: JAPANESE INTERMEDIATE CODE: A971007
Effective date: 20160323

A131 Notification of reasons for refusal
Free format text: JAPANESE INTERMEDIATE CODE: A131
Effective date: 20160412

A02 Decision of refusal
Free format text: JAPANESE INTERMEDIATE CODE: A02
Effective date: 20161104